An interview with Sue Probert on the development of semantic data models and their lasting impact on global trade.

In this interview, we talk to Sue Probert, who has just completed her second term as Chair of UN/CEFACT. UN/CEFACT (United Nations Centre for Trade Facilitation and Electronic Business) develops global standards and semantic data models to facilitate and harmonise international trade procedures and business processes. Over the past decades, Sue has not only witnessed but also shaped key developments in this field, playing a pivotal role in revolutionising how we exchange data globally. We look forward to exploring the insights and experiences from this lifelong journey.

You’ve had an extensive career in the field of electronic data exchange. Would you kindly start by telling us about your early career and how you got into this field?

Absolutely. I began my trade facilitation journey in the 1980s working for an IBM dealership in the UK. Back then, I was involved in developing a system that allowed exporters to create standardised export documents more efficiently using laser printing technology. This early experience really sparked my interest in the standardisation of data models and electronic trade. At that time, however, electronic data exchange between companies was not yet the issue. Instead, the aim was to develop functionality for printing the relevant business documents efficiently. The documents themselves were all still paper-based.

However, in the early 1990s, my company suddenly decided to stop developing the document creation system. I was made redundant and found myself on the street, with no car, no laptop, no phone. It’s one of those times when you are forced to think about your future. Three months later, I had started a new business.

And I had negotiated with my previous employer to take over all the software I had been responsible for developing, along with most of the development team. So I started a tiny company in my house. In the beginning, I had six employees, and when the kids went to school, their bedrooms were used as offices.

We focused on software solutions for international trade, and through the UK SITPRO organisation I joined a joint UN/ECE and OASIS ebXML project, where I first encountered many people in the XML world. One of them wanted to work together with UN/CEFACT and develop new XML solutions for international trade. Because of our expertise in both fields, one of the dotcom companies decided to buy my company. That’s one of those really crucial events that resulted in a wonderful range of life experiences. I continued to work for this company for the next three years, partly in Silicon Valley and partly in the UK.

How did you then find your way to UN/CEFACT?

Selling the company made me financially independent, so I could decide freely what I wanted to do next. The world of international trade continued to fascinate me. So I decided to give back some of my experience and started contributing directly to UN/CEFACT as an expert volunteer.

“Reference data models are definitely the most important thing I have worked on, not just in the last six years, but much longer.”

And you have come a long way since then. You recently finished your six-year chairmanship of UN/CEFACT; how do you look back on this time? Which of your contributions would you like to see have a longer-term impact?

Reference data models are definitely the most important thing I have worked on, not just in the last six years, but much longer. These Reference Data Models are structured in a meaningful way to represent data related to international supply chains. They ensure that the semantic data used in cross-border trade processes is well defined, standardised, and universally understood across different systems, organisations, and countries.

To understand this universal and standardised approach, let’s use the term “buyer” as an example. A buyer needs to be clearly defined so that everybody involved in a transaction knows very well who is responsible for the payment. The UN/CEFACT data model includes numerous attributes for the buyer party, many designed for general use, such as the company name, address and contact information. However, some attributes are only necessary in specific transactions, such as those involving regulated goods, special tax conditions, or unique contractual agreements. The semantic data models of UN/CEFACT are a kind of library in which all the important data relevant for international trade are defined in a standardised and comprehensive way.
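To picture such a library entry, here is a simplified, hypothetical sketch (not an extract from the actual UN/CEFACT model) of the kinds of attributes a buyer party definition covers, and of how general-use and transaction-specific attributes sit side by side:

    Buyer party
      Name                   general use
      Postal address         general use (street, city, postcode, country)
      Contact                general use (name, phone, email)
      Tax registration ID    specific use, e.g. special tax conditions
      Licence or permit ID   specific use, e.g. regulated goods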

Can you explain in more detail what advantages reference models have and how they are useful for companies in general?

A semantic reference data model allows trading partners to reuse the same data definitions regardless of the syntax format that they may be adopting for data exchange. This means that a company can switch from one syntax exchange format to another, or even adopt new formats in the future, without losing the underlying meaning of the data. This is particularly valuable for international trade, where you have to deal with different regulations and practices across country borders. Our models ensure that the semantic data definitions remain consistent and reliable, no matter where they are used.

This reusability is the key advantage of semantic data models. In UN/CEFACT we have continuously developed and expanded our international supply chain reference model and now offer a model that reflects processes in the international supply chain better than any other known supply chain model.

You’ve also been involved in the adoption of UN/EDIFACT, XML and JSON technologies. How did these change the landscape of data exchange?

On the one hand, each new syntax format certainly had a major impact on the technical implementation of data exchange. When XML became very popular in the early 2000s and JSON a decade later, new standards were developed whose semantics and syntax were specifically tailored to these new data formats.

On the other hand, these changes have not fundamentally altered the operational processes within international trade. This continuity in processes highlights why companies should shift their attention to semantic reference models that prioritise a clear understanding of, and alignment with, these operational workflows.

“What’s important is that companies focus on the semantics of the data they are exchanging. If they get the semantics right, they can adapt to any format that comes along.”

So, would you say there’s a best data exchange format for companies to use today?

I wouldn’t say there’s only one best format. Each format, whether it’s UN/EDIFACT, XML, JSON or even a traditional paper form, serves the same fundamental purpose: enabling data exchange between trading partners. The choice of exchange format often depends on the specific needs of the organisation and the technical expertise available. What’s important is that companies focus on the semantics of the data they are exchanging. If they get the semantics right, they can adapt to any format that comes along. But it is important to remember that developers in any organisation often only have experience implementing what they’ve learned recently, and currently that’s most likely JSON rather than semantic data models. This is a continual challenge in the real world. Another issue is that important lessons learned over the years are not always remembered over time.
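As a hypothetical illustration of this point (the XML and JSON element names are invented, and the GLN is a made-up example number), the same fact, a buyer identified by a Global Location Number, could be carried in any of the syntaxes mentioned without changing its meaning:

    UN/EDIFACT:  NAD+BY+5412345000013::9'
    XML:         <Buyer><GLN>5412345000013</GLN></Buyer>
    JSON:        { "buyer": { "gln": "5412345000013" } }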

What do you recommend companies do to ensure that they are well equipped for efficient data exchange?

My recommendation would be to focus on the semantics of your internal data systems and align them with international standards as much as possible. This alignment will make it much easier to exchange data with external partners, no matter what format is being used. If your internal systems are too rigid to change, then at least make sure that your external data exchanges are standardised.

When companies introduce a new ERP system or digitise processes, they all too often only think about their own internal procedures and lose sight of their external business partners. I find it amazing that they don’t think more about this. The question of how data is exchanged externally should be given a much higher priority.

“I’ve spent my life meeting people who think they’re doing something for the first time. They’re not. It’s a long, long journey.”

And finally, what do you see as the biggest challenge for the future of data exchange?

The biggest challenge will be ensuring that all the different formats and technologies continue to be part of the picture. I’ve spent my life meeting people who think they’re doing something for the first time. They’re not. It’s a long, long journey, and we all need to acknowledge both past and future in order to move forward. Otherwise we will just reinvent the same problems. There’s a lot of valuable data being exchanged in older formats like UN/EDIFACT, and we need to make sure that this remains accessible and usable. The future of data exchange needs to be inclusive of all relevant technologies.

Sue, thank you for the interview.

GEFEG.FX 2024-Q3 Release News

With the new GEFEG.FX quarterly release 2024-Q3, the following functionalities are available for use.

Schematron Editor – More efficient validation with precise checks

The GEFEG.FX Schema Editor makes working with XML schemas much easier and more efficient. You can specifically restrict the formats, value ranges and precision of elements and attributes in a GEFEG.FX schema. Transmitted values in the XML file must then fulfil precisely these requirements.
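For example, restricting the format and precision of an amount element might correspond to an XSD fragment like the following (a generic sketch with an invented element name, not output copied from GEFEG.FX):

    <xs:element name="InvoiceAmount">
      <xs:simpleType>
        <xs:restriction base="xs:decimal">
          <!-- at most 12 digits in total, at most 2 decimal places, no negative values -->
          <xs:totalDigits value="12"/>
          <xs:fractionDigits value="2"/>
          <xs:minInclusive value="0"/>
        </xs:restriction>
      </xs:simpleType>
    </xs:element>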

In practical situations, it is often not enough just to check the syntax; complex business rules, such as totals calculations or if-then conditions, must also be fulfilled. These specific requirements can be covered perfectly in GEFEG.FX with Schematron rules.
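A minimal sketch of such a rule (the element names are invented for illustration) shows how a totals check can be expressed in Schematron:

    <sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
      <sch:pattern>
        <sch:rule context="Invoice">
          <!-- business rule: the header total must equal the sum of the line amounts -->
          <sch:assert test="number(TotalAmount) = sum(LineItem/Amount)">
            The invoice total must equal the sum of all line amounts.
          </sch:assert>
        </sch:rule>
      </sch:pattern>
    </sch:schema>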

You can use the Schematron editor to edit and test individual Schematron rules directly and specifically in your XSD project. You don’t have to process the entire file; instead, changes can be checked quickly and precisely.

This is how it works:

  • Simply click on the ‘Check’ note of your Schematron rule and select ‘Edit and test Schematron rule’ in the context menu to open the editor for the respective rule.

 

Your benefits:

  • Fast validation: Check your XML files easily and precisely.
  • Clear results: Thanks to the markings in green (error-free) and red (incorrect), you know immediately where action is required.
  • Efficient workflow: Edit rules directly in the ‘Assertion’ field, test the changes immediately and repeat the process until the desired result is achieved.

 

With the Schematron editor, you can prepare or correct Schematron rules more quickly and ensure smooth and correct data processing.

Directly test Schematron rules in GEFEG.FX

Which export is the best choice to create an XSD from my GEFEG.FX schema?

Depending on the application, you can use different export options to generate an XSD file from your GEFEG.FX schema.

In the B2B environment, an XSD file is often used for different scenarios:

  • Message structure: An XSD can be used to represent the structure of a message by clearly displaying all the necessary elements and attributes of an XML file.
  • Validation: An XSD is also used to validate XML instances. The requirements here are higher, as messages can be designed down to element level in GEFEG.FX.

 

If you want to use your XSD file for validation, we recommend exporting it as a ‘Validation Schema’. This export takes into account all changes that you have made at element level and creates an XSD file that integrates these adjustments. This differs from the ‘Profile Schema’ export, where such changes are not applied.

The ‘Validation Schema’ export is available as an add-on and offers you a customised solution for validating complex XML instances.

With the right export, you can ensure that your XSD file meets exactly the requirements you need for your application.

Tips and Tricks for GEFEG.FX: Open Windows Explorer in the Manager

Opening Windows Explorer in the GEFEG.FX Manager

Here’s a pro tip for those occasions when you need to handle something outside of GEFEG.FX. Simply highlight the relevant section in the GEFEG.FX Manager, then select “Open folder in Explorer” from the menu. This will give you direct access to the files you need right within Windows Explorer. It’s a quick and efficient way to manage your data without leaving the GEFEG.FX environment.

This function is particularly useful if you want to fill a test data folder with test messages: Open the test data folder directly from GEFEG.FX, copy the test messages to the corresponding folder in Windows, and then update the test data folder in GEFEG.FX. Your test messages are then immediately available for validation in GEFEG.FX.

Data packages in GEFEG.FX

The following new, supplemented or modified data packages are available for download according to your license scope.

  • cXML – New data package
  • New: Sample data provided for API and JSON
  • UBL 2.2, 2.3, 2.4
  • RosettaNet Update: New PIPs provided
  • WCO Data Model version 4.1.0

Data update now available with GEFEG.FX

The World Customs Organization (WCO) has recently published version 4.1.0 of its data model. GEFEG.FX users of the WCO Data Model can now access the new version 4.1.0 after performing an internet update.

New: Booking Reservation Information DIP Now Included in version 4.1.0

As with previous updates, the World Customs Organization provides key regulatory data requirements in response to new or amended legislation. These requirements are first submitted by customs authorities and implementers as amendments to the WCO Data Model and then incorporated into it.

With the release of version 4.1.0, the WCO Data Model introduces the new Booking Reservation Information (BRI) dataset. This dataset is now available as a derived information package (DIP), which has been specifically designed to simplify implementation tasks for users of the WCO Data Model and for the cruise industry.

This updated version also integrates the UPU dataset and the Joint Message Standards, which further simplifies the processing of postal items. These enhancements are aimed at improving efficiency and compliance across the global customs landscape.

Customs authorities around the globe strive for greater effectiveness and efficiency

It is an important objective for the WCO to provide and further develop its global standard for seamless cross-border transactions for all Customs administrations worldwide.

What are the benefits of the WCO Data Model, which is intended to be the basis for the exchange of information in cross-border regulatory processes in the global supply chain?

The Data Model enables Customs authorities to achieve interoperability and collaboration in Single Window and other implementations. Data flows and the integration of business data for Customs procedures are simplified and harmonised.

The main components of the WCO data model consist of ‘Base Information Packages’ and ‘Additional Information Packages’.

Information packages compile the information that is transmitted by trading partners on the one hand and processed by customs authorities for typical customs processes and procedures on the other. These processes cover Single Window and other implementations, including those at the virtual border. Examples include declarations of goods movements, licences, permits, certificates and other types of regulatory cross-border trade documents.

Delivery of the WCO Data Model in a structured and reusable format in GEFEG.FX

In cooperation with the World Customs Organization, GEFEG has been delivering the WCO Data Model with GEFEG.FX software since the early 2010s. For customs authorities, government organisations, traders and other parties involved in cross-border regulatory processes, this has opened up new opportunities for joint development work and user-specific use of the WCO Data Model. The advantage for our users: GEFEG.FX simplifies and rationalises the reuse of the WCO Data Model. Furthermore, a ready-to-use XML schema export function compatible with the WCO Data Model supports customised implementations.

Easy and effective use of the WCO Data Model

Many users of the WCO Data Model packages in GEFEG.FX have already successfully made use of the simple and efficient methods for reusing the WCO Data Model. They use GEFEG.FX to plan and implement their country- and/or region-specific customs data requirements based on legislation. With every new release, our users have an important task: they need to determine whether their existing implementations must be modified to incorporate the latest WCO definitions of objects and customs procedures. This is the only way to ensure continuous compliance with the data model.

Welcome to the WCO Data Model 4.1.0 webinar

GEFEG invites all interested users of the WCO Data Model to participate in our webinar (in English) on the changes in the latest version 4.1.0 of the WCO Data Model. The webinar will look at the potential impact of the new version and its implementation by business and technical implementers. The audience will also receive information on the ‘how-to’ documents supplied with the new release, which support all users in the typical steps involved in implementing the new version of the WCO Data Model. Participants will then have the opportunity to express their wishes, questions and comments during the 15-minute question and answer session.

Integrate ISO 20022, open APIs and interoperable data with GEFEG solutions for a future-proof data ecosystem

GEFEG is excited to announce its participation in the Middle East Banking Innovation Summit (MEBIS) 2024. MEBIS is a leading event focused on advancing digital banking. As a pioneer in digital standardization, GEFEG empowers financial institutions to optimize their operations with advanced data exchange solutions.

GEFEG supports the financial industry in its digital transformation by focusing on semantic data and digital standards for data exchange. GEFEG’s solutions provide seamless API integration, enhance interoperability, and ensure compliance with global regulations. Through automation and standardization, GEFEG accelerates the development of financial products and services.

At MEBIS 2024 in Dubai, where digital transformation, AI, Open Banking, and Open Finance are key topics, GEFEG will showcase solutions that drive the digitalization of financial services. Our innovative offerings promote collaboration, interoperability, and agility for financial experts and institutions navigating today’s dynamic landscape.

Visit us at our booth or contact us via email or LinkedIn. Discover how Open APIs and ISO 20022 can enhance your data ecosystem and support the harmonization of financial data across your platforms. Let us demonstrate how global standards can boost interoperability and efficiency.

Now also develop JSON schema guides with GEFEG.FX – New functions in the JSON schema editor

Enhancement for the development of JSON schemas for EDI and business data management: You can now also create JSON schema guides with GEFEG.FX. This means that the proven guide technology is now also available for JSON schemas.

Read more: New JSON schema guide functions – More flexibility and quality for EDI

 

What else is new in the GEFEG.FX 2024-Q2 release?

With the new GEFEG.FX quarterly release 2024-Q2, the following new or enhanced functionalities are also available for use.

Assign file names automatically in the publishing project, as of now in the current release

The new version of GEFEG.FX allows you to automatically assign file names when creating documentation, whereas previously each new file name had to be defined manually. From now on, GEFEG.FX automatically uses the names of the GEFEG.FX objects as aliases for the documentation files that you want to create.

The new process saves time and eliminates potential sources of error, as manual naming per documentation file is no longer necessary.

 

ISO 20022 schema exports from data models are now easier

More and more B2B standards are being published as syntax-neutral data models, including the ISO 20022 data model for the financial industry. The use of data models with GEFEG.FX has the unique advantage that company-specific GEFEG.FX guidelines can be created on the basis of the data models. In these guides, users describe the requirements of their company, such as the restriction of elements.

XML schema formats generated from the data model or data model guideline are used for data exchange in production systems. The smooth, automatic flow of data from the data model to the schema is therefore an important prerequisite for successful data exchange.

Previously, a GEFEG.FX schema had to be created manually in an intermediate step and then exported as an XSD file. This process has been optimised: you can now export the XML schema directly from a data model via publishing projects with a single click.

 

Improved conversion of continuous text in Microsoft Word documents for PDF documents

GEFEG.FX enables you to document data structures simply and efficiently. Many users use Microsoft Word to present their data clearly together with supplementary information, giving a clear insight into the structure and properties of the data.

The output of these Word documents as PDF files has now been improved. When you create documentation with GEFEG.FX publishing projects, plain text is generated for all Notes with text content, and line breaks within continuous text are omitted. This eliminates potential sources of error and simplifies the subsequent steps for generating a PDF file as the documentation result.

 

Improved guide comparison display

The guide comparison shows differences between two comparison objects in a comparison list below the two data structures. As we noticed in support cases that the display was not always easy to understand, this view has been streamlined and new categories have been added to help users recognise differences between the data structures more quickly and understand them better.

 

Data packages in GEFEG.FX

The following new, supplemented or modified data packages are available for download.

  • UN/EDIFACT: Version D.23B will not be available, in accordance with the UN/CEFACT decision, as no change requests and therefore no changes were submitted
  • UN/LOCODE, status as of 2023-2
  • GS1 EANCOM® Application Guidelines: Fashion 2.1 added
  • ISO20022: Version 2024 of the e-Repository is available.
  • Odette Recommendations: An updated version is available
  • VDA Recommendations: An updated version is available
  • xCBL 3.0 + 3.5: Elements now also contain descriptions
  • DK Guideline Schemas V3.7 (Deutsche Kreditwirtschaft)

 

What is New in the GEFEG.FX 2023-Q4 Release

With the new GEFEG.FX quarterly release 2023-Q4, the following new or enhanced functionalities are available for use.

New data packages in GEFEG.FX

  • UN/EDIFACT
    • UN/EDIFACT D.23A
  • ISO20022: External code lists update
  • DK Guidelines 3.7: pain.001, pain.002 and pain.008
  • GS1 eCom Standards: GS1 XML 3.6

New filter function for Notes in GEFEG.FX – More organised overview with the new notes filter in the models and schemas editors

Notes filter in the GEFEG.FX view

The use of notes offers every user the opportunity to enhance their guides or standards with additional, valuable information and is an essential aspect of specifying guides with GEFEG.FX. This can include validation rules, mapping IDs, internal notes, customer descriptions and much more. Notes in GEFEG.FX are used for output in documentation in MS Word/HTML format, but also for executing special functions, such as validation. In the case of comprehensive commenting, navigation in the Notes section can become time-consuming due to the large number of Notes.

Thanks to the new Notes filter function, you now have the option of only displaying the notes relevant to your current task. This allows you to concentrate on the essentials when updating by hiding all notes that are not relevant to your current task. This is particularly useful for comprehensive guides in which the Notes section contains a wealth of information. For example, you can hide mapping IDs that are not relevant outside of mapping projects or focus specifically on validation rules.

Notes filter in the GEFEG.FX search

The filter function not only extends to the display, but can also be used effectively when searching. If you only want to include certain notes in a search, you can find the relevant results much more quickly by setting a filter. This further filter function can be called up and used via the search function. There you can easily select the notes you want to display or hide.

All in just one project: Create and update multilingual documentation centrally

Creating multilingual documentation has never been so easy! Previously, it was necessary to create a separate publishing project for each language. This also required reports and layouts for each language, which was associated with a high potential for errors during updates, as these files had to be maintained separately.

From now on, all languages can be managed in a single publishing project. This not only saves time during creation, but also for future updates. There are also fewer systematic errors if only one publishing project has to be maintained instead of several.

The new functionality is controlled via layouts. The layouts now have the ‘Languages’ function at the top left. Here you can add the required languages and then select the desired language(s) to be taken into account when generating new documentation. Individual fields can also be linked to a specific language.

Does existing documentation need to be updated?

No, it is not absolutely necessary to change existing publishing projects. The new function is primarily recommended for customers who want to create new multilingual documentation.

If you already manage multilingual documentation in GEFEG.FX, you can of course simply retain the layouts and report files already in use or switch to the new language function later as part of a publishing project update.

With the rapid expansion of the internet came the ever-increasing need to standardise communication between web applications and their users. For this reason, JSON was developed in 2001 as an open file format to facilitate communication between client and server. Because of its ease of use and great versatility, JSON became established as the central exchange format on the internet.

Though JSON was first used on the web, this data format is also becoming increasingly popular in electronic data exchange. As with all exchange formats, JSON files must be read, evaluated and verified by the recipient. For this reason, JSON Schema was developed. A JSON schema defines rules and conditions that JSON data must observe. Typical criteria include properties, references and types.
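A minimal, hypothetical example (the property names are invented for illustration) shows these building blocks of a JSON schema, namely types, properties and references:

    {
      "$schema": "https://json-schema.org/draft/2020-12/schema",
      "title": "Order",
      "type": "object",
      "properties": {
        "orderNumber": { "type": "string" },
        "buyer": { "$ref": "#/$defs/party" }
      },
      "required": ["orderNumber", "buyer"],
      "$defs": {
        "party": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "gln": { "type": "string", "pattern": "^[0-9]{13}$" }
          },
          "required": ["name"]
        }
      }
    }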

With JSON, too, an understanding of the underlying business processes is necessary, along with software that can handle the resulting challenges. For these reasons, we have developed the JSON Schema Editor as an extension of our GEFEG.FX software solution, offering all companies the means to manage these challenges effectively.

Powerful JSON schema design

GEFEG’s JSON Schema Editor provides a powerful solution to design JSON schemas. Companies can now intuitively design complex data structures that meet the requirements of their specific business processes. The editor’s user-friendly interface allows users to easily define and organise JSON objects and their properties. This not only facilitates the design phase, but also ensures a consistent and clear presentation of the data – a big plus for smooth cooperation with partners and customers.

Efficient editing of JSON schemas

GEFEG.FX’s JSON Schema Editor offers a comprehensive range of editing tools that allow organisations to precisely customise JSON schemas. Whether it’s updating existing schemas or adding new elements, the editor makes the process easier than ever. With the ability to create and modify complex hierarchies, users are in control of their data structures and can make changes with ease.

Converting XML files into JSON schemas

Another notable feature of GEFEG’s JSON Schema Editor is the ability to convert XML files into JSON schema structures. XML files are widely used and are an important technical format for data exchange. With one click, the conversion makes a JSON schema available that organisations can use, for example, in databases and processing procedures, without having to laboriously re-create the XML schema in JSON format.
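As a simplified sketch of the idea (with invented element names, not the exact output of GEFEG.FX), an XML structure and a possible JSON schema fragment derived from it could look like this:

    XML structure (input):

      <Buyer>
        <Name>Example Buyer Ltd</Name>
        <GLN>5412345000013</GLN>
      </Buyer>

    Possible JSON schema fragment (output):

      {
        "Buyer": {
          "type": "object",
          "properties": {
            "Name": { "type": "string" },
            "GLN": { "type": "string" }
          },
          "required": ["Name", "GLN"]
        }
      }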

Since experience shows that companies use multiple data exchange formats and need to orchestrate the interplay of multiple syntaxes, this step enables a smooth migration to a more modern data exchange landscape and makes it easier to adapt to the latest EDI trends. In addition, the conversion from XML to JSON schema helps to improve interoperability between different systems and platforms, resulting in seamless and efficient data exchange.

Conclusion

In the dynamic world of EDI, innovative solutions are critical to meet the demands of the modern business world. The GEFEG.FX JSON Schema Editor complements the existing functional areas and offers a powerful way to design, customise and reuse customer-specific data structures in JSON format. This fulfils important current requirements in the electronic exchange of business data. GEFEG continues to provide critical functionality that businesses need to succeed in an ever-evolving digital landscape.

We look forward to presenting the features and benefits of the new JSON editor to you and supporting you in its use. Would you like to learn more about the JSON editor? Please contact us!

Roman Strand, Senior Manager Master Data + Data Exchange at GS1 Germany, on the success and future of EANCOM® and the GS1 application recommendations based on EANCOM®.

Roman Strand has been working for GS1 Germany for more than 20 years and is, among other things, the head of the national EDI/eCommerce specialist group. In this interview, he explains the role of the EANCOM® standard and why the GS1 specification with the associated application recommendations will continue to determine the business of mass data exchange in the future.

Hello Mr Strand, you have been working for GS1 Germany for over 20 years. What were the major focal points of your work during this period?

I have worked in GS1’s national and international standardisation department the whole time. In the early years I was the apprentice under Norbert Horst, who helped develop the EANCOM® standard in Germany. During this time I learned a lot and also started working with GEFEG.FX. I have remained loyal to this department and the topic of national and international standardisation to this day. In various committees, I drive the further development of our standards together with our partners from the business community. In addition, I work as a trainer and conduct EDI training courses in which our customers are trained to become certified EDI managers, among other things.

Which topics did you deal with a lot last year?

Alongside the further development and maintenance of our EANCOM® and XML standards, we deal with current digitalisation topics and check to what extent innovations could be relevant for our work at GS1. Furthermore, we had our big anniversary celebration last year, because EANCOM® has now been on the market for more than 30 years and our application recommendations have been around for more than 20 years.

Why was the EANCOM® standard developed and what function does it fulfil?

The EANCOM® standard was developed before my time at GS1. There is the parent standard, EDIFACT, which is much too big and complex. The great achievement of the EANCOM® standard is that it reduces this complexity to those elements that are important for our customers. Approximately 220 EDIFACT messages became 50 EANCOM® messages, which were then further adapted into industry-specific EANCOM® application recommendations. The leaner a standard is, the more economically and efficiently it can be implemented. This simplification made the widespread use of the standard by many companies possible in the first place. We also translated the English-language standard almost completely into German. This was another great simplification for the German community.

How were you personally involved in the development of the EANCOM® standard?

The development of the EANCOM® standard is mainly driven by our customers from trade, industry and other sectors. They pass on their requirements to GS1, which are then processed in the EDI/eCommerce specialist group. The decisions of the expert group are then implemented by me, among others, as a representative of GS1.

How should we picture the role of GS1 in this process?

There are many published standards on the market for electronic data exchange between companies. But behind very few of them is a reliable organisation that is continuously committed to the further development of its standard. With us, clients can be confident that implementing the standard is a future-proof investment. If, for example, there is a legal change that also has to be taken into account in the standard, we adapt the standard.
Furthermore, we are responsible for the documentation and specification of the EANCOM® standard. Again, our focus is on simplification. Among other things, we ensure that as many codes as possible are used from code lists instead of free-text fields, because with free-text fields automated data processing is often prone to errors.

You use GEFEG.FX for data modelling and documentation of the EANCOM® standard. For what reasons do you rely on the software for these work steps?

I have been working with GEFEG.FX for many years now and it took me a while before I could really use the software to its full extent. In the foreground you have your specification and in the background you have the standard, which is linked to the corresponding code lists. This means that as a user, when developing my own specification, I cannot do anything in the foreground that is not already stored in the underlying standard. As soon as there is a deviation from the standard, GEFEG.FX reports an error message and ensures that there is no erroneous deviation. For me, this control function is the main advantage of GEFEG.FX as a standard tool. Otherwise, a comma could always be quickly forgotten or some other small syntactical error overlooked.
With the standard running in the background, validations or checks can be carried out conveniently. In addition, documentation can be created quickly at the touch of a button using the various output options. Thanks to these functions, you don’t have to start all over again and save a lot of time in many work steps.

How do you assess the future development of the EANCOM® standard?

For me, EANCOM® is Classic EDI, which is considered old-fashioned by many workers in innovative companies. However, in my opinion, this classic EDI offers many advantages. It is a defined structure that works in the mass data exchange business and will continue to work in the future. I once said to my colleague who has been working in EDI at GS1 as long as I have: “Until we retire, we don’t have to worry about EANCOM® being shut down.”
Because the business is still going and the demand remains high. There have been and continue to be new technologies that are supposed to replace classic EDI. When I started at GS1, there was a lot of hype about XML. The same happened years later with blockchain technology and today with APIs. All three technologies were seen as replacements for classic EDI, but in the end they are all just additions that offer supporting possibilities in the EDI area. Mass data exchange will continue to be regulated by classic EDI and therefore I assume that the future of the EANCOM® standard is also secured.

Are there any challenges or difficulties that need to be considered in the further development of the standard?

The problem of a global standard is its complexity. Over the years, new information has been added to the standard. For example, every relevant change in the law led to new additions, without anything ever being deleted, even if no one had used a given element for 20 years.
We should therefore work more towards lean EANCOM® standards, in which only the information that is absolutely necessary is stored. After all, this reduction of complexity is one of the central strengths of GS1 standards. We achieve this above all by developing application recommendations in which the underlying standard is specified even further for a specific application. This means less information is needed and there are fewer potential sources of error.

We are nearing the end of our conversation. Is there anything else important beyond the EANCOM® standard that you would like to talk about?

Yes, we are currently working on a semantic data model and are thus building a new content basis that contains all relevant information that is to be exchanged electronically. GEFEG is also involved in this development process. With the data model, our customers can decide freely which syntax they use for their data formats in electronic data exchange. This fundamental work will therefore help users to be more independent of a specific syntax in the future and to decide freely whether XML, EANCOM® or even an API should be used for data exchange.

Mr Strand, thank you for this interview.