An interview with Sue Probert on the development of semantic data models and their lasting impact on global trade.

In this interview, we talk to Sue Probert, who has just completed her second term as Chair of UN/CEFACT. UN/CEFACT (United Nations Centre for Trade Facilitation and Electronic Business) develops global standards and semantic data models to facilitate and harmonise international trade procedures and business processes. Over the past decades, Sue has not only witnessed but also shaped key developments in this field, playing a pivotal role in revolutionising how we exchange data globally. We look forward to exploring the insights and experiences from this lifelong journey.

You’ve had an extensive career in the field of electronic data exchange. Would you kindly start by telling us about your early career and how you got into this field?

Absolutely. I began my trade facilitation journey in the 1980s working for an IBM dealership in the UK. Back then, I was involved in developing a system that allowed exporters to create standardised export documents more efficiently using laser printing technology. This early experience really sparked my interest in the standardisation of data models and electronic trade. At that time, however, it was not yet a question of electronic data exchange between companies. Instead, the aim was to develop functions with which printers could efficiently print out the relevant business documents. So the documents were all still paper-based.

However, in the early 1990s, my company suddenly decided to stop developing the document creation system. I was made redundant and found myself on the street, with no car, no laptop, no phone. It’s one of those times when you are forced to think about your future. Three months later, I had started a new business.

And I had negotiated with my previous employer that I could take over all the software that I had been responsible for developing together with most of the development team. So I started a tiny little company in my house. In the beginning, I had six employees, and when the kids went to school, their bedrooms were used as offices.

We focused on software solutions for international trade and, through the UK SITPRO organisation, I joined a joint UN/ECE and OASIS ebXML project, where I first encountered many people in the XML world. One of them wanted to work together with UN/CEFACT and develop new XML solutions for international trade. Because of our expertise in both fields, one of the dotcom companies decided to buy my company. That’s one of those really crucial events that resulted in a wonderful range of life experiences. I continued to work for this company for the next three years, partly in Silicon Valley and partly in the UK.

How did you then find your way to UN/CEFACT?

By selling the company, I was financially independent and could therefore decide freely about what I wanted to do next. The world of international trade continued to fascinate me. So I decided to give back some of my experience and started contributing directly to UN/CEFACT as an expert volunteer.

“Reference data models are definitely the most important thing I have worked on, not just in the last six years, but much longer.”

And you have come a long way since then. You recently finished your six-year chairmanship of UN/CEFACT, how do you look back on this time? Which of your contributions would you like to see have a longer-term impact?

Reference data models are definitely the most important thing I have worked on, not just in the last six years, but much longer. These Reference Data Models are structured in a meaningful way to represent data related to international supply chains. They ensure that the semantic data used in cross-border trade processes is well defined, standardised, and universally understood across different systems, organisations, and countries.

To understand this universal and standardised approach, let’s use the term “buyer” as an example. A buyer needs to be clearly defined so that everybody involved in a transaction knows very well who is responsible for the payment. The UN/CEFACT data model includes numerous attributes for the buyer party, many designed for general use, such as the company name, address and contact information. However, some attributes are only necessary in specific transactions, such as those involving regulated goods, special tax conditions, or unique contractual agreements. The semantic data models of UN/CEFACT are a kind of library in which all the important data relevant for international trade are defined in a standardised and comprehensive way.
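As a toy illustration of this idea (the field names below are invented for the example, not the official UN/CEFACT attribute names), a buyer party carries general-use attributes alongside attributes that only certain transactions need:

```python
# Illustrative sketch only: attribute names are hypothetical,
# not the official UN/CEFACT buyer party definitions.
buyer = {
    "name": "Acme Trading Ltd",           # general-use attribute
    "address": "1 Harbour Road, London",  # general-use attribute
    "contact": "orders@acme.example",     # general-use attribute
    # attributes needed only in specific transactions:
    "tax_registration_id": "GB123456789",
    "regulated_goods_licence": None,      # only for regulated goods
}

GENERAL_FIELDS = ("name", "address", "contact")

# Every transaction uses the general attributes; the others are
# drawn from the shared "library" only when the process needs them.
general = {k: v for k, v in buyer.items() if k in GENERAL_FIELDS}
```

The value of the shared model is that every party reads `name` or `tax_registration_id` with the same agreed meaning, whichever transaction they appear in.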

Can you explain in more detail what advantages reference models have and how they are useful for companies in general?

A semantic reference data model allows trading partners to reuse the same data definitions regardless of the syntax format that they may be adopting for data exchange. This means that a company can switch from one syntax exchange format to another, or even adopt new formats in the future, without losing the underlying meaning of the data. This is particularly valuable for international trade, where you have to deal with different regulations and practices across country borders. Our models ensure that the semantic data definitions remain consistent and reliable, no matter where they are used.

This reusability is the key advantage of semantic data models. In UN/CEFACT we have continuously developed and expanded our international supply chain reference model and now offer a model that reflects processes in the international supply chain better than any other known supply chain model.
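The point about syntax independence can be sketched in a few lines of Python: the same semantic record (with hypothetical field names) survives a round trip through both an XML and a JSON rendering unchanged:

```python
import json
import xml.etree.ElementTree as ET

# One semantic definition of a buyer (hypothetical field names) ...
buyer = {"name": "Acme Trading Ltd", "country": "GB"}

# ... rendered in two different exchange syntaxes.
as_json = json.dumps(buyer)

root = ET.Element("Buyer")
for field, value in buyer.items():
    ET.SubElement(root, field).text = value
as_xml = ET.tostring(root, encoding="unicode")

# Reading either syntax back recovers identical semantic content:
# the meaning lives in the shared definitions, not in the syntax.
from_json = json.loads(as_json)
from_xml = {child.tag: child.text for child in ET.fromstring(as_xml)}
```

Swapping the exchange format changes only the serialisation step; nothing about the agreed meaning of `name` or `country` has to be renegotiated.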

You’ve also been involved in the adoption of UN/EDIFACT, XML and JSON technologies. How did these change the landscape of data exchange?

On the one hand, each new syntax format certainly had a major impact on the technical implementation of data exchange. When XML became very popular around the 2000s and JSON a decade later, new standards and data formats were developed that were specifically tailored to the new data formats in terms of semantics and syntax.

On the other side, these changes have not fundamentally altered the operational processes within international trade. This continuity in processes highlights the importance for companies to shift their attention to semantic reference models that prioritise a clear understanding and alignment with these operational workflows.

“What’s important is that companies focus on the semantics of the data they are exchanging. If they get the semantics right, they can adapt to any format that comes along.”

So, would you say there’s a best data exchange format for companies to use today?

I wouldn’t say there’s only one best format. Each format—whether it’s UN/EDIFACT, XML, JSON, or even traditional paper forms—serves the same fundamental purpose: enabling data exchange between trading partners. The choice of exchange format often depends on the specific needs of the organisation and the technical expertise available. What’s important is that companies focus on the semantics of the data they are exchanging. If they get the semantics right, they can adapt to any format that comes along. But it is important to remember that the developers in any organisation are often only experienced in implementing what they’ve learned recently. And currently that’s most likely JSON and not semantic data models – this is a continual challenge in the real world. Another issue is that important lessons learned over the years are not always remembered over time.

What do you recommend companies do to ensure that they are well equipped for efficient data exchange?

My recommendation would be to focus on the semantics of your internal data systems and align them with international standards as much as possible. This alignment will make it much easier to exchange data with external partners, no matter what format is being used. If your internal systems are too rigid to change, then at least make sure that your external data exchanges are standardised.

When companies introduce a new ERP system or digitise processes, they all too often only think about their own internal procedures and lose sight of their external business partners. I find it amazing that they don’t think more about this. The question of how data is exchanged externally should be given a much higher priority.

“I spent my life meeting with people who think they’re doing something for the first time. They’re not. It’s a long, long journey.”

And finally, what do you see as the biggest challenge for the future of data exchange?

The biggest challenge will be ensuring that all the different formats and technologies continue to be part of the picture. I spent my life meeting with people who think they’re doing something for the first time. They’re not. It’s a long, long journey and we all need to acknowledge both past and future in order to move forward. Otherwise we will just reinvent the same problems. There’s a lot of valuable data being exchanged in older formats like UN/EDIFACT, and we need to make sure that this remains accessible and usable. The future of data exchange needs to be inclusive of all relevant technologies.

Sue, thank you for the interview

GEFEG.FX 2024-Q3 Release News

With the new GEFEG.FX quarterly release 2024-Q3, the following functionalities are also available for use.

Schematron Editor – More efficient validation with precise checks

The GEFEG.FX Schema Editor makes working with XML schemas much easier and more efficient. You can specifically restrict formats, value ranges and accuracies of elements and attributes in a GEFEG.FX schema. Transmitted values in the XML file must fulfil precisely these requirements.

In practical situations, it is often not enough just to check the syntax; complex business rules, such as totalling calculations or if-then conditions, must also be fulfilled. These specific requirements can be perfectly covered in GEFEG.FX with Schematron rules.
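The kind of cross-field business rule that Schematron expresses can be sketched in plain Python (a simplified totalling check over invented element names, not GEFEG.FX’s actual validation logic):

```python
import xml.etree.ElementTree as ET

# A simplified totalling rule of the kind Schematron expresses:
# the invoice total must equal the sum of its line amounts.
# Element names here are invented for the illustration.
invoice_xml = """
<Invoice>
  <Line><Amount>10.00</Amount></Line>
  <Line><Amount>5.50</Amount></Line>
  <Total>15.50</Total>
</Invoice>
"""

root = ET.fromstring(invoice_xml)
line_sum = sum(float(a.text) for a in root.findall("./Line/Amount"))
total = float(root.findtext("Total"))

# A schema can check that each Amount is a well-formed decimal;
# only a business rule like this checks that the values are
# mutually consistent.
rule_ok = abs(line_sum - total) < 0.005
```

In Schematron, the same idea is written declaratively as an assertion over the document, which is exactly what the editor described below lets you edit and test rule by rule.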

You can use the Schematron editor to edit and test individual Schematron rules directly and specifically in your XSD project. You don’t have to process the entire file, instead changes can be checked quickly and precisely.

This is how it works:

  • Simply click on the ‘Check’ note of your Schematron rule and select ‘Edit and test Schematron rule’ in the context menu to open the editor for the respective rule.

 

Your benefits:

  • Fast validation: Check your XML files easily and precisely.
  • Clear results: Thanks to the markings in green (error-free) and red (incorrect), you know immediately where action is required.
  • Efficient workflow: Edit rules directly in the ‘Assertion’ field, test the changes immediately and repeat the process until the desired result is achieved.

 

With the Schematron editor, you can prepare or correct Schematron rules more quickly and ensure smooth and correct data processing.

Directly test Schematron rules in GEFEG.FX

Which export is the best choice to create an XSD from my GEFEG.FX schema?

Depending on the application, use different export options to generate an XSD file from your GEFEG.FX schema.

In the B2B environment, an XSD file is often used for different scenarios:

  • Message structure: An XSD can be used to represent the structure of a message by clearly displaying all the necessary elements and attributes of an XML file.
  • Validation: An XSD is also used to validate XML instances. Stricter requirements apply here, as messages can be designed down to element level in GEFEG.FX.

 

If you want to use your XSD file for validation, we recommend exporting it as a ‘Validation Schema’. This export takes into account all changes that you have made at element level and creates an XSD file that integrates these adjustments. This differs from the ‘Profile Schema’ export, where such changes are not applied.

The ‘Validation Schema’ export is available as an add-on and offers you a customised solution for validating complex XML instances.

With the right export, you can ensure that your XSD file meets exactly the requirements you need for your application.

Tips and Tricks for GEFEG.FX: Open the Windows Explorer in the Manager

Opening the Windows Explorer in the GEFEG.FX Manager

Here’s a pro tip for those occasions when you need to handle something outside of GEFEG.FX. Simply highlight the relevant section in the GEFEG.FX Manager, then select “Open folder in Explorer” from the menu. This will give you direct access to the files you need right within Windows Explorer. It’s a quick and efficient way to manage your data without leaving the GEFEG.FX environment.

This function is particularly useful if you want to fill a test data folder with test messages: Open the test data folder directly from GEFEG.FX, copy the test messages to the corresponding folder in Windows, and then update the test data folder in GEFEG.FX. Your test messages are then immediately available for validation in GEFEG.FX.

Data packages in GEFEG.FX

The following new, supplemented or modified data packages are available for download according to your license scope.

  • cXML – New data package
  • New: Sample data provided for API and JSON
  • UBL 2.2, 2.3, 2.4
  • RosettaNet Update: New PIPs provided
  • WCO Data Model version 4.1.0

Data update now available with GEFEG.FX

The World Customs Organisation (WCO) has recently published version 4.1.0 of its data model. GEFEG.FX users of the WCO Data Model can now access the new version publication 4.1.0 after performing an internet update.

New: Booking Reservation Information DIP Now Included in version 4.1.0

As with previous updates, the World Customs Organisation provides key regulatory data requirements in response to new or amended legislation. These are first submitted by customs authorities and implementers as amendments to the WCO data model and then implemented.

With the release of version 4.1.0, the WCO data model introduces the new Booking Reservation Information (BRI) dataset. This dataset is now available as a derived information package (DIP), which has been specifically designed to simplify the implementation tasks of the users of the WCO Data Model and the cruise industry.

This updated version also integrates the UPU dataset and the Joint Message Standards, which further improves the simplification and processing of postal items. These enhancements are aimed at improving efficiency and compliance across the global customs landscape.

Customs authorities around the globe further strive for effectiveness and efficiency

It is an important objective for the WCO to provide and further develop its global standard for seamless cross-border transactions for all Customs administrations worldwide.

What are the benefits of the WCO Data Model, which is intended to be the basis for information exchange of cross-border regulatory processes in a global supply chain?

The Data Model opens the possibility for Customs authorities to achieve interoperability and collaboration in Single Window and other implementations. Data flow and integration of business data for Customs procedures are simplified and harmonized.

The main components of the WCO data model consist of ‘Base Information Packages’ and ‘Additional Information Packages’.

Information Packages compile information that is transmitted by the trading partners on the one hand and processed by customs authorities for typical customs processes and procedures on the other. Customs processes cover Single Window or other implementations, including those at the virtual border. This includes, for example, declarations of goods movement, licenses, permits, certificates, or other types of regulatory cross-border trade documents.

Delivery of the WCO Data Model in a structured and reusable format in GEFEG.FX

In cooperation with the World Customs Organization, GEFEG has been delivering the WCO Data Model with GEFEG.FX software since the early 2010s. For customs authorities, government organisations, traders and other parties involved in cross-border regulatory processes, this has opened up new opportunities for joint development work and user-specific use of the WCO Data Model. The advantage for our users: GEFEG.FX simplifies and rationalises the reuse of the WCO Data Model. Furthermore, a ready-to-use XML schema export function compatible with the WCO Data Model also contributes to the support of customised implementations.

Easy and effective use of the WCO Data Model

Many users of the WCO Data Model packages in GEFEG.FX have already successfully made use of the simple and efficient methods for reusing the WCO Data Model. They use GEFEG.FX to plan and implement their country and/or region-specific customs data requirements based on legislation. Our users have an important task with every new release. They need to determine whether their existing implementations need to be modified to incorporate the latest WCO definitions of objects and customs procedures. This is the only way to ensure continuous compliance with the data model.

Welcome to the WCO Data Model 4.1.0 webinar

GEFEG invites all interested users of the WCO Data Model to participate in our webinar (held in English) on the changes in the latest version 4.1.0 of the WCO Data Model. The webinar will look at the potential impact of the new version and its implementation by business and technical implementers. The audience will also receive information on the ‘how-to’ documents supplied with the new release, which will support all users in applying all the typical steps involved in implementing the new version of the WCO Data Model. Participants will then have the opportunity to express their wishes, questions and comments during the 15-minute question and answer session.

Now also develop JSON schema guides with GEFEG.FX – New functions in the JSON schema editor

Enhancement for the development of JSON schemas for EDI and business data management: You can now also create JSON schema guides with GEFEG.FX. This means that the proven guide technology is now also available for JSON schemas.

Read more: New JSON schema guide functions – More flexibility and quality for EDI

 

What else is new in the GEFEG.FX 2024-Q2 release?

With the new GEFEG.FX quarterly release 2024-Q2, the following new or enhanced functionalities are also available for use.

Assign file names automatically in the publishing project, as of now in the current release

The new version of GEFEG.FX allows you to automatically assign file names when creating documentation, whereas previously each new file name had to be defined manually. From now on, GEFEG.FX automatically uses the names of the GEFEG.FX objects as aliases for the documentation files that you want to create.

The new process saves time and eliminates potential sources of error, as manual naming per documentation file is no longer necessary.

 

Exporting ISO 20022 schemas from data models is now easier

More and more B2B standards are being published as syntax-neutral data models, including the ISO 20022 data model for the financial industry. The use of data models with GEFEG.FX has the unique advantage that company-specific GEFEG.FX guidelines can be created on the basis of the data models. In these guides, users describe the requirements of their company, such as the restriction of elements.

XML schema formats generated from the data model or data model guideline are used for data exchange in production systems. The smooth, automatic flow of data from the data model to the schema is therefore an important prerequisite for successful data exchange.

Previously, a GEFEG.FX schema had to be created manually in an intermediate step and then exported as an XSD file. This process has been optimised – from the data model to the GEFEG.FX schema! Now you can export the XML schema directly from a data model via publishing projects with a single click.

 

Improved conversion of continuous text in Microsoft Word documents for PDF documents

GEFEG.FX enables you to document data structures simply and efficiently. Many users rely on Microsoft Word to present their data clearly together with supplementary information, providing a clear insight into the structure and properties of the data.

The output of these Word documents as PDF files has now been improved. If you create documentation with GEFEG.FX publishing projects, plain text is now generated in all notes with text content, and line breaks within continuous text are omitted. This eliminates potential sources of error and simplifies the subsequent steps towards generating a PDF file as the documentation result.

 

Improved guide comparison display

The guide comparison shows differences between two comparison objects in a comparison list below the two data structures. Because support cases showed that this display was not always easy to understand, the view has been streamlined and new categories have been added, helping users recognise and understand the differences between the data structures more quickly.

 

Data packages in GEFEG.FX

The following new, supplemented or modified data packages are available for download

  • UN/EDIFACT: Version D.23B will not be available in accordance with the UN/CEFACT decision, as no change requests and therefore no changes have been submitted
  • UN/Locode, Status as of 2023-2
  • GS1 EANCOM® Application Guidelines: Fashion 2.1 added
  • ISO20022: Version 2024 of the e-Repository is available.
  • Odette Recommendations: An updated version is available
  • VDA Recommendations: An updated version is available
  • xCBL 3.0 + 3.5: Elements now also contain descriptions
  • DK Guideline Schemas V3.7 (Deutsche Kreditwirtschaft)

 

In the middle of 2023, the World Customs Organization released a new major version of its Data Model. GEFEG.FX users of the WCO Data Model version 4.0.0 can now access the new version publication.

What is new in v4.0.0? Regulatory data requirements due to new or amended legislation submitted as change requests by Customs authorities and WCO Data Model implementers have been added and/or modified.

Furthermore, this major version release addresses cosmetic and breaking corrections that had been pending since the last major version, improving the efficiency and usability of the WCO Data Model.

The involvement of Customs authorities globally supports an important objective of the WCO Data Model: requirements of national and regional legislation or implementation guidelines are considered and incorporated into the WCO Data Model. There are changes in the Base Information Packages (BIPs): two of the major BIPs have been merged to create a new Additional Information Package (AIP). Please note that this type of change, amongst others, makes this release version non-backwards compatible with previous versions.

Customs authorities around the globe further strive for effectiveness and efficiency

It is an important objective for the WCO to provide and further develop its global standard for seamless cross-border transactions for all Customs administrations worldwide.

What are the benefits of the WCO Data Model, which is intended to be the basis for information exchange of cross-border regulatory processes in a global supply chain?

  • The Data Model opens the possibility for Customs authorities to achieve interoperability and collaboration in Single Window and other implementations.
  • Data flow and integration of business data for Customs procedures are simplified and harmonized.
  • The main components of the WCO Data Model consist of “Base Information Packages” and “Additional Information Packages”.

Information Packages compile information submitted by trading parties on one side and processed by Customs authorities for typical Customs processes and procedures on the other side. Customs processes cover Single Window, or other implementations, including those at the virtual border. This includes, for example, declaration of goods movement, licenses, permits, certificates, or other types of regulatory cross-border trade documents.

Delivery of the WCO Data Model in a structured and reusable format in GEFEG.FX

In cooperation with the World Customs Organization, GEFEG has been delivering the WCO Data Model with GEFEG.FX software since the early 2010s. Thus, new possibilities for joint development work and user-specific usage of the WCO Data Model opened up for Customs authorities, governmental organizations, traders and other parties involved in cross-border regulatory processes. For users, the reuse of the WCO Data Model is simplified and streamlined with GEFEG.FX. A ready-to-use WCO Data Model compliant XML schema export also contributes to this.

Easy and effective use of the WCO Data Model

Many users of the WCO Data Model packages in GEFEG.FX have been impressed by the simple and efficient methods for reusing the WCO Data Model to plan and implement their country and/or regional specific Customs data requirements based on legislation. With each new release, it is important for users to determine if their existing implementations need to be modified to incorporate the latest WCO definitions of objects and Customs procedures to ensure compliance with the Data Model.

In this release, changes have been applied as submitted by member administrations plus corrective changes from WCO intersessional development work approved and maintained by the WCO Data Model Project Team and incorporated into GEFEG.FX by the GEFEG Implementation Support Team.

Join the WCO DM 4.0.0 Webinar

GEFEG invites all interested users of the WCO Data Model to join our webinar on the deliverables and changes in the latest published version 4.0.0 of the WCO Data Model with GEFEG.FX. The webinar will also address the potential impact of the new version and its implementations by business and technical implementers. In addition, GEFEG will present the “How-to” documents delivered with the new release to assist all users in applying all the typical steps when implementing the new version of the WCO Data Model. In the 15-minute question and answer session, participants will have the opportunity to express wishes, questions and comments.

GEFEG.FX – New in the 2023-Q3 Update

With the new GEFEG.FX quarterly release 2023-Q3, the following new or further developed functionalities are available for use.

New data packages in GEFEG.FX

  • UN/EDIFACT
    • UN/EDIFACT D.22B
    • UN/EDIFACT D.22A
    • UNECE / ISO code lists update
    • UN Locode 2022-2
  • ISO 20022
    • Model & schema 2023-07
    • External code lists 2023-07
    • Models and schemas 2023-03-21
    • Code lists update
  • GS1 eCom standards
    • GS1 Semantic Data Dictionary (SDD): Despatch Advice
    • GS1 XML 3.5.1
    • GS1 Application Guidelines 9.3
    • GS1 XML update for code lists and example values
    • GS1 XML enriched with codes, as of 3.0

Using JSON schema for EDI: Advanced integration with the new GEFEG.FX JSON Editor

Times change, and that is usually accompanied by new requirements. This naturally also affects the electronic exchange of business data, in particular new technical exchange formats that supplement the classic EDI and XML formats. For some time now, GEFEG.FX’s range of services has included its own API Editor.

With the new release, a JSON editor is now also available and offers a powerful way to design, customise and reuse customer specific data structures in JSON format.

For more information, see the following article: Now ready for use for GEFEG.FX users: The JSON Editor

UN/EDIFACT syntax versions consolidated!

With the publication of UN/EDIFACT Syntax Version 3 Part 11, users of the UN/EDIFACT standard are able, among other things, to use selected fields of syntax version 4 in syntax version 3. This new syntax is implemented in GEFEG.FX as of the D.22A release.

Users now have the possibility to choose between the following syntaxes:

  • Syntax 3,
  • Syntax 4 and
  • Syntax 3 Part 11

In short, the new syntax allows compatibility between the previous syntaxes.

We would be happy to discuss what this means for your EDIFACT based guides and whether there is a need for action in a non-binding meeting. Get in touch with us!

New functions require new data format versions

What would software be without constant further developments, functional enhancements, bug fixes and, in the case of GEFEG.FX, new data modules? Of course, this is a purely rhetorical question, because we are constantly implementing new customer requirements or expanding existing functions.
Sometimes you as a user will be prompted by a pop-up to update the data format version in your GEFEG.FX version due to an innovation: If possible, always save your GEFEG.FX Guides in a current data format version immediately in order to be able to use the newly provided functions and improvements.

You can read the current data format version of your GEFEG.FX object in the GEFEG.FX information area.

Screenshot from GEFEG.FX showing the current data format version of a GEFEG.FX object

The next practical case will follow with the next release. For EDI guides and EDI standards in particular, the following applies: as of the 2023-Q4 release, these can only be processed with data format version 2019-Q3 or later. GEFEG recommends that you update the data format version of your files before the next release upgrade so that you can continue to work seamlessly with your EDI guides.

Please also note the following information:

Regardless of the specific situation of the 2023-Q4 release, we would like to point out, in connection with the data format version, that all GEFEG.FX users within an organisation should use the same GEFEG.FX release version to avoid compatibility problems. Otherwise, GEFEG.FX objects saved with a newer GEFEG.FX version may not be openable with older GEFEG.FX versions.

Over the last 30 years, a number of standards for electronic data exchange between business partners have been created in order to standardise it, often with the aim of adapting it to the business processes of an industry. These standards, which are now well established, have contributed significantly to the fact that companies today save costs by using standardised data exchange formats. After this successful innovation push through B2B standards in electronic data exchange, the search continues for new, innovative and more efficient solutions to further optimise data exchange.

GS1 has taken the next step with the development and publication of the “GS1 Semantic Data Dictionary” (SDD). The semantic data model was developed to define all business-relevant data needed for the exchange of EDI messages in the supply chain, such as purchase order, order confirmation, invoice and other messages. Unlike syntax-based formats such as UN/EDIFACT, XML Schema or X12, the message structure does not matter: the GS1 SDD focuses on the semantic content of the business processes. This creates a comprehensive, higher-level data model that acts as a dictionary and library.

How does the GS1 SDD model benefit users?

Generally speaking, any number of subsets can be derived from the GS1 SDD model and/or from the subsets for the supply chain processes. Users leverage the GS1 SDD data model to develop data models tailored to their own use case, containing only the information that they actually need. Independently of the technical aspects of data exchange, this further improves the harmonisation, maintainability, consistency and interoperability of the data. In GEFEG.FX, these subsets can be created as model guides and used with the usual range of functions such as documentation, structural adjustments and mapping options.
Based on the data model, users can develop their APIs or, as a next step, link the user-specific data model via a logical mapping to the data formats of the eBusiness standards they use, e.g. GS1 EANCOM® or GS1 XML.
Another advantage for users: in the case of future updates to the data model, changes can be transferred to the already linked data formats with little effort. Users can thus manage their interfaces with business partners even more efficiently.
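The derivation of a use-case subset and its projection onto a concrete syntax can be sketched as follows. All model terms and segment references are hypothetical examples chosen for illustration; they are not excerpts from the GS1 SDD or the EANCOM® documentation.

```python
# Hypothetical sketch: derive a use-case subset from a semantic model
# and project it onto a syntax via a reusable mapping.

FULL_MODEL = {
    "Buyer GLN", "Seller GLN", "Invoice Number",
    "Invoice Issue Date", "Line Item GTIN", "Line Item Quantity",
    "Allowance Amount",  # not needed by every trading partner
}

# Semantic term -> syntax representation (values are illustrative)
MAPPING = {
    "Invoice Number":     "BGM segment",
    "Invoice Issue Date": "DTM segment",
    "Buyer GLN":          "NAD+BY segment",
    "Seller GLN":         "NAD+SU segment",
    "Line Item GTIN":     "LIN segment",
    "Line Item Quantity": "QTY segment",
    "Allowance Amount":   "MOA segment",
}

def derive_subset(model, needed):
    """Keep only the terms a specific use case requires."""
    missing = needed - model
    if missing:
        raise ValueError(f"terms not in the model: {missing}")
    return needed

def syntax_view(subset, mapping):
    """Project the subset onto a concrete syntax via the mapping."""
    return {term: mapping[term] for term in subset}

subset = derive_subset(
    FULL_MODEL, {"Invoice Number", "Invoice Issue Date", "Buyer GLN"}
)
print(syntax_view(subset, MAPPING))
```

Because the mapping lives alongside the model rather than inside each message guide, an update to the model only requires re-running the projection, which mirrors the low-effort update path described above.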

GS1 SDD as a new data module in the GEFEG Distribution

With great commitment, GEFEG has participated in the development process of the GS1 SDD and contributed to the release of this data model. As of now, we offer the GS1 SDD data model as an additional data module in the distribution of GEFEG.FX. The data package also contains the model subsets of the Orders, Invoice and Order Response processes. In addition, it contains detailed mappings for supply chain messages to the GS1 EANCOM®, GS1 XML and SCRDM CII data formats of the same name.

The new data package allows you to easily integrate the GS1 SDD into your existing message formats using the included mappings and to communicate smoothly between different data formats.

Should you require further information or support, we will be happy to assist you. We look forward to introducing you to the advantages of the new GEFEG.FX data package and assisting you with the implementation.

Roman Strand, Senior Manager Master Data + Data Exchange at GS1 Germany, on the success and future of EANCOM® and the GS1 application recommendations based on EANCOM®.

Roman Strand has been working for GS1 Germany for more than 20 years and is, among other things, the head of the national EDI/eCommerce specialist group. In this interview, he explains the role of the EANCOM® standard and why the GS1 specification with the associated application recommendations will continue to determine the business of mass data exchange in the future.

Hello Mr Strand, you have been working for GS1 Germany for over 20 years. What were the major focal points of your work during this period?

I have worked the whole time at GS1 in the national and international standardisation department. In the early years I was an apprentice under Norbert Horst, who helped develop the EANCOM® standard in Germany. During this time I learned a lot and also started working with GEFEG.FX. I have remained loyal to this department and to the topic of national and international standardisation to this day. In various committees, I drive the further development of our standards together with our partners from the business community. In addition, I work as a trainer and conduct EDI training courses in which our customers are trained to become certified EDI managers, among other things.

Which topics did you deal with a lot last year?

Alongside the further development and maintenance of our EANCOM® and XML standards, we deal with current digitalisation topics and check to what extent innovations could be relevant for our work at GS1. We also had our big anniversary celebration last year, because EANCOM® has now been on the market for more than 30 years and our application recommendations have existed for more than 20 years.

Why was the EANCOM® standard developed and what function does it fulfil?

The EANCOM® standard was developed before my time at GS1. It is based on the parent standard EDIFACT, which is much too big and complex for most users. The great achievement of the EANCOM® standard is that it reduces this complexity to those elements that are important for our customers. Approximately 220 EDIFACT messages became 50 EANCOM® messages, which were then further adapted in industry-specific EANCOM® application recommendations. The leaner a standard is, the more economically and efficiently it can be implemented. This simplification made the widespread use of the standard by many companies possible in the first place. We also translated the English-language standard almost completely into German, which was another great simplification for the German community.

How were you personally involved in the development of the EANCOM® standard?

The development of the EANCOM® standard is mainly driven by our customers from trade, industry and other sectors. They pass on their requirements to GS1, which are then processed in the EDI/eCommerce specialist group. The decisions of the expert group are then implemented by me, among others, as a representative of GS1.

How can I imagine the role of GS1 in this process?

There are many published standards on the market for electronic data exchange between companies. But behind very few of them is a reliable organisation that is continuously committed to the further development of its standard. With us, clients can be confident that implementing the standard is a future-proof investment. If, for example, there is a legal change that also has to be taken into account in the standard, we adapt the standard.
Furthermore, we are responsible for the documentation and specification of the EANCOM® standard. Here, too, our focus is on simplification. Among other things, we ensure that values from code lists are used wherever possible instead of free-text fields, because the automated processing of free text is error-prone.
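The advantage of code lists over free text can be shown with a short sketch. The unit codes below follow the pattern of common UN/ECE unit-of-measure codes, but the example itself is only an illustration of the principle, not a GS1 specification.

```python
# Sketch of why code lists beat free text for automated processing:
# a coded value either validates against the agreed list or it does
# not, while free text requires error-prone interpretation.

UNIT_CODES = {"KGM": "kilogram", "LTR": "litre", "PCE": "piece"}

def validate_unit(code):
    """Accept only values from the agreed code list."""
    if code not in UNIT_CODES:
        raise ValueError(f"unknown unit code: {code!r}")
    return UNIT_CODES[code]

print(validate_unit("KGM"))  # unambiguous for every receiver
# Free text such as "Kilo", "kgs" or "Kilogramm" would all mean the
# same thing to a human, but each would need guesswork in software.
```

The receiver's system can reject an unknown code immediately instead of silently misinterpreting a free-text value, which is exactly the kind of error the code-list approach is meant to prevent.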

You use GEFEG.FX for data modelling and documentation of the EANCOM® standard. For what reasons do you rely on the software for these work steps?

I have been working with GEFEG.FX for many years now and it took me a while before I could really use the software to its full extent. In the foreground you have your specification and in the background you have the standard, which is linked to the corresponding code lists. This means that as a user, when developing my own specification, I cannot do anything in the foreground that is not already stored in the underlying standard. As soon as there is a deviation from the standard, GEFEG.FX reports an error message and ensures that there is no erroneous deviation. For me, this control function is the main advantage of GEFEG.FX as a standard tool. Otherwise, a comma could always be quickly forgotten or some other small syntactical error overlooked.
With the standard running in the background, validations or checks can be carried out conveniently. In addition, documentation can be created quickly at the touch of a button using the various output options. Thanks to these functions, you don’t have to start all over again and save a lot of time in many work steps.

How do you assess the future development of the EANCOM® standard?

For me, EANCOM® is classic EDI, which many people in innovative companies consider old-fashioned. In my opinion, however, this classic EDI offers many advantages. It is a defined structure that works in the mass data exchange business and will continue to work in the future. I once said to a colleague who has been working in EDI at GS1 as long as I have: “Until we retire, we don’t have to worry about EANCOM® being shut down.”
The business is still going and demand remains high. There have been, and continue to be, new technologies that are supposed to replace classic EDI. When I started at GS1, there was a lot of hype about XML. The same happened years later with blockchain technology and today with APIs. All three technologies were seen as replacements for classic EDI, but in the end they are all just additions that offer supporting possibilities in the EDI area. Mass data exchange will continue to be handled by classic EDI, and therefore I assume that the future of the EANCOM® standard is also secure.

Are there any challenges or difficulties that need to be considered in the further development of the standard?

The problem with a global standard is its complexity. Over the years, new information has been added to the standard: every relevant change in the law, for example, led to new additions, without anything ever being deleted, even if no one has used it for 20 years.
We should therefore work towards leaner EANCOM® standards in which only the absolutely necessary information is stored. After all, this reduction of complexity is one of the central strengths of the GS1 standards. We achieve this above all by developing application recommendations in which the underlying standard is specified even further for a specific application. This means fewer data elements and fewer potential sources of error.

We are nearing the end of our conversation. Is there anything else important beyond the EANCOM® standard that you would like to talk about?

Yes, we are currently working on a semantic data model and are thus building a new content basis that contains all relevant information that is to be exchanged electronically. GEFEG is also involved in this development process. With the data model, our customers can freely decide which syntax they use for their data formats for electronic data exchange. This fundamental work will therefore help users to be more independent of a specific syntax in the future and to decide freely whether XML, EANCOM® or even an API should be used for data exchange.

Mr Strand, thank you for this interview.

Digitalisation project for finished vehicles logistics successfully completed

GEFEG supports the development of new messages with technical know-how

The ECG – Association of European Vehicle Logistics has, in cooperation with the automotive organisations Odette International and the German Association of the Automotive Industry (VDA), successfully completed a project to develop recommended messages for the logistics of finished vehicles. Coordinated EDIFACT and XML messages were completed as a new message set covering digital communication across all outbound logistics processes.

More efficient communication and reduced process costs

“The recommended standard EDIFACT and XML messages will allow vehicle manufacturers (OEMs) and Finished Vehicle Logistics Service Providers (LSPs) to communicate with each other in a more efficient way. They will also avoid a proliferation of many different message types across the finished vehicle supply chain, significantly reducing IT development costs for individual companies.” (Quote from Standardisation of FVL digital messages)

The standard consists of a process description for all process stages in outbound vehicle logistics. In addition, EDI experts developed both EDIFACT and XML messages in order to offer alternative electronic communication solutions. At least one sample message was published for each standard message format.
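As a rough illustration of what such an EDIFACT message physically looks like, the following sketch composes a few segments using the standard EDIFACT service characters (elements separated by "+", components by ":", segments terminated by "'"). The segment content is simplified for illustration and is not one of the published FVL sample messages.

```python
# Minimal illustration of EDIFACT's physical structure. The segment
# tags (UNH, BGM, DTM, UNT) are standard EDIFACT service/header
# segments; the data values are made up for this sketch.

def segment(tag, *elements):
    """Compose one EDIFACT segment from its tag and data elements."""
    return tag + "".join("+" + e for e in elements) + "'"

message = "".join([
    segment("UNH", "1", "DESADV:D:07A:UN"),  # message header (illustrative)
    segment("BGM", "351", "DESP0001"),       # document name code + number
    segment("DTM", "137:20240102:102"),      # message date (CCYYMMDD)
    segment("UNT", "4", "1"),                # trailer: segment count, ref
])
print(message)
```

Running the sketch yields one continuous character stream, which is exactly the compact, machine-oriented form that makes EDIFACT efficient for mass data exchange but hard to read without tooling.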

Experts from the automotive industry and vehicle logistics involved in the project

Many OEMs from the automotive industry and logistics service providers were involved in the creation of the new standard, so that a large number of different requirements were considered. The joint recommendation “Digitalisation of Finished Vehicle Logistics Process Description” is available as a free download for all interested parties.

EDIFACT and XML messages for finished vehicle logistics available in GEFEG.FX

GEFEG supported the project especially in the technical implementation of EDIFACT and XML messages with the GEFEG.FX software. In coordination with Odette International and the VDA, GEFEG is able to deliver the newly developed messages as a supplement to the automotive standards.

At the end of 2021, the World Customs Organization published the latest version of its data model. GEFEG.FX users can now obtain the latest version 3.11 of the WCO data model via an update of the GEFEG distribution.

What is new in version 3.11?

Data requirements arising from new or amended legislation, submitted as amendments by customs authorities and WCO data model implementers, have been added or amended.

This release contains 5 new Compositions, 5 new Attributes, 8 new Classes and 3 new Codelists, approved and maintained by the WCO Data Model Projects Team and provided in GEFEG.FX by the GEFEG Implementation Support Team. Learn more about the details in our webinar.

The involvement of customs authorities worldwide supports an important goal of the WCO data model

Data requirements from national and regional legislation and implementation guidelines are incorporated into the latest WCO data model, which reflects feedback and input from customs authorities around the world. This version is backwards compatible with previous versions, as it neither adds nor removes any information packages or business processes. Extensive changes are expected in the next major version of the WCO data model, which is currently scheduled for release in mid-2023.

Improve the effectiveness and efficiency of customs authorities around the world

It is an important goal of the WCO to provide and further develop its global standard to enable seamless cross-border transactions for all customs administrations worldwide.

How do you benefit from using the WCO data model as a basis for the information exchange of cross-border regulatory processes in a global supply chain?

  • The WCO data model opens up the possibility of achieving interoperability and cooperation in Single Window and other implementations by customs authorities.

  • It simplifies and harmonises the data flow and the integration of business data for customs procedures.

  • The “Base Information Packages” are central components of the WCO data model. These information packages compile the information for typical customs processes and procedures in Single Window implementations or at the virtual border: information that has to be submitted by the trading parties and processed by the customs authorities. This includes, for example, declarations of goods movements, licences, permits, certificates and other types of documents for cross-border trade.

Output of the WCO data model in a structured and reusable format in GEFEG.FX

In cooperation with the World Customs Organization, GEFEG has been providing the WCO data model with the GEFEG.FX software since the beginning of 2010. This opens up new opportunities for customs authorities, governmental organisations, traders and others involved in cross-border regulatory processes for joint development work and user-specific use of the WCO data model. For users, GEFEG.FX simplifies and streamlines the reuse of the WCO data model, whether for national customs implementations or for use by government partner agencies. A ready-to-use XML schema export also contributes to this goal.

Easy and effective use of the WCO data model

Many users of the WCO data model packages in GEFEG.FX are impressed by the simple and efficient methods of reusing the WCO data model to plan and implement their country and/or region-specific customs data requirements based on legislation. With each new release, it is important for users to determine whether their existing implementations need to be modified to incorporate the latest WCO definitions of objects and customs procedures to ensure consistency with the data model. If a migration to the latest version of the WCO data model is required, this can easily be carried out in GEFEG.FX.

Join the WCO Data Model 3.11 Webinar

GEFEG invites all interested users of the WCO data model and all other interested parties to join our webinar on the new features and changes in the latest version 3.11 of the WCO data model with GEFEG.FX. The webinar will also address the potential impact of the new version and its implementation by business and technical implementers. In addition, GEFEG will present the “how-to” documents that are updated and delivered with each new version to help all users perform the typical steps when implementing the new version of the WCO data model. During the 15-minute question-and-answer session, participants will have the opportunity to ask questions, express wishes and make comments.