18. Future

Raymond Meester
13 min read · Jul 4, 2024


Aleksandra’s future lies in the Netherlands. She is newly married to Thomas, and together they have found a beautiful lot next to a typical Dutch canal. That is where their future home will be; the foundation stone has just been laid.

When it comes to the company’s future, she also has all kinds of plans, so there is still something left to wish for. Her biggest dream is to operate internationally and eventually set up a branch in Ukraine.

To be successful internationally, the company needs to innovate. The Bakery Group will have to be modular so that a new country can be added quickly. Central IT systems will also have to scale automatically with the growth of the company. But before that happens, let’s build her own house first.

In this chapter, we look at different future scenarios to realize the modular organization: the house of the future, so to speak. What will the field of data integration look like in a few years, and how can it capitalize on new technological developments?

The limits of code

Just as a house is made up of individual bricks, computer programs are made up of individual pieces of code. This code is usually bundled into modules. Programmers are therefore used to working in a modular way. But all those “bricks” must be held together; after all, without cement, a house would be uninhabitable. Programmers talk about compiling code. Once a program is compiled, the code is fixed.

Fortunately, compiled software is often more flexible than hardened cement. Consider music, for example. When you save a recording in a digital file, it can be copied endlessly. Try that with a house!

3D-printed house in Eindhoven, 2021

Still, nothing is impossible. Prefabricated parts, for example, are printed with a 3D printer. Sometimes even an entire house.

So digital copying is very easy. Usually, however, we don’t want to copy something exactly; we want to modify it. Changes in software follow changes in the organization. This normally goes as follows: the user submits the new requirements to the programmer as an issue. The programmer then modifies the program, compiles the code again and shares the new version with the user.

A compiled application, as we saw, is like the walls of a house whose cement has hardened. But just as a house can have a flexible layout (for example, with walls that can easily be moved), running software can also be made flexible. For example, the programmer can give the user more influence by building in options or configuration files.
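
A simple configuration file already gives the user a knob to turn without recompiling anything. Here is a minimal sketch in Java, assuming a hypothetical app.properties file with a feature.enabled flag:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Properties;

    public class AppConfig {
        public static void main(String[] args) throws Exception {
            // Read settings the user can change without recompiling the program
            Properties props = new Properties();
            try (InputStream in = Files.newInputStream(Path.of("app.properties"))) {
                props.load(in);
            }
            // Hypothetical flag that switches a feature on or off at startup
            boolean featureEnabled =
                    Boolean.parseBoolean(props.getProperty("feature.enabled", "false"));
            System.out.println("Feature enabled: " + featureEnabled);
        }
    }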

Yet this approach is still not flexible enough for an organization to respond appropriately to a rapidly changing marketplace. Businesses and the applications they use do not have the same modularity that software programmers enjoy during construction. A gap remains between IT and the business.

The future of data integration

Changes within information technology happen more slowly than many people think. One of the most widely used techniques in data integration today is still FTP (1973), a protocol that has been in use for more than fifty years. REST, an architectural style that is still not fully established, was also invented more than twenty years ago. So we can say that things within IT, and especially within data integration, don’t change all that quickly.

That is certainly not to say that there are no developments in the field. We have already seen IKEA-like platforms in the cloud, new data hubs and streaming brokers. In which direction the field will develop further is hard to say. Nevertheless, I want to share some of my thoughts on this in this last chapter.

Data Integration Language

My expectation is that integration will also slowly fade into the fog of the cloud. In doing so, it will become easier to use modules at different levels within an organization.

Integration Layer

Based on modules at each level of the integration layer, a user can integrate at the business, application and data integration levels. The low-level data integration modules, in turn, make up the application-level modules, and those, in turn, make up the business-level modules.

At least, this is the idea behind DIL, or Data Integration Language, a programming language I designed in the summer of 2022.

The programming language is designed so that both low-level tools (programming) and low-code tools (modeling) can be based on it. In the former case, you work primarily at the data level; in the latter, you build integrations more at the business level.

Low-code is a way to bridge the gap between IT and the business. Instead of a programming language, low-code uses visual building blocks and modeling tools to create certain functionality. The downside is that it is not easy to build a low-code platform. Each part of a program, such as the user interface, data models or business logic, requires different forms of visualization. In this respect, code is more universal, since everything is represented as text. DIL makes both possible; it is one way to think about integration at different levels.

Low-code as a solution?

There have been attempts to create low-code platforms for years, but they always led to inflexible programs. In recent years, companies like Mendix, Microsoft and OutSystems have managed to make progress with low-code. It is still not as open and comprehensive as most programming languages, but it can be used to build full-fledged applications.

The functionalities of low-code platforms are growing. This is necessary to create flexible programs. The downside is that low-code programs are becoming increasingly complex. More and more low-code platform specialists are appearing to manage everything. This undermines the freedom and accessibility of low-code.

Low-code is part of the solution because it brings programming closer to business practice. But it is not the solution to the central problem of application building: programs are modular only during the building process, not when they are in use.

Integrated data

Now what about data integration? Again, there are trends and experiments that can make applications integrate with each other faster and more easily. Besides low-code for integration, these include, as we have already read, containers, (hybrid) cloud, streaming and data hubs.

Containers as Lego blocks

An analogy that has long been used in software is Lego. Lego lets you build complex structures with simple building blocks. You start with the instructions, of course, but eventually creative users can invent and build all kinds of constructions themselves. The beauty of Lego is that, unlike bricks, you can take the blocks apart again to build something new.

There is another side to Lego blocks, though. A limited set of building blocks and materials can also be restrictive, and the concept of Lego is sometimes difficult to translate into practice. In software, you want building blocks that offer some flexibility and still fit together seamlessly.

Another analogy commonly used today is that of containers. In the introduction, we saw that containers decouple goods from the means of transporting them. Containers, whether transported by container ship, truck or train, all have the same dimensions. From the outside, we do not know what is inside them.

In data integration, data formats already serve somewhat the same function. But the common example within software today, of course, is Docker containers.

What if you could use Docker containers to build the application and integration layer as if they were Lego bricks? This is the idea of container clusters. You can have a cluster of containers that contains both the technical integration modules (gateways/brokers) and modules that perform the functional integration role (DataFlow/ESB/API). These modules run in a cluster, such as Kubernetes. Together, these tools form the data integration layer.
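
As a minimal sketch of treating such modules as replaceable blocks, this is how you might scale one integration module in a Kubernetes cluster from Java, assuming the fabric8 kubernetes-client library; the namespace and deployment name are hypothetical:

    import io.fabric8.kubernetes.client.KubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClientBuilder;

    public class ScaleIntegrationModule {
        public static void main(String[] args) {
            try (KubernetesClient client = new KubernetesClientBuilder().build()) {
                // Run three replicas of the (hypothetical) gateway module in the cluster
                client.apps().deployments()
                      .inNamespace("integration")
                      .withName("integration-gateway")
                      .scale(3);
            }
        }
    }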

The building blocks are at a higher level than code. From code, you build applications. After these applications are in containers, you build chains of containers. The chains consist of application containers on one side and containers with integration components on the other side. It is a combination of the three analogies:

  1. Lego as a building block
  2. containers as a standard way of packing and moving software
  3. a swarm of containers for dynamically adding building blocks

A concrete example of this principle is Amazon’s AWS Application Composer:

Although this focuses mainly on the application layer, the idea is also interesting for the integration layer.

integration Platform as a Service

Low-code development environments try to minimize writing code as much as possible when creating an application. Often the emphasis is on modeling and creating graphical user interfaces. Such environments are also known as Rapid Application Development or Model-Driven Development. The idea is that you can create applications faster and build integrations that are closer to the business and more standardized.

Companies like OutSystems and Mendix have had good results building integrations, mobile apps and web applications with such environments. Still, application low-code platforms are often too limited to build flexible and scalable integrations.

An iPaaS (integration Platform as a Service) instead concentrates on integration functionality built in a low-code fashion in the browser. An example is the integration platform integrationmadeeasy.com. This platform is low-code and offered as a service, enabling data integration in a more high-level way.

The problem is that a platform is often a closed ecosystem. People do not think in terms of building blocks from a business perspective, but in terms of the platform. However, the business and the platform are never the same, which is why you would want to deploy small, independent and open modules that each contain a specific functionality.

When the concepts of low-code, integration platform as a service and cloud native come together, an integration layer can also be built modularly. The various pieces of integration software together form one whole. If one building block is no longer adequate, or a better one becomes available, you only need to replace that single block.

Standardization and data nodes

Another movement is better standardization of data formats. Think of standards such as HL7, GS1, X12 and so on. Here it is mainly standardization organizations that want to facilitate exchange. In practice, there are often so many possibilities that a standard supported by an industry is still bulky and difficult to implement.

While individual formats are standardized, much else is not. Each standard looks different: they use different data formats, different terminologies and different technologies. You do see companies and institutions responding to this and acting as hubs.

A next step would be a standardized unit for data exchange. An example is RDF, which can serve as a meta-model for all kinds of specific formats: a standardization of standardizations, so to speak. The advantage is that if you know the basis of, say, HL7 in healthcare, you also know the basis of GS1. In addition to content standardization, there is also standardization of identity, authorization and authentication. An example is the iShare agreement system, which provides a standard for exchange between logistics parties.
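
To make the meta-model idea concrete, here is a minimal sketch of describing data in RDF with Apache Jena; the namespace, properties and identifiers are hypothetical and not part of any actual HL7 or GS1 specification:

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;

    public class RdfMetaModelExample {
        public static void main(String[] args) {
            // Hypothetical namespace for a healthcare message described on top of RDF
            String ns = "http://example.org/hl7/";

            Model model = ModelFactory.createDefaultModel();
            Property hasPatient = model.createProperty(ns, "hasPatient");
            Property hasName = model.createProperty(ns, "hasName");

            Resource message = model.createResource(ns + "message/123");
            Resource patient = model.createResource(ns + "patient/42")
                                    .addProperty(hasName, "A. Kovalenko");
            message.addProperty(hasPatient, patient);

            // Serialize the triples as Turtle; another standard (say GS1) could be
            // described with the same meta-model, only with a different vocabulary
            model.write(System.out, "TURTLE");
        }
    }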

New protocols

In addition to standardization of formats, there are also developments in protocols, such as QUIC and RSocket.

QUIC

The Internet was originally developed at the forerunner of DARPA, the research arm of the U.S. Defense Department. The goal was to build a robust decentralized network that would still work if one part failed, for example due to a breakdown or an attack.

The Internet was originally built on the TCP protocol. This protocol breaks data down into small packets that pass through multiple network layers and are reassembled at the receiving end. These packets can dynamically travel multiple routes until they reach their destination.

On top of the TCP protocol, the HTTP protocol was later designed to serve websites. Meanwhile, the World Wide Web has grown so much that it is running into its limitations. There is therefore a need for a more secure and robust Internet. Initiatives have emerged to replace the TCP protocol with the QUIC protocol and HTTP with HTTP/3 or RSocket. Now that is a book by itself, which by the way is already written and can be read here.

It remains to be seen whether future integration modules, such as brokers, will make use of QUIC. It also remains to be seen whether the idea of robust data packets arriving at their destination via different routes can be realized in the application layer, so that when one integration module cannot be reached, the data can be processed by another module.

RSocket

One of the limitations of HTTP is that the ways of communicating are limited. You can send something or retrieve something, but there are other communication patterns. This has mostly been solved at the middleware level by brokers, but RSocket tries to solve it at the network level.

RSocket is a new protocol initially developed by Netflix. The motivation behind its development was to replace the hypertext transfer protocol (HTTP), which is inefficient for many tasks such as microservices communication, with a protocol that has less overhead.

In addition to the usual communication patterns:

  • fire-and-forget (send, no reply)
  • request-and-response

RSocket adds the following patterns:

  • request/stream (a finite stream of many responses)
  • channel (two-way data streams)
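
To make these patterns concrete, here is a minimal client sketch using the rsocket-java library (io.rsocket), assuming an RSocket server is already listening on localhost:7000; the payloads are hypothetical, and the two-way channel pattern is omitted for brevity:

    import io.rsocket.Payload;
    import io.rsocket.RSocket;
    import io.rsocket.core.RSocketConnector;
    import io.rsocket.transport.netty.client.TcpClientTransport;
    import io.rsocket.util.DefaultPayload;

    public class RSocketPatterns {
        public static void main(String[] args) {
            // Connect to a (hypothetical) RSocket server
            RSocket socket = RSocketConnector.connectWith(
                    TcpClientTransport.create("localhost", 7000)).block();

            // fire-and-forget: send a payload, expect no reply
            socket.fireAndForget(DefaultPayload.create("audit event")).block();

            // request-and-response: one payload in, one payload out
            String reply = socket.requestResponse(DefaultPayload.create("order-42"))
                                 .block()
                                 .getDataUtf8();
            System.out.println(reply);

            // request/stream: one request, a finite stream of many responses
            socket.requestStream(DefaultPayload.create("orders-today"))
                  .map(Payload::getDataUtf8)
                  .doOnNext(System.out::println)
                  .blockLast();

            socket.dispose();
        }
    }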

Runtime delivery

This book started with the observation that, when it comes to integrating, people are a lot more flexible than systems. Just how flexible people are can be seen very well in the following image:

There are no traffic signs or traffic lights; everyone just drives past each other. And this magically goes well. Well, almost always: there are just as many videos on YouTube where it doesn’t go right at all. The point, of course, is that we secretly want data to magically flow like this between applications.

In the animal kingdom, they are already further along in this regard. Bees fly past each other in swarms without ever touching. Using bionics, engineers are now trying to make drones fly exactly like this and even work together. In the logistics sector, there are experiments with technologies that allow vehicles, such as cars and trucks, to communicate with each other, so that the situation in the video of an Indian intersection could become an everyday occurrence in other countries as well. And then without accidents, of course.

Basically, you want a direct and intuitive way to deal with the digital world. A much more direct approach, which you might call runtime delivery, gives a company direct online access to the building blocks for programs. In other words, this is not a low-code platform to create new applications, but a “high-code” service to build a business with.

The individual building blocks perform only a single task (separation of concerns) within a specific domain, completely independent of other building blocks. These building blocks do their work within the cloud, which ensures a streamlined service, accessible from anywhere. Runtime-delivery building blocks allow developers on the business side who currently rely on Excel or low-code platforms to add functionality at the enterprise level.

Runtime delivery may sound like a distant dream of the future, but it is already a reality. A good example is a business rules engine (BRE) that functions totally independently from other software. The BRE uses “if … then … else” constructs, so simple that a company can easily modify them itself.
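
Here is a minimal sketch of such a standalone rule in Java; the rule itself (orders above 100 get a 10% discount) and the field names are hypothetical:

    import java.util.Map;

    /** A standalone "if ... then ... else" business rule, evaluated at runtime. */
    public class DiscountRule {

        // Hypothetical rule: orders above 100 get a 10% discount
        public static double apply(Map<String, Object> order) {
            double amount = ((Number) order.getOrDefault("amount", 0)).doubleValue();
            if (amount > 100) {
                return amount * 0.90;   // then: apply the discount
            } else {
                return amount;          // else: leave the amount unchanged
            }
        }

        public static void main(String[] args) {
            Map<String, Object> bigOrder = Map.of("amount", 120.0);
            Map<String, Object> smallOrder = Map.of("amount", 80.0);
            System.out.println(apply(bigOrder));    // prints 108.0
            System.out.println(apply(smallOrder));  // prints 80.0
        }
    }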

Hybrid cloud

Hybrid cloud is an application landscape that uses a mix of environments to host applications, for example on-premise, private cloud and public cloud (AWS/Azure). This provides more options for where application and business functionality is executed, but it does present challenges for data integration, such as dependencies between networks and more security barriers. The fact is that hybrid cloud has become prevalent, and data integration must be able to deal with it.

Cloud Native Application Bundle (CNAB) is a new initiative from Microsoft, VMware, Docker and IBM, among others, that allows containers to run in any type of cluster. A Docker container by itself works anywhere, but this is not always true when it is connected to a cluster (Azure AKS, Amazon EKS, OpenShift). CNAB makes this possible.

Once the cloud provider automatically scales the servers in the cluster for you, it is called serverless. Sometimes this is done using functions you can call (OpenWhisk, Azure Functions and AWS Lambda). These functions are also building blocks. It is also possible to create your own serverless functions and workloads using Knative.
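
As a minimal sketch of such a function-as-building-block, here is an AWS Lambda handler in Java, assuming the aws-lambda-java-core library; the event field (orderId) is hypothetical:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    import java.util.Map;

    /** A serverless building block: one small function, scaled by the cloud provider. */
    public class OrderReceivedHandler implements RequestHandler<Map<String, Object>, String> {

        @Override
        public String handleRequest(Map<String, Object> event, Context context) {
            // Hypothetical event field: the id of an incoming order
            Object orderId = event.get("orderId");
            context.getLogger().log("Processing order " + orderId);
            return "accepted:" + orderId;
        }
    }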

Serverless

Thus, each container is its own service. Through service discovery, new containers are automatically included in a cluster. From the perspective of the chain, it is serverless: hardware resources are allocated automatically. As soon as more data flows through a chain, the containers can automatically scale up.

The modular organization

We discussed some ideas for the future. In summary:

  1. An integration module has a technical or functional role.
  2. Using low-code, these functions are easy to operate.
  3. The different modules can be made cloud-ready by packaging them as containers.
  4. Modules can run and scale in a container cluster (such as Kubernetes).
  5. Through a composer, the modules can be combined like Lego blocks into an application landscape with an integration layer.

The ideal cake

Ideally, there are no barriers between people and information. This ideal picture does not exist, of course, because every situation is different and reality is unruly. Nevertheless, better integration can be achieved through the right mix of these new developments. Every time a new building block is added for further integration, another step has been taken. A step in connecting data. A step in connecting people.

In short:

Time for a piece of cake!
