8. Through the fog of the cloud

Types of Cloud Integration

Raymond Meester
9 min read · Apr 23, 2024

All of The Bakery Group’s recipes are securely stored in the cloud. But what does this mean? Cloud is a somewhat vague term. Isn’t it synonymous these days with cloud providers, such as AWS or Azure? But surely there is also a private cloud? Or is it more about the technology, like Kubernetes and Docker containers? But I can run those locally, right? Is it then about hosting? But hosting of what? VMs, containers, Lambdas or SaaS applications?

In short, when you look through the fog of the cloud, you see a rainbow palette of colors blending together. In this chapter we discuss the various colors that each represent a different type of cloud integration. Flying above the cloud for an overview can’t hurt. Basically, cloud integration can mean three things:

1. integration with the cloud

2. integration in the cloud

3. cloud native integration

In this chapter, we discuss all three types of cloud integration. What do they entail organizationally and technically?

Integration with the cloud

Both cloud vendors and cloud specialists sometimes think only from the cloud in which they operate. For example, only from Azure or AWS. This is not the reality of integration.

Suppose The Bakery Group still has a number of servers and databases running on premise. The organization is also modernizing its application landscape and is using Amazon EKS (Elastic Kubernetes Service) to do so. It also purchases a number of SaaS services hosted in Azure.

Different applications and services need data from each other. If you stay within a specific domain, this is relatively easy. Domain here refers to a cloud provider or on premise. If you are going to exchange data between domains, you will have to go across the Internet.

This presents a number of challenges for integration that have little to do with exchanging data, but more with how to bring the domains together. Think of all the networking and security layers that sit between a cloud application and an on premise application.

Networking, security and protocols

Meanwhile, The Bakery Group already has a number of modern services running in Amazon AWS. The data is available through an API that needs to be encrypted (HTTPS) and also requires authentication, for example via OAuth2. So where traditionally more attention has been paid to the data and protocols, cloud integration pays extra attention to security.
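To make this concrete, here is a minimal sketch of calling such a secured API. The URL and token are hypothetical placeholders; a real setup would first obtain the access token from the provider’s OAuth2 token endpoint.

```python
import urllib.request

# Hypothetical endpoint and token, purely for illustration.
API_URL = "https://api.example.com/recipes"
ACCESS_TOKEN = "my-access-token"  # placeholder for a real OAuth2 bearer token

def build_request(url: str, token: str) -> urllib.request.Request:
    """Build an HTTPS request that authenticates with an OAuth2 bearer token."""
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
    )

req = build_request(API_URL, ACCESS_TOKEN)
print(req.get_header("Authorization"))  # Bearer my-access-token
```

The point is that the transport (HTTPS) and the credentials travel with every call; the integration logic itself is only a small part of the work.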

In addition, on premise and cloud are often completely different worlds so the data formats and protocols are very different. You are also more dependent on each other’s networks. So security, network (failures) and different protocols must be taken into account. Integration with the cloud and between different clouds therefore takes considerably more time than integration between two applications on premise.

One possibility is to arrange various things at the network level, such as firewalls, network trusts, whitelists and VPN Tunnels. So here, integration depends heavily on work done by a network specialist and the stability of those solutions.

As an alternative to working with APIs, a broker can be used. Either through a technological solution (for example, a message broker, such as Azure Service Bus), or through a value-added network (VAN), a separate EDI organization that handles this. With a message broker, however, clients must support the protocol used. Of course, there are additional costs associated with a VAN.

Finally, a message gateway can also be deployed. In this case, you put a piece of tooling on an on premise server and connect it to APIs or a broker in the cloud. The message gateway is a bridge between different protocols and networks.
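The gateway pattern can be reduced to a simple loop: fetch from one side, publish to the other. The sketch below uses in-memory stand-ins for the protocol-specific adapters (in practice these would read from an on-premise queue and POST to a cloud API); all names are illustrative.

```python
from typing import Callable, Iterable

def run_gateway(fetch_local: Callable[[], Iterable[str]],
                publish_cloud: Callable[[str], None]) -> int:
    """Forward every locally fetched message to the cloud side.

    fetch_local and publish_cloud are stand-ins for protocol-specific
    adapters, so the bridge itself stays independent of both networks.
    """
    count = 0
    for message in fetch_local():
        publish_cloud(message)
        count += 1
    return count

# Usage with in-memory stand-ins:
outbox = []
sent = run_gateway(lambda: ["order-1", "order-2"], outbox.append)
print(sent, outbox)  # 2 ['order-1', 'order-2']
```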

In summary, integration between on premise and the cloud and between cloud providers presents challenges for which many different solutions are possible.

Integration in the cloud

So while integration with and between cloud providers takes more effort, integration in the cloud is actually easier, and there are more benefits to be gained.

For a moment, imagine another organization. In this case, we assume the startup Chattie, a company that develops chatbots with artificial intelligence. Virtually all of this organization’s IT runs with a microservices architecture in the Google Cloud. So what makes integration seamless? Here are the key technical components of a microservices environment:

Service API: The basis for a microservice is the API. The idea around a microservice is that not the entire application is developed, but a small part around a so-called business capability. This capability is integrated via a well-defined REST API (the intelligence is in the endpoints). Such a microservice usually lives in a container that can be deployed independently.
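As a sketch of such a service API, here is a minimal REST endpoint around one (hypothetical) business capability, “list recipes,” using only the Python standard library. A real microservice would typically use a web framework, but the shape is the same: one well-defined endpoint per capability.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative in-memory data for the "list recipes" capability.
RECIPES = [{"id": 1, "name": "sourdough"}]

class RecipeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/recipes":
            body = json.dumps(RECIPES).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # The service lives in its own process (or container) and can be
    # deployed independently of any other service.
    HTTPServer(("0.0.0.0", 8080), RecipeHandler).serve_forever()
```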

Service Management: Soon the startup creates many microservice containers, all of which need to be managed. The standard for doing this is Kubernetes. This platform, originally developed at Google, provides orchestration through configuration and management of containers.

The main tasks of Kubernetes are configuring pods (with one or more containers), starting and scaling pods, and managing pods in a cluster. Kubernetes also supports the configuration of load balancers, port forwarding, firewalls and DNS to access services in a cluster. In addition, there are tools for monitoring, logging and debugging containers. Finally, a service catalog can provide an overview of all services available on the platform.
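Such configuration is typically written declaratively. The sketch below builds a Kubernetes Deployment manifest as a plain data structure so the key fields are visible; the service name and image are hypothetical, and in practice this would be a YAML file applied with `kubectl`.

```python
import json

# Sketch of the kind of declarative configuration Kubernetes manages:
# a Deployment that runs and scales pods for a hypothetical chatbot API.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "chat-api"},
    "spec": {
        "replicas": 3,  # Kubernetes keeps three pods running
        "selector": {"matchLabels": {"app": "chat-api"}},
        "template": {
            "metadata": {"labels": {"app": "chat-api"}},
            "spec": {
                "containers": [{
                    "name": "chat-api",
                    "image": "registry.example.com/chat-api:1.0",  # illustrative
                    "ports": [{"containerPort": 8080}],
                }],
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```

Kubernetes continuously reconciles the cluster toward this desired state: if a pod dies, a new one is started so that three replicas keep running.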


Service Discovery: If an application landscape and the applications themselves are divided into all sorts of small services, how do they find each other? This is particularly an issue as containers are dynamically loaded and scaled. The solution is service discovery.

Service discovery forms a bridge between the name of a service and the IP address assigned by Kubernetes. By the way, most service discovery solutions can do more than just point a system to the correct addresses. Service discovery tools such as Consul and etcd also offer a health check. If the service is not healthy or unresponsive, service discovery will stop returning the address and thus fail over to one of the other service nodes.
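The pattern can be illustrated with a toy registry: resolve a service name to the address of a healthy instance, skipping nodes that failed their health check. The service name and addresses below are made up; real tools like Consul do this over an HTTP/DNS interface with active health checking.

```python
import random

# Toy in-memory registry; real discovery tools maintain this state
# dynamically as containers start, stop and fail health checks.
registry = {
    "recipe-service": [
        {"address": "10.0.0.4:8080", "healthy": True},
        {"address": "10.0.0.5:8080", "healthy": False},  # failed health check
        {"address": "10.0.0.6:8080", "healthy": True},
    ]
}

def resolve(service: str) -> str:
    """Return the address of one healthy instance of the service."""
    healthy = [i["address"] for i in registry[service] if i["healthy"]]
    if not healthy:
        raise LookupError(f"no healthy instances of {service}")
    return random.choice(healthy)  # naive load balancing across healthy nodes

print(resolve("recipe-service"))
```

Because callers only ever ask for a name, instances can come and go without any client reconfiguration.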

Service Mesh: Finally, a service mesh is increasingly being deployed. In particular, it is this functionality that facilitates integration. With a service mesh, certain functionality no longer needs to be built into the code of the service, but each service gets a “sidecar.” This is an extension that takes care of routing, retries, encryption, authorization, certificates, timeouts, etc. So it is a specific communication layer that handles communication between services. See also: https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh
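What a sidecar takes off the service’s hands can be illustrated with one of its duties, retries. The sketch below wraps an outbound call with a retry-and-backoff policy; in a real mesh the sidecar proxy applies this transparently to network traffic, so none of this code lives in the service itself.

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(call: Callable[[], T], attempts: int = 3,
                 backoff: float = 0.1) -> T:
    """Retry a flaky call, sleeping with linear backoff between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(backoff * attempt)

# Usage: a simulated downstream call that succeeds on the second attempt.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 2:
        raise ConnectionError("transient network failure")
    return "ok"

print(with_retries(flaky))  # ok
```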

Thus, although setting up a microservices environment in Kubernetes is relatively complex, it offers strong integration.

Cloud native integration

The last category is cloud native integration. The addition “native” does not necessarily mean that the software could not run on premise, but that it is made with the cloud in mind.

For cloud native integration software, an open source component is often taken as the basis, such as:

  1. Tools (e.g. Apache Kafka, Kong API Gateway)
  2. Frameworks (e.g. Spring Integration, Apache Camel or Zato)
  3. Programming languages (e.g. .NET Core, Java or Python)

These are being made available as a cloud service. The whole configuration, installation, container management, CI/CD, versioning and so on is taken care of by the software vendor. This can be done by Microsoft (Azure), Amazon (AWS) or Google (Google Cloud), for example, or by a third party. Consider Dell Boomi offered as a cloud service.

So a microservice is primarily an architectural concept with a technical implementation, but a cloud service (X as a Service), on the other hand, is a product/service. Let’s examine for each technology type how it is offered as a cloud service.


First of all, tools. This is often software that you can also install on premise. An example is Apache Kafka. When offered as a cloud service, these are often called “Managed Tool X” — in our example, “Managed Kafka.”

Sometimes they are also given a slightly modified name, such as AmazonMQ (Managed ActiveMQ on AWS) or Amazon MSK (Amazon Managed Streaming for Apache Kafka). In this case, Amazon handles installation, container management and scalability. By the way, it does not mean that Amazon necessarily participates actively in the development of the open source project.

In some cases, these tools are developed in-house, such as Azure Service Bus by Microsoft or Amazon SQS by Amazon. Third-party providers may also offer them, such as cloud offerings from Mule, Tibco or Red Hat.

Usually these tools are not really low-code, but they do offer different ways to configure. With AmazonMQ, for example, this involves modifying the XML that normally resides on the server.


The merging of different tools and frameworks with a low-code interface is usually marketed as iPaaS (Integration Platform as a Service). Usually, iPaaS is the cloud variant of what was previously called an ESB. An iPaaS, such as integrationmadeeasy.com, is therefore used for the same kinds of purposes, such as exchanging data.

If an organization is already primarily using APIs, Microservices and SaaS applications, iPaaS is a good solution for getting them integrated with each other via low-code.

Before the advent of cloud computing, integration could be categorized as internal integration or business-to-business (B2B) integration. Internal integration needs were fulfilled through an on-premise middleware platform and usually used a service bus (ESB) to manage the exchange of data between systems. B2B integration was done through EDI gateways or a value-added network (VAN).

The advent of SaaS applications created a new kind of demand that was met through cloud-based integration. Since their emergence, many of these services have also developed the ability to integrate legacy or on-premise applications, as well as act as EDI gateways.

Low-code is often a core component of such cloud integration platforms, also to differentiate themselves from traditional ESB and VANs. Users of these platforms are called Citizen Integrators because they are less concerned with code and more concerned with business processes.

User interface of SnapLogic

Cloud Functions

It is also possible to write cloud integrations directly in code using, for example, AWS Lambda or Azure Functions. Basically, these are small pieces of code that are executed in the cloud. Instead of a microservice that is usually written on a laptop and packed into a container, the code is written and executed directly online. The cloud platform then controls the entire infrastructure in the background.

Suppose data needs to be filtered. In this case, a service can put that on a queue. Let’s assume a queue on Amazon SQS. A cloud function written in Python retrieves messages from the queue and returns them filtered.
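A sketch of such a function might look as follows. The handler receives a batch of SQS messages in the standard Lambda event shape and keeps only the ones that pass a filter; the filter rule (amount of at least 100) and the field names are purely illustrative.

```python
import json

def handler(event, context=None):
    """AWS Lambda-style handler: filter a batch of SQS messages."""
    kept = []
    for record in event["Records"]:          # standard SQS event shape
        message = json.loads(record["body"])
        if message.get("amount", 0) >= 100:  # hypothetical filter rule
            kept.append(message)
    return kept

# Local usage with a fake SQS event:
event = {"Records": [
    {"body": json.dumps({"id": 1, "amount": 250})},
    {"body": json.dumps({"id": 2, "amount": 30})},
]}
print(handler(event))  # [{'id': 1, 'amount': 250}]
```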

The advantage, of course, is that it is serverless (there are no worries about underlying infrastructure), but scalability is also important. Here is an example of how this roughly works:

In addition to the cloud function that filters messages, a check runs that looks to see whether the queue is empty. Suppose there are a few million messages on the queue. It would take a long time for one running cloud function to filter all the messages. AWS can therefore automatically scale up the number of instances. It checks every 5 seconds to see if the queue is empty; if not, it adds several instances. At any given time, there can be as many as a thousand instances picking up messages. Once the queue is empty, it automatically scales back.