The evolution of computer networking has gone through major phases over the past decades. New technology, innovation, and growing importance, but also new delivery and consumption models, were key drivers in this evolution. In the early days of IT, computers were hardly connected. Later, we realized that exchanging data over networks was far more effective than using offline media like paper, floppy disks, and tapes, and we started to use local area networks. Most of us will remember the days of token ring networks. These networks quickly extended across multiple buildings, branch offices, and so on. Then centralization of data became important, and in-house data centers were introduced: data and applications hosted on servers on the other side of the office wall. Sitting next to the data had clear benefits: latency was relatively low, bandwidth was cheap, and capacity was sufficient. Connectivity to the Internet and to partners, on the other hand, was very limited. Sharing a 64 kbit/s connection with hundreds of employees was pretty common.
In the next stage of the evolution, data centers were outsourced to external companies. Servers were moved from the in-house data center to dedicated data centers, providing much more flexibility when it comes to connectivity, power, and cooling. The first limitations became visible: there was simply not enough bandwidth available (or it was too expensive), and latency was far too high to separate applications from their data. So companies switched to virtual desktops, restoring the proximity between data and applications. This was also considered a secure architecture, since the border of the data center was guarded by firewalls.
In recent years, public cloud has made impressive inroads into IT architecture. It solved a lot of problems: no more capacity guessing, broad connectivity, low up-front investment, and the ability to focus on the core business. At the same time, the complexity of the overall landscape increased tremendously. Looking at today's landscape, the range of different cloud services used in an architecture is growing rapidly. Try to make a list of all the services used by your company and you will probably lose count. Imagine what this means for the (network) architecture. Applications are now built on a mix of microservices (containers, serverless, PaaS services, etc.) that are not always hosted in close proximity. For example, your identity provider is most likely not hosted in the same environment as the microservices that run the application. Connectivity between users and applications, or even between microservices within the same application, increasingly relies on the Internet.
This introduces a number of challenges, especially around performance, security, and manageability of the platform. Taking security as an example: a zero-trust networking architecture requires a completely different approach than hiding your crown jewels behind a 'secure' firewall, especially now that a lot of people are working from home and we use multiple devices (laptops, tablets, mobile phones) to connect to our applications. A less visible trend is that ever more devices are getting connected to your environment; think of cars, lighting, and printers, for example. The influence of broadly available connectivity is visible everywhere, but at times it is a big challenge for networking experts.
To make these challenges more tangible, to address the changing network and application landscape in the cloud, and to create a scalable environment, a strong architectural approach is required. The cloud connectivity problem can be divided into three layers.
Regardless of the solution you choose, it is important to approach the connectivity problem based on these three layers.
The Application layer describes the elements of the application landscape, such as PaaS and SaaS services or workloads in IaaS environments. A good example is a "spoke" VPC or VNet where your instances run.
The Access layer provides all connectivity to and from the cloud. Whether it is Internet ingress/egress, private connectivity through Direct Connect or ExpressRoute, VPN, or SD-WAN integration, the access layer should address all these connectivity requirements.
That leaves the Transit layer. This is the layer that interconnects all access and application elements, and where the insertion of security controls such as a next-generation firewall (NGFW) should take place.
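The three layers can be sketched as a simple data model. This is purely illustrative: the resource names are hypothetical examples, not Aviatrix terminology, and it only shows why every application-to-access path naturally crosses the transit layer.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative sketch of the three-layer model; element names are hypothetical.
@dataclass
class Layer:
    name: str
    elements: List[str] = field(default_factory=list)

application = Layer("application", ["spoke-vpc-prod", "spoke-vnet-analytics"])
access = Layer("access", ["internet-egress", "direct-connect", "sdwan-branch"])

def transit_paths(app: Layer, acc: Layer) -> List[Tuple[str, str, str]]:
    # Every path between an application element and an access element
    # crosses the transit layer -- the natural insertion point for an NGFW.
    return [(a, "transit", b) for a in app.elements for b in acc.elements]

paths = transit_paths(application, access)
```

Because all traffic funnels through the transit layer, a single inspection point covers every combination of spokes and access paths.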
Because of the speed of change within your cloud network environment, day-2 operations are becoming more challenging. Workloads that were initially deployed in isolation in their own environment now often need to connect to other environments within the cloud, on the Internet, or in on-premises data centers. This not only poses security challenges, but also makes connectivity a more dynamic problem, one that is hard to automate your way out of by yourself. A controller-centric (SDN) connectivity solution that can adjust the network to these dynamic changes creates a much more scalable architecture in which network correctness can be guaranteed. This is the approach Aviatrix takes, building on native cloud constructs.
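The core of a controller-centric approach is desired-state reconciliation: the controller compares the connectivity you intend with what is actually deployed and computes the changes needed to converge. The sketch below is a generic illustration of that idea, not Aviatrix's actual implementation; prefixes and next-hop names are made up.

```python
# Minimal desired-state reconciliation, the idea behind a controller-centric
# (SDN) network: compare intent with reality and emit the delta.
def reconcile(desired: dict, actual: dict) -> dict:
    """Return route entries to add and remove so `actual` converges to
    `desired`. Keys are destination prefixes, values are next hops."""
    to_add = {p: nh for p, nh in desired.items() if actual.get(p) != nh}
    to_remove = {p: nh for p, nh in actual.items() if p not in desired}
    return {"add": to_add, "remove": to_remove}

# Hypothetical example: one route is missing, one is stale.
desired = {"10.1.0.0/16": "transit-gw", "10.2.0.0/16": "transit-gw"}
actual = {"10.1.0.0/16": "transit-gw", "10.9.0.0/16": "old-peer"}
plan = reconcile(desired, actual)
```

Running this loop continuously is what lets a controller keep the network correct as workloads appear, move, and disappear, rather than relying on hand-maintained route tables.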
The transition from on-premises to cloud also means a transition of security controls and requirements. Although cloud platforms provide capabilities to address these, they are not always sufficient for the complex requirements of the business.
However, implementing third-party controls in an IaaS or PaaS environment raises a lot of challenges. For example, deploying a next-generation firewall, or simply trying to filter egress traffic from cloud workloads to the Internet, can be complicated. Aviatrix simplifies the implementation of security controls through automation, orchestration, and service insertion, and adds additional capabilities. On top of this, Aviatrix provides tremendous visibility into network topology and traffic through its CoPilot solution.
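To make the egress-filtering example concrete, here is a minimal allow-list check of the kind such a control enforces for outbound traffic. This is a sketch of the policy logic only, not a product feature, and the domains are hypothetical examples.

```python
# Minimal FQDN allow-list, the kind of policy an egress-filtering control
# applies to traffic from cloud workloads. Domains are hypothetical examples.
ALLOWED_EGRESS = {"api.github.com", "pypi.org"}
ALLOWED_SUFFIXES = (".ubuntu.com",)  # allow any host under ubuntu.com

def egress_allowed(fqdn: str) -> bool:
    # Normalize: lowercase and strip a trailing root dot before matching.
    fqdn = fqdn.lower().rstrip(".")
    return fqdn in ALLOWED_EGRESS or fqdn.endswith(ALLOWED_SUFFIXES)
```

Even a simple default-deny list like this is awkward to enforce with native cloud constructs alone, which is why egress filtering is a common driver for inserting a dedicated control in the transit layer.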
The challenges described above become even bigger with the transition to multi-cloud, a transition we see happening in many organizations, driven by the consumption of specific PaaS services or by IaaS platform diversification. Here too, security controls should be simplified through automation, orchestration, service insertion, and additional capabilities, ideally with deep visibility into network topology and traffic.