The Next Generation of DevOps
February 15, 2020
The Promise of SOA, Virtualization, and Deployment Automation
SOA: Standardization and Governance—or Killing Innovation?
In the past, we tried to solve the problem of large, tightly coupled monolithic applications by horizontally splitting them using the following principles:
- Standardized service contracts which allow for loose coupling of services. These also mean that the boundaries of each service are explicit.
- Encouraging service reusability and autonomy. Services should be able to build upon each other, much like Lego blocks.
- Ability to discover services in a self-service manner.
We had good intentions, and we actually made good progress by leveraging Enterprise Service Buses (ESBs) to implement the service-oriented architecture (SOA) paradigm. We were able to physically separate monolithic applications into different layers, with each layer independently deployable. This allowed us to develop each layer independently of the others. Separating the applications into multiple layers also allowed us to scale better – horizontally and vertically.
However, over time, ESB services grew into large monolithic applications as well. Part of this was because of deficiencies in certain ESB products, which had limited out-of-the-box capabilities and hence encouraged a lot of custom coding. It was also partly because implementation teams were too lazy to follow best practices while building these ESB services.
On top of that, SOA encouraged the standardization and reusability of services. This meant we had to go through a painful process of appearing before Architecture Review Boards to make the case for new services, and we had to spend a lot of time debating the pros and cons of adding a new service to the existing catalog. One could make a case in favor of this service governance, and I personally see a lot of benefits to it, but it also stifles innovation. Standardization also means that we have to use a lot of shared libraries, which slows down the runtime environment.
Virtualized Application Platforms
Virtual machines (VMs) solved the huge problem of inconsistent configuration parameters across multiple environments. You could now create a “Golden Virtual Image” and replicate it as many times as you wanted. However, as usual, over time a lot of unique configuration changes crept into each of these images. It was difficult to keep track of these changes, and hence it was impossible to replicate them.
The only way to fix this problem is through immutable systems—an operational model where no configuration changes, patches, or software updates are allowed on production systems. Out-of-date workloads are simply replaced with newer images in an automated, systematic way. This approach gives you higher confidence in the production code, and it is also the safest way to roll back an update without service interruption, since rolling back simply means redeploying the previous image. I will go into more detail about how to achieve this in the next section, where I talk about containers.
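To make this concrete, here is a minimal sketch of what a replace-rather-than-patch rollout could look like, using Python to drive the Docker CLI. The image repository, container name, and port are placeholders; the point is that a deployment (or a rollback) is always just starting a fresh container from a known image.

```python
# Minimal sketch of an immutable rollout: never patch a running container,
# always replace it with a new image. Image/container names are placeholders.
import subprocess

IMAGE = "registry.example.com/myapp"   # hypothetical image repository
CONTAINER = "myapp"                    # hypothetical container name

def deploy(version: str) -> None:
    """Replace the running container with a fresh one built from `version`."""
    # Pull the new, immutable image; the running system is never modified in place.
    subprocess.run(["docker", "pull", f"{IMAGE}:{version}"], check=True)

    # Stop and remove the old workload instead of patching it.
    subprocess.run(["docker", "rm", "-f", CONTAINER], check=False)

    # Start a new container from the new image. Rolling back is just
    # re-running this function with the previous version tag.
    subprocess.run(
        ["docker", "run", "-d", "--name", CONTAINER, "-p", "8080:8080",
         f"{IMAGE}:{version}"],
        check=True,
    )

if __name__ == "__main__":
    deploy("1.4.2")   # example version tag
```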
Deployment Automation
As an integration developer, I used to hate the system integration testing (SIT) phase. As an outcome of SOA, we had different teams for the UI layer, the business logic layer, the ESB layer, and the back-end application or data layer. Each of these teams built their respective components in a silo, and we would try to put everything together during the SIT phase. Invariably, due to misunderstood requirements, things would break down, and guess who got blamed for every single defect…the integration layer, AKA me. I have terrible memories of the nights and weekends I spent on calls trying to fix these problems.
Continuous integration (CI) and test-driven development (TDD) helped solve a lot of these code integration problems. We would run a static code analysis tool, like Sonar, every time a piece of code was checked into the repository. Then we would package the code, deploy the package, and run integration tests–all automated through scripts.
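A rough sketch of that kind of scripted flow is below. The individual commands (sonar-scanner, Maven, the deploy script) are stand-ins for whatever your project actually uses; the structure is what matters.

```python
# Rough sketch of a scripted CI flow: static analysis, package, deploy, test.
# The commands below (sonar-scanner, mvn, deploy.sh) are placeholders for
# whatever your project actually uses.
import subprocess
import sys

STAGES = [
    ["sonar-scanner"],                          # static code analysis
    ["mvn", "package", "-DskipTests"],          # build and package
    ["./deploy.sh", "sit"],                     # deploy to the SIT environment (hypothetical script)
    ["mvn", "verify", "-Pintegration-tests"],   # run integration tests
]

def run_pipeline() -> None:
    for stage in STAGES:
        print(f"--- running: {' '.join(stage)}")
        result = subprocess.run(stage)
        if result.returncode != 0:
            # Fail fast: a broken stage stops the pipeline immediately.
            sys.exit(f"stage failed: {' '.join(stage)}")

if __name__ == "__main__":
    run_pipeline()
```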
However, the operations team in charge of this deployment automation still treated delivering code into production as a manual process. They would wait until the SIT process was completed, and would then manually move code into production. As long as there is a human element involved in this process, we can never guarantee that the same process will be followed every time.
Next Generation DevOps–Combining Containers, Microservices, and Continuous Delivery
Containers
Container technologies are the next stage of evolution after virtual machines. VMs need a lot of system resources, because each VM runs a full copy of an operating system and a virtual copy of all the hardware that the operating system needs to run. Translation: lots of RAM and lots of CPU cycles. Containers, by contrast, all run on a single operating system instance. As a result, a physical server can run several times more containers than virtual machines.
This makes containers much cheaper than virtual machines, while also giving additional advantages regarding the portability and standardization of runtime environments. This also makes containers the go-to solution for cloud platforms.
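If you want to see the “single operating system instance” point for yourself, a quick experiment like the following (assuming Docker and the alpine image are available) shows that every container reports the same kernel as the host, rather than booting its own operating system.

```python
# Quick illustration that containers share the host operating system:
# every container started from the image reports the same kernel version
# as the host. Assumes Docker is installed and the alpine image is available.
import subprocess

def kernel_in_container() -> str:
    out = subprocess.run(
        ["docker", "run", "--rm", "alpine", "uname", "-r"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def kernel_on_host() -> str:
    out = subprocess.run(["uname", "-r"], capture_output=True, text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    print("host kernel:       ", kernel_on_host())
    for i in range(3):
        # Each container is a cheap, isolated process on the same kernel,
        # not a full copy of an operating system.
        print(f"container {i} kernel:", kernel_in_container())
```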
Microservices Architecture
Microservices use a lot of SOA concepts, but especially emphasize the self-sufficiency and independence aspects of services. Each microservice is developed, tested, and deployed independently, and each one runs as a separate process. The microservices architecture pattern attempts to fix the mistakes of SOA implementations—large monolithic ESB applications.
Some of the key design considerations for microservices are:
- They implement true loose coupling. Each microservice must be independently deployable, and as a consequence, they should also be physically separated from each other.
- They perform one piece of functionality and are not reusable across multiple services.
- You should be able to use whatever technology you want to build each one, since they are all independent of each other.
- Once you publish the service interface contracts in advance, you should be able to develop, test, and deploy the microservices in parallel tracks, which considerably cuts down on time-to-market (see the sketch after this list).
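As a bare-bones sketch of what “one piece of functionality behind a published contract” can look like, here is a tiny service written with nothing but the Python standard library. The endpoint and payload shape are made up for illustration; any team that honors the contract can build against it in parallel.

```python
# Bare-bones microservice doing one piece of functionality behind a small,
# published contract (GET /price/<sku> -> JSON). Endpoint and payload are
# illustrative only; standard library, no framework.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PRICES = {"sku-123": 19.99, "sku-456": 5.49}   # stand-in for the service's own data store

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Contract: GET /price/<sku> returns {"sku": ..., "price": ...} or 404.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "price" and parts[1] in PRICES:
            body = json.dumps({"sku": parts[1], "price": PRICES[parts[1]]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Runs as its own process, deployable and replaceable independently
    # of every other service.
    HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()
```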
A microservices architecture makes it much easier to deal with technical debt. New functionality can be developed and deployed alongside the existing application as independently deployable modules, and changes to the existing application are much easier to make when it is designed around such modules.
Continuous Delivery
In my experience, you can extend any continuous integration pipeline into a continuous delivery one. The major difference between the two is the level of confidence people have in the automation of the pipeline. A CI pipeline still needs a number of manual interventions from the operations team (configuration changes and certain tests), all of which can be automated to a very high degree, if not completely.
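As a sketch of what that extension can look like, the snippet below adds the previously manual steps (environment configuration and the production push) as pipeline stages of their own. The script names and environment names are placeholders.

```python
# Sketch of extending a CI flow into continuous delivery: the manual steps
# (environment configuration, the production push) become pipeline stages.
# Script and environment names are placeholders.
import subprocess
import sys

def run(cmd: list) -> None:
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"stage failed: {' '.join(cmd)}")

def deliver(version: str) -> None:
    # 1. Everything the CI pipeline already does.
    run(["./ci.sh", version])                          # hypothetical wrapper around the CI stages

    # 2. Apply environment configuration from version control instead of by hand.
    run(["./apply-config.sh", "production", version])  # hypothetical config script

    # 3. Promote the same immutable artifact that passed the tests.
    run(["./deploy.sh", "production", version])        # hypothetical deploy script

if __name__ == "__main__":
    deliver("1.4.2")   # example version tag
```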
One major factor is mutable versus immutable deployments. I will go into much more detail around them in a future blog post.
Putting them all together–MAGIC!
Continuous delivery, microservices architecture, and containers are most effective when used together within an agile development environment. My team has had a lot of success combining these three paradigms, and it has allowed us to achieve many key customer goals:
- Deliver business capability in an incremental fashion, yet with unmatched time-to-market
- Ability to deal with technical debt relatively quickly
- A self-healing environment that scales effortlessly and has very high reliability
Buyer Beware!
As with any architecture paradigm, discipline during execution and adherence to best practices are key here. Listed below are some design issues that might arise during an implementation involving microservices, containers, and continuous delivery. This is by no means an exhaustive list—rather, they are a few things I have dealt with in the recent past. Hopefully they will help you!
Containers:
Containers come with some inherent issues. Since they all share the same operating system kernel, it is possible that if a user in one container gains super-user privileges, they might be able to break out and hack into other containers on the same host.
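One cheap mitigation is simply knowing which containers run as root in the first place. Here is a small audit sketch using docker ps and docker inspect; it assumes the Docker CLI is available on the host.

```python
# Quick audit sketch: flag running containers whose configured user is root
# (an empty Config.User also means root by default). Assumes the Docker CLI
# is available on the host.
import subprocess

def running_container_ids() -> list:
    out = subprocess.run(["docker", "ps", "-q"], capture_output=True, text=True, check=True)
    return out.stdout.split()

def configured_user(container_id: str) -> str:
    out = subprocess.run(
        ["docker", "inspect", "--format", "{{.Config.User}}", container_id],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    for cid in running_container_ids():
        user = configured_user(cid)
        if user in ("", "root", "0"):
            print(f"WARNING: container {cid} is running as root")
```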
Also, you have to set up a governance process around containers getting deployed in your production environment. Obviously you don’t want to stifle innovation, but you do not want an anything-goes, Wild West situation in your production environment either.
Microservices:
I will make the same point about governance for microservices as well. There needs to be some method to the madness around the technologies being used in your microservices. I would recommend a scoring matrix that allows you to evaluate which technologies are acceptable and ties them to key metrics.
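A scoring matrix does not have to be complicated. Here is a toy version in Python; the criteria, weights, and scores are invented for illustration and should be replaced with your own key metrics.

```python
# Toy version of a technology scoring matrix. The criteria, weights, and
# scores below are invented for illustration; plug in your own metrics.
WEIGHTS = {
    "team_experience": 0.3,
    "operational_maturity": 0.3,
    "community_support": 0.2,
    "licensing_cost": 0.2,
}

CANDIDATES = {
    "Java/Spring Boot": {"team_experience": 5, "operational_maturity": 5,
                         "community_support": 5, "licensing_cost": 4},
    "Node.js":          {"team_experience": 3, "operational_maturity": 4,
                         "community_support": 5, "licensing_cost": 5},
    "Go":               {"team_experience": 2, "operational_maturity": 4,
                         "community_support": 4, "licensing_cost": 5},
}

def score(ratings: dict) -> float:
    # Weighted sum across all criteria.
    return sum(WEIGHTS[criterion] * value for criterion, value in ratings.items())

if __name__ == "__main__":
    for name, ratings in sorted(CANDIDATES.items(), key=lambda kv: score(kv[1]), reverse=True):
        print(f"{name:20s} {score(ratings):.2f}")
```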
Data storage is another issue with microservices. Most organizations have data teams that enforce the usage of a centralized data store. One alternative that I have used is to have one or more containers dedicated to data storage and linked to the service containers that need them. This way, we are not tied down to a centralized database, and we have control over the data storage.
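As an illustration of that setup, here is one way to wire a dedicated data-storage container to a single service container over a private Docker network. The names, image tags, and password are placeholders, and a real setup should use proper secrets handling.

```python
# One way to wire a dedicated data-storage container to a service container
# over a private Docker network. Names, image tags, and the password are
# placeholders; use proper secrets handling in a real setup.
import subprocess

NETWORK = "orders-net"            # hypothetical network name
DB_CONTAINER = "orders-db"        # hypothetical database container
SERVICE_CONTAINER = "orders-api"  # hypothetical service container

def run(cmd: list) -> None:
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Private network so only the service can reach its own data store.
    run(["docker", "network", "create", NETWORK])

    # Data store dedicated to this one microservice.
    run(["docker", "run", "-d", "--name", DB_CONTAINER, "--network", NETWORK,
         "-e", "POSTGRES_PASSWORD=changeme", "postgres:12"])

    # The service reaches its database by container name on the shared network.
    run(["docker", "run", "-d", "--name", SERVICE_CONTAINER, "--network", NETWORK,
         "-e", f"DB_HOST={DB_CONTAINER}", "orders-api:latest"])
```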
Continuous Delivery:
Any errors coming out of the continuous delivery process must take priority over new development. If the development team chooses to continue building new features while the existing errors haven't been fixed, they can end up with a codebase that takes a lot of effort to repair.