“Stateless, stateful, multi-tenant, single-tenant, virtualization, microservices, orchestrators…”

Terms like these are now part of my everyday vocabulary as an architect at KBMax, a CPQ configurator. You might be surprised to learn that they are part of everyone’s life, even if it isn’t obvious. In today’s world, everyone uses a cloud system in some form, but may not be aware of the implications of that service being used daily, like cost, availability, security, and privacy.

“What is Cloud Architecture, and Why Should I Care?”

If you are selecting CPQ software (or any software) for your company, you will want to avoid wasting resources on hidden costs down the road by investing in future-proof software. I have seen several companies convinced that they were getting a cloud application, only to realize later that they had bought ‘virtualized software’ at a very high price and were stuck on that same version of the software forever, with the hidden costs piling up.
Because I’ve been programming for 40 years, I want to tell you the history of cloud architecture from my perspective. You can Google each term mentioned above, but I think you will better understand the ‘why’ behind some of the architectural choices after learning the history behind them. As in any field, ‘discoveries’ are responses to the problems and needs of their time.

History of Distributed Computing
Mainframes
I will be skipping the mainframe as it’s too deep in the history of computing.

Connected Desktops & Servers

It’s better to begin at the start of this century, when desktop computers were pretty common and servers were around, but handled only a few tasks like file sharing, printing, user authentication, and databases. The data was physically ‘owned’, and software and hardware were sold together. For users, software mostly meant a local application running on the desktop. Security was never a real concern, even though everyone was an ‘administrator’, and some users even kept a post-it with their password attached to the monitor. The big issues were software versioning, hardware maintenance, backups, and the underutilization of the servers.

I remember the ‘server’ being the most expensive part of ‘the deal’, and also the most underutilized: it was not rare to log in to a server and find that the most CPU-demanding application was the spinning 3D text of the Windows screen saver. Updating an application was a real pain and needed several technicians to deal with the different problems that popped up. Naturally, maintenance only happened when it was strictly needed.

Web-Based Computing

Then the web started to enter the business world in a meaningful way, as having a website quickly became mandatory. That underutilized server started to be put to work by the most forward-thinking companies. Organizations did not know they were being ‘brave’, but they were, considering the many huge security problems they began to encounter. IT departments began to grow: servers, dedicated internet connections, network peripherals, racks, cables, UPSs, etc…

Server Virtualization

It was time for something new: virtualization. “Instead of having several underutilized servers, what if we created a virtual copy of each server and consolidated them all onto one physical server?” The idea was amazing, and honestly, it is still a great idea today.
From the desktop software perspective, nothing really changed until web applications came into actual business use. A web application is split into two parts: the UI (user interface), built in HTML, CSS, and JavaScript, which executes logic in the client, and the remaining part, which executes on the server side.

This is where a new architectural term emerges: stateful. ‘Stateful’ means that the server is fully aware of the client and of the context of every transaction. Each transaction is performed in the context of previous transactions and can be affected by what happened during them. For that reason, stateful apps use the same server every time they process a request from a given user. Here’s the main issue: because each server can serve only a limited number of clients, scaling is difficult. You can’t simply add a new server. Instead, you have to invent routing logic such as “customers whose names begin with A through G go to server 1, H through O go to server 2…”, and so on.

However, ‘stateful’ also means that this context is stored on the server itself. Sometimes that same server also hosts the database, or it hosts several virtual servers on the same hardware.

As you can imagine, if the server goes down for any reason at all…everything is lost!

The solution to this challenge brings the opposite term to the fore: stateless. A stateless server does not store client context; it only processes the current request. The client can jump between servers, and a ‘load balancer’ assigns each request to a server, spreading the load. If a server goes down, no problem! The load balancer simply stops routing traffic to the offline server. If the load grows beyond the servers’ capacity, it’s only a matter of spinning up additional stateless servers. The opposite also holds: if the number of clients goes down, servers can easily be turned off. This is often referred to as an ‘elastic pool’, and the sketch below shows the basic idea.
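To make the difference concrete, here is a minimal sketch in TypeScript (using Node’s built-in http module) of a stateless request handler. It is an illustration under simplifying assumptions, not KBMax’s actual code: the `x-session-token` header and the unsigned base64 token format are made up for the example, where a real system would use signed tokens (such as JWTs) or a shared external session store.

```typescript
import * as http from "http";

// A minimal stateless handler (illustrative only): all client context
// arrives with the request itself as a simplified, unsigned token.
// Real systems would use signed tokens or shared external storage,
// never this server's own memory.
const server = http.createServer((req, res) => {
  const token = req.headers["x-session-token"];

  if (typeof token !== "string") {
    res.writeHead(401);
    res.end("No session token: please authenticate");
    return;
  }

  try {
    // Decode the client context from the token; nothing is kept in this
    // server's memory, so any node behind the load balancer can answer
    // any request, and nodes can be added or removed freely.
    const session = JSON.parse(Buffer.from(token, "base64").toString("utf8"));
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end(`Hello ${session.userName}, served by a stateless node`);
  } catch {
    res.writeHead(400);
    res.end("Malformed session token");
  }
});

server.listen(8080);
```

Because the handler keeps nothing between requests, the elastic pool can grow or shrink freely: any copy of this server can be started or stopped without losing anyone’s session.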

From a developer’s perspective, this moment brought a huge shift. We had to move away from the desktop, where everything was a single application with all the resources and software components bundled together. All the skills and all the problems lived in that ‘singularity’. The shift also brought a new set of problems: servers, connections, front-end logic, back-end logic, new languages, and data abstraction. Many developers tried to adapt their existing knowledge to this new world instead of starting from scratch and specializing in specific areas (some people are still stuck today in the singularity of the desktop application). You still regularly come across software that you can tell was a “port” of a desktop application to a cloud infrastructure, especially when you are forced to deal with ‘installations’, ‘files’, and ‘versions.’

Cloud Applications

We’re now in 2011: Occupy Wall Street is in full swing, and many ‘cloud applications’ are being developed for a ‘cloud operating system’. It’s important to note that the software was still executing in many virtual machines. The architecture was still quite ‘monolithic’, meaning that every stateless server ran a complete copy of the software. The most optimization you could hope for was a split into a web component and ‘worker’ components, where the ‘worker’ was the execution piece handling things like creating documents, compressing files, computing algorithms, and long-running tasks.

This architecture was not efficient from a CPU-utilization perspective and resulted in overloaded or underused servers. It was not efficient for developers either: to update even a small part of the software, you had to release the whole application to every server, resulting in downtime. A monolithic architecture is much easier to develop, test, and deploy…but hard to scale.

Here again, we were presented with another challenge that pushed the cloud forward: “How can we break down such a monolithic application into many smaller pieces?”

Microservices, Orchestrators, and Containers!

The answer to monolithic cloud applications is pairing orchestration with microservices. Imagine splitting a single application into self-contained pieces of business functionality, following the UNIX philosophy: “Do one thing and do it well.” Once you have split the application into these pieces, you can call them ‘microservices’. Picture all these microservices as pieces in a Tetris game, fitted onto the virtual servers in a way that better utilizes CPU, memory, network, and storage resources.

The ‘orchestrator’ is the one ‘playing Tetris’, and the VMs are called ‘nodes’ (the boards in our Tetris game). If one node goes down, the orchestrator can create new nodes or move its services to one or more other nodes. If you need to update a microservice, the orchestrator can keep the old version alive, deploy the new version, and only then stop the old one. The sketch after this paragraph illustrates the idea.
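To give a feel for what ‘playing Tetris’ means in code, here is a deliberately simplified TypeScript sketch of an orchestrator’s reconciliation loop: it compares the desired number of replicas of each microservice with what is actually running on healthy nodes, then starts or stops instances to close the gap. Real orchestrators such as Kubernetes do this with far more sophistication; the names and structures here are purely illustrative.

```typescript
// A toy model of an orchestrator's reconciliation loop (illustrative only).
interface ServiceSpec {
  name: string;
  desiredReplicas: number;
}

interface Node {
  id: string;
  healthy: boolean;
  running: string[]; // names of microservice instances on this node
}

function reconcile(specs: ServiceSpec[], nodes: Node[]): void {
  const healthyNodes = nodes.filter((n) => n.healthy);
  if (healthyNodes.length === 0) return; // nothing to schedule onto

  for (const spec of specs) {
    // Count instances of this microservice currently running on healthy nodes.
    const actual = healthyNodes.reduce(
      (sum, n) => sum + n.running.filter((s) => s === spec.name).length,
      0
    );

    if (actual < spec.desiredReplicas) {
      // Too few: place new instances on the least-loaded healthy node.
      for (let i = actual; i < spec.desiredReplicas; i++) {
        const target = [...healthyNodes].sort(
          (a, b) => a.running.length - b.running.length
        )[0];
        target.running.push(spec.name);
        console.log(`Started ${spec.name} on ${target.id}`);
      }
    } else if (actual > spec.desiredReplicas) {
      // Too many (e.g. after scaling down): stop the surplus instances.
      let surplus = actual - spec.desiredReplicas;
      for (const node of healthyNodes) {
        while (surplus > 0 && node.running.includes(spec.name)) {
          node.running.splice(node.running.indexOf(spec.name), 1);
          surplus--;
          console.log(`Stopped ${spec.name} on ${node.id}`);
        }
      }
    }
  }
}
```

Run this loop continuously and you get the behavior described above: a dead node simply stops counting as ‘healthy’, and its microservices are recreated elsewhere on the next pass.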

As you can imagine, an orchestrator with microservices is a very flexible and robust architecture, but it has one huge problem…someone needs to manage and maintain the orchestrator!

This then made us reimagine the ‘virtual machine’. A virtual machine is a logical server that carries the entire software stack: drivers, operating system, and applications. A physical machine can host multiple VMs, even for different customers, as they are totally isolated from one another. This approach is very powerful but not very efficient: the OS uses a lot of resources just to ‘exist’, and every single VM needs maintenance such as upgrades, security patches, and configuration.

A container isolates the application while sharing the OS with all the other containers. Instead of virtualizing the hardware, as a VM does, a container virtualizes the OS.

OK, back to the architecture. You may have already guessed that the perfect candidate to sit inside a container is a microservice. With microservices kept separate, you can host multiple instances of ‘containerized microservices’ within the same orchestrator.

Serverless

We are at the final step of our history lesson, so let’s look at the last term: serverless. If the orchestrator is managed by a cloud provider, developing an application like KBMax becomes a matter of developing and deploying containerized microservices. The microservices can be scaled, moved, restarted, and upgraded automatically, without wasting any company resources. But how is this application served to each customer?

The first option, single-tenancy, is the most obvious for an ‘on-premise’ mindset since it involves a dedicated application plus dedicated storage and database: a cloud application with one instance of everything for each customer. This approach has pros and cons. Each customer gets its own upgrade path, backups, and control; however, it quickly becomes a resource drain, and the client can be under the false impression that they control releases and security.
The opposite option is multi-tenancy: there is only one application and one storage/database. All customers use the same application, and their data is kept together in the same storage/DB. This option is very efficient, and software releases are usually more frequent and less intrusive. But there is a catch: if your company is particularly attentive to security, you don’t want your data stored alongside other companies’ data. KBMax configurator software solves this problem with hybrid tenancy: one single application, but a dedicated storage/DB for each customer, as sketched below.
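As an illustration only (not KBMax’s actual implementation), here is a minimal TypeScript sketch of the hybrid-tenancy idea: one shared application resolves each incoming request to a tenant and routes it to that tenant’s dedicated database. The tenant names, hostnames, and connection strings are all made up for the example.

```typescript
// Illustrative hybrid-tenancy routing: one shared application,
// one dedicated database per customer. Connection strings are placeholders.
const tenantDatabases: Record<string, string> = {
  "acme-corp": "postgres://db-acme.internal:5432/acme",
  "globex": "postgres://db-globex.internal:5432/globex",
};

// Resolve the tenant from the request's hostname
// (e.g. acme-corp.example-cpq.com) and return its dedicated DB.
function databaseFor(hostname: string): string {
  const tenant = hostname.split(".")[0];
  const connectionString = tenantDatabases[tenant];
  if (!connectionString) {
    throw new Error(`Unknown tenant: ${tenant}`);
  }
  // The application code is identical for every customer;
  // only the storage it talks to is per-tenant.
  return connectionString;
}

console.log(databaseFor("acme-corp.example-cpq.com"));
```

The point of the sketch is the separation of concerns: releases and scaling stay as easy as in multi-tenancy, while each customer’s data lives in its own store.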

Choosing a Real CPQ Software Cloud

Hopefully, this deep dive through cloud computing history helps you understand more about the ins and outs of cloud architecture. We love to share this knowledge with other companies so that they can learn how to spot applications that don’t follow the latest approaches to optimizing speed, security, and access.

The KBMax 3D product configurator software is built on the latest cloud architectures and development best practices, ensuring top performance and security for our CPQ cloud customers. We often come across customers who were sold a ‘fake cloud’ by another CPQ vendor, only to realize the grift once it was too late. Look for a future article, where we’ll discuss the differences between types of cloud infrastructure (SaaS, IaaS, PaaS, etc.) and how they can fundamentally change the customer experience and total cost of ownership. Because, yeah, not all ‘clouds’ are the same.