NxNW Tech Meetup: Kubernetes + Serverless by utilizing OpenShift

Florian Moss
9 min read · Sep 4, 2020

I gave a presentation on September 3rd, 2020 at the NxNW Tech Meetup that covered Kubernetes and Serverless, and how everyone, even with no prior knowledge of Kubernetes, can use these technologies through OpenShift 🤯.

The demo is meant to cover every single persona, from total beginner 🤷‍♂️ to CKA/CKAD-certified developer 👩‍🔬.

My belief is that technology is always at its best when it is accessible to as many people as possible, while also enabling power users to reach their full potential. Let’s take virtualization as an example: everyone can run a VM, because it is easy to do. At the same time, you can scale it out and run an entire enterprise on a virtualized environment. Deploying an application with Kubernetes and making it serverless should, in my opinion, be just as easy.

If you read until the end, even if you have never heard about containers, Kubernetes or Serverless, I promise you two things:

1️⃣ You will be able to deploy an application with Kubernetes.

2️⃣ You will be able to deploy a Serverless application by utilizing kNative.

Containerization

⚠️ ️If you are already familiar with the concept of containers, please go ahead and skip this section.

There is one thing I want you to take away from this: Containers are nothing but Linux processes❗️ That’s really it, that’s all you need to know. Ok, there is a bit more to it, but even if that’s all you knew, you’d be able to follow along. But let me give you some more background on this.

Because containers are just Linux processes, they can be managed separately from the underlying host. This means an application becomes a self-contained entity. As a result, you no longer need to worry about managing the host; you can focus on simply managing the container.
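
You don’t have to take my word for it. If you happen to have podman or docker installed, a two-line sketch shows a container being nothing but a process on the host (the nginx image is just an example):

```
# Start a container in the background (podman and docker share this syntax):
podman run -d --name web docker.io/library/nginx

# On the host, the container's nginx workers show up as ordinary Linux processes:
ps aux | grep nginx
```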

Virtualization versus Containers

The graphic above makes this even clearer. The left diagram shows the architecture that you are used to: multiple operating systems that sit on top of a hypervisor, with each OS hosting a single application. Just think about how wasteful that is: you need to run all those processes that come with an OS, simply to run a basic application?! As if that wasn’t enough, thanks to AWS, Azure and GCP, you also have to pay for this waste. 24/7, 365 days a year. It doesn’t matter how much you are actually using the server; as long as you run it, you’re paying for it. The more you scale this out horizontally, the more waste you accumulate. This is clearly not a very dense use of our resources.

Containers, on the other hand, simply require a runtime on top of an OS. From that point on, you are able to run as many containers as you like, as long as there are resources available. This means that containers allow for much denser environments, which in turn leads to huge savings on your machine footprint. And if you’re thinking ecologically, on your carbon footprint too!

It is good for your wallet, and for the environment. Fantastic.

Think about it this way: a virtualized environment is like sending off a container ship loaded with a single container. Containerization means sending the same container ship off loaded with a full freight, ready to maximise our return on investment.

With OpenShift, you don’t even need to know how to build these containers — you will see why further down.

The question that arises from this, of course, is: How do we manage these containers? Good question, the answer is:

Kubernetes

Kubernetes is simply a tool that helps manage containers. That’s it. If you knew nothing about containers and Kubernetes, please remember this one fact.

In fact, Kubernetes is anything but simple when it comes to using it. That’s the reason why so many companies shy away from its implementation. So all we need is a tool that encapsulates the complexity and makes it accessible to a wider audience, right? You guessed right: the answer is, once again, OpenShift. We are close to some screenshots, don’t worry!

If there is one more thing I’d like you to remember, it’s that in Kubernetes we don’t manage single containers, we manage pods. A pod is usually made up of just one container, but it can also include two or more. For simplicity you can assume that application = container = pod. A minimal example follows the image below.

A pod can be made up of one or more containers
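
If you are curious what such a pod looks like on paper, here is a minimal, purely illustrative two-container sketch (names and images are made up):

```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod
spec:
  containers:
  - name: app          # the main application container
    image: docker.io/library/nginx
  - name: sidecar      # a helper sharing the pod's network and lifecycle
    image: docker.io/library/busybox
    command: ["sh", "-c", "while true; do sleep 3600; done"]
EOF
```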

I could of course start going down the rabbit hole now, go in depth, and bore everyone to death by explaining ReplicationControllers, Deployments, ConfigMaps, Secrets and so on. There are plenty of other resources where you can learn about all of these; trust me, you don’t need to know about any of this to deploy a serverless application with OpenShift.

Just one more remark: if you wanted to use Kubernetes as it is available on GitHub, you would need to understand what all of those complex terms mean, because Kubernetes is simply a CLI tool. There is no graphical interface, no image registry, no monitoring; heck, not even a container runtime comes with it by default. Wanna see what I mean?

Decisions to make for default Kubernetes

If you want to deploy a default Kubernetes cluster you have to go ahead and start picking from the categories that you are seeing above. I am paid to know what the Kubernetes landscape looks like and even I’m overwhelmed by having to do that.
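
And even once you have picked your stack, the day-to-day interface is the kubectl CLI plus YAML you write yourself. A small taste, with a purely illustrative image name:

```
# Create a deployment from an image and expose it inside the cluster:
kubectl create deployment node-hello --image=quay.io/example/node-hello
kubectl expose deployment node-hello --port=8080
```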

This is clearly a non-starter for everyone who is trying to ‘just get started’.

Enough words now, let’s look at some screenshots.

Assuming that you have an OpenShift 4 cluster running, or OKD 4, simply create a new project and enter the Developer space.
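
If you prefer the terminal, creating the project is a one-liner (the project name is just an example):

```
oc new-project nxnw-demo
```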

You will see the following:

Default screen in the Developer menu

From here, you could choose to deploy a pod based on a catalog, a Dockerfile or a container image. But since we assume that you have no clue how to use any of these options, because you have never worked with containers before, you can simply choose ‘From Git’.

The repository we are going to use hosts a simple Node.js application that returns some text. You can see in the image below that no special configuration files, such as Dockerfiles, are needed to make any of this work.

https://github.com/florianmoss/node-hello.git

Simply copy the link to the repository. In OpenShift, paste the link as seen below in the ‘Git Repo URL’ field and select NodeJS as the Builder image. Then select ‘Create’.

Congratulations, you just deployed your first pod! And while doing so, you used containers, Kubernetes and a feature called S2I (Source-To-Image) that allows you to build a container straight from a repository.
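
For the record, the same S2I flow exists on the command line, should you ever want it. A sketch, assuming the generated resource names follow the repository name (they may differ on your cluster):

```
# Build and deploy straight from the Git repository using the Node.js builder image:
oc new-app nodejs~https://github.com/florianmoss/node-hello.git

# Expose the resulting service via a public route, just like the console did:
oc expose service/node-hello
```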

Let’s have a look at the topology:

View of the topology

We can see that a pod has been deployed that utilizes a Deployment configuration and an application label. You possibly don’t even know what this means, and yet you have done it. Simply select the ‘Open URL’ icon now to get access to the application (via the route that was created by default; again, don’t worry if that means nothing to you).

As expected, the application returns some text:

Return message of our application

Skip this paragraph if you know nothing about Kubernetes. If you are already experienced with it, you could now go ahead and explore the Route and Deployment configurations that were automatically created for you. You could change the UpdateStrategy or the number of replicas directly in the created YAML definition.

Deployment configuration file created by OpenShift
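
For the CLI-inclined, the same tweaks are two commands away (the deployment name is assumed here; check oc get deployments in your project):

```
# Scale out to three replicas:
oc scale deployment/node-hello --replicas=3

# Or open the generated YAML and edit fields such as the update strategy directly:
oc edit deployment/node-hello
```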

Ok, great. The first promise was upheld already, you just deployed an application by using Kubernetes. Now, let’s look at deploying a Serverless application.

Serverless

Serverless, cool, but what does that actually mean? Serverless essentially means nothing more than this: your application doesn’t use any resources while it’s not being used. That’s it. No dark magic, nothing. But why should we use it? Think about your HR team: once a month they send out the payslips, but their application for doing that keeps on running all year, every. single. day. And guess what, you are paying for it. That’s why Jeff Bezos is a rich man, because you are running stuff that doesn’t need to run (careful, hidden sarcasm!). But in all seriousness, we all waste a ton of money because we are running applications on servers that sit idle 99% of the time. This is exactly why serverless is so important.

To understand how kNative implements this idea, it is important to understand how Serving works within kNative. A Service essentially manages the whole lifecycle of the application; everything that happens goes through the Service first. If traffic hits our endpoint, the Service sends it to the Route, which then decides which Revision is supposed to serve the content. So, what is a Revision then? Easy: a Revision is a snapshot of the code at a given point in time. Think about it this way: you have a repository, this is revision 1; now you make a change to it, and that becomes revision 2…I think you get the concept. The Configuration simply maintains the desired state of the deployment; don’t worry too much about this at this stage.

Structure of kNative Serving implementation
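
To make the diagram concrete, this is roughly what a kNative Service manifest looks like. A hedged sketch: the name and image reference are placeholders, while the field names follow the serving.knative.dev/v1 schema:

```
cat <<EOF | oc apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: node-hello-sls
spec:
  template:      # the Configuration: every change here stamps out a new Revision
    spec:
      containers:
      - image: quay.io/example/node-hello   # placeholder image reference
  traffic:       # the Route: send all traffic to the newest Revision
  - latestRevision: true
    percent: 100
EOF
```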

Sounds all horribly difficult, doesn't it? Trust me, we can make this look a heck of a lot easier by actually using it.

First of all, make sure that the Serverless Operator is installed in your cluster.
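
A quick sanity check that Serving is actually up (the knative-serving namespace matches a standard OpenShift Serverless install; adjust if yours differs):

```
oc get pods -n knative-serving
```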

Now, select ‘+Add’ on the left side and choose ‘From Git’, as we have done previously. And also paste in the URL for the repository that was used previously.

Adding the application ‘From Git’ in the Developer menu

In the ‘Resources’ section, choose the kNative Service option as seen below and select ‘Create’. That’s it.

Selecting the kNative option
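
If you already have a built image lying around, the kn CLI gets you the same result in one line (service and image names are made up here):

```
kn service create node-hello-sls --image quay.io/example/node-hello
```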

In the topology you can now see a second application that will look similar to the following:

A deployed kNative application utilizing Serving

The first thing to observe is that there is currently no pod running. This is exactly what we expect from a serverless application, because no interaction should also mean no uptime. You can also see an outer endpoint that points with 100% of the traffic at the Revision inside. If you scroll back and compare this to the diagram called ‘Structure of kNative Serving implementation’, you will also understand why it looks like this. In fact, what you are seeing is an exact representation of the diagram: we have an outer shell that is the Service, the Route points at a Revision on the inside, and the Revision is managed by the Configuration. Even if we possibly don’t understand what all of this means, OpenShift automatically did it for us by utilizing kNative.

If we go ahead and select the Service endpoint on the outer ring, the application will automatically deploy a pod when under load and scale back down when running idle, as seen below.

Application scaling up upon traffic and scaling down when on idle
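
You can watch this happen from the terminal too (the route hostname will differ on your cluster):

```
# In one terminal, watch pods come and go:
oc get pods --watch

# In another, look up the URL of the kNative service and generate some traffic:
oc get ksvc
curl https://node-hello-sls-myproject.apps.example.com
```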

Promise number 2 kept! You just deployed a serverless application in a Kubernetes cluster. With what? 4 clicks! And guess what, you could have done so without having any prior knowledge about any of those technologies.

This is really what makes OpenShift so powerful in my opinion. It encapsulates complex technologies and makes them accessible to anyone.

Now, that’s all great — but what now?

If you are a complete beginner and new to Kubernetes: Learn OpenShift

If you want to use it at home and know Kubernetes: OKD

If you have a Kubernetes cluster running and want to learn more about Serverless: kNative

Instructions for: Plain Kubernetes + kNative
