Is there a world post-Kubernetes?

Red-figure bell-krater depicting Orestes visiting Delphi to request help from Apollo and Athena, 4th century BC, via the British Museum, London

Today I’m going to gaze into my crystal ball and try to imagine a world post-Kubernetes. No! Martin! Say it ain’t so, K8s is here to stay and is the future of the cloud! Well, that may be the case, but I have some opinions.

The contenders for a post-Kubernetes world as it stands now are as follows:

  • Unikernels
  • Run-times like WASM
  • More Kubernetes (say whaaaaat?)

Let’s start with what K8s was designed to solve: we had the rise of Docker, which led to the rise of containers, which in turn led to the problem of managing containers on servers while keeping them secure, which in turn led to frameworks to help orchestrate and manage containers across fleets of servers. There were many, and now there’s really only one (yes Linkerd, we still love you).

OK, so Kubernetes was designed to manage containers across fleets of servers, and in turn abstract the cloud infrastructure from the implementation infrastructure – that means: I should be able to manage services in Kubernetes the same way on Google Cloud as I would on AWS as I would on my own server farm.

Has it achieved this goal? Arguably not – and the only evidence you should need to back up that claim is the fact that each cloud vendor (AWS, Google, Red Hat, Azure and even DigitalOcean) has implemented their own managed version of Kubernetes for their users.

In most cases these implementations are similar, but none are the same – I’m looking at you, OpenShift. What does that mean? It means that Kubernetes – the system that was meant to level the playing field, the ultimate meta-standard – has had to be converted into a managed service in order to make adoption easy for users. Why? Because it is so hard to run and maintain yourself.

In short: Kubernetes is hard, and it’s so hard that cloud vendors have co-opted it into a new walled garden. Theoretically you can lift and shift between vendors, but there are enough subtleties between implementations that true migration is still somewhat labor intensive.

This is where I think we’re going to see the post-Kubernetes world, and it’s bleak.

First off, Kubernetes can actually orchestrate anything – it doesn’t need to be containers. There are already drivers for virtual machines, which opens the door to Unikernels, and it looks like WASM isn’t far behind.

So what about Unikernels and run-times like WASM?

Unikernels are amazing, but still really hard to build, while WASM is held back on the server side by the rather immature state of WASI and the wider issue of language support – if you really want to write WASM, you do it in Rust. Neither currently (nor in the longer-term future) will match the real flexibility containers can offer now.
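To give a sense of what that Rust-shaped happy path looks like today, here’s a minimal sketch: a trivial Rust program compiled to the wasm32-wasi target and run under a standalone runtime such as Wasmtime. The build commands in the comments are the standard toolchain invocations; the file names are illustrative.

```rust
// Minimal WASI-targeted program. Build with:
//   rustup target add wasm32-wasi
//   cargo build --target wasm32-wasi --release
// Then run it with a WASI runtime, for example:
//   wasmtime target/wasm32-wasi/release/hello.wasm

fn greeting() -> String {
    // Ordinary Rust code compiles to WASM unchanged;
    // it's the system-interface edges where WASI still bites.
    format!("Hello from {}!", "WASM")
}

fn main() {
    // println! works here because stdout is one of the few
    // interfaces WASI already standardises.
    println!("{}", greeting());
}
```

Anything beyond this – sockets, threads, most of what a real service needs – is where WASI’s immaturity shows, which is the point above.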

No folks, the post-K8s world is… more K8s.

Unfortunately, instead of this “more K8s” being the “OS for Cloud Infrastructure” – at least in this author’s humble opinion – managed K8s will become more deeply integrated with the underlying cloud providers’ infrastructure.

Why do I think this? Because we are now in the wide-adoption phase of something called Custom Resource Definitions (CRDs). These provide a way for software vendors (such as our very own Tyk – the Open Source API Gateway) to provide K8s-native manifests for their own software and overall integrate better with Kubernetes, especially if the software in question is a service provider and interacts with other resources (we do this with our Tyk Kubernetes Operator – check it out, it’s pretty cool).
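For the uninitiated, a custom resource looks just like any other Kubernetes manifest – that’s the whole appeal. The sketch below is loosely modelled on the Tyk Operator’s ApiDefinition resource; treat the field names as illustrative rather than a copy-paste reference:

```yaml
# A vendor-defined custom resource: to kubectl it looks native,
# but the schema is defined by the vendor's CRD, not by Kubernetes.
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: httpbin
spec:
  name: httpbin
  protocol: http
  proxy:
    listen_path: /httpbin
    target_url: http://httpbin.default.svc:8000
    strip_listen_path: true
```

You apply it with `kubectl apply -f` like anything else, and the vendor’s operator reconciles it into running configuration.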

Unfortunately, CRDs also open the door for cloud vendors to build moats around their managed-Kubernetes offerings – and they will do this through their external value-added managed service offerings such as databases, queues and monitoring stacks (I’ve warned about this before in some of my talks).

Instead of using an off-the-shelf open source database like PostgreSQL, you’ll use managed PostgreSQL, or worse, Aurora (or CosmosDB, or BigTable). Then in an effort to define your infrastructure as code (because that’s what the cool kids do), you’ll use a CRD to configure it, and that CRD is *not* portable.
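To make that portability problem concrete, here is a hedged sketch of what provisioning a managed database through a CRD looks like, loosely in the style of AWS’s ACK controllers; the group, kind and field names are illustrative:

```yaml
# Hypothetical manifest in the style of a cloud vendor's
# managed-database controller. Everything here is
# vendor-specific: move this cluster to another cloud and
# none of it carries over.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: orders-db
spec:
  engine: postgres
  dbInstanceClass: db.t3.medium   # an AWS instance type, meaningless elsewhere
  allocatedStorage: 20
  masterUsername: admin
```

It’s infrastructure as code, sure – but it’s *that* cloud’s infrastructure as code.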

I mean ultimately, you could just manage high-availability PostgreSQL yourself in the K8s cluster, and NATS, and Redis, and Memcached, and then deal with the insane complexity that comes with handling dedicated storage, so you’ll adopt Rook, or Longhorn, or GlusterFS too.

But wait, wasn’t Kubernetes meant to save me time and effort?

Well, with the managed versions it will, and just like that K8s – or more precisely, the consumption of K8s – will be subject to the same vendor moat-building that all other popular open source projects have been subject to in the cloud world.

(That is incredibly depressing Martin – please give me some hope)

Well, the ultimate future in the post-K8s world will come down to making the server side seem obsolete: by turning it into a perfect magic black box. One where you write your code (or it’s written for you by an AI), and it is run in the cloud without you ever needing to think about how it runs, because that is entirely handled by the vendor.

That’s what the push to serverless is all about, and Kubernetes is the stepping-stone towards that goal.

Personally, I believe serverless will actually reduce the amount of innovation in software development, because it stratifies the technology layer and literally locks developers out of an entire segment of the technology stack, further specialising technologists into “*-end” classifications. The more commoditised developers become, the less they will think outside their silo when tackling problems, and the fewer brains will be working on interesting problems.

In this particular case, I really do hope I’m wrong.
