
Modern Deployment Is Broken And Nobody Wants to Admit It

A decade of coding taught me that the deployment ladder - VMs, containers, Kubernetes - just trades one set of problems for another. Here's why.

By Avin Kavish
Opinion · Engineering

We set out to ship a blog. Instead, we spent three weeks configuring infrastructure.

I've been coding for a decade. I've led engineering teams. I've used VMs to run applications in production and experienced the suck firsthand. And here's what I've learned: the deployment ladder - from VMs to containers to Kubernetes - doesn't solve your problems. It just trades them for different ones. Each step forward, you sink a little deeper into the infrastructure bog.

VMs - The Original Problem

Let's start with virtual machines. VMs suck because you have to rotate the logs, add SSH keys, manage OS patches, handle dependency updates, configure security hardening - the list goes on. Every operational task that you'd rather not think about becomes your problem.
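Even the smallest of those chores means writing and maintaining config. Here's what just the log rotation piece looks like — a minimal sketch for a hypothetical app, where the path, service name, and signal are all assumptions:

```
# /etc/logrotate.d/myapp -- hypothetical app; path and reload signal are assumptions
/var/log/myapp/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        # ask the (hypothetical) service to reopen its log files
        systemctl kill -s USR1 myapp.service
    endscript
}
```

And that's one file, for one task, on one machine. Multiply by every chore on the list, and by every VM in the fleet.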

But it gets worse. If you're running a web app or a microservice (which is basically always), you have to install a load balancer, create an auto-scaling group, set scaling targets, set up a process manager, create launch templates. Each of these is its own mini-project with documentation to read, best practices to learn, and failure modes to understand.
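To see how much ceremony that is, here's a rough sketch of the AWS flavor of it. The names, subnet IDs, and JSON payloads are placeholders, and the exact flags vary by provider and API version — treat this as the shape of the work, not a copy-paste recipe:

```shell
# Placeholders throughout: names, subnet IDs, and the elided JSON payloads
aws elbv2 create-load-balancer \
    --name app-lb --subnets subnet-aaa subnet-bbb

aws ec2 create-launch-template \
    --launch-template-name app-lt \
    --launch-template-data '{"ImageId": "...", "InstanceType": "..."}'

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name app-asg \
    --launch-template LaunchTemplateName=app-lt \
    --min-size 2 --max-size 10 \
    --vpc-zone-identifier "subnet-aaa,subnet-bbb"

aws autoscaling put-scaling-policy \
    --policy-name cpu-target --auto-scaling-group-name app-asg \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"TargetValue": 50.0, "...": "..."}'
```

Four commands that each hide a documentation rabbit hole — and none of them ship your blog.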

You wake up at 3 AM because logs filled the disk. You spend Tuesday morning rotating SSH keys. You spend Wednesday afternoon applying security patches. And Thursday? Thursday you're finally getting back to the feature you were supposed to ship Monday.

The deeper you wade into the VM bog, the slower you move. It's not just infrastructure - it's your team's time, your product velocity, your ability to actually build the thing you set out to build. You're stuck maintaining infrastructure when you should be shipping features.

And here's the thing: VMs were made for a different purpose. They weren't designed to ship cloud SaaS - virtualization goes back to time-sharing mainframes in the 1960s and 70s. We're using the wrong tool for the job, and wondering why it's so much work.

"Just Use Containers!" (They Said)

So you listen to the advice: run containers. Modern, portable, isolated. Problem solved, right?

Wrong. Don't just run containers under a Docker daemon on a VM. Let's trace what actually happens: from the container's perspective, the cumbersome tasks of log rotation and all the rest are outsourced to Docker. Docker passes the burden to the host OS. And the host OS is still managed by you.

You haven't eliminated the work - you've just added abstraction layers. The buck still stops with you for the underlying infrastructure. You thought you were climbing out of the bog, but you're just wading through a different part of it.

"Fine," you think, "skip the headaches. Just run your container on a container service."

But wait. The suck doesn't end there.

Now you have to set up your API Gateway. Configure your load balancer. Create your scaling groups. The whole lot, yet again. And that's after you manage to successfully containerize your application, which in itself is a lot of work. Multi-stage builds, layer optimization, base image selection, security scanning, registry management - containerization isn't free.
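Here's what "containerization isn't free" looks like in practice — a minimal multi-stage Dockerfile for a hypothetical Node.js app. The base images, paths, and entry point are all illustrative assumptions:

```dockerfile
# Hypothetical Node.js app -- image tags, paths, and port are assumptions

# Stage 1: build with the full toolchain
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only runtime artifacts on a slimmer base
FROM node:20-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

And this is the easy part — you still owe the registry setup, the security scanning, and the base-image update cadence.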

So we end up with a lot of busy work that has nothing to do with the primary goal of the company, which was to ship a blog.

Enter Kubernetes - The Final Boss

At this point, you're thinking: "Kubernetes. That's the answer. That's what the big companies use."

And you're not wrong about the power. Kubernetes gives you Deployments, Services, Pods, Ingress - abstractions that actually abstract. Declarative infrastructure. Self-healing systems. Horizontal pod autoscaling. Service meshes. The works.

Kubernetes is a workhorse that grew out of Google's internal infrastructure and now powers internet-scale companies. It's an easy pick for a platform if you're building infrastructure that needs to scale to billions of requests.

But here's the thing: it's an advanced topic that requires a degree in Kubernetes.

You need to understand control planes and worker nodes. You need to know the difference between a Deployment and a StatefulSet, when to use a ClusterIP versus a LoadBalancer, how to configure RBAC policies, what admission controllers are, how to handle persistent volumes, and on and on.
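And all of that vocabulary is just to get one container serving traffic. A minimal sketch — hypothetical app name, image, and ports, all placeholders — looks like this:

```yaml
# Hypothetical blog deployment -- name, image, and ports are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  replicas: 2
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: registry.example.com/blog:1.0.0
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: blog
spec:
  type: ClusterIP   # one of the choices you now have to understand
  selector:
    app: blog
  ports:
    - port: 80
      targetPort: 3000
```

Forty lines of YAML, and you haven't touched Ingress, RBAC, persistent volumes, or autoscaling yet.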

And yet again, no progress has been made towards shipping the blog.

It's not a light choice for a software company. You chose Kubernetes to focus on business logic, and now you're a Kubernetes administrator. More years, more tools, still not shipping.

The Pattern

Here's what all these approaches have in common: they treat infrastructure as a prerequisite, not as a product that should work for you.

There's a hidden assumption in the deployment ladder - that you must become an infrastructure expert to ship software. That somewhere between writing code and serving users, you need to also become fluent in operating systems, container runtimes, and orchestration platforms.

This isn't just my frustration. Recent surveys show mounting DevOps frustration and costs across the industry. Teams are spending more time on infrastructure and less time shipping features. The tools promised to make things easier, but the complexity just shifted.

Each "solution" optimizes for scale and flexibility at the cost of time-to-value. And for most teams, that's the wrong trade-off. You don't need to handle Google's scale. You need to ship a blog (or a SaaS app, or a mobile backend, or whatever your actual product is).

The question isn't "which deployment method is technically superior?" The question is: "what if the abstraction went further?"

Application platforms are the sweet spot. Not infrastructure-as-a-service where you're still configuring load balancers. Not container orchestration where you're still writing YAML. Platforms that take your code and handle everything else - the deployment, the scaling, the monitoring, the networking. That's the abstraction level that actually gets you out of the bog.

The Fix

That is why, when I built Viduli, I baked every need of a production application serving millions of users into the core platform. No add-ons, no extra charges, no additional steps - just everything built in.

Every production concern you'd normally spend weeks configuring? Built in. Load balancing, auto-scaling, service mesh, API gateway, database backups, monitoring, log aggregation, SSL certificates, DNS management - it's all there from day one.

And yes, Viduli is built on Kubernetes. All the power of the workhorse, none of the complexity. You get enterprise-grade orchestration, self-healing systems, and battle-tested infrastructure - without writing a single line of YAML or understanding control planes. That's the right abstraction layer.

Not because VMs are wrong. Not because containers are bad. Not because Kubernetes isn't powerful. But because the goal is to ship the blog, not manage infrastructure.

If it distracts from your primary business goal, abstract it away completely. That's the architectural principle.

Kubernetes came out of Google because Google builds infrastructure. Most companies don't. Most companies build products. The infrastructure should be invisible, automatic, and someone else's problem.

Ask yourself: what are you really managing, and does it help you ship faster?

If the answer is no, you're not climbing a ladder - you're stuck in the bog.