Serverless Is An Architectural Handicap (And I'm Tired of Pretending It Isn't)
As a software architect, I hate serverless. Not because it doesn't work, but because it forces design constraints that cripple your application. Here's why always-on servers matter.


I need to say something controversial: as a software architect with a decade of experience building production systems, I hate serverless.
Not because it's bad technology. Not because AWS Lambda doesn't work. But because serverless is an architectural handicap that the industry has collectively decided to ignore.
The serverless pitch is seductive: "Just write functions. We'll handle everything else. No servers to manage." But what they don't tell you is that you're trading infrastructure complexity for architectural constraints that will haunt every design decision you make.
The Request-Response Prison
Here's the fundamental problem with serverless: it forces you into a request-response model that most real applications outgrew years ago.
Every Lambda function lives and dies with a single invocation. It wakes up when called, executes your code, and goes back to sleep. This seems elegant until you realize what you've lost: the ability to run code at any time, outside the request-response cycle.
Let me give you real examples of things that are trivial with always-on servers but become architectural nightmares with serverless:
Background Job Processing
You have a user upload a video. You need to transcode it, generate thumbnails, extract metadata, update the database, send notifications, and update search indexes.
With an always-on server: You accept the upload, queue the job, return a response. A background worker picks it up and processes it over the next 20 minutes. Easy.
With serverless: You're fighting 15-minute execution limits. You need to chain functions together. You need Step Functions or SQS. You're orchestrating distributed state machines for what should be a simple background job. Your architecture diagram looks like a bowl of spaghetti because you're working around artificial constraints.
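The always-on version really is that simple. A minimal sketch using Python's standard-library `queue` and a long-lived worker thread (`transcode` is a placeholder for the real 20-minute job):

```python
import queue
import threading

jobs = queue.Queue()
results = []

def transcode(job):
    # Placeholder for the real work: transcoding, thumbnails, metadata, etc.
    return f"processed {job}"

def worker():
    # Runs for the lifetime of the process; no execution-time limit to fight.
    while True:
        job = jobs.get()
        if job is None:          # sentinel to shut down cleanly
            break
        results.append(transcode(job))

t = threading.Thread(target=worker, daemon=True)
t.start()

# The request handler just enqueues and returns immediately.
jobs.put("video-123.mp4")
jobs.put(None)
t.join()
```

No Step Functions, no distributed state machine: the queue and the worker live in the same process, and the job can take as long as it needs.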
Scheduled Tasks & Cron Jobs
You need to send daily email digests, clean up old records, generate reports, check for expired subscriptions.
With an always-on server: Set up a cron job. Done. Your application owns its own scheduling logic.
With serverless: CloudWatch Events or EventBridge. More services to configure. More IAM policies. More places where things can break. And now your application logic is split between your code and AWS service configurations.
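When the process is always on, scheduling is just a loop the application owns. A sketch with a background thread (the interval is shortened here for illustration; in production it would be 24 hours):

```python
import threading
import time

runs = []

def send_daily_digest():
    # Placeholder for the real task: digests, cleanup, reports, etc.
    runs.append(time.time())

def every(interval, action, stop_event):
    # Dead-simple in-process scheduler: the server outlives any one
    # request, so it can own its own timing logic.
    while not stop_event.wait(interval):
        action()

stop = threading.Event()
t = threading.Thread(target=every, args=(0.01, send_daily_digest, stop), daemon=True)
t.start()
time.sleep(0.05)
stop.set()
t.join()
```

The scheduling logic lives next to the task it triggers, in the same codebase and the same deployment unit, instead of in an EventBridge rule.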
Real-Time Features
WebSockets. Real-time notifications. Live dashboards. Collaborative editing. Long-polling. Server-sent events.
With an always-on server: Maintain persistent connections. Hold state in memory. Broadcast updates to connected clients. This is what servers are made for.
With serverless: You need API Gateway WebSocket APIs with connection tables in DynamoDB, callback URLs stored somewhere, Lambda functions that can't hold connections, and complex orchestration just to send a message. You've turned a 20-line WebSocket handler into a distributed system with five moving parts.
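The "20-line WebSocket handler" is mostly an in-memory connection registry. A self-contained sketch of that core (with a real server, each `send` callable would write to an open socket; here it appends to a list so the example runs standalone):

```python
class Hub:
    """In-process connection registry: exactly the kind of live state a
    Lambda function cannot hold between invocations."""

    def __init__(self):
        self.connections = {}  # connection id -> send callable

    def connect(self, conn_id, send):
        self.connections[conn_id] = send

    def disconnect(self, conn_id):
        self.connections.pop(conn_id, None)

    def broadcast(self, message):
        for send in self.connections.values():
            send(message)

hub = Hub()
inbox_a, inbox_b = [], []
hub.connect("a", inbox_a.append)
hub.connect("b", inbox_b.append)
hub.broadcast("dashboard updated")
hub.disconnect("b")
hub.broadcast("second update")
```

In the serverless version, `self.connections` becomes a DynamoDB table, each `send` becomes a callback URL through API Gateway, and every line of this class becomes a network hop.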
Database Connection Pooling
This one is particularly painful. Databases have connection limits. Applications need connection pools.
With an always-on server: Create a connection pool at startup. Reuse connections across requests. This is Database 101.
With serverless: Every function invocation might need a new connection. You hit connection limits at scale. You need RDS Proxy (another service, more cost). Or you use HTTP-based databases. Or you implement connection management in your application code. Or you just accept degraded performance.
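Here's the Database 101 pattern in miniature: a toy pool built once at process startup and reused by every request (`connect` stands in for a real database driver's connect call):

```python
import queue

class Pool:
    """Tiny connection pool, created once at startup and shared across
    requests -- the pattern per-invocation lifecycles break."""

    def __init__(self, factory, size):
        self.idle = queue.Queue()
        self.created = 0
        for _ in range(size):
            self.idle.put(factory())
            self.created += 1

    def acquire(self):
        return self.idle.get()

    def release(self, conn):
        self.idle.put(conn)

def connect():
    # Stand-in for an expensive real connection (TCP + TLS + auth).
    return object()

pool = Pool(connect, size=5)

def handle_request():
    conn = pool.acquire()
    try:
        pass  # run queries on `conn`
    finally:
        pool.release(conn)

for _ in range(1000):
    handle_request()
```

A thousand requests, five connections, zero new handshakes after startup. The database never sees more than `size` concurrent connections, no matter how hard the server is hit.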
The Stateless Handicap
Serverless demands statelessness. Every invocation starts fresh. No memory from previous requests. No warm connections. No in-process state.
This sounds principled, like good architectural discipline. But it's actually a constraint masquerading as a best practice.
Now, I understand that at scale, shared state like sessions and caches should live in external services like Redis. When you're running multiple server instances, you need centralized state anyway. That's not the issue.
The issue is what happens BETWEEN your code and those external services.
What Servers Can Do (That Serverless Can't)
1. Persistent Connections
With an always-on server, you create a connection pool to Redis at startup. Every request reuses those warm connections. Fast, efficient, minimal overhead.
With serverless, every invocation might need a new connection. Even with connection reuse tricks, you're constantly establishing and tearing down connections, adding 10-50ms of latency every time one has to be rebuilt.
2. Request-Scoped State
Your server can hold temporary computation state during a request. Parse a JWT once and keep it in a variable. Load user permissions and cache them for the request duration. Compute something expensive and reuse it.
With serverless, you either recompute everything or hit Redis for every tiny lookup. There's no middle ground.
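"Parse once, reuse for the request" is a one-liner on a server. A sketch using `functools.cached_property` (the token and claims are illustrative placeholders, not real JWT verification):

```python
import functools

class RequestContext:
    """Scratch space that lives for exactly one request: parse once,
    then every handler and middleware reads the cached result."""

    def __init__(self, raw_token):
        self.raw_token = raw_token
        self.decode_count = 0

    @functools.cached_property
    def claims(self):
        # Stand-in for real JWT verification; runs at most once per request.
        self.decode_count += 1
        return {"sub": "user-42", "role": "admin"}

ctx = RequestContext("header.payload.signature")
# Ten different handlers read the claims during one request...
for _ in range(10):
    _ = ctx.claims["sub"]
```

Ten reads, one decode. The serverless alternative is to verify the token in every function along the chain, or push the parsed claims into external storage and pay a network round trip to read them back.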
3. Warm Initialization
Servers load configuration once at startup. Compile regex patterns. Initialize libraries. Set up connection pools. Build lookup tables from static data.
Serverless does this on every cold start. Or you accept slower performance. Or you build complex warming strategies.
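The startup-versus-request split is easy to see in code. In Python, module-level statements run once per process, so an always-on server pays for them exactly once (the regex and lookup table below are illustrative stand-ins for real config):

```python
import re

# Module level == process startup: compiled and built once, then reused
# by every request this process ever serves.
EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")
COUNTRY_NAMES = {"US": "United States", "DE": "Germany", "JP": "Japan"}

def handle_request(email, country):
    # Per-request work only: no regex compilation, no table rebuilding.
    valid = bool(EMAIL_RE.fullmatch(email))
    return valid, COUNTRY_NAMES.get(country, "unknown")

ok, name = handle_request("dev@example.com", "DE")
```

In a Lambda, that module-level section re-runs on every cold start, and the bigger it gets, the longer each cold start takes.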
4. Background Refresh
Your server can have a background thread that refreshes cached data from external services. Keep a local copy of frequently-accessed data, refresh it every 30 seconds. Fast reads, eventual consistency where it makes sense.
Serverless can't do this. Every function invocation pays the cost of fetching data fresh.
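The background-refresh pattern is a cache plus one long-lived thread. A sketch (the 0.01-second interval stands in for the 30 seconds above, and `fetch_flags` for a call to a config service or Redis):

```python
import threading
import time

class RefreshingCache:
    """Local copy of remote data, kept fresh by a background thread --
    only possible when the process outlives individual requests."""

    def __init__(self, fetch, interval):
        self.value = fetch()           # warm the cache at startup
        self._fetch = fetch
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._loop, args=(interval,), daemon=True)
        self._thread.start()

    def _loop(self, interval):
        while not self._stop.wait(interval):
            self.value = self._fetch()  # reads stay local; data is eventually consistent

    def close(self):
        self._stop.set()
        self._thread.join()

fetch_count = 0

def fetch_flags():
    # Stand-in for the remote call being amortized.
    global fetch_count
    fetch_count += 1
    return {"new_dashboard": True}

cache = RefreshingCache(fetch_flags, interval=0.01)
time.sleep(0.05)
flag = cache.value["new_dashboard"]  # in-memory read, no network hop
cache.close()
```

Thousands of requests can read `cache.value` between refreshes; the remote service sees one fetch per interval instead of one per request.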
5. In-Memory Lookups
Configuration maps. Feature flags. API keys. Rate limit counters for the last second. These don't need to be shared across servers, but you also don't want to hit Redis for every single check.
Servers keep these in memory. Serverless hits external storage or reloads them constantly.
The result? Even when using the same external services, serverless applications are slower and more expensive because they can't maintain any warm state between requests.
You're not building a stateless application. You're building a system that constantly pays the cold-state penalty.
The Cold Start Burden
When a Lambda function hasn't run in a while, AWS needs to provision a container, load your code, and initialize your runtime. This takes time:
- 100-500ms for Node.js and Python
- 1-3 seconds for Java and .NET
- Even longer for large dependencies
Yes, you can keep functions warm. Yes, you can use provisioned concurrency. But now you're paying for idle capacity - the exact thing serverless promised to eliminate.
With an always-on server, there are no cold starts. Your application is always ready. Users get consistent performance, not random 2-second delays.
The Cost of "Free" Scaling
"Serverless scales automatically!" they say. "From zero to millions!" they promise.
What they don't mention: serverless is cheap at zero scale and expensive at consistent scale.
Let's do the math for a typical web API serving 10 requests per second (not high traffic, just steady):
- 10 req/sec × 86,400 seconds = 864,000 requests/day
- At 100ms average execution time = 86,400 seconds of compute/day
- Lambda pricing: ~$0.0000166667 per GB-second
- For 1GB memory: ~$1.44/day = $43/month in Lambda costs
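The arithmetic above, spelled out (using the Lambda rate quoted in the list; actual prices vary by region and change over time):

```python
req_per_sec = 10
seconds_per_day = 86_400
avg_exec_seconds = 0.1              # 100 ms average execution time
memory_gb = 1.0
price_per_gb_second = 0.0000166667  # the Lambda rate quoted above

requests_per_day = req_per_sec * seconds_per_day
compute_seconds_per_day = requests_per_day * avg_exec_seconds
daily_cost = compute_seconds_per_day * memory_gb * price_per_gb_second
monthly_cost = daily_cost * 30

print(requests_per_day)        # 864000
print(round(daily_cost, 2))    # 1.44
print(round(monthly_cost))     # 43
```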
A comparable container (1 vCPU, 1GB RAM) on most platforms: $10-20/month.
At consistent traffic, serverless costs 2-4x more than containers. The "pay-per-request" model is great for sporadic workloads, terrible for steady ones.
And that's just compute. Add in API Gateway costs ($3.50 per million requests), CloudWatch Logs, data transfer, and you're looking at even higher bills.
The Vendor Lock-In Nobody Talks About
"Containers are portable!" everyone says. "Functions are standard!" they claim.
But look at your serverless codebase:
```python
# AWS Lambda handler
import json

import boto3

def lambda_handler(event, context):
    # Parse the API Gateway proxy event
    body = json.loads(event['body'])

    # Call other AWS services
    dynamodb = boto3.resource('dynamodb')
    s3 = boto3.client('s3')

    result = {'ok': True}  # stand-in for the real application logic

    # Lambda-specific response format
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps(result)
    }
```

This code is AWS-specific from top to bottom:
- Lambda event format
- API Gateway integration
- boto3 for AWS services
- Lambda response format
Moving to Google Cloud Functions or Azure Functions means rewriting your integration layer. Your "cloud-agnostic" functions are locked into AWS just as much as if you'd built on EC2.
With containers, your application code is actually portable. The container image runs anywhere - AWS, GCP, Azure, your own datacenter, or any platform that supports containers.
What You Actually Need (And Don't Get)
As an architect, here's what I actually want when I deploy an application:
- Always-on execution - My code runs continuously, handling requests and background tasks
- Persistent connections - WebSockets, database pools, external API connections
- In-memory state - Caches, sessions, rate limiters without external services
- Predictable latency - No cold starts, consistent performance
- Background processing - Long-running jobs, scheduled tasks, async work
- Reasonable costs - Pay for what I use, but don't pay a premium for it
- Simple architecture - Straightforward designs, not distributed systems by default
Serverless gives me the sixth item, reasonable costs (and only in the beginning, at low scale). It fails at everything else.
| Requirement | Always-On Server | Serverless |
|---|---|---|
| Always-on execution | ✅ | ❌ |
| Persistent connections | ✅ | ❌ |
| In-memory state | ✅ | ❌ |
| Predictable latency | ✅ | ❌ |
| Background processing | ✅ | ❌ |
| Reasonable costs | ✅ | ⚠️ |
| Simple architecture | ✅ | ❌ |
An always-on server gives me all seven.
The Right Tool for the Job
I'm not saying serverless is bad for everything. There are legitimate use cases:
- Event processing - S3 upload triggers, webhook handlers, IoT events
- Scheduled batch jobs - Run once per day/week, idle otherwise
- Sporadic workloads - Unpredictable spikes, long idle periods
- Glue code - Small integrations between services
These fit the serverless model naturally. Short-lived, stateless, event-driven.
But web applications? APIs? Microservices? Background workers? Real-time features? These are not serverless workloads. They're continuous, stateful, always-on applications that serverless forces into an unnatural shape.
The Better Abstraction
The serverless revolution got one thing right: developers shouldn't manage infrastructure.
But it got the solution wrong: the answer isn't to constrain your architecture around functions. The answer is to abstract the infrastructure while preserving architectural freedom.
That's why modern application platforms exist. You get:
- Container-based deployment - Full applications, not just functions
- Always-on execution - No cold starts, no timeouts
- Background workers - Long-running jobs that just work
- WebSocket support - Real-time features without complexity
- Connection pooling - Databases work like they should
- Automatic scaling - Scale up and down based on demand
- Simple pricing - Pay for compute resources, not invocations
All the deployment simplicity of serverless, none of the architectural constraints.
When I built Viduli, this was the core principle: abstract infrastructure without constraining architecture. You write normal applications with background workers, WebSockets, database connections, in-memory caching - all the things that make software engineering straightforward.
Serverless solved the wrong problem. It made infrastructure invisible by making good architecture impossible.
The right solution makes infrastructure invisible while letting you build proper applications.
Ask Better Questions
Stop asking "Should I use serverless?"
Start asking:
- Does my application fit the stateless, request-response model?
- Can I live with 15-minute execution limits?
- Am I okay with cold starts and variable latency?
- Do I want to orchestrate distributed state machines for simple tasks?
- Am I building for a sporadic or a consistent workload?
If you're building a typical web application, API, or microservice, the answer to most of these is "no."
You don't need serverless. You need deployment simplicity without architectural compromise.
That's not revolutionary. That's just good engineering.
Serverless is a handicap. Stop pretending it isn't.