Developer convenience, infrastructure cost, and losing the open web
Lately I’ve been thinking about cloud platforms, SaaS, AI, scraping, costs, and the gradual closing of the web.
At first glance these seem like separate topics, but they share a set of assumptions that have quietly become “best practices” in the industry. Assumptions about what’s professional, what’s safe, what’s scalable, and what’s supposedly too hard for individuals or small teams to do themselves.
While I'm not anti-cloud or anti-SaaS in general, I do have a feeling these tools are often used far beyond where they make sense, largely due to marketing pressure and fear. That overuse creates real downstream effects: higher costs, lock-in, fragile systems, and eventually people closing off their own sites and apps just to stay afloat.
Cloud platforms and the myth of “serious infrastructure”
There’s a widespread belief that building something “serious” online requires “serious” platforms, which usually means public cloud infrastructure: AWS, GCP, Azure. Anything else is treated as amateurish.
The argument usually goes like this:
- Cloud gives you scalability, reliability, and failover.
- You shouldn’t self-host or manage servers yourself.
- Professionals use managed platforms.
- You'll need to hire sysadmins anyway, so it won't be cheaper.
This narrative sounds reasonable, and it may hold in some cases, but it misses the mark in many real-world situations.
Cloud platforms are not set-and-forget. Cloud services are complex beasts that are easy to misconfigure if you don't know what you're doing. As a result, you still need expertise to design, audit, and maintain the system, and over time you’ll need changes, fixes, and migrations. Instead of Unix tools and config files, you use web consoles, access policies, managed services, logs, metrics, and alerts. That just shifts where the complexity lives; the work and the costs remain.
Cloud infrastructure also isn’t inherently more reliable. In recent months alone, AWS, Cloudflare, GitHub, and others have had significant outages. Shared platforms fail too, and when they do, the failures are global. A small, well-understood system under your control is easier to reason about and to recover when it breaks.
Security follows a similar pattern. Cloud providers have professional teams, but they also represent high-value targets with enormous blast radii. A small, well-patched server with a minimal setup is often easier to secure and audit.
Cost is where the differences become unavoidable: following cloud “best practices” gets expensive fast. Compute is only the starting point, and then load balancers, replication, managed databases, traffic, logs, metrics, alerting, and add-ons all pile on. In practice, cloud setups are often an order of magnitude or more expensive, and they still require specialized knowledge.
The usual justification is that this avoids hiring a sysadmin. What actually happens is that you replace that role with an AWS, GCP, or Azure consultant at a comparable cost. If you can learn cloud tooling deeply enough to manage it yourself, you can learn Linux administration.
This pattern isn’t new. In the 2000s, Linux and Apache were considered unprofessional compared to Windows servers or branded Unix systems. Postgres and MySQL were dismissed in favor of Oracle. Linux routers were seen as inferior to Cisco hardware. These were marketing narratives framed as best practices, not technical inevitabilities.
Cloud infrastructure follows the same trajectory.
The Anti-Not-Invented-Here syndrome
A second pattern shows up in how people approach application architecture.
Instead of building small, simple components, there’s an increasing tendency to default to SaaS platforms and heavyweight frameworks for problems that are already well understood and mostly straightforward.
A blog is a good example.
Someone wants to write a blog. They reach for React and Next.js. That leads to client-side rendering, which causes SEO issues, so server-side rendering gets added. A remotely exploitable vulnerability appears in Next.js, raising concerns about arbitrary code execution. Running it on a plain Linux host now feels risky. Containers sound safer, or better yet, deploying to Vercel so someone else handles it.
At first it’s free. Then traffic grows. Bots and AI scrapers arrive. Bills start increasing. Now the problem is framed as “evil scrapers.”
None of this was necessary.
A blog is static content. Static HTML generation has existed for decades. Hosting it on a cheap VPS or static hosting costs almost nothing. The attack surface is minimal, and traffic volume doesn’t matter.
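To make the point concrete, here is a minimal sketch of a static blog build, assuming a `posts/` directory of plain HTML bodies and a `public/` output directory (both names are illustrative). It uses only the Python standard library:

```python
# Minimal static blog generator: wraps plain-HTML post bodies in a shared
# template and writes a finished site to ./public. Stdlib only; the
# directory names ("posts", "public") and the template are illustrative.
from pathlib import Path
from string import Template

PAGE = Template("""<!doctype html>
<html lang="en">
<head><meta charset="utf-8"><title>$title</title></head>
<body>
<main>
<h1>$title</h1>
$body
</main>
</body>
</html>""")

def build(posts_dir: str = "posts", out_dir: str = "public") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for post in sorted(Path(posts_dir).glob("*.html")):
        # Use the file name as the title; real generators read front matter.
        title = post.stem.replace("-", " ").title()
        page = PAGE.substitute(title=title, body=post.read_text())
        (out / post.name).write_text(page)
        print("wrote", out / post.name)

if __name__ == "__main__":
    build()
```

The output is a folder of plain files that any web server can serve (or `python -m http.server --directory public` while testing). There is nothing to patch, scale, or monitor beyond the host itself.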
Unnecessary complexity creates cascading requirements: more infrastructure, more security layers, more tooling. Eventually outsourcing does make sense, but only because the system was made hard to operate in the first place.
The same thing happens with databases. Instead of running Postgres, MySQL, or SQLite locally, people jump straight to platforms like Supabase or RDS. They’re convenient and feature-rich, but most projects use only a small subset. If you later rely on the advanced features, moving away becomes painful or impossible. Growth then turns into a recurring cost problem.
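For scale: here is roughly what "running SQLite locally" amounts to for a small project, using only the standard library (the file name and schema are made up for illustration):

```python
# A small project's entire "database layer": SQLite via the standard library.
# The file name and schema are illustrative.
import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS notes (
        id    INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        body  TEXT NOT NULL DEFAULT ''
    )
""")

with conn:  # commits on success, rolls back on error
    conn.execute("INSERT INTO notes (title, body) VALUES (?, ?)",
                 ("hello", "a locally stored row, no service required"))

for row in conn.execute("SELECT id, title FROM notes ORDER BY id"):
    print(row)

conn.close()
```

The whole database is one file, backups are a copy, and nothing above depends on a vendor's API. If the project genuinely outgrows this, moving to Postgres is a schema and data migration, not a platform exit.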
Authentication is another example. Auth is a solved problem with solid libraries and established patterns. The (solid) advice to avoid rolling your own cryptography has expanded into a blanket avoidance of auth entirely. Instead of teaching people what not to do, the default is outsourcing to third-party services. The complexity doesn’t disappear; it just becomes vendor-specific.
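The established password-storage pattern, for instance, fits in a few lines without inventing any cryptography: a random salt, a slow key-derivation function, and a constant-time comparison. A sketch with standard-library primitives (the iteration count and record layout are illustrative choices, not a recommendation to skip a vetted library like argon2 or bcrypt):

```python
# Standard password storage pattern with stdlib primitives: a random salt,
# a slow key-derivation function (PBKDF2 here), and a constant-time compare.
# The iteration count and record layout are illustrative.
import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # tune to your hardware; slower is also slower for attackers

def hash_password(password: str) -> str:
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return f"pbkdf2_sha256${ITERATIONS}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    _, iterations, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
    )
    return hmac.compare_digest(candidate, bytes.fromhex(digest_hex))

record = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", record)
assert not verify_password("wrong guess", record)
```

The point isn’t to hand-roll primitives; it’s that the pattern is well documented, the building blocks already exist, and learning them transfers across projects in a way that a vendor SDK does not.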
This shifts effort away from general, transferable skills and toward vendor lock-in.
Over time, SaaS subscriptions accumulate. Individually they seem minor, but together they add up, especially if you have unexpected usage spikes. Running your own product becomes expensive, which forces aggressive monetization, artificial limits, or scaling simply to justify the cost structure.
Scrapers and anti-scrapers
When infrastructure is expensive and usage-based, every request matters. Even serving a blog becomes a cost center. That’s where scrapers of any kind (but especially AI) become a visible problem.
Much of today’s hostility toward scraping is driven by real bills. If serving traffic costs money, unwanted traffic becomes something to block. Cloudflare rules, captchas, garbage responses: anything that reduces load.
With static files served at near-zero cost, none of this would matter. Scraping would be irrelevant or even welcome. It becomes a problem because of the underlying cost model.
The response to those costs is a gradual closing of the web.
Large platforms already operate this way. Facebook, chat platforms, and social networks allow data in while tightly controlling access out. Public content is often only accessible through proprietary interfaces, with limited or hostile APIs.
Individuals increasingly mirror this behavior. Access is blocked for everyone except Google or Bing for SEO reasons. Others are explicitly denied, not merely discouraged through robots.txt.
This dynamic strengthens incumbents. Building a competitor to Google today is limited by permissions, not technology. The web is increasingly crawlable only by the largest players.
The irony is that the actors people fear most (OpenAI, Facebook, Google, Anthropic) can easily bypass these barriers. They have the resources to do so. Smaller companies, researchers, and hobbyists do not. The web closes unevenly.
This creates a feedback loop: marketing-driven practices raise costs, higher costs incentivize restriction, and restriction concentrates power further.
DIY is an advantage
This trend is unlikely to reverse on its own. The incentives are strong, and the marketing is effective.
Understanding that you can operate systems yourself still matters. Hosting simple services and keeping systems boring and cheap are often the most robust choices available.
There's value in recognizing when you only need a small slice of what’s being sold. With AI-assisted coding, implementing that slice is easier than it’s ever been.
People who are comfortable one layer below the current fashion retain more options. They can decide when outsourcing makes sense and when it doesn’t.
Turns out Doing Things Yourself is a real competitive advantage, and it’s becoming rare. I hope it doesn't become extinct.