AI on Rails: Who Owns the Liability When AI Writes Your Code?

When you build software, you make trade-offs. Ship the MVP fast with basic authentication, or spend three months building enterprise-grade security? Launch with a workaround, or delay two weeks to do it properly?

Every shortcut creates debt. Not because developers are lazy — because speed has value and perfection has costs. The problem is not taking on debt. It is never paying it back.

Over time, it compounds: unpatched libraries with known vulnerabilities, undocumented workarounds nobody understands anymore, integrations built for 100 users now serving 10,000, "temporary" solutions from 2019 still running in production.

Now Add AI to the Equation

Recent research is sounding alarms about what that speed costs:

GitClear: Analysis of 211 million lines of code shows declining quality correlated with AI adoption — more copy-paste patterns, fewer architectural decisions.

Ox Security: AI-generated code is "highly functional but systematically lacking in architectural judgment" — it solves the immediate problem, not the underlying one.

Stack Overflow: Experienced developers are 19% slower when reviewing and cleaning AI-generated code than code they wrote from scratch.

At Sourcelab, we use AI to elevate our team. Juniors get instant answers instead of waiting for seniors. Code gets written faster. Productivity is genuinely up. But here is what we are wrestling with: our normal quality processes are under pressure.

The Liability Problem

Senior developers reviewing every line of code? That is becoming a bottleneck when AI writes entire features in minutes. But not reviewing it? That is accepting liability for code nobody fully understands.

When AI-generated code has a security flaw, Sourcelab is liable. Not the AI. Not the tool. Us.

When unpatched libraries leak customer data, the platform owner pays — not the developer who left three years ago.

When "temporary" workarounds violate GDPR, you get fined — not your velocity metrics.

This is why technical debt re-assessment is not optional — whether you are using AI or not. It is how you stay compliant, secure, and trustworthy.

What "AI on Rails" Means in Practice

We are actively building quality gates to solve this. We do not have it fully figured out yet — and I think anyone who tells you they do is oversimplifying. But the direction is clear: you need structured checkpoints between AI generation and production deployment.

For us this means automated security scanning on every pull request, mandatory architectural review for features above a certain complexity threshold, and a clear policy on which parts of the codebase require human-written code regardless of AI capabilities.
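The idea of routing a pull request through those three gates can be sketched in a few lines. This is a hypothetical illustration, not our actual tooling: the `DiffStats` shape, the thresholds, and the check names are all invented for the example; a real pipeline would pull this data from the CI system.

```python
# Hypothetical sketch of a pull-request quality gate. All names and
# thresholds here are illustrative assumptions, not a real policy.

from dataclasses import dataclass


@dataclass
class DiffStats:
    files_changed: int
    lines_added: int
    touches_protected_paths: bool  # e.g. auth or billing code reserved for humans


# Illustrative complexity threshold: beyond this, a human architect must review.
COMPLEXITY_THRESHOLD = 400


def required_checks(stats: DiffStats) -> list[str]:
    """Return the gates a pull request must pass before it can merge."""
    checks = ["automated_security_scan"]  # runs on every pull request
    if stats.lines_added > COMPLEXITY_THRESHOLD or stats.files_changed > 20:
        checks.append("architectural_review")  # mandatory above the threshold
    if stats.touches_protected_paths:
        checks.append("human_written_code_policy")  # AI-generated code not allowed here
    return checks


if __name__ == "__main__":
    small_fix = DiffStats(files_changed=2, lines_added=35, touches_protected_paths=False)
    big_feature = DiffStats(files_changed=30, lines_added=900, touches_protected_paths=True)
    print(required_checks(small_fix))
    print(required_checks(big_feature))
```

The point of the sketch is the shape, not the numbers: every change gets scanned, size or blast radius escalates to a human, and some parts of the codebase are simply off-limits to generated code.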

You cannot be transparent about systems you do not understand. Ship fast, yes. But know what you are shipping.

This connects directly to digital trust. Clients trust us to deliver software that is secure, maintainable, and understandable. That trust does not disappear just because we are using new tools. If anything, the new tools make the responsibility heavier — because the volume is higher and the speed is faster.

How are you balancing AI velocity with quality control? I would genuinely like to know what is working — and what is not.

Want to talk about Digital Trust, AI governance, or software quality?

Get in Touch