03-16-2026

Why Software Quality Is Getting Worse

Shak Schiff

Photo by Nicolas Cool on Unsplash

We Got Faster. We Also Got Sloppier.

Not long ago, shipping software took time. Weeks of planning. Long review cycles. Handoffs between people who actually looked at the thing before it went out the door. That process had plenty of problems. It was slow, sometimes bureaucratic, and not always better for all the extra time spent.

But something got preserved in the friction: a checkpoint. A moment where a person had to sit with the product and decide if it was ready.

That checkpoint is largely gone now. And the results are showing up everywhere.

Today, teams ship faster than at any point in the history of software. AI writes code. Automation fills in the gaps. No-code and low-code tools let people who have never written a line of code spin up entire websites in an afternoon. Builders who used to need six months now need six days. That speed is impressive. It is also producing a wave of broken, fragile digital experiences that nobody is taking responsibility for.

Speed Does Not Eliminate Risk. It Compresses It.

When you ship faster, you do not reduce the number of things that can go wrong. You reduce the time you have to catch them before they reach a customer. That is a meaningful difference, and most teams are not accounting for it.

A team using AI to generate code can produce in one day what used to take a week. But if that code has not been tested across real devices, real browsers, and real screen sizes, it does not matter how fast it was written. The speed just means the problem arrived sooner.

The same goes for no-code website builders. Drag-and-drop tools make it easy to put a product page together in minutes. They do not make it easy to know whether that page works on a 375-pixel-wide phone screen, in Safari, with autofill enabled, when a customer tries to check out at 11pm while their WiFi is spotty. That is not a theoretical scenario. That is Tuesday for a large portion of any ecommerce store’s customer base.

Tools Create Output. They Do Not Create Accountability.

This is the part nobody talks about. Every productivity tool in the modern software stack is optimized to help teams produce more output. More code. More pages. More features. More deployments. The metrics that get tracked are output metrics: velocity, story points, deployments per week, time to ship.

None of those metrics tell you anything about what happens when a customer lands on what you built.

A Shopify store can be built in a weekend with a theme, a few apps, and a solid product catalog. The owner can look at it on their laptop, think it looks great, and start running ads the next morning. And the site might actually look fine on their laptop. It might fall apart on Android Chrome. The navigation might lock up on tablet. The quantity field on the product page might pull up a full text keyboard on mobile instead of a number pad, creating a tiny jolt of friction at the exact moment someone is trying to buy.

No tool catches that. No plugin alerts you. It just keeps happening, invisibly, hundreds of times a day.
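Not because it is hard to catch, but because nobody wrote the check. Once a person has spotted the failure, pinning it down is cheap. Here is a minimal sketch with Playwright, where the store URL and the quantity selector are placeholders for your own:

```typescript
import { test, expect } from '@playwright/test';

// Placeholder URL and selector: substitute your own product page
// and however your theme names the quantity input.
test('quantity field opens a number pad on mobile', async ({ page }) => {
  await page.goto('https://example-store.com/products/sample');
  const qty = page.locator('input[name="quantity"]');

  // Mobile browsers pick the keyboard from the markup: an input with
  // inputmode="numeric" (or "decimal") gets a number pad; anything
  // else gets the full text keyboard that causes the jolt.
  await expect(qty).toHaveAttribute('inputmode', /numeric|decimal/);
});
```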

The Broken Middle: Screen Sizes Nobody Tests

One of the most consistent problems across every site assessment we run is breakpoint failure. Teams test on desktop. They check the mobile preview in their browser’s developer tools. They call it done.

The devices their customers are actually using sit somewhere in between. A tablet in landscape mode. A phone with a large screen. A browser window that is not quite maximized. At those in-between sizes, layouts collapse. Text overlaps. Buttons disappear. Navigation stacks in ways that were never designed and never reviewed.

This is not a rare edge case. It is a standard outcome when teams rely on tools to build without building in any process to verify the result. The tool does its job. It generates a layout. That layout was never designed to handle every breakpoint. Nobody walked through it. So it ships broken and stays broken until a customer gets frustrated enough to say something, or until it shows up in a conversion rate drop that nobody can explain.
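The verification step does not have to be heavy, either. Here is a sketch of an in-between-width sweep, again with Playwright and a placeholder URL. The assertion catches the crudest failure automatically; the screenshots exist for a person to actually look at:

```typescript
import { test, expect } from '@playwright/test';

// The "broken middle": large phones, tablets in landscape, browser
// windows that are not quite maximized. The URL is a placeholder.
const widths = [375, 414, 600, 768, 834, 1024, 1180];

for (const width of widths) {
  test(`layout holds at ${width}px`, async ({ page }) => {
    await page.setViewportSize({ width, height: 900 });
    await page.goto('https://example-store.com/');

    // Horizontal overflow is the most common symptom of a breakpoint
    // nobody designed: content spilling wider than the viewport.
    const overflows = await page.evaluate(() =>
      document.documentElement.scrollWidth > document.documentElement.clientWidth
    );
    expect(overflows).toBe(false);

    // Screenshots are for the human pass: the script flags candidates,
    // a person decides whether the layout actually works.
    await page.screenshot({ path: `shots/home-${width}.png`, fullPage: true });
  });
}
```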

AI Makes This Problem Bigger, Not Smaller.

There is a version of the AI productivity story that sounds like this: AI writes the code, AI reviews the code, AI tests the code, so everything is covered. I see this thinking more and more. It is understandable. It is also wrong.

AI is exceptionally good at producing output that looks correct. It can generate code that passes a review, builds without errors, and deploys without incident. What it cannot do is tell you what the experience feels like for a real person on a real device, in a real scenario, under real conditions.

That gap between “looks correct” and “works for customers” is where revenue leaks. It is where trust erodes. It is where people close the tab and go somewhere else. Automation and AI tools multiply your speed. They also multiply the surface area for unverified experience to reach your customers. If you double how fast you ship and keep the same amount of human verification in your process, you have not improved your quality. You have just distributed your risk faster.

The Quiet Cost Nobody Measures

Teams track what is easy to track. Deployment frequency. Bug counts. Page load speed. Those are useful numbers, but they do not tell the full story of what a customer experiences.

What does not get tracked is the drop-off on the product page that happens specifically on iPhone 14 in portrait mode. The checkout abandonment that spikes on Safari because a form validation message appears behind the keyboard. The search results page that returns blog posts instead of products because the index was never checked after the last site update. These things do not generate an alert. They do not create a ticket. They just cost sales.
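The only way failures like these ever page someone is if someone decided the flow mattered enough to write a check for it. Here is a sketch of a critical buying flow run under a mobile device profile, with a placeholder URL and selectors, and assuming your Playwright version ships the built-in 'iPhone 14' profile (recent versions do):

```typescript
import { test, expect, devices } from '@playwright/test';

// Emulate a real mobile profile: viewport, user agent, touch.
// The URL and selectors below are placeholders for your own store.
test.use({ ...devices['iPhone 14'] });

test('a customer can reach checkout on a phone', async ({ page }) => {
  await page.goto('https://example-store.com/products/sample');
  await page.getByRole('button', { name: /add to cart/i }).click();

  await page.goto('https://example-store.com/cart');
  await page.getByRole('link', { name: /checkout/i }).click();

  // If this never loads, the failure raises an alert instead of
  // silently costing sales until someone notices the conversion dip.
  await expect(page).toHaveURL(/checkout/);
});
```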

The brands that take this seriously are the ones that build a layer of human verification into their process, not as a replacement for automation but alongside it. They understand that tools help you build faster and that speed creates new obligations, not fewer.

Moving Fast and Verifying As You Go

The solution here is not to slow down. Nobody is going back to six-month shipping cycles, and that is fine. The answer is to stop treating delivery as the finish line.

Shipping is the beginning of a customer’s experience, not the end of your team’s responsibility. When you use tools that accelerate output, you need to ask what happens to the verification step that used to live inside the slower process. If you have removed it, you have not eliminated that work. You have just moved it onto your customers, who will do it for you and often leave without telling you what they found.

Every new feature added without real-world testing is a bet that nothing broke. Every deployment made without checking the critical buying flows across devices is a bet that everything still works. Most of the time those bets pay off. When they do not, the cost is not a bug report. It is a lost sale, a lost customer, and a piece of trust that does not come back.

The tools are not going away. They should not. But the speed they give you is only an advantage if the experience you are shipping is one that customers can actually use.
