When Software Testing Became a Commodity
Here’s an uncomfortable truth.
Many organisations discover it too late.
The £2 million you believe you saved by commoditising testing is often costing you far more in customer churn, emergency fixes, and lost momentum.
Most of us have seen this play out. A company decides it has “cracked the model” by moving testing to a low-cost offshore provider or test factory promising dramatic savings. Six months later, their flagship product is bleeding customers, support queues have exploded, and senior engineers are firefighting production issues instead of building new capability.
On paper, the operation looks efficient.
Test cases are executed.
Reports are produced.
SLAs are met.
In reality, quality is quietly eroding.
The mistake is simple but costly: treating software testing like manufacturing widgets, when it’s actually closer to commissioning bespoke furniture.

The logic is appealing. Testing looks repeatable. You define test cases, execute them, record results. Surely this can be industrialised? Surely execution can be separated from thinking and sourced at the lowest possible cost?
This is what James Bach and Michael Bolton describe as the assembly line fallacy — the belief that skilled cognitive work can be reduced to factory processes without loss of value.
Organisations fall into this trap because activity is easy to count. Executed tests, completed scripts, passed checks — these feel measurable and controllable. Cost reduction looks immediate and visible. The risks, meanwhile, remain abstract and comfortably out of sight.
Until they aren’t.
Testing appears simple only when you reduce it to mechanics. The moment you recognise it as investigation — as the exploration of risk in a changing system — the factory model starts to break down.
Let’s talk about what this approach really costs.
The Consortium for Information & Software Quality estimates the cost of poor-quality software in the United States alone to be close to £1.9 trillion annually. For individual organisations, Cost of Poor Quality commonly sits around 20% of sales revenue. A £100 million business may be quietly burning £20 million a year through rework, customer attrition, and emergency remediation.
What’s striking is how often “cost-saving” testing strategies contribute to this problem. Industry studies consistently show high dissatisfaction rates with offshore QA models, with many organisations reporting cost increases rather than reductions once hidden factors are accounted for.
Those hidden costs compound quickly:
You never budgeted for the management overhead. Time zone gaps, rework cycles, clarification meetings, and escalation paths consume senior engineering and leadership attention.
You never budgeted for the communication tax. Misunderstood requirements, lost context, and ambiguous “pass” results delay feedback and mask emerging risks.
You never budgeted for the production firefighting. Scripted testing misses nuanced failure modes. Defects escape. Development capacity is diverted from growth to recovery.
The quality debt compounds, too: each missed risk makes the next one more expensive to address.
The fundamental error is treating testing as execution rather than investigation.
Skilled testers don’t just check whether software behaves as specified. They understand the business domain well enough to recognise when “working as designed” still fails customers. They spot patterns of risk introduced by architectural decisions. They question assumptions, explore edge cases, and notice when behaviour looks plausible but wrong.
This is not checklist work. It requires judgement, context, and critical thinking.
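To make the distinction concrete, here is a deliberately small sketch in Python. The fee-splitting function and its “spec” are hypothetical, invented purely for this illustration; the point is that the scripted check passes while a simple investigative question exposes behaviour that is plausible but wrong.

```python
# A minimal sketch of the gap between checking and investigating.
# The function and its "spec" are hypothetical, invented for illustration.

def split_fee(total_pence: int, ways: int) -> int:
    """Per the (hypothetical) spec: each payer owes an equal share,
    rounded to the nearest penny."""
    return round(total_pence / ways)

# The scripted check: execute the documented case, record a pass.
assert split_fee(1000, 4) == 250  # "working as designed"

# The investigation: ask a question the spec never raised.
# Does the money collected ever drift from the amount billed?
drift = [
    (total, ways)
    for total in range(1, 200)
    for ways in range(2, 7)
    if split_fee(total, ways) * ways != total
]
print(f"{len(drift)} of {199 * 5} cases collect the wrong amount,")
print(f"e.g. {drift[0][0]}p split {drift[0][1]} ways")
```

The scripted check produces a green result either way. Only the question nobody scripted reveals that, across most billing scenarios, the sums quietly fail to add up.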
A cabinet-maker analogy helps. Flat-pack furniture is cheap, standardised, and sufficient for many situations. But when you have an awkward alcove, a curved wall, or a period property with its own constraints, you need a craftsperson who understands both the materials and the environment.
Software testing is the awkward alcove. Every system is different. Every organisation has unique risks. You cannot template your way to confidence.
Returning to the earlier example, a forensic look at the supposed “60% cost saving” tells a different story. The £2 million saved on testing labour produced well over £10 million in direct and indirect costs, before even counting morale, delayed releases, and reputational damage.
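To see how such numbers arise, here is a back-of-envelope sketch. The £2 million headline comes from the example above; every hidden-cost figure below is a hypothetical assumption, included only to show the shape of the arithmetic, not a claim about any real engagement.

```python
# A back-of-envelope model of how a headline saving can invert.
# All hidden-cost line items are hypothetical, illustrative assumptions.

headline_saving = 2.0  # £m per year saved on testing labour

hidden_costs = {  # £m per year, all illustrative
    "management overhead": 0.8,      # coordination, escalation, time zones
    "communication tax": 0.7,        # rework cycles, ambiguous "pass" results
    "production firefighting": 3.5,  # escaped defects, diverted engineers
    "customer churn": 5.5,           # lost recurring revenue
}

total_hidden = sum(hidden_costs.values())
net_position = headline_saving - total_hidden
print(f"Hidden costs:  £{total_hidden:.1f}m per year")
print(f"Net position: £{net_position:+.1f}m per year")  # £-8.5m here
```

Change the assumptions however you like; the shape of the result rarely changes. The saving is a single visible number, while the costs are several invisible ones that each dwarf it.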
This is the risk of optimising for efficiency while being blind to effectiveness.
The most dangerous aspect of this failure mode is that it looks successful right up until it isn’t.
Dashboards are green.
Test reports are complete.
SLAs are met.
Meanwhile, customers leave quietly, engineers lose momentum, and competitive advantage erodes.
So the leadership question isn’t complicated:
“Are you measuring testing efficiency, or are you managing business risk?”
When you measure what’s easy to count, you’re counting on expensive failures.
Ask your test leadership: “Can you articulate the three biggest business risks our testing strategy is managing right now?” If the answer is about test coverage percentages or execution velocity, you have a test factory measuring activity. If the answer is about specific business consequences of potential failures, you have risk intelligence. Know the difference; it’s worth millions.
Because you cannot test quality into a product through volume. You cannot script your way to insight. And you cannot outsource critical thinking to the lowest bidder and expect anything other than expensively documented failure.
This isn’t an argument against offshore partnerships, automation, or efficiency. It’s an argument against mistaking cost reduction for value creation.
Organisations that succeed treat testing as a professional capability, not a commodity. When offshore teams work well, it’s because they’re integrated as skilled practitioners — invested in domain knowledge, business context, and shared accountability for outcomes.
The real choice isn’t between onshore and offshore. It’s between buying test execution and investing in risk intelligence. The question isn’t whether you can afford skilled testers; it’s whether you can afford testing without skill.
When you design your testing strategy, ask yourself:
Are we building a test factory, or developing insight?
Are we paying for activity, or for confidence to release?
Are we optimising cost, or protecting revenue?
Because each of those choices is often the difference between a short-term saving and a very expensive mistake.