Satisfice, Inc.

Software Testing for Serious People

So, You “10x’d” Your Work…

Published: February 8, 2026 by James Bach

There are lots of ways to “10x” things.

  • Drive 300 miles per hour down a busy city street.
  • Eat 15,000 calories in one meal.
  • Own ten dogs.
  • Have ten children.
  • Make hundreds of friends.

All these things have fairly obvious consequences and side effects. Even having lots more friends would force you to embrace a much shallower notion of friendship than you could otherwise enjoy. So, why is it that when AI fanboys speak of “10x-ing” their productivity, they never say a thing about the side effects?

It’s because they are speaking and behaving recklessly.

Increasing productivity by multiples is already an extraordinary claim. Doing so without incurring great risks and repercussions is simply unbelievable. Unless, of course, the claimant has low standards: if you don’t care about quality, you can achieve any other goal you like.

When developers ask AI to write code for them, that may happen very fast. But some things are still slow:

  • the developer’s process of learning about what they are building
  • the developer’s process of deciding what to do next
  • the product owner’s processes of learning and deciding
  • the end users’ processes of learning and deciding
  • the time it takes for an elusive bug to reveal itself during normal use

If someone claims to have 10x’d their product development, my inquiry would look something like this:

  1. Tell me your test strategy. (Oh, you don’t know, because you asked the AI to test everything for you.)
  2. I would ask your AI for its test strategy, then. (It produces a lot of text that may or may not be what it actually did.)
  3. I would ask your AI for its analysis of the weaknesses of that strategy. (It produces text that may or may not represent important weaknesses.)
  4. I would review its claims about the risk coverage and product coverage achieved by its own checks.
  5. I would learn enough about the product to make my own analysis of product risks and how well they are addressed by the test strategy. (This would either reveal important holes in the testing or it wouldn’t. Either way, I could begin to take responsibility for the strategy.)
  6. I would acquire and review the chat transcripts that show how the product came together. (This would give me clues about weaknesses in the development approach and potential undiscovered bugs.)
  7. In the course of doing this, I would review test artifacts and results. (My goal would be first to make sense of them, and then to evaluate them against the claims the AI made about them and also against the overall business risk.)
  8. I would especially focus on security problems and user experience problems. (These are common in slop-coded products.)
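Step 4 can be made concrete. An AI's report of its test results is itself just a claim, so a responsible tester would diff that claim against an independent re-run of the same checks. The sketch below is purely illustrative – the function and data names are hypothetical, not part of any real tool:

```python
# Illustrative sketch only: all names here (discrepancies, report, rerun)
# are hypothetical. The point is that an AI's *report* of test results
# is a claim to be verified, not a fact to be accepted.

def discrepancies(claimed, observed):
    """Compare a claimed {test_name: status} report against statuses
    observed in an independent re-run; return a note for every mismatch."""
    notes = []
    for test, status in sorted(claimed.items()):
        actual = observed.get(test)
        if actual is None:
            notes.append(f"{test}: claimed '{status}' but never actually ran")
        elif actual != status:
            notes.append(f"{test}: claimed '{status}', observed '{actual}'")
    for test in sorted(observed):
        if test not in claimed:
            notes.append(f"{test}: ran, but missing from the claimed report")
    return notes

# Example: the AI reports two passing tests; the re-run tells another story.
report = {"test_login": "passed", "test_export": "passed"}
rerun = {"test_login": "passed", "test_export": "failed"}
print(discrepancies(report, rerun))
```

Nothing about this check requires trusting the AI; the observed results come from a run the human controls, which is the whole point.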

Steps 4-8 cannot be significantly accelerated by AI, because the whole point is to achieve human understanding of how we know that the goal was achieved. That understanding cannot be manufactured outside a human mind and imported. Instead, some actual person must construct it. And that person must be credible.

We have all seen AI confidently assert things that are not true. I’ve seen claims that tests have been run, even when they weren’t; bugs that were fixed, even though they weren’t; products that are working, even though they don’t work at all. This is why we can’t just take AI’s word for it. This is why I say that “AI” stands for “automated irresponsibility.”

Testing is really not about quality; it’s about responsibility. What does your “10x” productivity mean, and how do you know?

Filed Under: AI and Testing, Rapid Software Testing Methodology, Test Strategy, Testing Culture

Comments

  1. Shailesh Shetty says

    9 February 2026 at 12:29 am

    Use separate AI tools (not the AI that created the app) to generate standalone automated scripts that can be run and monitored independently by other tools such as Selenium/Playwright.

    To make it even more robust, also provide business-relevant test cases to automate that you’d run yourself (based on the tester’s domain experience and exploratory skills).

    All said and done, in some chatbots created by AI, I’ve come to realise through thorough exploratory testing that you will find scenarios where everyone agrees there is a bug (and it may be intermittent) but not many would know how to fix it. And the typical conclusion is that that’s just how LLMs sometimes behave (hallucination), so you document it and move forward with an updated disclaimer.

    And that non-deterministic behaviour of such apps is what scares me most about the future of software development, which for most of my career at least I assumed to be based on 1s and 0s.

    [James’ Reply: You obviously have experience with this. In fact, I cover these issues in my class.]

  2. Greg says

    9 February 2026 at 1:26 am

    It’s a bit depressing, seeing the entire industry repeat the same mistake of 15 years ago with test automation tools: confusing the tool for the work.

  3. Kimball Robinson says

    20 February 2026 at 3:40 pm

    Sapient testing, humans in the driver’s seat, keep your thinking hat on, pay attention to context, assess risk.

    In the online debates, I see arguments about how best to use AI to code. I suggest you consider the possibility that EVERYONE is right – for now. Some CAN code 10x. And nobody will care, in theory. It’s like having a rough-hewn Excel sheet that gets the job done. Those junk systems will talk to the ones done better. We will live in a dual world where everyone is right – given a context. The nice thing about 10x speed is that it will accelerate the visible cost of bad code (if indeed it is bad) over time, and it will be a touch easier for companies to understand that the real cost of software is in change and maintenance – the reason it’s “soft” and not “hardware” in the first place.

    In fact, as text prediction engines, LLM AIs are literally only speculating, 100% – pattern-predicting based on context/input.

    Software that is more cookie-cutter WILL be faster, or 10x. Software that involves more complex or original problem solving will take more care. No system is entirely in one domain, and software companies in the former domain … will find a different pricing model.

  4. Michael Bolton says

    8 March 2026 at 4:17 am

    It seems to me that a few lines of bad and untested code have the potential to 1000x the loss of thousands of dollars, the demise of a company’s reputation, or the attention of the regulators.


