AI cannot behave responsibly. Only natural persons can. I have previously written about this:
In a vital sense, humans have a “moat” about them, with respect to human society, that no AI can cross. AI can pretend to be a part of society, it can interact with human society, but it cannot actually be a part of society. In the world of business, this matters. Here’s how it matters: Those of you who’ve read my self-education book Secrets of a Buccaneer-Scholar are aware that I dropped out of high school at 16 and became a video game programmer. But my employer had a problem: a 16-year-old cannot enter into a binding contract. The solution was for me to become an “emancipated minor” in the state of California. That legal status allowed me to participate fully in the economic side of society, because I could be held responsible for a breach of contract. There is no such legal status for an AI tool.
Whatever services define a business, there is a necessary web of responsibilities required to make those services safe and reliable. In exactly the same way that removing the rule of law would destroy the conditions needed for a robust economy, disclaiming all responsibility also makes business impossible. Now that AI is widespread, we must be crystal clear about this.
To this end, I have created a new set-piece reference document. It’s a concise description of the logic behind responsible work, and specifically how that logic relates to the use of AI. You can download the nice version of it here, with footnotes.
Here is the text:
Principles of Responsible Work
by James Bach, Jon Bach, and Michael Bolton
- Every non-trivial business comprises some set of services that enable it to function. Examples include sales, accounting, R&D, customer support, etc. These services must be sufficiently reliable or else the business will collapse.
- Every service entails the risk of failure. When failures occur, the business must be able to recognize them and recover. In regulated industries, risk management may be subject to specific process mandates.
- A “responsible person” is a natural person in a business who is reasonably competent, prepared, and accountable for some service that sustains or defines that business. No matter what tools or processes are used within a business, someone must be responsible for them. To bear responsibility, a person must have sufficient capacity. For instance, neither a child nor a tool (such as AI) has the capacity (either legally or socially) to bear responsibility. Even adult humans may lack capacity, such as when an airline pilot has had insufficient sleep or is under the influence of drugs.
- A “responsible service” is one that is performed in good faith by a responsible person. This may include interpreting and following procedures, improving skills, anticipating problems, and reporting to relevant authorities or clients, both inside and outside the business.
- Responsible services may incorporate any manner of tool, as long as the person performing that service can operate the tool safely and legally. The effort and skill required to operate a tool safely increase with the complexity of the tool, the obscurity of its output, and the amount of output produced per unit of time, and decrease with the baseline reliability of the tool when performing that task.
- Responsibility can be taken, shared, declined, or delegated, as long as there is a clear and reasonable protocol for doing so. In the absence of such a protocol, the business is vulnerable to accusations of negligence. This is a principal topic of common law and the law of contracts, although specific laws and regulations may constrain how a business can distribute responsibility.
- Therefore, to avoid inefficiency, poor quality, and legal trouble, businesses must develop and maintain clear lines of responsibility, assure competence and readiness among responsible persons, put reliable tools in place, and maintain appropriate oversight of any delegated responsibility.
Responsible Operation of AI
- AI cannot bear responsibility. AI is not a responsible person, and it would be meaningless to speak of a tool that operates in “good faith.” Therefore, it cannot provide a responsible service, nor can responsibility be delegated to it.
- An “AI agent” is always a tool operated by a natural person, irrespective of whether the person is monitoring it in real time.
- Thus, the operator of an AI agent always bears responsibility for its work.
- The responsible operator cannot merely prompt and pray; they must assure adequate quality of the work.
Therefore, the operator must…
- be sufficiently skilled in the use of the AI tool.
- be sufficiently prepared to operate the tool in that context.
- be sufficiently alert to risks, anomalies, or defects that may occur in the work.
- feel empowered (and actually have the power) to reject or remediate any work done by AI. Otherwise, the operator becomes a scapegoat, a “moral crumple zone.”
- avoid cognitive overload, excessive cognitive debt, and cognitive surrender.