If you present someone else’s work as if it were your own, no one will respect you, and you won’t even respect yourself. If someone is paying you to do thoughtful work, and they come to believe you are not thinking after all, they will lose their enthusiasm for paying you. If you send someone a lot of data and ideas, and it comes to light that all you did was write a little prompt and then paste the results into a file, then at least one of two things will happen: they will like the results and realize that AI can replace you, or they won’t trust the results and feel that you are spamming them.
In this modern world I find I am developing a new reflex. My first impulse, when someone I don’t know well shares any report, analysis, or apparently thoughtful writing, is to not believe it’s real. I am automatically discounting the value of work from people who lack a reputation for original thinking.
And I bet you are doing the same thing, aren’t you? We are already awash in a world of deceptive automated communication. You are already far too smart to fall for typical email spam or text messages from “Diana” asking you to “get tacos” with her and then apologizing for texting the wrong number. (That scam doesn’t even make sense. When someone gives me his number, the very first thing I do is text him “its me,” and when he responds “yep” I add him as a contact. Nobody looks up someone else’s number out of the blue and sends them a taco feasting request as the first message ever.)
Even if you are an ardent AI fanboy, wouldn’t you prefer to prompt AI for yourself instead of reading someone else’s article generated from a prompt? “Write an essay about AI that would impress a credulous person. Include bullet points or emojis or whatever. You know what to do.”
If you are too young to know this, I’ll tell you: when web search engines first came out, no one was saving and sharing web search pages with each other. If they shared anything, they would share the query: the prompt. They’d share insights on tricks and tips for Googling, not list all the hits that came back. The result of a Google search is transient, ephemeral, not something that anyone should publish in a book or use as a social media post. I think AI is best used like that: as a personal tool that tells you things you don’t share directly with other people.
Even if you think AI produces consistently good work, you are playing with fire if you let it infect your work. The “fire” is the way other people will begin to assume that all of your genuine work is actually the product of AI. Thus using AI injudiciously could taint your reputation for anything else you do.
Maybe you want to shout back “but James, I am partnering with AI! I’m in charge! They’re really my ideas, and the AI is just wordsmithing them!” My response is: Good luck expecting anyone to believe that.
Is it true that you are the senior author of your “AI-powered” writing? I can’t know unless I watch you do the work. But I’m not going to spend the time to do that. Only if you have a strong reputation for original work will I be willing to give you the benefit of the doubt. And I believe I’m not alone. I believe you will come to have the same attitude as I do, and be equally suspicious of other people’s work, whether you admit it or not. In your heart you know I’m right.
Some people say “AI will not replace you. Someone using AI will replace you.” I don’t believe that is true, but for the sake of argument, let’s say that it is. What would happen next? After the world is full of people using AI to do their work, how do humans differentiate themselves? If we can’t differentiate ourselves, then the cost of labor will plummet. If the only way we differentiate ourselves is to be clever about using AI, how will anyone know? AI produces so much slop that it’s practically impossible to review and vet it.
You know what, I do use AI in my work. But here are my rules:
- In general, I treat AI as “automated irresponsibility.” I think of it as a precocious child that may have good ideas, but is totally unreliable.
- I never have AI write any text that goes out under my name. Not one sentence. Ever. I don’t even turn on autocorrect on my phone, for fear it will insert a word I didn’t intend.
- I never rely on an AI-based tool that I do not know to have been systematically and deeply tested. Especially if the tool has to process large amounts of data to give me answers that are difficult to independently verify.
- I never rely on an AI-generated summary of a larger text. (And yes, I have been burned by a client who used a tool to summarize my work and came to fundamentally wrong conclusions about my beliefs based on a summary that was 20% wrong.)
- I allow myself to use GPT queries and deep research tools to explore facts and ideas as a first cut. However, I guard against the probability of AI bias by using other sources outside of AI, and I guard against hallucinations by reviewing every source for myself.
- I allow myself to use a GPT to analyze and critique my own work. This is safe because even if it is wrong in its critique, it might still identify a weakness that I can address.
- I allow myself to use a GPT to make prototypes and throwaway versions of tools. This is fast and fairly safe (although there is a growing danger of hackers poisoning us with malware in libraries that GPTs hallucinate into existence).
I want there to be nothing in my “I” but me.
Postscript
I asked ChatGPT to summarize the post above. Here is what it produced:
The author argues that ~~overreliance~~ on AI for creative or intellectual work undermines credibility, trust, and self-respect. If people realize you’re using AI to generate content ~~without meaningful input~~, they’ll either feel deceived or conclude you’re replaceable. In a world flooded with AI-generated content, skepticism grows—people instinctively doubt the authenticity of others’ work unless they have a proven reputation for original thinking. Sharing AI outputs as if they were your own ideas erodes your ~~uniqueness~~ and may lead others to question all your future work. The author uses AI cautiously, treating it as a flawed tool for exploration or prototyping, but never for producing final work under their name. Their guiding principle: ~~“There should be nothing in my ‘I’ but me.”~~
Like most things GPTs produce, it’s not terrible. Then again, a gourmet meal isn’t terrible just because there is a hair in it; we still consider hair in our food unacceptable. This is why I don’t use AI summaries.
The parts I have crossed out were where ChatGPT got it wrong.
- I’m not arguing that overreliance undermines credibility, etc., but rather that any reliance does so (regardless of the merits of the work).
- I’m not warning that people might discover you are using AI “without meaningful input” (a phrase that does not appear in my essay), but rather that they might discover you are using it at all to produce your work.
- When you use a GPT to write for you and it’s not clear what is you and what is AI, it’s not your uniqueness that is primarily threatened; it’s your credibility. Uniqueness is threatened, too, but that has nothing to do with presenting the work as your own; it will suffer even if you are totally open about using AI.
- ChatGPT’s choice of my guiding principle is more what I would call a summarizing thought. For a guiding principle it should have chosen: “Think of AI as a precocious child that may have good ideas, but is unreliable.” But if it were truly insightful, it could do better. Here is a guiding principle I would take away from this essay: “People judge you not only by the work you show them, but by their beliefs about what you didn’t show them. So protect your reputation, lest AI seem like a better bargain than you.”