AI and accountability
Generative AI + lip service to guard rails = instant free content.
The brutal reality is that content is a commodity.
Content marketing is getting hit particularly hard, with AI-generated video, images, podcasts, and articles everywhere.
But I think that businesses are focused on the wrong outcome. With the exception of publishers, the organizational goal is not to “produce content,” free or otherwise. The organizational goal is to sell a product or service, and content needs to support that goal.
Too many organizations are leaning into “instant free content” as their goal.
This is not an arms race. The winner is not the organization that produces the MOST content. The race is to produce content that people USE. You can get good results from AI, especially for well-understood problems. The problem arises with nuance and edge cases because genAI gives you the average of its training data.
But don’t take my word for it. I asked ChatGPT to explain the downside of using AI to generate content.
To summarize:
- If your content is currently below average, AI is an improvement.
- If your content needs to be above average, AI will not get you there.
People are mistaking form for function. AI can generate something in the general shape of a legal brief, but will create bogus citations. A lawyer’s job is not to create something that looks like a legal brief. Their job is to create an actual legal brief with actual legal arguments.
As an author, you are accountable for your work. If you produce that content for an employer, the employer is accountable (and liable) for your content. If the spell-checker doesn’t catch a spelling error, that doesn’t make the error OK. Using AI doesn’t excuse you from getting the legal citation right in a brief, or ensuring that your image doesn’t have six-fingered human hands, or verifying that the machine translation doesn’t have howlers. Until we resolve the tension between AI-generated inaccuracies and author accountability, we’re going to have issues.
We have largely come to terms with this conflict in machine translation. We understand that instant translation of a website probably means an error or two, but we are willing to trade accuracy for speed.
But now AI is promising faster and cheaper content. And it’s just not true. AI does well at synthesizing and summarizing, but it doesn’t extrapolate well. So when I ask AI to refactor a white paper into a case study, it creates something that has the form of a case study and stuffs in the content of the white paper. That largely works.
But try asking it to create a white paper from a presentation. What you get is a lot of filler words and not a lot of content, because the original slide deck doesn’t have detail. You might do better if you record someone delivering the presentation. Now we’re back to having complete content and just refactoring that content into a different format.
Eventually, we will put generative AI in its place as a useful tool, just like a spell-checker, desktop publishing software, or any other innovation that has changed the process of content creation.
But right now, we are in the middle of the hype, where the key metric is “how heavily are you using AI?” Instead, we need to evolve to “how efficient is your content production process and how good is your deliverable?”
Special thanks to John Collins for helping to clarify my argument.
Any errors ~~should be blamed on AI~~ are mine.
Much more on AI issues in these articles:
- JP Holecka: https://jpholecka.substack.com/p/bad-math-why-humans-might-be-rejecting?r=6g6e8&utm_medium=ios&triedRedirect=true
- Cory Doctorow: https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington
- John Collins: https://newsletter.collinscontent.com/posts/who-has-the-authority-to-stop-content-at-your-company
"*" indicates required fields
