10 reality checks for AI-enabled content ops
Hoping AI will solve all your content problems? Here are 10 reality checks for successful AI-enabled content operations.
What does it actually mean to govern your content in the age of AI, and who’s really in control? In this episode, Sarah O’Keefe sits down with Patrick Bosek, CEO of Heretto, to unpack why the quality, accuracy, and structure of your content may be the most critical factors in what your users experience on the other side of an AI model.
Patrick Bosek: In today’s world, you don’t have 100% control. There are a couple of different places where this needs to be broken up. One is the end user: what they physically get and what control they have versus what control you have. Then, there’s what control you have over how the AI model will behave based on your information and your inputs. Whether that model is public, like a user accessing your documentation through Claude Desktop, or private, like a user accessing your documentation through your app or website, the governance piece comes down to what control you have immediately before the model. And that breaks down into a couple of things: completeness, accuracy, and structure of the content.
Good content fundamentals have been the foundation of effective product content for decades, and those same principles are exactly what make content AI-ready today. In this episode, Bill Swallow and Alan Pringle explain how attending to your hierarchy of content needs is the key to AI success.
Alan Pringle: Right now, AI is not going to fix bad content problems. It is going to regurgitate that bad information, giving your end users information that’s flat out wrong. If your content at the basic source level is wrong, your AI by extension is going to be wrong. And that is the unglossy, unvarnished, hard truth that still isn’t sinking in across the corporate world the way it should.
Bill Swallow: It really does come back to the fact that, despite the world changing on a day-to-day basis, the fundamentals have not changed.
In this webinar, Emilie Herman, Director of Content Operations at the Financial Accounting Foundation (FAF), shares lessons from her career journey. Through the lens of publishing services and large-scale content workflows, Emilie shows how the shift from manual processes to automation mirrors what’s happening with AI, and how these adaptation techniques apply to your content ops career.
It’s isolating when you feel like it’s all on you to figure out how to reinvent your career. Reach out and talk to people. It’s nice to make a human connection, which is very important for getting past AI, but also for seeing what other people are doing. Collaborate, talk things through, and acknowledge that everybody’s trying to figure things out. People want to experiment! There’s strength in numbers. If you have a manager, mentor, or someone who can help put you in the room to be part of the discussion, you feel empowered to take control of your destiny.
— Emilie Herman
Ready to futureproof your content operations? These upcoming events have the insights you’re looking for!
In this episode, Sarah O’Keefe and Alan Pringle explore how AI transforms content delivery from static documents into dynamic, consumer-driven experiences. However, the need for human-led governance is critical, and Sarah and Alan explore issues of accuracy, accountability, governance, and more. They challenge organizations to define AI success by its ability to deliver accurate, high-impact outcomes for the end user.
Sarah O’Keefe: The metrics that are being used to measure the success of AI are all wrong. We should be measuring the success of various AI efforts based on, “Are people getting what they need? Are they having a successful outcome with whatever it is that they’re trying to do?” The metric we actually seem to be using is, “What percentage of your workflow is using AI? How many people can we get rid of because we’re automating everything with AI?” It’s the wrong metric. The question is, how good are the outcomes?
When we first shared PDF files online instead of printing them decades ago, did accuracy in those PDF files improve with the shift from print to digital? And when we later published that same content as web pages, did old information become current because of the shiny new delivery format?
As AI adoption accelerates, accountability and transparency issues are accumulating quickly. What should organizations be looking for, and what tools keep AI transparent? In this episode, Sarah O’Keefe sits down with Nathan Gilmour, the Chief Technical Officer of Writemore AI, to discuss a new approach to AI and accountability.
Sarah O’Keefe: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do.
Nathan Gilmour: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool. But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, they’re largely black boxes. We want to bring clarity to these black boxes and make them transparent, because organizations do want to implement AI tools to offer efficiencies or optimizations within their organizations. However, information security policies may not allow it.
Generative AI + lip service to guard rails = instant free content.
The brutal reality is that content is a commodity.
We’re ready to bring you more industry-leading content ops insights in 2026! Check out these upcoming events.
Thanks for joining us this year! Through webinars, podcasts, LearningDITA training, and more, we’re grateful to be part of your content ops journey.
Will cheap content cost your organization more in the long run? In this webinar, host Sarah O’Keefe and guest Dawn Stevens share how poor workflows, inaccurate source data, and the commoditization race can undermine both product quality and brand trust. Sarah and Dawn also discuss why strategic staffing and mature content ops create the foundation your AI initiatives need to deliver reliable content at scale.
Sarah O’Keefe: I write content that’s great for today. Tomorrow, a new development occurs, and my content is now wrong. We’re down the road of “entropy always wins.” We’re heading towards chaos, and if we don’t care for the content, it’ll fall apart. So what does it look like to have a well-functioning organization with an appropriate balance of automation, AI, and staffing?
Dawn Stevens: I think that goes back to the age-old question of, “What are the skills that we really think are valuable?” We have to see technical documentation as part of the product, not just supporting the product. That means that we, as writers, are involved in all of the design. As we design the documentation, we’re helping design the UX.
What happens when AI accelerates faster than your content can keep up? In this podcast, host Sarah O’Keefe and guest Michael Iantosca break down the current state of AI in content operations and what it means for documentation teams and executives. Together, they offer a forward-thinking look at how professionals can respond, adapt, and lead in a rapidly shifting landscape.
Sarah O’Keefe: How do you talk to executives about this? How do you find that balance between the promise of what these new tool sets can do for us, what automation looks like, and the risk that is introduced by the limitations of the technology? What’s the roadmap for somebody who’s trying to navigate this with people who are all-in on just getting the AI to do it?
Michael Iantosca: We need to remind them that the current state of AI still carries with it a probabilistic nature. And no matter what we do, unless we add more deterministic structural methods to guardrail it, things are going to be wrong even when all the input is right.
Your organization’s content debt costs more than you think. In this podcast, host Sarah O’Keefe and guest Dipo Ajose-Coker unpack the five stages of content debt from denial to action. Sarah and Dipo share how to navigate each stage to position your content—and your AI—for accuracy, scalability, and global growth.
The blame stage: “It’s the tools. It’s the process. It’s the people.” Technical writers hear, “We’re going to put you into this department, and we’ll get this person to manage you with this new agile process,” or, “We’ll make you do things this way.” The finger-pointing begins. Tech teams blame the authors. Authors blame the CMS. Leadership questions the ROI of the entire content operations team. This is often where organizations say, “We’ve got to start making a change.” They’re either going to double down and continue building content debt, or they start looking for a scalable solution.
— Dipo Ajose-Coker
As a purveyor of high-stakes technical content, I am watching the rise of AI with alarm. Our interest in automation and new technologies is on a collision course with our mandate to deliver timely, accurate information. I am not the only one who is concerned; many people are writing on this topic. (Here’s a recent post from Michael Iantosca.)
How can global brands use AI in localization without losing accuracy, cultural nuance, and brand integrity? In this podcast, host Bill Swallow and guest Steve Maule explore the opportunities, risks, and evolving roles that AI brings to the localization process.
The most common workflow shift in translation is to start with AI output, then have a human being review some or all of that output. It’s rare that enterprise-level companies want a fully human translation. However, one of the concerns that a lot of enterprises have about using AI is security and confidentiality. We have some customers where it’s written into our contract that we must not use AI as part of the translation process. That could apply to specific content types only, but they don’t want to risk personal data being leaked. In general, though, the default service now for what I’d call regular, common translation is post-editing, or human review of AI content. The biggest change is that this has really become the norm.
—Steve Maule, VP of Global Sales at Acclaro
AI, self-paced courses, and shifting demand for instructor-led classes—what’s next for the future of training content? In this podcast, Sarah O’Keefe and Kevin Siegel unpack the challenges, opportunities, and what it takes to adapt.
There’s probably a training company out there that’d be happy to teach me how to use WordPress, but I didn’t have the time or the resources, so I just did it on my own. That’s one example of how you can use AI to replace some training. And when I don’t know how to do something these days, I go right to YouTube and look for a video that teaches me how to do it. That said, there are some industries where you can’t get away with that. Healthcare is an example: you’re not going to learn to perform brain surgery that anyone could rely on from AI or a YouTube video.
— Kevin Siegel
In this episode of the Content Operations podcast, Sarah O’Keefe and Bill Swallow unpack the promise, pitfalls, and disruptive impact of AI on multilingual content. From pivot languages to content hygiene, they explore what’s next for language service providers and global enterprises alike.
Bill Swallow: I think it goes without saying that there’s going to be disruption again. Every single change, whether it’s in the localization industry or not, has resulted in some type of disruption. Something has changed. I’ll be blunt about it: in some cases, jobs were lost, jobs were replaced, new jobs were created. For LSPs, I think AI is going to be another shift, the same shift that happened when machine translation came out. LSPs had to pivot how they approach their bottom line with people. GenAI is going to take a lot of the heavy lifting off of the translators, for better or for worse, and it’s going to force a copy-edit workflow. I think it’s really going to be a model where people are training and cleaning up after AI.
Your customers expect intelligent, AI-powered experiences. Is your content strategy ready for an AI-driven world? After a popular panel at ConVEx San Jose, the team at CIDM brought the conversation online in this webinar.
AI is going to require us to think about our content across the organization, across the silos, because at the end of the day, the AI overlord, the chatbot is out there slurping up all this information and regurgitating it. The chatbot doesn’t care that, for example, I work in group A, Marianne’s in group B, and Dipo’s in group C, and we don’t talk to each other. The chatbot, the world, the consumer, sees us all in the same company. If we’re all part of the same organization, why shouldn’t it be consistent?
— Sarah O’Keefe
In this episode of our Let’s Talk ContentOps webinar series, Scott Abel, The Content Wrangler himself, talks about the future of content operations in the age of artificial intelligence. You may know Scott from his work as a consultant, conference presenter, and talk show host, but in this session, we turn the spotlight back on Scott and ask him what HE thinks about the future of content ops.
Viewers will learn how AI is reshaping content operations.
The tcworld/tekom conference took place in Stuttgart, Germany, from November 5–7. The event is the largest technical communication conference in the world, typically with 2,500–3,500 attendees.
In episode 169 of The Content Strategy Experts podcast, Sarah O’Keefe and special guest Sebastian Göttel of Quanos engage in a captivating conversation on generative AI and its impact on technical documentation. To bring these concepts to life, this English version of the podcast was created with the support of AI transcription and translation tools!
Sarah O’Keefe: So what does AI have to do with poems?
Sebastian Göttel: You often have the impression that AI creates knowledge; that is, that it creates information out of nothing. And the question is, is that really the case? For scholars of German literature, I think it is quite normal not only to look at the text at hand, but also to read between the lines and factor in the cultural subtext. From their perspective, generative AI actually only interprets or reconstructs information that already exists. Maybe it’s hidden, only implicitly hinted at. But it becomes visible through the AI.
Episode 169 is available in English and German. Since our guest Sebastian Göttel works on AI in the German-speaking world, we had the idea of recording this podcast in German. The English version was then assembled with AI support.
In episode 165 of The Content Strategy Experts Podcast, Sarah O’Keefe and guest Patrick Bosek of Heretto discuss how the role of customer self service is evolving in the age of AI.
I think that this comes back to the same thing that it came back to at every technological shift, which is more about being ready with your content than it is about having your content in the perfect format, system, set of technologies, or whatever it may be. The first thing that I think either of us will say, and a lot of people in the industry will tell you, is that you need to structure your content.
— Patrick Bosek
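Patrick’s advice to “structure your content” is most often put into practice with a standard like DITA, the basis of the LearningDITA training mentioned above. As a minimal sketch only, here is what a small DITA 1.3 task topic looks like; the topic id and step text are invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd">
<!-- A minimal DITA task topic. The title, prerequisite, and steps are
     explicit markup rather than formatting, so publishing tools and
     AI pipelines can reliably extract and recombine each piece. -->
<task id="reset-password">
  <title>Reset your password</title>
  <taskbody>
    <prereq>You need access to the email address on your account.</prereq>
    <steps>
      <step><cmd>Select <uicontrol>Forgot password</uicontrol> on the sign-in page.</cmd></step>
      <step><cmd>Follow the link in the reset email.</cmd></step>
      <step><cmd>Enter and confirm a new password.</cmd></step>
    </steps>
  </taskbody>
</task>
```

The point is not the specific elements but the principle: when meaning is carried by markup instead of layout, the completeness, accuracy, and structure that govern AI behavior become something you can check and enforce.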