Why Cheap Content Is Expensive and How to Fix It, featuring Dawn Stevens
Will cheap content cost your organization more in the long run? In this webinar, host Sarah O’Keefe and guest Dawn Stevens share how poor workflows, inaccurate source data, and the commoditization race can undermine both product quality and brand trust. Sarah and Dawn also discuss why strategic staffing and mature content ops create the foundation your AI initiatives need to deliver reliable content at scale.
Sarah O’Keefe: I write content that’s great for today. Tomorrow, a new development occurs, and my content is now wrong. We’re down the road of “entropy always wins.” We’re heading towards chaos, and if we don’t care for the content, it’ll fall apart. So what does it look like to have a well-functioning organization with an appropriate balance of automation, AI, and staffing?
Dawn Stevens: I think that goes back to the age-old question of, “What are the skills that we really think are valuable?” We have to see technical documentation as part of the product, not just supporting the product. That means that we, as writers, are involved in all of the design. As we design the documentation, we’re helping design the UX.
Resources
- AI and content: Avoiding disaster
- Futureproof your content ops for the coming knowledge collapse (podcast)
- Looking for a community of like-minded practitioners? Join the Center for Information Development Management (CIDM).
- If you’re looking for more great content ops insights, download the latest edition of our book, Content Transformation.
- Check out other episodes in our Let’s Talk ContentOps! webinar series.
Transcript:
Christine Cuellar: Hey everybody, and welcome to today’s show, Why Cheap Content is Expensive and How to Fix It. Today’s guest is Dawn Stevens, who’s the president and owner at Comtech Services, and our host, as always, is Sarah O’Keefe, the founder and CEO of Scriptorium. So without further ado, I’m going to pass things over to Sarah!
Sarah O’Keefe: Thanks, Christine, and hi, Dawn, welcome aboard.
Dawn Stevens: Hi, Sarah. Good to be here.
SO: I’m afraid the crazy train is how this is going to be today.
DS: Whenever we get together, right?
SO: Yeah. Well, welcome to the party. Okay, let’s dive in. I think you and I have talked publicly and not publicly about commoditization and a race to the bottom, and with AI accelerating everything, what happens when you commoditize technical content, when you go with the cheapest possible option without any attention to anything other than cost?
DS: Yeah. Well, when you commoditize content, or when you commoditize anything, ultimately, you’re turning it, in my opinion, from a strategic asset, from something that differentiates an organization, into something that’s much more generic, a product or a service that can be easily replaced or devalued in some way. So organizations ultimately see content in this situation as interchangeable, anybody can produce it, one version is as good as another. And so, they don’t see content as part of the overall value chain anymore, it’s more of an afterthought rather than an integral part of the design, the support or the brand itself.
DS: And so, what we’re seeing, I think, in the commoditization is that it relies a lot on automation, or that acceleration-of-AI aspect of it, which potentially gives the benefits of being faster and cheaper, which is, I think, part of that motivation, but it loses its brand personality. The user experience becomes more generic, more sterile, and so the voice of the organization is standardized and indistinguishable, ultimately, from its competitors. So if everybody’s commoditizing, we just have this plain vanilla documentation everywhere. I also think commoditization makes it so that expertise, of course, is undervalued. If we’re treating it just like a mechanical process that anyone can do, we, as skilled professionals, lose our influence in design and decision-making. And so, the organization is forfeiting the benefits of the strategic aspects: information architecture, reuse strategies, user research, and those types of things.
DS: And then, I think finally, the other big result that a lot of people don’t talk about with commoditization is that there’s little incentive, if you’ve commoditized, to experiment with future things: more interactive media, personalization, intelligent content, all of those trends. It’s less likely that you’re going to spend time and energy doing that innovation, and so the documentation ecosystem just stops evolving. And so, the organization can’t really keep up with the expectations that users might get from other companies who are doing those innovations, and so, again, we lose that competitive advantage.
SO: Yeah. And I want to be clear here that when we say commoditization, that is not in any way the same thing as offshoring. We have a lot of global teams now that are really good, that are producing great content and doing innovative things and all the rest of it. So while it is true that we can push things from a higher-cost country to a lower-cost country and potentially save some money, that’s entirely different from, I am going to go into whatever location and pay the lowest possible amount, because there’s nothing that differentiates person A from person B other than their raw cost. We’re just saying, “You’re a cog in the system, and if I can get you for less money, that’s great.” There’s some great talent out there all over the world, and as long as you’re being paid an appropriate wage locally… Now, India is cheaper than the US, that’s true, but “we moved this to India” is not at all the same thing as “we’ve commoditized it,” so I just want to make sure we say that explicitly.
DS: Yeah, absolutely.
SO: So we asked this question in the polls, where does your organization stand on the race to the bottom? So 11% are telling us they are all in on AI and firing everyone.
DS: Great, okay.
SO: I find that somewhat encouraging, because it’s only 11% rather than 50%.
DS: That’s true, that’s true.
SO: Because from the news, it sounds as though all the jobs are gone everywhere, if you just see what’s coming out. 27%, about a quarter, say a well-balanced approach to automation, AI, and human effort, that’s encouraging. 50% say they’re exploring and finding some opportunities. 4%, everything is lovingly handcrafted with zero automation anywhere, and 4% other. So if you look at this… Oops, the AI number went up, it’s now up to 14%, oh dear. But on the good side of it, it is not 50%, so I guess that’s somewhat encouraging.
SO: So one of the most common things that we hear in this context of commoditization is basically content is a necessary evil. We’ve got to do content, but we don’t want to, and we’re just going to… And so, here’s my question. If you say that content is a necessary evil, at the end of the day, isn’t your entire operation a necessary evil? The product is a necessary evil in order to get revenue, right?
DS: Right. I think it all… I guess the idea of necessary evil is we all would like to be independently wealthy, and so if we don’t have to do anything, then that’s the ideal Nirvana.
SO: Right. That’s the promise of AI, Dawn.
DS: Exactly. But I think whether it’s content, whether it’s the product or whatever you call a necessary evil, typically, what people are reacting to is just the frustration of, it’s taking effort, it’s taking money or time that I don’t want to give, but it’s not really a valuation of the content and the value it brings. If somebody calls something a necessary evil, they’re acknowledging… The first part is necessary, they’re acknowledging it is a necessity, but they’re not acknowledging the value that it’s potentially bringing. So to counter that, ultimately, it’s the age-old question that we always have in technical documentation: how do we prove our value? We have to reframe content as a strategic enabler, and our goal is to show that documentation’s not just necessary, but transformative.
SO: Yeah. And I think one of the hard parts about this is that there’s a lot of bad content out there, bad technical content, and so when I make the argument, or you do, or anyone else, that content is a strategic asset, it’s more like, well, content can and should be a strategic asset, but if your organization is doing a terrible job of producing the bare minimum, then, well, maybe it’s not an asset. It should be, but it isn’t. So if what they’re producing is just crap on a page, then yeah, commoditize that. So we’re faced with this fork in the road of either make it better or go whole hog into AI and just keep producing slop, as you are now. So what’s the motivation, this race to the bottom, this idea of commoditization, what’s the logic behind that?
DS: Well, I think it’s probably three or four factors here, the first certainly just being speed. I see an awful lot of organizations saying, “Well, we could release our product a whole lot faster if we didn’t have to wait for the documentation.” And some of that depends on where documentation falls in the process. If we’re developing the whole product and then throwing it over the wall and handing it to documentation to do something with, then yeah, the shorter we can make that step, the faster we can get to market. I think there are other solutions than the commoditization side of that, like involving documentation earlier, but I think speed is certainly a motivator.
DS: Then there’s the cost savings of, well, is AI, is automation, ultimately cheaper than paying humans? And I’ve certainly got some opinions on that that we can talk about a little bit later here. But there’s certainly the money aspect of it. I think there’s also a scalability idea, and I also have some opinions as to what exactly we’re talking about with scalability, but the thinking is, I can do more with less. Those are the three motivators, but I think with the AI piece of it, there is another one, which is jumping on the AI bandwagon. Everybody has to have an AI initiative at this point, we all have to show that we’re doing something with AI, and so documentation seems like an easy place to insert this and say, “Yeah, look, we have, at our company, some kind of AI initiative.”
SO: I think it’s certainly true that we should be able to take a product database full of specs, the height and width and weight of a given product, and all of that stuff ends up in, let’s say, a little two-page data sheet. So you have an image of the thing, you have some specs, you have a little product description, whatever. And it would be relatively straightforward to pull all of that stuff out of the product database and just generate the data sheet. And this is a great solution, except for one tiny, tiny, small problem-
DS: The database is crap?
SO: Yes. How did you know? How did you know? The database is crap.
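(As a concrete illustration of the data-sheet generation Sarah is describing, here is a minimal sketch. The product_specs table, field names, and template are hypothetical; the point is that the generated sheet can only ever be as good as the source record.)

```python
import sqlite3
from string import Template

# Hypothetical plain-text data sheet template; all field names are illustrative.
TEMPLATE = Template(
    "$name\n"
    "Height: $height_mm mm | Width: $width_mm mm | Weight: $weight_g g\n"
    "$description\n"
)

def build_data_sheet(db_path: str, product_id: str) -> str:
    """Pull one product's specs from a hypothetical database and render a data sheet."""
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT name, height_mm, width_mm, weight_g, description "
        "FROM product_specs WHERE product_id = ?",
        (product_id,),
    ).fetchone()
    conn.close()
    if row is None:
        raise ValueError(f"No specs found for {product_id}")
    name, height_mm, width_mm, weight_g, description = row
    # Garbage in, garbage out: if the source record is wrong or incomplete,
    # the generated sheet will be too.
    return TEMPLATE.substitute(
        name=name,
        height_mm=height_mm,
        width_mm=width_mm,
        weight_g=weight_g,
        description=description or "MISSING DESCRIPTION",
    )
```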
DS: Right. Well, I think what we’re ignoring with the idea of adding AI everywhere is that we’re adding AI not just in the documentation, but also potentially on the development side of things too, so we’ve got AI feeding AI, and we’ve seen some of those discussions before about how that can really degrade everything. But even if you don’t have that, we’ve got the developers creating whatever database they’re creating, not necessarily with any kind of structure or logic that we might apply to documentation; the database isn’t organized that way.
SO: Yeah. The database isn’t organized is an accurate sentence generally, which is… Well, yeah. We’ve talked a lot about the issues of product as designed and product as built. Particularly for hardware, what you run into is that the design documents say one thing, “It’s going to be this shape and size and it’s going to use these components,” and all the rest of it. And then, you get into it rolling down the actual assembly line and there are changes being made on the assembly line, and it turns out that more often than not, the place where those changes are captured is in the documentation. So the docs are accurate to what actually comes off the assembly line and the design docs are not, because they stopped, they did the design, got to 80%, and then when the actual design hit the actual manufacturing process, some changes were made, and those got captured in the docs, but not in the design docs. So if you want to automatically generate your product documentation from your design docs, you have to have design docs that are accurate and up-to-date and complete, and that happens never.
DS: Yeah. My husband’s a developer, and I can tell you that their least favorite thing to do is go back and update something, like, “Yes, we had to make a necessary change for this to really work the way it was supposed to work, but we’re trying to get the product out the door, we’re not trying to go back and update what we actually did.”
SO: Right. But the logical fallacy is if you want the AI to do it magically, it has to start from something, and you’re not giving it the something, because you, or your husband, has moved on to the next product. So what are the risks? We talk about race to the bottom and commoditization, and this is bad broadly, what are the implications of doing this? What happens when you get into this mindset of it’s just ink on paper, or I guess pixels on a screen, and we just don’t care? What are the risks of doing that?
DS: Well, I guess there’s a lot. Some of these might be even more accelerated with AI. So we start with probably just the basic one, like you were just talking about with the database or any of those types of things: the accuracy and accountability risk, that we produce content that is inaccurate or incomplete or potentially misleading. When you throw AI in there, I think there’s even more of a risk, because it can make content sound very plausible but still be incorrect, so it sounds much more authoritative than it would if it were just generated straight out of the database. So then we’ve got all of those risks of users hurting themselves or their equipment, compromising their data, voiding their warranties, all sorts of those risks, with legal and ethical exposure and all those things. So that’s the obvious one.
DS: I think another one is that we lose the context and audience understanding, our understanding of the audience, the empathy, I guess, for the user. Our job as technical communicators is to do more than just rephrase the specs out of that database: we’re interpreting how the user is going to use this, what the user intent is, we’re anticipating where things might be confusing for the user, we’re tailoring the tone and the format to what the users need. And so, in this idea of commoditization, we end up producing maybe technically correct content, but content that’s contextually empty.
It’s accurate, but it’s not useful, or it’s not engaging, or it’s not something that the users really want to interact with, because on the AI side of things, AI doesn’t feel, it doesn’t think, it doesn’t really even have a genuine understanding. It’s a predictor, I think people said that over and over and over in webinars, so hopefully people understand that aspect. But how that then translates is it’s not understanding what is the user’s goal, it doesn’t understand what the context is, the user’s pain points and everything else. And so, it might produce technically correct content, but again, misaligned with user goals, or even inaccessible to the different audiences, and that leads to unhappy users and potentially abandoning products.
SO: Yeah. And so, when we think about this, I don’t think either one of us is arguing that the proper approach to this… We’re saying race to the bottom is bad and commoditization is bad, but there is obviously room for automation and AI strategy in a fully functioning tech comm department, in a content operations environment. The interesting question is, where and how do you apply automation and/or AI to optimize your efficiency and take advantage… That sounds bad, and leverage your humans, the expensive humans that you’re bringing in, in the best way? Where do you apply their skills and where do you let the computer do the work?
And I think ultimately, to your point, you have to understand, each of you, for your organization, what is your risk profile. Do you have regulatory risk? Do you have life and safety risk? Do you have market risk? I talk so often about video gaming, and how, in general, documentation for video games does not have life and safety risk. There’s maybe an epilepsy warning at the beginning for flashing lights, but in general, it’s for fun. So you would think, oh, commoditized race to the bottom. But in fact, if the video game isn’t fun, people won’t play it, and if they won’t play it, they won’t buy it. And so, there’s a different kind of quality calculation there that is very much, how do we go to market with this game in a way that will lead to market success? So what’s your brand promise, and how do you deliver on that brand promise in a way that makes people want to buy your thing?
Now, we asked about content challenges, this is our second poll over here. And a few people said burnout, and a few people said low quality, and a few people said inefficient workflows, and there’s some other. But 57% of the people that have responded at this point said bandwidth, not enough resources, which is, I think, maybe the eternal problem. So as we think about an intelligent way of applying automation and AI, applying tools to the problem of not enough bandwidth, where do we go with that? Where can we leverage some of these tools to get to a point where we get better outcomes with not enough people? How do we solve our bandwidth problem?
DS: Yeah. I think the key thing is that so many companies, and again, thankfully, only that 14% are saying, “We’re just going to cut it all to AI,” are seeing AI as this whole solution, just replace because it’s faster. But the solution isn’t to use AI completely, or to avoid it if you’re on the other side looking at all these risks; it’s to use it intelligently, as you were saying. So what we need to do is automate the things that are mechanical, but humanize the meaningful. Free the writers, in that minimal amount of time that they have, to focus on things that need judgment, things that need empathy, things that need strategic insight, and use automation for high-volume, rules-based, repetitive tasks, where precision and consistency are going to be more important than the creativity or the nuance that a human would bring.
So what is automation good at? The speed, the consistency, and the scale. That’s going to be anything where the rules are really clear, where you can improve the efficiency without really losing control, because you give it a very clear set of rules. So a lot of the basic routine language and style optimization that people have certainly talked about are good candidates; they’re measurable, they’re objective. It’s easy to say, “Here are the rules for how we want our grammar, our sentence structures, punctuation, terminology,” and even things like reading-level adjustments can be very routine. Anything that’s data-driven, even things like metadata tagging and search optimization, can be things that humans oftentimes find really hard to do.
Back in my day, with indexing, we had professional indexers, the writers didn’t just do it. And now, all of a sudden, we’re supposed to be really good at metadata. It’s essentially that same skill. But maybe we automate those aspects of it, along with analytics of what people are accessing and where they get stuck; automation can report all of those types of things out. So it’s all rule-based: you’re automating processes, but you’re not automating judgment. Yeah, go ahead.
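(As an illustration of the rule-based checks Dawn mentions, here is a minimal sketch of a metadata completeness check. The required fields and the topic structure are assumptions made for the example; the rules are explicit, and the judgment about what the metadata should say stays with a human.)

```python
from dataclasses import dataclass, field

# Required metadata fields are an assumption for illustration.
REQUIRED_METADATA = {"product", "audience", "version"}

@dataclass
class Topic:
    topic_id: str
    metadata: dict = field(default_factory=dict)

def missing_metadata(topics: list[Topic]) -> dict[str, set[str]]:
    """Return topic IDs mapped to the required fields they are missing."""
    gaps = {}
    for topic in topics:
        missing = REQUIRED_METADATA - topic.metadata.keys()
        if missing:
            gaps[topic.topic_id] = missing
    return gaps

topics = [
    Topic("install-01", {"product": "WidgetPro", "audience": "admin"}),
    Topic("config-02", {"product": "WidgetPro", "audience": "admin", "version": "2.1"}),
]
print(missing_metadata(topics))  # {'install-01': {'version'}}
```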
SO: Yeah, no, it’s a hard problem, because for whatever reason, right now, the incorrect universal consensus appears to be that AI can be all things to all people, it can do all of those things. And we’re down in the trenches, and your best path to complete obscurity and/or obsolescence right now is to say, “The AI can’t do that.” AI can do a lot of things.
DS: Yeah. AI can mimic syntax, that’s what I’m talking about really, a lot of that type of stuff. But it can’t mimic the empathy that I think… Maybe we don’t talk about empathy that much, but I think understanding our audience has always been a perpetual issue. So what the human is bringing is understanding the confusion and the frustration and the user intent, which ultimately requires that human insight. All AI is doing is predicting. So we bring value, the human brings the value, from that strategic and contextual side: we can research the audience needs and the pain points and their workflows, and we can translate those technical details into what the users will actually understand. We can make ethical decisions about what to include or emphasize or omit. We can decide what content’s needed, why, and how it fits into the overall product experience, and those types of things need judgment that AI just doesn’t bring. It just gives you what it knows without making that judgment on, do you need it, or do you care about it, or is it going to confuse you?
SO: Yeah. AI is about patterns broadly, and so if you feed generative AI a whole bunch of patterns that are set up a certain way, it is going to then generate new… Well, new. It’s going to generate new stuff that is going to follow those patterns. And so, the implication is that if there are some problematic patterns in your content, it will cheerfully generate new content that follows the problematic patterns, because that’s what’s there. It also, interestingly, will try to infer relevance from things that are outliers, from things that don’t follow the pattern.
So I find it very helpful to remember that AI is just math, and so it’s a bunch of equations, and when the equations don’t balance, weird shenanigans happen. And so, the AI in scanning a corpus of content, if you used a certain kind of terminology half the time and a different terminology the other half the time, well, why did you do that? Well, in reality, it’s because, Dawn, you wrote half the content and I wrote the other half and we didn’t coordinate. The AI doesn’t know that, because again, it doesn’t know anything. And so, it tries to infer relevance from that difference in terminology, which brings us right back to, and therefore, the humans have to do the work of fixing those patterns and fixing what is being fed into the system. Yeah, go ahead, sorry.
DS: I was going to say, I think the thing to remember is that AI is not actually set up to protect your organization’s credibility and intellectual capital, it’s not a protector.
SO: Isn’t it actually the opposite?
DS: Right, right, exactly. And so, the human-in-the-loop is giving us judgment and stewardship, and deciding what needs to be there and distributed, and overseeing how AI’s tools are trained and what data sets they use and everything else. It’s not going to go, should I be telling this information here, or not even the question of, should I be making it up? Its goal here, when we talk about generative AI, the task that we have given it is generate, so it wants to please, it’s going to generate, and it’s not going to decide, well, was this a good thing to generate?
SO: And it’s not necessarily accurate. You ask it a question, and it will, as you said, aim to please. I’ve run into some stuff recently where I was asking ChatGPT some questions about competitive positioning in the industry landscape and what’s going on with all the different CCMSs and this, that, and the other thing. Well, ChatGPT informed me that two companies had merged. They have not, in fact, merged. But I asked a question that was sort of along the lines of, “What would be an interesting merger?” And so, because I asked a leading question and I included that piece of information, it went out into its corpus and said, “Okay, what things can I put together? Where does a merger fit, mathematically and logically, into the content that I have?” And it produced an answer. So if you ask it a leading question, it gives you that answer.
Another one, this is perhaps my favorite example of the problematic nature of the AI, I asked AI, this was maybe two years ago, “Hey, what is the deal with DITA adoption in Germany? Why is it so low?” Which is a known thing. And I actually know the answer to this question and why this happened, but I asked the question. And the AI came back with some stuff that was semi-plausible, and then it said, “Hey, German is very complex syntactically, and so therefore, DITA maybe isn’t appropriate, DITA doesn’t work for German.” Now, that makes absolutely no sense, that’s an insane thing to say, because the grammar of the language at the sentence level, it’s not relevant for the tags that you’re putting on it, so it is just objectively wrong.
But here’s the more interesting thing. When I asked ChatGPT the same question again and said, “Give me an answer in German,” it gave me something very close to the same answer I got previously, but it left out, “German is syntactically complex, blah, blah, blah.” So what happened was that I asked a question that involved the word German, and the English-language corpus sitting underneath ChatGPT is full of, “Ooh, German is scary and complicated.” The German-language corpus sitting under ChatGPT does not say that, because people who speak German don’t think it’s necessarily a big deal that it has grammar and inflection and whatever. So in the answer that I got from ChatGPT about why there’s no DITA in Germany, it fed in the cultural context of the content that it has, in a way that is wrong. It made that relevant, even though it isn’t, because from a math point of view, those vectors, you look at the German node, it’s connected to “ooh, scary,” and so it gave me that answer.
So turning this a little bit, we have a question in the chat from somebody saying that their major content challenge right now is restructuring existing content so that they can adopt AI technology. And so, I think I’m going to throw that one to you, as you do, what does that look like? What does it look like to restructure content, or maybe what does it look like to have content that doesn’t work?
DS: Well, as you were saying, I think it’s all about those links between content. So you’re not just restructuring the content to say, “This is some kind of semantic tag in structured authoring, DITA,” or that type of thing, and, “Here, I’m going to help you, AI, identify what purpose this particular content serves.” That’s certainly an aspect of it. But it’s also all of that linking, those relationships, drawing the explicit relationships between content so that the AI doesn’t have to infer things that might be wrong. A lot of people talk about knowledge graphs and taxonomies and those types of things as being very central to this restructuring, because we’re looking at that bigger picture.
And it’s interesting, because for a long time, we’ve focused on topics, topics, topics, topics. It’s topic-based authoring, and your user’s only going to read an individual topic to get their answer, so we are all about thinking about whether this topic answers the full question completely, and maybe not as much about establishing all of the relationships. Now, the relationships are certainly an aspect of it. I’ve certainly tried to train, from the very beginning, that topic-based authoring is a network of topics and we do need to establish those relationships. But I’ve seen over and over and over again, the relationship part of that is harder to do. And so, it’s like, well, we’ll just start with, let’s get everything into topics.
And so now, we have no explicit relationships between these topics; we’ve gotten it all into maybe some structured content. I don’t know if the person who’s asking the question is still even at that point. But beyond just that structure, what we’ve been ignoring too much, to our detriment, is figuring out what the network between them is and drawing those lines, so that the AI doesn’t say, “I should connect this thing about language perception of German to this technical piece of information,” because we’ve given it other patterns of, “This is related to this and this is related to that.”
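(To make the idea of explicit relationships concrete, here is a minimal sketch of recording typed links between topics instead of leaving an AI to infer them. The topic IDs and relationship types are invented for illustration.)

```python
from collections import defaultdict

# Typed, explicit relationships between topics; all names are illustrative.
relationships = [
    ("configure-widget", "prerequisite", "install-widget"),
    ("configure-widget", "reference", "widget-settings-table"),
    ("troubleshoot-widget", "related", "configure-widget"),
]

def related_topics(topic_id: str) -> dict[str, list[str]]:
    """Group every explicitly declared link from a topic by relationship type."""
    links = defaultdict(list)
    for source, rel_type, target in relationships:
        if source == topic_id:
            links[rel_type].append(target)
    return dict(links)

print(related_topics("configure-widget"))
# {'prerequisite': ['install-widget'], 'reference': ['widget-settings-table']}
```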
SO: Yeah. If you think about a book for a second, the old-fashioned thing, which we have something like a thousand years of experience with, if you think about topics, in the context of a book, they have sequence and hierarchy. A comes before B, comes before C, comes before D. And also, A is the chapter heading, and it has B, C, D, and E, which are contained within it, so there’s a relationship there that you’re capturing. If you think about learning content, there’s a similar sequencing, typically, of course material, which contains lessons, which contain… And in many cases, you want people to go through these things in a particular sequence, and you build up their knowledge.
And so, if you think about a collection of topics just sitting in a big puddle of topics, what you’re describing is much more of that data lake network effect. Well, this one over here connects to that one over there in ways that are not represented in a sequence and hierarchy; it’s related topics. “Hey, go read this other thing over here,” or, “Go look up the settings that you need in that reference topic over there.” So we have to cross-connect things, and if we don’t cross-connect them, the AI probably will, because it will, again, see those patterns, see those connectors, and do things with them.
So it’s a really interesting way of thinking about it, that the model that we had, the book model, is only two axes, sequencing and hierarchy, so it’s a two-dimensional representation of content. And now, we have these connectors all over the place, so we’re… I hate to say in a multidimensional space, but here we are, because you have what you’re describing, this is related to this other thing over here, and we have context, if I’m in factory A, it only has this equipment, therefore I only want to see that content, or the equipment here is set up a certain way, so show me that content. So we have to be much more intentional about crafting those relationships and making sure that those relationships are in there.
One of the most… Well, two things. One, a lot of people are saying, “Oh, just give me a PDF, I’ll feed that into the AI,” which makes me cry. “We did all this structure, go use the structured stuff.” “No, no, the AI doesn’t know how to do anything other than PDF.” Amazing, okay. Additionally, no matter how good your content is, it gets out of date over time. I write the content, it’s great for today. Tomorrow, some new development occurs, my content is now wrong. Or wrong if you got the product update, but right if you didn’t get the product update, and immediately we’re down this road of, oh dear. Entropy always wins, we’re tending towards chaos, and if we don’t provide for care and feeding of the content, it’ll fall apart over time. So what’s your vision for this? What does it look like to have a well-functioning organization with an appropriate use of automation and AI, and what does it look like from a staffing point of view, what kinds of roles do we need in that organization?
DS: Yeah. I think this is still another age-old question: what are the skills that we really think are valuable? Even without all the things that we’ve talked about, we run into this idea of lower-paid people who are more typists, or who take what the engineers have written and edit it, or something like that, and I think that’s where the concern has come in with the commoditization and everything else: okay, that’s the easy stuff for the AI to potentially do, follow the style guides.
So where I see the technical documentation field really needing to go, and I think I’ve been saying it for years and years and years, and so have you, is toward more of that strategic aspect of things. I think we have to see documentation as part of the product and not just supporting the product, and that means that we, as writers, are involved in all of the design. As we design the documentation, we’re helping design the UX. The dream of a product that self-documents has been around my entire career, and yet we’ve never quite gotten there. But the idea is that the modern user experience includes all the microcopy, the help text, the field names, everything that’s part of the UX; it’s all that strategic part, and all of that is documentation in context.
And so, we have to be really part of the infrastructure, we being part of clarifying design intent, identifying usability gaps early as we try to write, we’re the proxy users, our questions surface flaws in the product before the customers ever see it. So, integrating the documentation team into that means that we are more than just glorified secretaries, we are designers, we are strategists, and that’s what AI can’t do, or what automation can’t do. The human-in-the-loop, what we have to make sure that we are doing is bringing that design, that judgment, that strategy thinking in order to really improve the overall product. So we set the standards, we make the decisions, we verify the meaning, and that requires a higher level of skill than just manipulating words.
And so, I think we’ve run into, I’m sure you’ve run into it a million times as well, the idea that everybody can write. In fact, that was part of my early career, is that I have an engineering degree, but I’d always intended to be in technical documentation. I loved writing in my high school days, but I also loved the science aspects of things, and I had a high school person helping me decide on careers say, “Oh, go get an engineering degree, because, ‘Anybody can write.'” And when you have that opinion… And we do, because everybody does have to write, we go through high school, we go through college, we have to write papers, so therefore we know how to write. But if that’s our definition of writing, we’re just looking for writers, and I think that’s why a lot of people have moved away from the technical writing job title to something, content strategist, information developer, whatever, putting in different words, because that concept of writing definitely brings this idea of anybody can write.
But that’s not what we’re looking for, that’s not what the documentation team should be hiring. We’re not hiring writers, or we shouldn’t be, in my opinion. We should be hiring these designers, the strategists, information developers, that all have a different meaning than just, I’m writing.
SO: Yeah, it’s interesting, because actually creating clear, concise, cohesive content is really not so simple. Now, AI and automation, just broadly, tools and software, can do things like fix my grammar. There’s a little squiggle that says, “Hey, you might want to fix your subject-verb agreement.” Yeah, I should probably do that, yep. The disconnect that I think that we’re seeing is that because the perception is that the people doing the content creation are in fact just pounding stuff out on a keyboard and/or fixing grammar and/or reformatting documents, as a tech writer, whatever you’re calling yourself, if your job is reformatting and fixing grammar coming from engineers, then absolutely, yes, your job is going away.
DS: I agree.
SO: That stuff is all now automated. So the fact that you’re good at grammar is great and helpful, but no longer a skill that will buy you a job, because legitimately, the AI, or a whole bunch of linguistic support tools, can do that work. But the stuff that you’re talking about, Dawn, is not so easily automated: the judgment of, well, which topic do I write? Sure, the AI can clean it up and refactor it, and tighten up my sentences, and tell me to fix my terminology, and do a whole bunch of other things, but did I write the right thing, and did I make the right choices about the example that I used, that creativity that’s in there?
So interestingly, looking at this last poll that we ran, which has to do with risk tolerance, this is definitely weighted towards organizations being too cavalier about risk in content. A third said risk tolerance is appropriate for the risk level of our product or content. And so, again, we’re back to: if it’s air-gapped operations for a nuclear power plant, we should probably be super careful. If it’s consumer electronics, we are maybe not quite so careful. Although, definitely tell them not to drop it in the bathtub, that kind of thing. So appropriate risk level, 33%. 13% said their organizations are overly cautious, but 40% said they are too cavalier and should be more cautious. So broadly, the poll responses are tilted towards our organization should be more careful, and they’re not, because they don’t see the risk.
So unfortunately, I’ve been through a couple of these hype cycles, and at a certain point, you just put your head down and wait for it to reach that infamous plateau of productivity. You have the hype, the peak of inflated expectations, then you have the trough of despair, and then you have the plateau of productivity. And right now, “Oh, let’s get rid of everybody because the AI can do it” is wrong, but that doesn’t really help when you’re the one getting laid off, because somebody else decided that we don’t need you.
So a couple of things here, but I think as we wrap this up and move into the questions, I wanted to ask you about automation versus AI, because we’ve used them interchangeably for productivity and improving our bandwidth. What is the difference between an automation workflow and an AI workflow, or is there a difference?
DS: Well, I think your point is exactly right, we’ve done it in our own talk here and it’s happening everywhere. We just go, “Oh, AI does everything, it automates things,” and they are not the same thing. Even I, when I was describing things earlier in this talk, talked about the rule-based efficiencies as something I give to AI, and yet, ultimately, that’s what we’re talking about with automation. Automation is following explicit predefined rules, predefined workflows; it performs repetitive, predictable tasks without human intervention at all. It can execute a programmed instruction, “If this happens, do this.” It relies on very structured input of, “This is exactly how you are supposed to behave or do it.”
So, examples: we can automate the publication of a document when it reaches an approved state in your CCMS or something like that. We can automate maybe generating release notes from Jira tickets or checking comments or something like that. We can automate checking for broken links, checking for spelling, checking for missing values in your metadata fields. Those are all things that we can automate. We get the speed and consistency, and it actually potentially reduces human error, because we’re not really good at repetitive work, we get bored or lose focus, and so automation is going to prevent a lot of those types of human errors. But it can’t handle any kind of ambiguity, it can’t make judgment calls, and it’s going to break if the input changes and your rules don’t apply anymore, so it’s only going to produce results that are as smart as the way you set things up. So automation is really muscle memory: you tell it what to do and it does it perfectly, but that’s all it does.
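(As one concrete example of the automation Dawn lists, here is a minimal broken-link checker sketch. The URLs are placeholders; the script follows fixed rules and, as she notes, knows nothing outside of them.)

```python
import urllib.request
import urllib.error

# Hypothetical list of links pulled from published topics.
links = [
    "https://example.com/docs/install",
    "https://example.com/docs/does-not-exist",
]

def check_links(urls: list[str]) -> dict[str, str]:
    """Report each URL as 'ok' or the error it returned; purely rule-based."""
    results = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                results[url] = "ok" if response.status < 400 else f"HTTP {response.status}"
        except (urllib.error.URLError, OSError) as exc:
            results[url] = f"broken: {exc}"
    return results

for url, status in check_links(links).items():
    print(url, "->", status)
```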
Now, when we bring AI in, the promise of AI is this idea of adaptive reasoning. It’s going to use statistical models, machine learning, the math, like you were talking about, to interpret things, to predict things, and to generate some kind of outcome that resembles a human thought process. So it’s learning from patterns, not just rules. And so, it can handle a certain amount of ambiguity, not necessarily well, like we’ve talked about, but it can handle it. With incomplete inputs, it can make some of those inferences and things like that. And it can adapt and improve over time, based on the training it gets and the feedback that we provide.
So it can generate draft text from a database or a spec or code comments or something like that. It can summarize long documents into some kind of an overview for you. It can suggest terms based on content meaning, the type of thing that you can’t write a rule for. So that’s the distinction between automation and AI: AI handles more variation. It can accelerate some of your early drafting and so forth. I still think you need the human part to be checking all of that, but it can certainly accelerate some of the things that a rules-based approach couldn’t do.
And so, the way I think of it is that AI is like intuition, but still without the understanding. So basically, from our side of things, we’re using automation for things that we can define very precisely: publishing pipelines, formatting, versioning, style enforcement. We’re using AI for things that benefit from suggestion, not a full final answer, things that it could suggest to us or synthesize for us: summarizing things, categorizing things, making recommendations. And we’re using humans to set the standards, to make the decisions, to verify the overall meaning.
SO: Yeah. I wanted to circle back to something you said earlier about empathy, and this is the bigger picture issue around the question of, how do we deploy AI and how do we do it well, and also automation? First, as an organization, you, your organization has a brand promise and has trust and reputation with your customers, or not, as the case may be. Deploying an AI is fine, big picture. Understand though that if that AI destroys trust in your organization and your organization’s brand, or impinges on your reputation in certain ways, there’s going to be a cost associated with that. So right now, everything is like, “Oh, AI is free and it’s amazing.” Well, okay, it’s not free, but whatever. But nobody or very few people are talking about trust and reputation as potential costs. We’ve talked about product liability, also a concern. And then, the other thing is empathy. And then, I want to circle around to some of the questions people are asking.
DS: We’re creating a false economy. AI seems cheaper to produce with, but it’s potentially more expensive to maintain, because the hidden costs, the things we already talked about, human review and quality assurance and those types of things, are not necessarily being factored in. It ultimately becomes a cost-shifting problem, something that we have to deal with later on. And it comes back to what you were saying, the trust. When we go back to commoditization, we talked about how it becomes much more generic and impersonal, and so that loss of your brand, the experience, the loss of the distinctive tone, leads to brand credibility issues. Customers can really perceive the company as untrustworthy if the content feels machine-made, and people can still tell. It’s not all about em dashes, people can tell if content’s lacking a human touch, and they instinctively equate that with lower quality. So the content feels impersonal, inaccurate, users lose confidence in the product, and the brand as well, and that, nobody’s talking about.
SO: Yeah. So the empathy issue, you said earlier that AI doesn’t have empathy, which is, of course, absolutely 100% true. However, AI does perform empathy, it pretends like it has empathy, or it gives you output that looks like empathy. And there are more than zero people that are using AI chatbots as therapists. I find this concerning.
DS: Yeah. Every time you interact with it, how does it start the answer to every question I ask it? “That’s a really good question,” or something to that effect, and then you ask a follow-up question and it’s like, “Oh, that’s the perfect follow-up question.” It’s trying to give you the perception that it understands where you’re coming from and that it empathizes with you or it’s buttering you up or whatever it’s doing. Yeah, definitely, that’s programmed into it, that leads us to maybe a false sense of security to trust it with all of our problems or those types of things.
SO: Security, intimacy, in very problematic ways. One of my favorite stories is that if you ask a chatbot to do something and you keep asking it to do stuff, eventually, it’ll say, “Oh, that’s going to take a little while. Check back later.” Because if you think about it, when people ask me, as a human, to do things, eventually I’m going to put them off, like, “Oh, I can’t get to that today.” And the LLM corpus is full of people making excuses, basically. Now, the chatbot doesn’t actually have other commitments that will stand in the way of it completing the work, but because that content is in there, it says it, because that’s the pattern of what a response looks like.
So this synthetic world is really very concerning. We haven’t talked a lot about ethics, but we need to, as content people, we need to think really carefully about the implications of what we are doing with AI and with automation and with people, and make sure that the systems that we are building out are appropriate, sustainable, ethical, trustworthy. The algorithms are biased, because the content is biased, because our society is biased. It’s not the algorithm, the algorithm just got all the stuff. The stuff is full of bias, therefore the algorithm will perform bias, that’s just how it is. So Dawn, any quick closing thoughts on this extremely not-at-all grim-
DS: I think, to me, the summary of this is that the biggest risk of everything we’re talking about with commoditization and the use of AI is that we’re treating documentation as a cost to minimize rather than a capability to strengthen. While AI can help accelerate the routine, without human stewardship we lose that strategic lever for customer self-service, for product usability, for knowledge retention, for brand trust, and so those short-term savings ultimately lead to long-term fragility. That would be my closing statement.
SO: Okay. Well, Christine, I’m going to throw it back to you and wrap us up here. Thank you, everybody.
DS: Thank you.
CC: Awesome. Yeah, thank you, Sarah and Dawn, for talking about this today. And thank you all for being here on today’s webinar. If you have a moment to go ahead and rate and provide feedback about today’s webinar, that helps us know what you liked. Please feel free to add feedback about what topics or guests you’re wanting in the future, because we want to make the content that you want to see, so we really appreciate that feedback. Also, if you want to stay updated on this series in 2026, make sure to subscribe to our Illuminations newsletter. That is in the attachment section. So make sure, again, you download those attachments before you go. There’s a lot of great links about what the presenters talked about today, Dawn shared a lot of great information in there, so make sure you check that out. And thank you all so much, we hope you have a great rest of your day.
