June 13, 2023

AI in the content lifecycle

Updated July 31, 2023, by Sarah O’Keefe.

The year 2023 marks the beginning of the Age of Artificial Intelligence (AI). Everyone is talking about AI and its impact. Scriptorium is focusing on AI’s effect on content operations, and the public release of ChatGPT and other generative AI engines means rethinking the entire content lifecycle.

AI gives anyone the ability to remix, repurpose, and synthesize new content in various media, such as text, images, videos, and audio. Pattern-driven tasks will benefit from AI: for example, converting documents from one format to another, terminology management, style guide compliance, and much more. Instead of searching stock photography sites, you can describe the image you want to an AI image engine and have it generated in seconds.
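As a concrete illustration of one such pattern-driven task, here is a minimal sketch of a terminology compliance check in Python; the term list and sample sentence are hypothetical, and a real style guide integration would be considerably larger.

    import re

    # Hypothetical term map: deprecated term -> preferred term
    TERMINOLOGY = {
        "e-mail": "email",
        "log-in": "log in",
        "click on": "select",
    }

    def check_terminology(text: str) -> list[str]:
        """Flag each deprecated term in the text, with its preferred replacement."""
        findings = []
        for deprecated, preferred in TERMINOLOGY.items():
            for match in re.finditer(re.escape(deprecated), text, re.IGNORECASE):
                findings.append(
                    f"Offset {match.start()}: found '{match.group(0)}', prefer '{preferred}'"
                )
        return findings

    for finding in check_terminology("Click on the e-mail link to log-in."):
        print(finding)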

But like any other innovation, there is also the potential for misuse: deep-fake videos, ever-more sophisticated phishing scams using cloned audio, and content that sounds authoritative but is not accurate.

Another concern is best described as “Entropy always wins.” Over time, systems tend toward disorder unless you put in work to hold back the chaos. The AI engines are scraping content from public sources, and those public sources now include AI-generated content. AI researchers are warning of potential “model collapse” (Ilia Shumailov et al., “The Curse of Recursion: Training on Generated Data Makes Models Forget,” arXiv preprint, May 27, 2023).

A sensible AI strategy for content operations should focus on the following priorities:

  • Automating tedious, repetitive tasks
  • Generating ideas or rough drafts for new content
  • Applying and verifying known patterns in content
  • Summarizing and synthesizing existing verified content (a minimal sketch follows this list)
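
To make that last item concrete, the sketch below sends already-verified text to a chat model and asks for a summary constrained to the supplied facts. It assumes the pre-1.0 openai Python package, which was current when this article was written; the prompt wording and model choice are illustrative, not prescriptive.

    import openai  # assumes the pre-1.0 openai package (pip install "openai<1.0")

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def summarize_verified(content: str) -> str:
        """Summarize already-verified content without inviting new claims."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Summarize the user's text using only facts stated "
                            "in it. Do not add new information."},
                {"role": "user", "content": content},
            ],
            temperature=0,  # minimize creative drift for summarization
        )
        return response["choices"][0]["message"]["content"]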

The rise of machine translation provides a useful parallel. In localization, machine translation is used to improve velocity and throughput, and human linguists are used for post-editing, critical translation work, and transcreation. As with AI tools, improving the quality of the input content produces better output.

Disruption is coming

Like other content innovations, the use of AI will displace or eliminate some roles and lead to the development of new roles. Before the advent of written language, content was spread by storytellers and bards. The printing press displaced scribes, copyists, and manuscript illuminators. More recently, the advent of digital publishing eliminated typesetters. Traditional publishers are also jeopardized by digital publishing (which enables self-publishing) and social media (which enables distribution without a gatekeeper).

Our best guess is that AI will displace low-value content producers, such as content farms that write fake product reviews, SEO-optimized clickbait, and the like. When you are trying to game the system, speed and cost are critical, and accuracy is irrelevant.

Innovation          Who is displaced or disrupted?
------------------  ------------------------------
Writing             Storytellers and bards
Printing press      Scribes and copyists
Digital publishing  Typesetters
Social media        Publishers
AI                  Copywriters and copy editors?

Style guide compliance, terminology enforcement, and content conversion work will benefit from AI’s pattern-recognition capabilities, but in the process, copy editors (who are charged with understanding and enforcing style and terminology guidelines) will largely disappear.

Many organizations are hoping to leverage AI to automate the production of higher-value content, but adoption there will be slowed by legal risks. (There are also ethical problems, but we’re having trouble envisioning a scenario where ethics problems slow down the early adopters. We are, after all, familiar with social media.)

Generative AI and chatbots

For ChatGPT, Bard, and the other AI chatbots, it’s important to recognize that they do not understand concepts or meaning. Essentially, ChatGPT is autocomplete with some additional guardrails. You can, for example, tell ChatGPT to write a set of instructions about how to install a window, and it will generate something that has the form of instructions and talks about windows. It may or may not be a sensible set of instructions.

If you tell ChatGPT to convert the instructions to a valid DITA task, it can insert the correct tagging. So you can produce a valid XML file with a series of steps, but the problem is that the steps aren’t necessarily coherent.
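
For illustration, here is roughly the shape of that output, along with a well-formedness check; the window-installation steps are invented, and a full DITA validation would require the DITA DTDs rather than the simple parse shown here.

    from lxml import etree  # third-party: pip install lxml

    # Hypothetical chatbot output: structurally correct DITA task markup.
    # Nothing about the markup guarantees the steps themselves make sense.
    dita_task = b"""<task id="install-window">
      <title>Installing a window</title>
      <taskbody>
        <steps>
          <step><cmd>Measure the window opening.</cmd></step>
          <step><cmd>Place the window in the opening.</cmd></step>
          <step><cmd>Secure the window.</cmd></step>
        </steps>
      </taskbody>
    </task>"""

    # Well-formedness only; validating against the DITA DTD would
    # require the DITA catalog files.
    root = etree.fromstring(dita_task)
    print(f"Parsed <{root.tag}> with {len(root.findall('.//step'))} steps")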

ChatGPT generates content that looks plausible.

Generative AI is promoted as a way to increase content velocity by automating content creation. Ironically, though, the AI engines will perform best if they are fed accurate, highly structured, semantically rich information. Many organizations are exploring how to set up private, internal AI engines that use only curated, “known good” content developed inside the organization. Working with internal engines mitigates many of the privacy concerns and also makes it possible to ensure that the AI source content is controlled.
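
We won’t prescribe a mechanism here, but one common way to keep an engine on curated sources is retrieval: look up approved content first, then have the model answer only from what was retrieved. Below is a toy sketch of the retrieval half; the document store and keyword-overlap scoring are invented stand-ins for a real vector database.

    import re

    # Invented stand-in for a curated, "known good" content store.
    CURATED_DOCS = {
        "install-window": "Measure the opening before you install the window.",
        "seal-window": "Apply sealant along the window frame after installation.",
    }

    def words(text: str) -> set[str]:
        return set(re.findall(r"[a-z]+", text.lower()))

    def retrieve(query: str, top_n: int = 1) -> list[str]:
        """Rank curated docs by crude keyword overlap with the query."""
        query_words = words(query)
        scored = sorted(
            ((len(query_words & words(text)), doc_id)
             for doc_id, text in CURATED_DOCS.items()),
            reverse=True,
        )
        return [doc_id for score, doc_id in scored[:top_n] if score > 0]

    print(retrieve("How do I install a window?"))  # -> ['install-window']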

Image generators

AI image generators mix, match, and resample images to produce new images. It’s easy to find problem images: people with missing or extra fingers, limbs attached in impossible ways, and the like. These images are great fun as we point and laugh.

[Image: AI-generated picture of a man in workout clothes looking at a laptop. One leg is missing, the other bends at an odd angle, the torso is shortened, and an arm bends backwards.]

But again, you can generate a lot of plausible-looking images, and in many cases, it’s already difficult to tell the difference between photos and AI-generated images.

[Image: Two images of a mountain range along a body of water; one is AI-generated and one is a stock photo. Both look very similar and realistic.]

One of these images is from 123rf.com (a stock photography site). One image was generated by Adobe Firefly. Which do you think is which? Check the bottom of this article for the solution!

Synthetic audio and video

Both audio and video are vulnerable to “deep fakes” with the use of AI. If you have a short snippet of a person’s voice, it’s quite easy to clone that person’s voice and create new audio. This synthetic audio will be useful for people who are losing their physical ability to speak or for making podcast edits. But scammers are going to have a field day with the ability to mount phishing attacks using a synthetic voice.

Video is more complex, but the AI tools make it possible to create “deep fake” videos that are not easily identified as fake. Political campaigns are already using deep-fake videos of their opponents in attack ads.

Trust and reputation

If AI origin is undetectable to the casual observer, trust matters more than ever. Content consumers will rely on company promises that content was created by humans, or that generated content has been reviewed and approved by humans. In July 2023, the larger AI companies made a voluntary commitment to the US government regarding AI technology. Seven major companies agreed to principles of safety, security, and trust.

The European Union is taking a more prescriptive approach with a proposed regulatory framework in the EU AI Act.

It seems likely that both AI companies and AI users will be held accountable for their development and use of AI tools. Organizations will need to take responsibility for content, and not just blame the AI if something goes wrong.

Copyright and intellectual property

Some of the thorniest AI issues are legal rather than technical. The US Copyright Office says that AI-generated content cannot be copyrighted:

If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user. Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output. For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare’s style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.

Let’s say that an organization has a large amount of reviewed, approved “known good” content, which is (of course) copyrighted. If someone feeds that information into ChatGPT and synthesizes a summary, is the summary copyrighted? The Copyright Office says no. Does running content through a chatbot, then, effectively strip the copyright? What if the chatbot is private and owned by the content owner?

Additionally, there are intellectual property concerns. If a public AI engine (such as one of the image generators) uses copyrighted information as its data sources, then isn’t the new, synthetic content effectively a copyright infringement? Is the generative AI allowed to scrape any public-facing information and repurpose it? That seems like a stretch of fair use, but this is exactly what is happening.

The lawsuits have already started.

Some guardrails for ethical AI

As we begin to integrate AI into content operations, keep in mind several considerations for ethical use of AI:

  • Source content: Consider the data sources of the AI engine. Adobe, for example, has indicated that Firefly is trained only on public domain, openly licensed, and stock images. That seems much safer than using an image-generation engine that has scraped public websites or, worse, doesn’t disclose its training set.
  • Bias: Be aware of bias in algorithms, especially unintentional bias. Back in 2018, Amazon was in the news for a resume-evaluation tool with gender bias. AI can find patterns that you do not want to replicate. We are concerned that AI will bake in historical patterns, which are often discriminatory.
  • Disclosure: Disclose the sources used by an AI engine and how the AI engines are used in content production workflows.

At the end of the day, we know that automating rote tasks is usually about 10x faster than doing the work manually. Technological innovations provide economic advantages; the key is to find a way to use the technology in alignment with ethical boundaries.

Answer for mountain images: The image on the left came from 123rf.com. The image on the right is AI-generated by Adobe Firefly.

Have questions for Sarah about AI, content operations, or something else? Connect with her on our website.

"*" indicates required fields

Data collection (required)*
This field is for validation purposes and should be left unchanged.