May 13, 2024

Pulse check on AI: May, 2024

In episode 166 of The Content Strategy Experts Podcast, Sarah O’Keefe and Alan Pringle check in on the current state of AI as of May 2024. The landscape is evolving rapidly, so in this episode, they share predictions, cautions, and insights for what to expect in the upcoming months.

We’ve seen this before, right? It’s the gold rush. There’s a new opportunity. There’s a new possibility. There’s a new frontier of business. And typically, the people who make money in the gold rush are the ones selling the picks and shovels and other ancillary services to the “gold rushees.”

— Sarah O’Keefe


Alan Pringle: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’re checking in on the state of artificial intelligence. Things are moving really fast in the AI space, so we want to let you know we recorded this podcast in May of 2024. Hey everyone, I’m Alan Pringle.

Sarah O’Keefe: And I’m Sarah O’Keefe, hi.

AP: And we’re going to talk about AI yet again, but we need to circle back to it because it’s been a while and kind of assess the space right now. Last week I saw a really great meme. It was a still of Carrie Brownstein and Fred Armisen from the Put a Bird On It sketch from Portlandia. And it said, “Put an AI on it!” And that’s kind of where we are now.

SO: Yay.

AP: So many companies, so many services, so many products look at this AI thing that we’ve got now. And a lot of these AI birds, if you will, have landed on content creation, kind of our wheelhouse. So let’s pick that apart for a minute.

SO: So I guess we can start with generative AI, GenAI, which is ChatGPT and all of its general ilk, right? The chat interfaces. And generally speaking, at least for technical content, there does seem to be an emerging consensus that this is not where you go for content creation. You’re not going to start from scratch. Now, maybe you get it to throw out some ideas. Maybe you can do a first draft, but overall, the idea that, you know, ChatGPT or generative AI is just going to generate your docs for you is not the case. So there’s a big nope on content creation, but there’s also a big yes for productivity enhancement. I wrote a draft, but did I write it at the appropriate seventh or eighth grade level? Can I run it through the AI and let it clean it up? I need a summary. I need this cleaned up. I need my XML tag set corrected. I need proposed keywords for metadata that I haven’t put in yet, those kinds of things. So there does seem to be a rising level of capability in that space, in that productivity enhancement: how can I take this thing that I wrote or that I created and refine it further to get to where I need to be?

AP: Yeah, I was at a conference a few weeks ago, and in the expo hall, so many of the vendors were selling an AI service or some kind of AI add-on. And my thought was, how can the market possibly sustain all of these new products and new services? And I know there was an article in the New York Times last week that was talking about the business viability of AI. And it really doesn’t matter how cool or neat your AI tool is. If there’s not business viability behind it, you’re going to have a really hard time in the marketplace, because you’ve got so many established players, the likes of Google and Microsoft, who are really starting to dig into AI. Does that leave room for anyone else? So part of me wonders, is this going to help some vendors, hurt some vendors, or are some vendors just going to go away at some point because of this tussle with AI features?

SO: Well, I mean, we’ve seen this before, right? And it’s the gold rush. There’s a new opportunity. There’s a new possibility. There’s a new frontier of business. And typically, the people who make money in the gold rush are the ones selling the picks and shovels and other ancillary services to the “gold rushees.”

AP: Exactly.

SO: And so to me, I’m starting to think about this as something that’s going to fade into the background eventually, in the sense that it would not occur to me, at least, to write a document without a spell checker and/or, you know, some sort of built-in grammar checker. They’re super useful, but I don’t necessarily do exactly what they tell me at all times. I look at what they tell me and then I use my own judgment. So I think that’s where we’re going to land: AI is going to be this useful tool that a little bit fades into the background and that has human review. And we’re starting to see people refer to human in the loop, just as we did with machine translation, which is another place where you can look for patterns. What is AI adoption gonna look like? Go look at machine translation. Sometimes it’s good enough, sometimes it needs a human in the loop. Sometimes, if you’re translating, let’s say, literary fiction, it’s maybe not that well suited, because it’s just not going to pick up on the kinds of things you need to pick up on as a literary translator.

AP: Yeah, yeah, I agree. Is this going to be a feature that you accept as part of whatever suite of tools you’re using? It’s just built in and there it is. So let’s talk now about something a little more complicated, and I think maybe a little more dangerous, with AI, and that’s intellectual property. It has been a problem all along, and there are all kinds of lawsuits flying about, with different content creators claiming that different AI engines are stealing their copyrighted content, that sort of thing. And I don’t think that haze, that cloud, has really been lifted at this point. It’s still a problem that we need to address.

SO: Yeah, it’s a huge question mark. And, you know, it’s terrifying from the point of view of: if I use AI and the AI injects something into my content or into my code that belongs to somebody else, that’s copyrighted by somebody else, what’s that going to look like? What’s going to happen? And I have seen, you know, differing opinions on this from all sorts of people in our industry, in adjacent industries, from attorneys, non-attorneys, everybody has an opinion on this. And the thing is that the responses just run the gamut, from “do not use under any circumstances because we could get ourselves in trouble” to “eh, whatever, YOLO, it’ll be fine.” I saw a comment just the other day along the lines of, well, I can’t believe that people would get sued for this because everybody’s doing it, essentially. And I mean, they might be right. I’m not saying they’re wrong, but remember Napster? I mean, they got taken down.

AP: Yes, they did.

SO: We now have streaming and those kinds of things, but the original one, which was kind of the unlicensed, pirate version, really, did get taken out. And I haven’t the slightest idea whether the regime that we’re under right now is going to end up like, you know, a Napster or like a Spotify. Not a clue.

AP: Yeah, yeah. And this conversation on IP, intellectual property, kind of ties into something else I’m going to talk about too, and that’s the regulatory angle. Different governments are taking a look at this, and I think that’s absolutely worth discussing as well.

SO: Yeah, again, I think more questions than answers, but just in the last, say, two months, the European Union has passed an AI Act, which divides AI into risk categories based on what kinds of things it is doing. And so they’re banning certain kinds of AI, they are regulating certain kinds of AI, and then they’re allowing certain other kinds, you know, but they’ve basically said, if it’s in the highest risk category, then you have to follow these kinds of rules, or maybe it’s not allowed at all. China has taken a different approach. The US has so far done nothing in terms of regulations.

AP: Nothing.

SO: We’ve talked about it, but we haven’t done anything. So it’s quite likely that at least in the short term, the regulatory schemes will be different in different locations, in different countries. And then just in the past week or so, I bumped into a pretty interesting article that was talking about GDPR, the European privacy regulation. And basically, under GDPR, you have certain kinds of rights. You have the right to be forgotten. You have the right to be taken out of a database. And somebody has anonymously sued OpenAI because when they go into OpenAI and they say, more or less, “What is my birthday?” it gives them the wrong answer. So this is apparently an anonymous but public figure, and we’ll put the article in the show notes. So this anonymous public figure is suing OpenAI on the grounds that it reports an incorrect birthdate for that person, and you have the right to have your data be correct under GDPR. Well, OpenAI’s response to this lawsuit is along the lines of, it is impossible for us to correct that, right? Because there’s not an underlying database that says John Smith, date of birth X. It’s just generative, which is sort of the crux of the whole issue here. But the legal footing, the legal argument, appears to be that under GDPR you can’t say, oh, I’m sorry, it’s impossible for me to correct that fact, so you’re just gonna have to deal with it. And so we’re gonna have a really interesting collision between the content that generative AI creates, which may or may not be factual, because factual isn’t really a thing for it, right? And GDPR, which is an established law. And I haven’t the slightest idea where that’s going.

AP: Yeah. And I think too, in this vacuum, with this absence of regulations in some countries, you’re going to see companies make their own rules. And a lot of them are telling employees what they can and cannot do with AI. Which really, like I said, in the absence of any kind of rules to create a baseline, makes sense. Especially if you’re trying to be very careful about liability, about putting out incorrect information or using copyrighted information that you shouldn’t be, it would make sense for you to protect your bottom line by basically instituting your own guidelines for how you can and cannot use AI.

SO: And if you look at social media, the platform is basically not responsible for the content that people are putting on it, right? So if I’m on LinkedIn, let’s say, and I put something on LinkedIn and then somebody else reads it, if what I’ve said is problematic, they’re gonna sue me, not LinkedIn, right? LinkedIn is not responsible. And with AI right now, generative AI, who’s responsible? If I go and generate something using generative AI, and then I publish it in some way, and then I guess I assert copyright on it, which is a whole other can of worms because I can’t right now under current law. But if I do that, then if what I post is wrong and legally problematic, so it’s, I don’t know, defamatory or something, then who gets sued? Do you sue OpenAI for being incorrect? Do you sue me? Do you sue the platform I put it on? Who is responsible when the AI gets the content wrong? Is it me, because I didn’t validate it or correct it or clean it up? If we build out a chatbot that’s AI-driven, that’s generating information, well, we’ve already seen this use case legally: the company is going to be responsible for the information that the chatbot is putting out if the chatbot is sitting on the company website. But if it’s impossible to be sure that the chatbot’s gonna be right, what do you do with that?

AP: Yeah. And speaking of which, as of last night, before we recorded this, the big Met Gala happened. And apparently there were two quite realistic photos, or images, of Katy Perry in two different dresses on the steps at the Met Gala. How she pulled that off, I don’t know. And the problem is a lot of the social media platforms absolutely could not keep up; they just didn’t do anything. These photos just exploded.

SO: Right. Because they were fake, right?

AP: They were fake. 100% generative AI fake. And even her mother was fooled by it, apparently. And Katy Perry’s response was, “I was working, so no, I was not there.” But it just goes to show you that you’re right. Once these images got out there, they exploded on social media, and those platforms really are not equipped to handle flagging of that or even removing it at this point.

SO: “I didn’t know you were going to the Met Gala.”

AP: Exactly.

SO: Yeah, I’ve seen a decent amount of it, largely in AI news coverage, where, you know, a New York Times or Washington Post will put up an image and they’ll put a slug on it or a caption that says this is AI-generated. And usually, they watermark it, so it’ll be actually in the image, not in a caption below it. For example, in coverage of AI itself, talking about deepfakes, or, you know, the one with the Photoshopped photo of the UK princess, that one. They carefully labeled the photo itself as altered, on the photo, so that people would know what they were dealing with when they were reading the news story. But of course, you know, that’s not going to happen on social media, where it’s just going to fly around the world faster than anything. And so, yeah, I think I don’t know. I mean, I’m saying I don’t know a lot. That’s where we are. We don’t know.

AP: We don’t. Yeah.

SO: We don’t know what’s going to happen. Things are changing very quickly. The legal and regulatory and risk scenarios are completely unclear. I did want to touch on one other sort of more practical matter. We’ve seen a lot of complaints recently, and I think I’ve experienced this personally, and I think you have as well, that search, like Google search, Bing search, all the traditional search is actually getting worse. 

AP: Oh, 100%. Yeah.

SO: You search and you get, you know, just junky results, and you can’t find the thing you’re actually looking for. And the basic reason that’s happening is that the internet, the worldwide web, has been flooded with AI-generated content at a scale that has completely overwhelmed the search algorithms, such that they are unable to sort through all this stuff and actually give you good information. I mean, we did at one point, a year ago, have a scenario where if you had a pretty good idea of what you were looking for and you typed in the right search phrase, you would get some pretty decent results and you could find what you were looking for. And now it’s just junk, which has to do with AI-generated content that is micro-targeting SEO phrases. And ultimately, I think this means it’s going to be a war between the search engine algorithms and the AI-generated content. But I suspect that search and SEO as we know it today is done, because it won’t win this fight. And then people are like, “Oh, I like it a lot better when I go to ChatGPT, and it gives me this nice conversational paragraph of response,” notwithstanding the fact that that paragraph of response probably isn’t super accurate.

AP: But it’s so chatty and friendly.

SO: Uh-huh. So I’m not terribly optimistic about that one either. And so what does this mean if you are a company that produces important and high-stakes content, like all of our clients, basically? What does that mean to you? And I think it means that you’re going to be looking hard at a walled garden approach, right? To say, if you are on our site, and you are behind a login on our site, we have curated that information, we have vetted it, we have approved it, and you can rely on it. If you go out there, you know, in the big wide world, there’s no telling what you’re gonna find. And that implies that I have to know who I’m buying from so that I can go to the right place and get the right information. And I’ve already found myself doing this. Instead of going to a big general-purpose, e-commerce buy-things site, such as the one I’m carefully not mentioning, I find myself saying, oh, I need a new stand mixer. I like KitchenAids. I’ll go to their site and buy it there. And so I’m buying direct from brands that I’m familiar with and that I know, because that feels safer than going to the great big site that has a little bit of everything, including a stunning array of what seems to be problematic counterfeit and/or knockoff kinds of things. But if I don’t know the brand, then what? Like, how do I find the right thing if I don’t know where to start already?

AP: The same is true of information. Right. Yeah, I don’t think that’s going to be fixed anytime soon, and it’s probably going to get worse after this podcast, in fact.

SO: So I’m concerned. Yeah, and you know, as a parting gift of, I guess, fear, we will put it in the show notes using a gift link. But there was an article that appeared in the Washington Post about a month ago, maybe two, having to do with apps for identifying wild mushrooms when you’re foraging. So this already seems kind of like a high-risk activity to me, just generally going out in the forest and looking for mushrooms that you’re going to forage and hope you get it right and you pick the really delicious one and not the one that’s gonna kill you. And Alan’s making faces at me because he hates mushrooms.

AP: I have the solution for this problem. Don’t eat them. But that’s not helpful. Yeah.

SO: Yes, you have a really simple solution. But for those of us who do like mushrooms and don’t want to die, there are a whole bunch of apps out there. And so there was some research done, apparently in Australia, on mushroom identification apps, which are apparently AI-driven, which seems like kind of not a good idea. However, what they found was that the best of the AI-driven apps was 44% accurate. And I wish for my mushroom identification app to be a whole lot more than 44% accurate, especially in Australia, where everything kills you!

AP: So a 56% chance of poisoning yourself. That’s excellent. Great.

SO: Yeah, or at least of getting it wrong. But again, it’s Australia. And so if it’s wrong, it’s probably going to kill you because that’s Australia. So yeah, that’s not good. And that feels like a not acceptable outcome here. So I don’t know where this is going, but I am pretty concerned.

AP: Yeah. So as we wrap up, there are some good things to talk about. There are a few; Sarah was whispering, “Are there? Are there?” and making a face. There are. I mean, on the content-creation side, I think there have been some tools that have added some useful features, much like the spell-checker analogy that you talked about. But there are still so many unanswered questions in regard to intellectual property and legal risk. All of those things are still way up in the air. A lot of countries are trying to adjust by taking a look at regulations, but you know, those aren’t in place yet. So we’re at a crossroads, I think, and we’ve still got to pay a lot of attention to what’s going on with AI right now.

SO: Yeah, you know, there are some really nifty tools out there. It’s also worth pointing out that there have been tools that use machine learning and AI out there already; it just wasn’t AI front and center. Now everything, as you said, has an AI on it, because you can get sales that way, and you can get attention. But there are a lot of companies that are doing some really interesting and really difficult work with this. And I want to, you know, I’m not against any of this stuff. I just want to make sure that we use these tools in a way that, you know, maximizes the good outcomes and minimizes the “Oops, I ate the wrong mushroom.”

AP: Yeah. Fatal mistakes. Not a fan. Not a fan at all. Well, I think we’ll wrap it up on that cheery note about eating poisonous mushrooms on the Content Strategy Experts podcast. We go places, folks. We will talk about almost anything on this show, not just content. So thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, check the show notes for relevant links.