
Ready, set, AI: How to futureproof your content, teams, and tech stack (webinar)

Your customers expect intelligent, AI-powered experiences. Is your content strategy ready for an AI-driven world? After a popular panel at ConVEx San Jose, the team at CIDM brought the conversation online in this webinar.

AI is going to require us to think about our content across the organization, across the silos, because at the end of the day, the AI overlord, the chatbot is out there slurping up all this information and regurgitating it. The chatbot doesn’t care that, for example, I work in group A, Marianne’s in group B, and Dipo’s in group C, and we don’t talk to each other. The chatbot, the world, the consumer, sees us all in the same company. If we’re all part of the same organization, why shouldn’t it be consistent?

Sarah O’Keefe


Transcript: 

Trish Grindereng: Welcome to today’s webinar, Ready, set, AI: How to futureproof your content, teams, and tech stack, with Dipo Ajose-Coker of RWS, Marianne Calilhanna of Data Conversion Laboratory, and Sarah O’Keefe of Scriptorium. Welcome to you all.

Dipo Ajose-Coker: Thank you.

Marianne Calilhanna: Thank you.

Dipo Ajose-Coker: I’ll start sharing now. Just let me know that I am not sharing my email stack.

Marianne Calilhanna: It looks good, Dipo.

Dipo Ajose-Coker: All right, excellent. Well: Ready, set, AI. Let’s go. Let’s futureproof your content. We put this together, with Sarah and Marianne, after a similar presentation at the ConVEx San Jose conference in March. Following the enthusiasm from that, we thought, let’s bring this out to more of our crowd. The appetite for AI just continues to grow. There are new developments every day, and people feel, “I’m getting left behind,” and want to jump onto the bandwagon as quickly as possible. What we want to do is help you prepare for that. You don’t want to jump on with jumbled-up content. You want to prepare your content, your teams, and your organization so that you can be successful, and not throw it all out the window after six months to a year.

We’re hoping that at the end of this session you’ll be able to assess your content landscape and spot gaps in structure, governance, and findability before AI exposes them. We want to start building an AI-friendly pipeline, and we’ll give some practical steps to help you get on your way. We want to help you manage the change: change management. Tech is easy, people are hard, so you want to start addressing some of the anxiety around that and mitigating the risks. Then we’ll give you some quick-win scenarios that help prove value very quickly.

Before we go on, a quick aside: RWS underwent a rebranding, and it happened to line up with this slide. You want to generate content, transform that content, and protect your own content. When you do start preparing your content, if you have prepared it properly, the impact is transformational. You will be able to get real use out of your AI.
You’ll be able to improve workflows and generate content quicker, and it’ll be more accurate. You can’t have an assembly line without machined parts. The machined parts have to be consistent in nature, and they’re designed to fit together in a certain number of ways. You can’t just mishmash them all together. That’s what we’re going to do today: look at how you can standardize those parts, label them, and put them together so that you can generate, transform, and protect your content.

You’ve already been introduced to us, so I’ll skip over this quickly: Sarah O’Keefe from Scriptorium, Marianne from Data Conversion Laboratory, and myself; I’m with RWS and I work on the Tridion Docs product.

Now, a quick recap. At ConVEx, we tried something out: we put out some Lego sets, the Lego creative suitcase, and we tried to simulate what putting your content together is like. Everyone knows Lego is the classic metaphor for the power of structured content: the pieces are modular, reusable, and flexible. Not when you step on them at midnight, but flexible in their use. You can scale the content, and the pieces are built according to a standard. Lego understood that a long time ago; IKEA followed suit with standardized modules that you can scale and build different things out of. We gave these sets out, and in some of the sets we semantically tagged the content. What did we do? We sorted by color and put the pieces into different boxes. Into one of the boxes we just threw everything, and we actually took the instructions out. The result was so funny. You should take a look at some of the blog posts that we put out on that. I’ll try to share the video that we created, too.
What we were trying to show is that even if you’ve got structured content, if you don’t label it properly, if you don’t create those relationships between the pieces, then you end up building nonsense. We thought we’d show you the results of having proper structure. You have reusable bits: those leaves that you see on the ground are actually reusable as frogs. Thanks, Marianne, for putting this together. They’re modular pieces that you can then use to build something else. Here we’ve got a bonsai tree, but maybe you could build another type of tree from the same pieces. On the right, with no instructions, i.e., no metadata, no industry standard, there’s no organization. You’ve not put it into a CCMS; there’s no relationship between the pieces in the metadata. Then your AI hallucinates. Who can guess what this is? Answers in the chat, please. Marianne, do you want to speak to this a little bit?

Marianne Calilhanna: Yeah. I’ve always thought about this new series of Legos that came out, these blooms, these flower sets. I wondered if Lego has been listening to all the metaphors in the DITA and structured content world, because there’s a series of Lego sets that reuse those pieces. In this example with the bonsai tree, yeah, they’re little frogs, and it was my kids who told me that Lego had all these extra pieces, so they thought, “Well, we could reuse these.” I guess this metaphor goes both ways. That left image is a Lego set with instructions: kids put all the pieces together, follow the instructions, and boom, they create this great piece. On the right is a facsimile, a reproduction of what happened in real time when we were at ConVEx, where we simulated having no CCMS: we threw all of the Lego pieces into the Lego suitcase that Dipo brought, with no instructions. While everyone else was creating their little horses or their little … I forget what else we had. I don’t know if any of you remember.

Sarah O’Keefe: Little small people. A couple of other things.

Marianne Calilhanna: Small people. Yeah. Then there was one that was just kind of crazy. It was cool looking, but we were like, “What’s that?” That was clearly the AI hallucination, because it came from the group working with the Lego set that had all the pieces jumbled and no instructions. When we set the scenario and asked folks to create something, they kind of looked up and said, “Well, there are no instructions. What do we do?” They saw everybody else putting things together nice and tidy and organized, and they were really scrambling. Boy, did it capture this conversation that we’re about to have, that we’ve all been having for quite some time.

Dipo Ajose-Coker: Yeah. Sarah, when we’re talking about preparing content for AI, what does that mean? What does it mean to organize your content for AI?

Sarah O’Keefe: Remember that AI is looking for patterns, so the big-picture answer is that if your content is predictable, repeatable, follows certain kinds of patterns, and is well labeled, then the AI, if we’re talking about a chatbot extracting information, will perform better. The big-picture answer to “how do we make sure the AI works” is all the things we’ve been telling people to do: structure your content, have a consistent content model, be consistent with your terminology, your writing, how you organize your sentences and your steps and your this and that. And put metadata and taxonomy, a classification system, over the top of it, in the same way that you would sort these blocks by color, or size, or function, or all of the above. One of the great advantages of metadata is you can sort on two axes, or three, or 15. But the thing to remember as you move into something like this is that AI, with its pattern recognition and its machine processing, and you touched on this, Dipo, when you said machined parts have to be consistent, is going to expose every bit of content debt that you have. Every edge case, every place where something’s not consistent, where you didn’t quite follow the rules, it’s going to think, and it doesn’t really think, but it’s going to think, “Oh, that’s significant,” and it’s going to try to do something with it. So think about the distance between your ideal-state content, which of course we’ll never get to, and your current-state content. How do you close that gap? How do you make that gap as small as possible so that the machine, the AI, can process your content successfully?
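That multi-axis sorting can be sketched in a few lines. This is a toy illustration: the field names and topics are invented, not any particular CCMS’s API.

```python
# Hypothetical content inventory: each topic carries metadata on several axes.
topics = [
    {"id": "t1", "product": "widget", "audience": "admin", "type": "task"},
    {"id": "t2", "product": "widget", "audience": "end-user", "type": "concept"},
    {"id": "t3", "product": "gadget", "audience": "admin", "type": "task"},
]

def select(topics, **facets):
    """Filter topics on any number of metadata axes at once."""
    return [t for t in topics if all(t.get(k) == v for k, v in facets.items())]

# Two axes at once: admin-facing tasks for the widget product.
print([t["id"] for t in select(topics, product="widget", audience="admin")])
```

The same `select` call works for one axis or fifteen, which is exactly the advantage Sarah describes.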

Dipo Ajose-Coker: Marianne?

Marianne Calilhanna: Yeah. Just one other thing I want to add to this conversation. We talked about modularity, reusability, interoperability, and standards. We have these standards in place across our industries for managing content, and they support everything we’re talking about. That’s great, because you don’t have to start from scratch. An example would be DITA. Probably most people here are familiar with that term, but DITA is a standard way of tagging and structuring your content, so that supporting tools, as well as large language models, understand that language.
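As a rough sketch of what that standardized tagging looks like, here is a minimal DITA concept topic parsed with Python’s standard library. The element names (`concept`, `title`, `conbody`) come from the DITA standard; the content itself is invented, and a real topic would also carry a DOCTYPE declaration.

```python
import xml.etree.ElementTree as ET

# A minimal DITA concept topic (invented example content).
dita = """<concept id="pump-overview">
  <title>Pump overview</title>
  <conbody>
    <p>The pump circulates coolant through the loop.</p>
  </conbody>
</concept>"""

root = ET.fromstring(dita)
# Because the tagging is standardized, any tool can find the title the same way.
print(root.tag, "->", root.find("title").text)
```

That predictability is the point: a chatbot pipeline, a publishing engine, and a translation tool can all locate the same pieces without custom parsing per document.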

Dipo Ajose-Coker: Yeah. The fact that it’s standardized means that toolmakers, people who are creating software, who are training LLMs, can rely on that standard structure, a shared language that says “this means this.” That way, when you feed it in, you get consistent output. If you want to avoid chaos, you want to think about relationships between the elements and how you organize the content within the system that you’re putting it all into. Marianne, talk to me a little bit about this.

Marianne Calilhanna: It was funny. The other day I was doing something outside of work, working on a website for something else, and I kept running into a problem. I tried to search through the help files, couldn’t find the answer, and I was like, “Oh, now I have to resort to the chatbot. Well, here I go.” I had a fantastic experience with the chatbot. I hate to say this, but it was probably the first time ever. We’ve been talking about chatbots, and about how structured content helps with this, but for the first time I was like, “Wow.” Problem, question, answer, just flawless. All I could think was, boy, I want to ask them what they’re doing behind the scenes. I was completely fascinated, because when you have your content, your knowledge, structured, when you have the metadata, when you have those relationships identified, that helps the AI understand those relationships and improve its contextual responses, and ultimately it gives a great user experience. That’s what probably everyone on this webinar wants.

Dipo Ajose-Coker: Yeah. One of the examples I use to show that you have to establish those relationships first, because otherwise you don’t know what you’re talking about, is this: who is your father’s brother to you? It’s your uncle. How did you learn that? Well, when you were growing up, we established these relationships. If an alien landed on Earth, pointed to that person, and asked you, “Who is that?” you’d say, “Well, my uncle, Ralph.” There’s no inherent logic to why you would call that person “uncle.” It’s an established convention, and it’s translated into all the different languages. Sarah, if you think of a CCMS, do you think a CCMS will solve all our problems?

Sarah O’Keefe: Oh, of course. Absolutely. It’s worth noting, by the way, that “father’s brother” is not the same word in every language as “mother’s brother,” so even that example has some nuance in it, which is kind of interesting. A CCMS is basically the suitcase here. It’s the container that you can sort all of your Legos into. Now, it is perfectly possible to purchase a CCMS, or a CMS, and dump all the Legos in without sorting them. Just having a CCMS does not give you this lovely classification system that we’ve established here. So “necessary, but not sufficient” is probably the answer we’re looking for. Arguably, you can make an attempt to classify and structure your content without a CCMS; it’s a tool that enables you to do it more efficiently. This is going to be my refrain for the next 20 years: you still have to do the work. You have to put in the work before you can leverage the machine, or the software, or the automation.

Dipo Ajose-Coker: Perfect. We’ve been talking strategy. What are the tactics you want to employ in preparing for AI? Sarah, do you want to take this one?

Sarah O’Keefe: Yeah. You have to do the work, and then risk mitigation is the other thing people are thoroughly sick of hearing me say. You need to put the content in a repository, but if you still have 18 copies of the same, or same-ish, piece of content, then when I as an author search for that content, I’m going to find one of the 18 copies, and that’s really bad. You have to find those duplicates. DCL, by the way, makes a lovely product that can help you do this. You have to get rid of the redundancy, because that decreases the total amount of content that you’re working with, which is helpful to you in your daily life as a content creator, manager, author, whatever, but also, again: fewer parts, more consistency. Not to get too far off the general topic, but one of the big issues we’re seeing now is an increasing interest in structured content for learning content, which tends to be in its own silo away from the techcomm content. How do we bridge that? Do we combine them? Do we put everything in a single location, a single store, or do we find some way of crosswalking from, let’s say, the CCMS to the LCMS, the learning content management system? Then how do we make all of that searchable? Again, if I’m searching for a particular piece of content but I’m searching the wrong repository, it doesn’t turn up, I write it again, and now we have duplication. All of these things tie into having a much better understanding of, and much better control over, your content universe as an author or content creator.
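The underlying idea, flagging near-duplicate topics by pairwise text similarity, can be sketched in a few lines. This is a toy illustration using the standard library, not how DCL’s product or any commercial tool actually works, and the topics are invented.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Invented mini-corpus: two near-identical warnings and one unrelated topic.
topics = {
    "warn-v5": "Warning: disconnect power before opening the case.",
    "warn-v6": "Warning: disconnect power before opening the case!",
    "intro":   "This guide describes the installation procedure.",
}

def near_duplicates(topics, threshold=0.9):
    """Return pairs of topic IDs whose text similarity exceeds the threshold."""
    pairs = []
    for (a, ta), (b, tb) in combinations(topics.items(), 2):
        if SequenceMatcher(None, ta, tb).ratio() >= threshold:
            pairs.append((a, b))
    return pairs

print(near_duplicates(topics))
```

A real inventory would also normalize whitespace and casing and scale past pairwise comparison, but even this tiny sketch surfaces the two warning variants for a human to reconcile.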

Marianne Calilhanna: Yeah. When you’re starting a project like this, you need a starting point: how do I even begin to tackle this? It’s a trite saying, but you don’t know what you don’t know. If I’m in one department, I don’t know what David did over there. We have a tool called Harmonizer, and we love seeing the looks on customers’ faces when they’re gobsmacked: “I had no idea that we had this many versions,” or, “Oh my gosh, everything was right in eight of these versions, except one had a near-fatal instruction over here.” You just don’t know unless you do that inventory. It’s like another metaphor: you’re moving to a new home and you have to pack up everything. You get out all the glasses you’re going to move, and you’re like, “Why do I have 56 pint glasses for a family of four? Let’s get rid of these. Let’s clean it up.” It’s a pretty profound experience. You feel refreshed, like, okay, now I can start this massive undertaking and know that I’m doing it in an organized way.

Dipo Ajose-Coker: So, talking tactics: you want to talk to people who have experience helping you classify and structure your content and choose the model you want to use. Then you want to use services that help you detect those duplicates and make those decisions about whether or not to keep an extra copy of something, because maybe there is a reason there are two warning messages: one is for an older version of the software, and the new one is for version six onwards. Sorry, Sarah, you were going to say?

Sarah O’Keefe: Well, the moving metaphor is a great one because, A, you discover you have 56 pint glasses. Thanks, Marianne; I feel a little bit called out on that one for no reason. You throw away a bunch of them and then you move. Then, as you’re unpacking, you find 30 more, and you keep throwing things away. It’s an ongoing battle against glasses.

Dipo Ajose-Coker: Then you have that dinner party and find out that you threw away too many of them, or you threw away that special one, the one from Auntie Edna, who wanted to see it, and now you’re having Auntie Edna around and you’ve just thrown it away. All of that sort of stuff. Let’s move on. Come on, change over. So, metadata: your instruction manual, in a way. Marianne, talk to me about this.

Marianne Calilhanna: Yeah. Okay. We’re probably throwing out too many metaphors, but nonetheless, I’m going to throw out another one.

Dipo Ajose-Coker: I love them. I love metaphors.

Marianne Calilhanna: I always think of metadata and taxonomies, when you’re talking about governance and everything that goes into knowledge management and content management, as an iceberg. You’ve got all this visible stuff: content that your employees see and use, what your customers are searching for. But underneath is an even larger ecosystem, the larger part of the iceberg that supports the top part. When you think of metadata and taxonomies, I think a lot of people think, “Oh, I’m done. I’ve tagged all my content, I’ve got this taxonomy. I’m finished with my knowledge management.” I always advise shifting away from that mindset of being finished, because you’re never really done. Language is living; industry terms change. Were we using the term “large language model,” LLM, in the nineties? No. So you have to keep iterating on your knowledge management and content management, and make a point to revisit it in whatever timeframe is relevant to your organization and industry. Those are some of my thoughts about metadata and taxonomies. Sarah, what do you think?

Sarah O’Keefe: Well, nobody likes governance. Governance is the dirty work of keeping everything under control: having processes, having rules, and ensuring that the content that walks out the door is appropriate and compliant. It ties right back to the previous slide, which talks about risk. I think, Marianne, you’ve covered all the key things. What I would add is that your governance framework needs to match your risk profile. Canonically, we always talk about medical devices as something that has very heavy compliance and also a lot of risk, because if a medical device is not configured correctly, if the instructions aren’t right, if either the medical professional or the end user, the consumer, misuses it, it could have some dire effects, by which I mean dead people. Your governance framework needs to match the level of risk associated with the product, or the content, that you’re putting out the door. A video game is my canonical “doesn’t need a lot of governance” example, with a couple of exceptions: video games have warnings at the beginning about flashing lights and epilepsy. Also, gamers tend to be very, very unforgiving of slow content. There’s a wiki somewhere with all this documentation in it, and they’ll update it and make changes. The governance isn’t really there, in the sense that people can do it themselves, but if you were to tell them, “Oh, it’ll take us six months to put that update in,” that would be totally unacceptable. Your governance is going to depend on the type of product, the type of content, the level of risk, and the types of risk, and you need to take that into account.

Marianne Calilhanna: Yeah.

Dipo Ajose-Coker: I’ll just add that you could have all the rules in the world, but if you’ve got no way of enforcing them, you might as well have written them on a piece of paper and put them on the back shelf. You need a tool. I’ve got to talk about the CCMS here: a tool that is able to help you enforce the rules, and the standard helps you enforce the rules. You can create that model, but if you say these people are not allowed to change it, or you can only change this, or you can only duplicate this content in this particular scenario, you need a tool, because there’s no one sitting behind every writer saying, “Naughty, naughty. You shouldn’t have duplicated that.” You want to balance automation with human quality assurance: the tool might stop you outright, or it might just prompt you, or send a message to the manager saying, “This content has been duplicated. This content should not be duplicated. We prevent you from using this in this particular manual, because the metadata tells us that it’s not applicable.”

Marianne Calilhanna: Hey, Dipo. We did have a question come through. Were you going to say that?

Sarah O’Keefe: Yes, I was going to say the same thing. Go for it.

Marianne Calilhanna: I think it’s relevant, since you’re bringing up tools. Someone asked us to clarify what we mean by interoperability, and the CCMS is a good example. Maybe one or both of you could comment on interoperability, explain it, and make sure we’re all on the same page here.

Dipo Ajose-Coker: Yeah. First of all, the standard that we are pretty much all talking about here is DITA. DITA is designed so that you can use it with other XML; you can easily translate it and map it, create a matrix. But you also want your CCMS to be able to connect to other tools and take information from other databases. One particular example is happening in the iiRDS world; that’s another standard, used to classify parts. In the automobile industry in Germany, companies were really hesitant to move to DITA because they had these vast databases and systems that classified all their parts, and they did not know how to connect them. iiRDS was put together to help create a standard language for DITA systems to connect to and understand what’s coming from a parts system. Interoperability is your system being able to connect and exchange information intelligently and easily with other systems that you might be using within your organization. Sarah?

Sarah O’Keefe: Yeah, I think that covers it. Ultimately, there are some infamous tools that are not particularly interoperable; I’m thinking of Microsoft Word and InDesign. Usually, when we start talking about interoperability, we’re talking about a couple of different things. One is, as you said, DITA itself, which is text-based and therefore machine-processable. The other is whether the place where you’re storing your content is accessible. Can we connect into and out of it? That usually means: is there an API, an application programming interface, that allows me to either reach in or push the content out to other places it needs to go? I would say there’s a lot of work to be done in that area, because our tools are not as cleanly interoperable as I would like.

Dipo Ajose-Coker: Actually, if we’re talking about AI … sorry, Marianne. If we’re talking about AI, there’s an interesting buzz term coming out: MCP, the Model Context Protocol. It’s a middle layer, a middle standard; I think it was Anthropic that put it forward. Everyone’s talking about agentic AI, and MCP allows your LLM to interact with the clients that are being built. Loads of people are building these little clients to help you write stories or create an image, and each has to connect to a large language model. Previously, when a new model came out, all the developers had to go and change their code. MCP stands in that middle bit and provides the interoperability between large language models and client AI applications.
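For the curious, MCP messages are JSON-RPC 2.0. The sketch below shows the shape of a tools/call request; the tool name and arguments are invented for illustration, and method names should be checked against the current MCP specification rather than taken from here.

```python
import json

# Illustrative MCP-style JSON-RPC request. "tools/call" is a method name
# from the MCP spec; "search_docs" is a hypothetical tool a server might expose.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",
        "arguments": {"query": "pump overview"},
    },
}

# Because the envelope is standardized, any MCP-speaking client or model
# can exchange this without per-model glue code.
wire = json.dumps(request)
print(json.loads(wire)["method"])
```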

Sarah O’Keefe: There’s a question related to this, which I’ll pick up. The poster says, “My dev teams want all the content in markdown for AI consumption. Metadata and semantic tagging is stripped out of our beautiful XML.” Yeah, this is a huge problem. We’ve got a couple of projects that are … to the person that wrote the question: it could be worse, because we have customers where the dev team, or the AI team, actually, is requesting PDF. As bad as you may feel about your markdown situation, it could be a whole lot worse. Ultimately, this is a problem of interoperability, because the AI-building team didn’t think too carefully about what input they’re going to get. You could go to them and say, “I have this amazing DITA content. I can feed it to you in all sorts of ways, with taxonomy, with classification, with everything,” and they say, “Cool. Give me PDF,” or, “Strip it down to HTML,” which is at least better than PDF. Even your markdown example, I mean, it’s not great, but it could be so much worse. This is a problem because if we, as content people, are providing inputs to the AI, then we need to be stakeholders in how that AI is going to accept the content, and not just be told “give me PDF” and walked away from. There’s a related question about the best medium to feed LLMs, and the answer is, of course, it depends, although I’ll let the two of you jump in. I would say that if you’re starting from DITA, from structured content, then probably you’re looking at moving your structured content into some sort of knowledge graph and using that as a framework to feed the LLM. That would be my knee-jerk, context-free answer.
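If a dev team does insist on markdown, one way to lose less is to carry the semantic metadata along rather than stripping it. This sketch converts a small DITA topic to markdown with the topic attributes preserved as front matter; the front-matter mapping is our own convention for illustration, not a standard, though `audience` and `product` are real DITA attributes.

```python
import xml.etree.ElementTree as ET

# Invented DITA concept topic carrying selection metadata as attributes.
dita = """<concept id="pump-overview" audience="admin" product="widget">
  <title>Pump overview</title>
  <conbody><p>The pump circulates coolant through the loop.</p></conbody>
</concept>"""

root = ET.fromstring(dita)
# Keep the metadata as front matter instead of discarding it.
front_matter = "\n".join(f"{k}: {v}" for k, v in sorted(root.attrib.items()))
body = "\n\n".join(p.text for p in root.iter("p"))
markdown = f"---\n{front_matter}\n---\n\n# {root.find('title').text}\n\n{body}"
print(markdown)
```

The AI pipeline still gets its markdown, but the audience and product facets survive for filtering and retrieval downstream.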

Dipo Ajose-Coker: Yeah. That segues us into this slide: training your writers. AI is not going to fix your bad input. Then you’ve got to talk about IP, intellectual property, copyright, audit trails. Let’s dig into this a little bit. How do you build something meaningful, Sarah?

Sarah O’Keefe: Right. Garbage in, garbage out. I’ve come up with a couple of other acronyms around this, but again, you have to do the work. You have to have good content: content that is relevant, contextual, structured, and accurate. One of the key reasons we’re running into this “oh, just let the AI write all the content” problem, and this is kind of like “anyone can write” 2.0, the AI can write, cool, is that at the end of the day, there’s a lot of really, really bad content out there. When we say, “No, you need content professionals,” the C-level person is looking at their content and saying, “But what I have is not good. The AI can be equally not good, and it’s fast, because it’s a machine.” We have to create useful, valuable, insightful, contextual content so that you can build an AI on top of it to do interesting things, and not resort to generative AI to just create garbage.

Marianne Calilhanna: Yeah. And the hyper-focus, too. Think of someone who’s very specialized, maybe a researcher looking for advances in CRISPR technology for pediatric oncology; I’m just making that up. You want a system, an environment, that looks only at the literature relevant to your research. That’s a great example where structured content, maybe combined with RAG, makes sure you stay within that specialized subject area that’s really critical for you.
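The metadata-scoped retrieval Marianne describes can be sketched as a pre-filter in front of ranking. This is a toy example: the corpus and tags are invented, and a trivial keyword score stands in for a real vector search.

```python
# Metadata-scoped retrieval for RAG: filter the corpus by taxonomy tags
# *before* ranking, so answers stay inside the specialty.
corpus = [
    {"text": "CRISPR base editing in pediatric leukemia trials.",
     "tags": {"crispr", "pediatric-oncology"}},
    {"text": "CRISPR use in crop engineering.",
     "tags": {"crispr", "agriculture"}},
]

def retrieve(query_terms, required_tags, corpus):
    """Rank only documents carrying all required tags."""
    scoped = [d for d in corpus if required_tags <= d["tags"]]
    return sorted(scoped,
                  key=lambda d: sum(t in d["text"].lower() for t in query_terms),
                  reverse=True)

hits = retrieve({"crispr", "leukemia"}, {"pediatric-oncology"}, corpus)
print([h["text"] for h in hits])
```

The agriculture document never enters the ranking at all, which is the point: the structure, not the ranker, keeps the chatbot inside the subject area.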

Dipo Ajose-Coker: Yeah. As you were talking, I was thinking about that old analogy: give a thousand monkeys a thousand typewriters and they’ll eventually come up with the works of Shakespeare, but in the meantime you’re going to be reading a whole load of gobbledygook.

Sarah O’Keefe: Yeah. The version of that I saw was, “A thousand monkeys and a thousand typewriters, eventually they’ll produce Shakespeare, but now thanks to the internet, we know this is not true.”

Dipo Ajose-Coker: Okay. Well, structured content is the foundation; we’ve just established that. It’s what turns your AI’s potential into performance. What else is involved here? Structured content fuels your AI. Marianne, talk to us about this a little bit.

Marianne Calilhanna: Yeah. I think we’ve sort of beat this to death; anyone who’s talked to me has probably heard me say so many times that structured content is the foundation for innovation. It’s the starting block. Also, the kinds of organizations that DCL, RWS, and Scriptorium work with are working at scale, with large volumes. That’s when you need to shift to this way of working and thinking, because to enable automation and intelligent reuse at scale, you really need to consider the move to structured content, so that you can deliver things without manual intervention. I can have that great chatbot experience, the one I’d never had in all these years, because I know that behind it is modular, tagged content that is hyper-focused on what I needed, on my problem.

Dipo Ajose-Coker: Yeah. You’re able to deliver to different channels without retooling everything. There’s no need to rework it: “We want to create a PDF this time. Could you rearrange it?” The metadata allows the AI, or whatever tool you’re pushing the content into, to understand that this is going to a mobile device, or this answer is for a chat, or this answer is going into the service manual for someone with this level of qualification. All of that is what allows you to scale: create the content once, and when you update it, push it out to whatever channel you need to. If you always have to think, “It’s going to take us three weeks to get it all out there because we put in a new comma,” project managers are going to say, “No, forget it. We’ll wait for the next big update.” I’m sure half the people here have heard that phrase: “Let’s wait for the next big update before we make those changes.” If you’re able to make a tiny little change and push it out automatically at scale, that’s the magic spot you’re looking for. What’s blocking AI readiness, Sarah?

Sarah O’Keefe: It’s always culture. It’s always change resistance. The others are interesting, and these are the three, but ultimately it’s change resistance. We’ve already seen a couple of comments in the chat about the AI team building out something that’s incompatible with what the content team is doing. Why is that conversation not happening? Well, because it never occurred to them that there were stakeholders. They don’t think of content as a thing that gets managed; it’s just an input, kind of like, I don’t know, flour and sugar or something. Change resistance, organizational problems, organizational silos. When we talk about silos, a lot of the time we’re talking about systems: the software over here and the software over there can’t talk to each other. But even more so, the people over here and the people over there refuse to talk to each other. And when I say refuse, in many cases they are incentivized not to talk to each other, because their upper management doesn’t talk, doesn’t collaborate. Sometimes they’re internal competitors. Have you seen those environments where the two groups hate each other? “Oh, no, we don’t talk to them. They’re terrible. They live in that state over there that we don’t like.”

Dipo Ajose-Coker: Marketing gets it all the time.

Sarah O’Keefe: They’re in Canada, or they’re in the US, or they’re in France, or they’re in, I don’t even want … They’re in X location. I’ve heard a lot of them and you know how those people are, and it’s like, “Oh my Lord. You work for the same company.”

Marianne Calilhanna: Today, with global organizations working in a hybrid or remote capacity, you’re not even going to bump into those people getting a coffee like you used to in the old days, when we were all in an office together or taking the same train to work. We got a question in the dialogue box that made me think we missed a bullet point here, and it’s convincing management. So it’s money. Another thing blocking this is dedicated funding to work a different way. How do you convince management to do that? Great question.

Sarah O’Keefe: Yeah. The business case is really, really important. There are a number of problems there, but the big-picture problem is that content people, in general, are not accustomed to, or talented at, take your pick, getting large dollar investments for their organization. They’re sort of like, “Oh, we’re always last. We never get anything. We’re over in the corner with no stuff.” When we start talking about structured content at scale, and these scalability systems, an assembly line or a factory model for content, and automation and content operations, those are big dollar investments. That’s setting aside the question of expensive software. The software is not cheap, but that’s not really the issue. The issue is the change: changing how people work, how their jobs evolve, needing to not just put your head down and write the world’s greatest piece of content, but rather say, “Oh, you know what? Marianne wrote this last year, and if I modify one sentence, I can use it in my context also.” Now we have one asset and we’re good, instead of making a copy because I don’t like the way Marianne wrote it and rewriting it in my voice, that type of thing. AI, again, is going to require us to think about our content across the organization and across the silos, because at the end of the day, the AI overlord, the chatbot that’s out there slurping up all this information and regurgitating it, does not care that I work in group A, Marianne’s in group B, and Dipo’s in group C, and we don’t talk to each other. The chatbot, the world, the consumer sees the three of us as part of the same company. If we’re all part of the same organization, why shouldn’t the content be consistent? And they’re not wrong.

Dipo Ajose-Coker: Yeah. I actually did a presentation on building your business case for DITA. One of the things I said is that content operations needs to get away from the mindset it’s been put in, that it’s a cost center. It’s actually a revenue generator, one of those final deciders. Think of any company: we get bids from different companies, and one of the things they want to see is the documentation. I tell you, when I’m looking at buying a new water pump, because I just got flooded, I’m going to compare everything. Compare the prices, go to all the review sites, and in the end I’ve got two or three choices. Then I’m going to go and look at the documentation and see how well written it is. Is there something in there that will help me make that final decision? Nine times out of 10, there’s something in the documentation that helps me make that final decision. We’re running a little behind here. AI readiness: are your building blocks sorted? Before adopting AI, are your blocks in order? That’s the question. What are the things you need to look at? We’ve talked about it, but I just want us to summarize it on this slide. Marianne?

Marianne Calilhanna: Yeah. I think we’ve hit everything here. Governance and structure. We did miss, again, that executive buy-in. I keep going back to that question. We joke that now, for organizations looking to adopt a structured content approach, to get that executive buy-in you just tell your management team, “We need it to enable AI.” AI is that [inaudible 00:45:39].

Dipo Ajose-Coker: Yes. That’s the magic word now, isn’t it?

Marianne Calilhanna: Open the wallet. Yeah. But then that allows you to do the real things that are listed here: educate and align. At my company, we’ve started a bi-monthly AI literacy lab where we watch a 15-minute video on an AI topic. It doesn’t even have to be relevant to us, but then we have a conversation. Boy, is that sparking communication across all our different teams, and it’s getting us as a company thinking about so many different things in the vast AI world. Again, I’m going to keep saying it again and again: structured content is foundational.

Dipo Ajose-Coker: Sarah, anything to add?

Sarah O’Keefe: No, I think we’ve covered it. What else we got?

Marianne Calilhanna: Yeah.

Dipo Ajose-Coker: I love this one.

Marianne Calilhanna: I think this is really important,

Dipo Ajose-Coker: Sarah?

Sarah O’Keefe: Yeah. Take a look at where you are. Many, many organizations are down in that siloed bucket. There’s a more detailed explanation of this, but this is a bog-standard five-step maturity model, and you really just want to think about how integrated your content is and how well it’s done. Is it in silos that are not connected? Am I doing some reuse, maybe adding taxonomy, and working at the enterprise level? Have we unified our content, are we managing it, and is content considered strategic? That’s the big picture of what we’re looking at here. Now, you do not want to go from level one to level five in four weeks. Very, very bad things will happen, mostly to you. So whichever level you’re at, start thinking about how to move up one level. How do I make that improvement? Make incremental, reasonable improvements while you’re in flight with your content, because almost certainly you can’t throw it away and start over. If you’re in a startup and you’re brand new, then congratulations, because you can pick a level and say, “This is where we need to be for now, for our current size and company maturity,” and think about what it looks like to move up as you go. But really, really think honestly about where you are on this and what you can do with it. Then, a content audit. Understanding what you have, both on the back end, stored, and on the delivery front end, can be very helpful in figuring out what your next step needs to be.

Dipo Ajose-Coker: Then you consult your experts. It’s an ongoing engagement. What are those steps? We’re here for you to speak with, and we’ll give our contact details out. If you want to look at your content strategy, talk to Scriptorium. Once you’ve talked about the strategy and set up your model, you want to start that migration: detecting duplicates and applying that strategy to how you deal with your content and how you tag it. That’s where DCL is there for you. Then, if you’re looking for the content solution, do I want this type of CCMS, do I want it based on this standard? Then you come to RWS. Together, it’s a process: you audit your strategy and your implementation all the time, and you get us all to talk to each other. That’s why we thought it would be great having all three of us in here. Don’t create silos. Bring us all together and get us to talk to each other, rather than talking to one without letting them know where you want to go, or where you’ve been.

Marianne Calilhanna: DCL stands for Data Conversion Laboratory, and we’ve been asked to convert this content, sure, but there are many times when people have come to us and it’s like, “You really would benefit from speaking to Scriptorium, or to a strategic organization.” We much prefer working in this order, because we know that when it’s time to convert that content and migrate it to RWS, it’s going to go smoother for everyone, most importantly the customer. We can trust what the information architects at Scriptorium have identified, we know we have a very clearly defined target for that conversion and migration, and then we know it’s going to go seamlessly right into RWS. I just can’t say that enough.

Sarah O’Keefe: On this slide, there’s an interesting point, and we want to be careful. It’s not that you cannot do AI with unstructured content; it’s that structured content means you’re going to have more consistency, more predictability, and essentially better-machined content parts that you’re feeding into the AI assembly line. Hypothetically, you can feed unstructured content into the AI; the problem is you have to do way, way more work to get the AI to perform. I don’t know about you, but every day there’s another example of ChatGPT churning out inaccurate information. If I fed it better information, or, not me, if it consumed better information, it would have better results. Structure means that we are enforcing consistency, and we can get taxonomy in there, and therefore we can do a better job with AI processes. That’s what we’re saying here, or at least that’s what I’m saying here.
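A small Python sketch of the consistency point: structured source yields self-describing, cleanly bounded chunks for an AI pipeline, while unstructured text has to be split on guesses. The XML shape and function names below are invented for illustration, not a real DITA document or a specific tool’s API.

```python
# Rough sketch of why structure helps an AI pipeline: the structured
# source produces chunks that carry their own context, while the plain
# blob can only be cut into arbitrary windows with no metadata.
import xml.etree.ElementTree as ET

structured = """
<topic id="pump-install">
  <title>Installing the pump</title>
  <step>Shut off the water supply.</step>
  <step>Connect the inlet hose.</step>
</topic>
"""

def chunks_from_structure(xml_text):
    root = ET.fromstring(xml_text)
    title = root.findtext("title")
    # Each chunk keeps its context (topic title, step number) as metadata,
    # so a retriever can cite exactly where an answer came from.
    return [
        {"topic": title, "step": i + 1, "text": step.text}
        for i, step in enumerate(root.findall("step"))
    ]

def chunks_from_plain_text(blob, size=40):
    # Unstructured fallback: fixed-size windows that may cut mid-sentence,
    # with no metadata attached to any chunk.
    return [blob[i:i + size] for i in range(0, len(blob), size)]

structured_chunks = chunks_from_structure(structured)
plain_chunks = chunks_from_plain_text(
    "Installing the pump. Shut off the water supply. Connect the inlet hose."
)
```

The structured chunks arrive with topic and step context attached, which is the “better-machined content parts” idea; the plain-text chunks carry no such context, so the downstream AI has to infer it, and that inference is where errors creep in.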

Dipo Ajose-Coker: Yeah. Why are we getting hallucinations? Well, AI, or the large language models, were trained on unstructured content, for the most part. They hoovered up whole books and everything, without really structuring it, and so it’s able to make things up. Imagine if it had only been trained on structured content; the answers would be better. I think we’ve come to the end here. We said we’d try and leave a little bit of space, one, for you to contact us. If you would like a copy of the slides we’re using, you can write to any of us; our email addresses are up there. Get in contact with us and we’ll be happy to send the slides. Set up a conversation with us. If you would like all three of us together, we’re quite happy to do that: come into your organization, talk to you, and get the right experts in to guide you along your journey. Questions and answers, Q&A session. Trish, what have we got?

Trish: Well, we’ve got a couple here. I do want to remind our attendees that we will send out a recording link to all those who registered, and I will include everyone’s contact information in the email. Just a reminder: use the Q&A, not the chat, for any of your questions. They look very interesting. Is there a benchmark on how much energy processing is needed for AI to work through structured versus unstructured content?

Sarah O’Keefe: That’s a great idea. Not to my knowledge.

Marianne Calilhanna: That’s a really good question. We do know, of course, that AI uses a lot of energy and resources. I talk with my colleague, Mark Gross, about that a lot. He’s a former nuclear engineer and a pragmatic person, and he always mentions that the energy and resource side will catch up. AI’s going so fast over here, and we know this is an issue, but the processing is going to get better over time. I would love to see a benchmark like that as well. I’m going to start looking for that.

Dipo Ajose-Coker: Yeah.

Trish: Another one, and we may run out of time. By the way, should you have any questions we don’t get to, please reach out and contact Sarah, Marianne, or Dipo. Are there studies that prove definitively that structured content improves accuracy with LLMs?

Sarah O’Keefe: Also a great idea. Again, not to my knowledge. It’s actually a very hard question to study, because you’d want to compare structured versus unstructured consumption of the same exact text, say one Word file and one set of DITA topics, and that never happens. What you then have to tease out is: when we moved this to structured content, we fixed all the redundancy, we improved the consistency, and we fixed the formatting inaccuracies, so how much of that plays into the improvements that we may or may not see? Another great question. I don’t know if we have any academics on the call, but if we do, I would challenge them to go look into that, because it sounds fun.

Dipo Ajose-Coker: Yeah.

Trish: Well, it looks like we’ve run out of time. Great discussions. Hope that you will join us back again at CIDM Webinars. For that, I’ll say goodbye and thank you so very much for all who attended and our panelists.

Dipo Ajose-Coker: Thanks so much for hosting us.

Sarah O’Keefe: Thank you, Trish. Thanks, everyone.

Marianne Calilhanna: Bye. Thanks, everyone.

Dipo Ajose-Coker: Thanks. Bye.

Trish Grindereng: Bye-bye.