January 26, 2026

From black box to business tool: Making AI transparent and accountable

As AI adoption accelerates, accountability and transparency issues are accumulating quickly. What should organizations be looking for, and what tools keep AI transparent? In this episode, Sarah O’Keefe sits down with Nathan Gilmour, the Chief Technical Officer of Writemore AI, to discuss a new approach to AI and accountability.

Sarah O’Keefe: I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do.

Nathan Gilmour: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool. But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, are largely black boxes. We want to bring clarity to these black boxes and make them transparent, because organizations do want to implement AI tools to offer efficiencies or optimizations. However, information security policies may not allow it.

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Sarah O’Keefe: Hey everyone. I’m Sarah O’Keefe. Welcome to another episode. I am here today with Nathan Gilmour, who’s the Chief Technical Officer of Writemore AI. Nathan, welcome.

Nathan Gilmour: Thanks, Sarah. Happy to be here.

SO: Welcome aboard. So tell us a little bit about what you’re doing over there. You’ve got a new company and a new product that’s, what, a year old?

NG: Give or take, yep.

SO: Yep. So what are you up to over there? Is it AI-related?

NG: It is actually AI-related, but not in the traditional sense. We’ve built a tool that helps technical authoring teams convert content from traditional Word or PDF formats, which make up the bulk of the technical documentation ecosystem, to structured authoring. That means they get all of the benefits of reuse, easier publishing, and high compatibility with various content management systems, and they can do it in minutes where traditional conversions could take hours. So it really helps authoring teams get their content out to the world at large in a much more efficient and controlled fashion.

SO: So I pick up a corpus of 10,000 or 20,000 or 50,000 pages of stuff, and you’re going to take that, and you’re going to shove it into a magic black box, and out comes, you said, structured content, DITA?

NG: Correct.

SO: Out comes DITA. Okay. What does this actually … Give us the … That’s the 30,000-foot view. So what’s the parachute-level view?

NG: Perfect. Underneath the hood, it’s actually a very deterministic pipeline, which means there is a lot more code supporting it. It’s not an AI inferring what it should do; there’s actual code that guides the conversion process first. So going from, let’s say, Word to DITA, there are tools within the DITA Open Toolkit that facilitate that much more mechanically, rather than trusting an AI to do it. We know that AI struggles with structure, and as context windows expand, it becomes more and more inaccurate. So if we feed these models far more mechanically created content, they become much more accurate, because you’re only trusting them with the more nuanced parts of the process. There’s a big difference between determinism and probabilism: determinism is the mechanical conversion of something, while probabilism is allowing the AI to infer a process. That’s where we differ. Our process is much more deterministic versus allowing the AI to do everything on its own.
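
To make that two-stage shape concrete, here is a minimal Python sketch of a deterministic-first pipeline. It is an illustration under assumptions, not Writemore’s implementation: the `convert-to-dita` command and the `ask_model` callable are hypothetical placeholders.

```python
import subprocess
from pathlib import Path
from typing import Callable

def mechanical_convert(source: Path, out_dir: Path) -> list[Path]:
    """Stage 1 (deterministic): scripted conversion, no model involved.
    The convert-to-dita CLI and its flags are hypothetical placeholders."""
    subprocess.run(
        ["convert-to-dita", str(source), "--out", str(out_dir)],
        check=True,  # fail loudly rather than let a model paper over errors
    )
    return sorted(out_dir.glob("*.dita"))

def refine_topics(topics: list[Path], ask_model: Callable[[str], str]) -> dict[Path, str]:
    """Stage 2 (probabilistic): one narrowly scoped model call per topic,
    so each context window stays small and the task stays focused."""
    results = {}
    for topic in topics:
        prompt = (
            "Classify this DITA topic as concept, task, or reference, "
            "and explain your determination. Do not rewrite the content.\n\n"
            + topic.read_text(encoding="utf-8")
        )
        results[topic] = ask_model(prompt)
    return results
```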

SO: So is it fair to say that you combined the … And for deterministic, I’m going to say scripting. But is it fair to say that you combined the DITA OT scripting processing with additional AI around that to improve the results?

NG: Correct. It also expedites the results: instead of having a human do much of the semantic understanding of the document, we allow the AI to do it as a far more focused task. Machines can read faster.

SO: Okay. And so for most of us, when we start talking about AI, most people think large language model, and specifically ChatGPT, but that’s not what this is. This is not a consumer-facing front end to go play with. This is a tool for authors.

NG: Correct. And even further to that, it’s a partner tool for authors. It allows them to continue authoring in a format that they’re familiar with. Well, let’s take Microsoft Word, for example. Sometimes the shift from Word to structured authoring could be considered an enormous upheaval. Allowing authors to continue authoring in a format that they’re good at and they’re familiar with, and then have a partner tool that allows them to expedite the conversion process to structured authoring so that they can maintain a single source of truth, makes things a little bit better, more manageable, and more reliable in the long run. So instead of having to effectively cause a riot with the authoring teams, we can empower them to continue doing what they’re good at.

SO: Okay. So we drop the Word file in and magically DITA comes out. What if it’s not quite right? What if our AI doesn’t get it exactly right? I mean, how do I know that it’s not producing something that looks good, but is actually wrong?

NG: Great question. That’s where, prior to doing anything further, there is a review period for the human authors. In the event that the AI does make a mistake, it’s completely transparent: the output, the payload, as we describe it, comes with a full audit report. Every determination that the AI makes is traced, tracked, and explained. Further to that, the humans can take that payload out, open it up in an XML editor. So at this point in time, the content is converted; it is ready to go into the CCMS.

Prior to that, it can go to a subject matter expert who is familiar with structured authoring for a final validation of the content, to make sure that it is accurate. The biggest differentiator, though, is that the tool never creates content. The humans need to create the content, because they are the subject matter experts within their field. They create the first draft; the tool takes it and converts it, but doesn’t change anything. It only works with the material as it stands. Once that is complete, it goes back into another human-centered review, so there are audit trails and it is traceable, and there is a final touchpoint by a human prior to the final migration into their content management system.
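
As a sketch of what such an audit trail could look like, here is one way to record determinations alongside the converted payload. The field names and values are illustrative assumptions, not Writemore’s actual schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Determination:
    """One traceable decision the model made during conversion."""
    topic_id: str      # which output topic the decision applies to
    decision: str      # e.g., "classified as task"
    rationale: str     # the model's reported reasoning
    source_span: str   # where in the source document the content came from
    timestamp: str     # when the determination was made (UTC)

def write_audit_report(determinations: list[Determination], path: str) -> None:
    """Serialize the audit trail that ships alongside the converted payload."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump([asdict(d) for d in determinations], f, indent=2)

# Hypothetical example record for a reviewer to inspect.
record = Determination(
    topic_id="t-0042",
    decision="classified as task",
    rationale="Contains ordered steps under an imperative heading.",
    source_span="Word heading 'Install the pump', pages 12-13",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
write_audit_report([record], "audit-report.json")
```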

SO: So you’re saying that basically you can diff this. I mean, you can look at the before and the after and see where all the changes are coming in.

NG: Correct.
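
The before-and-after comparison Sarah describes works with standard tooling; for instance, Python’s difflib can produce a reviewable diff. The file paths below are placeholders.

```python
import difflib
from pathlib import Path

# Placeholder paths: the pre-conversion source text and the converted output.
before = Path("before/pump-install.txt").read_text(encoding="utf-8").splitlines()
after = Path("after/pump-install.dita").read_text(encoding="utf-8").splitlines()

# A unified diff shows exactly which lines changed during conversion,
# so a reviewer can confirm that no content was added or altered.
for line in difflib.unified_diff(before, after, fromfile="before", tofile="after", lineterm=""):
    print(line)
```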

SO: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do.

NG: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool. But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, are largely black boxes. Where we want to come in is to bring clarity to these black boxes, to make them transparent, I guess you could say. Organizations do want to implement AI tools to offer efficiencies or optimizations, but their information security policies may not allow it.

One of the added benefits that we have baked into the tool from a backend perspective is its ability to be completely internet-unaware. If an organization has the capital and the infrastructure to host a model, the tool can be plugged directly into their existing AI infrastructure and use its brain, which, realistically, is what the language model is. It’s just a brain. So if companies have invested the time and the capital to build out this infrastructure, the Writemore tool can plug right into it and follow those preexisting information security policies, without having to worry about anything going out to the worldwide web.
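
As one illustration of that internet-unaware setup: many self-hosted inference servers expose an OpenAI-compatible API, so a client can point at an internal endpoint instead of the public web. The endpoint URL and model name below are placeholders, not Writemore specifics.

```python
from openai import OpenAI

# Point the client at an internal, self-hosted inference server. Requests
# stay inside the network boundary the organization controls.
client = OpenAI(
    base_url="http://models.internal.example:8000/v1",  # placeholder internal endpoint
    api_key="not-needed-locally",  # many local servers ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder: whatever model the org hosts
    messages=[{"role": "user", "content": "Classify this DITA topic: ..."}],
)
print(response.choices[0].message.content)
```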

SO: So the implication is that I can put this inside my very large organization with very strict information security policies and not be suddenly feeding my entire intellectual property corpus to a public-facing AI.

NG: That is entirely correct.

SO: We are not doing that. Okay. So I want to step back a tiny bit and think about what it means, because it seems like the thing that we’re circling around is accountability, right? What does it mean to use AI and still have accountability? And so, based on your experience of what you’ve been working on and building, what are some of the things that you’ve uncovered in terms of what we should be looking for generally as we’re building out AI-based things? What should we be looking for in terms of accountability of AI?

NG: The major accountability question for AI is what happens if a business model changes. Let’s focus on the large players in the market right now. There will always be risk in using these large language models that are publicly facing. A terms-of-service change could mean that all of the information organizations feed into these tools becomes part of a training data set later on down the road. It’s hard to determine what will happen in the future.

So the ability to use online and offline models encourages the development of very transparent tools. Even if the Writemore tool is using a cloud model, I still hold the model accountable to report its determinations; it’s not just making things up, so to speak. There’s a lot that goes into it, and there’s a lot that we don’t know about these tools, to be totally honest. We’re still trying to determine what this looks like in the broader picture, in broader use cases, because the industry is evolving so quickly that, quite simply, we don’t know what’s coming.

SO: Sounds to me as though you’re trying to put some guardrails around this so that if I’m operating this tool, then I can look at the before and after and say, “Don’t think so.” I mean, presumably it learns from that, and then my results get better down the road. Where do you think this is going? I mean, where do you see the biggest potential and where do you see the biggest risks or opportunities or … I’ll leave it to you as to whether this is a positively or a negatively tilted question.

NG: There’s a lot of potential to incorporate this into organizations that can’t use those tools today. Like we mentioned earlier, organizations are looking into this. Municipalities are looking into AI. But with the state of the more open models right now, it’s very hard to say. I keep circling back to the ability to use smaller language models: they are not only much more efficient to operate, they’re also, quite simply, cheaper to operate. We know that the large language models require enormous computing power, but if they’re given focused tasks, assisting in the classification of topics, say, or fulfilling requests to pull files, you can get away with smaller levels of compute. And in today’s day and age of computing, prices are coming down as density goes up, so it’s cheaper to run a model at higher capacities than it has ever been. And it’s only going to improve over time.

So empowering organizations to incorporate these tools to streamline their own workflows is going to be very important to them, and being able to abide by their information security policies only makes the idea more compelling. On top of that, encouraging organizations to take full control of their documentation, without necessarily needing it to go out of house, lets them keep internal costs down while still maintaining the security policy that their content doesn’t leave the organization. There will always be room for partner organizations to come in and help with content strategy, but the conversion itself can be done in-house using their tools, their content, and their teams. That really helps keep costs down; they drive the priority lists, and they can do everything they need to do to maintain that control.

SO: Now, we’ve touched largely … Or we’ve talked largely about migration and format conversion. But there’s a second layer in here, right? Which we haven’t mentioned yet.

NG: There is. There’s also the ability, during the conversion phase, to have an AI model do light edits. Being able to feed it a style guide to abide by means the churn we see with these technical teams isn’t nearly as impactful. Technical authors still write their content, and if a new person joins the team, they can author the material just as they normally would; the tool then takes over to ensure it meets the corporate style guide, the corporate language, and so on, which expedites the process. Onboarding time for new team members shrinks as well. So like I said, it’s very much a partner tool to expedite the processing of content: authoring, conversion, migration, getting into a new CCMS. That’s the real empowerment behind it.
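
A style-guide pass like that could be framed as a narrowly scoped editing prompt. This is a minimal sketch under assumptions; the rules shown and the `ask_model` callable are illustrative, not an actual corporate style guide or Writemore’s API.

```python
from typing import Callable

# Placeholder excerpt of a corporate style guide.
STYLE_RULES = """\
- Use sentence case for headings.
- Use "select," not "click."
- Address the reader as "you."
"""

def apply_style_guide(topic_text: str, ask_model: Callable[[str], str]) -> str:
    """Ask the model for light edits only: conform to the style rules
    without adding, removing, or reordering any information."""
    prompt = (
        "Apply these style rules to the text. Make only light edits; "
        "do not change the technical meaning or the structure.\n\n"
        f"Rules:\n{STYLE_RULES}\nText:\n{topic_text}"
    )
    return ask_model(prompt)
```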

SO: And the style guide conformance. So I think we’re assuming that the organization has a corporate style guide?

NG: Assuming, yes.

SO: Okay. Just checking.

NG: But then again, that’s…

SO: Asking for a friend.

NG: Of course.

SO: So if they don’t have one, where’s the corporate style guide come from?

NG: That’s something an organization can either generate internally or, as mentioned, build by working with an external vendor who specializes in these kinds of things, so that all of their documentation follows the same voice and tone. The better the documentation, the better the trust in the content overall.

SO: So, can we use the AI to generate the corporate style guide?

NG: Probably. Yes. Short answer, yes. Longer answer, not without very close attention to it.

SO: And doesn’t that also assume that we have a corpus of correctly styled content so that we can extract the style guide rules?

NG: There’s a lot more. Yeah.

SO: So what I’m working my way around to is: if we have content chaos, if you have an organization that doesn’t have a style guide, doesn’t have consistency, doesn’t have all these things, can you separate out what is the work that the humans have to do, and what is the work that the machine can do, to get to structured, consistent, correct voice and tone and all the rest of it? How do you get from the primordial soup of content goo to structured content in a CCMS?

NG: Great question. Typically, that starts with education. We work with the teams to identify these gaps first. We don’t just throw in a tool and say, “Good luck, hope for the best,” because we see time and time again, even in manual conversion processes, that that simply doesn’t work. Taking the time to work with teams and provide them with the skills and the knowledge to be successful serves a much longer-term positive outcome. If we educate these teams on what any tool realistically needs, the accuracy of the tool goes up in the long run. So you’re seeing multiple benefits on multiple sides.

So to your point about primordial soup: working with teams to identify these gaps and issues, and to establish the standards that should go into the content before anything else, sets up not only the teams for success in the long run but also any tools they want to implement down the road. It all starts with strong content going in because, as the adage goes, garbage in, garbage out. If we can clean up the mess first, or work with the teams first to establish these standards, then the quality of the output only goes up.

SO: Yeah. I think to me, that’s the big takeaway, right? We have these tools, and we can do interesting things with them, but at the end of the day, we also have to augment them with the hard-won knowledge of the people. You mentioned the subject matter experts, the domain experts, the people inside the organization who understand the regulatory framework or the corporate style guide, all of those guardrails that make up what it is to create content in this organization in a way that reflects this organization’s priorities and culture and all the rest of it.

NG: And taking the time to educate users is a far less invasive process than exporting bulk material, converting it manually, and handing it back. Realistically, if we take that road, we’re not educating the users, and we’re not empowering them to be successful in the long run. All we’ll end up doing is all the hard work, and then in one, two, or five years we run into the same issue: we’re back to a primordial soup of content, and it’s another mess. If we start with the education and the empowerment and then work toward the implementation of tools, the longer-term success will be realized.

SO: Well, I think, I mean, that seems like a good place to leave it. So Nathan, thank you so much. This was interesting and I look forward to seeing where this goes and how it evolves over the next … Well, we’re operating in dog years now, so over the next month, I guess.

NG: So true. And thanks, Sarah, for having me on.

SO: Thanks, and we’ll see you soon.

Conclusion with ambient background music

CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

For more insights on AI in content operations, download our book, Content Transformation.