From black box to business tool: Making AI transparent and accountable
As AI adoption accelerates, questions about accountability and transparency are mounting. What should organizations be looking for, and what tools keep AI transparent? In this episode, Sarah O'Keefe sits down with Nathan Gilmour, the Chief Technical Officer of Writemore AI, to discuss a new approach to AI and accountability.
Sarah O’Keefe: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do.
Nathan Gilmour: It is very important because of information security policies. AI is this brand-new, shiny, incredibly powerful tool. But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, are largely black boxes. We want to bring clarity to these black boxes and make them transparent, because organizations do want to implement AI tools to gain efficiencies or optimizations. However, their information security policies may not allow it.
