Who is Responsible for Gen AI? The Matter of Accountability
Gen AI is one of the hottest technological trends of the past few years, with the number of people who regularly use it for their work steadily on the rise. However, the benefits of Gen AI also come with many risk factors, from its potential to spread misinformation to the ways criminals can weaponize it for cyberattacks. As such, organizations need to be ready to reckon with these ethical considerations before they adopt AI-powered technologies, placing accountability at the heart of their digital transformation plans. But what does accountability really mean, and do we understand Gen AI well enough to hold anyone accountable for it?
Generally speaking, accountability means answering for the decisions you make and the actions you take, measured against some ethical standard. It also means that if you fall short of that standard, actions will be taken to address the failure. It is all about taking ownership of what you do and accepting the consequences. A failure to show accountability can badly hurt one’s reputation, while those who hold themselves accountable are more highly regarded and generally seen as more trustworthy.
The concept of accountability is more complicated when it comes to AI because the technology can generate content and make decisions independently of direct human instruction, which raises serious ethical considerations. Yet no matter how “intelligent” AI might be, it has no autonomy or intent, meaning it cannot be held accountable in any meaningful sense. You cannot punish an algorithm for not staying true to its word. This raises the question: who is accountable for what an AI does? The company that made it, the individuals who use it, or the government that regulates its use?
With so much uncertainty around Gen AI and its usage, we don’t really have any hard answers yet. So at this point, everyone who could be responsible for what AI does needs to take action. We’ve already seen widespread regulatory responses globally: the UK held the AI Safety Summit at Bletchley Park in November 2023, and while British Prime Minister Rishi Sunak said his country’s approach wasn’t to “rush to regulate”, he still stressed that only governments can keep people safe from AI’s risks. Last year also saw the President’s Council of Advisors on Science and Technology (PCAST) launch a working group on Gen AI to determine how to ensure that the technology is developed and deployed as equitably, responsibly, and safely as possible. Meanwhile, the National Institute of Standards and Technology (NIST) introduced an AI Risk Management Framework, and the Health Information Trust Alliance (HITRUST) launched its AI Assurance Program to support organizations seeking security assurances for AI applications and systems.
As for companies, they need to take steps to ensure that Gen AI lives up to its full potential, delivering the greatest value while keeping possible risk factors in check. To make this a reality, it is important to have a clear AI story in mind. Before you adopt, ask yourself: what will AI help you accomplish, what are the benefits of using it, and what are the potential dangers? Many companies are eager to monetize Gen AI, but they cannot afford to ignore the risks to their organization or their people. Chasing the upside while ignoring the risks is Gen AI without accountability, and it’s an approach that is bound to fail.
Any cutting-edge technology will inevitably come with its share of risks, but that doesn’t mean we can’t do anything about them. Companies can mitigate many of the most common risks associated with Gen AI by establishing clear frameworks and guidelines for its use. Whether they are topical guardrails that prevent AI models from answering off-topic questions or security measures that only allow models to connect to vetted external third-party applications, guardrails add a layer of protection that makes it easier to prevent problems down the road, as the sketch below illustrates.
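To make the idea concrete, here is a minimal sketch of a topical guardrail in Python. The topic list, keyword matching, and names (ALLOWED_TOPICS, is_on_topic, handle_prompt, call_model) are illustrative assumptions rather than part of any particular guardrail framework; a production system would typically use a trained classifier or a policy engine instead of simple keyword checks.

```python
# Minimal sketch of a topical guardrail: screen user prompts against an
# allow-list of approved business topics before they ever reach the model.
# All names here are hypothetical, for illustration only.

ALLOWED_TOPICS = {
    "billing": ["invoice", "payment", "refund", "charge"],
    "shipping": ["delivery", "tracking", "shipment", "courier"],
}

REFUSAL = "I can only help with billing and shipping questions."


def is_on_topic(prompt: str) -> bool:
    """Return True if the prompt mentions at least one approved topic keyword."""
    text = prompt.lower()
    return any(
        keyword in text
        for keywords in ALLOWED_TOPICS.values()
        for keyword in keywords
    )


def handle_prompt(prompt: str, call_model) -> str:
    """Route the prompt to the model only when it passes the topical check."""
    if not is_on_topic(prompt):
        return REFUSAL            # guardrail triggers before any model call
    return call_model(prompt)     # call_model stands in for your model client


if __name__ == "__main__":
    fake_model = lambda p: f"[model answer to: {p}]"
    print(handle_prompt("Where is my shipment?", fake_model))
    print(handle_prompt("Write me a poem about the ocean.", fake_model))
```

The design point is that the check runs before any model call, so off-topic requests never reach the model at all; swapping the keyword check for a more robust classifier does not change that structure.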
As Gen AI grows more advanced and widespread, ensuring the safe and responsible use of AI models becomes increasingly important. Since an AI cannot be held accountable, it is imperative that organizations take steps to hold themselves accountable for what their AI tools might do. Not only will this extra degree of rigor drive better results overall, but it will also shield organizations from backlash if something goes wrong. So remember: if you don’t know who will be held accountable for the actions of your AI investments, it might be you. Plan accordingly!