Artificial Intelligence (AI) Governance

What is AI governance?

Artificial intelligence (AI) governance is the practice of establishing legal and organizational frameworks to ensure the safe and responsible development of AI systems.


In the AI governance debate, society, regulators, and industry leaders are looking to implement controls that guide the development of AI systems, from ChatGPT to other machine learning-driven solutions, and mitigate social, economic, or ethical risks that could harm society as a whole.

Risks associated with AI include societal and economic disruption, bias, misinformation, data leakage, intellectual property theft, unemployment due to automation, and even weaponization in the form of automated cyberattacks.

Ultimately, the goal of AI governance is to encourage the development of safe, trustworthy, and responsible AI by defining acceptable use cases, risk management frameworks, privacy mechanisms, accuracy requirements, and, where possible, impartiality.

Why is AI Governance Important?

AI governance and regulation are important for understanding and controlling the level of risk presented by AI development and adoption. Over time, they should also help build a consensus on the level of risk that is acceptable when machine learning technologies are used in society and the enterprise.

However, governing the development of AI is very difficult: not only is there no centralized regulation or risk management framework for developers or adopters to refer to, but risk is also hard to assess because it changes depending on the context in which a system is used.

Looking at ChatGPT as an example, enterprises not only have to acknowledge that hallucinations can spread bias, inaccuracies, and misinformation, but they also have to be aware that data entered into user prompts is shared with OpenAI. They also need to consider the impact that AI-generated phishing emails will have on their cybersecurity.

More broadly, regulators, developers, and industry leaders need to consider how to reduce the inaccuracies and misinformation presented by large language models (LLMs), as this output could influence public opinion and politics.

At the same time, regulators are attempting to mitigate risk without stifling innovation among smaller AI vendors.

Transparency is the Bedrock of Governance

Before regulators and industry leaders can have a more comprehensive perspective on AI-related risks, they first need more transparency into the decision-making processes of automated systems.

For instance, the better the industry understands how an AI platform comes to a decision after processing a dataset, the easier it is to identify whether that decision is ethical or not and whether the vendor’s processing activities respect user privacy and comply with data protection regulations such as the General Data Protection Regulation (GDPR). 

The more transparent AI development is, the better risks can be understood and mitigated.  As Brad Smith, vice chair and president of Microsoft, explained in a blog post in May 2023, “When we at Microsoft adopted our six ethical principles for AI in 2018, we noted that one principle was the bedrock for everything else – accountability.” 

“This is the fundamental need: to ensure that machines remain subject to effective oversight by people, and the people who design and operate machines remain accountable to everyone else.” 

Without transparency over how AI systems process data, there is no way to assess whether they are developed with a concerted effort to remain impartial or whether they simply embed the values and biases of their creators.

NIST’s AI Risk Management Framework

On 26 January 2023, the U.S. National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF 1.0), a voluntary set of recommendations and guidelines designed to measure and manage AI risk.

NIST’s framework is one of the first comprehensive risk management standards to enter the AI governance debate, and it looks to promote the development of trustworthy AI.

Under this framework, NIST defines risk as anything with the potential to threaten individuals’ civil liberties, whether it emerges from the nature of AI systems themselves or from how users interact with them. Crucially, NIST highlights that organizations and regulators need to be aware of the different contexts in which AI can be used in order to fully understand risk.

NIST also highlights four core functions organizations can use to start controlling AI risks (a minimal sketch of how these could be recorded in practice follows the list):

  • Govern: Building an organization-wide culture of risk management to manage ethical, legal, and societal risks.
  • Map: Categorizing AI systems and mapping potential risks that could impact other organizations and individuals.
  • Measure: Using quantitative, qualitative, and hybrid risk assessment techniques to assess the scope of AI risk.
  • Manage: Continuously identifying AI risks and developing a strategy to mitigate those risks over time.
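
To make these functions more concrete, the sketch below shows one hypothetical way an organization might record the Map, Measure, and Manage steps in a simple internal risk register, written here in Python. The class names, fields, and scoring scheme are illustrative assumptions and are not defined by the NIST framework.

    from dataclasses import dataclass, field

    # Hypothetical risk register illustrating the Map, Measure, and Manage functions.
    # All names and the scoring scheme are illustrative assumptions, not NIST definitions.

    @dataclass
    class AIRisk:
        description: str      # Map: the risk identified for a given context
        likelihood: int       # Measure: 1 (rare) to 5 (almost certain)
        impact: int           # Measure: 1 (negligible) to 5 (severe)
        mitigation: str = ""  # Manage: the planned mitigation strategy

        @property
        def score(self) -> int:
            # A simple quantitative measure: likelihood multiplied by impact
            return self.likelihood * self.impact

    @dataclass
    class AISystem:
        name: str
        context: str                           # Map: where and how the system is used
        risks: list = field(default_factory=list)

        def top_risks(self, threshold: int = 12) -> list:
            # Manage: surface the highest-scoring risks for mitigation first
            return sorted((r for r in self.risks if r.score >= threshold),
                          key=lambda r: r.score, reverse=True)

    # Example: mapping and measuring risks for a customer-facing chatbot
    chatbot = AISystem(name="Support chatbot", context="Customer-facing LLM assistant")
    chatbot.risks.append(AIRisk("Hallucinated answers mislead customers", 4, 3,
                                "Human review of high-stakes responses"))
    chatbot.risks.append(AIRisk("Prompts leak personal data to the vendor", 3, 5,
                                "Redact personal data before sending prompts"))

    for risk in chatbot.top_risks():
        print(f"{risk.score:>2}  {risk.description} -> {risk.mitigation}")

In a real organization, the Govern function would sit above a register like this, defining who owns each risk and how often the scores are reviewed.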

It is important to note that NIST’s framework has drawn criticism because it is voluntary, meaning there is currently no regulatory obligation for organizations to develop AI responsibly.

Barriers to Trustworthy AI Development: The Black Box

One of the main barriers to AI governance at the moment is the black box development approach of AI leaders like Microsoft, Anthropic, and Google. Typically, these vendors will not disclose how their proprietary models work and make decisions in an attempt to maintain a competitive advantage. 

While a black box development approach allows AI vendors to protect their intellectual property, it leaves users and regulators in the dark about the type of data and processing activities their AI solutions use to come to decisions or predictions. 

Although other vendors in the industry, such as Meta, are looking to move away from black box development toward an open-source and transparent approach with LLMs like Llama 2, the opacity of many vendors makes it difficult to understand the level of accuracy or bias presented by these solutions.

Gaining Constructive Outcomes from AI

AI governance is critical to guiding the development of the technology in the future and implementing guardrails to ensure that it has mainly positive outcomes for society as a whole.

Building a legal framework for measuring and controlling AI risk can help users and organizations experiment with AI freely while mitigating any adverse effects or disruption.

Tim Keary

Since January 2017, Tim Keary has been a freelance technology writer and reporter covering enterprise technology and cybersecurity.