AI governance: Ensuring your AI is transparent, compliant, and trustworthy - IBM

AI governance: Ensuring your AI is transparent, compliant, and trustworthy
Introduction
AI governance should be a best practice for every organization, and here's why.
Leaders of enterprises creating AI services are being challenged by an emerging problem of how to effectively govern the creation, deployment and management of these services, throughout the AI lifecycle. These enterprise officials want to understand and gain control over their processes to meet internal policies, external regulations or both. This is where AI governance makes a difference.
AI governance is the ability to direct, manage and monitor the AI activities of an organization. In particular, leaders of organizations and enterprises in regulated industries, such as banking and financial services, are legally required to provide a certain level of transparency into their AI models to satisfy regulators. Failure to offer this transparency can lead to seven-figure fines and penalties. AI models can therefore no longer operate as black boxes: enterprise leaders must provide greater visibility into their automation processes and clearly document the health and functionality of their models in order to meet regulations.
Read on to find out what AI governance is, why it is important, and how IBM can help your organization embrace it as a practice.
Business leaders must manage the associated risks as they scale their use of AI.
What is AI governance?
Discover what proper AI governance delivers.
AI governance is the ability to direct, manage and monitor the AI activities of an organization. This practice includes processes that trace and document the origin of data, models and associated metadata and pipelines for audits. The documentation should include the techniques that trained each model, the hyperparameters used, and the metrics from testing phases. The result of this documentation is increased transparency into the model’s behavior throughout the lifecycle, the data that was influential in its development, and its possible risks.
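The lifecycle facts described above (training technique, hyperparameters, test metrics, data lineage) can be captured in a simple, auditable record. The sketch below is illustrative only; the field names, values, and model name are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical sketch of the kind of lifecycle "facts" an audit record
# might capture: training technique, hyperparameters, test metrics, and
# the data that influenced the model's development.
@dataclass
class ModelFactSheet:
    model_name: str
    training_technique: str
    data_sources: list
    hyperparameters: dict
    test_metrics: dict
    risks: list = field(default_factory=list)

    def to_audit_json(self) -> str:
        # Serialize the record so auditors and regulators can review it.
        return json.dumps(asdict(self), indent=2)

sheet = ModelFactSheet(
    model_name="credit-risk-v3",
    training_technique="gradient-boosted trees",
    data_sources=["loans_2019.csv", "bureau_scores.parquet"],
    hyperparameters={"n_estimators": 500, "max_depth": 6},
    test_metrics={"auc": 0.87, "accuracy": 0.91},
    risks=["possible gender imbalance in training data"],
)
print(sheet.to_audit_json())
```

Recording these facts at training time, rather than reconstructing them later, is what makes the transparency described above practical.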
Before a model is put into production, it is validated to assess the risks to the business. Once the model goes live, it is continuously monitored for fairness, quality and drift. Regulators and auditors are given access to its documentation, which provides explanations of the model’s behavior and predictions. These explanations provide visibility into how the model works and what processes and training the model received.
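The drift monitoring mentioned above is often implemented by comparing a model input's live distribution against its training-time distribution. One common statistic is the population stability index (PSI); this is a minimal stdlib-only sketch, with illustrative data and a commonly cited (but not universal) alert threshold of 0.2.

```python
import math
from collections import Counter

def population_stability_index(expected, actual, bins=4):
    """Compare a feature's training-time (expected) distribution to its
    live (actual) distribution. Larger values indicate more drift; a
    common rule of thumb flags PSI above 0.2 for investigation."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def bucket(xs):
        counts = Counter(min(max(int((x - lo) / width), 0), bins - 1)
                         for x in xs)
        total = len(xs)
        # Small floor avoids log(0) when a bucket is empty.
        return [max(counts.get(b, 0) / total, 1e-6) for b in range(bins)]
    e, a = bucket(expected), bucket(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # at training
live_scores = [0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.8, 0.8]   # in production
psi = population_stability_index(train_scores, live_scores)
print("drift suspected" if psi > 0.2 else "stable")
```

Running such a check on a schedule, and attaching the results to the model's documentation, is one way continuous monitoring feeds back into governance.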
The objective of AI governance is to deliver transparent and ethical AI to establish accountability, responsibility and oversight.
For a deeper dive on AI governance and documentation research, read AI FactSheets 360.
Proper AI governance gives enterprises the ability to achieve the following benefits:
Capture AI lifecycle facts, enabling greater visibility and automated documentation.
Perform analysis of these facts to improve business outcomes, increase overall efficiency and learn best practices.
Specify enterprise policies to be enforced during the AI development and deployment lifecycle.
Facilitate communication and collaboration among data scientists, AI engineers, developers, and other stakeholders shaping the AI lifecycle.
Build AI at scale, with a centralized, comprehensive view of all activities.
The result of model tracing and documenting is increased transparency of the model’s behavior.
Why AI governance?
By 2022, 65 percent of enterprises will task CIOs to transform and modernize governance policies to seize the opportunities and confront new risks posed by AI, machine learning (ML) and data privacy and ethics.1
Drivers behind this trend include the following demands for enterprises:
Compliance: Make AI solutions and AI-related decisions consistent with relevant industry regulations and legal requirements.
Trust: Protect customer satisfaction and brand value by ensuring trustworthy, transparent AI systems that achieve their objectives.
Efficiency: Improve speed to market and reduce costs by standardizing and optimizing AI development and deployment.
Watch this webinar to find out how AI governance can help organizations scale their AI.
What regulations require AI governance?
See what guidance different countries and regions recommend.
News stories from the last few years show that AI can be discriminatory—the most widely known examples have occurred in banking and financial services.2 However, other sectors are not immune. For example, an online retailer chose to disband an internal team because of a controversy involving the algorithms they used in a hiring process. The algorithms that were used to vet potential employees were said to be biased because they were trained on mostly male resumes, which means they could potentially identify more male candidates than female candidates and perpetuate a gender bias.3 Similarly, in the public sector, a study revealed that UK police officers questioned whether using algorithms to predict future crime could result in bias and discrimination.4
A number of countries have adopted AI governance practices and regulations to prevent bias and discrimination in algorithms.
To reduce risk from factors such as AI bias, many countries and regions have adopted guidance on how to govern AI.
US: SR-11-7
SR-11-7 is the US regulatory standard for effective and strong model governance in banking. The regulation requires bank officials to apply company-wide model risk management initiatives and maintain an inventory of models implemented for use, under development for implementation, or recently retired. Leaders of the institutions must also prove that their models achieve their intended business purpose, are up to date, and have not drifted. Model development and validation must enable anyone unfamiliar with a model to understand the model’s operations, limitations and key assumptions.5
In the US, SR-11-7 guides US bank officials to apply company-wide model risk management initiatives and maintain an inventory of models implemented for use, under development for implementation, or recently retired.
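The model inventory SR-11-7 calls for can be as simple as a structured list of every model, its status, and its validation history. The sketch below is a hypothetical illustration of that idea; the field names, statuses, and the one-year validation window are assumptions for the example, not terms from the guidance itself.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative inventory entry: each model's status (in use, in
# development, or retired), its business purpose, and when it was
# last independently validated.
@dataclass
class InventoryEntry:
    name: str
    status: str  # "in_use", "in_development", or "retired"
    business_purpose: str
    last_validated: date

def overdue_for_validation(inventory, today, max_age_days=365):
    # Flag in-use models whose last validation is older than the window.
    return [m.name for m in inventory
            if m.status == "in_use"
            and (today - m.last_validated).days > max_age_days]

inventory = [
    InventoryEntry("pd-scorecard", "in_use", "loan default risk",
                   date(2019, 1, 10)),
    InventoryEntry("fraud-net", "in_use", "card fraud detection",
                   date(2020, 5, 1)),
    InventoryEntry("churn-v2", "retired", "customer churn",
                   date(2018, 3, 3)),
]
print(overdue_for_validation(inventory, today=date(2020, 6, 15)))
```

Keeping such an inventory current is what lets institution leaders demonstrate, on demand, which models are live and when each was last validated.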
Canada: Directive on Automated Decision-Making
Canada’s Directive on Automated Decision-Making describes how that country’s government uses AI to guide decisions in several departments. The directive uses a scoring system to assess the human intervention, peer review, monitoring and contingency planning needed for an AI tool built to serve citizens. Organizations creating AI solutions with a high score must conduct two independent peer reviews, offer public notice in plain language, develop a human intervention failsafe, and establish recurring training courses for the system.6 Because the directive governs the country’s own development of AI, it doesn’t directly affect companies the way SR-11-7 does in the US.
Europe’s evolving AI regulations
In 2019, the European Commission’s incoming president said she planned to introduce new legislation governing AI.7 The new legislation on AI would require high-risk AI systems to be transparent, traceable and under human control. Authorities would check AI systems to make sure data sets were unbiased. The commission also wanted to launch a debate throughout the European Union (EU) about when and whether to use facial recognition and other biometric identification.8
AI governance guidelines in the Asia-Pacific region
In the Asia-Pacific region, countries have released several principles and guidelines for governing AI. In 2019, Singapore’s federal government released a framework with guidelines for addressing issues of AI ethics in the private sector. India’s AI strategy framework recommends setting up a center for studying how to address issues related to AI ethics, privacy and more. China, Japan, South Korea, Australia and New Zealand are also exploring guidelines for AI governance.9
3 “Amazon scraps secret AI recruiting tool that showed bias against women.” Reuters, 9 Oct. 2018. Accessed 14 June 2020.
4 Alexander Babuta and Marion Oswald, “Data Analytics and Algorithmic Bias in Policing.” Royal United Services Institute for Defence and Security Studies, 2019. Accessed 14 June 2020.
5 “SR 11-7: Guidance on Model Risk Management.” Board of Governors of the Federal Reserve System Washington, D.C., Division of Banking Supervision and Regulation, 4 April 2011. Accessed 15 June 2020.
6 “Canada's New Federal Directive Makes Ethical AI a National Issue.” Digital, 8 March 2019. Accessed 15 June 2020.
Why is AI governance needed and what are the consequences?
Learn why a centralized, comprehensive view of models is important.
Any company using AI models to automate its business processes needs governance: for instance, a leading telecommunications company developing multiple models, or executives who have discovered significant redundancy in their teams' efforts and want to learn from best practices.
Many companies have multiple data science teams using different tools to build models. These teams need the following insights:
A comprehensive, centralized view of their models
Efficient processes—automated wherever possible—to build, monitor, and manage the models
A way to share knowledge across all collaborators
Some companies use AI to detect fraudulent insurance claims, identity theft and illegal impersonation, money laundering, and other fraud. Insurers, for instance, use natural language processing to draw value from unstructured text, and image recognition and classification to work faster. In the US, anti-money laundering and fraud detection models used by insurers and others have been subject to review since 2011.10 For these organizations, robust AI governance extends existing model governance. It can also help ensure that models are accurate and effective by reviewing the design process and determining whether the models remain adequate for real-life situations.
Watch this webinar on automating AI model risk management at financial firms.
Proper AI governance includes checkpoints in the AI lifecycle with clear accountability at each checkpoint. For instance, retailers using AI for product recommendations or supply-and-demand forecasting need to ensure their models don't drift. Healthcare organization leaders who use AI to look for patterns in medical research need to debias their models to ensure the data on which they've been trained fairly represents protected features such as gender, race, and zip code.
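One concrete checkpoint for the debiasing concern above is checking whether each protected group's share of the training data matches a reference population before training begins. This is a minimal sketch under assumed numbers; the attribute, the 50/50 reference shares, and the 10-point alert threshold are all hypothetical choices for illustration.

```python
from collections import Counter

def representation_gap(records, attribute, reference_shares):
    """Return, per group, the difference between that group's share of
    the training data and its share of a reference population. Large
    gaps suggest the trained model may learn a skewed view."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share
            for group, share in reference_shares.items()}

# Assumed example: 70/30 training split against a 50/50 reference.
training_rows = [{"gender": "male"}] * 70 + [{"gender": "female"}] * 30
gaps = representation_gap(training_rows, "gender",
                          {"male": 0.5, "female": 0.5})
flagged = {g: round(d, 2) for g, d in gaps.items() if abs(d) > 0.1}
print(flagged)
```

A failed check at this gate would send the dataset back for rebalancing before any model is trained on it, with a named owner accountable for the decision.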
The need for AI governance is similar to the need for software development governance a few decades ago. Enterprise executives determined too much of their software development was ad hoc, so they created the CIO office to help govern the processes. Now, the responsibility for AI governance should fall to a position such as the chief data officer (CDO) or chief risk officer (CRO).
The EU plans to add AI regulations to the General Data Protection Regulation (GDPR). GDPR infringements currently can “result in a fine of up to €20 million, or 4% of the firm's worldwide annual revenue.”
What are the consequences of not adopting AI governance?
There are many negative consequences for a company that does not adopt AI governance, one being a lack of efficiency. The machine learning process is iterative and requires collaboration. Without good governance and documentation, data scientists and validators can't be sure of the lineage of a model's data or how the model was built, so results can be challenging to reproduce. If administrators train a model using wrong or incomplete data, months of work could be wasted.
Lack of AI governance can also result in significant penalties. Bank operators have been issued seven-figure fines for using biased models when determining loan eligibility. The EU plans to add AI regulations to the General Data Protection Regulation (GDPR). GDPR infringements currently can “result in a fine of up to €20 million, or 4% of the firm's worldwide annual revenue from the preceding financial year, whichever amount is higher.”11
Brand reputation is also at risk. In one experiment, AI software was used to learn the speech patterns of young people on social media. Its operators removed the tool quickly after internet trolls “taught” it to create racist, sexist, and anti-Semitic posts.
10 “Supervisory Guidance on Model Risk Management.” Board of Governors of the Federal Reserve System Office of the Comptroller of the Currency, 4 April 2011. Accessed 15 June 2020.
Why is automation key in AI governance?
Automation in AI governance is crucial in keeping a competitive edge while meeting regulations.
Problems can arise when companies conduct AI governance processes manually. Data governance can include manual data validation, comparison, and other interventions, all of which require familiarity with data management and handling. When validation is done manually, model validators may need to develop expertise in each type of algorithm in use, which is slow and costly for the business and prone to human error. These delays can leave a company behind its competitors or late in handing over information to auditors. With automation, the documentation and validation processes of AI governance become much more efficient.
The risk manager of one major bank said, “We're looking at automating all handovers. Once a model is developed, there should be no more need to describe the model. Today, this developer needs to document everything about the model manually.” According to a model validator, a model document can be hundreds of pages because the description contains everything about the model.
Therefore, manual AI governance with documentation is not enough. Automation in AI governance is crucial in keeping a competitive edge while meeting regulations.
Not adopting AI governance can carry severe consequences for organization leaders such as seven-figure penalties and fines.
How does IBM help organizations with AI governance?
Automating governance processes is just the beginning.
People are at the core of using AI governance and deciding what data to use for building models. Many kinds of skillsets are needed in the AI lifecycle, including product owners, model developers, model validators, and model deployment engineers. That’s why IBM® offers solutions that not only help automate AI governance processes but also provide the following features:
Enhanced collaboration between different skillsets through a common taxonomy of terms and “facts” about model development and deployment
A complete view of models through collection of metadata across the lifecycle and across tools
A catalog of all models and data used to train those models
Recommended workflow and processes for establishing accountability and checks at each point in the lifecycle, such as bringing the data science team closer to the CDO
Standards and rules that can automatically enforce policies
Guidance when extending a governance program to include both data and AI
Watch this webinar for a deep dive on AI governance and which automation tools can benefit your organization.
IBM provides the tools people need to accomplish and automate AI governance processes.
How does IBM enable governance throughout the AI lifecycle?
Different capabilities help you know, trust, and use your AI models.
IBM solutions for AI governance are designed to help you achieve the following tasks:
Know your model (understand its history)
Trust your model (enhance its transparency and compliance)
Use your model (enable its monitoring and management so the model performs as expected)
What IBM offerings can get you started with AI governance?
At the core of IBM AI governance are capabilities that deliver model fairness, explainability, and standardized documentation. IBM Cloud Pak® for Data, for example, is a cloud-based, unified AI platform that tracks and measures outcomes from AI across its lifecycle. The solution adapts AI governance to changing business situations, for models built and running anywhere.
Data virtualization and containers in Cloud Pak for Data enable one unified experience operationalizing and governing AI across multiple hybrid clouds.
Cloud Pak for Data consists of a full stack of components for every stage of the AI lifecycle, including built-in governance, purpose-built AI model risk management and collaboration tools. Examples of these components include Watson™ Knowledge Catalog, Watson OpenScale™ and Watson Studio. Watson Knowledge Catalog organizes data for governed use, Watson Studio provides a governance-enabled build platform, and Watson OpenScale delivers automation of governance processes and tests.
IBM also offers open-source AI governance toolkits. AI Fairness 360 helps examine, report and mitigate bias in models throughout the AI application lifecycle. AI Explainability 360 includes metrics for explaining a model's processes and decision-making. AI Adversarial Robustness 360 helps researchers and developers defend and verify AI models against adversarial attacks.
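One of the bias metrics toolkits like AI Fairness 360 report is disparate impact: the ratio of favorable-outcome rates between unprivileged and privileged groups. The plain-Python sketch below is an illustrative stand-in for that metric, not the toolkit's actual API, and the loan data, group labels, and the commonly used 0.8 threshold are all assumptions for the example.

```python
def disparate_impact(outcomes, groups, privileged, favorable=1):
    """Ratio of favorable-outcome rates, unprivileged over privileged.
    Values well below 1.0 (often below ~0.8) are commonly treated as
    evidence that a model disadvantages the unprivileged group."""
    def favorable_rate(is_privileged):
        selected = [o for o, g in zip(outcomes, groups)
                    if (g == privileged) == is_privileged]
        return sum(1 for o in selected if o == favorable) / len(selected)
    return favorable_rate(False) / favorable_rate(True)

# Assumed example: 1 = loan approved; group "A" treated as privileged.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact(outcomes, groups, privileged="A")
print("bias suspected" if ratio < 0.8 else "within threshold")
```

Metrics like this, computed before deployment and again during monitoring, are what turn "examine, report and mitigate bias" from a principle into a repeatable check.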
