Avoiding AI Pitfalls with AI Governance

November 29, 2023

By Gregory Hodgkinson, Prolifics CTO

“AI systems should be designed to benefit society while minimizing potential negative impacts.”

So, you’ve got your AI vision in place. As you’re forging your path to differentiation and transformation, how do you avoid the pitfalls along the way?

First you need to define your policies — your precautionary intent for how to avoid bad outcomes with AI. This is the list of things you promise yourself you’ll either do — or specifically not do.

So, what might your “precautionary intent” be for AI? You’d probably agree that the following “should” statements would be a suitable list.

Our use of AI should …

– Promote fairness and equity

– Be accountable and auditable

– Enhance user understanding and trust

– Maintain reliability and safety standards

– Comply with all relevant laws and regulations

Stating the intent (or policies) is a good start, but as we know, the road to disaster is paved with good intentions. How do you ensure you follow through on your policies? The answer is AI governance. Governance is what keeps practice in line with policy.

 

Enter IBM’s watsonx.governance product

Watsonx.governance is a next-generation enterprise toolkit designed to automate and accelerate workloads across the AI lifecycle while providing risk management and facilitating regulatory compliance.

Significantly, this product builds on and extends IBM’s existing strength in traditional AI model governance (inventory, workflow, evaluation, and monitoring). But the new and exciting capabilities in watsonx.ai are all about generative AI, so it makes sense for watsonx.governance to support governance of generative AI as well. And it does, making this a truly unique product.

 

Pitfall! AI Governance Edition

In my previous article introducing watsonx, I used the analogy of a band to talk through .ai, .data, and .governance as band members. Well, as we’re talking about avoiding pitfalls with AI governance, let’s switch things up with a different analogy:

“Pitfall!” was the classic platformer of the early 1980s, in which Pitfall Harry explored a jungle within a 20-minute time limit, collecting treasure while avoiding various traps and dangers. A tenuous enough link for my analogy.

So, let’s look at some of the capabilities of watsonx.governance with a “play-through” of the pitfalls of AI that governance seeks to avoid, looking at how watsonx.governance gives you hero powers to deal with them all.

“Welcome to Pitfall! AI Governance Edition, where you step into the shoes of a skilled digital policy maker in the intricate realm of AI governance. Time is ticking, reflecting the real-world urgency to address critical AI-related issues. Your reputation hangs in the balance — every decision you make impacts public trust and influence. With AI governance as your guiding tool, join us as we unveil the strategies and insights to navigate through the game’s challenging levels, avoiding pitfalls and traps, to successfully reach the final stage. Let’s embark on this journey together!”

 

Pitfall! — The classic 1982 platformer — AI governance edition?! (Sadly, not a real thing.)

 

 

Level 1 — Responsibility Run!

Objective: Ensure responsible AI, meaning the ethical development and use of AI systems, with clear documentation of each model’s workflow and assurance of the factual accuracy and reliability of the data used and produced. This is critical for upholding the integrity and trustworthiness of AI systems.

Pitfalls: Unclear use cases, an unclear model lifecycle, and a lack of model lineage.

Tips: watsonx.governance gives you the advantage with comprehensive support for a governance lifecycle:

  • AI model use cases — Start your governance lifecycle by defining an AI use case in the model inventory catalog. This brings together all of the assets that will be built to implement the use case in one place, enabling you to scale without losing control.
  • Model facts — A comprehensive set of facts about the model is published alongside it in the inventory, providing insight into the model’s performance and behavior. These include various statistical measures, performance metrics, and data about the model’s training and deployment environments (a sketch of such a record follows this list).
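
To make the idea of a model facts record concrete, here’s a minimal sketch as a plain Python dataclass. The field names are my own illustrative assumptions, not the actual watsonx.governance factsheet schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelFacts:
    """Illustrative model facts record. The fields are hypothetical;
    watsonx.governance defines its own factsheet schema. This simply
    shows the kind of information an inventory gathers in one place."""
    use_case: str                      # the AI use case this model implements
    version: str                       # where the model is in its lifecycle
    training_data_sources: List[str]   # lineage of the training data
    deployment_environment: str        # e.g. "pre-production" or "production"
    metrics: Dict[str, float] = field(default_factory=dict)  # e.g. accuracy, AUC

# A hypothetical inventory entry for a loan-approval use case
facts = ModelFacts(
    use_case="loan-approval",
    version="1.2.0",
    training_data_sources=["applications_2022.csv", "credit_bureau_feed"],
    deployment_environment="production",
    metrics={"accuracy": 0.91, "auc": 0.87},
)
print(facts)
```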

Bonus Tips for Generative AI: watsonx.governance provides specific support for generative AI:

  • Prompt governance — Track and govern your generative AI prompt templates.
  • Risk guidance — Use the built-in guidance that helps you understand what risks to monitor when implementing a specific business problem with a large language model (LLM).
  • Cost and adoption tracking — Understand how a given LLM is being adopted across your organization and, importantly, the costs associated with that adoption.

 

Level 2 — Bias Bump

Objective: Prevent AI bias, which occurs when an AI system displays prejudiced results due to flawed assumptions in the algorithm or biases in the training data. This can lead to unfair or discriminatory outcomes, especially in decision-making processes.

Pitfalls: Rampant bias in decisions made by the model, with no ability to analyze bias when it is suspected.

Tips: watsonx.governance helps you detect and prevent AI bias:

  • Bias monitor and alert: Easily create a fairness monitor with a threshold for bias, which automates detection and alerts you when bias occurs (see the sketch after this list).
  • Feature selection: Select any model feature to monitor for bias.
  • Understand the bias: Use the analysis capability to drill into what the bias looks like.
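
To see the kind of check a fairness monitor automates, here’s a product-agnostic sketch. It computes the disparate impact ratio (the rate of favorable outcomes for a monitored group relative to a reference group) and alerts when the ratio falls below a threshold. The 0.8 threshold follows the well-known “four-fifths” rule of thumb; the data and function are illustrative, not the watsonx.governance implementation:

```python
from typing import Sequence

def disparate_impact(outcomes: Sequence[int], groups: Sequence[str],
                     monitored: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: monitored group / reference group.

    outcomes: 1 = favorable decision (e.g. loan approved), 0 = unfavorable.
    groups:   value of the monitored feature for each decision.
    """
    def rate(group: str) -> float:
        decisions = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(decisions) / len(decisions)
    return rate(monitored) / rate(reference)

# Toy decision log (illustrative data)
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups   = ["A", "A", "B", "B", "A", "B", "A", "A", "B", "B"]

THRESHOLD = 0.8  # the common "four-fifths" rule of thumb
ratio = disparate_impact(outcomes, groups, monitored="A", reference="B")
if ratio < THRESHOLD:
    print(f"ALERT: disparate impact {ratio:.2f} is below threshold {THRESHOLD}")
```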

Bonus Tips for Generative AI: watsonx.governance lets you prevent bias in generated responses:

  • Detecting stigma or social bias: You can automatically monitor the outputs of a prompt template for social bias (unintended and often systematic inclusion of prejudiced perspectives or discriminatory tendencies) or stigma (creation and reinforcement of negative or socially disapproved perceptions).

 

Level 3 — Quality Quest

Objective: Ensure the effectiveness and accuracy of an AI system as it is performing its intended tasks. Aim for accurate, reliable, and consistent performance, aligned with specific goals and standards.

Pitfalls: No detection of a drop in the quality of model results, and no way of even measuring quality.

Tips: watsonx.governance assures you that your models are performing as expected:

  • Quality monitor and alerting: Set up monitors to detect quality issues and alert you when quality is slipping.
  • Quality metrics: Measure a range of different quality metrics, including accuracy, precision, recall, Area Under ROC Curve (the trade-off between true positive and false positive rates), and more.
  • Thresholds: Set upper thresholds on metrics that need to stay low and lower thresholds on metrics that need to stay high (see the sketch below).
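
To make those metrics and thresholds concrete, here’s a minimal sketch using scikit-learn’s standard metric functions, with lower thresholds that trigger an alert when a metric that should stay high slips. The data and threshold values are illustrative:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, roc_auc_score)

# Toy ground truth, predicted labels, and predicted scores (illustrative)
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1]

metrics = {
    "accuracy":  accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall":    recall_score(y_true, y_pred),
    "auc":       roc_auc_score(y_true, y_score),
}

# Lower thresholds: alert when a metric that should stay high slips below
lower_thresholds = {"accuracy": 0.8, "precision": 0.8, "recall": 0.8, "auc": 0.8}

for name, value in metrics.items():
    if value < lower_thresholds[name]:
        print(f"ALERT: {name} = {value:.2f} is below {lower_thresholds[name]}")
```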

Bonus Tips for Generative AI: watsonx.governance provides specific support for generative AI:

  • Generative quality: Now you can also continuously evaluate the quality of your prompt templates, with performance measures for text summarization, text classification, entity extraction, content generation, and more.
  • Personally identifiable information (PII): Importantly, you can continuously monitor a prompt template to ensure that it is not leaking any PII, a key concern with generative AI (a toy illustration of the idea follows).
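
The idea behind a PII check can be sketched with a few regular expressions, as below. Real PII detection, including whatever watsonx.governance uses internally, is far more sophisticated, so treat this purely as an illustration:

```python
import re

# Crude illustrative patterns; real detectors cover far more PII types
PII_PATTERNS = {
    "email":    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(generated_text: str) -> dict:
    """Return any matches of the crude PII patterns in a model's output."""
    return {name: pattern.findall(generated_text)
            for name, pattern in PII_PATTERNS.items()
            if pattern.findall(generated_text)}

output = "Contact John at john.doe@example.com or 555-123-4567."
findings = scan_for_pii(output)
if findings:
    print(f"ALERT: possible PII in generated output: {findings}")
```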

 

Level 4 — Drift Dilemma

Objective: Prevent AI drift, or model drift. This can occur when the performance of an AI model deteriorates over time, often due to changes in the underlying data or environment that the model was not originally trained to handle, leading to a decrease in accuracy or relevance of its outputs.

Pitfalls: Model performance degrades and nobody realizes until it is too late; no ability to focus on the features that matter.

Tips: watsonx.governance ensures you know when conditions have changed that may cause your model to no longer perform as well as it did when it was trained:

  • Drift monitor and alerts: Put a monitor on a model to continuously check that it hasn’t drifted away from the latest data reality and get alerted when this is detected.
  • Output, model quality, and feature drift: Automatically detect if the model starts producing different results for the same or similar input data, if there is a significant change in the model’s performance metrics (such as accuracy, precision, or recall), or if there are changes in the data that the model uses to make predictions (feature drift is sketched below).
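
As an illustration of how feature drift can be detected, the sketch below compares the distribution a feature had at training time with its live distribution using a two-sample Kolmogorov-Smirnov test from SciPy. The data and significance threshold are illustrative; watsonx.governance implements its own drift detection:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Feature values seen at training time vs. in production (illustrative data)
training_values   = rng.normal(loc=50.0, scale=10.0, size=1000)
production_values = rng.normal(loc=58.0, scale=10.0, size=1000)  # shifted

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# production distribution has drifted away from the training distribution
statistic, p_value = ks_2samp(training_values, production_values)

ALPHA = 0.01  # illustrative significance threshold
if p_value < ALPHA:
    print(f"ALERT: feature drift detected "
          f"(KS statistic {statistic:.3f}, p-value {p_value:.2e})")
```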

 

Level 5 — The Transparency Trial

Objective: AI transparency is about making the workings of an AI system clear and understandable to humans, especially regarding how the system makes decisions or predictions. This includes the ability to explain and justify the AI’s processes and outcomes, which is crucial for building trust and accountability in AI systems.

Pitfalls: Unable to explain why the AI made a decision; unable to reason about how the decision might have differed if the data had been different.

Tips: watsonx.governance makes explaining what your model has done simple:

  • Explain transaction: You have a full history of all the decisions (transactions) that your model has made, along with the ability to get an explanation for why a decision was made.
  • What if: Lets you “time travel” back to when the decision was made, change the data, and see what effect that has on the decision.
  • Tipping points: Even better, it provides analysis that tells you at which value the decision would have changed (both ideas are sketched below).
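
The mechanics behind what-if analysis and tipping points can be sketched product-agnostically: take a recorded decision, vary one input, and re-score until the decision flips. This toy example trains a small scikit-learn model and sweeps a hypothetical “income” feature; everything in it is an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: feature = income in $k, label = loan approved (illustrative)
X = np.array([[20], [30], [40], [50], [60], [70], [80], [90]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
model = LogisticRegression().fit(X, y)

# The recorded decision we want to reason about
original_income = 45.0
original_decision = model.predict([[original_income]])[0]

# "What if" sweep: re-score with higher incomes until the decision flips,
# which reveals the tipping point for this feature
for income in np.arange(original_income, 100.0, 1.0):
    if model.predict([[income]])[0] != original_decision:
        print(f"Tipping point: decision flips from {original_decision} "
              f"to {1 - original_decision} at an income of about ${income:.0f}k")
        break
```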

Bonus Tips for Generative AI: watsonx.governance provides specific support for generative AI:

  • “Where did that come from?”: A new feature that explains each part of a generated set of text and attributes it back to the initial content made available to the model, highlighting the fragment of source text that resulted in the generation of a certain section of output (a crude illustration of the idea follows).
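
A crude version of that idea can be sketched with simple string matching: for each piece of generated text, find the source passage it most resembles. This uses Python’s difflib and is only an illustration; real attribution is far more sophisticated:

```python
from difflib import SequenceMatcher

# Content that was made available to the model (illustrative)
source_passages = [
    "The policy covers water damage caused by burst pipes.",
    "Claims must be filed within 30 days of the incident.",
]
generated = "You must file your claim within 30 days of the incident."

# Attribute the generated sentence to the most similar source passage
best = max(source_passages,
           key=lambda p: SequenceMatcher(None, p.lower(), generated.lower()).ratio())
print(f"Generated text most likely derives from: {best!r}")
```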

 

AI Pitfalls Avoided, Congratulations!

 

 

You did it! Final level complete. You’ve avoided all of the potential pitfalls that could have resulted in the dreaded “game over” for your AI vision. Well done, and pat yourself on the back.

With a strong set of AI policies, supported by watsonx.governance, you can ensure your organization’s AI vision and its good outcomes won’t be spoiled by a bad outcome somewhere along the way.

 

Gregory Hodgkinson

Greg Hodgkinson is Prolifics’ Chief Technology Officer and Worldwide Head of Engineering, and an IBM Lifetime Champion. As a technology leader, he’s responsible for innovative cross-practice solutions for our customers, creating a foundation for innovation in the company, and driving improvements in the art of software development and delivery throughout Prolifics.

Talk to a Gen AI expert

Learn more about Gen AI at Prolifics.

Read Greg’s blog: Get Ready For IBM WatsonX: The AI Platform With X Factor (prolifics.com)

Did you miss our webinar, “Avoiding AI Pitfalls with AI Governance”, with IBM’s VP of watsonx, Madison Gooch? Watch the on-demand version here.

 

About Prolifics

At Prolifics, the work we do with our clients matters. Whether it’s literally keeping the lights on for thousands of families, improving access to medical care, helping prevent worldwide fraud or protecting the integrity and speed of supply chains, innovation and automation are significant parts of our culture. While our competitors are throwing more bodies at a project, we are applying automation to manage costs, reduce errors and deliver your results faster.

Let’s accelerate your transformation journeys throughout the digital environment – Data & AI, Integration & Applications, Business Automation, DevXOps, Test Automation, and Cybersecurity. We treat our digital deliverables like a customized product – using agile practices to deliver immediate and ongoing increases in value. Visit prolifics.com