
AI regulations are here. Are you ready?

Tags: AI Explainability, AI Regulations, AI Trust

June 14, 2022
Krishna Gade

This post was written in collaboration with our sponsors from FiddlerAI.

It’s no secret that artificial intelligence (AI) and machine learning (ML) are used by modern companies for countless use cases where data-driven insights may benefit users.

What often does remain a secret is how ML algorithms arrive at their recommendations. If asked to explain why an ML model produces a certain outcome, most organizations would be hard-pressed to provide an answer. Frequently, data goes into a model, results come out, and what happens in between is best categorized as a “black box.”

This inability to explain AI and ML will soon become a huge headache for companies. New regulations are in the works in the U.S. and the European Union (EU) that focus on demystifying algorithms and protecting individuals from bias in AI.

The good news is that there’s still time to prepare. The key steps are to understand what the regulations include, know what actions should be taken to ensure compliance, and empower your organization to act now and build responsible AI solutions.

The Goal: Safer digital spaces for consumers

The EU is leading the way with regulations and is poised to pass legislation that governs digital services—much in the same way its General Data Protection Regulation (GDPR) paved the way for protecting consumer privacy in 2018. The goal of the EU’s proposed Digital Services Act (DSA) is to provide a legal framework that “creates a safer digital space in which the fundamental rights of all users of digital services are protected.”

A broad definition of digital services is used, which includes everything from social networks and content-sharing platforms to app stores and online marketplaces. DSA intends to make platform providers more accountable for content and content delivery, and compliance will entail removing illegal content and goods faster and stopping the spread of misinformation.

But DSA goes further and requires independent audits of platform data and any insights that come from algorithms. That means companies that use AI and ML will need to provide transparency around their models and explain how predictions are made. Another aim of the regulation is to give customers more control over how they receive content, e.g., letting them view content chronologically rather than as ranked by a company’s algorithm. While there’s still uncertainty around how exactly DSA will be enforced, one thing is clear: companies must know how their AI algorithms work and have the ability to explain it to users and auditors.
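To make that requirement concrete, here’s a minimal Python sketch of the kind of user choice DSA anticipates: the same feed can be ordered by a relevance model’s scores or by a transparent chronological rule the user opts into. The Post fields and ranking modes are illustrative assumptions, not any platform’s actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    relevance_score: float  # assumed to come from an upstream ranking model

def rank_feed(posts: list[Post], mode: str = "algorithmic") -> list[Post]:
    """Order a feed either by model score or by a transparent, model-free rule."""
    if mode == "chronological":
        return sorted(posts, key=lambda p: p.created_at, reverse=True)
    return sorted(posts, key=lambda p: p.relevance_score, reverse=True)

feed = [Post("ana", datetime(2022, 6, 1), 0.9), Post("bo", datetime(2022, 6, 3), 0.4)]
print([p.author for p in rank_feed(feed, mode="chronological")])  # ['bo', 'ana']
```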

In the U.S., the White House Office of Science and Technology Policy has proposed the creation of an “AI Bill of Rights.” The idea is to protect American citizens and manage the risks associated with ML, recognizing that AI “can embed past prejudice and enable present-day discrimination.” The Bill seeks to answer questions around transparency and privacy in order to prevent abuse.

Additionally, the Consumer Financial Protection Bureau has reaffirmed that creditors must be able to explain why their algorithms may deny loan applications to certain applicants. There is no exception for creditors using black-box models that are too opaque or complicated.
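As a rough illustration of what that duty can look like in code, the sketch below turns per-feature attributions (however they were computed) into human-readable denial reasons by picking the features that pushed the score down the most. The feature names, attribution values, and reason text are all hypothetical.

```python
# Hypothetical mapping from model features to adverse-action reason text.
REASONS = {
    "debt": "High debt-to-income ratio",
    "credit_history": "Short or limited credit history",
}

def denial_reasons(attributions: dict[str, float], k: int = 2) -> list[str]:
    """Return human-readable reasons from the k most negative attributions."""
    worst = sorted(attributions.items(), key=lambda kv: kv[1])  # most negative first
    return [REASONS.get(name, name) for name, val in worst[:k] if val < 0]

# Example: attributions for one denied application (illustrative numbers).
print(denial_reasons({"income": 0.05, "debt": -0.30, "credit_history": -0.12}))
# ['High debt-to-income ratio', 'Short or limited credit history']
```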

The U.S. government has also initiated requests for information to better understand how AI and ML are used, especially in highly-regulated sectors (think financial institutions). At the same time, the National Institute of Standards and Technology (NIST) is building a framework “to improve the management of risks to individuals, organizations, and society associated with artificial intelligence (AI).”

The Timeline: Prepare for AI explainability

DSA could go into effect as early as January 2024. Big Tech companies will be examined first and must be prepared to explain algorithmic recommendations to users and auditors, as well as provide non-algorithm methods for viewing and receiving content.

While DSA only impacts companies that provide digital services to EU citizens, few will escape its reach, given the global nature of business and technology today. For those American companies that manage to avoid serving EU citizens, the timeline for U.S. regulations is unknown. However, any company that uses AI and ML should prepare to comply sooner rather than later.

The best course of action is to consider DSA in a similar manner to how many organizations viewed CCPA and GDPR. DSA is likely to become the standard-bearer for digital services regulations and the strictest rules created for the foreseeable future.

Rather than take a piecemeal approach and tackle regulations as they are released (or as they become relevant to your organization), the best way to prepare is to focus on adherence to DSA. It will save time and effort, and help avoid fines down the road.

The Need: Build trust into AI

Companies often claim that algorithms are proprietary in order to keep all manner of AI sins under wraps. However, consumer protections are driving the case for transparency, and organizations will soon need to explain what their algorithms do and how results are produced.

Unfortunately, that’s easier said than done. ML models present complex operational challenges, especially in production environments. Due to limitations around model explainability, it can be challenging to extract causal drivers from data and ML models and to assess whether model bias exists. While some organizations have attempted to operationalize ML by creating in-house monitoring systems, most of these lack the capabilities needed to comply with DSA.
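As a small example of what such an assessment involves, the sketch below computes a disparate impact ratio (the “four-fifths rule” heuristic) between two groups. The predictions, group labels, and 0.8 threshold are illustrative assumptions; a real bias audit goes much further.

```python
def positive_rate(preds: list[int], groups: list[str], group: str) -> float:
    """Share of positive (e.g., approved) outcomes within one group."""
    selected = [p for p, g in zip(preds, groups) if g == group]
    return sum(selected) / len(selected)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                  # model decisions (1 = approve)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # protected-attribute groups

ratio = positive_rate(preds, groups, "b") / positive_rate(preds, groups, "a")
print(f"disparate impact ratio: {ratio:.2f}")  # values below ~0.8 warrant review
```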

So, what do companies need? Algorithmic transparency.

Rather than rely on black-box models, organizations need out-of-the-box AI explainability and model monitoring. There must be continuous visibility into model behavior and predictions and an understanding of why AI predictions are made—both of which are vital for building responsible AI.
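To give a flavor of what continuous visibility can mean in practice, here’s a minimal sketch of one widely used monitoring metric, the Population Stability Index (PSI), which compares the score distribution a model was trained on against what it sees in production. The synthetic data, bin count, and the common “PSI above 0.2 signals significant drift” rule of thumb are assumptions for illustration.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e, _ = np.histogram(expected, bins=edges)
    a, _ = np.histogram(actual, bins=edges)
    e = np.clip(e / e.sum(), 1e-6, None)  # avoid dividing by or taking log of zero
    a = np.clip(a / a.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train_scores = rng.beta(2, 5, 10_000)  # scores seen at training time (synthetic)
prod_scores  = rng.beta(3, 4, 10_000)  # scores seen in production (synthetic)
print(f"PSI: {psi(train_scores, prod_scores):.3f}")  # > 0.2 suggests drift
```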

Those requirements point to a Model Performance Management (MPM) solution that can standardize Model/MLOps practices, provide metrics that explain ML models, and deliver Explainable AI (XAI) that provides actionable insights through monitoring.

Fiddler is not only a leader in MPM but also pioneered proprietary XAI technology that combines all the top methods, including Shapley Values and Integrated Gradients. Built as an enterprise-scale monitoring framework for responsible AI practices, Fiddler gives data scientists immediate visibility into models, as well as model-level actionable insights at scale.
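To see what Shapley-value attribution means mechanically, the sketch below computes exact Shapley values for a toy model by brute force over feature orderings. The model and feature values are hypothetical, and this is not Fiddler’s implementation: production XAI systems use efficient approximations, because exact computation is exponential in the number of features.

```python
from itertools import permutations

def model(f: dict) -> float:
    """Toy credit-scoring model (hypothetical)."""
    return 0.3 * f["income"] + 0.5 * f["credit_history"] - 0.2 * f["debt"]

baseline = {"income": 0.5, "credit_history": 0.5, "debt": 0.5}  # e.g., feature means
instance = {"income": 0.9, "credit_history": 0.2, "debt": 0.7}  # prediction to explain

def value(coalition: set) -> float:
    # Features in the coalition take the instance's values; the rest stay at baseline.
    f = {k: (instance[k] if k in coalition else baseline[k]) for k in baseline}
    return model(f)

features = list(instance)
shapley = {f: 0.0 for f in features}
orderings = list(permutations(features))
for order in orderings:
    seen: set = set()
    for feat in order:
        before = value(seen)
        seen.add(feat)
        # Average each feature's marginal contribution over all orderings.
        shapley[feat] += (value(seen) - before) / len(orderings)

# Per-feature contributions; they sum to model(instance) - model(baseline).
print(shapley)  # ≈ {'income': 0.12, 'credit_history': -0.15, 'debt': -0.04}
```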

Unlike in-house monitoring systems or observability solutions, Fiddler seamlessly integrates deep XAI and analytics so it’s easy to build a framework for responsible AI practices. Model behavior is understandable from training through production, with local and global explanations and root-cause analysis of issues for multi-modal, tabular, and text inputs.

With Fiddler, it’s possible to provide explanations for all predictions made by a model, detect and resolve deep-rooted biases, and automate the documentation of prediction explanations for model governance requirements. In short, everything you need to comply.
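As a rough illustration (explicitly not Fiddler’s API), automating that documentation can be as simple as appending every prediction, together with its attributions, to an append-only audit log that can be replayed for auditors. The record fields and JSON-lines format here are assumptions.

```python
import json
import time

def log_prediction(model_id: str, features: dict, prediction: float,
                   attributions: dict, path: str = "audit.jsonl") -> None:
    """Append one prediction and its explanation to a JSON-lines audit trail."""
    record = {
        "ts": time.time(),
        "model_id": model_id,
        "features": features,
        "prediction": prediction,
        "attributions": attributions,  # e.g., Shapley values from the sketch above
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_prediction("credit-v3", {"income": 0.9, "debt": 0.7}, 0.62,
               {"income": 0.12, "debt": -0.04})
```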

While regulations may be driving the push for algorithmic transparency, it’s also what ML teams, line-of-business (LOB) teams, and business stakeholders want so they can better understand why AI systems make the decisions they do. By incorporating XAI into the MLOps lifecycle, you’re finally empowering your teams to build trust into AI. And that’s exactly what will soon be required.
