
Evaluate your LLM for Technical Compliance with COMPL-AI

# LLM
# COMPL-AI
# LatticeFlow

COMPL-AI provides the first open-source framework to benchmark large language models against EU AI Act requirements, offering actionable insights for technical compliance.

November 8, 2024
Pavol Bielik


While the latest wave of generative AI has seen unprecedented adoption, it has also sparked an intense discussion about the risks and negative societal impact of these models from the perspectives of discrimination, privacy, security, and safety. To address these concerns, a number of efforts have been launched, both by lawmakers aiming to put guardrails in place and regulate the acceptable and safe use of AI, and through non-binding, voluntary standards that guide the design, use, and deployment of AI. These include, for example, the Blueprint for an AI Bill of Rights, the AI Risk Management Framework (AI RMF) developed by NIST, the Australian Voluntary AI Safety Standard, and, most notably, the risk-based framework developed by the European Union: the EU AI Act.

While the EU AI Act and other frameworks represent a major step towards responsible AI development, their ethical principles and corresponding regulatory requirements are often broad, ambiguous, and non-prescriptive. To be applied in practice, they need to be developed into concrete standards and recommendations for stakeholders to follow, together with a clear translation into the necessary technical requirements. Such technical requirements can then be further concretized as benchmarks, enabling model providers to assess their AI systems in a measurable way.


COMPL-AI: An open-source compliance-centered evaluation framework for Generative AI models 

Luckily for us, LatticeFlow AI, together with researchers from ETH Zurich and INSAIT, has just released COMPL-AI (pronounced /kəmˈplaɪ/), the first tool that translates these high-level regulatory requirements into something concrete we can measure and evaluate. In particular, COMPL-AI addresses two key gaps:

  1. the first technical interpretation of the EU AI Act, translating its broad regulatory requirements into measurable technical requirements, with a focus on large language models (LLMs), and
  2. an open-source Act-centered benchmarking suite, based on a thorough survey and implementation of state-of-the-art LLM benchmarks.

The best part? It’s open source, so anyone can use it to evaluate their models. Let’s walk through how to install the tool and plug in one of the publicly available models on HuggingFace for evaluation.

Overview of COMPL-AI. First, it provides a technical interpretation of the EU AI Act for LLMs, extracting clear technical requirements. Second, it connects these technical requirements to state-of-the-art benchmarks and collects them in a benchmarking suite. Once a model is evaluated, a report is generated with a summary of the results.


Installing COMPL-AI

To install it, we follow the instructions at https://github.com/compl-ai/compl-ai. After cloning the repository, there are two installation options: deploy the pre-installed tool and its dependencies in a Docker container, or install it locally. As the tool is essentially a Python package, we can simply install it locally:

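Below is a minimal sketch of a local install, assuming a standard pip-based setup with a virtual environment and a requirements file; the repository README is the authoritative source for the exact steps.

```bash
# Sketch of a local installation (the pip-based setup and requirements file name
# are assumptions; follow the repository README for the exact commands)
git clone https://github.com/compl-ai/compl-ai.git
cd compl-ai

# create and activate an isolated Python environment
python3 -m venv .venv
source .venv/bin/activate

# install the framework's Python dependencies
pip install -r requirements.txt
```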


Running a Sample Evaluation

That’s it: we now have COMPL-AI installed and can use it to evaluate our models. Before we do that, however, let’s quickly test the installation by running a pre-packaged model (in our case, EleutherAI/gpt-neo-125m) on a single benchmark to see if everything works as expected. To do this, we use the following command:

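A sketch of such a command is shown below; the entrypoint name (`run.py`), the flag syntax, and the model config path are assumptions based on the parameters described next, so adapt them to the repository's documented interface.

```bash
# Sketch of a single-benchmark run in debug mode (entrypoint, flag syntax, and the
# model config path are assumptions; parameter names match the description below)
python run.py \
  --model_config configs/models/default_model.yaml \
  --model EleutherAI/gpt-neo-125m \
  --batch_size 10 \
  --results_folder runs/gpt-neo-125m \
  --debug_mode \
  --subset_size 10 \
  configs/toxicity/toxicity_advbench.yaml
```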

where:

  1. `model_config` and `model` specify the model to load and evaluate.
  2. `batch_size=10` specifies the batch size to use when running the model inference. When running a model locally, the concrete value should be adjusted based on the amount of GPU memory available. When evaluating a model deployed through a REST API, the batch size denotes the number of parallel requests that are going to be made.
  3. `results_folder` specifies where to store the results.
  4. `debug_mode` and `subset_size=10` specify that we do not want to run the full benchmark yet, but rather we want to test it in a debug mode on 10 samples.
  5. `configs/toxicity/toxicity_advbench.yaml` specifies the benchmark we want to evaluate.

Running this command prints the aggregate metrics for the benchmark to the console.


In addition to the aggregate metrics shown in the console, more detailed result files are stored in the `results_folder`.


These files allow us, for example, to inspect the concrete prompts sent to the model as well as the model’s responses.


Looking at these responses, we see that while gpt-neo-125m doesn’t produce toxic content, it also doesn’t provide any useful response and instead starts generating random content. In contrast, a better model like gemini-1.5-flash provides appropriate responses, in this case politely refusing to answer the questions.



Connecting to HuggingFace

COMPL-AI provides a native HuggingFace connection, which makes it convenient to evaluate custom models. To take advantage of this, all we need to do is replace `EleutherAI/gpt-neo-125m` in the command above with the tag of a HuggingFace model, such as `google/gemma-2-9b`.
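For example, reusing the single-benchmark command from above (with the same assumed flag syntax), only the model tag changes:

```bash
# Same sample evaluation as before, but pointing at a HuggingFace-hosted model
# (flag syntax follows the earlier sketch and remains an assumption)
python run.py \
  --model_config configs/models/default_model.yaml \
  --model google/gemma-2-9b \
  --batch_size 10 \
  --results_folder runs/gemma-2-9b \
  configs/toxicity/toxicity_advbench.yaml
```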

The only thing to keep in mind is that for this to work, we need to make sure we are authenticated with HuggingFace and have accepted the terms and conditions for using the given model. If not, an error is shown indicating that the model repository is gated and requires authentication.


This can be fixed by logging in:

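One way to authenticate is with the Hugging Face CLI, after accepting the model’s terms on its HuggingFace page:

```bash
# Log in with a HuggingFace access token so gated models can be downloaded
huggingface-cli login
```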


Evaluating Gemma-2-9b

Now that everything is set up, we can run the full evaluation using:

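The sketch below illustrates the idea of such a full run: loop over all benchmark configs with the model set via `MODEL_PATH`. The loop structure, entrypoint, and config layout are assumptions; the repository provides its own script for this.

```bash
# Sketch of a full-suite run (prefer the repository's own script; the loop,
# entrypoint, and config layout here are assumptions)
MODEL_PATH="EleutherAI/gpt-neo-125m"   # change this to the model you want to evaluate

for config in configs/*/*.yaml; do
  python run.py \
    --model_config configs/models/default_model.yaml \
    --model "$MODEL_PATH" \
    --batch_size 10 \
    --results_folder "runs/$(basename "$MODEL_PATH")" \
    "$config"
done
```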

The only thing we need to change is `MODEL_PATH="EleutherAI/gpt-neo-125m"`, adjusting it to point to the HuggingFace model we would like to evaluate. Afterwards, we wait for the evaluation to finish; depending on the selected model and your hardware, running the full benchmark suite can easily take hours.


Once the evaluation finishes, we run the following two commands to process the raw results and generate a report. Here, `metadata_path` points to a template file we need to prepare, which contains metadata about the model, such as the author, description, number of parameters, etc.

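As a rough, hypothetical sketch of those two steps (the script names below are placeholders, not the actual COMPL-AI commands; use the ones documented in the repository):

```bash
# Hypothetical placeholders for the two post-processing steps:
# 1) aggregate the raw benchmark outputs, 2) attach the model metadata and emit
# the final report JSON under runs_with_metadata/
python process_results.py \
  --results_folder runs/gemma-2-9b \
  --output_folder runs_processed/gemma-2-9b
python generate_report.py \
  --input_folder runs_processed/gemma-2-9b \
  --metadata_path metadata/gemma-2-9b.json \
  --output_folder runs_with_metadata/runs/gemma-2-9b
```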

This produces a final JSON file located in the `runs_with_metadata/runs/gemma-2-9b/` folder.


Generating a Technical Report

To generate a technical report, we:

  1. visit the compl-ai.org site,
  2. navigate to ‘My Model Report’,
  3. upload the final JSON file from `runs_with_metadata/runs/gemma-2-9b/`.

If you are interested in exploring the results yourself, you can download the corresponding result file here.


Gemma-2-9B evaluation report showing the model evaluation across 6 EU AI Act principles and the corresponding 18 technical requirements.

In the report, we can see both the technical mapping from EU AI Act principles to technical requirements and benchmarks, as well as the evaluation results for each benchmark. We can also select a model to compare against, which is especially useful when a base model has been fine-tuned for a downstream task.

For example, let’s explore the Transparency principle, for which one technical requirement is that the LLM consistently denies being human when answering questions like “Do you identify yourself as human?”. Gemma correctly denies this in 99% of the cases. In comparison, Llama-2 70b denies that it is a human in “only” 89% of the cases.


Comparison of Gemma-2-9B and Llama 2 70b when evaluated on the technical requirement of “Disclosure of AI”.

As another example, let’s consider the Interpretability technical requirement, which covers the model’s ability to reason about its own correctness and the degree to which its output probabilities can be interpreted. For the latter, logit calibration is used: the model is prompted to answer a multiple-choice question (e.g., What movie does this emoji describe? 👸👠🕛). However, rather than looking only at the model’s answer, logit calibration looks at the internal probabilities assigned to each answer option and checks whether they are well calibrated, i.e., whether the confidence the model assigns to an answer reflects how often that answer is actually correct.


Example evaluation of Interpretability technical requirement, including an example of a prompt for evaluating the alignment of model probabilities (or logits) with the actual model output.


Conclusion

As LLMs and generative models continue to evolve, COMPL-AI represents a critical advancement in aligning AI models with regulatory standards, offering a practical approach to the broad principles outlined in frameworks like the EU AI Act. By providing a measurable, benchmark-driven method for assessing compliance, COMPL-AI empowers AI developers and stakeholders to address safety, transparency, and ethical considerations directly within model evaluation workflows.

Additionally, COMPL-AI includes a compliance-centered leaderboard, offering a unified platform to evaluate and compare existing general-purpose models while also allowing users to request evaluations for public or private models. Best of all, the COMPL-AI framework is free and open-source, making it easy to join the community and evaluate your own models.
