This summer, the first phase of the European AI Act came into effect. Much like the General Data Protection Regulation (GDPR), the primary objective of the AI Act is to safeguard human rights and ensure that AI is used in a trustworthy and transparent manner. The Act mandates that individuals are fully informed when AI is being utilised and have a clear understanding of how it impacts them.
Similar to GDPR, the AI Act categorises AI systems into different levels of risk: unacceptable risk, high risk, and low risk. Additionally, it outlines specific transparency requirements for organisations using AI.
Here’s a breakdown of the different risk categories under the AI Act:
Unacceptable Risk: AI systems deemed too dangerous are prohibited within the EU. These include systems that manipulate mental states or exploit vulnerable individuals, posing significant threats to fundamental human rights.
For example, this category could include “social scoring” systems, where individuals are rated based on their behaviour in society.
High Risk: This category includes AI systems that impact critical infrastructure or have significant implications for human rights, such as AI systems used in health or law enforcement.
Low Risk: AI systems that pose minimal risks, such as recommendation engines or spam filters, fall under this category.
Transparency Requirements: When deploying AI systems like chatbots, it is mandatory to inform users that they are interacting with a machine. Similarly, AI-generated content, such as deep fakes, must be clearly labelled to ensure users are aware of its artificial origin.
Why is the AI Act Important?
At its core, Artificial Intelligence (AI) is driven by machine learning, a technology that enables machines to predict outcomes based on patterns found in existing data, or to generate artificial data that resembles data seen before. The AI Act is crucial because it provides a regulatory framework that ensures this technology is used in a way that respects human rights, fosters trust, and promotes transparency.
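To make "predicting outcomes from patterns in existing data" concrete, here is a minimal sketch in Python using scikit-learn. The numbers are invented purely for illustration:

```python
# A minimal sketch of learning patterns from existing data.
# All values are made up for illustration.
from sklearn.linear_model import LinearRegression

# Existing data: flat size in square metres -> price in thousands of euros
X = [[50], [70], [90], [120]]
y = [150, 210, 270, 360]

model = LinearRegression().fit(X, y)

# Predict the outcome for a size the model has never seen before
print(model.predict([[100]]))  # -> [300.]
```

The model has learned the pattern relating size to price from the examples it was shown, and it applies that pattern to new cases. Everything that follows about transparency is, in essence, about being able to account for such learned patterns.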
Machine learning offers countless applications that benefit both individuals and businesses. However, some of these applications occur in highly sensitive areas where the stakes are much higher.
When utilising machine learning or similar statistical methods, it is essential to be able to interpret and understand the results these systems produce. In some scenarios, this is challenging.
Consider a scenario where your doctor is trying to diagnose a medical condition using a model known as a 'decision tree.' Each piece of data guides the decision-making process along different branches, ultimately leading to a diagnosis, such as diabetes. If you ask your doctor to explain why they believe you have diabetes, they should be able to analyse and interpret the data that led to this conclusion.
In contrast, it would be unsatisfactory if the diagnosis was simply, “I think you have diabetes because ChatGPT says so.”
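To see the difference in practice, here is a small illustrative sketch of how a decision tree's reasoning can be printed and inspected. It assumes Python with scikit-learn, and the patient records and thresholds are invented for illustration, not medical data:

```python
# A sketch of why a decision tree is interpretable: the learned rules
# can be printed and inspected. All values below are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy patient records: (fasting glucose in mmol/L, BMI) -> diagnosis
X = [[5.0, 22], [5.4, 27], [7.8, 31], [8.2, 29], [5.1, 24], [9.0, 35]]
y = ["no", "no", "yes", "yes", "no", "yes"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A human-readable rule set: a doctor can point to the exact branch
# that led to a diagnosis.
print(export_text(tree, feature_names=["glucose", "bmi"]))
print(tree.predict([[8.5, 30]]))  # -> ['yes'], via an inspectable path
```

Every prediction corresponds to a concrete path through the printed rules, which is exactly the kind of traceability the "because ChatGPT says so" answer lacks.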
As mentioned earlier, there are situations where not being able to fully analyse AI-generated results may be acceptable, but in many critical cases, transparency and interpretability are paramount. For instance, when AI is used in determining medical treatments or approving bank loans, understanding how the AI arrived at its conclusions is far more significant than, say, knowing why ChatGPT recommends a particular cookie recipe—the latter is not a life-altering decision.
The AI Act mandates that data owners must always have the ability to understand which specific data and analyses were used in the decision-making process. This is where transparency becomes vital; individuals should be able to see how their data is being used and have the option to decline its usage.
If your business already utilises machine learning, it is essential to ensure that your practices align with the AI Act. Compliance means not only safeguarding human rights but also maintaining the trust of your customers and stakeholders by being transparent about how their data is used.
Navigating the Complexity of AI Model Training
Training a good machine learning model is a costly endeavour, usually involving massive amounts of data and thousands of hours of CPU time. For companies that make their living selling machine learning as a service, it is therefore imperative that the outcome of this process, the model's parameters, remains confidential.
The EU AI Act's oversight requirements complicate things for the service provider. If the provider's AI models have to undergo regular compliance and conformity checks, the models may need to be handed over to the authority performing those checks, at least for as long as the checks take.
With the introduction of the EU AI Act, new regulations around accountable AI are being established. These rules mandate that the data you use to train generative models must adhere to copyright laws and ethical standards.
To illustrate how this works, imagine you're training a machine to recognise cats and dogs in various images. You begin by showing the machine a photo and then providing the correct label: dog or cat. After many such examples, the machine learns to make associations: when it sees a particular image, it can identify whether it shows a cat or a dog.
After training on millions of photos, the model becomes proficient at classification. However, there's a catch: the machine cannot explain its reasoning. It may accurately differentiate between a cat and a dog, but it cannot articulate how it makes these distinctions.
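The training loop itself is simple to sketch. The following toy example, again in Python with scikit-learn, shows the show-an-example, give-the-label pattern described above. The two-number "feature vectors" are invented stand-ins for real image data:

```python
# A minimal sketch of supervised training: show an example, give the
# correct label, repeat. The feature vectors stand in for image data.
from sklearn.neighbors import KNeighborsClassifier

examples = [
    ([0.9, 0.2], "cat"),  # e.g. pointy ears, short snout
    ([0.8, 0.3], "cat"),
    ([0.3, 0.9], "dog"),  # e.g. floppy ears, long snout
    ([0.2, 0.8], "dog"),
]
X = [features for features, label in examples]
y = [label for features, label in examples]

model = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# The model classifies a new example correctly, but, like the cat/dog
# classifier described above, it produces no human-readable rules.
print(model.predict([[0.85, 0.25]]))  # -> ['cat']
```

Note the contrast with the decision tree shown earlier: this classifier gets the answer right, but there is no rule set to print and inspect, which is precisely the explainability gap the regulation is concerned with.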
The Partisia Solution
At Partisia, we offer comprehensive solutions to help companies remain compliant with evolving AI regulations. Our approach is twofold: we provide both passive and active compliance strategies.
Solution 1: Transparency and Traceability Through Blockchain
One of our core strengths lies in delivering transparency solutions. Leveraging our cutting-edge blockchain technology, we help businesses ensure their AI training algorithms are fully traceable, auditable, and transparent. This allows you to meet regulatory requirements with confidence. Our tools help you track and verify that your algorithms are trained in a manner that adheres to relevant legal frameworks, so you can be certain your company is compliant with the latest legislation.
Solution 2: Certification and Compliance Auditing
Staying compliant with regulations is not just a legal requirement but also a responsibility towards your customers and stakeholders. At Partisia, we assist companies in obtaining the necessary certifications for compliance, not only under GDPR but also under the new AI Act.
Our auditing services ensure that your organisation is fully prepared to meet these standards. What sets us apart is our ability to conduct thorough audits using our advanced Multi-Party Computation (MPC) technology. This allows us to assess your trained models without compromising your proprietary Machine Learning algorithms. With Partisia, you can ensure your models are compliant without sacrificing confidentiality or intellectual property.
We are always here to help! Reach out to us today, and let's talk about how your company can stay AI-compliant.