
Model training in regulated industries – privacy-first approaches


Why model training has become a compliance challenge

Model training sits at the core of modern AI systems. In financial services, insurance, and other regulated industries, models are increasingly used to detect fraud, assess risk, and automate decision-making. However, as models grow more data-hungry and complex, the way they are trained has become a regulatory concern.

Traditional model training relies on centralizing large volumes of sensitive data. This approach clashes directly with GDPR, banking secrecy laws, and the requirements introduced under the EU AI Act. The result is a growing gap between what AI teams want to build and what compliance teams can approve.

The limits of traditional centralized model training

Centralized training approaches create structural risks that are difficult to mitigate in regulated environments.

Data exposure and privacy risk

Aggregating data into a single training environment increases the blast radius of breaches and misuse. Even anonymized datasets can often be re-identified when combined at scale.
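
As a toy illustration of that re-identification risk (hypothetical data and column names), a simple linkage attack joins an "anonymized" dataset against publicly available auxiliary data on quasi-identifiers such as birth date and postcode:

```python
import pandas as pd

# "Anonymized" records: direct identifiers removed, but
# quasi-identifiers (birth date, postcode) are still present.
anonymized = pd.DataFrame({
    "birth_date": ["1984-03-02", "1991-07-15"],
    "postcode":   ["8000", "2100"],
    "balance":    [10_250, 87_400],
})

# Public auxiliary data, e.g. a register or an earlier leak.
public = pd.DataFrame({
    "name":       ["A. Jensen", "B. Nielsen"],
    "birth_date": ["1984-03-02", "1991-07-15"],
    "postcode":   ["8000", "2100"],
})

# Joining on the quasi-identifiers re-attaches names to the
# supposedly anonymous records.
reidentified = anonymized.merge(public, on=["birth_date", "postcode"])
print(reidentified[["name", "balance"]])
```

At population scale, a handful of quasi-identifiers is often enough to single out most individuals, which is why removing names alone rarely qualifies as anonymization.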

Regulatory friction and audit complexity

Supervisory authorities increasingly expect firms to explain where training data comes from, how it is protected, and how risks are mitigated throughout the model lifecycle. Centralized pipelines make this harder to prove.

Operational inefficiency and false positives

Models trained on narrow or siloed datasets often perform poorly, leading to higher false-positive rates in fraud detection and compliance monitoring.

What modern model training requires in regulated sectors

To remain viable, model training must evolve beyond simple data pooling.

Distributed learning by design

Training should happen where data already resides, reducing unnecessary movement of sensitive information.

Privacy-preserving computation

Advanced cryptographic techniques are required to ensure that insights can be shared without exposing raw data or proprietary model parameters.

Built-in compliance controls

Auditability, traceability, and data minimization must be native to the training process, not added as an afterthought.

Federated and confidential model training explained

Federated learning changes the model training paradigm. Instead of sending data to a central location, the model is sent to each participant, trained locally, and improved by aggregating the participants' model updates; raw data never leaves its source.

When combined with privacy-preserving computation, federated model training becomes suitable for regulated industries.

This approach allows institutions to benefit from shared intelligence while maintaining full control over their data and meeting regulatory expectations.
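
As a minimal sketch of what this looks like, assuming a toy linear model and plain NumPy (all names here are illustrative, not a production framework), one round of federated averaging proceeds as follows: each institution trains on its own data and returns only model weights, which a coordinator averages.

```python
import numpy as np

def local_update(global_weights, local_X, local_y, lr=0.1, epochs=5):
    """Train a simple linear model on local data; the raw data never leaves."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = local_X.T @ (local_X @ w - local_y) / len(local_y)
        w -= lr * grad
    return w

def federated_round(global_weights, participants):
    """One round of federated averaging (FedAvg) across all participants."""
    updates = [local_update(global_weights, X, y) for X, y in participants]
    sizes = np.array([len(y) for _, y in participants], dtype=float)
    # Weighted average of local models, proportional to dataset size.
    return np.average(updates, axis=0, weights=sizes)

# Two institutions with private datasets (synthetic stand-ins here).
rng = np.random.default_rng(0)
bank_a = (rng.normal(size=(100, 3)), rng.normal(size=100))
bank_b = (rng.normal(size=(80, 3)), rng.normal(size=80))

w = np.zeros(3)
for _ in range(10):  # ten federated rounds
    w = federated_round(w, [bank_a, bank_b])
```

Note that only `w`, the model weights, ever crosses organizational boundaries; as the next section explains, even those weights need cryptographic protection.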

Related reading: Federated learning in finance – privacy-safe AI collaboration

Why model training alone is not enough without cryptographic protection

Standard federated learning still exposes risks. Model updates can leak sensitive information if not properly protected.

This is where cryptography becomes essential.

By applying techniques such as Multi-Party Computation (MPC), model updates remain encrypted throughout aggregation. No single party can access another participant’s data or insights, and even the orchestrator remains blind.

This creates a training environment where collaboration and confidentiality coexist.
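
To make the intuition concrete, here is a toy sketch of additive secret sharing, one building block behind MPC-based secure aggregation. It is a deliberate simplification, not Partisia's actual protocol: each participant splits its model update into random shares, any single share is uniformly random and reveals nothing, and only the combined sums reconstruct the aggregate.

```python
import numpy as np

MOD = 2**31 - 1   # arithmetic over a finite field (a Mersenne prime)
SCALE = 10**6     # fixed-point encoding for real-valued updates

def share(update, n_parties, rng):
    """Split an update into n additive shares; each share alone
    is uniformly random and reveals nothing about the update."""
    encoded = np.round(update * SCALE).astype(np.int64) % MOD
    shares = [rng.integers(0, MOD, size=update.shape) for _ in range(n_parties - 1)]
    shares.append((encoded - sum(shares)) % MOD)
    return shares

def reconstruct(share_sums, n_updates):
    """Combine per-server share sums into the averaged aggregate."""
    total = sum(share_sums) % MOD
    signed = np.where(total > MOD // 2, total - MOD, total)  # back to signed values
    return signed.astype(np.float64) / SCALE / n_updates

rng = np.random.default_rng(42)
updates = [rng.normal(size=4) for _ in range(3)]  # three banks' updates

# Each bank sends one share to each of two aggregation servers.
all_shares = [share(u, n_parties=2, rng=rng) for u in updates]
server_sums = [sum(bank[s] for bank in all_shares) % MOD for s in range(2)]

# Only the combined servers recover the average; neither sees any input.
avg = reconstruct(server_sums, n_updates=len(updates))
print(np.allclose(avg, np.mean(updates, axis=0), atol=1e-4))
```

Neither aggregation server can learn an individual bank's update from the shares it holds; production MPC systems layer authentication, dropout handling, and protection against malicious parties on top of this basic idea.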

Model training under the EU AI Act

The EU AI Act introduces explicit expectations around data governance, risk management, and technical robustness.

For high-risk AI systems, regulators will assess:

  • How training data is sourced and protected

  • Whether data minimization principles are enforced

  • How bias, drift, and performance are monitored (one common drift metric is sketched below)

Privacy-preserving model training directly supports these requirements by design, rather than relying on compensating controls.
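
As one concrete illustration of the monitoring expectation, the population stability index (PSI) is a widely used way to quantify how far a feature's live distribution has drifted from its training-time distribution. This is a generic sketch with synthetic data; the thresholds in the comment are a common rule of thumb, not a regulatory requirement.

```python
import numpy as np

def psi(expected, actual, n_bins=10):
    """Population Stability Index between a feature's training-time
    distribution (expected) and its live distribution (actual)."""
    # Bin edges from training-time quantiles; outer bins are open-ended.
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))[1:-1]
    e_pct = np.bincount(np.searchsorted(edges, expected), minlength=n_bins) / len(expected)
    a_pct = np.bincount(np.searchsorted(edges, actual), minlength=n_bins) / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
train_scores = rng.normal(0.0, 1.0, 10_000)  # feature at training time
live_scores = rng.normal(0.3, 1.1, 10_000)   # same feature in production

# Rule of thumb: < 0.1 stable, 0.1-0.25 worth watching, > 0.25 drifted.
print(f"PSI = {psi(train_scores, live_scores):.3f}")
```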

Related context: The new AI Act is here – is your company prepared?

Business impact of secure model training

For regulated organizations, modern model training is not just a technical choice. It has direct business consequences.

Better model performance

Access to broader, collaborative insights improves accuracy and reduces blind spots in fraud and risk models.

Lower compliance risk

Encrypted and distributed training reduces exposure to regulatory penalties and audit findings.

Faster innovation cycles

Teams can experiment and deploy models without lengthy legal approvals tied to data sharing.

Five steps to ensure compliance with the EU AI Act

The European AI Act, like the GDPR before it, brings sweeping changes to how companies must handle artificial intelligence systems. Compliance isn't just a legal formality; it's crucial for safeguarding human rights, maintaining transparency, and building trust with your customers.


 What's inside?

  • Identify and manage AI risk levels

  • Implement transparency measures

  • Conduct regular audits

  • Use Blockchain for traceability

  • Adopt ethical AI practices

  • Tailored compliance solutions

and more...


Where Partisia fits into privacy-preserving model training

Partisia enables secure, collaborative model training using Multi-Party Computation as a foundational technology.

By combining federated learning with MPC, Partisia allows organizations to:

  • Train AI models across institutions without sharing data

  • Keep model updates encrypted at all times

  • Prove compliance with GDPR and the EU AI Act

  • Scale AI collaboration across borders and organizations

“The future of model training in regulated industries is not about collecting more data in one place. It’s about collaborating on intelligence without exposing what should remain private.”
Mark Medum Bundgaard, Chief Product Officer, Partisia


Preparing your organization for the next generation of model training

Organizations should start by reassessing their AI pipelines:

  • Identify where sensitive data enters the training process

  • Evaluate whether data movement can be reduced or eliminated

  • Introduce privacy-preserving computation at the model level

  • Align AI governance with upcoming regulatory audits

Modern model training is no longer just a data science concern. It is a strategic capability that determines how far AI can scale in regulated environments.

Summary

Model training is evolving under pressure from regulation, security risks, and rising AI expectations. Centralized approaches are reaching their limits.

Privacy-preserving, distributed model training offers a path forward, enabling stronger models, lower compliance risk, and real collaboration across institutions.

For regulated industries, this is not an optimization. It is becoming a requirement.

Partisia
2025.11.04