Compliant use of AI - implementing the EU’s AI and Data Act

Converting sensitive data into anonymised data

AI is already changing how we automate tasks and make better decisions across many industries. Data is the fuel: although large language models trained on publicly available data are very popular, many of the most important problems require data that is proprietary, highly confidential and regulated.

The Partisia Platform is developed to address data confidentiality as well as transparency into the use of data. These properties address both data regulations such as the GDPR and the core requirements of the EU’s AI Act and Data Act.


R&D focal points

Explore our R&D efforts and witness the transformative power of AI in shaping the future of technology.


Regulating the use of AI concerns both professionals in the field and regulators around the globe. Controlling AI is essential to prevent poorly generated AI content from influencing elections, public opinion and personal decisions that may affect an individual’s finances and health. Transparency is a key tool: knowing what the underlying AI algorithms are and how they were created is crucial to assessing their true value. For AI trained and applied on confidential data, privacy is equally important. Striking this balance is a core part of the Partisia Platform.


Separating the training of AI models from their use is crucial. To ensure transparency, it is important to keep a record of the applied AI model, which is hard if training and use are part of the same process. Trained AI models can be applied directly on the Partisia Platform, so that the user’s private data needed to generate a precise response is kept encrypted at all times without compromising transparency into the applied AI model.
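To illustrate how a response can be computed without exposing the user’s input, here is a minimal sketch of additive secret sharing, the arithmetic building block behind MPC-style confidential inference. The function names and the two-server setting are illustrative assumptions, not the Partisia API; a public linear model is evaluated while neither server ever sees the input in the clear.

```python
# Sketch: evaluating a public linear model w·x on a secret-shared input x.
# Additive secret sharing over a prime field; share/reconstruct are
# illustrative names, not a real Partisia interface.
import random

P = 2**61 - 1  # prime modulus for the arithmetic shares

def share(x, n=2):
    """Split integer x into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

w = [3, 1, 4]   # public model weights
x = [2, 7, 1]   # the user's confidential feature vector

# Each element of x is split between two non-colluding servers.
x_shares = [share(v) for v in x]

# Each server computes the dot product on its own shares only;
# this works because the dot product is linear in x.
partial = [sum(wi * xs[s] for wi, xs in zip(w, x_shares)) % P
           for s in range(2)]

# Combining the two partial results yields w·x = 17, yet neither
# server individually learned anything about x.
assert reconstruct(partial) == sum(wi * xi for wi, xi in zip(w, x))
```

Each share on its own is a uniformly random field element, which is why a single server learns nothing about the user’s data.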


Converting sensitive data into anonymised data that can then be used in clear text to train AI models is a great and ongoing challenge. Partisia’s Confidential Computing product is designed for this purpose and builds on more than 8 years of work with some of the most trusted data custodians, such as Statistics Denmark and the Danish health data authorities. Continuous R&D effort is needed to improve and expand this approach to data security.
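One classic step in such a conversion is generalising quasi-identifiers until every record is indistinguishable within a group of at least k records (k-anonymity). The sketch below is a simplified illustration with invented field names, not Partisia’s actual anonymisation pipeline.

```python
# Sketch: generalising quasi-identifiers (exact age -> age band,
# full postcode -> prefix) and checking k-anonymity of the result.
# The record schema here is illustrative only.
from collections import Counter

def generalise(record):
    decade = (record["age"] // 10) * 10
    return {
        "age": f"{decade}-{decade + 9}",          # 34 -> "30-39"
        "postcode": record["postcode"][:2] + "**", # "8000" -> "80**"
    }

def is_k_anonymous(records, k):
    """Every combination of quasi-identifier values occurs >= k times."""
    groups = Counter(tuple(sorted(r.items())) for r in records)
    return all(count >= k for count in groups.values())

raw = [
    {"age": 34, "postcode": "8000"},
    {"age": 37, "postcode": "8000"},
    {"age": 36, "postcode": "8081"},
]
anonymised = [generalise(r) for r in raw]
assert is_k_anonymous(anonymised, k=3)  # all three records now look identical
```

Real deployments combine such generalisation with stronger guarantees (e.g. noise addition), since k-anonymity alone can still leak information.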


Federated machine learning is a promising privacy-enhancing technology and a computational principle built into Partisia’s Confidential Computing product. The basic idea is that some problems can be solved by sending the algorithm to the data rather than sending all data to the algorithm. For some AI models, this reduces the required central computation, as most of the heavy computation can be done locally. The computational overhead of using MPC to secure the “central computations” is thereby reduced significantly. Continuous R&D effort is needed to improve and expand this approach to data security.


The most critical R&D effort required for using MPC in large-scale AI training is new, innovative protocols. Initial work on MPC-based neural networks, and on MPC models that support off-loading of heavy computations to one of the two confidential computing parties, are promising candidates for the next generation of MPC-based AI.
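One way such off-loading can work for linear layers is masking: a weak party hides its private input behind a one-time random pad, lets a powerful untrusted party do the heavy matrix computation on the masked value, and then corrects the result locally. The protocol shape below is a simplified illustration, not a specific Partisia protocol; in practice the correction term would be precomputed offline.

```python
# Sketch: off-loading a linear layer A·x. The powerful party only ever
# sees x + r, which is uniformly random, so x stays hidden.
import random

P = 2**61 - 1  # prime modulus

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) % P for row in A]

A = [[1, 2], [3, 4]]  # public model weights (e.g. one linear layer)
x = [5, 6]            # private input held by the weak party

r = [random.randrange(P) for _ in x]             # one-time random mask
masked = [(xi + ri) % P for xi, ri in zip(x, r)]

heavy = matvec(A, masked)   # done by the powerful party on masked data
correction = matvec(A, r)   # ideally precomputed offline by the weak party

# Because A is linear: A(x + r) - A·r = A·x, recovered without revealing x.
result = [(h - c) % P for h, c in zip(heavy, correction)]
assert result == matvec(A, x)
```

This only works directly for linear operations; handling the non-linear activations of a neural network is exactly where the new MPC protocols mentioned above are needed.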

Invest in AI

Position yourself at the forefront of innovation by investing in Partisia's groundbreaking AI technologies. Join us in shaping the future of artificial intelligence and unlocking new possibilities in technology, business, and beyond.

Real control and privacy of data is paramount for individuals and organizations to harness the tremendous benefits of AI without losing control of the very data that defines us.
Kurt Nielsen, Chief Executive Officer, Partner

Quantum Computing

We provide and continuously extend a quantum-proof platform for the secure use of quantum computing. It is paramount for us that the Partisia Platform remains quantum-proof, unlike most of the cryptography in use today.
