Trustworthy AI
How to deal with the upcoming European AI Act and the development of trustworthy AI in research organizations
Abstract
A trustworthy AI system is one that is lawful, ethical, and robust. Such a system respects, and ensures respect for, essential values such as security, sustainable impact, human responsibility, explainability, fairness, and privacy. With upcoming European regulations such as the AI Act, the framework for the design, implementation, and deployment of AI systems is becoming clearer and will become essential for any institution or company working in AI.
In this context, we examine the practical implications of this European AI legislation for the research activity of laboratories working on and with AI, and, by extension, for the management and support of their researchers.
To meet this challenge, we propose a three-point methodology, developed as part of the EDIH Dihnamic project, which implements a transversal axis on trustworthy AI: 1) continuous and recurrent acculturation of researchers and support teams on these subjects, 2) the appointment of a trustworthy AI referent in each research laboratory to provide personalized support, and 3) the constitution of a multidisciplinary hub of experts, made up of scientific peers from the private and public sectors, to support research projects in upstream reflection and, when the referent and the researcher so wish, during the project on more sensitive subjects.
We discuss these recommendations and the challenges they raise for trustworthy AI, from both technical and management perspectives, as well as the metrics that must be put in place to evaluate this methodology.