A world-first approach to help organisations comply with future AI regulations in Europe has been published today in a report by the University of Bologna and the University of Oxford. It has been developed in response to the proposed EU Artificial Intelligence Act (AIA) of 2021, which seeks to coordinate a European approach to the human and ethical implications of AI.
The procedure, named capAI (conformity assessment procedure for AI), will help businesses comply with the proposed AIA and prevent or minimise the risk of AI behaving unethically and damaging individuals, communities, wider society, and the environment.
Produced by a team of experts at Oxford University’s Saïd Business School and Oxford Internet Institute, and at the Centre for Digital Ethics of the University of Bologna, capAI will help organisations assess their current AI systems to prevent privacy violations and data bias. Additionally, it will support the explanation of AI-driven outcomes, and the development and running of systems that are trustworthy and AIA compliant.
Thanks to capAI, organisations will be able to produce a scorecard for each of their AI systems, which can be shared with their customers to demonstrate good practice and conscious management of ethical AI issues. This scorecard covers the purpose of the system, the organisational values that underpin it, and the data that has been used to inform it. It also includes information on who is responsible for the system, along with their contact details, should customers wish to get in touch with any queries or concerns.
Professor Matthias Holweg, American Standard Companies Chair in Operations Management at Saïd Business School and co-author of the report, remarked: "To develop the capAI procedure, we created the most comprehensive database of AI failures to date. Based on this, we produced a one-of-a-kind toolkit for organisations to develop and operate legally compliant, technically robust and ethically sound AI systems, by flagging the most common failures and detailing current best practices. We hope that capAI will become a standard process for all AI systems and prevent the many ethical problems they have caused.”
In addition to ensuring compliance with the AIA, capAI can help organisations working with AI systems to: monitor the design, development, and implementation of AI systems; mitigate the risk of failures in AI-based decisions; prevent reputational and financial harm; and assess the ethical, legal, and social implications of their AI systems.
Professor Luciano Floridi, OII Professor of Philosophy and Ethics of Information, Director of the Centre for Digital Ethics at the University of Bologna, and co-author of the report, summarises the goal of the project: "AI in its many varieties is meant to benefit humanity and the environment. It is an extremely powerful technology, but it can be risky. So, we have developed an auditing methodology that can check AI’s alignment with human and EU legislation, and help ensure its proper development and use.”