Methodology for the Risk and Impact Assessment of Artificial Intelligence Systems from the point of view of Human Rights, Democracy and the Rule of Law (HUDERIA Methodology)

Work data:

Type of work: Handbook/Primer/Guide

Categories:

e-Government & e-Administration | ICT Infrastructure

Tags:

artificial intelligence

Abstract:

The Risk and Impact Assessment of AI Systems from the Point of View of Human Rights, Democracy and the Rule of Law ("the HUDERIA") is a guidance document that provides a structured approach to risk and impact assessment for AI systems, specifically tailored to the protection and promotion of human rights, democracy and the rule of law.

It is intended to be used by public and private actors and to play a unique and critical role at the intersection of international human rights standards and existing technical frameworks for risk management in the AI context.

The HUDERIA is a standalone, non-legally binding guidance document. Parties to the Framework Convention have the flexibility to use or adapt it, in whole or in part, to develop new approaches to risk assessment or to refine existing ones, in accordance with their applicable laws. However, Parties must fully comply with their obligations under the Framework Convention, including the baseline standards for risk and impact management outlined in Chapter V.
