Societies are becoming increasingly dependent on digital technologies, including decision-making algorithms applied across a broad spectrum of domains, such as transportation and automated driving, health and medical diagnostics, public administration and criminal justice, insurance, commerce, news, advertising and autonomous weapons. As decision-making algorithms become more widespread and capable of processing large bodies of information at scale and speed, and of optimising choices for humans and institutions, they can bring wide, crosscutting benefits to society. Yet they are also increasingly complex, not free from errors and possibly biased. They also challenge conventional decision-making, where human judgment and ethical deliberation matter.
In July 2018, with support from the Swiss Re Institute, IRGC organised a multi-disciplinary and multi-stakeholder workshop on the governance of decision-making algorithms. In a round-table setting, a group of thirty participants, comprising scientists and developers in AI and data science, experts in regulatory issues and policy analysis, and representatives of industry and insurance companies, discussed how to govern risks and benefits at both the technical and governance levels.
The report prepared after the workshop highlights the main opportunities and challenges related to the development and use of decision-making learning algorithms (DMLAs):
- Technology and governance are tightly connected.
- What is new is that algorithms can ‘learn’ and self-evolve.
- Risk evaluation and governance must be done for each domain and application (e.g. healthcare, automated driving, predictive policy-making, insurance, etc.).
- Governance of DMLAs must consider existing regulations and key benchmarks against which DMLAs’ performance must be calibrated.
- It is critically important to improve the accuracy of outcomes compared with human decisions.
- The problem of algorithmic bias is a key challenge, in particular when it leads to outcomes with unfair social consequences.
- Under some circumstances humans should remain in control. It is thus important to differentiate if and when humans are or must be in control, and when they are unable to take control back.
- There is a need to develop standards, principles and governance rules, and to embed them into the very design and functioning of DMLAs.
- Defining accountability, responsibility and liability remains central.
- Engineering digital trust and developing social trustworthiness are critical and increasingly relevant challenges.
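The bias concern raised above can be made operational. As a minimal, hypothetical sketch (the report does not prescribe any particular metric, and the group labels and decision data below are invented for illustration), one simple notion of bias, demographic parity, compares an algorithm's rate of favourable decisions across population groups:

```python
# Hypothetical sketch: one simple bias check (demographic parity)
# on a toy set of automated decisions. All data is illustrative.

def positive_rate(decisions):
    """Fraction of decisions that are favourable (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in favourable-decision rates between any two groups.

    A value near 0 suggests similar treatment of groups on this one
    metric; a large value flags a potentially unfair outcome.
    """
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Toy example: loan approvals (1 = approved) for two groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 = 37.5% approved
}

gap = demographic_parity_difference(decisions)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

A single metric like this is of course insufficient in practice, and an acceptable threshold depends on the domain; the point is only that claims about unfair outcomes can be quantified and monitored per application, which is what domain-specific risk evaluation requires.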