TK MILAB AI and Law: "Computational Accountability" by Joris Hulstijn (Tilburg University)
Computer systems based on artificial intelligence, and specifically on machine learning techniques, are increasingly pervasive. Based on large amounts of data, such systems make decisions that matter. For example, they select our news, they decide whether to grant a loan, or they apply a discount based on a customer profile. Still, some person (natural or legal) remains responsible for the decisions made by the system. Looking back, that person is also accountable for the outcomes, and may even be liable in case of damages. That puts constraints on the design of autonomous systems and on the governance models that surround these decisions.
Can we design autonomous systems in such a way that all decisions can be justified later, and that the person who is ultimately responsible can be held accountable?
In this talk, we will analyse the problem of computational accountability along two lines. First, we will discuss system design. For every decision, evidence must be collected about the decision rule that was used and the data that was applied. However, many algorithms are not understandable to humans; hence the need for explainable AI. Alternatively, we must prove that the system is set up in such a way that it can only use valid algorithms and reliable data sets that are appropriate for the decision task. Second, we will discuss the governance model for autonomous systems: what are the standards and procedures, and the roles and responsibilities, needed to ensure that only valid algorithms and reliable data are used in decision making? The discussion will be illustrated by practical examples.
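To make the evidence-collection idea concrete, here is a minimal sketch (not from the talk; all names and the decision rule are illustrative assumptions) of how a decision function might automatically record which rule version was used and a fingerprint of the input data, so each decision can be justified afterwards:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical audit trail: one evidence record per decision.
audit_log = []

def accountable(rule_id, rule_version):
    """Decorator that records evidence for every decision made:
    which rule (and version) ran, on what data, with what outcome."""
    def wrap(decide):
        def wrapper(case):
            outcome = decide(case)
            audit_log.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "rule_id": rule_id,
                "rule_version": rule_version,
                # Hash of the input data, so the exact case can be
                # matched to archived records without storing it here.
                "input_hash": hashlib.sha256(
                    json.dumps(case, sort_keys=True).encode()
                ).hexdigest(),
                "outcome": outcome,
            })
            return outcome
        return wrapper
    return wrap

@accountable(rule_id="loan_rule", rule_version="1.2")
def grant_loan(applicant):
    # Illustrative, human-readable decision rule standing in for a
    # vetted model: monthly income must cover the monthly repayment
    # of a one-year loan three times over.
    return applicant["income"] >= 3 * applicant["loan_amount"] / 12

approved = grant_loan({"income": 4000, "loan_amount": 12000})
```

The point of the sketch is that accountability is designed in: the evidence record is produced by the same mechanism that takes the decision, rather than reconstructed after the fact.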
The event will be hosted on Zoom. You can register by clicking here.