Postponed!

More news will be published on this website

Morning

9.00 a.m. - Welcome  

Prof. dr. Sylvie De Raedt (tax law professor and research manager DigiTax UAntwerp)

9.05 a.m. - A functional taxonomy of AI fiscal governance in the EU 

David Hadwick (PhD candidate in tax law at DigiTax UAntwerp & PhD Fellow at the FWO - Research Foundation for Flanders)
In two decades, the use of AI by tax administrations has grown remarkably: from a handful of Member States in the early 2000s to a majority of EU tax administrations making daily use of the technology. This presentation will outline the current state of AI use by tax administrations in the EU, the Member States that make use of AI, the types of AI models used and the different functions performed.

9.50 a.m. - The legal limits of webscraping 

Prof. dr. Sylvie De Raedt
This part of the programme will discuss the automated collection of data through webscraping and will explore the relevant case law of the European Court of Human Rights, as well as the GDPR requirements, to define the legal limits of webscraping. How does the practice of webscraping relate to the prohibition of fishing expeditions? What are good practices?

10.35 a.m. - Coffee break

10.50 a.m. - Getting ready for data analysis: what about data quality?

Michiel Van Roy (PhD candidate in Applied Economics, Faculty of Business and Economics and DigiTax UAntwerp)
The reliability of predictive machine learning models can be compromised when they are trained on low-quality data. Algorithms that automatically identify low-quality data in datasets are therefore highly desirable. This session will explore one such algorithm, based on the Shapley value, along with its challenges and limitations.
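For readers unfamiliar with Shapley-value-based data valuation, the sketch below illustrates the general idea in Python: each training point is valued by its average marginal contribution to validation performance over random orderings of the training set. The dataset, model and Monte Carlo estimation are assumptions for illustration only, not the specific algorithm presented in this session.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset (hypothetical, for illustration only).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

def utility(idx):
    """Validation accuracy of a model trained on the training subset idx."""
    if len(idx) < 2 or len(set(y_tr[idx])) < 2:
        return 0.5  # no informative model possible: score of random guessing
    model = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    return model.score(X_val, y_val)

rng = np.random.default_rng(0)
n = len(X_tr)
values = np.zeros(n)
n_permutations = 20  # more permutations reduce Monte Carlo noise

for _ in range(n_permutations):
    order = rng.permutation(n)
    prev = utility(order[:0])  # empty subset
    for k in range(1, n + 1):
        curr = utility(order[:k])
        # Marginal contribution of the k-th point in this ordering.
        values[order[k - 1]] += (curr - prev) / n_permutations
        prev = curr

# Points with the lowest estimated value contribute least (or negatively)
# to validation performance: candidates for low-quality data.
print("Most suspect training points:", np.argsort(values)[:10])
```

Even in this toy setting the repeated model fits are costly, which hints at the kind of practical limitations such algorithms face.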

11.30 a.m. - Predictive algorithms for fraud detection

Daphne Lenders (PhD candidate in Computer Science, Adrem Data Lab and DigiTax UAntwerp)
Predictive algorithms, which can automate decision processes normally made by humans, are one of the many application areas of Artificial Intelligence. After a general introduction to such algorithms, we will use a case study to explore their potential for detecting tax fraud from data. What benefits can these algorithms bring? And perhaps more importantly: what challenges and risks do they give rise to?

12.30 p.m. - Networking lunch at the Faculty Club

Afternoon

2.00 p.m. - Algorithmic bias and automation bias: the legal perspective 

Prof. dr. Anne Van de Vijver (tax law professor DigiTax UAntwerp)
Algorithmic bias refers to automated decisions that are systematically unfair to certain groups of people, while automation bias is the propensity of people to prefer suggestions from automated decision-making systems and to ignore contradictory information. This session will explore how the legal system sets limits to discriminatory biases. How do fundamental rights protect taxpayers from biased decision-making?

2.30 p.m. - Methods to measure bias and mitigate unfairness when constructing machine learning models

Prof. Toon Calders (professor in computer science UAntwerp)
Artificial intelligence is increasingly responsible for decisions that have a huge impact on our lives. Yet predictions made with data mining and algorithms can affect population subgroups differently. Academic researchers and journalists have shown that decisions taken by predictive algorithms sometimes lead to biased outcomes, reproducing inequalities already present in society. This presentation gives an overview of recent research on measuring bias in data and on preventing such bias from resulting in unfair models.
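As a concrete illustration of what "measuring bias" can mean in practice, the minimal Python sketch below computes the demographic parity difference of a classifier's decisions, i.e. the gap in positive-decision rates between two groups. The decisions and group labels are made up for the example and are not taken from the presentation.

```python
import numpy as np

# Hypothetical model decisions (1 = flagged for audit) and group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

rate_a = y_pred[group == "A"].mean()
rate_b = y_pred[group == "B"].mean()

print(f"Positive rate, group A: {rate_a:.2f}")
print(f"Positive rate, group B: {rate_b:.2f}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")
# A large difference signals that one group is selected far more often,
# which may reproduce inequalities already present in the training data.
```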

3.20 p.m. - Coffee break

3.30 p.m. - Data analysis and automated decision making: transparency requirements 

Prof. dr. Sylvie De Raedt and David Hadwick
Data collection and automated decision-making systems have become an integral part of our daily lives. This innovation has also brought new risks: risks to fundamental rights, distrust and disruption of institutional processes. In the context of automation, transparency has been hailed as the new keyword. Yet transparency is an elusive concept that spans different areas of the law. This presentation will set out the different transparency requirements in ECtHR jurisprudence, the GDPR and the proposal for the AI Act.

4.20 p.m. - How to explain the black box decision

Dieter Brughmans (PhD candidate in Data Science at DigiTax UAntwerp)
Businesses are increasingly turning to machine learning systems to automate and enhance their operations and decision-making. By making use of complex modeling techniques, they are able to create models with high and sometimes superhuman predictive performance. However, given their complexity, these models are often used as black boxes for which it is unclear how predictions are made. This has led to the development of a new field called eXplainable Artificial Intelligence (XAI), which studies how these algorithms can be made comprehensible to humans again. In this presentation, we will discuss how different XAI algorithms can be used to explain black-box predictive models.
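To make the idea concrete, here is a minimal Python sketch of one well-known model-agnostic XAI technique, permutation feature importance: train a "black-box" model, then measure how much its score drops when each feature is shuffled. The dataset and model are illustrative assumptions, and the presentation may cover different XAI algorithms.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data and an opaque model standing in for a real black box.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and record the drop in test accuracy:
# large drops indicate the features the black box relies on most.
result = permutation_importance(black_box, X_te, y_te, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```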

5.00 p.m. - Closing drink

Programme day 2 – University of Antwerp

Postponed!

More news will be published on this website

Morning

  • 9.00: Welcome – prof. dr. Sylvie De Raedt, tax law professor and research manager DigiTax UAntwerp
  • 9.05: A functional taxonomy of AI fiscal governance in the EU - David Hadwick (PhD candidate in tax law at DigiTax UAntwerp & PhD Fellow at the FWO - Research Foundation for Flanders)

In two decades, the use of AI by tax administrations has grown remarkably: from a handful of Member States in the early 2000s to a majority of EU tax administrations making daily use of the technology. This presentation will outline the current state of AI use by tax administrations in the EU, the Member States that make use of AI, the types of AI models used and the different functions performed.

  • 9.15: The legal limits of webscraping - prof. dr. Sylvie De Raedt (tax law professor DigiTax UAntwerp)

This part of the programme will discuss the automated collection of data through webscraping and will explore the relevant case law of the European Court of Human Rights, as well as the GDPR requirements, to define the legal limits of webscraping. How does the practice of webscraping relate to the prohibition of fishing expeditions? What are good practices?

  • 9.45: Predictive algorithms for fraud detection - Daphne Lenders (PhD candidate in Computer Science, Adrem Data Lab and DigiTax UAntwerp)

Predictive algorithms, which can automate decision processes normally made by humans, are one of the many application areas of Artificial Intelligence. After a general introduction to such algorithms, we will use a case study to explore their potential for detecting tax fraud from data. What benefits can these algorithms bring? And perhaps more importantly: what challenges and risks do they give rise to?

  • 10.30: Coffee break

  • 10.45: Methods to measure bias and mitigate unfairness when constructing machine learning models - prof. Toon Calders (professor in computer science UAntwerp)

Artificial intelligence is increasingly responsible for decisions that have a huge impact on our lives. Yet predictions made with data mining and algorithms can affect population subgroups differently. Academic researchers and journalists have shown that decisions taken by predictive algorithms sometimes lead to biased outcomes, reproducing inequalities already present in society. This presentation gives an overview of recent research on measuring bias in data and on preventing such bias from resulting in unfair models.

  • 11.45: Algorithmic bias and automation bias: the legal perspective - David Hadwick (PhD candidate in tax law at DigiTax UAntwerp & PhD Fellow at the FWO - Research Foundation for Flanders)

Algorithmic bias refers to automated decisions that are systematically unfair to certain groups of people, while automation bias is the propensity of people to prefer suggestions from automated decision-making systems and to ignore contradictory information. This session will explore how the legal system sets limits to discriminatory biases. How do fundamental rights protect taxpayers from biased decision-making?

  • 12.30: Networking lunch at the Faculty Club - UAntwerp

Afternoon

  • 14.15: Data analysis and automated decision making: transparency requirements - dr. Alessia Tomo (Postdoctoral Researcher at DigiTax UAntwerp)

Data collection and automated decision-making systems have become an integral part of our daily lives. This innovation has also brought new risks: risks to fundamental rights, distrust and disruption of institutional processes. In the context of automation, transparency has been hailed as the new keyword. Yet transparency is an elusive concept that spans different areas of the law. This presentation will set out the different transparency requirements in ECtHR jurisprudence, the GDPR and the proposal for the AI Act.

  • 15.00: Getting ready for data analysis: what about the predictive quality of your data and how to explain the black box decision - Michiel Van Roy (PhD Candidate in Applied Economics, Faculty of Business and Economics and DigiTax UAntwerp)

The reliability of predictive machine learning models can be compromised when they are trained on low-quality data. Algorithms that automatically identify low-quality data in datasets are therefore highly desirable. This session will explore one such algorithm, based on the Shapley value, along with its challenges and limitations.

Businesses are increasingly turning to machine learning systems to automate and enhance their operations and decision-making. By making use of complex modeling techniques, they are able to create models with high and sometimes superhuman predictive performance. However, given their complexity, these models are often used as black boxes for which it is unclear how predictions are made. This has led to the development of a new field called eXplainable Artificial Intelligence (XAI), which studies how these algorithms can be made comprehensible to humans again. In this presentation, we will discuss how different XAI algorithms can be used to explain black-box predictive models.

  • 16.00: Panel discussion on ethical AI - Kari Spijker & Devin van den Berg

  • 17.00: Closing drink