Executive summary:

On 5 February 2020, the District Court of The Hague halted the use of Systeem Risico Indicatie (SyRI), a machine-learning risk-scoring algorithm which predicted the risk of fraud and non-compliance by individual welfare recipients in the area of social security and income-dependent schemes, based on risk indicators inferred from historical data. The Court found that the SUWI wet, the legislation which regulated the use of SyRI, did not offer welfare recipients sufficient insight into the risk indicators used in the model or into the functioning of the machine-learning model itself. On that basis, the Court held that the legislation lacked transparency and breached Article 8(2) of the European Convention on Human Rights (ECHR).

Facts of the case:

The case of SyRI is the first litigation in which the use of machine-learning algorithms by the tax administration of an EU Member State was contested and scrutinized before a court. Several civil society interest groups, including the Dutch Section of the International Commission of Jurists (Nederlands Juristen Comité voor de Mensenrechten – NJCM), the National Client Participation Council (Landelijke Cliëntenraad), the Privacy First Foundation (Stichting Privacy First), two private individuals and the Netherlands Trade Union Confederation (Federatie Nederlandse Vakbeweging – FNV), claimed that Articles 64 and 65 of the wet structuur uitvoeringsorganisatie werk en inkomen (SUWI wet), the law implementing the organization of social security and income-dependent schemes, were in breach of a number of provisions of the ECHR, most notably the right to privacy and the right to non-discrimination, of provisions of binding international treaties, as well as of Dutch constitutional law.

Article 64 of the SUWI wet authorized the linkage of data held by Dutch administrative and governmental bodies. In other words, it authorized the Dutch tax administration to process the data of any administrative or governmental body, whether local, regional or national, to prevent and track social security fraud and non-compliance. To process such a voluminous bulk of data, Article 65 of the SUWI wet provided for the integration of SyRI, a machine-learning algorithm that would automatically process the data of the aforementioned governmental and administrative institutions. As described in the Netherlands’ country report, SyRI is an external risk-management algorithm which predicts the risk of social security fraud by scoring welfare recipients on the basis of risk factors inferred from historical data. The algorithm processes previously known cases of compliant welfare recipients and known cases of fraudulent activity to derive risk factors, i.e. factors which could be indicative of fraudulent behaviour, and develops a scoring grid so that in fine the model can automatically analyse the documentation provided by welfare recipients and select some individuals for additional audits by human government officials.
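To make the workflow described above concrete, the following is a minimal, purely hypothetical sketch: derive a weight per risk indicator from labelled historical cases, score new cases, and flag the highest-scoring ones for human audit. The indicator names, the weighting scheme (per-indicator historical fraud rates) and the audit threshold are all invented for illustration, since SyRI's actual model and risk indicators were never disclosed.

```python
# Hypothetical sketch only: indicator names, weights and thresholds are
# invented and do not reflect SyRI's undisclosed model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Case:
    indicators: dict                   # binary risk indicators, e.g. {"late_filing": 1}
    fraudulent: Optional[bool] = None  # known label for historical cases

def learn_weights(history):
    """Weight each indicator by the fraud rate observed among the
    historical cases in which that indicator was present."""
    names = {name for case in history for name in case.indicators}
    weights = {}
    for name in names:
        flagged = [c for c in history if c.indicators.get(name)]
        weights[name] = (
            sum(c.fraudulent for c in flagged) / len(flagged) if flagged else 0.0
        )
    return weights

def risk_score(case, weights):
    """Sum the learned weights of the indicators present in a case."""
    return sum(w for name, w in weights.items() if case.indicators.get(name))

def select_for_audit(cases, weights, threshold):
    """Flag cases whose score exceeds the threshold for human review."""
    return [c for c in cases if risk_score(c, weights) > threshold]
```

In a scheme of this kind, the Court's later transparency concern is easy to locate: unless the indicators, weights and threshold are disclosed in some verifiable form, a flagged recipient cannot check why they were selected for audit.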

Naturally, such methods of data processing are not devoid of risk to citizens’ fundamental rights. In that regard, the work of a plethora of researchers has shown how such predictive models and risk factors, however ‘objective’ they may be, can nonetheless have disproportionately prejudicial effects on minorities and individuals of lower socio-economic status. This was, for example, demonstrated by ProPublica in the case of COMPAS, a similar risk-scoring algorithm used by US penal courts to assess a defendant’s eligibility for bail. With COMPAS, two individuals, virtually identical in all other respects, could be subjected to vastly different treatment in court by virtue of their difference in skin colour. Furthermore, an initial bias of the machine can be exacerbated through learned bias. As more individuals of a minority group become targeted by audits, the machine-learning system will gradually learn that belonging to that minority group is a risk factor, ultimately turning the prediction into a discriminatory self-fulfilling prophecy. Accordingly, the claimants in the SyRI case argued that the use of such a machine-learning model could generate risks of discrimination, stereotyping and stigmatization, which the ECtHR has identified as protected aspects of the right to a private life. These claims were further supported by the UN Special Rapporteur on extreme poverty and human rights.
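The learned-bias feedback loop described above can be shown with a toy, self-contained simulation: every group commits fraud at the same underlying rate, but one group is audited far more often, so detected fraud accumulates in proportion to audit intensity rather than actual risk. All group names, rates and audit counts below are invented for illustration; this is not a model of SyRI's actual behaviour.

```python
# Toy simulation of the 'learned bias' feedback loop. All numbers are
# invented for illustration and do not reflect any real audit programme.

def detected_fraud(audits_per_group, true_fraud_rate, n_rounds):
    """Return fraud cases *detected* per group after repeated audits.

    Every group commits fraud at the same underlying rate; only the
    audit intensity differs between groups.
    """
    detected = {group: 0.0 for group in audits_per_group}
    for _ in range(n_rounds):
        for group, n_audits in audits_per_group.items():
            # Each audit detects fraud at the same true underlying rate,
            # so detections grow with audit intensity, not with risk.
            detected[group] += n_audits * true_fraud_rate
    return detected

# Group B is audited ten times as often as group A.
counts = detected_fraud({"A": 1, "B": 10}, true_fraud_rate=0.05, n_rounds=10)
```

A model retrained on these detection counts would rank group B as roughly ten times ‘riskier’ than group A, despite identical underlying fraud rates, which is precisely the self-fulfilling prophecy the claimants warned of.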

As advanced by the claimants, the second issue with the use of SyRI was that Article 64 gave carte blanche to the Dutch tax administration (Belastingdienst): it authorized the linkage and processing of virtually all data held by any Dutch governmental or administrative body, while offering welfare recipients little to no safeguard against potential abuse.

Ruling:

Based on the aforementioned arguments, the Court ultimately sided with the claimants and found that, in its present form, Article 65 of the SUWI wet did not offer sufficient safeguards to be compliant with the right to a private life enshrined in Article 8 of the ECHR. The Court started by emphasizing that the fight against fraud is a quintessential function of the State, which could therefore warrant the use of a very wide range of data. On that basis, it found no issue with Article 64 in its present form. However, the Court acknowledged that machine-learning algorithms do bear an important risk of discrimination. Consequently, laws which authorize the use of machine-learning algorithms must be equipped with sufficient safeguards and must provide citizens with verifiable insights into the functioning of the algorithms and the risk factors used by the model. The Court found that, in its present form, Article 65 of the SUWI wet contained no such description of the model and offered no insight into its functioning, and thereby violated Article 8(2) of the ECHR.

What are the key takeaways from the SyRI case?

The key takeaway from the SyRI case is the necessity of ‘transparency in the interest of verifiability’ (§6.91) when integrating machine-learning algorithms into the processes of the tax administration. The Court did not admonish the administration’s use of predictive models processing large bulks of data; in fact, one could say it implicitly legitimized it by reminding the claimants of the quintessential mission of the State to prevent fraud and of how ‘undeniably useful’ algorithms may be for that prerogative. However, the Court in fine sided with the claimants by ruling that the use of machine learning, by virtue of its inherent risk of discrimination, should be accompanied by verifiable normative ‘points of reference’ for citizens, so that they can verify whether their data is processed lawfully and ensure that the outputs of these models are free of bias or error. In that way, the Court emphasized that transparency, in the form of these verifiable insights, is a necessary tool to neutralize some of these potential biases and errors and to empower citizens to do so in each individual case. Yet the SyRI case also raises an important question: how much transparency should citizens be afforded regarding algorithms used to combat fraud? Revealing some of the risk indicators is necessary to neutralize the risks of direct or indirect discrimination, but revealing all of them could jeopardize the effectiveness of these algorithms against organized criminal groups, who cause the vast majority of social security fraud in the EU. In conversations with tax officials in other EU Member States, these officials admit that the SyRI case has not fallen on deaf ears. Nonetheless, national legislators and national courts have not yet provided tax administrations with a sufficiently satisfactory answer regarding the degree of transparency that should be afforded to taxpayers when it comes to machine-learning algorithms.

References:

Full English transcript of the SyRI case, available at: https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:1878.

N. Appelman, R. Ó Fathaigh and J. van Hoboken, ‘Social Welfare, Risk Profiling and Fundamental Rights: The Case of SyRI in the Netherlands’ (2021) 12(4) JIPITEC – Introduction by B. Hanuz.

Report of the Special Rapporteur on extreme poverty and human rights, UN Doc A/74/493 (11 October 2019).