Special Edition Blog Series on PhD Abstracts (Part VI)
This post is a continuation of the blog post series on PhD abstracts. You can find the first part of the series here.
Onntje Hinrichs: Data as Regulatory Subject Matter in European Consumer Law
Whereas data traditionally did not belong to the regulatory ambit of consumer law, this has changed gradually over the past decade. Today, the regulation of data is spread over various legal disciplines, with data protection law forming its core. The EU legislator is thus confronted with the challenging task of constructing different dimensions of data law without infringing that core, i.e. of coordinating each dimension with the ‘boundary-setting instrument’ of the GDPR. This thesis analyses one of these new dimensions: consumer law. Consumer law constitutes a particularly interesting field due to its multiple interactions and points of contact with the core of data law. Authors have increasingly identified the potential of consumer law to complement the implementation of the fundamental right to data protection where both converge on a common goal, i.e. the protection of consumer privacy interests. At the same time, however, consumer policy might conflict with, and occasionally even be diametrically opposed to, the fundamental right to data protection when, for instance, consumer law enables data to be commodified as ‘counter-performance’ in consumer contracts, packaging pieces of data into objects of trade. To disentangle this regulatory quagmire, it is necessary to better understand how consumer law participates in and shapes the regulation of data. To date, however, no comprehensive enquiry analyses to what extent data has become regulatory subject matter in European consumer law. This thesis aims to fill that gap. It provides further clarity both on what consumer law actually regulates when it comes to data and on consumer law's often unclear relationship with data protection law. At the same time, the study furthers the general understanding of how data is perceived and shaped as regulatory subject matter in EU law.
Robert Poe: Distributive Decision Theory and Algorithmic Discrimination
In the European Union and the United States, principles of normative decision theory, like the precautionary principle, are inherently linked to the practices of risk and impact assessments, particularly within regulatory and policy-making frameworks. The descriptive decision theory approach has been applied in legal research as well, where user-centric legal design moves beyond plain-language interpretation to consider how users process information. The EU Digital Strategy employs elements of both normative and descriptive decision theories, integrating these methodologies into an encompassing strategy that forecasts technological risks while also engaging stakeholders in constructing a digital future consistent with European fundamental rights. Working under the premise that “code is law,” a variety of tools have been developed to prescribe normative constraints on automated decision-making systems, such as: privacy-preserving technologies (PETs), explainable artificial intelligence techniques (XAI), fair machine learning (FML), and hate speech and disinformation detection systems (OHS). The AI Act relies on such prescriptive technologies to perform “value-alignment” between automated decision-making systems and European fundamental rights (which is obviously of the utmost importance). It is in this way that technologists, whether scientists or humanists or both, are becoming the watchmen of European fundamental rights. However, these are highly specialized fields that take focused study to understand even a portion of what is being recommended as ensuring fundamental rights. The information asymmetry between experts in the field and those traditionally engaged in legal interpretation (and let us not forget voters) raises the age-old question: who is watching the watchmen themselves? While some critical analysis of these technologies has been conducted, much remains unexplored. Questions like these about digital constitutionalism and the EU Digital Strategy will be considered throughout the manuscript. The main theme, however, will be to develop a set of “rules for the rules” for the “code as law” tradition, focusing specifically on the debiasing tools of algorithmic discrimination and fairness as a case study. Such rules for the rules are especially important given the threat of an algorithmic Leviathan.
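To give non-specialist readers a concrete sense of the debiasing tools the case study examines, here is a minimal sketch (our illustration, not code from the thesis) of one criterion the fair machine learning literature encodes: the demographic parity difference, i.e. the gap in positive-prediction rates between two groups.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    A common FML fairness criterion: a value near 0 means the classifier
    grants positive outcomes at similar rates regardless of group membership.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Toy example: predictions for 8 individuals split across two groups.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5 -> large disparity
```

Real FML toolkits such as fairlearn implement this and many other criteria, along with mitigation algorithms that constrain training itself; the point of the manuscript's "rules for the rules" is precisely to ask what authority such metrics and constraints should carry.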
Soumia El Mestari: Threats to Data Privacy in Machine Learning: Legal and Technical Research Focused on Membership Inference Attacks
This work systematically discusses the risks to data protection in modern machine learning systems from the original perspective of data owners, i.e. those who hold the data sets, the models, or both, throughout the machine learning life cycle, and it considers the different architectures in which machine learning systems are deployed. It argues that the origin of the threats, the risks to the data, and the level of protection offered by PETs depend on the data processing phase, the role of the parties involved, and the architecture in which the machine learning system is deployed. By offering a framework in which to discuss privacy and confidentiality risks for data owners, and by identifying and assessing privacy-preserving countermeasures for machine learning, this work can facilitate the discussion of compliance with EU regulations and directives. We discuss current challenges and research questions that remain unsolved in the field. In this respect, the work provides researchers and developers working on machine learning with a comprehensive body of knowledge to advance the science of data protection in machine learning, as well as in closely related fields such as Artificial Intelligence.
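For readers unfamiliar with the attack named in the title, the sketch below (an illustration under simplifying assumptions, not the thesis's own method) shows membership inference in its simplest form: a confidence-threshold attack that flags a record as a training-set member whenever the model is unusually confident about its true label. Overfitting is what makes the attack work.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a deliberately overfit model: overfitting is what leaks membership.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def confidence_on_true_label(model, X, y):
    """Model's predicted probability for each record's true class."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Attack: guess "member" when the model's confidence exceeds a threshold.
threshold = 0.9
member_conf = confidence_on_true_label(model, X_train, y_train)
nonmember_conf = confidence_on_true_label(model, X_out, y_out)

tpr = (member_conf > threshold).mean()     # members correctly flagged
fpr = (nonmember_conf > threshold).mean()  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")  # a gap means leakage
```

More capable variants train "shadow models" to calibrate the decision rule, but even this toy version shows how the gap between the two rates lets an adversary infer whether a specific record was used for training, which is exactly the kind of data-owner risk the thesis frames in legal and technical terms.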