Final LeADS Event at Scuola Superiore Sant’Anna

 The final event of the LeADS project took place at the Scuola Superiore Sant’Anna in Pisa, bringing together the 15 Early Stage Researchers (ESRs) for a memorable conclusion. Over three days of dynamic activities and intellectual exchange, the event featured an Innovation Challenge, a thought-provoking conference titled “Legally Compliant Data-Driven Society,” and a Poster Walk showcasing research across the four Crossroads of the LeADS project.

 

The Innovation Challenge: AI Act Compass: Navigating Requirements for High-Risk AI Systems

As part of the LeADS project’s final event, an Innovation Challenge was held in collaboration with the Pisa Internet Festival to address the complexities of the AI Act. The challenge aimed to inspire participants to develop practical solutions to help AI developers and deployers navigate the AI Act’s risk classification system and understand the specific requirements applicable to their AI systems.

The competition was conducted in two phases. In the first phase, held remotely, teams submitted mock-ups demonstrating how their solutions could simplify compliance with the AI Act. The second phase took place in person in Pisa, where teams refined their solutions to address a real-world scenario and presented their proposals to a jury.

The scenario centered around SmartBytes, a startup developing an AI-powered algorithm called CyrcAIdian to monitor sleep patterns, which faced critical compliance challenges under the new AI Act. Participants were tasked with determining how CyrcAIdian’s classification—either as a fitness tracker or a medical device—would influence its regulatory obligations and commercialization strategy.

The challenge fostered innovative thinking and awarded cash prizes for solutions that were not only practical and user-friendly but also legally robust, with a focus on helping businesses navigate the complexities of AI regulations.

The Innovation Challenge culminated in an exciting afternoon of presentations, where participating teams showcased their creative approaches to tackling the AI compliance scenario. It was a day marked by energy, collaboration, and healthy competition. Each team brought forward unique and innovative solutions, making the jury’s decision exceptionally difficult.

Ultimately, The Data Jurists claimed first prize and also received the special award for the Most Innovative Solution. The AI-Act Navigators secured second place, while The AI-WARE team came in third. The award for Best Presentation went to AI-Renella.

Congratulations to all the participants for their outstanding efforts!

Special Edition Blog Series on PhD Abstracts (Part VI)

This post is a continuation of the blog post series on PhD abstracts. You can find the first part of the series here.

Onntje Hinrichs: Data as Regulatory Subject Matter in European Consumer Law

Whereas data has traditionally not been subject matter belonging to the regulatory ambit of consumer law, this has changed gradually over the past decade. Today, the regulation of data is spread over various legal disciplines, with data protection law forming its core. The EU legislator is thus confronted with the challenging task of constructing different dimensions of data law without infringing that core, i.e. of coordinating each dimension with the ‘boundary-setting instrument’ of the GDPR. This thesis analyses one of these new dimensions: consumer law. Consumer law constitutes a particularly interesting field due to its multiple interactions and points of contact with the core of data law. Authors have increasingly identified the potential of consumer law to complement the implementation of the fundamental right to data protection when the two converge on a common goal, i.e. the protection of consumer privacy interests. At the same time, however, consumer policy might conflict with, and occasionally even be diametrically opposed to, the fundamental right to data protection when, for instance, consumer law enables data (as ‘counter-performance’) to be commodified in consumer contracts that package pieces of data into objects of trade. To disentangle this regulatory quagmire, it is necessary to better understand how consumer law participates in and shapes the regulation of data. To date, however, no comprehensive enquiry exists that analyses to what extent data has become regulatory subject matter in European consumer law. This thesis aims to fill that gap. The study will provide further clarity both on what consumer law actually regulates when it comes to the subject matter of data and on its often unclear relationship with data protection law. At the same time, it contributes to a more general understanding of how data is perceived and shaped as regulatory subject matter in EU law.

Robert Poe: Distributive Decision Theory and Algorithmic Discrimination

In the European Union and the United States, principles of normative decision theory, like the precautionary principle, are inherently linked to the practices of risk and impact assessments, particularly within regulatory and policy-making frameworks. The descriptive decision theory approach has been applied in legal research as well, where user-centric legal design moves beyond plain-language interpretation to consider how users process information. The EU Digital Strategy employs elements of both normative and descriptive decision theories, integrating these methodologies into an encompassing strategy that forecasts technological risks while also engaging stakeholders in constructing a digital future consistent with European fundamental rights. Working under the premise that “code is law,” a variety of tools have been developed to prescribe normative constraints on automated decision-making systems, such as privacy-preserving technologies (PETs), explainable artificial intelligence techniques (XAI), fair machine learning (FML), and hate speech and disinformation detection systems (OHS). The AI Act relies on such prescriptive technologies to perform “value-alignment” between automated decision-making systems and European fundamental rights (which is obviously of the utmost importance). It is in this way that technologists, whether scientists or humanists or both, are becoming the watchmen of European fundamental rights. However, these are highly specialized fields that take focused study to understand even a portion of what is being recommended as ensuring fundamental rights. The information asymmetry between experts in the field and those traditionally engaged in legal interpretation (and let us not forget voters) raises the age-old question of who is watching the watchmen themselves. While some critical analysis of these technologies has been conducted, much remains unexplored. Questions like these about digital constitutionalism and the EU Digital Strategy will be considered throughout the manuscript. The main theme, however, will be to develop a set of “rules for the rules” applied to the “code as law” tradition, focusing specifically on the debiasing tools of algorithmic discrimination and fairness as a case study. Such rules for the rules are especially important given the threat of an algorithmic Leviathan.
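To make the case study more concrete, the sketch below illustrates one of the simplest checks that fair machine learning tooling applies to an automated decision: the demographic-parity gap between two groups. It is an illustrative toy example, not a method from the manuscript, and the data, threshold and variable names are invented for the illustration.

```python
# Illustrative sketch (not the manuscript's method): the demographic-parity gap,
# one of the simplest fairness measures used by FML debiasing tools.
import numpy as np

rng = np.random.default_rng(42)
group = rng.integers(0, 2, 1000)            # hypothetical protected attribute (0 or 1)
score = rng.random(1000) + 0.1 * group      # a model's decision scores, slightly skewed by group
decision = score > 0.6                      # automated accept/reject decision

def demographic_parity_gap(decision, group):
    """Absolute difference in positive-decision rates between the two groups."""
    return abs(decision[group == 0].mean() - decision[group == 1].mean())

print("demographic parity gap:", demographic_parity_gap(decision, group))
```

A gap close to zero means both groups receive positive decisions at similar rates; debiasing tools typically try to shrink such gaps, and the “rules for the rules” question is about who verifies that these tools actually do what they claim.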

Soumia El Mestari: Threats to Data Privacy in Machine Learning: Legal and Technical Research Focused on Membership Inference Attacks

This work systematically discusses the risks to data protection in modern Machine Learning systems, taking the original perspective of the data owners, i.e. those who hold the various data sets, data models, or both, throughout the machine learning life cycle and across different Machine Learning architectures. It argues that the origin of the threats, the risks to the data, and the level of protection offered by PETs depend on the data processing phase, the role of the parties involved, and the architecture in which the machine learning systems are deployed. By offering a framework in which to discuss privacy and confidentiality risks for data owners, and by identifying and assessing privacy-preserving countermeasures for machine learning, this work can facilitate the discussion of compliance with EU regulations and directives. We discuss current challenges and research questions that are still unsolved in the field. In this respect, this work provides researchers and developers working on machine learning with a comprehensive body of knowledge to let them advance the science of data protection in machine learning, as well as in closely related fields such as Artificial Intelligence.
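For readers unfamiliar with the attack named in the title, the sketch below shows a minimal confidence-thresholding membership inference attack: because models tend to be more confident on examples they were trained on, an attacker can guess whether a record was in the training set from the model’s output alone. This is an illustrative toy setup (synthetic data, arbitrary threshold), not code from the thesis.

```python
# Minimal membership inference sketch (illustrative only, assumed setup):
# the attacker guesses "member" when the victim model's top-class confidence is high.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# The "victim" model is trained only on the member split (and overfits it somewhat).
victim = RandomForestClassifier(n_estimators=100, random_state=0)
victim.fit(X_member, y_member)

def attack(model, X_query, threshold=0.9):
    """Guess 'member' when the model's top-class confidence exceeds the threshold."""
    confidence = model.predict_proba(X_query).max(axis=1)
    return confidence > threshold

# Attack accuracy noticeably above 50% indicates leakage of membership information.
guesses = np.concatenate([attack(victim, X_member), attack(victim, X_nonmember)])
truth = np.concatenate([np.ones(len(X_member)), np.zeros(len(X_nonmember))])
print("attack accuracy:", (guesses == truth).mean())
```

Countermeasures discussed in this line of research, such as differential privacy or regularisation, aim to push that attack accuracy back towards chance level.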

Special Edition Blog Series on PhD Abstracts (Part V)

This post is a continuation of the blog post series on PhD abstracts. You can find the first part of the series here.

Armend Duzha: Data Management and Analytics on Edge Computing and Serverless Offerings

This research proposes a new approach to protecting against risks related to the exploitation of personal data, outlining a methodology for implementing data management and analytics in edge computing and serverless offerings that takes privacy properties into account in order to balance the prevention of risks with the promotion of innovation. In addition, it will establish AI-driven processes to increase users’ ability to define more accurately both their offerings in edge computing environments and the data management and analytics applied to them with regard to the protection of their privacy, and it will outline an architecture for data governance and analytics linked to resource management in such dynamic environments.

 

Christos Magkos: Personal Health Information Management Systems for User Empowerment

In the era of immense data accumulation in the healthcare sector, effective data management is becoming increasingly relevant in two domains: data empowerment and personalisation. As healthcare shifts towards personalized and precision medicine, prognostic tools that stem from robust modeling of healthcare data while remaining compliant with privacy regulations and the four pillars of medical ethics (Autonomy, Beneficence, Non-maleficence and Justice) are lacking. This thesis assesses the principles that the design of health data storage and processing should adhere to through the prism of personal information management systems (PIMS). PIMS enable decentralized data processing while adhering to data minimization and allowing for control of data exposure to third parties, hence enhancing privacy and patient autonomy. We propose a system in which data is processed in a decentralized fashion, providing actionable recommendations to the user through risk stratification and causal inference modeling of health data sourced from electronic health records and IoT devices. Through an interoperable personal information management system, previously fragmented data, which can be variably sourced and present inconsistencies, can be integrated into one system consistent with the EHDS, so that data processing can proceed more accurately. When attempting to design clinically actionable healthcare analytics and prognostic tools, one of the main issues arising from current risk stratification models is the lack of actionable recommendations that are deeply rooted in the pathologies analyzed. We therefore assess whether causal inference models derived from existing literature and known causal pathways can provide predictions as accurate as those of risk stratification models when medical outcomes are known. This would allow for explainable and actionable outcomes, as physicians are reluctant to act upon “black box” recommendations due to medical liability and patients are less likely to comply with unexplained recommendations, rendering them less effective when translated to the clinic. Simulated datasets based on the different types of data collected are analyzed with risk stratification and causal inference models in order to infer potential recommendations. Different methodologies of risk stratification and causal inference are assessed and compared in order to find the optimal model to serve as a source of recommendations. Finally, we propose a holistic model under which the user is fully empowered to share the data, analytics and metadata derived from this data management system with doctors, hospitals and researchers respectively, with recommendations designed to be explainable and actionable.
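The sketch below gives a toy flavour of the comparison described above between a purely predictive risk score and a causally adjusted estimate. It uses simulated data with an invented confounder (age) and exposure (exercise); the variable names, effect sizes and the simple standardisation step are assumptions for illustration and are not the thesis’s actual models.

```python
# Toy illustration (assumed setup, not the thesis's models): a predictive risk model
# vs. a simple age-adjusted effect estimate on simulated health data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
age = rng.normal(60, 10, n)                                                    # confounder
exercise = (rng.random(n) < 1 / (1 + np.exp((age - 60) / 10))).astype(float)  # older people exercise less
risk = 1 / (1 + np.exp(-(0.05 * (age - 60) - 1.0 * exercise)))
outcome = (rng.random(n) < risk).astype(int)                                   # e.g. a cardiovascular event

# Risk stratification: predict the outcome from all available features.
risk_model = LogisticRegression().fit(np.c_[age, exercise], outcome)

# Naive "effect" of exercise, ignoring age (a confounded association).
naive = outcome[exercise == 1].mean() - outcome[exercise == 0].mean()

# Adjusted estimate: average the model's predicted risk difference over the whole
# population (a simple standardisation / g-formula step).
with_ex = risk_model.predict_proba(np.c_[age, np.ones(n)])[:, 1]
without_ex = risk_model.predict_proba(np.c_[age, np.zeros(n)])[:, 1]
adjusted = (with_ex - without_ex).mean()

print(f"naive association: {naive:+.3f}, age-adjusted effect: {adjusted:+.3f}")
```

The gap between the naive and the adjusted numbers is exactly why a risk score alone does not yield an actionable recommendation: only the adjusted, causally motivated quantity tells the clinician what changing the behaviour would be expected to do.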

Aizhan Abdrassulova: Boundaries of Data Ownership: Empowering Data Subjects in the EU

The search for the most effective data governance system in the European Union has not lost its relevance over time; on the contrary, it is gaining momentum. One of the frequently proposed models is the concept of data ownership, which, after being abandoned, seemed scientifically unattractive for a while but continues to be discussed among legal scholars and policymakers. Today, a fresh perspective on data ownership is essential, one that places the greatest emphasis on personal data ownership in order to empower data subjects and expand their capabilities and control. The practical side is significant here: the improvements that individuals and companies can obtain from an awareness of data ownership. When it comes to the boundaries of data ownership, it is first of all necessary to look at the existing gaps and problems “from the inside”: what do data subjects themselves consider problematic? What are their expectations? What level of control over their data do they consider acceptable and sufficient? Along with efforts to answer these pressing questions, there is an obvious need to provide suggestions for improving the level and quality of personal data management in a way that is satisfactory for data subjects. The issues of privacy, access to data, and the ability of individuals to use and benefit from their data cannot be overlooked. In this regard, the provisions of the Data Act Proposal are analysed, and the data ownership approach is considered both as artifact and as exchange. Scientific research on individuals’ perception of the value of their own data remains relatively underdeveloped, even though it provides new opportunities for understanding the views and needs of data subjects.

WINNERS of the Innovation Challenge “AI Act Compass: Navigating Requirements for High-Risk AI Systems”

It was a long day,

It was a restless but fair competition,

a lot of energy was spent,

innovative solutions were developed,

it was challenging,

it was great.

All the teams invested their best efforts in both the off-line and in-person phases of the challenge. All the solutions were excellent, but… some more than others. The decision was hard, but this is the Jury’s verdict (click on the team names to see the winners’ solutions and bios):

1st prize: The Data Jurists 

2nd prize: The AI-Act Navigators 

3rd prize: The AI-WARE

Special prizes: 

Most Innovative Solution: The Data Jurists 

Best Presentation: AI-Renella