Special Edition on Dissemination Pieces: ESRs Insights on Law and Technology (Part III)

This post is a continuation of the blog post series on dissemination pieces. You can find the first part of the series here.

Barbara Lazarotto – ESR 7: Can Business-To-Government Data Sharing Serve The Public Good?

Data is considered the world’s biggest business, leading some to affirm that it can be treated as a commodity. Access to data has become essential for promoting competition and innovation among different stakeholders, including the public sector. The European Union has enacted a series of overlapping and interconnected regulations with the main objective of enhancing data sharing by all parties. In this context, this research explores these regulations and analyses whether they indeed assist business-to-government data sharing.

Fatma Dogan – ESR 8: To Use or Not to Use? Re-using Health Data in AI Development

This study examines the re-use of health data in the context of AI development, focusing on regulatory frameworks governing this practice under the European Health Data Space. It explores how transparency and the protection of personal data are balanced with the need for innovation in healthcare. By analysing real-world examples and the application of General Data Protection Regulation principles, particularly transparency, this study assesses whether health data can be re-used for AI-driven healthcare advancements without undermining individuals’ data protection rights.

Xengie Doan – ESR 9: Collective Consent, Risks and Benefits of DNA Data Sharing

Health data is sensitive, and sharing it carries many risks, particularly for personal or shared genetic data. So how can the individuals affected consent together? Collective consent has been used in person, but no digital collective consent exists yet. The challenges span legal-ethical issues and technical properties such as transparency and usability. To address these challenges, this work uses genetic data sharing as a use case to better understand what tools and methods can enhance a user-friendly, transparent, and legal-ethically aware collective consent.

Armend Duzha – ESR 10: Extracting Data Value through Data Governance

Harvesting value from data requires an organization-wide approach. Data governance plays an essential role in a heterogeneous environment with multiple entities and complex digital infrastructures, enabling organisations to gain a competitive advantage. This research examines a new approach to data governance developed to extract data value while respecting the delicate balance between transparency and privacy. In addition, it provides an overview of the key innovations brought by novel technologies such as Artificial Intelligence, Federated Learning, and Blockchain, and how these can be integrated into a data governance program.

Christos Magkos – ESR 11: Personal Health Information Management Systems (PHIMS) For User Empowerment: A Comprehensive Overview

The management of continuously increasing personal health data in the digital information era is becoming more and more relevant to modern healthcare. By integrating raw data into digital platforms, personal health information management systems (PHIMS) could provide a method for storing, managing, and regulating access to personal health data. We examine how PHIMS can empower users to take control of their own healthcare by combining diverse sources of health information, such as health monitoring devices and electronic health records, into a single, easily accessible system.

Aizhan Abdrassulova – ESR 12: Personal Data Ownership: Individuals’ Perspective in the EU

When speaking about the concept of data ownership, it is first necessary to look at the existing gaps and problems “from the inside” and ask: what do data subjects themselves generally consider problematic? What are their expectations? What level of control over their data do they consider acceptable and sufficient? Along with efforts to answer these pressing questions, there is an obvious need to provide suggestions for improving the level and quality of personal data management in a way that would be satisfactory to data subjects.

Onntje Hinrichs – ESR 13: Why Your Data is not Your Property (and Why You Still End Up Paying With It)

This essay explores three interrelated topics that reveal tensions in the European approach to the regulation of the data economy: (i) data as property, (ii) data and fundamental rights, and (iii) data as payment. By retracing how scholars and policymakers have attempted to find an appropriate regulatory framework for the data economy, this essay shows that contradictions in the EU’s approach persist to this day and become evident in our everyday lives online. Despite not owning our data, we end up paying for digital content and services with it. This essay explains this paradox and its role in ongoing legal battles between large corporations, civil society, and the EU.

Robert Poe – ESR 14: The Perils of Value Alignment

This essay argues that global AI governance risks institutionalizing violations of fundamental rights. It critiques the ethical foundation of AI governance, observing that moral objectives are being prioritized over legal obligations, leading to conflicts with the rule of law. The essay calls for a re-evaluation of AI governance strategies, urging a realistic approach that respects citizens, legal precedent, and the nuanced realities of social engineering, and aiming to provide an account of some of the dangers in governing artificial intelligence, with an emphasis on Justice.

Soumia El Mestari – ESR 15: What AI is stealing! Data privacy risks in AI

Even if we may not realize it, AI’s presence in our lives is increasing at a great pace. Most technological services we use nowadays are driven by AI, and that could be good news, since AI aims to improve the quality of those services. Unfortunately, to work well, AI greedily feeds on user data: AI models collect, process, and store a great deal about us, which becomes a problem if such sensitive information is leaked. This chapter argues that the risk of AI leaking personal data is not merely hypothetical and suggests how to mitigate it.

Special Edition on Dissemination Pieces: ESRs Insights on Law and Technology (Part II)

This post is a continuation of the blog post series on dissemination pieces. You can find the first part of the series here.

Cristian Lepore – ESR 4: Self-Sovereign Identity: The Revolution in Digital Identity

Digital identity is important for businesses and governments to grow. When apps or websites ask us to create a new digital identity or to log in via a big platform, we do not know what happens to our data. That is why experts and governments are working on creating a safe and trustworthy digital identity, one that would let anyone file taxes, rent a car, or prove their income easily and privately. This new digital identity is called Self-Sovereign Identity (SSI). In our work, we propose an SSI-based model to evaluate different identity options, and we then demonstrate our model’s value on the European identity framework.

Mitisha Gaur – ESR 5: Policing the AI Judge: A Balancing Act

AI is ubiquitous in the public and private sectors, where it is used to optimize tasks through complex data analysis. While the technology is promising, its use in high-risk domains raises concerns about trust, fairness, and accountability. This chapter analyzes AI-backed automated decision-making systems used by public authorities and advocates for a strict governance framework based on risk management and algorithmic accountability practices, focused on safeguarding fundamental rights and upholding the rule of law by adhering to the principles of natural justice.

Maciej Zuziak – ESR 6: How to Collaboratively Use Statistical Models in a Secure Way

The following article compiles research at the crossroads of privacy, federated learning, and data governance to provide the reader with a basic understanding of the nuanced world of decentralised learning systems. It starts from simple notions of personal data and their connection to artificial intelligence. Afterwards, it moves into the realm of statistical learning to explain the basic technocratic lingo in a (hopefully) engaging way. With those topics covered, it proceeds to the basic notions of Data Collaboratives and Decentralised Data Governance, arcane terms that the reader will be familiar with by the end of the piece. Finally, it poses some open-ended remarks on whether centralised infrastructure is really beneficial to our safety. While the delivery of the article is rather simple and straightforward, it also serves the curious reader with a set of links and pointers for going deeper into the well of data governance and large AI infrastructure.
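
To make the idea of decentralised learning a bit more concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical scheme in which clients train locally and only share model updates, never raw data. The model, client data, and parameters below are illustrative assumptions made for this post, not the author’s implementation.

```python
# Minimal sketch of federated averaging (FedAvg): each client fits a model on its
# own data silo and only the resulting weights are shared and averaged.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client trains a linear regression locally with gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(clients, rounds=20, dim=3):
    """Server repeatedly averages client updates; raw data never leaves a client."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients])
        # Weighted average by local dataset size.
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    true_w = np.array([1.0, -2.0, 0.5])
    # Three clients, each holding its own local data silo.
    clients = []
    for n in (100, 200, 50):
        X = rng.normal(size=(n, 3))
        y = X @ true_w + rng.normal(scale=0.1, size=n)
        clients.append((X, y))
    print("recovered weights:", np.round(fed_avg(clients), 2))
```

The interesting governance question, which the article takes up, is what such an architecture does and does not protect: the data stays put, but the shared updates still carry information about it.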

Special Edition on Dissemination Pieces: ESRs Insights on Law and Technology (Part I)

Welcome to our special edition blog post highlighting the exciting research of our 15 Early-Stage Researchers (ESRs). These talented individuals have been tackling some of the most pressing challenges at the intersection of law and technology. In this edition, we’ve compiled their abstracts, each designed for a general, non-specialist audience. Each ESR has worked to break down complex interdisciplinary research into accessible narratives that explore critical issues like privacy, data sharing, AI, and the broader regulatory landscape of the data economy. Their findings offer innovative solutions, reveal key limitations, and propose paths for future exploration.

These research pieces are written in a way that makes them valuable to a wide range of readers—from policymakers and lawyers to computer scientists and engaged citizens. They illustrate the power of interdisciplinary collaboration in driving progress and finding solutions to today’s most urgent legal-tech challenges. The complete collection of these insightful abstracts will be published in a special issue of the open-access journal Opinio Juris in Comparatione. Stay tuned for more updates and discover how this work shapes the future of law and technology.

Qifan Yang – ESR 1: Your Data Rights: How does the GDPR affect the Social Media Market?

With the development of digitalization, personal data has gradually become a valuable resource from which social media companies extract value and obtain market dominance. The processing of personal data can raise serious concerns about privacy leaks and misuse. In response, the adoption of the General Data Protection Regulation (GDPR) enhances personal data protection and market competition, but it also potentially influences economic interests, the rights of data subjects, and market dynamics. The chapter uses the social media market to understand the complex relationship between the GDPR and market competition.

Louis Sahi – ESR 2: Evaluation and Harmonization of Data Quality Criteria: Insights from Expert Interviews for Legal Application

This article focuses on the development of a framework for assessing data quality, with the goal of enabling automated evaluation. It highlights the increasing importance of data quality in modern, data-driven organizations, especially in light of evolving regulatory frameworks such as the GDPR and EU open data regulation. The paper begins by addressing inconsistencies in current data quality criteria (DQCs) and proposes a unified list based on a comprehensive literature review. The research seeks to align data quality standards with the broader context of data processing, including governance and lifecycle management. Through expert interviews with professionals in data management and legal fields, the study aims to consolidate the DQCs while ensuring compliance with EU regulations. The article emphasizes the need for collaborative data processing in decentralized environments, such as the European Common Data Spaces, and the importance of ensuring trust, legal compliance, and reliability in shared data. The research contributes towards bridging the gap between academic methodologies and real-world industrial applications of data quality assessment.
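
As a rough illustration of what such an automated evaluation could look like, here is a small sketch that scores a dataset against a few commonly cited data quality criteria. The criteria, column names, and rules are assumptions made for this example, not the unified DQC list the study develops.

```python
# A minimal sketch of automated data-quality scoring against a few common criteria.
import pandas as pd

def assess_quality(df: pd.DataFrame) -> dict:
    """Score a dataset against a few illustrative data-quality criteria (0.0-1.0)."""
    report = {
        # Completeness: share of cells that are not missing.
        "completeness": 1.0 - df.isna().to_numpy().mean(),
        # Uniqueness: share of rows that are not duplicates.
        "uniqueness": 1.0 - df.duplicated().mean(),
    }
    # Validity: share of values satisfying a per-column rule (here: plausible ages).
    if "age" in df.columns:
        report["validity_age"] = df["age"].between(0, 120).mean()
    return report

if __name__ == "__main__":
    sample = pd.DataFrame({
        "patient_id": [1, 2, 2, 4],
        "age": [34, None, 210, 55],
    })
    print(assess_quality(sample))
```

A real framework would of course tie each criterion to the governance and lifecycle context the article describes, rather than to fixed thresholds.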

Tommaso Crepax – ESR 3: BYOD – Bring Your Own Data. The struggle of re-using data in a world of heterogeneous systems

Data portability, often seen as straightforward, is a complex issue in the digital era, intersecting with law, technology, and economics. This contribution uses a BBQ analogy to illustrate the challenges of ensuring data remains functional and meaningful across systems. An examination of regulations like the GDPR, the Digital Markets Act, and the Data Act highlights gaps in addressing data semantics and content completeness. The piece advocates a holistic, integrated approach to enhance data portability, emphasizing the need for a Legality Attentive Data Scientist (LeADS) approach to drive innovation and user empowerment in the digital marketplace.

LeADS Poster Presentation

The final event of the LeADS project took place at Scuola Superiore Sant’Anna in Pisa, bringing together the 15 Early-Stage Researchers (ESRs) for a final gathering. During three days of dynamic activities and intellectual exchange, this culminating event featured an Innovation Challenge, a thought-provoking conference titled “Legally Compliant Data-Driven Society,” and a Poster Walk showcasing the research in the four LeADS project crossroads.

After the conference on ‘Legally Compliant Data-Driven Society,’ participants had the opportunity to explore and discuss the posters presented by ESRs, which showcased the final results of their work conducted across each Crossroad. We have included the posters in this newsletter so that readers can explore the research findings and insights.

Below, you can find the posters.

Final LeADS Event at Scuola Superiore Sant’Anna

The final event of the LeADS project took place at the Scuola Superiore Sant’Anna in Pisa, bringing together the 15 Early-Stage Researchers (ESRs) for a memorable conclusion. Over three days of dynamic activities and intellectual exchange, the event featured an Innovation Challenge, a thought-provoking conference titled “Legally Compliant Data-Driven Society,” and a Poster Walk showcasing research across the four Crossroads of the LeADS project.

The Innovation Challenge: AI Act Compass: Navigating Requirements for High-Risk AI Systems

As part of the LeADS project’s final event, an Innovation Challenge was held in collaboration with the Pisa Internet Festival to address the complexities of the AI Act. The challenge aimed to inspire participants to develop practical solutions to help AI developers and deployers navigate the AI Act’s risk classification system and understand the specific requirements applicable to their AI systems.

The competition was conducted in two phases. In the first phase, held remotely, teams submitted mock-ups demonstrating how their solutions could simplify compliance with the AI Act. The second phase took place in person in Pisa, where teams refined their solutions to address a real-world scenario and presented their proposals to a jury.

The scenario centered around SmartBytes, a startup developing an AI-powered algorithm called CyrcAIdian to monitor sleep patterns, which faced critical compliance challenges under the new AI Act. Participants were tasked with determining how CyrcAIdian’s classification—either as a fitness tracker or a medical device—would influence its regulatory obligations and commercialization strategy.

The challenge fostered innovative thinking and awarded cash prizes for solutions that were not only practical and user-friendly but also legally robust, with a focus on helping businesses navigate the complexities of AI regulations.

The Innovation Challenge culminated in an exciting afternoon of presentations, where participating teams showcased their creative approaches to tackling the AI compliance scenario. It was a day marked by energy, collaboration, and healthy competition. Each team brought forward unique and innovative solutions, making the jury’s decision exceptionally difficult.

Ultimately, The Data Jurists claimed first prize and also received the special award for the Most Innovative Solution. The AI-Act Navigators secured second place, while The AI-WARE team came in third. The award for Best Presentation went to AI-Renella.

Congratulations to all the participants for their outstanding efforts!

Special Edition Blog Series on PhD Abstracts (Part VI)

This post is a continuation of the blog post series on PhD abstracts. You can find the first part of the series here.

Onntje Hinrichs: Data as Regulatory Subject Matter in European Consumer Law

Whereas data traditionally did not belong to the regulatory ambit of consumer law, this has changed gradually over the past decade. Today, the regulation of data is spread over various legal disciplines, with data protection law forming its core. The EU legislator is thus confronted with the challenging task of constructing different dimensions of data law without infringing that core, i.e. of coordinating each dimension with the ‘boundary-setting instrument’ of the GDPR. This thesis analyses one of these new dimensions: consumer law. Consumer law constitutes a particularly interesting field due to its multiple interactions and points of contact with the core of data law. Authors have increasingly identified the potential of consumer law to complement the implementation of the fundamental right to data protection when the two converge on a common goal, i.e. protecting data protection and consumer privacy interests. At the same time, however, consumer policy might conflict with, and occasionally even be diametrically opposed to, the fundamental right to data protection when, for instance, consumer law enables data (‘counter-performance’) to be commodified in consumer contracts that package pieces of data into pieces of trade. To disentangle this regulatory quagmire, it is necessary to better understand how consumer law participates in and shapes the regulation of data. However, to date no comprehensive enquiry exists that analyses to what extent data has become regulatory subject matter in European consumer law. This thesis aims to fill that gap. The study will provide further clarity both on what consumer law actually regulates when it comes to the subject matter of data and on its often-unclear relationship with data protection law. At the same time, it contributes to the general understanding of how data is perceived and shaped as regulatory subject matter in EU law.

Robert Poe: Distributive Decision Theory and Algorithmic Discrimination

In the European Union and the United States, principles of normative decision theory, like the precautionary principle, are inherently linked to the practices of risk and impact assessment, particularly within regulatory and policy-making frameworks. The descriptive decision theory approach has been applied in legal research as well, where user-centric legal design moves beyond plain-language interpretation to consider how users process information. The EU Digital Strategy employs elements of both normative and descriptive decision theories, integrating these methodologies to develop an encompassing strategy that forecasts technological risks but also engages stakeholders in constructing a digital future consistent with European fundamental rights. Working under the premise that “code is law,” a variety of tools have been developed to prescript normative constraints on automated decision-making systems, such as privacy-preserving technologies (PETs), explainable artificial intelligence techniques (XAI), fair machine learning (FML), and hate speech and disinformation detection systems (OHS). The AI Act relies on such prescriptive technologies to perform “value alignment” between automated decision-making systems and European fundamental rights (which is obviously of the utmost importance). It is in this way that technologists—whether scientists or humanists or both—are becoming the watchmen of European fundamental rights. However, these are highly specialized fields that take focused study to understand even a portion of what is being recommended as ensuring fundamental rights. The information asymmetry between experts in the field and those traditionally engaged in legal interpretation (and let us not forget voters) raises the age-old question: who is watching the watchmen themselves? While some critical analysis of these technologies has been conducted, much remains unexplored. Questions like these about digital constitutionalism and the EU Digital Strategy will be considered throughout the manuscript. But the main theme will be to develop a set of “rules for the rules” applied to the “code as law” tradition, focusing specifically on the debiasing tools of algorithmic discrimination and fairness as a case study. Such rules for the rules are especially important given the threat of an algorithmic Leviathan.
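
For readers unfamiliar with what those debiasing tools actually do, here is a small, hedged illustration: it measures one common fairness notion, demographic parity, and applies a crude post-processing fix. The data, threshold, and metric choice are assumptions invented for this example; they are not taken from the manuscript, and they say nothing about which notion of fairness the law actually requires.

```python
# Illustrative sketch of a fairness metric and a crude post-processing debias step.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-decision rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)             # protected attribute (0/1), invented
scores = rng.uniform(size=1000) + 0.10 * group    # model scores, slightly skewed by group

# A single global threshold inherits the skew in the scores ...
biased_decisions = (scores > 0.5).astype(int)
print("before:", demographic_parity_difference(biased_decisions, group))

# ... while per-group thresholds chosen to equalise selection rates reduce it
# (one of many possible interventions, each with its own legal and ethical trade-offs).
target_rate = biased_decisions.mean()
debiased = np.zeros_like(biased_decisions)
for g in (0, 1):
    thr = np.quantile(scores[group == g], 1 - target_rate)
    debiased[group == g] = (scores[group == g] > thr).astype(int)
print("after:", demographic_parity_difference(debiased, group))
```

The point of the case study is precisely that choices like these, buried in a few lines of code, end up operationalising contested legal concepts.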

Soumia El Mestari: Threats to Data Privacy in Machine Learning: Legal and Technical Research Focused on Membership Inference Attacks

This work systematically discusses the risks against data protection in modern machine learning systems, taking the original perspective of the data owners, i.e. those who hold the various datasets, data models, or both, throughout the machine learning life cycle, and considering the different machine learning architectures. It argues that the origin of the threats, the risks against the data, and the level of protection offered by PETs depend on the data processing phase, the role of the parties involved, and the architecture in which the machine learning systems are deployed. By offering a framework in which to discuss privacy and confidentiality risks for data owners and by identifying and assessing privacy-preserving countermeasures for machine learning, this work can facilitate the discussion about compliance with EU regulations and directives. We discuss current challenges and research questions that are still unsolved in the field. In this respect, this work provides researchers and developers working on machine learning with a comprehensive body of knowledge to let them advance in the science of data protection in machine learning as well as in closely related fields such as Artificial Intelligence.
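
To give a flavour of the kind of threat the title refers to, below is a minimal, self-contained sketch of a confidence-threshold membership inference attack, one of the simplest baselines in the literature. The dataset, model, and attack signal are illustrative assumptions made for this post, not the method developed in the thesis.

```python
# Sketch of a confidence-threshold membership inference attack: an overfitted model
# tends to be more confident on its training ("member") points than on unseen
# ("non-member") points, and that gap is what the attacker exploits.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model deliberately allowed to overfit its training data.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def attack_score(model, X, y):
    """Attacker's signal: the model's confidence in the true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

scores = np.concatenate([attack_score(target, X_train, y_train),   # members
                         attack_score(target, X_test, y_test)])    # non-members
membership = np.concatenate([np.ones(len(y_train)), np.zeros(len(y_test))])

# An AUC well above 0.5 means the attacker can tell members from non-members,
# i.e. the model leaks information about who was in its training set.
print("membership inference AUC:", round(roc_auc_score(membership, scores), 3))
```

It is exactly this kind of leakage, and the countermeasures against it, that the thesis maps onto the data-protection obligations of the parties involved.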

Special Edition Blog Series on PhD Abstracts (Part V)

This post is a continuation of the blog post series on PhD abstracts. You can find the first part of the series here.

Armend Duzha: Data Management and Analytics on Edge Computing and Serverless Offerings

This research will propose a new approach to protecting against risks related to personal data exploitation, outlining a methodology for implementing data management and analytics in edge computing and serverless offerings that takes privacy properties into account in order to balance the prevention of risks with the promotion of innovation. In addition, it will establish AI-driven processes to increase users’ ability to define more accurately both their offerings in edge computing environments and the data management and analytics applied to them as regards the protection of their privacy, and it will draw the architecture for data governance and analytics linked to resource management in such dynamic environments.

Christos Magkos: Personal Health Information Management Systems for User Empowerment

In the era of immense data accumulation in the healthcare sector, effective data management is becoming increasingly relevant in two domains: data empowerment and personalisation. As healthcare shifts towards personalized and precision medicine, prognostic tools that stem from robust modeling of healthcare data while remaining compliant with privacy regulations and the four pillars of medical ethics (autonomy, beneficence, non-maleficence, and justice) are lacking. This thesis assesses the principles that the design of health data storage and processing should adhere to through the prism of personal information management systems (PIMS). PIMS enable decentralized data processing while adhering to data minimization and allowing control over data exposure to third parties, hence enhancing privacy and patient autonomy. We propose a system in which data is processed in a decentralized fashion, providing actionable recommendations to the user through risk stratification and causal inference modeling of health data sourced from electronic health records and IoT devices. Through an interoperable personal information management system, previously fragmented data, which can be variably sourced and present with inconsistencies, can be integrated into one system consistent with the EHDS, so that data processing can proceed more accurately. When attempting to design clinically actionable healthcare analytics and prognostic tools, one of the main issues arising from current risk stratification models is the lack of actionable recommendations that are deeply rooted in the pathologies analyzed. We therefore compare whether causal inference models derived from existing literature and known causal pathways can provide predictions as accurate as those of risk stratification models when medical outcomes are known. This would allow for explainable and actionable outcomes, as physicians are reluctant to act upon “black box” recommendations due to medical liabilities, and patients are less likely to comply with unexplained recommendations, rendering them less effective when translated to the clinic. Simulated datasets based on the different types of data collected are analyzed with risk stratification and causal inference models in order to infer potential recommendations. Different methodologies of risk stratification and causal inference are assessed and compared in order to find the optimal model to serve as a source of recommendations. Finally, we propose a holistic model under which the user is fully empowered to share data, analytics, and metadata derived from this data management system with doctors, hospitals, and researchers respectively, with recommendations designed to be explainable and actionable.
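
As a purely illustrative sketch of the contrast drawn above, the snippet below fits a predictive risk model and then asks the causal question of what an intervention would change, on simulated data whose causal structure is known by construction. Variable names, effect sizes, and the adjustment strategy are assumptions invented for this example, not the thesis’ models or data.

```python
# Contrast between risk stratification (prediction) and a simple causal "what if"
# estimate obtained by standardisation (g-computation) on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20_000

# Assumed causal structure: age -> activity, age -> outcome, activity -> outcome.
age = rng.normal(60, 10, n)                            # confounder
activity = 10 - 0.1 * age + rng.normal(0, 1, n)        # modifiable exposure
logit = -5 + 0.08 * age - 0.5 * activity
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # e.g. an adverse health event

X = np.column_stack([age, activity])
model = LogisticRegression().fit(X, outcome)

# 1) Risk stratification: ranks patients well, but offers no recommendation by itself.
print("predictive AUC:", round(roc_auc_score(outcome, model.predict_proba(X)[:, 1]), 3))

# 2) Causal question: how much would risk drop if activity were raised by 1 unit?
#    With age the only confounder (true here by construction), adjusting for it and
#    averaging over the population estimates the effect of the intervention.
risk_now = model.predict_proba(X)[:, 1].mean()
X_do = np.column_stack([age, activity + 1])            # do(activity := activity + 1)
risk_do = model.predict_proba(X_do)[:, 1].mean()
print("estimated risk reduction from the intervention:", round(risk_now - risk_do, 3))
```

The second number is the kind of explainable, actionable output the thesis argues clinicians and patients actually need, provided the assumed causal pathways hold.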

Aizhan Abdrassulova: Boundaries of Data Ownership: Empowering Data Subjects in the EU.

The quest for the most effective data governance system in the European Union has not lost its relevance over time; on the contrary, it is gaining momentum. One of the frequently proposed models has been the concept of data ownership, which, after being abandoned, seemed scientifically unattractive for a while but continues to be discussed among legal scholars and policymakers. Today, a fresh perspective on data ownership is essential, placing the greatest emphasis on personal data ownership in order to empower data subjects and expand their capabilities and control. In this area, the practical side, and the improvements that individuals and companies with an awareness of data ownership can obtain, are significant. When it comes to the boundaries of data ownership, it is first necessary to look at the existing gaps and problems “from the inside” and ask what data subjects themselves generally consider problematic. What are their expectations? What level of control over their data do they consider acceptable and sufficient? Along with efforts to answer these pressing questions, there is an obvious need to provide suggestions for improving the level and quality of personal data management in a way that would be satisfactory to data subjects. The issues of privacy, access to data, and the ability of individuals to use and benefit from their data cannot be overlooked. In this regard, an analysis of the provisions of the Data Act Proposal is to be carried out, as well as a consideration of the data ownership approach as artifact and as exchange. Scientific research on individuals’ perception of the value of their own data remains relatively underdeveloped, even though it opens new opportunities for understanding the views and needs of data subjects.

WINNERS of the Innovation Challenge “AI Act Compass: Navigating Requirements for High-Risk AI Systems”

It was a long day

It was a restless but fair competition,

a lot of energy was spent,

innovative solutions were developed

it was challenging

it was great.

All the Teams invested their best efforts in both the off-line and in-person phases of the challenge. All the solutions were excellent but… some more than others. It was hard, but here is the Jury’s verdict (click on the Team names to discover the winners’ solutions and bios):

1st prize: The Data Jurists 

2nd prize: The AI-Act Navigators 

3rd prize: The AI-WARE

Special prizes: 

Most Innovative Solution: The Data Jurists 

Best Presentation: AI-Renella 

Special Edition Blog Series on PhD Abstracts (Part IV)

This post is a continuation of the blog post series on PhD abstracts. You can find the first part of the series here.

Barbara Lazarotto: Business to Government Data Sharing in the EU and the protection of personal data: Making sense of a complex framework.

Data is a crucial resource that plays an essential role in the economy and society. Yet, due to market failures, data has often been treated as a commodity and held in silos by a few actors, often large companies. In light of recent developments, there has been talk of moving data from the exclusive control of certain groups towards making it accessible for public use. The European Union has taken a step in this direction by introducing the “European Data Strategy”, a set of rules and regulations that, amongst other objectives, also aims at making it easier for stakeholders to share data among themselves and with governments. However, this regulatory framework, which includes different modalities of business-to-government data sharing, is fairly new, and the synergy between its instruments is yet to be seen, since many of them may overlap and contain possible contradictions.

Against this backdrop, there is a pressing need to analyze the current legal and regulatory landscape for business-to-government data sharing in the EU, how its instruments interact with each other, and their possible consequences for the rights of data subjects. The analysis will delve into the complexities of the regulatory conundrum associated with business-to-government data sharing and explore whether the current framework effectively addresses data subjects’ data protection rights as enshrined in the GDPR. Ultimately, this research aims to provide a comprehensive understanding of the legal and regulatory landscape for business-to-government data sharing and its connections with data subjects’ rights.

Fatma Dogan: Navigating the European Health Data Space: A Critical Analysis of Transparency Implications in Secondary Data Use under GDPR.

This thesis aims to critically examine the European Health Data Space (EHDS) proposal, with a specific focus on its secondary use framework and the implications of the transparency requirements of the General Data Protection Regulation (GDPR). The research delves into the intricate intersection of EHDS provisions, GDPR transparency requirements, and the proportionality principle. In this context, it will explore whether a rights-based approach to privacy regulation still suffices to address the challenges triggered by new data processing techniques such as the secondary use of data. The GDPR’s rights-based approach grants individuals a set of rights, and the obligation to offer transparency is one of them. However, it is highly unclear how data subjects could exercise these rights under the EHDS secondary use framework.

Xengie Doan: Tools and Methods for User-Centered, Legal-Ethical Collective Consent Models: Genomic Data Sharing.

Health data is sensitive, and sharing it could have many risks, which is especially true for genetic data. One’s genome might indicate physical or health risks that could be used for more personalized healthcare or for personalized insurance premiums. These risks affect not only the individual who initially consented to the collection and sharing, but also those who may be identified from the DNA, such as genetic relatives or those who share a genetic mutation. How can the relevant individuals come together to consent to genetic data sharing? Collective consent stems from indigenous bioethics, where indigenous tribes fought for their right to consent to biomedical research as a community, not just as individuals. It has been used in research partnerships with indigenous groups to improve stakeholder involvement instead of treating indigenous populations as test subjects. Though it has been proposed, no digital collective consent (wherein multiple individuals consent via different governance structures, such as families or tribal leaders) exists for the general public. Challenges span legal-ethical issues and technical properties such as transparency and usability. In order to build a collective digital consent that meaningfully addresses real-world challenges, this work uses genetic data sharing as a use case to better understand what tools and methods can enhance a user-friendly, transparent, and legal-ethically aware collective consent. I conducted a theoretical and empirical study on collective consent processes for health data sharing. First, we explored the privacy and biomedical gaps in collective consent, as it has not been implemented widely outside of indigenous populations. Then I surveyed user goals and attitudes towards engaging elements within different consent mediums, and I analyzed the transparency and user-relevancy of policies from notable DTC genetic testing companies to identify gaps. Last, I validated the framework for transparent, user-centered collective consent with a use case involving a company in Norway.

LeADS Organises Innovation Challenge and Final Conference on Legally compliant data-driven society

Pisa, October 2024—The LeADS Project (Legality Attentive Data Scientists) hosted its final three-day meeting at the LeADS beneficiary Sant’Anna School of Advanced Studies. The event featured a diverse range of activities, from an innovation challenge on the AI Act to intensive panel discussions.

The event opened with an innovation challenge on the AI Act. External participants of the challenge needed to create a solution that helps developers or deployers of AI systems navigate the AI Act’s risk classification system and understand which requirements apply to them.

The third day focused on the final LeADS Conference, titled “Legally Compliant Data-Driven Society,” which explored how a multidisciplinary approach to governance can reap the benefits of new technologies while guaranteeing fundamental rights and freedoms. The conference had outstanding speakers across three panels, each addressing critical facets of a data-driven society.

The first panel included an introduction by Giovanni Comandé from Sant’Anna School of Advanced Studies, followed by a keynote from Giovanni Pitruzzella, Judge at the Italian Constitutional Court, who discussed regulatory challenges and opportunities in data markets. Giuseppe Turchetti of Sant’Anna School then explored innovation ecosystems fueled by data, while Antonio Buttà from the Italian Competition Authority reflected on the evolving competition landscape shaped by data flows. The panel concluded with a lively discussion on fostering innovation while preserving market fairness.

The second panel focused on the topic of “Research and Secondary Use of Data.” Comandé introduced the session, followed by Paul de Hert from Vrije Universiteit Brussel, who addressed the ethical and legal frameworks supporting secondary data use in research. Regina Becker of Luxembourg National Data Service presented a European perspective on data stewardship, and Piotr Drobek from Poland’s Personal Data Protection Office (UODO) emphasized the challenges of privacy in secondary data applications.

Finally, the third panel explored the topic of “Data Society and Technological Sovereignty,” and featured an introduction by Michelle Sibilla from Université Toulouse III. Jorge Maestre Vidal from Indra Digital Labs explored the relationship between data sovereignty and security, while Giovanni Comandé provided insights on the legal implications of emerging data technologies. Nicola Lattanzi from IMT School for Advanced Studies Lucca concluded the panel with a reflection on how data policies can foster technological independence.

During the engaging three-day event, ESRs had the opportunity to participate in productive and enlightening discussions. The conversation emphasized the crucial need to harmonize technological progress with the fundamental principles of sovereignty and security.

Contact details:

Veronica Virdis, LeADS Project Manager

pm@legalityattentivedatascientists.eu