Technical and legal aspects of privacy-preserving services: the case of health data

Nowadays, the potential usefulness and value of health data are broadly recognized. Health data may transform traditional medicine into a clinical science intertwined with data research, driving innovation and producing value for the key stakeholders of the health care ecosystem: not only patients but also health care providers and the life insurance sector.

Yet health data do not appear out of thin air, and they are not a product that can be viewed in isolation. They encompass:

  • the personal data related to the physical or mental health of a natural person, including the provision of health care services, which reveal information about his or her health status (data concerning health),
  • the personal data relating to the inherited or acquired genetic characteristics of a natural person which give unique information about the physiology or the health of that natural person and which result, in particular, from an analysis of a biological sample from the natural person in question (genetic data),
  • the personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data (biometric data).

Thus, the individual cannot be deprived of the right to decide about the processing of such data, as health matters lie at the very centre of the privacy protection sphere.

It becomes clear that balancing the interests of the individual whose privacy is protected, the interests of other private and public actors, and the general common interest is highly problematic. Naturally, the processing of health data cannot be unrestricted: optimally, the legal framework should facilitate unlocking the value of health data for European citizens and businesses and empower users in the management of their own health data, without undermining the very essence of the right to privacy.

Currently, the processing of health data falls under the complex legal regime of the GDPR. This poses a serious challenge for data processors on the one hand and, on the other, gives rise to numerous legal questions. What are the grounds for processing such data in this highly differentiated context? How should medical data be protected at both the regulatory and the technological level? How can we harness the newest technology to increase data safety? How can anonymization and/or privacy-preserving data management techniques based on efficient cryptography (e.g., homomorphic encryption, secure multi-party computation) contribute to reaching higher protection levels without becoming a hurdle or an impediment to legitimate data processing? Can blockchain technologies be used for health information exchange? Should the creation of technological infrastructure be coupled with establishing proper key management schemes?
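
To make the cryptographic option mentioned above concrete, the sketch below shows how additively homomorphic encryption could, in principle, let an untrusted analytics service compute an aggregate over health readings it never sees in the clear. It is a minimal illustration, not a production design, and it assumes the third-party python-paillier package (`phe`); the readings and roles are invented for the example.

```python
# Minimal sketch: additively homomorphic (Paillier) encryption lets an
# untrusted party compute sums over data it cannot read.
# Assumes the third-party `phe` library (python-paillier): pip install phe
from phe import paillier

# The hospital (data controller) generates a keypair and keeps the private key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical patients' blood pressure readings, encrypted before leaving
# the hospital's systems.
readings = [118, 134, 127, 141]
encrypted = [public_key.encrypt(r) for r in readings]

# An untrusted analytics service adds the ciphertexts without decrypting them.
encrypted_sum = encrypted[0]
for c in encrypted[1:]:
    encrypted_sum = encrypted_sum + c  # addition works directly on ciphertexts

# Only the key holder can recover the aggregate result.
mean = private_key.decrypt(encrypted_sum) / len(readings)
print(mean)  # 130.0
```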

The task is twofold. First, on the regulatory level, general policy guidelines for legislators, independent agencies, and businesses operating data-sharing platforms are necessary, together with an analysis of the policy and market implications of providing privacy-preserving services. Second, practical recommendations are needed: specific postulates should be formulated on how data protection techniques can be applied in the health domain in order to contribute to achieving the abovementioned aims.

Author: Dr Katarzyna Południak-Gierz, Jagiellonian University

Is blockchain THE reliability solution for big data?

Blockchains have sparked great enthusiasm in the data science community, which believes this technology will be THE solution to data authenticity, data privacy protection, data quality guarantees, smooth data access and real-time analysis [1], [2]. With data considered the new digital oil, data science and blockchain seem to be the perfect match [3]. Data science allows people and organizations to extract valuable knowledge from humongous volumes of structured or unstructured data, while blockchain promises security and reliability for the manipulated data. But does this sound too good to be true?

Blockchain is a way to implement a decentralized repository (a.k.a. Distributed Ledger Technology) managed by a group of participants who do not need to trust one another. A blockchain groups data records into blocks that are cryptographically signed and chained by back-linking each block to its predecessor. Blockchain was initially proposed for cryptocurrencies (e.g., Bitcoin); this first generation of blockchain applications is called Blockchain 1.0. Later, smart contracts were introduced, paving the way to the decentralized applications referred to as Blockchain 2.0. Today, Blockchain 3.0 explores a wider spectrum of target applications such as e-health, smart cities and identity management [4].
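
The back-linking idea is simple enough to show in a few lines. The following is a minimal, illustrative hash chain in Python (invented records; no signatures, consensus or Merkle trees): tampering with any stored record invalidates every later back-link.

```python
# Minimal sketch of block back-linking: each block commits to the hash of its
# predecessor, so modifying history breaks the chain. Illustrative only.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Canonical serialization so the same block always hashes the same way.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, records: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "records": records})

def verify(chain: list) -> bool:
    # Recompute every back-link; a modified block invalidates its successors.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
append_block(chain, ["alice pays bob 5"])
append_block(chain, ["bob pays carol 2"])
print(verify(chain))                           # True
chain[0]["records"] = ["alice pays bob 500"]   # tamper with history
print(verify(chain))                           # False
```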

Big data is one of the possible Blockchain 3.0 applications. Deepa et al. [5] recently published a survey on the use of blockchain technology for big data, which shows that projects try to apply blockchain-based solutions at different steps of big data processing. This includes big data acquisition (data collection, data transmission and data sharing [6]), big data storage (by securing decentralized file systems or by detecting malicious updates in databases [7]) and big data analytics (for machine learning model sharing, decentralized intelligence and trusted decision-making of machine learning [8]).

Although blockchain technology appears to be a good candidate to secure big data, this technology is not flawless [9] [10] [11], and security threats and vulnerabilities have been identified at each layer of the blockchain stack model [12]. First of all, blockchains depend on the underlying network services, and attacks on routing protocols or on DNS can harm a blockchain network.

At the consensus layer, which is the core component that directly dictates the behavior and the performance of the blockchain, the situation is also complex [13]. The classic Proof of Work protocol is far from being a panacea and makes no sense from an environmental point of view [14]. In addition, most miners gather in mining pools to increase their processing capability and thus their chance of adding a new block to the blockchain. At the time of writing, the blockchain.com website estimates that six bitcoin mining pools (F2Pool, AntPool, Poolin, ViaBTC, Huobi.pool and SlushPool) represent 63% of the hash rate [15]. If they colluded with each other, they could launch a 51% attack and destabilize the whole bitcoin network [13]. Consequently, more and more consensus algorithms are being studied, proposed and extended, such as proof of stake, proof of authority, proof of activity, RBFT, YAC, etc. However, an ideal consensus algorithm is still missing, as almost all algorithms have significant disadvantages in one way or another with respect to their security and performance, as concluded in [13].

The Replicated State Machine layer, which is responsible for the interpretation and execution of transactions, can be vulnerable too. Blockchain technology does not guarantee the reliability of the data, only the integrity of the blocks. For instance, Karapapas et al. [16] showed how to make ransomware available using Ethereum smart contracts. Confidentiality of data is also not always embedded in the blockchain. Finally, blockchain is implemented as software running on computers, so attackers can exploit security holes and misconfigurations: white-hat hackers found more than 40 bugs in blockchain and cryptocurrency platforms during a one-month bug bounty session in 2019, four of which were buffer overflows that made it possible to inject arbitrary code [17].
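
For readers unfamiliar with the mining race described above, here is a deliberately simplified Proof of Work loop (illustrative only; the header string and difficulty are made up): miners search for a nonce that pushes the block hash under a target.

```python
# Minimal sketch of the Proof of Work puzzle: find a nonce such that the
# block header's SHA-256 hash starts with `difficulty` zero hex digits.
import hashlib

def mine(header: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # evidence that roughly 16**difficulty hashes were tried
        nonce += 1

print(mine("example-block-header"))
```

Each extra zero digit multiplies the expected work by 16, which is why real mining consumes so much energy and why whoever controls most of the pooled hash rate wins this race most often.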

To conclude, blockchain technology offers promising features for big data. However, one should acknowledge the current technical limitations of the technology. Legal aspects are another consideration: the European Parliamentary Research Service has observed many points of tension between blockchains and the GDPR [18]. When all these issues have been answered, then yes… blockchain will be a serious candidate for being the reliability solution for big data.

By Romain Laborde

References

[1]       “Why Data Scientists Are Falling in Love with Blockchain Tech,” Techopedia.com. https://www.techopedia.com/why-data-scientists-are-falling-in-love-with-blockchain-technology/2/33356 (accessed Apr. 21, 2021).

[2]       I. Rallo, “Six use cases in Blockchain Analysis,” Data Science Central, Mar. 15, 2021. https://www.datasciencecentral.com/profiles/blogs/six-use-cases-in-blockchain-analysis (accessed Apr. 21, 2021).

[3]       “What Makes Blockchain and Data Science a Perfect Combination.” https://www.rubiscape.io/blog/focus-on-data-diversity-to-make-your-ai-initiatives-successful-0 (accessed Apr. 21, 2021).

[4]       D. Di Francesco Maesa and P. Mori, “Blockchain 3.0: applications survey,” Journal of Parallel and Distributed Computing, vol. 138, pp. 99–114, Apr. 2020, doi: 10.1016/j.jpdc.2019.12.019.

[5]       N. Deepa et al., “A survey on blockchain for big data: Approaches, opportunities, and future directions,” arXiv preprint arXiv:2009.00858, 2020.

[6]       N. Tariq et al., “The Security of Big Data in Fog-Enabled IoT Applications Including Blockchain: A Survey,” Sensors, vol. 19, no. 8, Art. no. 8, Jan. 2019, doi: 10.3390/s19081788.

[7]       N. Zahed Benisi, M. Aminian, and B. Javadi, “Blockchain-based decentralized storage networks: A survey,” Journal of Network and Computer Applications, vol. 162, p. 102656, Jul. 2020, doi: 10.1016/j.jnca.2020.102656.

[8]       Y. Liu, F. R. Yu, X. Li, H. Ji, and V. C. M. Leung, “Blockchain and Machine Learning for Communications and Networking Systems,” IEEE Communications Surveys Tutorials, vol. 22, no. 2, pp. 1392–1431, Secondquarter 2020, doi: 10.1109/COMST.2020.2975911.

[9]       X. Li, P. Jiang, T. Chen, X. Luo, and Q. Wen, “A survey on the security of blockchain systems,” Future Generation Computer Systems, vol. 107, pp. 841–853, 2020.

[10]     M. Saad et al., “Exploring the attack surface of blockchain: A comprehensive survey,” IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 1977–2008, 2020.

[11]     Y. Wen, F. Lu, Y. Liu, and X. Huang, “Attacks and countermeasures on blockchains: A survey from layering perspective,” Computer Networks, vol. 191, p. 107978, 2021.

[12]     I. Homoliak, S. Venugopalan, D. Reijsbergen, Q. Hum, R. Schumi, and P. Szalachowski, “The Security Reference Architecture for Blockchains: Toward a Standardized Model for Studying Vulnerabilities, Threats, and Defenses,” IEEE Communications Surveys & Tutorials, vol. 23, no. 1, pp. 341–390, 2020.

[13]     M. Sadek Ferdous, M. Jabed Morshed Chowdhury, M. A. Hoque, and A. Colman, “Blockchain Consensus Algorithms: A Survey,” arXiv preprint, Jan. 2020.

[14]     “Bitcoin mining in China could soon generate as much carbon emissions as some European countries, study finds,” CNN Business. https://www.cnn.com/2021/04/09/business/bitcoin-mining-emissions/index.html (accessed Apr. 21, 2021).

[15]     “Pools,” Blockchain.com. https://www.blockchain.com/charts/pools (accessed May 03, 2021).

[16]     C. Karapapas, I. Pittaras, N. Fotiou, and G. C. Polyzos, “Ransomware as a Service using Smart Contracts and IPFS,” in 2020 IEEE International Conference on Blockchain and Cryptocurrency (ICBC), 2020, pp. 1–5.

[17]     Mix, “Security researchers found over 40 bugs in blockchain platforms in 30 days,” TNW | Hardfork, Mar. 14, 2019. https://thenextweb.com/news/blockchain-cryptocurrency-vulnerability-bug (accessed Apr. 28, 2021).

[18]     M. Finck, “Blockchain and the General Data Protection Regulation: Can distributed ledgers be squared with European data protection law?,” PE 634.445, Jul. 2019. [Online]. Available: https://www.europarl.europa.eu/RegData/etudes/STUD/2019/634445/EPRS_STU(2019)634445_EN.pdf.


Call Open – SSSA Pisa – Italy – ESRs 1-3-5-14

Application for ESRs 1-3-5-14

Hosted by Sant’Anna School of Advanced Studies – Pisa – Italy

CALL FOR APPLICATION: www.legalityattentivedatascientists.eu/wp-content/uploads/2021/06/CAll-for-application-LeADS_SSSA.pdf 

APPLICATION: sssup.esse3.cineca.it/Home.do 

DEADLINE: September 6th, 2021

JOB DESCRIPTION:

We are looking for 4 Early Stage Researchers (ESRs)/PhD Researchers. They will work within the framework of the LeADS project. Their main task will be to collaborate in the LeADS research and training program and to prepare their doctoral theses in the same framework. The PhD thesis work will be undertaken at Sant’Anna School of Advanced Studies (Pisa, Italy) in the national PhD program on Artificial Intelligence administered by the University of Pisa.
As doctoral students, the ESRs will be jointly supervised under the direction of the LeADS consortium and will also spend secondment(s) at Consortium members.

POSITIONS:

  • ESR 1 Project Title: Reciprocal interplay between competition law and privacy in the digital revolution
    Objectives: Data are an increasingly important resource in the so-called Digital Revolution: the impact on competition law is increasingly relevant, and so are the implications of data protection law for competition law. The researcher will address these implications, analysing several relevant topics: the impact of data portability and the interoperability requirements in the GDPR compared to barriers to entry and to market dominance; how customer data can be “assessed” as an index of market dominance for the big information providers (Google, Apple, Facebook, Amazon); and how SMEs can benefit from data protection law and competition law in order to increase their market share.
  • ESR 3 Project Title: Unchaining data portability potentials in a lawful digital economy
    Objectives: Empirically test the potential of the right to data portability. The research in the framework of LeADS will relate data portability not only to data protection law, but also to competition law and unfair business practices (e.g., offer or price discrimination between groups of consumers through profiling operations), setting the scene for their regulatory interplay in line with current and forthcoming technologies. In doing so, specific attention will be paid to possible technical solutions for guaranteeing effective portability. Additionally, the technical, statistical, and privacy implications of the new right will be evaluated, such as the need for standard formats for personal data, and the exception in Article 20(2) of the GDPR, according to which personal data, upon request by the data subject, should be transmitted from one controller to another “where technically feasible”.
  • ESR 5 Project Title: Differential privacy and differential explainability in the data sphere: the use case of predictive jurisprudence
    Objectives: Human life and the economy are becoming exponentially data-driven. The switch from local to cloud-based data storage is making it increasingly difficult to reap the maximum from data while minimizing the chances of identifying individuals in datasets. The researcher will explore the interplay between differential privacy technologies and the data protection regulatory framework in search of effective mixes.
  • ESR 14 Project Title: Neuromarketing and mental integrity between market and human rights
    Objectives: The ESR’s research question is whether and how neuromarketing can affect individuals’ human rights, considering in particular recent interpretations of rights contained in the European Convention on Human Rights and in the EU Charter of Fundamental Rights, in particular “mental privacy”, “cognitive freedom”, and “psychological continuity”. Indeed, advanced data analytics provide a very high level of understanding of users’ behaviour, sometimes even beyond the conscious self-understanding of the users themselves, exploiting all of a user’s idiosyncrasies, including vulnerabilities, and thereby harming the exercise of free decision-making.

RESPONSIBILITIES:

All ESRs recruited will be expected to carry out the following roles:

  • To manage and carry out their research project within 36 months
  • To write a PhD dissertation within the theme and objectives proposed above
  • To participate in research and training activities within the LeADS network
  • To participate in meetings of the LeADS project
  • To disseminate their research to the non-scientific community, by outreach and public engagement
  • To liaise with the other research staff and students working in broad areas of relevance to the research project and partner institutions
  • To write progress reports and prepare results and articles for publication and dissemination via journals, presentations, videos and the web
  • To attend progress and management meetings as required and network with the other research groups

ELIGIBILITY CRITERIA:

  • Master of Science (MSc) degree or equivalent
  • Fluent written and spoken English
  • Excellent communication and writing skills.
  • In order to fulfill the eligibility criteria of the Marie Curie ITN at the date of recruitment, applicants must not have resided or carried out their main activity (work, studies, etc.) in Italy for more than 12 months in the 3 years immediately prior to their recruitment. Compulsory national service and/or short stays such as holidays are not considered. Italian candidates can apply if they have resided in another country for more than 2 years of the last 3 years.
  • At the time of recruitment, the candidate must not have already obtained a doctoral degree and must be in the first 4 years (full-time equivalent) of their research career

OFFER:

  • Fixed Term Contract: 36 months
  • Work Hours: Full Time
  • Location: Pisa
  • Employee and PhD student status
  • Travel opportunities for training and learning
  • Yearly gross salary: living allowance of € 40.966,56, mobility allowance up to € 7.200, family allowance of € 3.000

APPLICATION PROCEDURE: Please apply ONLINE and include:

  • A detailed Curriculum Vitae et studiorum (in English)
  • a motivation letter (Max 1,000 words in English)
  • a copy of your official academic degree(s)
  • proof of English proficiency (self-assessment)
  • the names and contacts of two referees
  • scan of a valid identification document (e.g., passport)
  • a non-binding research plan of a maximum of 3,500 words, which must include (in English): 1. the title of the research; 2. the scientific premises and relevant bibliography; 3. the aim and expected results of the research project. The non-binding research plan needs to be aligned with one of the research descriptions for the LeADS project.

Consent and AI: a perspective from the Italian Supreme Court of Cassation

With a judgment symbolically delivered on the day of the third anniversary of the entry into force of the GDPR, the Italian Supreme Court of Cassation agreed with the national Data Protection Authority that an automated reputation-rating system was illegitimate.

The consent given, according to the DPA, was not informed and “the system [is] likely to heavily affect the economic and social representation of a wide category of subjects, with rating repercussions on the private life of the individuals listed”.

In its appeal against the decision of the Court of Rome, the DPA, through the Avvocatura dello Stato, challenged “the failure to examine the decisive fact represented by the alleged ignorance of the algorithm used to assign the rating score, with the consequent lack of the necessary requirement of transparency of the automated system to make the consent given by the person concerned informed”.

The facts: The so-called Mevaluate system is a web platform (with a related computer archive) “preordained to the elaboration of reputational profiles concerning physical and juridical persons, with the purpose to contrast phenomena based on the creation of false or untrue profiles and to calculate, instead, in an impartial way the so-called “reputational rating” of the subjects listed, to allow any third party to verify the real credibility”.

The case and the decision are based on the regime prior to the GDPR, but in fact the Court confirms the dictate of the GDPR itself, relaunching it as the pole star for all activities now defined as AI under the proposed Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.

To put it more clearly, the decision is relevant for any “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (Art. 3 of the proposed AI Act). That is, for any software produced using “(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimisation methods” (Annex I).

The consent to the use of one’s personal data by the platform’s algorithm had been considered invalid by the Italian Data Protection Authority because it was not informed about the logic used by the algorithm. The decision was quashed by the Court of Rome, but the Supreme Court of Cassation thinks otherwise. After all, on other occasions (see Court of Cassation n. 17278-18, Court of Cassation n. 16358-1) the Supreme Court had already clarified that consent to processing as such was not sufficient; it also had to be valid!

Even if based on the notion present in the previous legislation (incidentally, made even more explicit and hardened by the GDPR in the direction indicated today by the Court), the reference to the fact that “consent must be previously informed in relation to a processing well defined in its essential elements, so that it can be said to have been expressed, in that perspective, freely and specifically” remains very topical.

Indeed, based on today’s principle of accountability, “it is the burden of the data controller to provide evidence that the access and processing challenged are attributable to the purposes for which appropriate consent has been validly requested – and validly obtained.”

The conclusion is as sharp as it is enlightening: “The problem, for the lawfulness of the processing, [is] the validity … of the consent that is assumed to have been given at the time of consenting. And it cannot be logically affirmed that the adhesion to a platform by members also includes the acceptance of an automated system, which makes use of an algorithm, for the objective evaluation of personal data, where the executive scheme in which the algorithm is expressed, and the elements considered for that purpose are not made known.”

It should be noted that this decision goes beyond the limits of Article 22 GDPR: it opens an interpretation of Articles 13(2)(f) and 14(2)(g) that goes beyond the “solely automated” requirement for automated decision-making mechanisms by placing a clear emphasis on the need for transparency of the logic used by the algorithm.

Algorithms are learning from our behaviour: How must we teach them

by Daniel Zingaro

Have you ever wondered why the online suggestions you receive on videos, products, services or special offers fit so perfectly with your preferences and interests? Why your social media feed only shows certain content but filters out the rest? Or why you get certain results from an internet search on your smartphone, but you can’t get the same results from another device? And why does a map application suggest a certain route over another? Or why you are always matched with cat lovers on dating apps?

Did you just click away, thinking that your phone mysteriously understands you? And although you may have wondered about this, you may never have found out why.

How these systems work to suggest specific content or courses of action is generally invisible. The input, output and processes of their algorithms are never disclosed to users, nor are they made public. Still, such automated systems increasingly inform many aspects of our lives, such as the online content we interact with, the people we connect with, the places we travel to, the jobs we apply for, the financial investments we make, and the love interests we pursue. As we experience a new realm of digital possibilities, our vulnerability to the influence of inscrutable algorithms increases.
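
To make this tangible, here is one deliberately simplified way such a suggestion engine can work: content-based filtering, where items and a user profile are vectors of interest scores and the feed ranks items by similarity. Everything below (features, items, scores) is invented for illustration; production recommenders are vastly more complex.

```python
# Toy sketch of content-based recommendation using cosine similarity.
# All item names and scores are made up for illustration.
import numpy as np

# Items described by hand-picked interest features: [cats, cooking, travel]
items = {
    "cat video":   np.array([1.0, 0.0, 0.1]),
    "recipe clip": np.array([0.0, 1.0, 0.2]),
    "city guide":  np.array([0.1, 0.2, 1.0]),
}

# A user profile inferred from past clicks (here: mostly cat content).
user = np.array([0.9, 0.1, 0.2])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank items by similarity to the user profile: the feed keeps serving
# more of what you clicked before.
ranked = sorted(items, key=lambda name: cosine(user, items[name]), reverse=True)
print(ranked)  # 'cat video' comes first for this user
```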

Some of the decisions taken by algorithms may create seriously unfair outcomes that unjustifiably privilege certain groups over others. Because machine-learning algorithms learn from the data we feed them, they inevitably also learn the biases reflected in that data. For example, the algorithm that Amazon employed between 2014 and 2017 to automate the screening of job applicants reportedly penalised words such as ‘women’ (e.g., the names of women’s colleges) on applicants’ resumes. The recruiting tool learned patterns in data composed of the previous 10 years of candidates’ resumes and therefore learned that Amazon preferred men to women, as men were hired more often as engineers and developers. This means that women were blatantly discriminated against purely on the basis of their gender with regard to obtaining employment at Amazon.

To avoid a world in which algorithms unconsciously guide us towards unfair or unreasonable choices because they are inherently biased or manipulated, we need to fully understand and appreciate the ways in which we teach these algorithms to function. A growing number of researchers and practitioners already engage in explainable AI, designing processes and methods that allow humans to understand and trust the results of machine learning algorithms. Legally, the European General Data Protection Regulation (GDPR) requires and spells out specific levels of fairness and transparency that must be adhered to when using personal data, especially when such data are used to make automated decisions about individuals. This imports the principle of accountability for the impact or consequences that automated decisions have on human lives. In a nutshell, this developing domain is called algorithmic transparency.
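
As a flavour of what explainable AI tooling looks like in practice, the sketch below uses permutation importance, one simple model-agnostic technique: shuffle one feature at a time and measure how much the model’s accuracy drops. The dataset and model are stock scikit-learn choices picked purely for illustration, not a recipe for any particular system.

```python
# Minimal sketch of one explainability technique: permutation importance.
# Features whose shuffling hurts accuracy most are the ones driving decisions,
# and the ones an explanation to the data subject should focus on.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five most influential features for this model.
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")
```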

However, there are many questions, concerns and uncertainties that need in-depth investigation. For example: 1) how can the complex statistical functioning of a machine learning algorithm be explained in a comprehensible way; 2) to what extent does transparency build, or hamper, trust; 3) to what extent is it fair to influence people’s choices through automated decision-making; 4) who is liable for unfair decisions; … and many more.

These questions need answers if we wish to teach algorithms well and allow the co-existence between humans and machines to be productive and ethical.

Authors:

Dr Arianna Rossi – Research Associate at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, LinkedIn: https://www.linkedin.com/in/arianna-rossi-aa321374/ , Twitter: @arionair89

Dr Marietjie Botes – Research Associate at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, LinkedIn:  https://www.linkedin.com/in/dr-marietjie-botes-71151b55/ , Twitter: @Dr_WM_Botes

The beginning of the LeADS era

On January 1st, 2021, LeADS (Legality Attentive Data Scientists) started its journey. A Consortium of 7 prominent European universities and research centres, along with 6 important industrial partners and 2 Supervisory Authorities, is exploring ways to create a new generation of LEgality Attentive Data Scientists while investigating the interplay between and across many sciences.

LeADS envisages a research and training programme that will blend ground-breaking applied research with pragmatic problem-solving from the involved industries, regulators, and policy makers. The skills produced by LeADS and tested by the ESRs will make it possible to tackle the confusion created by the blurred borders between personal and commercial information, and between personality and property rights, typical of the big data environment. Both processes constitute a silent revolution (developed by new digital business models, industrial standards, and customs) that is already embedded in soft law instruments (such as stakeholders’ agreements) and emerging in case law and legislation (Regulation EU 2016/679 and the e-privacy directive, to begin with), while data scientists are mostly unaware of them. They cut across the emergence of the Digital Transformation and call for a more comprehensive and innovative regulatory framework. Against this background, LeADS is animated by the idea that in the digital economy data protection holds the keys both to protecting fundamental rights and to fostering the kind of competition that will sustain the growth and “completion” of the “Digital Single Market” and the competitive ability of European businesses outside the EU. Under LeADS, the General Data Protection Regulation (GDPR) and other EU rules can dictate the transnational standard for the global data economy, while the project trains researchers able to drive the process and set an example.

The data economy, or rather the data society we increasingly live in, is our explorative target from many angles (from the technological to the legal and ethical). This new generation is needed to better answer the challenges of the data economy and the unfolding of the digital transformation. Our Early Stage Researchers (ESRs) will come from many experiences and backgrounds (law, computer science, economics, statistics, management, engineering, policy studies, mathematics, and more).

ESRs will find an enthusiastic transnational, interdisciplinary team of teams tackling the relevant issues from their many angles. Their research will be supported by these research teams in setting the theoretical framework and the practical implementation template of a common language.

The LeADS research plan, although it already envisages 15 specific topics to be investigated in an interdisciplinary way, remains open-ended.

This is natural in the fields we have selected, in which we have identified crossover concepts in need of a common understanding, useful for future researchers, policy makers, software developers, lawyers and market actors.

LeADS research strives to create and share cross-disciplinary languages and to integrate the respective background domain knowledge of its participants into one shared idiolect, which it wants to share with a wider audience.

It is LeADS’ understanding that regulatory issues in data science and AI development and deployment are often perceived as (and sometimes are) hurdles to innovation, markets and, above all, research. Our unwritten goal is to contribute to turning the regulatory and ethical constraints that are needed into opportunities for better developments.

LeADS aims at nurturing a data science capable of maintaining its innovative solutions within the borders of the law – by design and by default – and of helping expand the legal frontiers in line with innovation needs, preventing the enactment of legal rules that are technologically unattainable.

By Giovanni Comandé