Consent and AI: a perspective from the Italian Supreme Court of Cassation

With a judgment symbolically delivered on the third anniversary of the day the GDPR became applicable, the Italian Supreme Court of Cassation sided with the national Data Protection Authority, holding an automated reputation-rating system illegitimate.

According to the DPA, the consent given was not informed, and “the system [is] likely to heavily affect the economic and social representation of a wide category of subjects, with rating repercussions on the private life of the individuals listed”.

In its appeal against the decision of the Court of Rome, the DPA, acting through the Avvocatura dello Stato, challenged “the failure to examine the decisive fact represented by the alleged ignorance of the algorithm used to assign the rating score, with the consequent lack of the transparency of the automated system necessary to make the consent given by the person concerned informed”.

The facts

The so-called Mevaluate system is a web platform (with a related computer archive) “designed to produce reputational profiles of natural and legal persons, with the aim of countering phenomena based on the creation of false or untrue profiles and of calculating, instead, in an impartial way, the so-called ‘reputational rating’ of the persons listed, so as to allow any third party to verify their real credibility”.

The case and the decision arise under the regime prior to the GDPR, but in substance the Court confirms the dictates of the GDPR itself, relaunching them as the pole star for all activities now defined as AI under the proposed Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.

To put it more clearly, the decision is relevant for any “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (Art. 3 of the proposed AI Act). That is, for any software produced using “(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimisation methods”.
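To make the breadth of that definition concrete, here is a deliberately minimal sketch in Python. The factor names, weights, and function below are hypothetical, invented purely for illustration and not drawn from the Mevaluate case; the point is that even a plain weighted sum over human-defined inputs is a “statistical approach” generating a “prediction” within the meaning of Annex I(c).

```python
# Hypothetical illustration only: even this trivial weighted sum is a
# "statistical approach" that "generate[s] outputs such as ... predictions"
# in the sense of Art. 3 / Annex I(c) of the proposed AI Act.
# Factor names and weights are invented, not taken from the Mevaluate case.

WEIGHTS = {
    "verified_identity": 40.0,   # hypothetical factor
    "clean_legal_record": 35.0,  # hypothetical factor
    "peer_endorsements": 25.0,   # hypothetical factor
}

def reputational_rating(profile: dict[str, float]) -> float:
    """Weighted sum of normalised (0..1) factors, yielding a 0..100 score."""
    return sum(WEIGHTS[name] * profile.get(name, 0.0) for name in WEIGHTS)

print(reputational_rating(
    {"verified_identity": 1.0, "clean_legal_record": 0.5, "peer_endorsements": 0.8}
))  # 40.0 + 17.5 + 20.0 = 77.5
```

What qualifies under the definition, in other words, is the technique and the output, not the sophistication of the model.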

The consent to the use of one’s personal data by the platform’s algorithm had been considered invalid by the Italian Data Protection Authority because it was not informed as to the logic used by the algorithm. That decision was quashed by the Court of Rome, but the Supreme Court of Cassation thinks otherwise. After all, on other occasions (see Court of Cassation n. 17278-18, Court of Cassation n. 16358-1) the Supreme Court had already clarified that consent to processing as such was not sufficient: it also had to be valid.

Even though it rests on the notion found in the previous legislation (a notion, incidentally, made even more explicit and stringent by the GDPR in the very direction the Court indicates today), the statement that “consent must be previously informed in relation to a processing well defined in its essential elements, so that it can be said to have been expressed, in that perspective, freely and specifically” remains highly topical.

Indeed, based on today’s principle of accountability, “it is the burden of the data controller to provide evidence that the access and processing challenged are attributable to the purposes for which appropriate consent has been validly requested – and validly obtained.”

The conclusion is as sharp as it is enlightening: “The problem, for the lawfulness of the processing, [is] the validity … of the consent that is assumed to have been given at the time of consenting. And it cannot be logically affirmed that the adhesion to a platform by members also includes the acceptance of an automated system, which makes use of an algorithm, for the objective evaluation of personal data, where the executive scheme in which the algorithm is expressed, and the elements considered for that purpose are not made known.”
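What disclosing the “executive scheme in which the algorithm is expressed, and the elements considered” could look like in practice is sketched below, again with purely hypothetical factors and weights: the scoring function returns, alongside the score, the elements it considered and each element’s contribution, so the logic can be shown to the data subject before consent is requested.

```python
# Hypothetical sketch only: one way to "make known" the executive
# scheme of a toy reputational-rating algorithm. Factor names and
# weights are invented for illustration.

WEIGHTS = {
    "verified_identity": 40.0,   # hypothetical factor
    "clean_legal_record": 35.0,  # hypothetical factor
    "peer_endorsements": 25.0,   # hypothetical factor
}

def explained_rating(profile: dict[str, float]) -> dict:
    """Return the score together with the logic that produced it."""
    contributions = {
        name: WEIGHTS[name] * profile.get(name, 0.0) for name in WEIGHTS
    }
    return {
        "score": sum(contributions.values()),
        "elements_considered": list(WEIGHTS),  # what the Court says must be known
        "weights": WEIGHTS,                    # the executive scheme
        "contributions": contributions,        # per-factor breakdown
    }
```

The design point is simply that the weights and the elements considered travel with the output, instead of remaining hidden inside the scoring function.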

It should be noted that this decision reaches beyond the limits of Art. 22 GDPR: it opens an interpretation of Articles 13(2)(f) and 14(2)(g) that moves past the “solely automated” requirement for automated decision-making, placing clear emphasis on the need for transparency of the logic used by the algorithm.