Hola, Bonjour, Ciao and Guten Tag! Here are your practical Privacy insights from November and early December for reading with a comforting winter cup of hot chocolate.
November and early December were quiet – until December kicked into full throttle, giving us major news on the EU AI Act, CJEU decisions galore, ICO enforcement, NOYB complaints against the EC and X, and more, all in a packed fortnight.
We’ll start with the momentous news on the EU AI Act.
On 8 December, after high-pressure, highly-lobbied negotiations, provisional agreement was finally reached on the EU AI Act. Please note, we do not have the final text yet – that’s still being finalised. And there will be a 2-year transition period, reduced to as little as 6 months for some areas.
Both 9 December press releases, from the European Parliament and the Council of the EU, confirm the agreement has attempted to balance the protection of rights with the protection of Europe’s position in the race to be an AI superpower. The Council was very clear about Europe’s ambition:
As the first legislative proposal of its kind in the world, it can set a global standard for AI regulation in other jurisdictions, just as the GDPR has done, thus promoting the European approach to tech regulation in the world stage.
We’ll know the detail when the final wording is published, but we already know 9 key takeaways from the press releases. You can also download our infographic here.
The EU AIA will align its definition of AI system with the OECD definition to ensure ‘sufficiently clear criteria for distinguishing AI from simpler software systems’.
The OECD definition: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’
The EU AIA: will not ‘affect member states’ competences in national security or any entity entrusted with tasks in this area’. Nor will it apply ‘to systems which are used exclusively for military or defence purposes’ nor ‘AI systems used for the sole purpose of research and innovation, or for people using AI for non-professional reasons’.
Non-compliance can lead to fines ‘depending on the infringement and size of the company’. And, like GDPR, there’s a tiered approach to fines: the press releases indicate maximum fines of €35 million or 7% of global annual turnover for use of banned AI applications, €15 million or 3% for violations of the Act’s obligations, and €7.5 million or 1.5% for supplying incorrect information – in each case, whichever is higher.
Again like GDPR, the AIA is risk-based: ‘The rules establish obligations for AI based on its potential risks and level of impact.’ This will be welcomed in some sectors and not in others, but this is a landmark law trying to balance the protection of fundamental rights with competitiveness in this new era. And it’s clear that the lower the risk, the lower the obligations:
‘AI systems presenting only limited risk would be subject to very light transparency obligations, for example disclosing that the content was AI-generated so users can make informed decisions on further use.’
High-risk AI systems include those with ‘significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law’ and those ‘used to influence the outcome of elections and voter behaviour’. For high-risk systems, there’s a mandatory fundamental rights impact assessment (start getting used to the acronym FRIA).
Public entities may have a greater burden as ‘certain users of a high-risk AI system that are public entities will also be obliged to register in the EU database for high-risk AI systems’. And we’ll need to see how the ‘obligation for users of an emotion recognition system to inform natural persons when they are being exposed to such a system’ dovetails with the ban on emotion recognition in the workplace and educational institutions.
General-purpose AI (GPAI) systems, and the models they are based on, ‘will have to adhere to transparency requirements, including drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training’.
These actions are good practice right now, so these obligations are not surprising and should not be problematic. There are further obligations for ‘high-impact GPAI models with systemic risk’ that meet certain criteria: they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. The press release calls these ‘more stringent obligations’ but, to us, they don’t seem any more than good practice for such systems.
While foundation models ‘must comply with specific transparency obligations before they are placed in the market’ there’s also a stricter regime for ‘high impact’ foundation models. These are foundation models ‘trained with large amount of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain’.
The EP doesn’t hold back on the scale of potential threats from AI: ‘Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit’ a range of systems and applications.
Banned applications include:
- biometric categorisation systems that use sensitive characteristics (e.g. political, religious or philosophical beliefs, sexual orientation, race)
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
- emotion recognition in the workplace and educational institutions
- social scoring based on social behaviour or personal characteristics
- AI systems that manipulate human behaviour to circumvent their free will
- AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation)
Much of this will need clarification. For example, ‘untargeted’ scraping of facial images is prohibited – but what about ‘targeted’ scraping, and what type of targeting? And emotion recognition is widely used right now. Indeed, the ICO was moved to release a statement saying emotion recognition is currently worthless. That’s never stopped some businesses chasing an edge, no matter how imaginary, or doing something ‘because we can’, but these EU press releases suggest interesting times ahead in clarifying exactly where the parameters are on this one.
As expected, law enforcement gets carve-outs, particularly on facial recognition in public places.
Negotiators agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification systems (RBI) in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crimes. There are conditions on the use of ‘post-remote’ RBI and ‘real-time’ RBI that are less relevant to our readers.
It’s important to know the institutional governance being put in place. There’ll be a few layers:
- an AI Office within the Commission, to oversee the most advanced AI models and enforce the common rules across member states
- a scientific panel of independent experts, to advise the AI Office on GPAI models
- an AI Board, comprising member states’ representatives, as a coordination platform and advisory body to the Commission
- an advisory forum for stakeholders such as industry, SMEs, start-ups, civil society and academia, to provide technical expertise to the AI Board
The EC updated its Q&A on the AIA on 14 December 2023, well worth a deeper read.
Keepabl released ‘Basecamp 4’ in December, including a range of new features.
Do contact us to see for yourself – book your demo now!
We’ve two Schufa cases from the CJEU in December. We’ll start with Case C‑634/21 which dealt with automated decision-making under Art 22 GDPR so we’ll call it Schufa ADM.
An individual in Germany (called OQ) applied for a loan from a lender. The lender obtained a score on OQ from a separate rating company called Schufa and, after receipt of the score, declined her application. OQ submitted a data subject request with Schufa ‘to send her information on the personal data registered and to erase some of the data which was allegedly incorrect’. OQ was unhappy with Schufa’s response and complained to the regional German DPA, which found in Schufa’s favour. OQ appealed to the German courts, which referred the case to the CJEU.
The key question was whether ‘the automated establishment, by a credit information agency, of a probability value based on personal data relating to a person and concerning his or her ability to meet payment commitments in the future constitutes ‘automated individual decision-making’ within the meaning of [Art 22 GDPR], where a third party, to which that probability value is transmitted, draws strongly on that probability value to establish, implement or terminate a contractual relationship with that person’.
The CJEU decided yes, Schufa’s scoring fell within Art 22 in this case because the three cumulative conditions in Art 22(1) were met: the automated establishment of the probability value is a ‘decision’; it is based solely on automated processing, including profiling; and it produces legal effects concerning the data subject or similarly significantly affects them, given how heavily the lender draws on the score.
Importantly, the CJEU noted that (our emphasis) ‘according to the factual findings of the referring court, in the event where a loan application is sent by a consumer to a bank, an insufficient probability value leads, in almost all cases, to the refusal of that bank to grant the loan applied for’.
To interpret Article 22, the CJEU reminded itself of the general rules on interpreting EU laws: ‘the interpretation of a provision of EU law requires that account be taken not only of its wording, but also of its context and the objectives and purpose pursued by the act of which it forms part’.
The biggest point of Schufa ADM may be as a reminder that ADM falling within Art 22 – because of a legal or similarly significant effect – can only be carried out on one of the 3 legal bases in Art 22(2): necessity for entering into or performing a contract, authorisation by EU or Member State law, or the data subject’s explicit consent. The CJEU noted that Art 22(1) is a prohibition on automated decision-making falling within its scope unless it is authorised under Art 22(2). (Again, these are ADMs where there’s a legal or similarly significant effect on the data subject.) The CJEU left it to the German referring court to decide whether the domestic German law on such scoring satisfied point (b).
The other Schufa case (joined Cases C‑26/22 and C‑64/22) also dealt with the Art 22 point and came to the same conclusion. But it mostly dealt with retention, so we’ll call it Schufa Retention.
As we’ve seen, Schufa creates scores on creditworthiness. To do this, part of the data Schufa uses is a private database of those who have been through insolvency / bankruptcy and the discharge of remaining debts. It creates that database itself by copying data from public registers of the same information that are created and maintained under a particular German domestic law. Schufa does this based on the legitimate interests of Schufa itself, its clients, and the public at large.
The case concerned Schufa’s 3-year retention period for its database, which was longer than the 6-month statutory retention period for the public database, but in line with an Art 40 Code of Conduct ‘drawn up in Germany by the association of agencies providing credit information and approved by the competent supervisory authority’.
Before reading on – what’s your thought on this?
Applying the legitimate interest assessment, focussing on the effect on individuals of the information in question and the balancing test, the CJEU decided that Schufa had to delete the data at the same time as the public register.
While EU law allowed for such public registers, in the public interest, it left the retention period to member states and Germany had decided that 6 months was the correct retention period, balancing the need for a functioning credit system with the impact on individuals of being included in such a register and the individuals’ interest to ‘reenter economic life’. Schufa had to reduce its retention period and the 3-year period in the Art 40 Code of Conduct needed to be amended accordingly.
Both Schufa decisions note that all processing had to comply with the rest of GDPR, including data minimisation, and the court in Schufa Retention noted that the CJEU had already decided that ‘the presence of the same personal data in several sources reinforces the interference with the individual’s right to privacy’. It therefore asked the German court to look at whether it was even valid for Schufa to maintain the data in its own database while the same data was available in the public database. (Schufa said it was for speed of response to its clients.)
Keeping with the credit industry, the ICO issued a reprimand to Bank of Ireland for sending ‘incorrect outstanding balances on 3,284 customers’ loan accounts to credit reference agencies, organisations that help lenders decide whether to approve financial products’.
Because of the steps the bank took ‘to correct their error, supporting affected customers and reviewing its data-management processes’ the ICO felt a reprimand, not a fine, was appropriate.
On 13 December the ICO fined the Ministry of Defence £350,000 for a breach caused by an email sent to 245 people using the ‘To’ field rather than ‘BCC’, so all 245 email addresses were visible to every recipient. The email was ‘sent by the team in charge of the UK’s Afghan Relocations and Assistance Policy (ARAP), which is responsible for assisting the relocation of Afghan citizens who worked for or with the UK Government in Afghanistan. The data disclosed, should it have fallen into the hands of the Taliban, could have resulted in a threat to life.’
Incorrectly addressed messages are the number one breach reported to the ICO.
As the ICO notes, ‘organisations should use bulk email services, mail merge, or secure data transfer services when sending any sensitive personal information electronically’.
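To illustrate the ‘BCC, not To’ point in practice – this is our own minimal sketch, not taken from the ICO’s guidance or the MoD’s systems, and every address, host name and credential below is a placeholder – here’s how a bulk notification can be sent in Python so that recipients never see each other’s email addresses:

```python
import smtplib
from email.message import EmailMessage

# Placeholder recipient list and SMTP details - replace with real values.
recipients = ["alice@example.org", "bob@example.org", "carol@example.org"]

msg = EmailMessage()
msg["Subject"] = "Service update"
msg["From"] = "notifications@example.org"
# Address the visible 'To' header to our own mailbox and put the real
# recipients in 'Bcc', so no recipient can see anyone else's address.
msg["To"] = "notifications@example.org"
msg["Bcc"] = ", ".join(recipients)
msg.set_content("Your notification text goes here.")

with smtplib.SMTP("smtp.example.org", 587) as smtp:
    smtp.starttls()
    smtp.login("smtp-user", "smtp-password")
    # send_message() uses the Bcc header to work out the envelope recipients
    # but strips it before transmission, so it never reaches the recipients.
    smtp.send_message(msg)
```

Sending one message per recipient, or using a dedicated bulk email service, removes the risk even more robustly – and remember that BCC only hides addresses, so the ICO’s point about secure data transfer services still applies where the content itself is sensitive.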
On 28 November the ICO issued a reprimand to NHS Fife because a random stranger was able to walk into a ward and ‘was handed a document containing personal information of 14 people and assisted with administering care to one patient’! This feels like way more than a GDPR event, but that’s our focus.
Some very clear takeaways here. CCTV had been accidentally turned off and the ICO concluded that ‘NHS Fife did not have appropriate security measures for personal information, as well as low staff training rates. Following this incident, NHS Fife introduced new measures such as a system for documents containing patient data to be signed in and out, as well as updated identification processes’.
The ICO’s recommendations, set out in the reprimand itself, speak for themselves.
NOYB have filed a complaint against each of X and the European Commission arising from the same facts: the use of special category personal data to target ads on X. Although X’s own guidelines say ads should not be based on political views and religious beliefs, that’s what happened.
Interestingly, NOYB allege this is contrary both to GDPR (the Meta cases suggest as much) and to the new EU Digital Services Act (DSA).
Separately, the EC has launched an investigation into X / Twitter for suspected infringement of other areas of the DSA – ironically, announced on X…
Major AI news – December saw the publication of a new ISO standard, ISO 42001.
We’re used to an ISMS (Information Security Management System) under ISO 27001. Now there’s an AIMS (Artificial Intelligence Management System). Destined to be very influential, the ISO describes the standard as usable by any size of organisation and ‘intended for use by an organization providing or using products or services that utilize AI systems’. The standard specifies requirements and provides guidance ‘for establishing, implementing, maintaining and continually improving an AI (artificial intelligence) management system within the context of an organization’.
The UK’s NCSC, together with the NSA, FBI and similar organisations from around the world, has published the Guidelines for Secure AI System Development. While aimed primarily at ‘providers of AI systems who are using models hosted by an organisation, or are using external [APIs]’, the Guidelines are helpful for anyone looking at AI to ‘make informed decisions about’ four key phases: ‘the design, development, deployment and operation’ of the AI system.
On 14 November, the EDPB adopted Guidelines 2/2023 on Technical Scope of Art. 5(3) of ePrivacy Directive on cookies and similar technology and, suffice it to say, it started a conversation! Only 13 pages long but immediately controversial. If you want to read some of the arguments, Peter Craddock (a very good person to follow on LinkedIn) posted two articles, the first on the EDPB possibly over-reaching its powers and the second on ‘overbroad notions and regulator activism’.
Have a great holiday season and here’s to a happy and healthy 2024 for all!