December Privacy Roundup


Hola, Bonjour, Ciao and Guten Tag! Here are your practical Privacy insights from November and early December for reading with a comforting winter cup of hot chocolate.

November and early December were quiet – until December kicked into full throttle, giving us major news on the EU AI Act, CJEU decisions galore, ICO enforcement, NOYB complaints against the EC and X, and more, all in a packed fortnight.

We’ll start with the momentous news on the EU AI Act.

 

The EU AI Act

On 8 December, after high-pressure, highly-lobbied negotiations, provisional agreement was finally reached on the EU AI Act. Please note that we don’t have the final text yet: that’s still to be resolved. And there will be a 2-year transition period, reduced to as little as 6 months for some areas.

Both 9 December press releases, from the European Parliament and the Council of the EU, confirm the agreement has attempted to balance the protection of rights with the protection of Europe’s position in the race to be an AI superpower. The Council was very clear about Europe’s ambition:

As the first legislative proposal of its kind in the world, it can set a global standard for AI regulation in other jurisdictions, just as the GDPR has done, thus promoting the European approach to tech regulation in the world stage.

 

Keepabl infographic

We’ll know the detail when the final wording is published, but we already know 9 key takeaways from the press releases. You can also download our infographic here.

 

Key Takeaway 1: Definition

The EU AIA will align its definition of AI system with the OECD definition to ensure ‘sufficiently clear criteria for distinguishing AI from simpler software systems’.

The OECD definition: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’

 

Key Takeaway 2: Scope

The EU AIA will not ‘affect member states’ competences in national security or any entity entrusted with tasks in this area’. Nor will it apply ‘to systems which are used exclusively for military or defence purposes’ nor ‘AI systems used for the sole purpose of research and innovation, or for people using AI for non-professional reasons’.

 

Key Takeaway 3: Fines

Non-compliance can lead to fines ‘depending on the infringement and size of the company’. And, like GDPR, there’s a tiered approach to fines, with the percentages referring to global annual turnover (see the sketch after this list):

  • €35 million or 7% for violations of the banned AI applications,
  • €15 million or 3% for violations of the AI act’s obligations,
  • €7.5 million or 1.5% for the supply of incorrect information, and
  • more proportionate caps on administrative fines for SMEs and start-ups.
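As a quick illustration of how the caps could work in practice, here’s a minimal sketch. It assumes the final text follows GDPR’s ‘whichever is higher’ approach to fixed amounts versus turnover percentages, which we won’t know for sure until the text is published; all figures are hypothetical.

```python
# Illustrative only: the final AI Act text hadn't been published at the
# time of writing, so we assume the caps work like GDPR Art 83's, i.e.
# the higher of the fixed amount and the turnover percentage applies.

def aia_fine_cap(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the assumed maximum fine for a tier, given turnover."""
    tiers = {
        "banned_practices": (35_000_000, 0.07),
        "other_obligations": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.015),
    }
    fixed_cap, pct_cap = tiers[tier]
    return max(fixed_cap, pct_cap * global_annual_turnover_eur)

# Hypothetical company with €2bn global annual turnover:
print(aia_fine_cap("banned_practices", 2_000_000_000))  # 140000000.0
```

As the example shows, for larger companies the turnover percentage, not the fixed amount, would set the ceiling.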

 

Key Takeaway 4: Risk-based

Again like GDPR, the AIA is risk-based: ‘The rules establish obligations for AI based on its potential risks and level of impact.’ Not every sector will welcome this, but it’s a landmark law trying to balance the protection of fundamental rights with competitiveness in this new era. And it’s clear that the lower the risk, the lighter the obligations:

‘AI systems presenting only limited risk would be subject to very light transparency obligations, for example disclosing that the content was AI-generated so users can make informed decisions on further use.’

 

Key Takeaway 5: High-risk AI Systems

High-risk AI systems include those with ‘significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law’ and those ‘used to influence the outcome of elections and voter behaviour’. For high-risk systems, there’s a mandatory fundamental rights impact assessment (start getting used to the acronym FRIA).

Public entities may have a greater burden as ‘certain users of a high-risk AI system that are public entities will also be obliged to register in the EU database for high-risk AI systems‘. And we’ll need to see how the ‘obligation for users of an emotion recognition system to inform natural persons when they are being exposed to such a system’ dovetails with the ban on emotion recognition in the workplace and educational institutions.

 

Key Takeaway 6: Foundation Models (GPAI)

General-purpose AI (GPAI) systems, and the models they are based on, ‘will have to adhere to transparency requirements, including drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training’.

These actions are good practice right now, so these obligations are not surprising and should not be problematic. There are further obligations for ‘high-impact GPAI models with systemic risk’ that meet certain criteria: they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. The press release calls these ‘more stringent obligations’ but, to us, they don’t seem any more than good practice for such systems.

While foundation models ‘must comply with specific transparency obligations before they are placed in the market’ there’s also a stricter regime for ‘high impact’ foundation models. These are foundation models ‘trained with large amount of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain’.

 

Key Takeaway 7: Banned Applications

The EP doesn’t hold back on the scale of potential threats from AI: ‘Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit’ a range of systems and applications.

Banned applications include:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent their free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation); and
  • some cases of predictive policing for individuals

Much of this will need clarification. For example, ‘untargeted’ scraping of facial images is prohibited – so what about ‘targeted’ scraping, and what kind of targeting? And emotion recognition is widely used right now. Indeed, the ICO was moved to release a statement warning that emotion analysis technologies are immature and may never work. That’s never stopped some businesses chasing an edge, no matter how imaginary, or doing something ‘because we can’, but these EU press releases suggest interesting times ahead in clarifying exactly where the parameters are on this one.

As expected, law enforcement gets carve-outs, particularly on facial recognition in public places.

 

Key Takeaway 8: Law enforcement exemptions

Negotiators agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crimes. There are conditions on the use of “Post-remote” RBI and “Real-time” RBI that are less relevant to our readers.

 

Key Takeaway 9: Governance

It is important to know the institutional governance being put in place. There’ll be a few layers:

  1. An AI Office within the European Commission ‘to oversee these most advanced AI models, contribute to fostering standards and testing practices, and enforce the common rules in all member states’.
  2. A Scientific Panel of independent experts will ‘advise the AI Office about GPAI models, by contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation and the emergence of high impact foundation models, and monitoring possible material safety risks related to foundation models’.
  3. An AI Board (analogous to the EDPB) made up of member states’ representatives, ‘will remain as a coordination platform and an advisory body to the Commission and will give an important role to Member States on the implementation of the regulation, including the design of codes of practice for foundation models’.
  4. Finally, an Advisory Forum for stakeholders ‘such as industry representatives, SMEs, start-ups, civil society, and academia, will be set up to provide technical expertise to the AI Board’.

 

Updated Q&A on the EU AIA

The EC updated its Q&A on the AIA on 14 December 2023; it’s well worth a deeper read.

 

Excellent Keepabl Product Update

Keepabl released ‘Basecamp 4’ in December, including:

  • Unlimited Forms to intake Breaches and Rights: you choose the questions to include and edit the wording to suit your audience, submissions go straight into your Keepabl account with instant email alerts, and there’s an immutable PDF record of each submission for any later dispute.

Forms flowchart

  • Audit Logs are now right there on each Activity, Right and Breach.
  • And French and Spanish professional human translations join German, Italian and English, so we’re now available in 5 languages.

Do contact us to see for yourself, book your demo now!

 

CJEU & Automated Decision-Making (‘Schufa ADM’)

We’ve two Schufa cases from the CJEU in December. We’ll start with Case C‑634/21, which dealt with automated decision-making under Art 22 GDPR, so we’ll call it Schufa ADM.

 

Context

An individual in Germany (called OQ) applied for a loan from a lender. The lender obtained a score on OQ from a separate rating company called Schufa and, after receipt of the score, declined her application. OQ made a data subject request to Schufa ‘to send her information on the personal data registered and to erase some of the data which was allegedly incorrect’. OQ was unhappy with Schufa’s response and complained to the regional German DPA, which found in Schufa’s favour. OQ appealed to the German courts, which referred the case to the CJEU.

 

Was Schufa’s scoring an Art 22 automated decision?

The key question was whether ‘the automated establishment, by a credit information agency, of a probability value based on personal data relating to a person and concerning his or her ability to meet payment commitments in the future constitutes ‘automated individual decision-making’ within the meaning of [Art 22 GDPR], where a third party, to which that probability value is transmitted, draws strongly on that probability value to establish, implement or terminate a contractual relationship with that person’.

The CJEU decided yes, Schufa’s scoring fell within Art 22 in this case because:

  1. it was a decision,
  2. it was based on profiling, expressly mentioned in Art 22, and
  3. the third party lender ‘draws strongly’ on Schufa’s score in making its own decision, and that decision had a legal or similarly significant effect on OQ.

Importantly, the CJEU noted that (our emphasis) ‘according to the factual findings of the referring court, in the event where a loan application is sent by a consumer to a bank, an insufficient probability value leads, in almost all cases, to the refusal of that bank to grant the loan applied for’.

 

How to interpret EU law

To interpret Article 22, the CJEU reminded itself of the general rules on interpreting EU laws: ‘the interpretation of a provision of EU law requires that account be taken not only of its wording, but also of its context and the objectives and purpose pursued by the act of which it forms part’.

  • The CJEU considered Article 22 in some detail, then the references to Art 22 in Arts 13, 14 and 15 (which expressly call out automated decision making in the information to be provided to individuals) and then Recital 71 (which is a lengthy recital giving extensive context and examples), all as part of reviewing Art 22’s objectives and purpose.
  • The CJEU agreed with the referring court that there would be ‘a risk of circumventing [Art 22] and, consequently, a lacuna in legal protection if a restrictive interpretation of that provision was retained, according to which the establishment of the probability value must only be considered as a preparatory act and only the act adopted by the third party can, where appropriate, be classified as a ‘decision’ within the meaning of [Art 22(1)]’.
  • In other words, because the court held that Schufa’s profiling-produced score was an Art 22 automated decision, individuals can obtain relevant information on the logic involved etc from Schufa (the very party who knows it) instead of being limited to asking the lenders who use Schufa’s scoring (and who generally don’t know the underlying logic used by Schufa).

 

Prohibition on Art 22 ADMs save for …

The biggest point of Schufa ADM may be as a reminder that ADM falling within Art 22 – because it has a legal or similarly significant effect on the data subject – can only be carried out on one of the three legal bases in Art 22(2): necessity for entering into or performing a contract, authorisation by EU or member state law, or the data subject’s explicit consent. The CJEU noted that Art 22(1) is a prohibition on automated decision-making falling within its scope unless it is authorised under Art 22(2). The CJEU left it to the German referring court to decide whether the domestic German law on such scoring satisfied point (b).

 

Key takeaways

  • First, audit whether and where you make a decision based solely on automated processing, including profiling, which produces legal effects concerning individuals, or similarly significantly affects individuals.
  • Include in your audit where you rely on a score or similar result from automated processing, including profiling, provided by a third party and, if you do, how often you follow that score or result.
  • If your decisions regularly follow the automated processing’s score or result, for whatever reason, then your decision likely falls within Art 22. Audit your legal basis and the information you give to individuals – see the sketch below for one way to quantify how often you follow a third-party score.
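As a starting point for that last audit step, here’s a minimal sketch, with all names and records hypothetical, of tracking how often your final decisions follow a third-party automated score – the ‘draws strongly’ question that Schufa ADM makes central to whether Art 22 is engaged:

```python
# A minimal sketch (all names hypothetical) of quantifying how often your
# final decisions follow a third-party automated score. Schufa ADM makes
# that 'draws strongly' question central to whether Art 22 is engaged.

from dataclasses import dataclass

@dataclass
class DecisionRecord:
    score_implied_refusal: bool   # what the automated score implied
    final_was_refusal: bool       # what you actually decided

def follow_rate(records: list[DecisionRecord]) -> float:
    """Share of final decisions matching the score's implication."""
    matches = sum(
        r.score_implied_refusal == r.final_was_refusal for r in records
    )
    return matches / len(records)

audit_log = [
    DecisionRecord(True, True),
    DecisionRecord(False, False),
    DecisionRecord(True, False),  # a human overrode the score
]
print(f"{follow_rate(audit_log):.0%}")  # 67%
```

If the follow rate is high, it’s hard to argue the score is merely a ‘preparatory act’ rather than the decision itself.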

 

CJEU & Retention (‘Schufa Retention’)

The other Schufa case (joined Cases C‑26/22 and C‑64/22) also dealt with the Art 22 point and came to the same conclusion. But it mostly dealt with retention, so we’ll call it Schufa Retention.

 

Context

As we’ve seen, Schufa creates creditworthiness scores. Part of the data Schufa uses to do this is its own private database of people who have been through insolvency / bankruptcy and the discharge of remaining debts. Schufa builds that database by copying data from public registers of the same information, which are created and maintained under a particular German domestic law. Schufa does this based on the legitimate interests of Schufa itself, its clients, and the public at large.

 

Key question: retention

The case concerned Schufa’s 3-year retention period for its database, which was longer than the 6-month statutory retention period for the public database, but in line with an Art 40 Code of Conduct ‘drawn up in Germany by the association of agencies providing credit information and approved by the competent supervisory authority’.

Before reading on – what’s your thought on this?

Applying the legitimate interest assessment, focussing on the effect on individuals of the information in question and the balancing test, the CJEU decided that Schufa had to delete the data at the same time as the public register.

  • While Schufa, its clients and the public all have legitimate interests, including in the proper functioning of the credit system, ‘the processing of data relating to the granting of a discharge from remaining debts, by a credit information agency, such as the storage, analysis and communication of such data to a third party, constitutes a serious interference with the fundamental rights of the data subject … .’
  • ‘Such data is used as a negative factor when assessing the data subject’s creditworthiness and therefore constitutes sensitive information about his or her private life … . The processing of such data is likely to be considerably detrimental to the interests of the data subject in so far as such disclosure is likely to make it significantly more difficult for him or her to exercise his or her freedoms, particularly where basic needs are concerned.’
  • And ‘the longer the data in question is stored by credit information agencies, the greater the impact on the interests and private life of the data subject and the greater the requirements relating to the lawfulness of the storage of that information’.

While EU law allowed for such public registers, in the public interest, it left the retention period to member states and Germany had decided that 6 months was the correct retention period, balancing the need for a functioning credit system with the impact on individuals of being included in such a register and the individuals’ interest to ‘reenter economic life’. Schufa had to reduce its retention period and the 3-year period in the Art 40 Code of Conduct needed to be amended accordingly.

 

Multiple copies and data minimisation

Both Schufa decisions note that all processing has to comply with the rest of GDPR, including data minimisation. The court in Schufa Retention noted that the CJEU had already decided that ‘the presence of the same personal data in several sources reinforces the interference with the individual’s right to privacy’. It therefore asked the German court to look at whether it was even valid for Schufa to maintain the data in its own database while the same data was available in the public database. (Schufa said it was for speed of response to its clients.)

 

Key takeaways

  • You’re already complying with express retention periods set out in law. You’ll now need to consider retention periods set out in law, and arguably by regulators, even where they’re not maximum durations expressly applicable to you, and have a very good reason if you’re looking to retain for longer – one simple way to flag these is sketched below.
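Here’s that sketch: a minimal way to flag, in a retention register, any dataset kept longer than a statutory period applying to its source data. Field names are hypothetical, not a reflection of how any particular tool works.

```python
# A minimal sketch (field names hypothetical) for flagging retention
# periods that exceed a statutory period applicable to the source data,
# the pattern Schufa Retention warns about.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RetentionEntry:
    dataset: str
    our_retention_months: int
    statutory_months: Optional[int]  # None if no statutory period applies

def flag_excessive(entries: list[RetentionEntry]) -> list[str]:
    """Datasets we keep longer than the statutory period for the source."""
    return [
        e.dataset
        for e in entries
        if e.statutory_months is not None
        and e.our_retention_months > e.statutory_months
    ]

register = [
    RetentionEntry("insolvency_discharges", 36, 6),  # the Schufa pattern
    RetentionEntry("marketing_contacts", 24, None),
]
print(flag_excessive(register))  # ['insolvency_discharges']
```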

 

ICO Enforcement

Bank of Ireland Reprimand

Keeping with the credit industry, the ICO issued a reprimand to Bank of Ireland for sending ‘incorrect outstanding balances on 3,284 customers’ loan accounts to credit reference agencies, organisations that help lenders decide whether to approve financial products’.

Because of the steps the bank took ‘to correct their error, supporting affected customers and reviewing its data-management processes’ the ICO felt a reprimand, not a fine, was appropriate.

Key takeaway

  • We’ve seen this time and again: having good incident response procedures in place, with a trained team, can really impact regulatory risk.

MoD Fine

On 13 December the ICO fined the Ministry of Defence £350,000 for a breach caused by an email sent to 245 people with their addresses entered in the ‘To’ field rather than ‘Bcc’, so each recipient could see the others. The email was ‘sent by the team in charge of the UK’s Afghan Relocations and Assistance Policy (ARAP), which is responsible for assisting the relocation of Afghan citizens who worked for or with the UK Government in Afghanistan. The data disclosed, should it have fallen into the hands of the Taliban, could have resulted in a threat to life.’

Incorrectly addressed messages are the number one breach reported to the ICO.

Key takeaway

As the ICO notes, ‘organisations should use bulk email services, mail merge, or secure data transfer services when sending any sensitive personal information electronically’.
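If your team sends such messages from scripts rather than a bulk email service, the safer pattern is one message per recipient, so no address is ever exposed to the others. A minimal sketch, with placeholder host, sender and recipients:

```python
# A minimal sketch of the safer pattern: one message per recipient, so no
# recipient ever sees another's address. SMTP host, sender and addresses
# are placeholders.

import smtplib
from email.message import EmailMessage

recipients = ["a@example.com", "b@example.com"]  # hypothetical list

with smtplib.SMTP("smtp.example.com") as smtp:
    for addr in recipients:
        msg = EmailMessage()
        msg["From"] = "sender@example.com"
        msg["To"] = addr  # only this one recipient's address is used
        msg["Subject"] = "Update"
        msg.set_content("Your update goes here.")
        smtp.send_message(msg)
```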

 

NHS Fife Reprimand

On 28 November the ICO issued a reprimand to NHS Fife because a random stranger was able to walk into a ward, ‘was handed a document containing personal information of 14 people and assisted with administering care to one patient’! This feels like way more than a GDPR event, but GDPR is our focus.

Some very clear takeaways here. CCTV had been accidentally turned off, and the ICO concluded that ‘NHS Fife did not have appropriate security measures for personal information, as well as low staff training rates. Following this incident, NHS Fife introduced new measures such as a system for documents containing patient data to be signed in and out, as well as updated identification processes’.

Key takeaways

The ICO’s recommendations speak for themselves:

  1. ‘Improving the overall training rate, … [f]or example, refresher data protection training should be provided to all staff more frequently and underpinned by written guidance on security for employees.
  2. Developing guidance or a policy in relation to formal ID verification.
  3. Reviewing all policies.
  4. Revisiting the data breach reporting process and ensuring relevant personal data breaches are reported within 72 hours.’

 

NOYB v X / Twitter and NOYB v EC: GDPR + DSA

NOYB have filed a complaint against each of X and the European Commission based on the same context: the use of special category personal data to target ads on X. Although X’s own guidelines say ads should not be based on political views and religious beliefs, that’s what happened.

Interestingly, NOYB allege this is contrary both to GDPR (the Meta cases suggest so) and to the new EU Digital Services Act (DSA).

Meanwhile, the EC has launched an investigation into X / Twitter for infringement of other areas of the DSA, ironically announced on X…

 

ISO 42001 – New ISO Standard for AI

Major AI news – December saw the publication of a new ISO standard, ISO 42001.

We’re used to an ISMS (Information Security Management System) under ISO 27001. Now there’s an AIMS (Artificial Intelligence Management System). Destined to be very influential, the standard is described by ISO as usable by any size of organisation and ‘intended for use by an organization providing or using products or services that utilize AI systems’. The standard specifies requirements and provides guidance ‘for establishing, implementing, maintaining and continually improving an AI (artificial intelligence) management system within the context of an organization’.

 

Guidelines for Secure AI System Development

The UK’s NCSC, together with the NSA, FBI and similar organisations from around the world, has published the Guidelines for Secure AI System Development. While aimed primarily at ‘providers of AI systems who are using models hosted by an organisation, or are using external [APIs]’, the Guidelines are helpful for anyone looking at AI to ‘make informed decisions about’ four key phases: ‘the design, development, deployment and operation’ of the AI system.

 

EDPB Art 5(3) Guidelines on Cookies etc

On 14 November, the EDPB adopted Guidelines 2/2023 on the Technical Scope of Art. 5(3) of the ePrivacy Directive, covering cookies and similar technologies, and suffice it to say it started a conversation! Only 13 pages long but immediately controversial. If you want to read some of the arguments, Peter Craddock (a very good person to follow on LinkedIn) posted two articles, the first on the EDPB possibly over-reaching its powers and the second on ‘overbroad notions and regulator activism’.

  • The controversy is over whether and how the EDPB is extending the ePrivacy Directive’s rules from 15 to 20 years ago (which were always about more than just cookies) to more modern methods, in particular via the wording in Art 5(3) on writing or reading information.
  • Criticism revolves around arguments that even serving a static ad involves writing to a device, and that device information is sent automatically, so the Guidelines would extend Art 5(3) to pretty well every online interaction, which couldn’t be correct (a toy sketch of the ‘sent automatically’ point follows this list). On the other hand, and keeping in mind the CJEU’s reminder in Schufa ADM on how to interpret EU law, one could argue that the context of Art 5(3), with its recitals, is clearly focussed on obtaining information from the individual’s private sphere for anything other than strictly necessary purposes.
  • The consultation period ends 18 January 2024, so do get stuck in.
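To make the critics’ ‘sent automatically’ point concrete (as flagged in the second bullet above), here’s a toy sketch showing that even a plain request for a static resource carries device information by default. The URL and User-Agent are placeholders:

```python
# A toy sketch of the 'sent automatically' argument: fetching even a
# static resource transmits device information (request headers) as part
# of HTTP itself. URL and User-Agent are placeholders.

import urllib.request

req = urllib.request.Request(
    "https://example.com/static-ad.png",
    headers={"User-Agent": "ExampleBrowser/1.0"},
)
# Headers like User-Agent (plus Host, added on send) leave the device
# with every request, which is why critics say a broad reading of
# Art 5(3) would catch almost every online interaction.
print(req.header_items())  # [('User-agent', 'ExampleBrowser/1.0')]
```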

 

Have a great holiday season, and here’s to a happy and healthy 2024 for all!

 

Simplify your Compliance with Keepabl

Need to upgrade (or even establish) your RoPA into something that’s easy to create and maintain? Need automated Breach and Rights management?

Do contact us to see for yourself, book your demo now!

 

