You may have noticed we skipped our July newsletter covering June (apologies!), so here's a summer bumper edition. As always, there's plenty to talk about, with the Privacy triumvirate of Transfers, Meta and AI in the headlines as ever. And we'll look at our new Rights management module.
But first, the excellent news on the adequacy decision for the EU-US DPF and the practical benefits for you and your organisation!
On 10 July 2023, three years minus six days after the Schrems II decision, the European Commission released its adequacy decision for the EU-US Data Privacy Framework. Given the volume of transfers driven by the use of US cloud providers of all types, this is amazing news for everyone operationalising Privacy.
The DPF fixes the CJEU's two Schrems II complaints with an Executive Order that limits what US federal agencies can ask for, and introduces a court and appeal process for non-US individuals to seek redress.
As the rules and Principles in Privacy Shield weren’t really criticised in Schrems II, they’ve just been slightly iterated and are now the rules and Principles in the new DPF. There are 7 main Principles, 16 Supplemental Principles and an Annex 1 focussed on the arbitration model.
An interesting wrinkle is that the DPF works on mutuality: the US also has to determine that the other jurisdiction provides protection from government surveillance, and redress, for US residents.
The DPF decision means that EEA organisations can now transfer personal data to organisations that are self-certified to the DPF and appear on the DPF List without needing to do a Transfer Impact Assessment (or TIA) under EU GDPR.
Don’t listen to anyone saying you also need SCCs to transfer from the EEA to the USA, or that it’s still illegal – you do not and it isn’t, in each case as long as your US importer is self-certified to the DPF and named on the List. Like Google.
Because the UK GDPR is 99% identical to the EU GDPR, the UK and US have simply created an extension to the DPF (a so-called ‘UK-US Bridge’) to allow for transfers from the UK under UK GDPR. Why reinvent the wheel?
The UK-US Bridge is to the DPF what the UK Addendum is to the EU SCCs: it adopts the EU-US DPF in its entirety and says ‘we’ll have that, but replace the EU with the UK and EDPB with the ICO’.
So the rules and Principles in the UK-US Bridge are the EU-US DPF rules and Principles; you just need to read 'UK' and 'ICO' for 'EU' and 'EDPB'. But UK organisations can't rely on the UK-US Bridge yet: we need to wait until the UK issues its own adequacy decision for it. Brexit benefits …
While the UK has the GDPR, albeit the UK GDPR, Switzerland's law is not the GDPR. So Switzerland has its own Swiss-US DPF. Again, like the UK-US Bridge, it takes the EU-US DPF rules and Principles and makes minimal edits, to cater for the fact that Swiss law isn't the GDPR (for example, in the definition of special categories or sensitive personal data) and to refer to Switzerland and the Swiss regulator rather than the EEA and the EDPB.
Again, Swiss organisations will need to wait for Switzerland to issue its own adequacy decision for the Swiss-US DPF.
An excellent impact of the DPF is that the measures put in place by the Executive Order behind it apply to any transfer to the USA from the EEA.
The European Commission themselves determined that the limitation on government surveillance, and the redress for EEA data subjects, brought in by the Executive Order, are sufficient measures to address Schrems II concerns on those two areas. So you can too.
You’ll still need to carry out all other GDPR requirements, for example on processors, but on the Schrems II concerns about surveillance and redress, you can rely on the adequacy decision finding that the DPF resolves them.
NOYB have already stated they’ll challenge the DPF as quickly as possible. The US and EC teams are confident they dealt sufficiently with the CJEU’s two main criticisms. So this will be one to watch but, as no-one but the EC itself and the CJEU can take down an adequacy decision, you can rely for now on the EU-US DPF for your EEA-US transfers and the UK and Swiss equivalents when they arrive, to US entities named in the relevant DPF List. And you do not then need a TIA or SCCs.
We’ve written before about the €1.2bn fine reluctantly imposed on Meta by Ireland’s DPC, after Meta claimed contract as a legal basis for behavioural advertising (often called ‘forced consent’ as there’s no choice).
Meta then changed to legitimate interests as their legal basis for targeted, behavioural advertising, and this was shot down in the Competition law CJEU case we’ll see below.
We’ll see that it’s becoming clearer each month that behavioural advertising needs consent; other legal bases won’t do.
The 4 July 2023 CJEU decision against Meta is particularly interesting because this was a Competition law case that involved Data Protection, resulting from a decision by a competition authority in Germany, not a Data Protection law case resulting from a Privacy regulator.
‘Off-Facebook’ data includes personal data collected on Facebook users when they use other Meta properties such as Instagram and WhatsApp, and personal data collected about Facebook users when they visit third-party sites and apps, through the use of programming interfaces, cookies and similar technologies.
So this decision wasn’t simply about targeted advertising, it was about collecting off-Facebook data and combining that with on-Facebook data to analyse for personalised advertising.
The CJEU made a strong statement in para 117 of the judgement, which is arguably obiter (in legal jargon) as it’s not directly about the processing at issue in the case. The CJEU clearly stated that behavioural advertising, at least on social media networks, can only happen with consent:
In this regard, it is important to note that, despite the fact that the services of an online social network such as Facebook are free of charge, the user of that network cannot reasonably expect that the operator of the social network will process that user’s personal data, without his or her consent, for the purposes of personalised advertising. In those circumstances, it must be held that the interests and fundamental rights of such a user override the interest of that operator in such personalised advertising by which it finances its activity, with the result that the processing by that operator for such purposes cannot fall within the scope of [legitimate interests].
And indeed, Meta has announced that it is moving from legitimate interests (having already moved from contract) to consent.
In light of the DPC order banning the use of contract, and the CJEU banning the use of legitimate interests, for behavioural advertising, Norway’s data protection regulator has ordered a three-month ban, effective 4 August 2023, on behavioural advertising to Norway’s Facebook users until Meta changes its system to consent.
It’s worth quoting the main paragraph at length; we’ve split it up to make it easier to read:
‘… this letter imposes a temporary ban on Meta’s processing of personal data of data subjects in Norway for targeting ads on the basis of observed behaviour (hereinafter “Behavioural Advertising”) for which Meta relies on [contract or legitimate interests]. For the avoidance of doubt, Behavioural Advertising includes targeting ads on the basis of inferences drawn from observed behaviour as well as on the basis of data subjects’ movements, estimated location and how data subjects interact with ads and user-generated content. This definition is in line with our understanding of the scope of the IE Decisions.
Please note the limitations of the scope of our order. The order does not in any way ban Meta from offering the Services in Norway, nor does it preclude Meta from processing personal data for advertisement purposes in general. Only Behavioural Advertising as defined above is affected to the extent that it is based on 6(1)(b) or 6(1)(f) GDPR.
Practices such as targeting ads based on information that data subjects have provided in the “About” section of their user profile, or generalised advertising, is out of scope of this order. For example, the order does not itself prevent an advertising campaign on Facebook which, based on profile bio information, targets ads towards females between 30 and 40 years of age residing in Oslo and who have studied engineering.’
We’re excited to announce that Rights have come to Keepabl! We’ve always supported you with reports on where you process personal data and on which data subjects, but we now have a full-blown Rights module integrated within Keepabl.
We’re calling it Rights not Data Subject Rights because it’s a global module, not restricted to GDPR. And we’ve moved to calling people Individuals instead of Data Subjects to reduce jargon.
And of course, you can always submit Rights directly into Keepabl, and control who sees what with tailored My Rights dashboards for your Users and the Rights Log for Admins.
And once a Right is entered into Keepabl, instant email alerts go to you and your team, and our solution leads you through the process of managing each Right.
Come see for yourself, book your demo now!
On 21 July 2023, the White House announced that President Biden had convened ‘seven leading AI companies at the White House today – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to announce that the Biden-Harris Administration has secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.’
You’ll recall that the EU is moving forward to regulate AI, whereas the UK and US are more laissez-faire, so as not to slow down advances and to remain competitive locations for AI development.
The three principles are safety, security and trust.
These limited principles seem blindingly obvious. They also don’t include being a good actor when training and creating AI, such as not infringing Privacy laws when scraping data to train models, nor infringement of copyright and other IP in the process.
The White House AI Commitment came after President Biden released the Blueprint for an AI Bill of Rights, a much fuller document built around five principles: Safe and Effective Systems; Algorithmic Discrimination Protections; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration and Fallback.
There are a few IP and competition investigations into OpenAI, but you can see the overlap with Privacy principles and the automated-processing rules in GDPR, which is why Privacy is at the forefront of enforcement and investigation on AI.
The AI Bill of Rights came after the release of NIST’s AI Risk Management Framework and more.
All of this does demonstrate how AI is being treated differently by the US (and the UK is following its lead), even though current laws already apply.
As the IAPP reports, this was reiterated back in April at the IAPP Global Privacy Summit 2023 by the US FTC’s Commissioner Alvaro Bedoya, who noted:
“First, generative AI is regulated. Second, much of that law is focused on impacts to regular people. Not experts, regular people. Third, some of that law demands explanations. ‘Unpredictability’ is rarely a defense. And fourth, looking ahead, regulators and society at large will need companies to do much more to be transparent and accountable.”
Subscribe to our newsletter to get these practical insights into your inbox each month (ok, so we missed July…)
We’re about to build our DSR solution, integrated within our award-winning SaaS solution – and we want to hear from you first! Take our 9-question survey to become a ‘Roadmapper’…