Long ago, in the mists of the 90s, the Privacy Impact Assessment or PIA was born.
Since then, GDPR brought in the DPIA and the LIA, Schrems II effectively brought in the TIA, and more recently the EU AI Act (EU AIA) brought in the FRIA and more. If you’re new to Privacy and AI, this blog is for you. We’ll set out what they all mean and how they do – or don’t – relate to each other, and we’ll bunch them into easy categories:
OK, so at one level all these assessments look at the impact of something on someone, but in Privacy parlance these look at the impact on individuals from your processing of their data. So let’s start at the top, with the Privacy Impact Assessment or PIA.
The PIA is the big one: Privacy’s equivalent of risk assessments in Security. It uses ‘impact’ instead of ‘risk’, to highlight that we’re looking at the impact on individuals.
PIAs go by many names, and applicable laws will have different rules on when they need to be done and what they’re called. For example:
You’re assessing the impact on individuals from your processing of their personal data. One caution about that name, PIA:
It’s worth saying again: the PIA is focussed on the impact on the individuals concerned, not on your organisation. This is the major difference between Privacy and Security assessments for those coming to Privacy from other corporate compliance arenas.
As you can see above, DPIA isn’t a global term and means different things (or nothing) depending on the applicable law. If you’re in the EU or UK, it’s clearly the specified PIA when there’s a likely high risk. But if you’re in Singapore or Brazil, it’s still a PIA, just with different rules and triggers. Always check your applicable law.
If you’re covered by a rule with a trigger, such as the risk-based rules in GDPR or CCPA, how do you know when you need to do a full PIA?
Enter the Threshold Assessment or Threshold PIA.
The Threshold Assessment is still a PIA, but a lighter-weight one: it checks whether you’ve cleared the hurdle set out in your applicable law and so need to carry out a full PIA.
The LIA, or Legitimate Interest Assessment, is the easiest one to understand, and to carry out.
The LIA might seem like a GDPR-specific assessment, only relevant when you’re looking to rely on legitimate interests as the legal basis for your processing (instead of, for example, consent or legal obligation). But legitimate interests is a common theme in Privacy laws worldwide, whether or not they call for an LIA.
An LIA is how you work out if you can rely on legitimate interests as the legal basis for your processing.
Again, GDPR does not explicitly say you have to do an LIA, but the balancing test is set out in Art 6(1)(f), and summarised by the UK ICO as:
1. The purpose test (identify the legitimate interest);
2. The necessity test (consider if the processing is necessary); and
3. The balancing test (consider the individual’s interests).
As GDPR notes, you’re looking to see if the legitimate interests ‘are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child’.
Transfer assessments really came to practical significance with the 2020 CJEU decision in Schrems II, which struck down Privacy Shield, the adequacy decision for US entities under EU GDPR.
TIAs, or Transfer Impact Assessments, in the EEA (and generally around the world apart from the UK, which calls them TRAs or Transfer Risk Assessments) look at the impact on individuals from your transferring their personal data out of the jurisdiction to a third country (or, more rarely, an organisation not subject to your jurisdiction, such as the UN or INTERPOL, called ‘international organisations’ in GDPR).
The idea is that individuals’ personal data should not lose any protection because of the transfer.
Transfer Assessments are typically triggered when you take on a new supplier and they, or a sub-processor in their supply chain, are not in your jurisdiction. So they’re often carried out ‘for’ a particular relationship with a particular third party. Any PIA / DPIA on that processing should surface the need for a Transfer Assessment.
You might hear TPRA for Third Party Risk Assessment, though that’s used more for the full vendor due diligence, or the Security aspects, which were the cornerstone of onboarding and reviewing suppliers long before Privacy assessments butted in.
There’s clearly Privacy due diligence to be done when using third party vendors, based on Art 28 GDPR. For example, where is the data hosted, is there a transfer, who are the sub-processors, how’s the security?
Those can be in a separate Privacy due diligence questionnaire, or incorporated into the overall TPRA.
And, given the use of personal data in AI use cases, these are all relevant to AI Assessments.
Even before the EU AIA, practitioners were carrying out assessments on AI, incorporating AI into existing assessments such as DPIAs, LIAs and TPRAs. And as the major AI players are in the USA, using AI also often introduces a transfer triggering a TIA.
This makes sense: you’re likely getting AI from a third party vendor, you’re introducing a new way of doing business or processing personal data, and you’re sending your inputs somewhere – well, you need to pin that down.
The EU AIA introduced two new assessments, both limited (at least at the start) to high-risk AI systems: the FRIA, and the CA. We’re more interested in the FRIA here as its targets and purpose are very similar to those we’ve looked at above.
The FRIA is the Fundamental Rights Impact Assessment required to be carried out by certain persons under Art 27 of the EU AIA for certain high-risk AI systems.
Again, it uses the word ‘impact’, which stresses the focus is on the protection of fundamental rights of individuals, as Recital 96 confirms:
The aim of the fundamental rights impact assessment is for the deployer to identify the specific risks to the rights of individuals or groups of individuals likely to be affected, identify measures to be taken in the case of a materialisation of those risks.
The FRIA’s contents are set out in Art 27 of the EU AIA and include items mirroring what we’ve discussed above:
The EU AIA expressly cross-refers to the need for DPIAs, for example noting in Art 26, on the Obligations of deployers of high-risk AI systems, that they can use the information provided under Art 13, on Transparency and provision of information to deployers, to help in carrying out their DPIA.
Conformity assessments, and the resulting CE marks, already exist to show the compliance of various products with applicable regulations.
Likewise, the AI CA under Section 4, and Art 43 in particular, is there to give comfort that high-risk AI systems comply with the requirements of the EU AIA. The resulting CE marks can be physical or digital.
Assessments have always been part of Keepabl: prompted, uploaded or linked, and reported on. Now you can actually do your Assessments in Keepabl!
Do contact us for your demo and your 14-Day Free Trial.