This blog series is all about understanding risk, from the common themes and underlying principles, to specifics in Security, Privacy and AI that may apply to your organisation.
We’ll look at what risk actually is, starting with general definitions and then seeing how more specific definitions take that forwards.
First, why does risk matter?
As the UK government states:
Risk is a part of everything we do. We all manage risk – often without realising it – every day. There is significant value in the effective management of risk.
Underlining how important it is to accept this, the UK government goes so far as to state, in its Risk Appetite Guidance Note: ‘Public sector organisations cannot be culturally risk averse and be successful’.
The UK ICO’s Risk Management Policy and Appetite Statement inspires with a powerful statement: ‘Effective risk management is not about avoiding all risk; with an effective risk management culture and strengthened understanding of risk management we may decide to take more risks in some areas of the organisation. This will always be on an informed basis, ensuring that the benefits of the risk-taking enable us to achieve our ambitions and help us to innovate as effectively and cost efficiently as possible.’
It’s important to recognise risk is part of everyday life, and managing that risk is key to success.
There is no single, universal definition of risk. There is no overarching law, applicable to all areas of life, that says ‘risk means …’. Don’t worry: as we’ll see, this is not a problem!
Before we drill down a level, let’s stay right at the top and adopt the high-level definition of ‘risk’ used by the UK government, the EU and ISO standards.
The Orange Book is the UK government’s bible on risk management, and defines risk as:
the effect of uncertainty on objectives
EU and ISO standards, such as ISO 31000 and ISO 27000, use the same definition, as does NHS England. This is as close to a universal definition as you’re likely to find. There are two things to say here, and we’ll come to them below.
Let’s look at some alternative definitions in use today.
These examples define risk in more practical ways, evolving away from the above high-level definition to various degrees:
These more practical definitions are still high-level, and can be applied to any vertical, field or industry. They’ve just made the definition more understandable by adding the two factors you use to determine the level of risk, or risk rating: the likelihood of it happening, and the severity of the impact if it does.
It’s totally fine to incorporate likelihood and impact into your definition of risk.
Indeed, we recommend this type of definition in your workplace policies and procedures, and we’ll see that laws, regulations and standards all use this more ‘applied’ type of definition.
A ‘risk’ and its ‘risk rating’ are separate things.
Even, or particularly, if you use likelihood and impact in your definition, it’s important to remember that a risk is one thing, and how urgently it needs attention is another. In practice, you’re most likely to calculate that level of risk, or risk rating, using likelihood and impact.
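To make that distinction concrete, here’s a minimal sketch of a likelihood-times-impact calculation. The 5 x 5 scale, scoring and bands below are illustrative assumptions, not a prescribed methodology; use whatever your own risk management process defines.

```python
# Minimal sketch: a simple 5 x 5 risk-rating calculation.
# The scales, bands and thresholds are illustrative assumptions only,
# not a prescribed methodology.

def risk_rating(likelihood: int, impact: int) -> tuple[int, str]:
    """Return (score, band) for likelihood and impact each scored 1-5."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    score = likelihood * impact          # 1 (lowest) to 25 (highest)
    if score >= 17:
        band = "Critical"
    elif score >= 10:
        band = "High"
    elif score >= 5:
        band = "Medium"
    else:
        band = "Low"
    return score, band

# The risk itself ("a supplier loses our customer data") is one thing;
# its rating is a separate, calculated attribute:
print(risk_rating(likelihood=4, impact=5))  # (20, 'Critical')
```

The point is simply that the risk (the thing that might happen) and its rating (how urgent it is) are recorded and reasoned about separately.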
Now let’s look at risk in our 3 key domains of Privacy, Security, and AI.
UK and EU GDPRs are the same on risk, as UK GDPR basically just crossed out ‘the EU’ and ‘member state’ and wrote in ‘the UK’.
Risk in GDPR is therefore the risk to the rights and freedoms of an individual. And we look at risk’s likelihood and severity (or impact).
NIST has the same view as GDPR. For example, in its Privacy Framework FAQs, NIST clarifies:
‘The Privacy Framework is centered on protecting individuals’ privacy (whether singly or at the group or societal level) as a means of safeguarding important values around human autonomy and dignity. Protecting business information can be addressed through cybersecurity safeguards.’
Privacy law risk is risk to the individual, not risk to the organisation.
This is a major mindset shift, as most risk programs look at risk to the organisation. There is of course risk to the organisation; for example, you could suffer a data breach that leads to fines and customer loss. But Privacy laws worldwide only care about the risk to individuals from your processing their personal data / PII. (On the bright side, the laws are also concerned with your organisation being able to process and share data in order to do business. That’s the balance that runs throughout, and it’s why those rules are there: to enable processing in a way that protects rights.)
In Privacy, what you’re protecting is broad and varied, as are the harms that could befall individuals if a risk materialises.
GDPR speaks of the rights and freedoms of individuals, and gives a range of examples of harm including ‘discrimination, identity theft or fraud, financial loss, damage to the reputation, loss of confidentiality of personal data protected by professional secrecy, unauthorised reversal of pseudonymisation, or any other significant economic or social disadvantage’. NIST speaks of important values around human autonomy and dignity, and also gives a broad range of examples of harm.
Turning to Security, the EU’s NIS2 Directive is a good example. It’s focussed on cybersecurity and sets out ‘cybersecurity risk-management measures and reporting obligations’ for medium and large entities in critical sectors, and it defines risk as the potential for loss or disruption caused by an incident, expressed as a combination of the magnitude of that loss or disruption and the likelihood of the incident occurring. You can see the definition is highly tailored to the law’s purpose, and it’s the same in pretty well every law.
(We’ve just raised two things outside the scope of this blog that we’ll look at later.)
Turning from laws to standards, and from the EU to the USA, NIST defines ‘Information system-related security risks’ as ‘those risks that arise from the loss of confidentiality, integrity, or availability of information or information systems and reflect the potential adverse impacts to organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, other organizations, and the Nation.’
Going global to the ISO standards, ISO 27001 is focussed on information security and preserving the confidentiality, integrity and availability of information. So, naturally, across that ‘27000 family of standards’, risk is looked at as something that threatens information security.
Security is about any information, not just personal data.
Security standards tend to focus on confidential information and intellectual property (there are separate standards on Privacy). This is all context for the next key point …
Security risk is risk to the organisation, not the individual.
Sure, there may well be risk to individuals from non-compliance, but Security risk programs run by organisations tend to focus on risk to the organisation. Security laws and Security standards are about protecting any information, and the risk is risk to the organisation, not risk to any individuals the data refers to.
Just as GDPR tries to strike a balance between ‘the protection of natural persons with regard to the processing of personal data and on the free movement of such data’, the EU AI Act is clear, in Art 1, about the balance it attempts to strike:
The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.
And, as the EU lays out in its summary, the EU AI Act has an in-built hierarchy of risks, with escalating obligations as the risk increases: unacceptable-risk AI practices are banned outright, high-risk systems face strict obligations, limited-risk systems carry transparency duties, and minimal-risk systems face no new obligations under the Act.
AI risk is broader than risk in both Privacy and Security.
It’s not just risk to individuals (Privacy) or to the organisation (Security). It’s even risk to democracy and the rule of law!
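To make the hierarchy concrete, here’s a rough, illustrative sketch of the tiers as a simple data structure. The wording of the obligations is ours, not the Act’s, and classifying a real system needs proper legal analysis.

```python
# Illustrative sketch only: a rough mapping of the EU AI Act's risk tiers
# to the broad flavour of obligation attached to each. The wording is
# ours, not the Act's.

AI_ACT_TIERS = {
    "unacceptable": "Prohibited practices that cannot be placed on the market",
    "high": "Strict obligations: risk management, data governance, documentation, human oversight",
    "limited": "Transparency obligations, e.g. telling people they are interacting with AI",
    "minimal": "No new obligations under the Act",
}

for tier, obligation in AI_ACT_TIERS.items():
    print(f"{tier:>12}: {obligation}")
```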
You always need a way to rank, score or prioritise risk so you can deal with the most urgent first.
This brings us onto the level of risk, or risk rating, which is part of your organisation’s risk management process, and the methodology within that, which we’ll look at next.
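As a taster of what that looks like in practice, here’s a minimal sketch of prioritising a small risk register by rating, so the most urgent risks come first. The risks, field names and scores are illustrative assumptions, not Keepabl’s data model or any standard’s methodology.

```python
# Minimal sketch of prioritising a risk register so the most urgent
# risks are dealt with first. The risks, scores and field names are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # scored 1-5
    impact: int      # scored 1-5

    @property
    def rating(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Phishing leads to account takeover", likelihood=4, impact=4),
    Risk("AI model produces discriminatory output", likelihood=2, impact=5),
    Risk("Paper records left in meeting room", likelihood=3, impact=2),
]

# Highest rating first: this is the order in which to review and treat.
for risk in sorted(register, key=lambda r: r.rating, reverse=True):
    print(f"{risk.rating:>2}  {risk.name}")
```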
Book your demo to see how Keepabl makes your Assessments super smooth and lets you implement your risk management process, tailored to your methodology.