Compliance issues around anonymised data

Under the previous Data Protection Act 1998, anonymised data fell outside the scope of compliance. In the world of the Data Protection Act 2018, where living individuals have an ever-growing data footprint, when can you claim that data really are anonymised?

Anonymous data are not personal data. In the words of the GDPR, data are anonymous where identification is not achievable by any “means reasonably likely to be used” (Recital 26). Whether data really are anonymised therefore depends on who can access them and the context in which those people work. The environment in which data exist helps determine whether they are anonymised: a freely downloadable dataset is harder to anonymise than one held in a secure archive for use only by approved and trained specialists (Elliot et al. 2016).

Data that have been pseudonymised are sometimes spoken of as if they were anonymised. When the GDPR discusses pseudonymisation, it does so as a privacy-enhancing technique for personal data (Recital 28 and elsewhere). Pseudonymised data might not be directly identifiable (Article 4(5)) but can still be indirectly identifiable (Recital 26). Simply replacing identifiable data with pseudonyms does not make data anonymised; records are not anonymous to anyone who knows the method by which the pseudonyms were created, can refer back to the identifiable data, or can find combinations of values unique to individuals. For example, imagine an entry in an insurance claims dataset about a collision involving a certain make of four-by-four driven by a certain elderly male. That description contains no specifics, yet you might hazard a fair guess as to whose insurance claim it could be, and with an age, location or date you could judge whether your guess was correct by reference to recent British media coverage. The sketch below illustrates both weaknesses.
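
As a minimal sketch (with hypothetical data, column names and a customer list of my own invention), the Python below shows why hashed pseudonyms alone do not anonymise: anyone who knows the hashing method and can enumerate plausible identifiers can reverse the pseudonyms with a dictionary attack, and a rare combination of attributes can single someone out without touching the pseudonym at all.

```python
import hashlib

def pseudonymise(name: str) -> str:
    """Replace a direct identifier with a deterministic pseudonym."""
    return hashlib.sha256(name.encode("utf-8")).hexdigest()[:12]

# Hypothetical claims records: the names are gone, but the data are only
# pseudonymised, not anonymised.
claims = [
    {"claimant": pseudonymise("A. Example"), "vehicle": "four-by-four", "age_band": "90+"},
    {"claimant": pseudonymise("B. Sample"),  "vehicle": "hatchback",    "age_band": "30-39"},
]

# Anyone who knows the method and can enumerate plausible identifiers
# (e.g. a customer list) can reverse the pseudonyms:
known_customers = ["A. Example", "B. Sample"]
lookup = {pseudonymise(name): name for name in known_customers}
for record in claims:
    print(lookup[record["claimant"]], record)  # every record re-identified

# Equally, a rare combination of attributes ("four-by-four", "90+") may be
# unique enough to identify an individual without reversing anything.
```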

Pseudonymisation is only one technique within a broader anonymisation process (for others, see the ICO (2012) Code of Practice on Anonymisation). Anonymisation involves further processing beyond pseudonymisation, for example masking, aggregating, and placing contractual limits on data processing; a simple aggregation-and-suppression sketch follows below. Such processing to produce anonymous data must itself comply with data protection law. Processors can therefore only perform anonymisation under contract from the controller(s). Controllers must undertake a risk assessment to manage any high risks to data subjects, such as re-identification and the disclosure of otherwise confidential details, and must also be transparent with data subjects about the processing that is taking place.
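
As another minimal sketch (hypothetical records, and a simple k-anonymity-style suppression rule chosen for illustration rather than drawn from the guidance cited below), this shows masking and aggregation in practice: exact ages are generalised into ten-year bands, and any combination of quasi-identifiers shared by fewer than k records is withheld from release.

```python
from collections import Counter

K = 3  # minimum group size before a combination is considered safe to release

# Hypothetical records; real anonymisation would also handle direct
# identifiers and sensitive attributes, which are omitted here.
records = [
    {"age": 34, "postcode": "M1 4"},
    {"age": 36, "postcode": "M1 4"},
    {"age": 38, "postcode": "M1 4"},
    {"age": 91, "postcode": "SW1A"},  # a unique combination, so it is suppressed
]

def age_band(age: int) -> str:
    """Aggregate an exact age into a ten-year band (masking the precise value)."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

# Generalise the quasi-identifiers, then count each resulting combination.
generalised = [(age_band(r["age"]), r["postcode"]) for r in records]
counts = Counter(generalised)

# Release only combinations shared by at least K records.
released = [
    {"age_band": band, "postcode": pc}
    for (band, pc) in generalised
    if counts[(band, pc)] >= K
]
print(released)  # three banded M1 4 records; the unique SW1A record is withheld
```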

Guidance on these matters is available from two freely downloadable books:

  1. For how to assess anonymisation processes, perhaps as part of your Data Protection Impact Assessment, see the Anonymisation Decision-making Framework written by Mark Elliot and colleagues (2016).
  2. A new handbook has also been developed for assessing whether the results of data analysis, such as graphs and tables, are anonymous (Greci et al. 2019).
How FourthLine can help:

FourthLine is working with a number of financial services firms to help them with Operational Resilience enablement and Outsourcing and Third-Party Risk Management, through a mixture of end-to-end consulting and resourcing options.

March 28, 2019
Jakes de Kock
Jakes is FourthLine's Marketing Director. He specialises in omni-channel, tech-enabled inbound marketing strategies to drive business growth within the B2B sector.