Nine takeaways for employment lawyers from the ICO’s recent audit outcomes report on AI in recruitment

On 6 November 2024, the Information Commissioner’s Office (ICO) published a detailed audit outcomes report called AI Tools in Recruitment. The ICO audited developers and providers of AI recruitment tools (with their consent). This has provided unprecedented insight into how AI is currently being deployed in this space, information which is generally jealously guarded. This perspective has allowed the ICO to make important, sector-specific recommendations. It is essential reading for employment lawyers. In this blog we highlight nine takeaways for lawyers to discuss with their employer clients.

Takeaway One: Recruiters need to maintain strong oversight over AI tools

The ICO explains that a recruiter will likely be a “controller” under data protection legislation (either solely or jointly with the AI provider) in respect of any candidate information that it passes to an AI provider.

The ICO explains that recruiters should ensure that their contracts with AI providers contain comprehensive instructions about the specific data which should be processed, the means and purposes of processing, the output required and the minimum safeguards needed to protect personal information (pages 8 to 9). The ICO further advises that recruiters should periodically check that AI providers are complying with these instructions and are not sharing or processing personal information for other, additional purposes.

Takeaway Two: Some AI providers are scraping the internet for too much personal information

The ICO found that, generally speaking, AI providers limited the data they collected about people when helping with recruitment tasks to, for example, a person’s name, contact information, career experience, skills and qualifications (pages 13 to 14). However, some AI providers were also collecting and storing information gathered from public job networking sites, such as photographs, which was not needed for recruitment. The ICO suggested that this offended the data minimisation and purpose limitation principles. This is something which recruiters need to address with AI providers.

Takeaway Three: Some AI providers are repurposing personal data inappropriately

The ICO found that some AI providers scraped the internet to help with recruitment but then repurposed the candidate personal data to train, test and maintain their own AI tools (page 14). The ICO commented that in many cases the providers could not demonstrate that this was compatible with data protection laws. The ICO indicated that recruiters should seek contractual undertakings from the AI providers they engage to the effect that this will not happen (page 15).

Takeaway Four: Some AI providers are retaining personal data for too long

The ICO found that several AI providers maintained a large database of potential candidate profiles with personal data which they intended to retain indefinitely with no attempt to remove data that was inaccurate, unnecessary or out of date (page 15). The ICO noted that this was unlikely to be lawful. The ICO explained that recruiters should require contractual undertakings to stop this happening (page 16).

Takeaway Five: Some AI tools are not audited appropriately for accuracy

The ICO found that the majority of AI providers tested accuracy, and repeated accuracy testing, especially when there were changes or updates to the system (page 21). But this was not universal. The ICO highlighted one example where the AI provider had not formally assessed accuracy at all and took the view that “at least better than random” was an appropriate reason to process personal data. The ICO comments, with evident restraint, that: “However, being ‘at least better than random’ would usually not be sufficient to comply with data protection law, where AI actively makes recruitment decisions without human intervention. In these cases we would recommend that providers must assess and monitor accuracy issues. The AI should reach the target accuracy level before processing personal data … Recruiters should not rely on inaccurate AI to make decisions alone”.

Takeaway Six: Some AI providers (still) don’t understand discrimination law

The ICO found that most AI providers understood that there was a risk of bias from their tools (page 18). However, the ICO found that some AI providers had failed to ensure that their training datasets were diverse and representative of the relevant population. Since the risks of unrepresentative datasets are now well-documented, this finding is surprising.

Perhaps more worrying, though, is that the ICO found that when AI providers measured bias they “usually” deployed an adverse impact analysis methodology which “in many cases” involved the “four fifths rule” (page 21). This is a methodology derived from the US, where the law infers intentional prejudice when a pattern or practice of preferring (for instance) white recruits over black recruits is demonstrated to a sufficiently high degree. Under the four fifths rule, the inference is said to arise when the selection rate for one group (for instance, black candidates) falls below 80%, or four fifths, of the selection rate for the most favoured group (for instance, white candidates). However, this is not the way in which equality is measured or defined in the UK. Robin Allen KC and Dee Masters have written about this before, in November 2021, for The Legal Education Foundation in an open opinion called The impact of the proposals within “Data: A new direction” on discrimination under the Equality Act 2010.
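By way of a purely hypothetical worked illustration (the figures are ours, not the ICO’s): if an AI screening tool shortlists 50 of 100 white applicants (a 50% selection rate) but only 30 of 100 black applicants (a 30% selection rate), the ratio of the two rates is 30% ÷ 50% = 0.6. Because 0.6 is below the four-fifths (0.8) threshold, the US methodology would flag adverse impact. By contrast, an indirect discrimination analysis under the Equality Act 2010 asks whether a provision, criterion or practice puts people sharing a protected characteristic at a particular disadvantage and, if so, whether it can be objectively justified; it does not turn on any fixed 80% threshold.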

To make matters worse, the ICO also found that several AI providers were not monitoring “real” data on protected characteristics and were instead inferring the data (page 22). According to the ICO: “This usually involved predicting the person’s gender and ethnicity – often from their name but sometimes also from elements of their candidate profile or application. They added this to the information they already held about them.”

The ICO highlights that this poses two key risks. First, inferred data about someone’s race is still special category data. However, the ICO found that some AI providers did not treat inferred information as special category data or identify an additional condition for processing it. Second, inferring special category data also runs the risk of capturing inaccurate information about someone (and then making decisions in reliance on it). For example, the AI may well mis-categorise a person’s race or racial identity.

Legal advisers should be working with their clients to ensure that they interrogate AI providers and satisfy themselves that the tools which have been created are appropriate and lawful when judged against the Equality Act 2010. Employers using these tools will be liable even if the discrimination originates with a third-party AI provider.

Takeaway Seven: Some AI providers are publishing misleading privacy information

The ICO concluded that some AI providers were failing to be transparent about how they were using data (pages 26 to 27). Specifically, several privacy policies did not accurately explain how data was processed, the lawful basis relied on, how long data was retained or whether demographic data was being inferred. Some did not properly disclose the use of AI at all. The ICO explained that recruiters are expected to have contractual mechanisms in place to ensure that candidates are properly informed about how their data is being used.

Takeaway Eight: Recruiters must complete Data Protection Impact Assessments throughout the whole lifecycle of an AI tool

Ultimately, the ICO reinforces that recruiters are obliged to carry out Data Protection Impact Assessments (DPIAs) before an AI tool is used in recruitment and to keep them under review afterwards (page 8). DPIAs are onerous. They require a comprehensive assessment of the privacy risks to people arising from the processing of personal information, appropriate mitigating controls to reduce those risks, and an analysis of the trade-offs between people’s privacy and other competing interests. The ICO’s focus on DPIAs is unsurprising in light of the various problems it identified during the auditing review.

Takeaway Nine: The ICO report does not have all the answers

Article 22 UK GDPR prohibits decisions based solely on automated processing, including profiling, which produce legal effects concerning a data subject or similarly significantly affect him or her. This is subject to very narrow exceptions.

The ICO says that none of the AI tools which it reviewed were making decisions without any human input: “AI tools reviewed in this work were designed and intended only to support human recruiters to make decisions, rather than to make automated recruitment decisions without human intervention. Most AI tools [that were analysed] provided only indicative grades or fit scores, or suggested a candidate’s behaviour traits or skills which a human recruiter could consider in their decisions.” (page 32) In other words, the ICO considered that the prohibition in Article 22 did not bite.

We are not sure the position is as simple as this. It appears that some of the AI recruitment tools worked by sourcing profiles from public sites and then matching those potential candidates with prospective employers (page 33). In other words, this type of use involves the AI automatically rejecting large numbers of “inappropriate” candidates. Depending on the context, it may be that this automatic rejection will fall within Article 22 because it produces legal effects or similarly significant effects. How data protection obligations should be managed here is not addressed at all by the ICO. This is a lacuna in the ICO’s report which could usefully be addressed in the future.

Conclusion

AI in recruitment is now well-established. The ICO’s report is a welcome addition to the fast-moving AI regulatory landscape, although employers will likely still need bespoke advice before and during the deployment of AI tools in the recruitment space due to the complexity of the area. It is also worth remembering that the ICO has a separate project in which it is examining recruitment tools which process biometric data, such as emotion detection in video interviews (page 5). The ICO is presently preparing guidance on this, which is likely to prove invaluable when it comes to navigating this tricky space.

This is a blog by Robin Allen KC, Dee Masters and Grace Corby who recently drafted an AI Bill for the TUC on steps that are needed in the employment sphere to meet the challenges of new technology.
