AI is on everyone’s lips. So perhaps it’s not surprising that this year’s Biometrics Institute Congress was filled with much hand-wringing about AI. A self-described “non-profit,” the Biometrics Institute is probably best understood as part industry lobbying group, part think tank, part research institute. Each year, it holds a conference in London, which brings together policymakers, vendors, regulators, privacy rights groups, and the occasional not-so-undercover academic (like us).
Here are some of our key insights from this year’s congress:
Biometric vendors are currently debating how to define themselves vis-à-vis AI and how to engage with the current AI hype cycle
“AI” is a notoriously slippery term, so much so that scholars, civil society groups, and policymakers continue to argue over its very definition. Such debates are not purely semantic; rather, they shape how an industry is regulated, how it approaches funders and clients, how it is publicly understood, and how it understands itself. In its annual ‘state of the industry’ report, the Biometrics Institute tackles this question, asking “To AI or not to AI?” (We would summarize the key points for blog readers, but alas the report is proprietary knowledge, available only to paid members…) At the Congress, panelists and keynote speakers raised what seemed almost existential and ontological questions. One session, for example, was entitled “What is the relationship between AI and biometrics?”
From one perspective, the relationship between AI and biometrics may seem obvious: AI is expected to enhance the functionality of identity checks. Thanks to machine learning and artificial neural networks, biometric systems have become far more accurate and precise in recent years, improving their technical performance and increasing their spread and market share. More recently, generative AI has introduced new risks and vulnerabilities for the sector, including sophisticated forms of synthetic identity fraud, such as face morphing, which has the potential to disrupt the security of the travel sector. OpenAI’s release of its real-time voice API, for example, has renewed alarm about fraudsters circumventing voice recognition software.
But AI is also part of a powerful tech and regulatory imaginary. At the Congress, AI seemed less a technology (or set of technologies) to be adopted than a loaded, polyvalent term to be contended with. Generating a kind of “hyperreality,” AI seems to be everything and nothing at once, invoking both dystopian and utopian futures, producing vociferous proponents, equally vocal detractors, and a growing (if sometimes quieter and more measured) group of skeptics.
The biometrics industry has long been anxious about its public reputation, particularly in the wake of a string of controversies over encoded racism within facial recognition algorithms and in light of mounting resistance to the use of biometric systems for policing, migration control, and state repression. A recent industry survey by the Biometrics Institute revealed concerns that public mistrust of AI would spill over into the sector, further inflaming public opinion: “A significant 80% of respondents believe public opinion on AI will directly impact their views on biometrics. This highlights the need to address public concerns about AI to build trust in biometric applications.” This is one of many reasons why vendors and clients may seek to distance themselves from the AI moniker, or at least carefully navigate how they relate to this capacious and ambiguous term.
Regulators and industry are not necessarily at odds with one another
The Biometrics Institute is a space of engagement — one where regulators and industry can speak productively and where regulators can make a case for compliance. This contrasts with common understandings of the relationship between regulation and business, whereby regulators are thought to be adversarial toward and mistrustful of industry operators.
Several representatives of governmental regulatory bodies were in attendance at this year’s Congress, including the UK’s Information Commissioner’s Office (ICO). Other key public stakeholders in attendance were the EU’s DG CNECT, which oversees the EU’s AI Act, and the Office of Privacy and Civil Liberties at the US Department of Justice. The tone struck in their presentations was one of cooperation and assistance. As John Edwards, UK Information Commissioner, told the audience, “we are on the same side.” Emphasizing that the UK is a space where biometric technologies can flourish, he argued that regulators can help industry: “We want you to be able to use biometric data that add value to society and protect people’s privacy.”
This is not necessarily a story of regulatory capture, but it does speak to the way that industry and regulatory bodies are actively shaping one another. It certainly reflects an increasingly accepted mode of regulation that emphasizes cooperation and highlights the benefits of technological innovation, potentially at the expense of fundamental rights protection.
Rather than restrict biometric use cases, AI regulations may, in the long run, facilitate (and legitimate) their spread
Rather than necessarily inhibiting the biometrics industry, AI regulation can help the sector identify and manage risk, benefiting corporate players. Take, for example, the EU AI Act, which has recently begun to come into force. Several presentations at the Congress were devoted to the new Act and its implications for industry. Irina Orssich, Head of Sector AI Policy at DG CNECT, explained that the Act takes a risk-based approach to biometrics. It divides use cases into a taxonomy of risk — from unacceptable to high-risk to limited to low/minimal risk. Compliance with the EU AI Act and adoption of this taxonomy can be seen as a form of risk mitigation — a means for companies to limit their liability and exposure in ways that will keep regulators at bay. Importantly, however, it also legitimates those applications that are deemed to be less risky according to the rules.

“Besides minimising risks,” notes legal scholar Nathalie Smuha, “regulation could facilitate AI’s uptake, boost legal certainty, and hence also contribute to advancing countries’ position in the…‘race to AI.’” The same could be said about biometrics and how emerging regulations will facilitate their further adoption and acceptance in different contexts, including consumer applications and more security-oriented spaces like borders. It is therefore incumbent upon critical voices to assess how regulators, and the rules they are mandated to enforce, further entrench biometrics in our everyday lives (whether or not they are ultimately understood to be AI) and the implications of this legitimization for our societies and polities.