The UK Government’s Controversial AI Experiment on Asylum Seekers
The United Kingdom government announced on July 22 its intention to implement AI face-scanning technology to assess whether asylum seekers are under 18 years old. Angela Eagle, the asylum minister, said the experimental technology was chosen because it was potentially the cheapest option. The plan immediately sparked serious concerns among children’s rights advocates and mental health professionals. Experimenting with unproven technology to determine a child’s age, especially when it affects their legal entitlement to crucial protections, has been widely condemned as cruel and unconscionable.
The move highlights a growing debate about the ethical implications of deploying advanced AI in sensitive humanitarian contexts, particularly when it involves vulnerable populations like refugee children. Critics argue that prioritising cost over the well-being and legal rights of children is a dangerous precedent, potentially leading to devastating and life-altering errors.
Unproven Technology and Wide Error Margins
A primary concern surrounding the UK’s plan is the unproven nature of facial age estimation technology in real-world settings. Companies that have tested their AI in environments like supermarkets, pubs, and websites have typically calibrated their systems to predict whether a person looks under 25, not specifically under 18. This allows for a wide margin of error, making these algorithms unreliable for the precise distinction required to determine legal minority.
Experts warn that such algorithms struggle significantly to differentiate between a 17-year-old and a 19-year-old, a critical distinction for granting asylum protections. The technology was never designed for the nuanced and high-stakes context of assessing children seeking asylum, and its application risks producing disastrous, life-changing errors that could deny vulnerable individuals the care and legal status they desperately need. Relying on such technology without independent, rigorous evaluation in relevant scenarios is a significant ethical and practical concern.
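To see why a modest error margin makes the 17-versus-19 distinction unreliable, consider a minimal sketch. The figures below are illustrative assumptions, not the performance of any vendor’s actual system: it simply simulates an estimator whose predictions vary by roughly two years around a person’s true age and counts how often a 17-year-old would clear an adult threshold.

```python
# Hypothetical illustration (not any vendor's real model): simulate an age
# estimator whose predictions carry an assumed +/-2-year standard error and
# measure how often it misclassifies people near the 18-year threshold.
import random

random.seed(0)

ERROR_SD = 2.0        # assumed standard error of the estimator, in years
THRESHOLD = 18        # legal adulthood cut-off
TRIALS = 100_000

def predicted_age(true_age: float) -> float:
    """Return a simulated prediction: the true age plus Gaussian noise."""
    return true_age + random.gauss(0, ERROR_SD)

wrongly_adult = sum(predicted_age(17) >= THRESHOLD for _ in range(TRIALS))
wrongly_minor = sum(predicted_age(19) < THRESHOLD for _ in range(TRIALS))

print(f"17-year-olds classified as adults: {wrongly_adult / TRIALS:.1%}")
print(f"19-year-olds classified as minors: {wrongly_minor / TRIALS:.1%}")
```

Under these assumed numbers, roughly three in ten 17-year-olds would be flagged as adults, which is the core of the experts’ objection: a system calibrated for a loose “looks under 25” check cannot deliver the precision a legal age determination demands.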
AI’s Inability to Account for Trauma and Physical Alterations
The limitations of AI face-scanning systems become even more apparent when considering the profound impact of trauma and violence on a child’s physical appearance. Algorithms identify patterns in superficial features like the distance between nostrils and the texture of skin. However, they are fundamentally incapable of accounting for the complex ways in which severe trauma and violence can prematurely age a child’s face.
Furthermore, these systems cannot grasp how extreme physical hardships, such as malnutrition, dehydration, sleep deprivation, and prolonged exposure to salt water during dangerous sea crossings, might profoundly alter a child’s facial characteristics. These real-world factors can make a child appear older than their actual age, leading to potentially wrongful classifications by an AI system that lacks human empathy and contextual understanding. The risk of misidentifying a child as an adult due to such factors is substantial, with severe consequences for their safety and legal rights.
Privacy, Non-Discrimination, and Right to Redress
Beyond the accuracy concerns, the proposed use of AI face scans on asylum-seeking children raises significant privacy and non-discrimination risks. These AI systems often lack the ability to explain or reproduce their results, creating a “black box” scenario that further erodes a child’s fundamental right to redress and remedy following a wrongful assessment. If an algorithm makes an incorrect age determination, the lack of transparency in its decision-making process makes it incredibly difficult for individuals to challenge the outcome or understand why a particular decision was made.
This opacity can lead to arbitrary denials of protection without a clear pathway for appeal. Moreover, such technology can introduce or exacerbate existing biases, creating discrimination risks in which certain ethnic or racial groups might be disproportionately misclassified. The collection and processing of sensitive biometric data also pose new privacy threats, as this information could be vulnerable to misuse or breaches, further compromising the safety and dignity of already vulnerable children.
History of Misclassification and Abusive Conditions
The UK government has a documented history of repeatedly and illegally subjecting children seeking asylum to abusive conditions by wrongly classifying them as adults. This pattern of misclassification has been a persistent issue, with reports highlighting severe consequences for the affected children. Just last week, the UK’s chief inspector of borders and immigration highlighted the plight of “young people who felt disbelieved and dismissed by the Home Office, whose hopes have been crushed, and whose mental health has suffered” due to flawed age assessment processes.
Cases emerged in 2022 and 2024 in which unaccompanied refugee children were allegedly detained in facilities such as the Manston detention centre after being purposefully misclassified as adults. These past failures underscore the critical need for accurate and humane age assessment methods. Introducing unproven AI technology into a system already prone to errors and human rights violations is viewed by critics as a dangerous escalation that could further entrench abusive practices and exacerbate the suffering of vulnerable children.
Recommendations for Humane Age Assessment Processes
In light of the severe concerns raised, experts and advocates are urging the UK government to halt its plans for AI age assessment and instead implement humane and internationally compliant processes. The chief inspector’s recommendations emphasise the need to fix flawed age assessment procedures, ensuring they adhere strictly to international standards. These standards typically advocate for age assessments to be used only as a last resort, as evidence to resolve serious doubts about a person’s declared age, and never as a primary or sole determinant.
Crucially, such assessments should be conducted by qualified professionals who are specifically trained in child protection and trauma-informed care. These professionals can account for the complex factors that might influence a child’s appearance or behaviour, ensuring a holistic and empathetic approach. The government is urged to listen to the voices of those directly impacted, including the young person who, after surviving a difficult journey to the UK, told inspectors that the government “should not be judging people on their appearance on the first day they meet them.”
Prioritising Care Over Algorithms for Refugee Children
The debate surrounding the UK’s plan to use AI on asylum-seeking children fundamentally boils down to a choice between prioritising cost-cutting algorithms and upholding the inherent rights and well-being of vulnerable individuals. Refugee children, having often endured immense trauma and hardship, desperately need care, protection, and a fair assessment process, not experimental technology that risks producing life-changing errors. The reliance on AI in such sensitive contexts can dehumanise the asylum process, reducing complex human experiences to algorithmic patterns.
Instead of investing in unproven AI, resources should be directed towards strengthening existing age assessment processes with trained professionals, ensuring robust safeguards, and fostering an environment of trust and empathy. The international community and human rights organisations continue to advocate for approaches that prioritise the best interests of the child, emphasising that no child seeking refuge should be subjected to cruel and unconscionable experiments that could deny them the protections they are legally and morally entitled to receive.