A review of The Children’s Manifesto for the Future of AI from the Alan Turing Institute

At the crossroads of technological acceleration and intergenerational ethics stands an unusual but stirring intervention: The Children’s Manifesto for the Future of AI. Produced not by seasoned policy-makers or think-tank philosophers, but by voices typically relegated to the footnotes of AI discourse—children aged 8 to 18—this work demands not only attention, but a wholesale reorientation of how we think about participation, responsibility, and futurity in the age of algorithms.

At its core, this document is not simply a plea for protection, but a lucid and often startling articulation of values, ethics, and imaginative possibility from those who are, as one contributor puts it, “the voice of the future” (Ishrit, age 13). What emerges is a moral grammar for AI development that refuses the binary of utopian hope and dystopian fear, opting instead for a careful, insistent assertion: AI must be made just, inclusive, and humane—or not at all.

The Ethics of Inclusion: A New Political Subject

Perhaps the most radical gesture of the manifesto is its very premise—that children, too, are political subjects in the AI age. “We feel that adults often don’t take our views seriously,” the introduction states, “but we have lots of ideas about the ways AI should and should not be developed” (p. 2). This is not mere sentiment. It is a challenge to traditional models of policy-making and ethical deliberation that routinely exclude those under legal adulthood from consequential debate.

This is nowhere more poignantly voiced than in Ethan’s stark appeal: “You’ll write the laws, but we’ll bear the cost” (p. 2). What’s at stake here is not only participation but intergenerational justice. The children are not asking for tokenistic consultation—they are demanding co-authorship of the AI future.

Education, Equality, and the Pedagogy of Hope

The section on education is perhaps the most robust in the document, and its vision is both practical and poignant. The promise of AI as an equaliser is framed with clarity: “Imagine if each child, no matter where they live, could have a personal AI-powered tutor guiding them every step of the way” (Aarushi, age 15, p. 5). The emphasis is not on replacement, but on supplementation—a thread that recurs with striking consistency. “AI should assist teachers… but it shouldn’t be used to replace teachers” (Alexander et al., age 11, p. 7).

This balance of optimism and caution recasts AI not as an autonomous force, but as a tool whose moral value is contingent on its context and governance. The children envision adaptive systems that support neurodiverse learners and translate lessons into myriad languages. But they also warn that over-reliance may “restrain imagination and hinder creativity” (Aananya, age 13, p. 10), and that inequitable access could deepen existing educational disparities.

Here, their insight converges with educational research on the "expertise reversal effect," which finds that instructional support helpful to novices can become counterproductive for more advanced learners. While educators debate the balance between scaffolded support and learner autonomy, these children intuitively grasp the stakes: AI must adapt to learners’ diverse and dynamic needs without flattening the complexity of human development.

Safety, Mental Health, and the Ambiguous Intimacies of AI

The manifesto’s exploration of health and wellbeing reveals an astute understanding of AI’s ambivalent intimacy. On one hand, children propose tools that would “monitor online activities… to detect signs of cyberbullying, anxiety, or depression” (Chekwube, age 16, p. 11), and companions that could comfort those feeling lonely or isolated. On the other, they warn that excessive reliance on AI “could lead to them becoming increasingly self-centred,” as human relationships are supplanted by emotionally hollow, algorithmically curated interactions (Frederick, age 16, p. 14).

This duality—a vision of AI as both lifeline and isolator—is one of the manifesto’s most powerful tensions. It resists the temptation to romanticise technological intervention, instead offering a critical literacy grounded in lived experience. AI is not framed as a saviour; it is a system with profound potentials and profound dangers, contingent on how and why it is built.

Environmental Conscience in the Age of Compute

Children as young as nine urge developers and lawmakers to account for AI’s environmental toll. “AI actually produces a significant amount of carbon emissions,” writes Ishrit (age 13), who demands that future systems “be powered by a sustainable energy source” (p. 17). Such insights, unprompted by industry narratives, reflect a growing ecological sensibility that recognises digital infrastructures as material realities—not ethereal “clouds” but extractive, power-hungry machines.

In this way, the manifesto anticipates emerging research and activism that critiques AI not just as an epistemological or economic force, but as an ecological actor. The children’s demands for green tech, carbon transparency, and equitable distribution suggest a vision in which AI development is not separate from climate justice, but inextricably linked to it.

Structural Bias and the Imperative of Accountability

Perhaps nowhere is the manifesto more forceful than in its denunciation of AI bias. “These days AI is being trained with biased data and in turn being racist!” exclaims Tejas, age 10 (p. 19). What might seem naïve in its phrasing is in fact a radical clarity. Bias is not merely an unfortunate side-effect—it is a form of algorithmic violence, a systemic reproduction of injustice masked by neutrality.

The children call for the tracking and removal of “biased and racist data” (p. 3), the auditing of AI systems for fairness, and a commitment to ensuring that training data reflects a wide spectrum of lived experiences. This insistence is both moral and structural. As Ishrit puts it, “AI is often trained on biased sets of data… This can lead to even more prejudice against different races” (p. 20). The message is unmistakable: fairness is not optional—it is foundational.

Transparency, Not Tokenism

If there is a central anxiety running through the manifesto, it is that of opacity—of systems that make decisions without explanation, that harvest data without consent, and that evolve without scrutiny. Children voice deep concern about privacy violations, fake content, and manipulative social media dynamics. “I am worried about my data and how it is exposed to the internet,” writes Mariyah (age 10, p. 22). In the absence of robust safeguards, AI becomes not just a tool, but a threat.

The children demand that developers and governments “explain how their algorithms work” and that they be held accountable across borders (Ethan, age 16, p. 27). This is not a call for paternalistic regulation alone, but for participatory governance. They are not asking for protection from AI—they are asking for protection with AI.

Toward a New Covenant

In its final pages, the manifesto becomes almost prophetic in tone. “Lead with your hearts,” urges Ecenur (age 13, p. 28). “Listen to us, involve us in the conversation, and ensure that AI is developed in ways that benefit everyone.” This is not mere rhetoric. It is a summons—to humility, to imagination, and to shared responsibility.

The Children’s Manifesto for the Future of AI is not a policy document in the traditional sense. It is something rarer: a collective act of ethical world-making by those whose lives will be most deeply shaped by the technologies we build today. It reminds us that in the rush to innovate, we must also remember to listen—especially to those who have not yet been taught to shout.

Shall we finally take them seriously?

The Alan Turing Institute. (2025). The Children’s Manifesto for the Future of AI.
