AI Is Rewriting Your Child’s Future. Who Orchestrates the Ecosystem?
- Alejandro Canonero
- Nov 25
- 12 min read
Updated: Nov 26
Why AI in education is no longer a tool choice but a battle over how entire societies learn, think, and behave.

1. The paradigm shift in front of parents and leaders
Nothing is more strategic than your children’s education and their readiness for the future.
Yet we are still speaking about AI in schools as if it were a gadget question. Those who still see AI in education as a tool choice have already missed the paradigm shift: it is becoming the operating system that scripts how entire societies learn, think, and behave across a single interconnected ecosystem of children, parents, teachers, school leaders, governments, technology providers, policy makers, and influencers.
The question in front of ministers, CEOs, and investors is no longer whether Artificial Intelligence will reshape education; it is whether we can coordinate and orchestrate those roles in a collaborative rather than adversarial way. In the last two years, the narrative has been dominated by a race to adopt tools. Schools, ministries, and vendors rushed into pilots, platforms, and proofs of concept. The result has often been what I call a pilot pandemic: a great deal of experimentation, very little structural change. [1] [4]
Viewed from Dubai and from the GESS 2025 conversations, a different story is emerging. We are not simply adding a new technology layer. We are entering a collision of competing ecosystems. On one side sits the Black Box model: opaque, convenient, and extractive. On the other sits a Human First strategy: transparent, sovereign, and accountable. [2] [3]
From GESS and from my work in AI, cloud, and software ecosystems, three fronts of this war are clear.
The people front: pedagogy, cognition, and the daily work of teachers.
The systems front: infrastructure, data, and communications.
The sovereignty front: governance, inclusion, and ecosystem power.
The leaders who prevail will treat AI not as a gadget, but as what Dr Dala Kakos called Digital Plastic, a malleable material that can be shaped for human flourishing, or allowed to become cognitive waste. [1] The question is not whether you will use this plastic, but who will shape it.

2. What war history teaches about orchestrating ecosystems
2.1 From more tanks to integrated campaigns
History offers a useful warning. In the early years of the Second World War, many generals still believed victory would come from having more tanks and artillery. The German blitzkrieg overturned that assumption. Success came from combining armour, air power, logistics, intelligence, and communications into a single coordinated system.
The Allied response was not only to build more units, but to construct an integrated ecosystem of radar, codebreaking, convoys, industrial production, and joint command that changed the course of the war. The lesson is simple. Those who treated war as an ecosystem rather than an inventory reshaped the battlefield.
AI in education is approaching the same threshold. Countries, school groups, and technology providers that still think in terms of individual tools and pilots will be outmanoeuvred by those that orchestrate entire learning systems.

3. Front one, people: killing the AI Solver and elevating teachers
3.1 The danger of the AI Solver
The first casualty of unmanaged AI adoption is human thinking.
In many classrooms and offices, AI is already used as a Solver. It writes essays, generates lesson plans, prepares emails, and answers exam style questions. It reduces friction, removes the discomfort of not knowing, and quietly shuts down the learning process.
At GESS, one of the most powerful counterexamples came from initiatives framed as the Bright Side of Mistakes. Instead of asking AI for answers, learners are coached to ask AI to interrogate their reasoning. The interface barely changes; the impact does.
Solver prompts ask: “What is the answer?”
Coach prompts ask: “Here is my reasoning. Analyse it and give me three questions that reveal where I am thinking incorrectly.”
The first path moves the student or employee into passive mode. The second keeps the human in the loop, forces them to review their own logic, and makes AI a mirror rather than a crutch. This is aligned with what cognitive scientists describe as System 2 thinking, the deliberate and effortful mode that underpins deep understanding.
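To make the contrast concrete, here is a minimal sketch of the two prompt patterns in Python. The function names and the sample question are illustrative assumptions, not a real API or a specific model; the point is the shape of the prompt, which is what keeps the learner in System 2.

```python
# A minimal sketch of the Solver versus Coach prompt patterns, assuming
# a generic chat-style model behind whatever endpoint you already use.
# All names here are illustrative, not part of any real library.

def solver_prompt(question: str) -> str:
    # The Solver pattern: hand the problem over and receive an answer.
    # The human exits the learning loop.
    return f"What is the answer to the following question?\n\n{question}"

def coach_prompt(question: str, my_reasoning: str) -> str:
    # The Coach pattern: submit your own reasoning and ask the model to
    # probe it, so the human stays in the loop and reviews their logic.
    return (
        f"Question: {question}\n\n"
        f"Here is my reasoning:\n{my_reasoning}\n\n"
        "Do not give me the answer. Analyse my reasoning and give me "
        "three questions that reveal where I am thinking incorrectly."
    )

if __name__ == "__main__":
    q = "Why does a heavier object not fall faster than a lighter one?"
    r = "Heavier objects have more gravity acting on them, so they should fall faster."
    print(coach_prompt(q, r))
```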
The broader lesson is uncomfortable. The technology industry has trained executives to believe that less friction is always better. In education, the opposite is often true. Deep learning introduces productive friction. It makes the learner wrestle with uncertainty in a safe way. The real risk is not that AI makes school too hard; it is that it makes thinking too easy.
3.2 The Arcadia example: AI as a teacher multiplier
Phil Long, from Arcadia School in Dubai, made this point concrete in his GESS session on AI in primary schools. Children, he argued, are already cognitively ready for AI. They learn by recognising patterns, making predictions, and experimenting. The real question is whether the adults around them are ready.
His answer was not to flood classrooms with tools. It was to put pedagogy before technology.
In practice, that looks like three moves.
Use AI to extend teacher preparation, not to replace teacher judgement. A teacher can ask an assistant to propose quiz items, differentiate reading texts for different levels, or suggest alternative analogies for a concept. The teacher then curates, edits, and personalises.
Use AI to accelerate formative feedback. Instead of relying only on termly tests, teachers can run small, low stakes writing or reasoning tasks and ask AI to surface common misconceptions, freeing them to spend time in direct interaction with students. A minimal sketch of this move follows below.
Use AI to model curiosity. Teachers who work visibly with AI in front of the class, asking it to challenge their own explanations, send a clear signal: everyone is a learner, including the adult.
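As a hedged illustration of the second move, the sketch below batches short answers into a misconception-surfacing prompt. The function and the sample task are assumptions for illustration; any chat-style assistant a school already uses could receive the resulting text.

```python
# A minimal sketch of formative feedback at class scale: batch short,
# low stakes answers and ask an assistant to surface patterns, never to
# grade or identify individual students. The prompt builder is
# illustrative; only the pattern matters.

def misconception_prompt(task: str, answers: list[str]) -> str:
    joined = "\n".join(f"- {a}" for a in answers)
    return (
        f"Task given to students: {task}\n\n"
        f"Their short answers:\n{joined}\n\n"
        "List the three most common misconceptions these answers reveal, "
        "and suggest one follow-up question per misconception. "
        "Do not grade or name individual students."
    )

# Example: a quick check after a fractions lesson.
print(misconception_prompt(
    "Explain why 1/3 is larger than 1/4.",
    ["Because 4 is bigger, 1/4 is bigger",
     "1/3 is bigger because thirds are bigger pieces"],
))
```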
Independent research on generative AI in education shows a similar pattern. In the United Kingdom, for example, the Department for Education’s guidance on generative AI and its report on educator and expert views both highlight strong interest in using AI to reduce administrative load and personalise support, alongside concern about overreliance, malpractice, and data protection. [5] [10] Teachers are not asking to be replaced. They are asking to be augmented and respected as the human core of the system.
In ecosystem language, a teacher like Phil Long is not simply a user of tools. He is a local orchestrator. He decides how different AI services enter the daily life of students, what data is created, and how that data feeds back into the wider platform. When a critical mass of teachers are equipped in this way, the entire education system starts to act more like an ecosystem and less like a collection of disconnected programmes. The competitive advantage does not sit in the tool; it sits in the teacher’s ability to orchestrate it.
3.3 Why unplugged inquiry still matters
Paradoxically, the best preparation for a high technology future is often unplugged. Before touching a model or an interface, learners need to experience the Inquiry Cycle in the physical world. That is the logic behind phygital learning environments that blend robotics kits, sensors, and physical prototypes with simulations and digital tools. The core principle is simple. Start in reality, then abstract, then simulate. [4]
If we adopt AI as a universal Solver, we export human judgement to external models. If we reposition AI as a Coach and a multiplier for teachers, we strengthen human judgement and make citizens more capable nodes in any future ecosystem. That is how you turn AI from a shortcut into a discipline for thinking.

4. Front two, systems: from swamp to data spine
4.1 The swamp of fragmented systems
If pedagogy is the soul of transformation, data is the soil. Most institutions are still trying to plant on a swamp.
Many schools and universities have accumulated a decade of misaligned tools. One platform for attendance, another for behaviour, another for learning management, and a fourth for communications. Data sits in silos. Reporting is manual. AI pilots are launched on top of messy foundations.
Darren Bastyan’s warning at GESS was blunt. AI will not save you if the underlying systems are broken. If a teacher needs three logins and four spreadsheets to understand a single student, any talk of predictive analytics is wishful thinking.
This is where executive attention is often misdirected. Leaders attend AI showcases and request generative pilots, while ignoring the invisible work of integration. Yet every serious framework for AI in education, from UNESCO guidance for policy makers to OECD work on trustworthy AI, puts data quality, interoperability, and transparency at the centre. [3] [4] [8]
4.2 Building the data spine
The first operational obligations are not glamorous.
Consolidate the data spine. Student, teacher, and content data must flow into a coherent model that can be queried, governed, and audited. A minimal sketch of such a model follows below.
Automate low value processes. Simple workflows such as nurse forms, staff certificates, attendance notifications, and code of conduct trackers are ideal entry points. They release time and reveal where the real constraints lie.
Redesign communication as a product, not an afterthought.
This backbone is not an IT detail. It is the infrastructure that decides what you can see, what you can measure, and what you can change.
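To ground the first obligation, here is a minimal sketch of what a unified student record might look like once attendance, behaviour, and learning management data flow into one model. Every class and field name is an illustrative assumption, not a reference schema; a real spine would live in a governed database with access controls and audit logs.

```python
# A minimal sketch of a unified student record for a data spine.
# All names are illustrative assumptions, not a standard.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AttendanceEvent:
    day: date
    present: bool
    source_system: str          # which platform produced the record

@dataclass
class StudentRecord:
    student_id: str             # one identity across all platforms
    attendance: list[AttendanceEvent] = field(default_factory=list)
    behaviour_notes: list[str] = field(default_factory=list)
    lms_course_ids: list[str] = field(default_factory=list)

    def attendance_rate(self) -> float:
        # One query against one model, instead of three logins
        # and four spreadsheets per student.
        if not self.attendance:
            return 1.0
        return sum(e.present for e in self.attendance) / len(self.attendance)
```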
4.3 Communication as a learning intervention
There is now robust evidence that small, well designed nudges in communication can improve attendance, engagement, and exam outcomes. Text based interventions that provide parents with simple, timely information about attendance and progress have been shown to reduce chronic absenteeism and improve course completion. [7] [8] [9] [12]
When communication platforms move from generic announcements to actionable messages based on real time data, they become part of the learning ecosystem rather than a noisy channel on the side.
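Here is a hedged sketch of what such an actionable message could look like when driven by the data spine. The threshold and the wording are illustrative assumptions, not values taken from the cited studies.

```python
# A sketch of communication as a learning intervention: a short,
# specific message triggered by real data rather than a generic
# announcement. The 92 percent threshold and the phrasing are
# illustrative assumptions only.

def attendance_nudge(child_name: str, days_missed: int, rate: float) -> str | None:
    if rate >= 0.92:
        return None  # no message: silence beats noise when there is nothing to act on
    return (
        f"{child_name} has missed {days_missed} school days this term "
        f"(attendance {rate:.0%}). A reply to this message books a short "
        "call with the form tutor."
    )

print(attendance_nudge("Amal", 7, 0.84))  # hypothetical student
```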
For CEOs and ministers, this systems front is often delegated to CIOs and vendors. That is a mistake. In ecosystem terms, this is your substrate. Whoever controls the data spine and the integration layer controls which citizens and which partners can participate in the value that AI creates. If you do not own your data spine, you do not own your future options.

5. Front three, sovereignty: Glass Box governance and neuro-inclusive AI
5.1 The opacity problem
Beyond classrooms and platforms lies the strategic front: sovereignty, identity, and inclusion.
Global AI models are largely trained on open internet data and proprietary corpora that are seldom fully documented. UNESCO’s Recommendation on the Ethics of Artificial Intelligence warns that the absence of clear disclosure about training data and data flows makes it difficult for education systems to assess risk, bias, and compliance. [2] [16] The OECD AI Principles similarly call for transparency, robustness, and accountability as prerequisites for trustworthy AI. [3] [17]
For small states and large school groups alike, this is not an abstract concern. If you do not know which languages, cultures, and learning profiles are adequately represented in the data, you cannot assume that the outputs will be fair.
5.2 The Glass Box principle in practice
The UAE’s move toward a Glass Box principle is one response. Instead of accepting fully opaque models, the strategy emphasises three things, aligned with emerging international practice.
Transparency about data sources and model behaviour.
Sovereign hosting and control for sensitive data, often through national or regional cloud arrangements.
Explicit ethical oversight at national level, including AI ethics and cybersecurity councils.
Glass Box governance is not a slogan. It is a design choice about where decisions are made, who can audit them, and whose interests the system ultimately serves.
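As a purely illustrative sketch, Glass Box disclosure could even be expressed as data that an auditor can query. Nothing below is a standard or an actual UAE artefact; every field is an assumption about what transparency might record.

```python
# A hedged sketch of Glass Box disclosure as a queryable record:
# who decides, who can audit, and where sensitive data lives.
# Every name and value here is an illustrative assumption.

from dataclasses import dataclass

@dataclass(frozen=True)
class ModelDisclosure:
    model_name: str
    training_data_summary: str   # languages, domains, known gaps
    hosting_region: str          # where sensitive data is processed
    decision_scope: str          # what the model may and may not decide
    audit_contact: str           # who answers for this deployment

disclosure = ModelDisclosure(
    model_name="reading-support-assistant",  # hypothetical deployment
    training_data_summary="English and Arabic curriculum texts; dialect coverage undocumented",
    hosting_region="national-cloud",
    decision_scope="suggests exercises; never grades or flags students",
    audit_contact="ai-ethics-council@example.gov",
)
```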
5.3 Neuronormative injustice and inclusion
The second axis of this strategic horizon is inclusion.
Martin Bloomfield and Claudia Lemke’s work on neuronormative injustice in AI illustrates how apparently neutral systems can reproduce exclusion at scale. Their analysis, shared through their article “Do Androids Dream of Dyslexic Sheep?” and related talks, shows how image and text models frequently treat neurodivergent identities as anomalies or fail to represent them at all. [11] [14] [19]
The principle extends beyond dyslexia or Tourette syndrome. Any group that is underrepresented in training data risks becoming invisible or caricatured in AI outputs. If education systems adopt those outputs without critique, they embed those biases into assessment, feedback, and classroom materials.
A Human First AI strategy therefore requires governance on two fronts.
Technical governance to insist on transparency, auditable models, and privacy by design.
Social governance to build cultures of belonging inside schools and universities, where human differences are recognised and valued, and where students are taught to question both human and machine outputs. [1] [2] [6]
This is where citizens enter the ecosystem play directly. A student who learns to challenge an AI generated stereotype, a parent who questions how their child’s data is used, or a teacher who refuses to deploy a tool without clear documentation is not a passive subject. Each is an active participant in the governance of the ecosystem. Citizens are not the audience of this system; they are its co-governors.

6. From pilots to ecosystems: the War of the Ecosystems playbook
6.1 Education as a strategic theatre
The final step is to move from narrative to execution.
In War of the Ecosystems, I describe how AI, cloud, and software markets have shifted from product competition to ecosystem warfare. Platform leaders win not by shipping more features, but by orchestrating networks of partners, data, and users that reinforce each other.
Education is now entering the same logic. A country’s AI in education strategy is no longer an isolated policy domain. It is a frontline in its broader digital power game: how it negotiates with global platforms, how it cultivates local innovators, how it protects the data and agency of its citizens.
6.2 The four structural layers of a Human First ecosystem
A Human First education ecosystem has four structural layers.
Philosophy and policy. Clear commitments to human agency, inclusion, and transparent AI, aligned with global frameworks but anchored in local values. [1] [2] [3] [4]
Data spine and infrastructure. Integrated, governable data that can support both operational reporting and advanced analytics, with clear rules about what can be used, by whom, and for what.
Human architecture. Teachers as AI fluent coaches, students as active problem solvers, families as informed partners, and school leaders as orchestrators of local ecosystems. [4] [5] [6]
Partner and platform strategy. A deliberate mix of global providers, local innovators, and sovereign capabilities that prevents lock in and keeps decision rights close to those affected.
In that context, the GESS examples are not isolated anecdotes. They are early plays in a broader war.
Digital Plastic is a way of talking about AI that keeps human agency at the centre. [1]
Phygital learning and innovation initiatives show how pedagogy can integrate physical and digital tools to cultivate inventor mindsets rather than passive consumption. [4]
AI walkthrough tools, communication platforms, and small automations demonstrate how targeted use of AI can release time and attention back to humans. [5] [9]
Neuro-inclusive critiques of AI remind us that every model is a choice about whose experience counts. [11]
6.3 Education inside the War of the Ecosystems
This is the core of the War of the Ecosystems thinking applied to education. AI, cloud, and data are not neutral utilities; they are the terrain on which platforms, nations, and communities compete to define whose values, whose languages, and whose children will thrive.
Education leaders who understand this will stop treating AI as a classroom upgrade and start treating it as a strategic theatre where pedagogy, infrastructure, and governance must be designed together.
For CEOs and ministers, the practical question is simple: which of two ecosystems are you building?
A Black Box ecosystem, where external vendors, invisible training data, and opaque models define how your children learn and how your workforce is evaluated.
A Human First ecosystem, where your institutions insist on visibility, build their own data spines, empower teachers and students to work with AI critically, and negotiate with vendors from a position of strength.
In the first scenario, citizens are data exhaust. They generate value that is captured elsewhere. In the second, citizens are co-creators. Their data, their problems, and their solutions feed into a shared ecosystem that your country or institution can actually shape.
The war of the ecosystems is already under way. The winners will be those who treat AI in education not as a procurement category but as a long term orchestration challenge. The most important decision is not which model you deploy. It is which humans you choose to empower, and which ecosystem you are consciously building around them.

7. References
Speakers at GESS included:
Phil Long, Amra Iuoop, Eyad Salamin, Dr. Sreejit Chakrabarty, Noel Tuohy, Baz Nijjar, Romeo Ramirez, Matthew Esterman, Dr. Ambika Gulati, Mohsen Saleh, Dr. Dala Farouki Kakos, Tania Tiffany Al Jaroudy, Dr. Seham Mohammed, Darren Bastyan, Mark Sheldon, Dana Abdel Jabbar, Darren McCormick, Tai Paschall, Mohamed Aboughonim, Steve Bambury, Claudia Lemke, Narentheren Kaliappen, Stephanie Holt, Rana Tamim, Dr Martin Bloomfield, Frank Furnari, Rachael Pryce.
#AIinEducation #GESSDubai #EdTech #DigitalSovereignty #WarOfTheEcosystems
[1] UNESCO. “Guidance for generative AI in education and research.” 2023. https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research
[2] UNESCO. “Recommendation on the Ethics of Artificial Intelligence.” 2021. https://en.unesco.org/artificial-intelligence/ethics
[3] OECD. “OECD AI Principles.” 2019. https://oecd.ai/en/ai-principles
[4] UNESCO. “AI and Education: Guidance for Policy-makers.” 2021. https://teachertaskforce.org/sites/default/files/2023-07/2021_UNESCO_AI-and-education-Guidande-for-policy-makers_EN.pdf
[5] Department for Education, United Kingdom. “Generative artificial intelligence in education.” 2023. https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education
[6] World Economic Forum. “Seven principles on responsible AI use in education.” 2024. https://www.weforum.org/stories/2024/01/ai-guidance-school-responsible-use-in-education
[7] Rogers, Todd, and Avi Feller. “Reducing student absences at scale by targeting parents’ misbeliefs.” Nature Human Behaviour, 2018. https://pubmed.ncbi.nlm.nih.gov/30962603
[8] Cortes, Kalena E., et al. “Too Little or Too Much? Actionable Advice in an Early-Childhood Text Messaging Experiment.” Education Finance and Policy, 2021. Working paper version. https://cepa.stanford.edu/sites/default/files/wp18-16-v201808.pdf
[9] Brandt, Andreas, et al. “The effect of SMS nudges on higher education performance.” Empirical Economics, 2024. https://link.springer.com/article/10.1007/s00181-023-02516-5
[10] Department for Education, United Kingdom. “Generative AI in education: Educator and expert views.” 2023. https://assets.publishing.service.gov.uk/media/65b8cd41b5cb6e000d8bb74e/DfE_GenAI_in_education_-_Educator_and_expert_views_report.pdf
[11] Bloomfield, Martin, and Claudia Lemke. “Do Androids Dream of Dyslexic Sheep? Ethics and Neuronormative Injustice in AI.” GESS Education, 2025. https://www.gesseducation.com/gess-talks/articles/do-androids-dream-dyslexic-sheep-ethics-and-neuronormative-injustice-ai
[12] Sanders, Michael, et al. “Using Text Reminders to Increase Attendance and Attainment in Higher Education.” 2019. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3349116
[13] UNESCO. “Artificial intelligence in education” portal summary. 2023. https://www.unesco.org/en/digital-education/artificial-intelligence
[14] GESS Dubai. Speaker profile: Prof Dr Claudia Lemke. “Do Androids Dream of Dyslexic Sheep?” 2025. https://www.gessdubai.com/gessdubai/node/1605
[15] World Economic Forum. “Shaping the Future of Learning: The Role of AI in Education 4.0.” 2024. https://www3.weforum.org/docs/WEF_Shaping_the_Future_of_Learning_2024.pdf
[16] UNESCO. “UNESCO Recommendation on the Ethics of Artificial Intelligence” submission to OHCHR. 2021. https://www.ohchr.org/sites/default/files/2022-03/UNESCO.pdf
[17] OECD. “Recommendation of the Council on Artificial Intelligence.” 2019. https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449
[18] UNESCO. “Artificial intelligence in education” overview page. 2025. https://www.unesco.org/en/digital-education/artificial-intelligence
[19] Bloomfield, Martin. “Do Androids Dream of Dyslexic Sheep?” Talk and commentary references. 2025. https://www.linkedin.com/posts/martin-bloomfield-dyslexia-bytes_id-like-you-to-imagine-something-activity-7394617763324981250-7m1S
