A reckoning with the historical record, the OpenAI question, and what it all means for education
The question of what ethical AI means has always been contested. But in early 2026, that contest has moved from academic seminar rooms to Pentagon ultimatums and university procurement spreadsheets. What was theoretical is now operational. The Anthropic-Department of Defense standoff crystallised something the field has long circled around but rarely confronted directly: ethics in AI is not a set of shared principles waiting to be discovered and implemented in good faith. It is a site of active contestation between competing visions of power, accountability, and human dignity. And the consequences of that contestation are landing, right now, in every classroom and institution that has embedded these tools into the fabric of how learning happens.
Understanding what that means requires three things simultaneously: an honest reckoning with what automated systems have actually done when deployed at scale, a clear analysis of what the Anthropic and OpenAI cases reveal about the durability of corporate ethical commitments, and a sober assessment of what all of this means for the educational institutions that are now structurally dependent on these systems. The evidence on all three is now substantial enough to make confident claims.
The record
The documented history of automated decision-making systems is not a collection of isolated incidents. It is a coherent body of evidence revealing structural patterns that repeat across domains, geographies, and decades.
Begin with COMPAS, the recidivism prediction algorithm used to assess more than one million offenders across dozens of US states. ProPublica’s 2016 investigation of more than 7,000 defendants in Broward County, Florida, found that Black defendants faced a 44.9% false positive rate — nearly double the 23.5% rate for white defendants. A 2018 study published in Science Advances found that untrained crowdworkers matched the algorithm’s accuracy; a two-variable model performed just as well as the system’s 137 features. The algorithm’s overall accuracy of approximately 61% — barely above chance — was presented to judges as objective scientific assessment. A decade later, COMPAS remains in use.
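To make the headline numbers concrete, here is a minimal sketch of how a group-wise false positive rate of the kind ProPublica reported is computed. The data frame is synthetic and purely illustrative; it is not ProPublica’s data or code.

```python
# Minimal sketch with synthetic data: how a group-wise false positive rate
# is computed. Not ProPublica's dataset or methodology.
import pandas as pd

df = pd.DataFrame({
    "group":      ["black", "black", "black", "black", "white", "white", "white", "white"],
    "high_risk":  [1, 1, 0, 1, 1, 0, 0, 0],   # the algorithm's risk label
    "reoffended": [0, 1, 0, 1, 0, 0, 1, 0],   # observed two-year outcome
})

# False positive rate: among people who did NOT reoffend, the share the
# algorithm nonetheless labelled high risk.
did_not_reoffend = df[df["reoffended"] == 0]
fpr_by_group = did_not_reoffend.groupby("group")["high_risk"].mean()
print(fpr_by_group)
```

The disparity ProPublica documented is exactly this quantity, computed separately for Black and white defendants who did not go on to reoffend.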
Predictive policing followed the same trajectory. PredPol, adopted by dozens of US police departments from 2012, achieved a success rate of less than half a percent across more than 23,000 predictions. Chicago’s Strategic Subject List eventually covered nearly 400,000 people — including over half of all Black men aged 20 to 29 in the city. A RAND Corporation evaluation found no measurable effect on homicide victimisation. The feedback loop was the mechanism: the algorithm sent officers to already over-policed neighbourhoods, those officers found more crime there, and that data reinforced future predictions for the same areas.
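A toy simulation can make the mechanism concrete. The sketch below is a deliberately crude illustration of the feedback loop described above, with invented numbers and no claim to model PredPol or any real deployment: two neighbourhoods with identical underlying crime rates, but an initial patrol imbalance that the ‘predictions’ then amplify.

```python
# Toy feedback-loop simulation (illustrative assumptions only, not a model of
# PredPol or any real system). Crime is only recorded where officers are
# present to observe it, and next round's patrols follow recorded crime.
import random

random.seed(0)
true_crime_rate = {"A": 0.5, "B": 0.5}   # identical underlying rates
patrols = {"A": 10, "B": 2}              # historical policing imbalance
recorded = {"A": 0, "B": 0}

for _ in range(20):
    for hood in recorded:
        recorded[hood] += sum(
            random.random() < true_crime_rate[hood]
            for _ in range(patrols[hood])
        )
    total = recorded["A"] + recorded["B"]
    # "Prediction": allocate the next round's 12 patrol units in proportion
    # to recorded crime so far.
    patrols["A"] = max(1, round(12 * recorded["A"] / total))
    patrols["B"] = max(1, 12 - patrols["A"])

print(recorded)  # neighbourhood A ends up with far more *recorded* crime
```

Despite identical true rates, the neighbourhood that started with more patrols ends the simulation with several times the recorded crime, and that record is precisely the data a future model would be trained on.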
In healthcare, a 2019 Science paper by Ziad Obermeyer and colleagues analysed an algorithm used to manage care for approximately 70 million patients. The system used healthcare spending as a proxy for health needs. Because Black patients spent around $1,800 less per year than white patients with identical chronic conditions, the algorithm concluded they were healthier. Correcting the bias would have increased the share of Black patients receiving additional care from 17.7% to 46.5%. IBM spent over five billion dollars on acquisitions and partnerships for Watson Health before internal documents revealed that Watson for Oncology was giving, in the words of its own evaluators, unsafe and incorrect treatment recommendations. MD Anderson’s implementation never treated a single patient.
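The mechanism is easy to state in code. The fragment below is a hedged, synthetic illustration of the proxy problem Obermeyer and colleagues identified, not their model or data: when the training target is spending rather than health, two equally sick patients receive different ‘need’ scores, and the hypothetical enrolment cut-off used here is invented for the example.

```python
# Synthetic illustration of proxy-label bias (not the actual algorithm or
# data from Obermeyer et al.). The model's training target is spending,
# so lower historical spending reads as lower "need".
def predicted_need(annual_spend_usd: float) -> float:
    """Stand-in for a model trained to predict healthcare spending."""
    return annual_spend_usd  # the proxy is the prediction target

ENROLMENT_CUTOFF = 5_000  # hypothetical threshold for extra care management

patient_a = {"chronic_conditions": 4, "annual_spend_usd": 5_200}
patient_b = {"chronic_conditions": 4, "annual_spend_usd": 3_400}  # ~$1,800 less

for name, patient in (("A", patient_a), ("B", patient_b)):
    score = predicted_need(patient["annual_spend_usd"])
    flagged = score >= ENROLMENT_CUTOFF
    print(f"Patient {name}: {patient['chronic_conditions']} conditions, "
          f"score {score}, flagged for extra care: {flagged}")
```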
The privileged are processed by people. The masses are processed by machines. — Cathy O’Neil, Weapons of Math Destruction
Australia’s Robodebt scheme issued debt notices to 526,000 people using automated income averaging that fundamentally misrepresented casual and seasonal employment. Of $1.76 billion in alleged debts raised, $751 million was wrongfully recovered from 381,000 people. The 2023 Royal Commission found the scheme was ‘a crude and cruel mechanism, neither fair nor legal.’ Total settlements now exceed $2.3 billion. The UK Post Office Horizon scandal — accounting software defects that generated phantom financial shortfalls — resulted in more than 900 wrongful criminal convictions over 16 years, at least 13 suicides, and the deaths of over 60 victims before justice arrived. The Criminal Cases Review Commission called it the largest series of wrongful convictions in British legal history.
NIST’s 2019 Face Recognition Vendor Test — covering 189 algorithms tested against 18.27 million images — found that African American and Asian faces were 10 to 100 times more likely to be misidentified than white faces. Joy Buolamwini and Timnit Gebru’s Gender Shades study found IBM’s system had a 34.7% error rate for darker-skinned women and 0.3% for lighter-skinned men. Robert Williams was arrested in Detroit in front of his wife and children on the strength of a facial recognition match; he held the surveillance image next to his face and told officers, ‘This isn’t even me.’ He was right. At least seven similar cases have resulted in wrongful arrests in the United States; every victim has been Black.
Boeing’s 737 MAX MCAS system killed 346 people across two crashes. Knight Capital Group lost $440 million in 45 minutes from a software deployment error. Facebook’s recommendation algorithms amplified military anti-Rohingya hate speech on a platform that was, for most Myanmar users, effectively the entirety of the internet, while the UN was documenting genocide against that population. In military applications, Israeli intelligence officers reportedly reviewed AI-generated targeting recommendations for approximately 20 seconds per target before authorisation, with an acknowledged error rate of around 10%, treating machine output as a rubber stamp rather than a starting point for human judgment.
Five patterns that repeat
Across every domain reviewed, five structural patterns appear with enough consistency to constitute a diagnosis.
The costs fall on the most vulnerable. Every system documented here disproportionately harmed low-income people, racial minorities, women, disabled people, or some intersection. Algorithmic systems scale the reach of institutions whose existing practices were themselves discriminatory, encoding those practices into code and allowing organisations to disavow responsibility for outcomes they designed.
Feedback loops entrench discrimination. Predictive policing generates arrests in already over-policed communities, which generate data justifying future policing of those communities. Healthcare algorithms trained on historical spending data perpetuate the underinvestment in communities they were ostensibly designed to help. The cycle is self-reinforcing in ways that make post-deployment correction extremely difficult without structural intervention.
Opacity prevents accountability. COMPAS was withheld from defendants as a trade secret. The risk model behind the Netherlands’ SyRI welfare-fraud detection system was never disclosed to Parliament. Tesla’s Autopilot, by design, encouraged the very over-reliance its documentation warned drivers against. Opacity is not a technical limitation — it is frequently a deliberate design choice that serves the interests of system operators at the expense of those affected.
Automation bias overrides human judgment. Israeli intelligence officers deferred to AI targeting recommendations after 20-second reviews. Tesla drivers trusted Autopilot past its operational limits. Post Office managers trusted Horizon over sub-postmasters with decades of unblemished records. The evidence on automation bias is extensive: humans consistently overweight algorithmic outputs relative to their demonstrated accuracy, particularly under time pressure or organisational incentives to process cases quickly.
Claimed performance vastly exceeds demonstrated performance. COMPAS’s 137 features did not outperform a two-variable model. PredPol succeeded less than 0.5% of the time. IBM Watson spent five billion dollars producing recommendations its own evaluators called unsafe. Arvind Narayanan and Sayash Kapoor’s AI Snake Oil makes the essential distinction: the claims made for predictive AI applied to social outcomes routinely exceed what those systems can actually deliver.
The Anthropic moment
Against this backdrop, the Pentagon standoff of late February 2026 takes on a different character. On 28 February, Anthropic’s Claude climbed to the top of the US App Store. Hours earlier, the company had refused a Pentagon ultimatum: drop restrictions preventing Claude from being used in fully autonomous weapons systems and domestic mass surveillance, or be blacklisted from US government work. Anthropic refused. Defense Secretary Pete Hegseth designated the company a supply chain risk. OpenAI signed a Pentagon contract hours later. The public surge toward the company that had drawn the line was not accidental.
Dario Amodei’s public position was technical rather than ideological. Large language models are statistical prediction engines trained on human-generated text, not real-time situational awareness systems. Their demonstrated tendency to escalate wargame scenarios toward nuclear options reflects not malice but architecture: they pattern-match against the scenarios most represented in their training data, and fiction and commentary about nuclear deterrence are well represented there. His position on domestic surveillance was equally precise: AI systems can piece together individually innocuous data into comprehensive profiles of a person’s life at a speed and scale existing legal frameworks were not designed to address.
The Pentagon’s position was self-contradictory: one action designated Anthropic a national security risk, the other labelled Claude so essential that losing it threatened national security. You cannot coherently hold both positions simultaneously.
The standoff matters beyond the immediate contract dispute because it made structurally visible what the historical record had already documented: ethical constraints on automated systems tend to be dismantled when they conflict with the interests of powerful institutional clients. Anthropic held the line at significant cost. The question is what happened next — because what happened next is where the story for education really begins.
Then OpenAI stepped in — and the complications multiplied
Sam Altman had said, both publicly and in an internal memo to his own staff on the Thursday before the deadline, that OpenAI shared Anthropic’s red lines: no autonomous weapons, no domestic mass surveillance. Then, hours after Anthropic was blacklisted, OpenAI announced a deal with the Pentagon under terms allowing its models to be used for ‘any lawful purpose.’
Altman subsequently acknowledged that the deal was ‘rushed’ and ‘looked opportunistic and sloppy.’ The contract was amended twice in the following days after public backlash. As of writing, the full text has still not been made public. The Center for Democracy and Technology identified five unresolved issues in the contract’s published terms. Most critically, OpenAI’s surveillance protections are framed around compliance with existing US law — including Executive Order 12333, the same legal framework that, as the Snowden revelations demonstrated, did not prevent the NSA from collecting the phone records of millions of Americans. As the Techdirt analysis put it, EO 12333 is ‘how the NSA hides its domestic surveillance by capturing communications by tapping into lines outside the US even if it contains info from or on US persons.’
Former Army general counsel Brad Carson, who served as under secretary of the Army during the Obama administration, was unsparing: ‘I’ve reluctantly come to the conclusion that this provision doesn’t really exist, and they are just trying to fake it.’ OpenAI’s national security lead, asked directly to release the specific contract language protecting against surveillance, replied: ‘I do not agree that I’m obligated to share contract language with you.’ Caitlin Kalinowski, who led hardware and robotics at OpenAI, resigned on 7 March. Her public statement named the issues directly: surveillance of Americans without judicial oversight and the development of lethal autonomous systems were not uses she could support.
The question the OpenAI deal raises is not whether Altman had bad intentions. It is whether contract language without public accountability, negotiated under enormous commercial and political pressure, constitutes a meaningful ethical safeguard. The historical record on voluntary corporate ethics in AI answers that question clearly, and is taken up in detail below: Google fired the leads of its Ethical AI team when their findings became commercially inconvenient, Microsoft disbanded its Ethics and Society team while pouring $11 billion into OpenAI, and Meta disbanded its Responsible Innovation team. The pattern is consistent: ethics infrastructure survives until it no longer serves commercial interests.
OpenAI said it shared Anthropic’s red lines. Then it signed a deal under ‘any lawful purpose.’ Then it amended the deal twice. Then it still wouldn’t release the contract. This is what voluntary corporate ethics looks like under pressure.
Now bring this into education — because it is already there
This is where the Pentagon story becomes an education story, and where those working in and around learning institutions need to look clearly at what is happening.
OpenAI has sold over 700,000 ChatGPT licences to approximately 35 US public universities. On 20 campuses tracked by Bloomberg, ChatGPT was used more than 14 million times in September 2025 alone. Across the US, nearly 90% of school students report using AI for schoolwork, with 29% relying on it daily. ChatGPT dominates at 74% usage, with Copilot at 29% and Claude at 25%; the shares overlap because many students use more than one tool. These are not marginal experiments. This is mainstream infrastructure.
Microsoft 365 Copilot — powered by OpenAI models — reaches students and staff through the Microsoft 365 suite already in daily use across most universities: Word, Excel, PowerPoint, Teams, and Outlook. Where institutions have deployed the Microsoft 365 LTI, which became generally available in September 2025, Copilot also surfaces within Office documents opened through LMSes including Canvas, Blackboard, Moodle, and Brightspace — meaning staff and students can use Copilot inside Word or PowerPoint files without leaving those platforms. The integration means Copilot is not a discrete add-on that can easily be switched off; it is woven into the document editing and assignment workflows that teaching and administration already run on. Microsoft has offered free M365 tools to Washington State schools for three years and co-funded a $23 million National Academy for AI Instruction alongside OpenAI and Anthropic. It positions itself as a ‘school official’ with ‘legitimate educational interests’ under FERPA, the US federal student privacy law.
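For readers unfamiliar with what an LTI integration actually passes between systems, the sketch below shows the general shape of the identity and context claims a generic LTI 1.3 resource-link launch sends from an LMS to an external tool. The claim URIs come from the 1EdTech LTI 1.3 specification; the values are invented, and this is not a representation of Microsoft’s payload or contract terms, only an indication of the kind of student-identifying data that crosses the boundary.

```python
# Illustrative only: the general shape of an LTI 1.3 resource-link launch
# payload (claim URIs from the 1EdTech specification; values invented).
# This is NOT Microsoft's actual integration or data contract.
example_launch_claims = {
    "iss": "https://lms.example.edu",      # the launching LMS (platform)
    "sub": "user-4821",                    # stable identifier for the student
    "https://purl.imsglobal.org/spec/lti/claim/message_type": "LtiResourceLinkRequest",
    "https://purl.imsglobal.org/spec/lti/claim/version": "1.3.0",
    "https://purl.imsglobal.org/spec/lti/claim/roles": [
        "http://purl.imsglobal.org/vocab/lis/v2/membership#Learner",
    ],
    "https://purl.imsglobal.org/spec/lti/claim/context": {
        "id": "course-101",
        "title": "Introduction to Sociology",   # the course the launch came from
    },
}
```

Even this minimal launch ties a persistent student identifier to a course context; whatever the student then does inside the embedded tool is associated with that identity.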
Copilot is powered by OpenAI models. The same company that just signed a military contract whose full text is not public, whose protections were amended twice after public outcry, whose own senior employees resigned over the ethics of the arrangement. That is not a reason to abandon these tools — students are already using them, and the learning and productivity benefits are real. It is a reason to ask questions that most institutions are not yet asking at all.
The specific question is this: FERPA was designed for a world where student data was held by administrative offices. It was not designed for an environment where the AI model processing student writing, learning behaviours, and institutional records is simultaneously deployed in classified military intelligence environments under contract terms that have not been made public. Microsoft states it does not scan institution emails or documents for advertising purposes and that student data is not used for commercial purposes. These are meaningful commitments. But the regulatory surface has not kept pace with the operational reality of what these systems now are and where they operate.
Karen Hao’s Empire of AI documents how the companies building these systems have consistently prioritised power accumulation over the welfare of the populations using their products. The outsourced content moderators paid under two dollars an hour in Kenya, the data centres built in water-scarce communities, the researchers fired when their findings became inconvenient — these are not peripheral failures. They are the operational logic of organisations that answer primarily to investors and, increasingly, to governments with the leverage to extract compliance. Educational institutions are not the primary constituency of either. Their students are not the relationship that gets protected when something has to give.
The American model: dominance as doctrine
The Trump administration’s approach to AI governance is best understood not as deregulation but as a particular form of interventionism cloaked in deregulatory language. In January 2025, Executive Order 14179 revoked Biden’s AI safety framework and reoriented federal policy around a single principle: American global dominance. In July 2025, America’s AI Action Plan formalised this under a ‘build, baby, build’ mandate, including instructions to revise the NIST AI Risk Management Framework to remove references to equity, fairness, and bias. In December 2025, a second executive order moved to preempt state-level AI regulation, threatening to withhold $42 billion in broadband infrastructure funding from states that pursued their own oversight frameworks.
The administration simultaneously acquired equity stakes in semiconductor companies, designated Anthropic a supply chain risk when it declined military demands, threatened to invoke Cold War industrial mobilisation law to compel a private company to surrender its model, and created an AI Litigation Task Force within the Department of Justice to challenge state AI laws in court. The journal Science noted in early 2026 that the US government had taken ownership positions in at least nine firms across semiconductors, critical minerals, and nuclear energy in six months — industrial policy of a scope that would have been inconceivable from a party historically committed to free markets.
What this model does to ethics is straightforward: it eliminates ethics as a category distinct from national interest. If the government deems a use lawful, the company’s obligation is to comply. The administration’s revision of the NIST framework to remove bias and equity considerations is not a neutral technical decision — it is a policy choice to remove the conceptual vocabulary through which the documented harms reviewed earlier were identified and challenged.
The geopolitical consequences compound the domestic ones. When allied governments observe the US prepared to invoke the Defense Production Act against its own AI companies, the implicit message is that any AI system built by an American firm is ultimately subject to US government override regardless of contractual terms. That perception creates structural incentives for other governments to reduce dependence on American AI. China’s strategy of open-source diffusion — offering compute infrastructure, training, and models as a package, particularly into the Global South — becomes comparatively more attractive precisely because it appears less encumbered by the kind of sudden political reversal American companies have now demonstrated is possible.
The European model: rights as architecture
The EU AI Act, which entered into force in August 2024 and has been phasing in obligations since February 2025, operates from the opposite premise. Where the American model treats ethics as a matter of authorisation, the European model treats it as a matter of architecture.
The risk-based structure maps directly onto the documented failure record. Predictive policing based solely on profiling is prohibited outright — addressing the entire documented history of systems like PredPol and Chicago’s Strategic Subject List. Real-time biometric identification in publicly accessible spaces is banned. Social scoring by public authorities is prohibited — addressing the documented logic of the Netherlands’ SyRI system. High-risk systems, including those used in employment, healthcare, law enforcement, and education, face mandatory bias testing, conformity assessments, technical documentation, human oversight requirements, and registration in a public database before deployment. Penalties reach 35 million euros or 7% of worldwide annual turnover.
For education specifically, the Act classifies AI systems that influence student outcomes as high-risk — meaning they face the full weight of conformity requirements before deployment. This gives European educational institutions a legal framework for demanding accountability from vendors that institutions in most other jurisdictions simply do not have. The question of whether Copilot or ChatGPT’s contract terms adequately protect student data is not, in Europe, purely a matter of vendor assurance. It is subject to legal obligation, audit rights, and enforcement.
The EU model has genuine limitations. Implementation is uneven: as of early 2026, only three member states had designated both notifying and market surveillance authorities, while fourteen had established no competent authority at all. The harmonised standards needed to demonstrate compliance with high-risk AI provisions do not yet exist. The EU is also a net importer of frontier AI capability — its leverage is as a market rather than as a technology producer, and that leverage diminishes if compliance costs drive development to less-regulated jurisdictions. But these are implementation challenges, not principled objections.
Why corporate self-regulation has already failed
The evidence that voluntary corporate ethics cannot substitute for enforceable law is now comprehensive. Google’s Advanced Technology External Advisory Council was disbanded within one week in March 2019. In December 2020, Google fired Timnit Gebru, co-lead of its Ethical AI team, after she co-authored a paper documenting risks of large language models. Her colleague Margaret Mitchell was fired months later. After ChatGPT’s launch, Google declared a competitive emergency and ethics staff were reportedly told to accept compromises to accelerate releases. Microsoft eliminated its entire Ethics and Society team in March 2023 while simultaneously investing $11 billion in OpenAI. Meta disbanded its Responsible Innovation team in 2022. A review of over 200 AI ethics guidelines worldwide found 98% were soft law with no legal obligation.
The OpenAI Pentagon deal adds a new chapter to this record. A company that had publicly committed to shared red lines signed a deal within hours of a competitor’s blacklisting, subsequently amended it twice under public pressure, still refuses to release the full contract text, and lost a senior leader who cited those precise red lines as her reason for leaving. This is not a failure of individual ethics. It is the predictable outcome of an institutional structure where commercial pressure, investor expectations, and government leverage all push in the same direction, and where no binding external accountability mechanism exists to push back.
Every major technology company that established AI ethics infrastructure subsequently dismantled or defunded it when those teams’ findings conflicted with commercial priorities. This is not coincidence. It is what voluntary self-regulation looks like at scale.
The jurisdictions that have achieved accountability did so through legal and judicial mechanisms: the Dutch court ruling on SyRI under the European Convention on Human Rights, Australia’s Robodebt Royal Commission, the UK Parliament’s emergency legislation quashing Post Office convictions. Corporate self-governance produced none of these outcomes. In each case, accountability arrived despite the organisations responsible, not because of them.
What this demands of education institutions specifically
For those working in education, the practical implications are more immediate than the policy debate might suggest. The question is not whether to use these tools — students are already using them, and the learning benefits of well-implemented AI assistance are real and documented. The question is whether institutions understand the ethical architecture of the tools they are building their infrastructure around, and whether they have any meaningful capacity to hold that architecture to account.
Right now, most do not. Procurement decisions for AI tools are made on feature sets, pricing, and existing vendor relationships. The ethical stance of the underlying AI company — its relationship to military and intelligence clients, its track record when commercial pressure conflicts with stated principles, the completeness and public accountability of its ethical commitments — is not a standard procurement criterion. The OpenAI-Pentagon sequence suggests it should be.
Specific questions worth asking before or alongside deployment include: What are the published terms governing how student data is used across all of a vendor’s deployments, including military and intelligence contracts? What happens to those terms if the vendor comes under government pressure to modify them? What oversight mechanisms exist within the institution to monitor AI use, flag anomalies, and ensure human judgment remains genuinely in the loop for consequential decisions? And what would it mean, in practice, for a student or member of staff to contest an AI-influenced outcome that affected them?
Institutions in EU jurisdictions have legal tools for some of these questions. Those outside the EU are, for the most part, operating on vendor assurance alone. That is a significant vulnerability, and the events of late February and early March 2026 have made it visible in a way that is hard to ignore.
The app store surge toward Anthropic tells you something real: there is public appetite for AI companies that hold principled lines even at significant commercial cost. Students and educators saw what happened and responded with their choices. That preference will not automatically translate into institutional procurement decisions. It requires those working in and around education — administrators, academic leads, technology officers, governance bodies — to treat AI vendor ethics as a first-order consideration rather than a reputational footnote.
What a framework that holds would require
The historical record, the current policy divergence, the Anthropic case, and the OpenAI complications together suggest what a more durable framework would need to address.
Technical safety arguments must be separated from political ones. Amodei’s case against autonomous lethal AI was primarily technical: large language models are not designed for real-time targeting decisions and demonstrate known failure modes in adversarial conditions. The record on automation bias alone justifies mandatory human oversight requirements in high-stakes decisions. The record on claimed versus actual performance justifies mandatory pre-deployment testing and public disclosure of results. These are not ideological positions. They are empirical findings with direct governance implications.
The question of who can attach conditions to powerful AI systems is the structural question neither model has fully resolved. If governments can compel companies to strip ethical constraints from their models, those constraints are ultimately decorative. But if private companies can unilaterally determine which government uses of their technology are acceptable, accountability runs to corporate founders rather than democratic processes. The EU model’s attempt to make compliance a matter of law is the more structurally coherent approach. Its limitations are those of implementation and enforcement, not of underlying logic.
For education specifically, the case for sector-specific regulation — going beyond what the EU AI Act currently requires — is strong. The combination of student data, developmental influence on young people, institutional authority over academic outcomes, and increasing dependence on a small number of AI vendors creates a vulnerability profile that general-purpose AI governance frameworks were not designed to address. The events of 2026 have made that gap concrete.
The populations most vulnerable to algorithmic harm — the ones documented across every case reviewed here — are consistently those with the least capacity to advocate for governance frameworks that protect them. Students, particularly international students and those from lower-income backgrounds who are most dependent on institutional AI tools, are not the primary constituency of the companies building these systems, the governments procuring them, or the investors funding them. Someone needs to be.
The stakes
Karen Hao’s Empire of AI argues that AI companies function as modern empires, amassing power through the dispossession of the majority. The evidence reviewed here supports a more specific version of that claim: automated decision-making systems have consistently transferred risk from institutions to individuals, transferred accountability from decision-makers to algorithms, and transferred costs from the organisations deploying these systems to the people least able to bear them.
The Anthropic standoff matters not because Anthropic is a perfect actor, but because it made visible the structural question the documented record makes urgent: what happens when the most powerful AI systems in the world are deployed in contexts where the ethical commitments of their builders conflict with the demands of their most powerful clients? The EU’s answer is to settle that question in law before it becomes unavoidable in practice. The American answer, at present, is that the question should not constrain deployment. The OpenAI deal demonstrates what happens in the space between those two positions: rushed decisions, public backlash, amended contracts, undisclosed terms, and resigned engineers.
History suggests the American answer will not hold. The Boeing 737 MAX killed 346 people in part because delegated oversight allowed the manufacturer to certify its own systems. The Post Office maintained that Horizon was infallible for sixteen years while sub-postmasters were convicted of crimes the software had fabricated. Facebook knew its algorithms were causing harm to teenage girls and chose to suppress the research. In each case, the organisation with the most to gain from continued deployment made the determination about acceptable risk, and the people bearing the actual risk had no meaningful voice.
That is the condition the current US approach to AI governance replicates at scale — and, unless education institutions actively choose otherwise, the condition that increasingly governs the tools shaping how the next generation learns, writes, researches, and thinks.
The question of ethical AI has always been, at bottom, a question about power: who gets to set the terms, who bears the costs when those terms fail, and whether the people affected have any meaningful say in the matter. Nothing in the technical architecture of these systems resolves that question. The Pentagon standoff made it impossible to avoid. What happens next — in policy, in procurement, in institutional governance — will determine whether the education sector treats it as such, or waits for its own version of the Post Office scandal to make the answer unavoidable.
Sources
This essay synthesises original research and reporting from the following:
- ProPublica, Machine Bias investigation (2016)
- Dressel and Farid, Science Advances (2018)
- Obermeyer et al., Science (2019)
- NIST Face Recognition Vendor Test (2019)
- Buolamwini and Gebru, Gender Shades (2018)
- Royal Commission into the Robodebt Scheme, Australia (2023)
- Criminal Cases Review Commission, UK Post Office Horizon cases
- The Markup, analysis of PredPol predictions
- Cathy O’Neil, Weapons of Math Destruction (2016)
- Virginia Eubanks, Automating Inequality (2018)
- Arvind Narayanan and Sayash Kapoor, AI Snake Oil (2024)
- Karen Hao, Empire of AI (2025)
- Kate Crawford, Atlas of AI (2021)
- Safiya Umoja Noble, Algorithms of Oppression (2018)
- AI Now Institute reports (2018–2023)
- EU AI Act, official documentation
- US Executive Order 14179 (January 2025)
- US Executive Order, Ensuring a National Policy Framework for Artificial Intelligence (December 2025)
- OpenAI Pentagon deal reporting — TechCrunch, NBC News, Axios, The Intercept, Built In (February–March 2026)
- Anthropic-Pentagon standoff reporting — NPR, CNBC, CBS News, Washington Post, Axios (February–March 2026)
- Microsoft 365 Copilot in education — Microsoft Education Blog (October 2025), Microsoft 365 LTI general availability announcement (September 2025), BETT 2026 reporting
- Bloomberg, campus AI adoption data (December 2025)
- Copyleaks, AI in Education Trends (2025)
Written March 2026.