The AI Literacy Scam: Why America Is About to Spend Billions Solving a Problem That Mathematics Already Solved
Let me say something that will make many people in education technology uncomfortable.
The entire “AI literacy” movement — the mandates, the new curricula, the vendor pitches, the frantic legislative scramble — is building a solution to a problem we manufactured by teaching mathematics badly for fifty years.
Thanks for reading Dr.’s Substack! Subscribe for free to receive new posts and support my work.
That’s the argument. And I’m going to make it with evidence, not opinion.
The Setup
In 2029, the OECD will administer PISA’s first-ever Media and Artificial Intelligence Literacy (MAIL) assessment. Fifteen-year-olds across participating nations will be tested on their ability to engage effectively, ethically, and responsibly with AI systems. This isn’t optional enrichment. This is an internationally benchmarked competence, and every participating country will receive a public score.
The United States is already behind. As The 74 put it bluntly: America is about to be graded on AI literacy, and we are not prepared. The predictable machinery has begun to move — state legislatures drafting mandates, districts shopping for packaged AI curricula, edtech companies retooling their marketing decks.
Here’s what nobody seems willing to say out loud:
The competencies MAIL will measure are not new. They are not exotic. They are not uniquely “AI.” They are, almost point by point, the competencies that mathematics has always been responsible for developing — when mathematics is taught as something other than a compliance exercise.
What MAIL Actually Measures (Read This Carefully)
The MAIL framework assesses five process categories: access and use information; analyze and evaluate; create; participate and collaborate; and a cross-cutting dimension of reflecting and acting ethically and responsibly.
Now here’s what PISA already defines as mathematical literacy: the capacity to reason mathematically, to formulate and employ and interpret mathematics in real-world contexts, and to make well-founded judgments as a reflective citizen.
Read those again. Slowly.
The architecture is functionally identical. One is labeled “AI literacy.” The other has been called “mathematical literacy” for over two decades. The difference is branding.
The EC-OECD’s AI Literacy Framework (AILit) makes the connection even more damning for the “new subject” crowd. It defines AI literacy as the knowledge, durable skills, and attitudes needed to engage with, create with, manage, and design AI — while critically evaluating its benefits, risks, and ethics. And then — in what should be a headline for every curriculum director in the country — it explicitly names the primary implementation barrier: uncertainty about where AI literacy fits in existing subject areas.
I’ll tell you where it fits. It fits in mathematics. It has always fit in mathematics. The reason nobody sees it is that we’ve spent decades stripping mathematics of everything that would make the connection obvious.
How We Broke Math (and Why That Matters Now)
Here is the uncomfortable history.
Mathematics, as a discipline, is fundamentally about sensemaking, orientation, and modeling — constructing representations of reality, testing them against evidence, revising them, and reasoning about their limitations. That’s not my definition. That’s the definition embedded in PISA’s mathematical literacy framework. It’s the definition implied by the Common Core Standards for Mathematical Practice. It’s the definition articulated by the National Academies in their five-strand model of mathematical proficiency: conceptual understanding, procedural fluency, strategic competence, adaptive reasoning, and productive disposition.
But that is not how most American students experience mathematics.
What they experience is procedural performance. Memorize the steps. Apply the algorithm. Get the answer. Move on. Mathematics taught this way produces students who can compute but cannot reason, who can follow procedures but cannot evaluate whether those procedures are appropriate, who can get correct answers to questions they were never taught to ask.
This version of mathematics — the one most of us grew up with — could never have served as a foundation for AI literacy. Not because mathematics lacks the capacity, but because we surgically removed the parts that matter. We kept the skeleton and threw away the living organism.
And now, when the world demands people who can evaluate AI-generated claims, reason about data quality, interrogate model assumptions, and make ethical judgments about automated systems — we look around the school building and conclude that nobody teaches those things.
We’re wrong. We just stopped letting math teachers do it.
What “Mathematics as a Creative Force” Actually Looks Like
I use this phrase deliberately: mathematics as a creative force. Not as a slogan. As an operational definition — teachable, observable, assessable. I’ve built a community of educators around this definition.
When mathematics operates as a creative force in the classroom, students are doing five specific things:
They are posing and refining questions about patterns, quantities, and relationships — treating problem formulation as part of the intellectual work, not as something that arrives pre-packaged.
They are creating, testing, and revising models — symbolic, graphical, computational, simulation-based — to explain, predict, or decide under real constraints. And they understand that models are human-made and revisable.
They are arguing from evidence and structure — constructing and critiquing reasoning, including evaluating when methods are appropriate and what limitations follow from assumptions or data quality.
They are iterating strategically with tools — including AI tools — not as answer machines but as environments for exploration, calculation, simulation, and evaluation. This also means calibrating their own senses, the oldest instruments humans have for smelling, touching, and intuiting their orientation in the world. The tools support insight; they do not replace judgment.
And they are exercising agency and responsibility — linking mathematical results to context, interpreting consequences, identifying risks such as bias or overconfidence, and deciding which actions are warranted.
Every single one of these practices maps directly onto MAIL competencies. Every single one is grounded in existing mathematics standards. And every single one is absent from most American math classrooms — not because the standards don’t call for them, but because our instructional culture abandoned them in favor of speed, coverage, and testable procedures.
The Billion-Dollar Distraction
Here is where I’ll say the thing that edtech won’t like.
If you are a state legislator writing an AI literacy mandate that creates a new, standalone course, you are about to spend enormous resources building parallel infrastructure for competencies that already have a home. You will hire new specialists. You will purchase new curricula. You will create new assessment frameworks. And in ten years, you will have an AI literacy course that reaches some students in some schools, built around AI models that no longer exist — while every student in every school continues to sit through mathematics instruction that develops none of the capacities you’re trying to build.
This is not hypothetical. It is the exact pattern we’ve watched play out with financial literacy, media literacy, digital citizenship, and coding. New mandate, new course, uneven adoption, inequitable access, negligible systemic impact.
The alternative is to do the harder, better thing: transform mathematics instruction itself through the people teaching it. Not by adding AI topics to math, but by teaching math as what it actually is — a discipline of sensemaking, orientation, reasoning, evaluation, and ethical judgment. When you do that, AI literacy isn’t a separate outcome. It’s an inherent byproduct.
The Evidence Already Exists
Let me be clear: I’m not asking anyone to take this on faith.
Decision-tree instruction using CODAP and Jupyter environments has demonstrated that secondary students can build predictive models and evaluate their performance — using probability, conditional reasoning, and data representation that are already part of the mathematics curriculum. Bootstrap’s Data Science curriculum has shown that AI-relevant learning — facility with data, statistical reasoning, societal implications — happens naturally through algebra, functions, and statistics. Not through a separate AI course. Through math, taught well.
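The kind of classroom task this describes can be sketched in a few lines. Here is a minimal, invented example — the dataset, thresholds, and names are mine for illustration, not drawn from any named curriculum: students fit a one-question “decision stump” to a small weather dataset and evaluate it with the same conditional proportions they already compute in a statistics unit.

```python
# Hypothetical classroom exercise (all data invented): predict whether it
# will rain from humidity using a one-question "decision stump," then
# evaluate it with proportions students can compute by hand.

# (humidity, rained?) observations
data = [(81, True), (75, True), (62, False), (90, True),
        (55, False), (70, False), (88, True), (60, False)]

def stump(threshold):
    """Predict rain whenever humidity exceeds the threshold."""
    return lambda humidity: humidity > threshold

def accuracy(model, rows):
    """Fraction of observations the model classifies correctly."""
    return sum(model(h) == rained for h, rained in rows) / len(rows)

# Students try candidate thresholds and keep the best separator.
best = max(range(50, 95, 5), key=lambda t: accuracy(stump(t), data))
model = stump(best)

# Conditional evaluation: P(correct prediction | it actually rained).
rainy_days = [(h, r) for h, r in data if r]
recall = accuracy(model, rainy_days)

print(best, accuracy(model, data), recall)
```

Nothing here exceeds middle-school probability and data representation; the point is that model-building and model-evaluation are already mathematics.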
The GAISE II framework from the American Statistical Association lays out a pre-K–12 progression for statistical literacy that foregrounds multivariate thinking, technology-infused analysis, and assessments measuring conceptual understanding — precisely the capacities needed to understand how AI systems learn from data and why they fail. California’s mathematics framework has already moved in this direction, with explicit attention to data science foundations across K–12. To be fair, they have also faced intense opposition.
The question is not whether the evidence supports anchoring AI literacy in mathematics. The question is why we keep ignoring evidence in favor of shiny new programs.
The Equity Argument (The One Nobody Can Dismiss)
Here’s the argument that should end the debate, even if the structural and evidentiary ones don’t. If AI literacy is a new, standalone subject, it will be distributed the way every new subject is distributed: unevenly. Affluent districts will adopt it first. Well-resourced schools will staff it. Everyone else will wait. AI literacy will become another ZIP-code-determined opportunity — another sorting mechanism in a system already optimized for sorting.
The U.S. Department of Education’s own AI guidance frames this moment as an inflection point that can either widen or narrow disparities, depending on policy choices. Commentary around the PISA MAIL assessment has already flagged uneven local capacity as a central risk.
Mathematics is the structural answer to this problem. It is a universal entitlement in K–12 scheduling. Every student takes it. Every year. In every school. If AI literacy lives inside mathematics, it cannot be rationed. It cannot be reserved for the schools with the budget for a new elective. It travels with the student because the subject already does.
But — and this is the critical caveat — equity is only achieved if instruction actually transforms. Anchoring AI literacy in a mathematics classroom that still runs on worksheets and procedural drilling accomplishes nothing. The transformation has to be real: toward modeling, data investigation, evidence-based reasoning, and creative problem-solving.
That requires investment in the people who choose to teach. Not technology. People.
The money we’re about to spend on standalone AI literacy programs? Spend it there instead.
Six Policy Moves That Would Actually Work
If policymakers are serious — and not just performing urgency — here’s what a coherent strategy looks like:
One. State mathematics standards should explicitly name AI-relevant competencies within existing strands: data reasoning, model evaluation, uncertainty quantification, and fairness metrics inside statistics, probability, functions, and modeling.
Two. States and districts should develop mathematics performance task banks that integrate AI-generated output evaluation, data investigation, and ethical reasoning — aligned with MAIL’s competency categories.
Three. Fund sustained professional learning that builds teacher capacity in mathematics for teaching. This level of sophistication cannot be bought through one-off workshops; it requires ongoing, emergent, practice-embedded development that is sensitive to the contexts in which teachers teach.
Four. Provide vetted tools and datasets that enable reproducible student investigations — environments like CODAP that emphasize interpretability and student agency rather than black-box automation.
Five. Require district AI plans to address access, bias, privacy, and transparency — consistent with federal guidance on equity and data quality.
Six. Coordinate across subjects so that MAIL competencies appear in ELA, science, and social studies — a true recognition of the embedded and interdisciplinary nature of intellectual thought.
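To make move one concrete: “fairness metrics inside statistics” asks for nothing beyond comparing proportions. A minimal sketch — the approval data below is invented for a hypothetical classifier; nothing here comes from a real system — computes a demographic parity gap:

```python
# Hypothetical loan-approval decisions from some classifier, recorded for
# two groups (1 = approved, 0 = denied). All values are invented.
approvals = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],  # 7 of 10 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],  # 4 of 10 approved
}

def approval_rate(decisions):
    """Proportion of positive decisions: an ordinary sample proportion."""
    return sum(decisions) / len(decisions)

rate_a = approval_rate(approvals["group_a"])
rate_b = approval_rate(approvals["group_b"])

# Demographic parity gap: the difference between the two groups' rates.
# A gap of 0.0 would mean both groups are approved at the same rate.
parity_gap = abs(rate_a - rate_b)

print(rate_a, rate_b, parity_gap)
```

The mathematics is a difference of two proportions — standards-level statistics; what changes is the question students are asked to pose about it.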
None of this requires a new subject. All of it requires taking an ancient subject seriously.
The Story We Need to Stop Telling
There’s a story the education system has been telling itself for decades: that mathematics is a technical subject for technical people, a gatekeeper discipline defined by right answers and procedural speed. That story produced generations of students who learned to fear mathematics or learned to compute but never to reason. It is the reason we now find ourselves believing that we need an entirely new field — AI literacy — to teach capacities that mathematics was designed to develop.
The story was always wrong. Mathematics is a creative discipline. It is the science of modeling, the art of structured reasoning, and the practice of making judgments under uncertainty. It is, when taught with integrity, the single most powerful preparation for navigating a world increasingly shaped by artificial intelligence.
We don’t need a new story. We need to stop telling the old one.
And then we need to do what storytellers have always done: tell a truer one. One where every student — regardless of ZIP code, background, or perceived ability — gets access to the kind of mathematical thinking that makes AI comprehensible, evaluable, and accountable. That’s the work. It’s not glamorous. It doesn’t come in a vendor package. And it will take longer than a legislative session.
But it’s the only version of AI literacy that actually works. And it’s the only one that reaches everyone at a speed that will matter.