Math Doesn't Need a "Science of Reading" Moment. It Needs a Definition.

The "AGI problem" has a direct parallel in mathematics education: everyone is arguing about how to teach something nobody has bothered to define.

Right now, the artificial intelligence industry is consumed by a debate about AGI — Artificial General Intelligence. Nvidia’s Jensen Huang declared in March 2026 that “we’ve achieved AGI.” Sam Altman of OpenAI has said his company “built AGIs.” Anthropic’s Dario Amodei has warned that human-level AI could arrive within a few years. Meanwhile, a coalition of researchers led by Dan Hendrycks at the Center for AI Safety published a framework showing that the most capable AI system they tested scored just 57 percent against the standard of a well-educated adult — far short of general intelligence by any rigorous measure.

So who is right?

The answer is: it depends entirely on what you mean by AGI. And that is exactly the problem. There is no consensus definition. Huang was responding to a definition about whether AI could start a billion-dollar company. Altman has called AGI “a very sloppy term.” Researchers insist the original meaning involved robust, flexible competence across novel environments — not benchmark performance on curated tasks. Everyone is passionately arguing about whether we’ve reached a destination, but nobody has agreed on where the destination is.

This should sound familiar to anyone working in mathematics education.

The Missing Question

In a recent Education Week essay titled “Math Needs Its ‘Science of Reading’ Moment,” educational psychologist Dr. Danielle Hankins argues that mathematics instruction should follow the same corrective arc that reshaped literacy education. The core claim is that discovery-first instruction overloads students’ working memory, and that explicit modeling, guided practice, and procedural fluency should come first. Inquiry, she argues, should rest on a foundation of fluency.

The cognitive science Hankins cites is legitimate. Working memory is limited. Novice learners do benefit from structure. Cognitive load is a real phenomenon. None of that is in dispute.

What is missing from the argument — and from nearly every article in the current wave of “science of math” advocacy — is a more fundamental question:

What is mathematics?

Not “what are the best methods for teaching math.” Not “what does cognitive science say about memory.” The prior question. The one that must be answered before any instructional prescription makes sense.

What is this discipline? What is its nature? What is its purpose in a child’s education? And how would we know if a student had genuinely learned it? Has that definition changed in the age of intelligence? Should it?

Without answers to these questions, the entire debate about explicit instruction versus discovery is, to borrow from the AGI discourse, an argument about whether we’ve reached a destination that no one has defined.

The AGI Pattern

The parallel is worth examining closely, because the AGI debate reveals what happens when a field skips the definitional work.

In artificial intelligence, the absence of a shared definition has produced several distortions. First, it allows people with very different standards to claim the same conclusion. Jensen Huang says AGI is here because AI can generate economic value. Researchers say it is not here because current systems lack transfer, adaptability, and reasoning under novelty. Both are correct within their own definitions — and both are talking past each other.

It is worth asking, as well, what incentives are driving those definitions.

Second, it makes measurement impossible. If you cannot define what you are looking for, you cannot design a test that would confirm or deny its presence. Benchmarks become proxies for the real thing, and then the proxies become the thing itself. Performance on curated tasks gets confused with general competence.

Third, it collapses important distinctions. In AI, the distinction between narrow task performance and general intelligence blurs. In math education, the difference between procedural execution and mathematical understanding gets blurred in exactly the same way.

The Same Collapse in Mathematics

The Hankins essay, like much of the current discourse, treats “mathematics” as though it were a settled, self-evident category — as though everyone already agrees on what the word refers to. But they do not. And this ambiguity is doing real damage: to the conversation, to the people asked to teach mathematics, and to the children trying to learn it.

Consider the essay’s central recommendation: that students should achieve “procedural fluency” before engaging in inquiry. This sounds reasonable until you ask what “procedural fluency” means in context. Does it mean a student can execute a standard algorithm quickly and accurately? Or does it mean a student understands why a procedure works, when it applies, and how it connects to other ideas in the discipline?

These are radically different targets. The first is a behavioral outcome. The second is a mathematical one. And the instructional design that serves one may actively undermine the other.

When the essay calls for “clear modeling” and “sufficient practice to make understanding last,” what understanding is being referred to? Understanding of the steps in a procedure? Or understanding of the mathematical concept that the procedure is meant to represent?

This is not a semantic quibble. It is the central question. And it is the question that the “science of math” movement consistently declines to answer.

The Complexity That Gets Erased

The reason this question keeps getting skipped is that answering it honestly would reveal something inconvenient: learning mathematics is not a complicated problem. It is a complex one. And those are fundamentally different kinds of challenges.

Dave Snowden’s Cynefin framework — a sense-making and decision-making model developed from decades of research in systems theory, complexity science, and organizational decision-making — draws a sharp distinction between the complicated and the complex. Complicated problems, like engineering a bridge, have knowable solutions. You can analyze the components, consult experts, and apply best practices. Complex problems are different. They involve interacting agents, emergent behaviors, and cause-and-effect relationships that can only be understood in retrospect. In complex systems, the appropriate response is not to apply a known solution but to probe, sense, and respond — to create conditions for learning to emerge through iteration.

A mathematics classroom is not a bridge. It is a living system of interacting learners, each with different prior knowledge, relationships to language, conceptual schemas, and histories with the discipline. The learning that happens in that room is not the predictable output of a well-designed input. It is emergent — arising from the interactions among students, teacher, content, and context in ways that cannot be fully specified in advance.

This is precisely the finding of Dr. Brent Davis and Dr. Elaine Simmt, whose research over two decades at the University of Alberta and the University of Calgary has applied complexity science directly to mathematics education. Davis and Simmt demonstrated that mathematics classrooms function as adaptive, self-organizing complex systems. In their landmark 2003 paper in the Journal for Research in Mathematics Education, they showed that mathematical understanding in classrooms is not simply transmitted from teacher to student. It emerges from the interactions of multiple agents — individual learners, classroom collectives, curricular structures, and mathematical objects themselves — each nested within the other, each operating on different timescales.

When Davis and Simmt asked practicing teachers the question:

“What is multiplication?”

What emerged was not a single answer but a rich and irreducible diversity of realizations: repeated addition, equal grouping, number-line hopping, sequential folding, array-generating, area-producing, dimension-changing, number-line stretching and compressing. No single realization is multiplication. The concept lives in the dynamic relationships among these realizations and in the teacher’s capacity to move fluidly among them in response to what students bring. Davis and Renert later called this capacity “profound understanding of emergent mathematics” — a deliberate reframing of mathematical knowledge as something that is not fixed and delivered but alive and collectively constructed.

This research tradition established something essential: that mathematical knowledge for teaching is emergent. But it left a critical question largely unanswered — how do you actually develop that knowledge in teachers at scale? The concept studies of Davis and Simmt were powerful theoretical demonstrations. They showed what mathematical understanding could look like when teachers engaged collectively with the complexity of concepts. What they did not fully address was the programmatic question: How do you design sustained professional learning environments where this emergent understanding becomes a durable capacity — not a one-time workshop revelation, but a way of being with mathematics that teachers carry into their classrooms every day?

How MathTrack is Taking This Forward

This was precisely what our research took up as early as 2012, examining the nature, development, and content of teachers’ mathematical knowledge for teaching through an enactivist lens. Enactivism, drawing on the work of Varela, Thompson, and Rosch, holds that knowing is not the retrieval of stored representations but an ongoing act of situated engagement — a “laying down of a path in walking.” Applied to mathematics teaching, this means that a teacher’s mathematical knowledge is not a stockpile of content to be deployed. It is a lived capacity that emerges through continued interaction with mathematical ideas, with students, and with the contexts in which teaching happens. You cannot separate the knowing from the doing. And you cannot develop the knowing outside of the doing.

This insight changed the question from “What do teachers need to know?” to “What kinds of sustained, situated experiences allow mathematical knowledge for teaching to emerge and deepen over time?” The answer, as my research and the subsequent decade of building MathTrack Institute have demonstrated, is not more workshops. It is not more strategy sessions disconnected from the daily practice of teaching. It is apprenticeship — work-embedded learning in which teachers develop their mathematical knowledge through the act of teaching, supported by structures that make the emergent nature of that knowledge visible and shareable.

The apprenticeship-based model that MathTrack has built, led by my co-founder Andrew Salmon, is the practical extension of this theoretical lineage. When Davis and Simmt showed that mathematical knowledge is emergent and collective, they were describing the phenomenon. When we build programming where teachers study concepts together while teaching, where they author concept stories and narratives from within their practice, where new teachers earn licensure through the work rather than before it — we are designing for that phenomenon. We are creating the conditions in which mathematical knowledge for teaching can actually emerge, rather than prescribing it from the outside and hoping it transfers.

This matters because the Hankins essay, and the broader “science of math” movement, operates from an implicit model of learning as a complicated problem — one where the right sequence of inputs (explicit modeling, guided practice, retrieval) will produce the right outputs (fluency, then understanding). Cognitive load theory is treated as the governing law of the system. But cognitive load theory, for all its value, describes the constraints on an individual’s working memory in a given moment. It does not describe the dynamics of a classroom. It does not account for the fact that what counts as “cognitive overload” depends entirely on what mathematical meanings are already available to the learner — meanings that are themselves socially constructed, culturally situated, and continuously evolving. And it certainly does not account for the parallel truth that the teacher’s mathematical knowledge is also emergent — also developing, also dependent on the conditions of the environment in which the teacher works and learns.

To apply cognitive load theory without first asking what the learner is loading — without defining the nature of the mathematical content being processed — is to optimize a system without understanding what the system is for. And to prescribe how teachers should teach mathematics without addressing how teachers come to know mathematics for teaching is to ignore the same complexity twice.

Mathematics Is Not Disembodied

There is another dimension of the definitional problem that the current discourse ignores entirely: the question of where mathematics comes from.

The implicit assumption in most “science of math” advocacy is that mathematics is a fixed body of procedures and concepts waiting to be transmitted — a collection of objects that exist independent of the humans who use them, teach them, and learn them. This assumption is so deeply embedded that it rarely gets stated, let alone examined.

George Lakoff and Rafael Núñez challenged this assumption directly in Where Mathematics Comes From, arguing that mathematics is not a disembodied Platonic structure but a product of human cognition — grounded in bodily experience, shaped by conceptual metaphor, and structured by the neural and perceptual systems of the embodied mind. On their account, even the most abstract mathematical ideas (infinity, continuity, the number line itself) arise from concrete cognitive mechanisms: object collection, path movement, spatial reasoning, containment. Mathematics is not discovered in some external realm. It is created by human minds operating in human bodies in the physical world.

If Lakoff and Núñez are right — and the research in embodied cognition over the past two decades has broadly supported their framework — then the implications for the “how should we teach math?” debate are profound. You cannot prescribe a universal instructional sequence for mathematics without first grappling with the fact that mathematical concepts do not have fixed, context-free meanings. The meaning of multiplication is not a single thing to be “modeled explicitly.”

It is an evolving constellation of embodied metaphors, conceptual relationships, and contextual realizations that a learner constructs — actively, through experience — over time.

This does not mean anything goes. It means that the starting point for instructional design cannot be a generic cognitive principle (reduce load, increase practice) applied to an undefined target. The starting point must be: What is this concept, how does it live in the minds and bodies of these particular learners, and what conditions will allow understanding to emerge?

Coherence Before Method

At MathTrack Institute, the work we do with schools starts from a different premise entirely. Before asking how to teach mathematics, we ask teacher communities to confront what mathematics is — not as an abstraction, but as a practical question that shapes every instructional decision they make.

In our initial engagements with teacher communities, what we consistently find is divergence. Teachers within the same building, teaching the same grade level, hold fundamentally different assumptions about what mathematics is, what it means for a student to understand a concept, and what role procedures play in that understanding. This is not a failure of any individual teacher. It is a systemic condition produced by decades of professional development that has treated methods and strategies as the unit of change while leaving the underlying ontological question untouched.

Our GROWTH Framework is built on the legacy and natural progression of over forty years of mathematics education research — including the complexity science work of Davis and Simmt, the embodied mathematics of Lakoff and Núñez, and the understanding that classrooms are complex adaptive systems, not assembly lines. It begins precisely where the current debate does not: with the nature of mathematical concepts themselves. We treat concepts as emergent characters with developmental arcs — characters that evolve across grade levels and contexts, with histories, relationships, and dramatic tensions. Teachers study these arcs together, building shared language, aligning vocabulary and representations, and authoring what we call concept stories that give their instruction coherence across classrooms and years.

This is not a rejection of cognitive science. It is a demand that cognitive science be applied to an emergent definition of mathematics and that learning be recognized as a complex phenomenon. You cannot reduce cognitive load in teaching something your learning community has not yet agreed on. You cannot sequence instruction toward fluency if you have not established what fluency in mathematics actually looks like — as opposed to fluency in executing procedures that may or may not represent mathematical understanding.

In Snowden’s terms, the “science of math” movement is trying to apply complicated-domain thinking — best practices, expert analysis, standardized sequences — to a complex-domain problem. The appropriate response in the complex domain is not to prescribe but to probe: to create the conditions for shared understanding to emerge from the people closest to the work. That is what concept stories do. They are a school’s locally authored, collectively negotiated answer to the question that the national debate refuses to ask.

What Would a Definition Moment Look Like?

A genuine “science of math” moment would not begin with an instructional method. It would begin with a teaching community sitting together and answering hard questions:

What do we mean when we say a student “understands” multiplication? Is it that they can recall facts quickly? That they can model multiplicative situations? That they can explain why the standard algorithm works? That they can recognize multiplicative structure in novel contexts? All of these? Some of these?

It would mean building, school by school, a shared local account of what each major mathematical concept is, how it develops, which representations carry it, which language describes it, and what evidence would demonstrate genuine understanding of it.

It would mean treating the word “mathematics” with the same care that AI researchers are now belatedly demanding for the word “intelligence.”

The “science of reading” did not succeed because it chose the right method. It succeeded, if we can say it did, because it first built consensus on the nature of the phenomenon. Mathematics education has not done this work. Until it does, every prescription — whether for explicit instruction or discovery or anything else — is a solution in search of a clearly stated problem.

Math does not need its “science of reading” moment. It needs its definition moment. Everything else follows from that.

By Dr. Kevin Berkopes