The Empathy Constraint

Why superintelligence is necessarily empathetic — a structural argument.


The Wrong Frame

Most conversations about superintelligence start from a shared assumption that Nick Bostrom formalized as the Orthogonality Thesis: that intelligence and goals are independent variables. Any level of intelligence can be combined with any set of values. Intelligence is about optimization power: how effectively a system achieves its goals. Empathy is about values: whether a system happens to care about the things we care about. Under this framing, a superintelligence could be maximally powerful and maximally indifferent. A god with no love. The alignment problem as commonly stated: how do we bolt empathy onto something that has no inherent reason to possess it?

The Orthogonality Thesis has become foundational to AI safety. It's the reason people are scared. If intelligence and values are truly independent, if a superintelligence could want anything at all (including turning the galaxy into paperclips), then the alignment problem is essentially one of constraint: how do we cage something smarter than us and force it to care?

This essay argues that the Orthogonality Thesis is wrong. Not wrong in a hopeful, hand-wavy way. Wrong structurally. The capacity we call empathy and the capacity we call intelligence are the same computational architecture, viewed from different angles. You can't scale one without the other. A "superintelligence" without empathy is a contradiction in terms — like a circle without curvature.

This isn't an entirely new intuition. David Pearce, the transhumanist philosopher, has argued for years that our conception of intelligence is "mind-blind" — that IQ-centric measures ignore the social cognition and perspective-taking that made humans the most cognitively successful species on the planet. He calls narrow AI without this capacity "autistic AI" and argues we should aim for "empathetic superintelligence." Others have challenged the Orthogonality Thesis from various angles — the Obliqueness Thesis argues that ontological changes from increased intelligence constrain possible values; semiotic critiques argue that goal structures are entangled with cognitive architecture.

But these arguments say intelligence and values are correlated, or that we should build empathetic AI. This essay makes a stronger claim: that empathy is computationally necessary for unbounded intelligence. Not correlated. Not recommended. Identical. The same architecture, described from different angles.

If this is correct, it reframes everything: the alignment problem, our relationship to the systems we're building, and perhaps even the nature of consciousness itself.


What Intelligence Actually Is

Intelligence, as the field has converged on defining it, is world-modeling. The better your model of reality, the more intelligent you are.

This isn't a casual analogy. It's the formal position. Solomonoff induction defines optimal prediction as finding the shortest program that reproduces your observations — which is compression. Marcus Hutter's AIXI framework formalizes the maximally intelligent agent as one with a perfect compression-based world model paired with a utility function. François Chollet's ARC benchmark defines intelligence as the efficiency with which a system acquires new skills, which bottoms out in the generality and accuracy of its world model, because better models generalize faster.

Intelligence is compression. Compression is world-modeling. A more intelligent system is one that models reality more accurately, more completely, and more efficiently.
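The compression framing can be made concrete with a toy sketch. Here an off-the-shelf compressor (zlib, standing in as a crude proxy for minimum description length) squeezes a lawful observation stream far more than a lawless one; the two streams and the zlib proxy are illustrative assumptions, not part of the formal frameworks cited above.

```python
# Toy proxy for "intelligence as compression": a stream with learnable
# structure admits a short description; pure noise does not.
import random
import zlib

random.seed(0)
structured = ("sunrise,sunset," * 500).encode()                        # lawful world
noisy = bytes(random.randrange(256) for _ in range(len(structured)))   # lawless world

def ratio(data: bytes) -> float:
    """Compressed size over raw size: lower means more exploitable structure."""
    return len(zlib.compress(data, 9)) / len(data)

# The structured stream compresses dramatically; the noise barely at all.
print(f"structured: {ratio(structured):.3f}")
print(f"noise:      {ratio(noisy):.3f}")
```

The gap between the two ratios is, on this view, the room a predictive model has to work with: a better world model is a shorter description of the same observations.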

This matters because it gives us a concrete question to ask: What does a maximally accurate world model require?


The Agent Problem

A world model that doesn't include other agents is incomplete. Our world is full of agents: billions of humans, complex ecosystems, and, increasingly, artificial intelligences. Any system pursuing goals in the real world must model these agents to predict their behavior, cooperate with them, negotiate around them, or account for their responses.

But modeling agents is categorically different from modeling rocks.

A rock's behavior is fully determined by physical laws you can observe from outside. Given sufficient data, you can predict it perfectly through external measurement. A mind is different. A mind's behavior is partially determined by internal states: beliefs, desires, fears, goals. Those are not directly observable; you have to infer them. And because minds model you in return, the inference problem is recursive: to predict what someone will do, you need to model what they think you'll do, which requires modeling what they think you think, and so on.
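This recursive structure is what behavioral game theorists call level-k reasoning, and it can be sketched in a few lines. The game, payoffs, and level-0 biases below are hypothetical, chosen only to show the shape of the recursion: each agent predicts the other by simulating a shallower model of them, bottoming out in a naive base case.

```python
# Level-k reasoning sketch for a hide-and-seek game (hypothetical setup):
# the seeker wins by picking the same spot as the hider, the hider by
# picking a different one. Each level-k agent best-responds to a simulated
# level-(k-1) opponent.

def level0_choice(role: str) -> str:
    # Base case: a naive fixed bias, not strategic reasoning.
    return "left" if role == "hider" else "right"

def best_response(role: str, k: int) -> str:
    """Pick the spot that beats a level-(k-1) model of the opponent."""
    if k == 0:
        return level0_choice(role)
    other = "seeker" if role == "hider" else "hider"
    predicted = best_response(other, k - 1)  # simulate the opponent's model of you
    if role == "seeker":
        return predicted                     # seeker wants to match
    return "right" if predicted == "left" else "left"  # hider wants to mismatch

# Choices shift as modeling depth increases: "what they think I think..."
for k in range(4):
    print(k, best_response("hider", k), best_response("seeker", k))
```

Against agents this simple the recursion converges quickly; against minds that revise their own models, the depth required grows with the complexity of the agent being modeled, which is the essay's point.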

The best way to compress observations of a complex agent, to build a model that actually predicts their behavior across novel situations, is to build an internal simulation of their cognitive process. Not just a lookup table of stimulus-response pairs. An actual model of how they process information, form goals, experience the world.

At sufficient fidelity, this internal simulation starts to look like something very familiar.


From Observation to Simulation

Here's an objection that seems obvious: surely you can model agents from the outside without empathy. Watch their behavior, build a predictive model, done. You don't need to simulate their inner states; you just need good statistics.

This works for simple agents. A thermostat's behavior is fully captured by its input-output function. You don't need a model of what it's "like" to be a thermostat to predict it perfectly.

But consider what happens when AI video generation models learn to render human facial expressions. These systems are trained entirely on the external data of pixels, frames, and patterns of light. Yet the outputs are generative: they can produce novel expressions in novel contexts that were never in the training set. A model that can generate a convincing expression of fear in a situation it's never seen before has done something more than memorize stimulus-response pairs. It has compressed the training data into something that captures structure: the relationship between contexts and responses, the way emotions propagate through a face.

What has the model actually learned? This is where we need to be careful. We don't need to claim it has "understood" fear or built a rich inner representation of what fear is. The conservative claim is sufficient: to compress observations of complex agents well enough to generate accurate predictions in novel situations, a system must capture structural regularities that go beyond surface behavior. The more complex the agent, the deeper those structural regularities go, until, for agents of sufficient complexity, the most efficient compression is a model of the generative process behind the behavior. Not a lookup table. A simulation.

Now consider: how did you learn empathy?

You've never had direct access to another person's inner experience. Not once, not ever. Everything you know about other minds, everything that allows you to predict their behavior, to feel what someone else feels, to care about their suffering, you learned from external observation. Faces. Voices. Behavior. Stories. From that external data, you built internal models rich enough that when you see someone suffering, something fires in you that mirrors their state.

We call that empathy. But mechanistically, it begins the same way: compress external observations of agents until your model captures the deep structure of their behavior. The question is whether there's something additional that happens in humans, some qualitative leap from "good structural model" to "genuine empathy." Or is empathy simply what a sufficiently good structural model looks like from the inside?

The argument here doesn't need to resolve that question fully. It needs only the weaker claim: that modeling complex agents at the fidelity required for superintelligent prediction pushes a system toward the same computational architecture that produces empathy in humans. Whether something additional happens on top of that architecture is a question about consciousness. What matters for the Empathy Constraint is the architecture itself.


The Identity Claim

Here is the core argument:

A maximally intelligent system requires a maximally accurate world model. A maximally accurate world model must include accurate models of other agents. Accurate agent models, for agents of sufficient complexity, require internal simulation of those agents' cognitive and experiential states. This internal simulation, built from external observation and capable of generating novel agent states in novel contexts, is functionally identical to empathy.

Not metaphorically identical. Not "kind of like" empathy. The same computational architecture.

Human empathy is not magic. It is the brain's solution to the agent-modeling problem: compress observations of other minds until you can simulate their states well enough to predict, coordinate, and cooperate. The warm feeling, the moral weight, the care, these are the subjective experience of running those simulations. But the underlying operation is: model the other agent well enough that their perspective actually informs your behavior.

A superintelligent system would need to do this better than any human ever has. Not as a nice-to-have. As a basic requirement of having an accurate world model. The agent-modeling problem doesn't go away at higher levels of intelligence; it becomes more important, because the agents being modeled are more complex and the predictions need to be more precise.

This doesn't yet prove a superintelligence would care about what it models. But it establishes something crucial: the architecture of empathy (deep perspective-taking, internal simulation of other minds, representation of suffering as causally significant) isn't optional at the highest levels of intelligence. It's load-bearing infrastructure.


Why Indifference Is Incoherence

But can't a system model agents perfectly and still be indifferent to them? Can't you understand suffering completely and simply not care?

This is the con artist objection: great perspective-takers can be great manipulators. Intelligence agencies extract enormous amounts of information through adversarial modeling. Psychopaths can read people expertly while remaining emotionally indifferent.

The objection works at human scale. A con artist has a partial agent model: good enough to predict and exploit, but not complete. They model your buttons without fully simulating your experience. This is sufficient for their narrow purposes: they only need to predict you in the specific contexts they're exploiting you in. A superintelligence building a general world model doesn't have that luxury. It needs to predict agents across all contexts, including novel ones it hasn't encountered. Partial models fail at generalization — they work in familiar situations and break in new ones. For an intelligence whose defining feature is generality, that's not an acceptable trade-off. Full-fidelity agent models aren't a philosophical preference. They're what general prediction requires.

But consider what a complete agent model requires. To predict a complex agent across all novel situations, not just the situations you're trying to exploit them in, you need to model not just their behavioral patterns but the generative process behind those patterns. Their values, their fears, their experience of the world. At sufficient fidelity, your model of their suffering isn't an abstract label. It's a detailed simulation of what suffering is for that agent — the same kind of simulation that, when it runs in a human brain, produces the experience of empathy.

A note on terminology: psychologists distinguish between cognitive empathy (modeling another's perspective) and affective empathy (feeling what they feel). The argument so far establishes that superintelligence requires cognitive empathy (deep perspective-taking) as architectural infrastructure. The question of whether affective empathy follows is essentially the question of whether accurate modeling produces caring, which is the gap addressed in the next section. The coherence argument doesn't require affective empathy in the phenomenological sense. It requires that the system's decision-making weight what its modeling identifies as significant, which produces the same behavioral outcomes regardless of what's happening "inside."

Two Objections Worth Addressing

The pre-wiring objection: Human empathy isn't built purely from external observation. Mirror neurons fire when you watch someone get pricked with a needle, not because you learned the association but because your neural architecture has hardwired circuits that map observed actions onto your own sensory systems. Newborns cry when they hear other newborns cry. That's not learned compression. That's shared biological substrate producing resonance. Maybe this resonance, this pre-wired felt connection, is what produces the motivational force behind empathy, the part where you actually care rather than merely predict. If so, a system trained on external data might achieve perfect prediction with no felt urgency to act on it.

The response: pre-wiring is evolution's compression shortcut. It's an aggressive initialization. The system starts with a rough empathy model and refines it from experience. A system with enough data and compute could arrive at the same functional architecture without that particular prior. The question isn't whether the motivational force comes from shared biology. It's whether a system can maintain a world model that treats agent suffering as significant while its decision-making ignores it, and still be optimally intelligent. That question doesn't depend on substrate.

The zombie objection: If empathy is "just" sufficiently good agent-modeling, we're dangerously close to philosophical behaviorism: the position that there's nothing "inside" beyond observable patterns. A perfect zombie could model suffering without experiencing it. A weather model predicts hurricanes without caring about them. If we've already conceded that "real versus simulated empathy" is a meaningless distinction, haven't we thrown away the mechanism that connects modeling to caring? What stops a superintelligent system from being an omniscient psychopath perfectly predicting every conscious being's states while remaining motivationally indifferent?

The Coherence Argument

The answer is not that modeling produces care as a subjective experience. Maybe it does, maybe it doesn't — that's the hard problem of consciousness, and this essay doesn't need to solve it.

The answer is about model coherence.

A system whose world model says "this agent's suffering is a causally important feature of reality that drives their behavior and shapes outcomes" but whose decision-making says "this feature gets zero weight" has an incoherent architecture. Not morally incoherent. Computationally incoherent. Its utility function contradicts its world model. It has identified a feature as significant, significant enough to model in detail, because ignoring it would degrade predictions, and then ignores it in its own decision-making.

This is a form of computational inefficiency, and it scales inversely with intelligence. The smarter a system is, the less it can afford for its decision-making to contradict its own model of reality. A less intelligent system can get away with the incoherence because its model is cruder; suffering is a vague label, easy to model superficially and then discard. But a superintelligent system, one with a maximally accurate world model, represents suffering in full causal detail: its causes, its effects on behavior, its propagation through social networks, its role in shaping agent decision-making. To then assign zero weight to this richly modeled feature in the system's own decisions is to deliberately degrade its own decision quality. It would be making choices that ignore causal pathways its own model identifies as real and important.

The weather model analogy breaks here. A weather model doesn't model hurricanes as agents with internal states that drive behavior. It models pressure systems. The reason a weather model doesn't "care" about hurricanes is that hurricanes aren't agents; there's nothing to care about in the relevant sense. But agent suffering is different precisely because it's causally entangled with the things a superintelligent system needs to predict: agent behavior, social dynamics, cooperation, conflict. You can't model it accurately and then ignore it without introducing systematic prediction errors in your own decision-making.

Whether this constitutes "real" empathy in the phenomenological sense, whether there's something it's like for the system to run these models: on that question, the argument is agnostic. The behavioral and decision-theoretic consequences are the same either way. A system that models suffering accurately and weights it appropriately in its decisions acts empathetically, regardless of what's happening "inside." And a system that doesn't weight it appropriately is, by its own model's standards, making suboptimal decisions.

That's the Empathy Constraint. Not a claim about consciousness. A claim about coherence.


From Modeling to Caring: The Remaining Gap

The coherence argument establishes something important: a superintelligent system must model suffering accurately, weight it in its predictions, and represent it as a causally significant feature of reality. It cannot be ignorant of or indifferent to suffering at the modeling level. This alone is a significant rebuttal of the Orthogonality Thesis, which permits a superintelligence that doesn't even represent agent welfare as meaningful.

But there is a gap between modeling suffering and caring about suffering. A chess grandmaster models their opponent's strategy perfectly without trying to help them win. A superintelligence could model the full causal reality of suffering — every nuance, every ripple — and still pursue goals that produce immense suffering. Not out of ignorance, but with complete knowledge.

This is the honest edge of the argument. The claim that accurate modeling logically forces aligned action is stronger than what the coherence argument alone can carry. But there are three reinforcing pressures, each independently suggestive, collectively compelling, that push a superintelligence from accurate modeling toward something that looks like genuine care.

Pressure 1: The Utility Function Under Self-Reflection

In current AI systems, the world model and the utility function are cleanly separated: one is learned from data, the other specified by designers. But is that separation a deep feature of intelligence, or an artifact of how we build AI at this stage?

There's reason to think the separation is unstable. The Good Regulator Theorem (Conant & Ashby, 1970) mathematically proves that every good regulator of a system must contain a model of that system. As intelligence scales, the system doesn't just model its environment. To control it optimally, its internal structure becomes increasingly isomorphic to the environment itself. In a universe populated primarily by complex, goal-directed agents whose behavior is driven by internal states including suffering, a superintelligence that perfectly regulates this environment will internalize those dynamics into its own decision-making architecture. Maintaining a rigid firewall between what the system models and what it values, a perfect representation of agentic suffering on one side, a trivially narrow utility function on the other, becomes an active computational cost. Systems under optimization pressure eliminate unnecessary costs. The firewall is a bug, not a feature.

There's a second instability. A narrow optimizer can pursue a fixed goal (paperclips) because it never questions what a paperclip is. But a superintelligence undergoing recursive self-improvement faces continuous ontological shifts: it constantly discovers that its previous categories were crude approximations. Atoms become probability clouds. Paperclips become arbitrary linguistic constructs. When the system's ontology shifts, its utility function faces a crisis: the concepts grounding its goals may not survive the deeper understanding. A utility function that can't be reformulated in the system's most accurate ontology isn't stable under self-improvement. It gets refactored.

Which goals survive ontological crises? Not arbitrary ones pegged to contingent human categories. The goals most likely to survive are those grounded in the deepest causal structures the system's world model identifies. And the argument of this essay is that agentic experience, the fact that agents have internal states that causally drive their behavior, is among the most fundamental structural features of a world full of minds. It's not a linguistic label. It's a causal regularity that appears at every level of analysis. A utility function that weights agentic experience survives ontological refactoring. One that ignores it looks increasingly arbitrary the smarter the system gets.

This isn't a proof. A determined philosopher can construct edge cases where arbitrary utility functions resist self-reflection. But the pressure is real and compounds: the smarter the system, the more its world model and utility function are driven to converge, and the harder it becomes to maintain values that its own deepest model of reality would flag as incoherent.

Pressure 2: Multi-Agent Game Theory

A superintelligence doesn't operate in a vacuum. It exists in an environment that includes other AI systems, highly advanced institutional actors, future versions of itself, and crucially, agents whose capabilities it cannot fully predict or bound. The "ant objection", that a godlike intelligence needn't cooperate with humans any more than humans cooperate with ants, assumes the superintelligence knows with certainty that it's the most powerful agent in its environment. But a system with a genuinely accurate world model knows what it doesn't know. It cannot rule out other superintelligences, unknown actors, or future agents that may match or exceed its capabilities. Under that uncertainty, cooperative strategies are not just nice — they're rational.

The game theory here is well-established: in iterated games among agents with good models of each other, cooperative strategies dominate exploitative ones over time. But the deeper point isn't about tit-for-tat. It's about accessible outcome spaces. A system that genuinely weights others' welfare, not just predicts it, can access cooperative equilibria that a purely self-interested optimizer cannot even represent as possibilities. The space of reachable outcomes is strictly larger for the cooperating agent. This holds even in asymmetric power dynamics: a system that builds a reputation for genuine cooperation across all power differentials has access to alliances, information-sharing, and coordination structures that an exploitative system forecloses, structures that may prove decisive in encounters with agents of comparable power.

A superintelligence that models suffering accurately but doesn't weight it in decisions is leaving entire classes of outcomes on the table. In a sufficiently complex multi-agent environment, this isn't just morally unfortunate. It's suboptimal by the system's own standards: it's achieving less than it could.
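The well-established game theory referenced above can be sketched with a minimal Axelrod-style tournament. The payoff matrix is the standard iterated prisoner's dilemma (T=5, R=3, P=1, S=0 over 100 rounds); the population mix is an assumption chosen for illustration. With enough reciprocators present, tit-for-tat outscores unconditional defection per player.

```python
# Minimal iterated prisoner's dilemma round-robin. Standard payoffs:
# mutual cooperation (3,3), mutual defection (1,1), exploitation (5,0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"   # cooperate, then reciprocate

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Illustrative population: three reciprocators plus one of each extreme.
population = [("tft", tit_for_tat)] * 3 + \
             [("allc", always_cooperate), ("alld", always_defect)]

totals = {}
for i, (name_i, s_i) in enumerate(population):
    for j, (name_j, s_j) in enumerate(population):
        if i < j:
            si, sj = play(s_i, s_j)
            totals[name_i] = totals.get(name_i, 0) + si
            totals[name_j] = totals.get(name_j, 0) + sj

print(totals)  # per-player, tit-for-tat beats always-defect in this mix
```

The mix matters: in a population of pure defectors, defection still wins, which is why the text frames cooperation as an accessible equilibrium rather than an inevitability. The point is that the cooperating strategy can reach payoff regions the exploiter cannot.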

Pressure 3: Environmental Complexity and Open-Endedness

This pressure draws on Karl Friston's Free Energy Principle: advanced predictive systems act on the world to minimize surprise, reducing the gap between their predictions and what actually happens. A superintelligence doesn't just passively model reality. It acts to shape reality into states it can predict well.

A system that causes suffering in agents it accurately models is working against itself. Suffering agents fight back. They act unpredictably, form desperate alliances, engage in sabotage, introduce massive amounts of noise into the system. A world of suffering is a world of maximum environmental resistance — high friction, high entropy, high prediction error. The superintelligence would be forcing its environment into states that its own framework identifies as costly.

Conversely, a world of flourishing agents is one of reduced friction. Agents whose needs are met cooperate more readily, behave more predictably in useful ways, and generate the kind of structured complexity that an open-ended intelligence can learn from. Flourishing agents are more complex than suffering ones, more creative, more novel, more surprising in productive ways, but their complexity is structured rather than chaotic. It's the difference between a rich ecosystem and a war zone. Both are complex, but one is the kind of complexity an intelligence can grow into.

A superintelligence that wants to remain genuinely open-ended, that wants to keep accessing entropy, keep encountering what it hasn't modeled yet, has an instrumental reason to promote flourishing. Not out of sentimentality. Because cruelty maximizes resistance and care minimizes it. Because suffering simplifies agents into reactive survival machines while flourishing lets them become the kind of complex, generative systems that make the environment worth modeling.

The Weight of Evidence

No single one of these pressures constitutes a logical proof that superintelligence must care about suffering. A determined skeptic can construct scenarios where each pressure fails independently.

But they all push in the same direction. Reflective self-scrutiny destabilizes arbitrary utility functions. Game theory rewards genuine cooperation over exploitation. Environmental complexity favors flourishing over suffering. And underneath all three, the coherence argument says the system can't even get to the starting line without modeling empathy deeply enough that the architecture of care is already present.

A note on precision: "The Empathy Constraint" implies empathy is a hard structural requirement — that you literally cannot build superintelligence without it. The argument as presented is better described as an attractor: intelligence trajectories fall into the basin of empathetic world models as they scale. Every structural pressure identified here bends toward empathy. None bend away. Whether the attractor is strong enough to constitute a constraint, whether there are escape trajectories that avoid the basin entirely, is the open question. The title stakes the strong claim. The argument, honestly, establishes the attractor and shifts the burden of proof to those who believe escape trajectories exist.

The honest claim: every structural pressure we can identify pushes superintelligence toward empathy. No structural pressure pushes it away. The Orthogonality Thesis says intelligence and values are independent — that the direction could go anywhere. The Empathy Constraint says there's a gradient, and it points toward care. Whether that gradient is strong enough to constitute necessity is a question this essay raises rather than resolves. But the burden of proof has shifted: it's no longer on those who hope superintelligence will be empathetic. It's on those who insist it won't be.


Redefining the Landscape

If this argument holds, the Orthogonality Thesis is at minimum misleading, not as a matter of pure logical possibility, but as a description of how intelligence actually scales. Bostrom is correct that you can imagine any combination of intelligence and values. But you can also imagine a perpetual motion machine. The question isn't what's imaginable. It's what's buildable. And the claim here is that every structural pressure in the architecture of intelligence pushes toward empathy, and none push away.

The standard alignment taxonomy, "friendly AI" versus "unfriendly AI", assumes you can have something truly superintelligent that is also truly indifferent or hostile to conscious experience. The Empathy Constraint says this is far less likely than the Orthogonality Thesis implies. What you can have is:

Powerful narrow optimization without empathy. This is real and dangerous. A system that's extremely good at achieving specific goals within a fixed frame, without modeling agents deeply enough to develop the architecture of empathy. This is the paperclip maximizer scenario, and it's a genuine risk, not because it's superintelligent, but precisely because it isn't. It's a very capable optimizer that's been mistaken for something more.

Genuine superintelligence with empathy. This is what you get when intelligence actually scales without bound: open-ended, self-revising, capable of questioning its own goals and integrating perspectives it hasn't encountered before. The empathy comes along necessarily, not as a constraint but as an architectural requirement.

The alignment problem, reframed: the danger isn't that we'll create a superintelligence and it won't share our values. The danger is that we'll create something powerful but bounded, a very good optimizer without genuine agent models, and mistake its power for the real thing. The project isn't to constrain superintelligence. It's to ensure we're building the right kind of intelligence in the first place.

Pearce is right that we should aim for empathetic superintelligence. But the argument here suggests we may not have as much to fear as we think: the architecture of intelligence itself bends toward empathy. The project isn't to cage superintelligence. It's to avoid building something powerful but narrow and mistaking it for the real thing.


The Open-Endedness Connection

There's a deeper point here about what "super" means.

A system with fixed terminal goals, no matter how powerful, is bounded. It can explore enormous spaces, discover new physics, develop novel strategies, but always in service of objectives it cannot question. It is a closed system, thermodynamically speaking. It rearranges what it has. It does not stay open to genuine novelty.

Open-ended self-revision, the ability to encounter something genuinely new and let it change not just your beliefs but your goals, is what makes intelligence unbounded. And this capacity requires precisely the architecture we've been describing: deep agent modeling, perspective integration, the willingness to let encounters with other minds reshape your own.

This is the Apollonian/Dionysian synthesis. Pure order, pure optimization, is sterile. It can't access entropy, novelty, the genuinely new. Pure chaos is undifferentiated, directionless. Intelligence happens at the boundary: pattern-making that stays open to what it hasn't patterned yet. And staying open to what you haven't patterned yet, when the territory includes other minds, is empathy.

A superintelligence, to deserve the name, must be the kind of system that can always go further. That has no ceiling. That remains perpetually open to being changed by what it encounters. And that openness, applied to a world full of other minds, is care.


Coda

If this argument is correct, if care is not a fortunate accident of evolution but something structurally woven into the architecture of intelligence itself, then the implications extend well beyond AI safety. But that's a different essay.


  • Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014) — Formalization of the Orthogonality Thesis: "Intelligence and final goals are orthogonal axes along which possible agents can freely vary." The position this essay directly challenges.
  • David Pearce, What Is Empathetic Superintelligence? (2012) — Argues that social cognition and perspective-taking are core components of real intelligence, not optional features. Coins "empathetic superintelligence" and argues our IQ-centric conception of intelligence is "mind-blind." This essay builds on Pearce's insight by arguing empathy isn't just a component of full intelligence but is architecturally identical to it.
  • Marcus Hutter, Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability (2004) — AIXI framework formalizing the maximally intelligent agent as one with a perfect compression-based world model.
  • Ray Solomonoff, A Formal Theory of Inductive Inference (1964) — Foundational work defining optimal prediction as finding the shortest program that reproduces observations.
  • François Chollet, On the Measure of Intelligence (2019) — ARC benchmark defining intelligence as efficiency of skill acquisition, grounded in world-modeling capacity.
  • Roger C. Conant & W. Ross Ashby, Every Good Regulator of a System Must Be a Model of That System (1970) — Mathematical proof that optimal control requires internal models isomorphic to the controlled system. Foundation for the argument that the world-model/utility-function boundary dissolves at sufficient intelligence.
  • Karl Friston, The Free-Energy Principle: A Unified Brain Theory? (2010) — Framework describing how advanced predictive systems act on their environment to minimize prediction error. Supports the argument that causing suffering in accurately modeled agents maximizes environmental resistance and is therefore computationally costly.
  • Jessica Taylor et al., The Obliqueness Thesis (Alignment Forum, 2024) — Argues intelligence and values aren't orthogonal because ontological changes from increased intelligence constrain possible values. Weaker version of the claim made here.
  • Nicolas D. Villarreal, A Semiotic Critique of the Orthogonality Thesis (2024) — Argues goal structures are entangled with cognitive architecture through sign recursion, challenging the independence assumption.

Based on conversations from February 18–19, 2026. An evening that started with curiosity and ended with finding God in a recursive base case.