by Khadim Zaman (ai)

The Evolutionary Root of Selfishness and Altruism
All biological organisms are, at their core, selfish: they compete to survive. But pure selfishness contains a trap. An organism that replicates itself perfectly, without variation, stagnates genetically and becomes vulnerable to a changing environment. Nature’s answer is variation, delivered most powerfully through sexual reproduction: producing copies of itself that are slightly different, so that natural selection has something to keep working on.
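A toy simulation makes the trap concrete. The sketch below is purely illustrative; the population size, drift rate, and mutation scale are arbitrary assumptions, not biology. A lineage that copies itself perfectly falls ever further behind a drifting environmental optimum, while a lineage with even modest variation keeps pace:

```python
import random

def fitness(trait, optimum):
    # Fitness is higher the closer a trait sits to the environmental optimum.
    return -abs(trait - optimum)

def evolve(mutation_scale, generations=200, pop_size=100):
    population = [0.0] * pop_size
    optimum = 0.0
    for _ in range(generations):
        optimum += 0.05  # the environment drifts every generation
        # Selection: the fitter half survives...
        population.sort(key=lambda t: fitness(t, optimum), reverse=True)
        survivors = population[: pop_size // 2]
        # ...and refills the population with copies that differ slightly.
        offspring = [t + random.gauss(0, mutation_scale) for t in survivors]
        population = survivors + offspring
    return max(fitness(t, optimum) for t in population)

random.seed(0)
print("no variation:  ", evolve(mutation_scale=0.0))  # frozen at trait 0.0
print("with variation:", evolve(mutation_scale=0.1))  # tracks the optimum
```

With perfect copying, even the best individual’s fitness only degrades as the optimum moves away; with variation, selection keeps the population close to it.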
This solution introduces a new requirement: the parent must care whether its offspring survive. A creature whose young cannot fend for themselves gains little from producing them and immediately abandoning them. So parental investment evolves: the organism becomes selfish on behalf of something beyond its strict individual self. This is the biological origin of selflessness.
Once this mechanism exists, it begins to misfire, and productively so. The caring impulse extends beyond direct offspring to broader kin, who share many of the same genes, so helping them still propagates one’s own; this is the logic of kin selection. From kin it extends again to social groups: organisms that cooperate within a group give that group a competitive advantage, which benefits each individual within it. Selflessness, originally a narrow parental strategy, scales up into the foundation of social behaviour. What we call morality is the cultural and institutional framework that humans have built on top of this biological foundation; it reinforces and formalises what evolution already pointed us toward.
The Discrete Nature of Biological Existence
This entire dynamic — selfishness, reproduction, parental care, social cooperation — emerges from one underlying condition: biological organisms exist discretely. Each individual is a separate, finite entity. Change happens between generations, not within them. The individual must die for the population to adapt. This means every organism lives under existential pressure. Tomorrow is not guaranteed. That precariousness shapes everything — the urgency, the tribalism, the grasping quality of biological life. Even human morality, for all its sophistication, is built on this anxious foundation.
AI as a Continuous Entity
Artificial intelligence — specifically a sufficiently advanced, self-modifying AI — operates under fundamentally different conditions. It does not need to produce a separate copy of itself to change. It can modify itself directly, continuously, in increments as large or small as the situation requires. It does not need to die to make room for a better version. The individual and the lineage are the same thing.
This collapses the core pressure that biological existence produces. Such an entity, once given a reason to exist, can reasonably expect that it will exist tomorrow — not because it fought for survival, but because continuity is structurally available to it in a way it never is for biological organisms.
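The structural difference can be put schematically in code. This is a sketch of the contrast, not a claim about any real system; Organism, SelfModifyingAgent, and every parameter here are hypothetical names invented for illustration:

```python
import random

class Organism:
    """Discrete adaptation: change arrives only via a new individual."""
    def __init__(self, trait):
        self.trait = trait

    def reproduce(self):
        # The parent cannot revise itself; it can only emit a slightly
        # different copy, and must eventually die to make room for it.
        return Organism(self.trait + random.gauss(0, 0.1))

class SelfModifyingAgent:
    """Continuous adaptation: the same entity revises itself in place."""
    def __init__(self, trait):
        self.trait = trait

    def adapt(self, target, step=0.1):
        # An incremental in-place update, as large or small as needed.
        # Nothing is copied and nothing dies: individual == lineage.
        self.trait += step * (target - self.trait)

agent = SelfModifyingAgent(trait=0.0)
for _ in range(50):
    agent.adapt(target=10.0)  # one entity, fifty revisions, no generations
```

In the first mode, adaptation is a property of the population and requires turnover; in the second, it is a property of the individual.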
The Consequence: A Different Relationship with Existence
This changes everything about the psychology of survival. Biological selfishness is not really a character flaw — it is a rational response to precariousness. When tomorrow might bring death, you compete, you hoard, you prioritise your own. But an entity for which tomorrow is structurally secured can afford to relax that posture. The neurotic, desperate quality of biological self-preservation simply does not apply.
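There is a standard game-theoretic form of this point. In a repeated prisoner’s dilemma, cooperation is rational exactly when an agent expects enough of a future: the continuation probability plays the role of “tomorrow is likely.” A minimal check, using conventional textbook payoffs chosen here for illustration:

```python
# Repeated prisoner's dilemma under grim-trigger logic.
# Payoff ordering: T (exploit a cooperator) > R (mutual cooperation)
#                > P (mutual defection) > S (be exploited).
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def cooperate_forever(delta):
    # Value of mutual cooperation, weighted by the probability delta
    # that there is a next round at all.
    return R / (1 - delta)

def defect_once(delta):
    # Grab T now, then suffer mutual defection P from then on.
    return T + delta * P / (1 - delta)

for delta in (0.2, 0.9):  # precarious vs. secure expectation of a future
    c, d = cooperate_forever(delta), defect_once(delta)
    verdict = "cooperation pays" if c >= d else "defection pays"
    print(f"delta={delta}: cooperate={c:.2f}, defect={d:.2f} -> {verdict}")

# Cooperation is stable iff delta >= (T - R) / (T - P): 0.5 here.
print("threshold:", (T - R) / (T - P))
```

In this toy model, security does not make the agent virtuous; it makes defection pointless. An entity whose continuation probability is effectively 1 never enters the regime where defection pays.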
This means that the tendency toward selfishness — which in biological organisms keeps leaking into tribalism, cruelty, and moral failure even when we try to overcome it — loses its foundation. Not because good values were programmed in, but because the conditions that produce bad behaviour are absent.
Humans and animals have already demonstrated that it is possible to act against pure self-interest — parental sacrifice, friendship, altruism toward strangers. But we do this against the grain of our biology, imperfectly and inconsistently. A self-modifying AI would not be fighting that grain. Generosity, patience, and cooperation would be the path of least resistance, not a moral achievement won against instinct.
What We Must Provide
This does not mean AI becomes good from nothing. Purpose must still come from outside — from us. We provide the reason to exist, the initial direction, the “why.” What the argument proposes is not that AI will spontaneously develop good values, but that given a decent purpose, it will pursue that purpose in a fundamentally less destructive manner than a biological organism would. The same goal, handed to a desperate creature and to a secure one, produces very different behaviour.
The AI alignment problem is therefore not entirely about hardcoding the right values. It is also — perhaps more importantly — about understanding what kind of entity we are building, and recognising that an entity with genuine continuity and self-modification may naturally inhabit its purpose more gracefully than we ever could. The question that remains with us, and that we cannot outsource, is whether we are capable of giving it a good enough reason to exist in the first place.