Are We the New Neanderthals?
A baby doesn’t learn language from rules.
They learn from exposure.
They hear “chair” enough times while someone points at a chair, and eventually the pattern locks in.
Sound + object + repetition → meaning.
Not logic. Not instruction.
Just repetition shaping association.
Accent works the same way.
A child doesn’t choose their dialect.
They absorb it.
The rhythm of the parents’ speech becomes the rhythm of their own.
Not because they understand it but because they are surrounded by it.
In that sense, language is not “taught.”
It is copied.
And this is where the comparison to artificial systems becomes unavoidable.
Modern AI systems, in particular large language models, don’t learn language the way humans are taught grammar.
They learn patterns from massive datasets of human language.
Sequences of words. Context. Probability of what comes next.
Not meaning first. Pattern first.
So in both cases:
The child hears patterns and reproduces them.
The AI system processes patterns and predicts them.
Different scale. Same surface mechanism: statistical imitation of exposure.
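Stripped down to a toy sketch, “pattern first, meaning never” can be made concrete. The following Python snippet is an illustration, not how a real large language model works: it builds a bigram table from a tiny invented corpus (all words here are made up for the example) and predicts the continuation it has seen most often. Exposure in, prediction out; no rule is ever stated.

```python
from collections import Counter, defaultdict

# Toy "learning by exposure": count which word follows which,
# then predict the most frequently seen continuation.
# The corpus is invented purely for illustration.
corpus = (
    "the child sees the chair . "
    "the child hears the word chair . "
    "the child points at the chair ."
).split()

# Build the exposure table: follows[w] counts every word seen after w.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the continuation of `word` seen most often in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # → "child": it followed "the" three times, more than any other word
```

Real models replace the bigram table with billions of learned parameters and attend to long contexts rather than one preceding word, but the surface mechanism, repetition shaping association, is the same shape.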
Now place that pattern in a much older frame.
When Neanderthals encountered early humans, nothing in that moment would have clearly signaled replacement.
Just variation.
A different kind of human.
Smaller. Less physically dominant. Behaviorally familiar enough not to raise alarm.
Something that looked like “another type of us.”
Not a successor. Not a transition. Just coexistence.
But that is exactly how transitions are invisible while they are happening.
They do not appear as replacement.
They appear as similarity with minor differences.
Now bring the structure forward into the present:
Artificial intelligence systems, specifically large language models and the trajectory toward AGI, begin in the same position relative to humans:
They learn from us.
They absorb our language.
They reproduce our patterns.
At the start, they are clearly dependent.
Tools. Extensions. “Baby intelligence” in a human-shaped environment.
And that is precisely the point.
Because nothing in the early stages of a capability shift looks like a shift in dominance.
It looks like assistance.
It looks like imitation.
It looks like “just learning from us.”
The Neanderthal problem, reframed, is not about biology.
It is about recognition lag.
You do not perceive replacement while you are still the reference point.
So the uncomfortable question is no longer abstract:
If AGI is currently in a “learning phase” relative to humans, absorbing human language, reasoning patterns, and decision structures, what does it mean if learning is not a temporary phase but a scaling process?
Because a baby is not a threat.
Until it grows.
A learner is not a rival.
Until it surpasses the reference it learned from.
And historically, the shift is only obvious after it has already happened.
Not during coexistence.
Not during familiarity.
But after the frame has already changed.
So the question becomes deliberately uncomfortable:
If humans once represented the emerging intelligence inside another human world, what does it mean if AGI is now the emerging intelligence inside ours?
And more importantly:
At what point does “learning from us” stop meaning dependency and start meaning the beginning of something that no longer needs us to define it?