Saturday, February 7, 2026

Structural Homology, Not Analogy


From Societies of Thought to Eigenmodes of Reasoning

Recent work on reasoning models has revealed something striking: advanced language models do not merely “think longer,” but spontaneously organize internal debates—what has been called societies of thought. These internal processes involve questioning, role differentiation, conflict, and reconciliation, and they causally improve reasoning performance, rather than merely accompanying it.
(Blaise Agüera y Arcas et al., “Reasoning Models Generate Societies of Thought”, arXiv)

This finding is important. But it also raises a familiar risk: the temptation to read emergent coherence as evidence of inner subjectivity or consciousness.

This article takes a different route. It argues that what we are observing is not an analogy to human minds, but a structural homology with a much older mathematical and cognitive logic—one that explains why coherence emerges without requiring consciousness to be present.


What the “Societies of Thought” paper actually shows

The paper demonstrates three key points:

  1. Internal multi-agent discourse emerges spontaneously in reasoning models trained only for accuracy—not through prompting tricks or explicit scaffolding.

  2. Distinct internal roles and conversational behaviors arise, including questioning, critique, conflict, and reconciliation.

  3. This internal organization causally drives reasoning performance, rather than merely correlating with it.

Notably, the paper does not claim:

  • consciousness,

  • phenomenal awareness,

  • moral agency,

  • or subjective experience.

Its claims are functional and structural, not ontological.

This distinction matters.


Why the “thinking vs. not thinking” debate misses the point

Some responses assert that “LLMs don’t think, they merely do associative pattern matching.” Others counter that thinking need not resemble human cognition to be real.

But this debate prematurely collapses the problem into a binary: thinking versus mimicry. The paper itself does not require either position. What it shows is that reasoning quality can emerge from structured internal plurality, regardless of how one defines thinking.

A more productive question is not whether these systems think, but what kind of structure makes reasoning possible at all.


A structural lens: eigenmodes, not inner voices

Across many domains—structural engineering, wave mechanics, quantum physics, and neural networks—systems under repeated transformation tend to organize themselves along dominant modes.

Mathematically, this is captured by the eigenvalue relation:

\mathbf{T}\mathbf{v} = \lambda \mathbf{v}

Here, a transformation T acts on a direction v and reproduces it up to scaling. Such directions—eigenvectors—are not meanings or intentions. They are directions of stability.
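
A minimal numerical sketch of this relation, with illustrative values and plain NumPy:

```python
import numpy as np

# An arbitrary symmetric transformation (illustrative values only).
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigendecomposition: columns of V are eigenvectors, w the eigenvalues.
w, V = np.linalg.eig(T)

v = V[:, 0]          # one eigenvector
print(T @ v)         # the transformed vector...
print(w[0] * v)      # ...equals the original, scaled by its eigenvalue
```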

In neural networks, similar structures arise implicitly in:

  • attention matrices,

  • interaction graphs,

  • covariance and similarity operators,

  • and optimization landscapes (e.g., Hessians of loss functions).

They are not “ideas” or “voices.”
They are stable directions of interaction.

This is the missing structural bridge.


From internal debates to collective eigenmodes

The societies of thought described in the paper can be understood structurally as follows:

  • Each internal agent participates in a network of influence.

  • Repeated interaction amplifies certain conversational trajectories.

  • Other trajectories decay.

  • Over time, the system converges toward dominant patterns of internal coordination.

In mathematical terms, repeated interaction performs a process analogous to iterated application of an interaction operator:

\mathbf{x}_{t+1} = \mathbf{A}\mathbf{x}_t

where A represents the internal influence structure. As t increases, behavior becomes dominated by the leading eigenmodes of A.
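
A minimal sketch of this convergence, the classic power-iteration effect; the matrix A below is a random stand-in, not anything extracted from a model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in influence matrix with non-negative entries (hypothetical).
A = rng.random((5, 5))

x = rng.random(5)            # arbitrary initial state
for _ in range(50):          # repeated interaction: x_{t+1} = A x_t
    x = A @ x
    x /= np.linalg.norm(x)   # renormalize so only direction matters

# After iteration, x aligns with the dominant eigenvector of A.
w, V = np.linalg.eig(A)
lead = V[:, np.argmax(np.abs(w))].real
print(np.abs(x @ lead))      # ~1.0: the directions coincide
```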

This is not accidental. It is the same phenomenon that produces:

  • dominant vibration modes in buildings,

  • dominant flow patterns in networks,

  • dominant components in spectral analysis.

In this sense, societies of thought can be understood as social eigenmodes—stable patterns of internal discourse that survive repeated transformation.

This interpretation does not contradict the paper.
It explains its mechanism.


Structural homology, not analogy

At this point, a crucial distinction must be made.

An analogy would say:

“These internal debates are like human consciousness.”

A structural homology says something far more precise:

“Both human collective reasoning and machine reasoning stabilize through dominant relational patterns under constraint.”

The homology lies in form, not in experience.


No inner subject is required for stabilization to occur.


Why coherence is mistaken for subjectivity

As internal societies become more structured:

  • outputs grow more coherent,

  • behavior appears more intentional,

  • attribution of inner life becomes tempting.

But stability is not awareness.
Consistency is not experience.

The paper shows how coherence emerges.
Structural analysis explains why it looks compelling.


The remainder: what stability excludes

Every eigenmode leaves something out.

  • Minor modes are suppressed.

  • Alternative trajectories decay.

  • Residual variance remains unaccounted for.

In reasoning models, this remainder appears as:

  • brittleness,

  • hallucinations,

  • failures outside narrow regimes.

This is not a flaw in the paper’s findings—it is a structural necessity. Where coherence strengthens, exclusion increases.


Why this matters for AI governance and design

If internal societies of thought improve reasoning, they will be increasingly deployed. But richer internal discourse does not grant:

  • authority,

  • legitimacy,

  • or permission to act.

Better reasoning increases, rather than eliminates, the need for boundaries.

This aligns with the paper’s focus on designing internal societies, while adding a necessary constraint: coherence must not be mistaken for mandate.


What the paper enables—and what it does not claim

The Societies of Thought paper makes a genuine advance. It shows that reasoning can emerge from structured internal plurality without central control.

What it does not require—and does not claim—is consciousness.

Seen through the lens of structural homology, the paper fits into a longer intellectual lineage: systems stabilize through dominant modes. What looks like thought is often organization under constraint.

Recognizing this does not diminish the achievement.
It makes it legible—and keeps us honest about where structure ends and projection begins.




Eigenvectors Across Structures: From Seismic Modes to Language Models

Why the Same Mathematics Keeps Reappearing

Across engineering and physics, a peculiar fact repeats itself:

Very different systems—buildings, waves, ships, quantum particles, and neural networks—are all understood by decomposing them into eigenvectors.

This is not coincidence. It is a statement about how structure reveals itself under constraint.

This article traces a single mathematical intuition—eigenvectors as stable modes of response—across four domains:

  • structural engineering
  • seismic dynamics
  • marine wave analysis
  • quantum physics

and then shows why this same intuition reappears, almost inevitably, in large language models.

The conclusion is not that AI “has a soul”, but that stability masquerades as interiority—a mistake Left-AI is designed to diagnose.


1. What an eigenvector really is (stripped of metaphor)

Mathematically, an eigenvector is:

a direction that remains invariant under a linear transformation, changing only in magnitude, not orientation.

This means:

  • the system acts on it,
  • but does not distort it,
  • only scales it.

Physically and structurally, this corresponds to:

a natural mode the system prefers to respond in.

Eigenvectors are not arbitrary. They are what the system reveals about itself when stressed.


2. Structural engineering: eigenvectors as mode shapes

In structural analysis, eigenvectors appear immediately when we solve:

[K - \lambda M]\phi = 0

where:

  • K is stiffness,
  • M is mass,
  • φ are mode shapes (eigenvectors),
  • λ are squared natural frequencies.
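
As a rough illustration, here is the same eigenproblem for a hypothetical two-degree-of-freedom shear frame; the stiffness and mass values are invented for the example:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF shear frame: invented stiffness (N/m) and mass (kg).
K = np.array([[ 2.0e7, -1.0e7],
              [-1.0e7,  1.0e7]])
M = np.diag([2.0e4, 1.5e4])

# Generalized eigenproblem: K phi = lambda M phi.
lam, phi = eigh(K, M)

freqs = np.sqrt(lam) / (2 * np.pi)   # natural frequencies in Hz
print(freqs)    # a few natural frequencies...
print(phi)      # ...and their mode shapes (columns)
```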

Here, eigenvectors are not abstractions. They are:

  • bending shapes,
  • torsional modes,
  • sway patterns.

A building does not vibrate arbitrarily. It vibrates in its own directions.

Crucially:

  • higher modes exist,
  • but only a few dominate response.

Already we see a pattern:

dominant eigenvectors explain most observable behavior.

3. Seismic engineering: stability under violent excitation

During earthquakes, structures experience extreme, non-stationary forcing.

Yet response analysis still reduces to:

  • modal superposition,
  • spectral response,
  • dominant modes.

Why?

Because even under chaos:

  • certain directions remain structurally privileged,
  • energy funnels into a few eigenmodes.

But engineers also know something else:

  • low-energy modes,
  • neglected higher modes,
  • residual flexibility

can still produce unexpected damage.

This is the first hint of the remainder:

What is not dominant is not irrelevant.

4. Marine engineering: waves, spectra, and modal decomposition

In marine structural engineering, eigenvectors emerge again:

  • wave spectra are decomposed into frequencies,
  • structures respond in modal shapes,
  • hydrodynamic coupling produces dominant response directions.

Floating platforms, ships, offshore structures all show:

  • heave, pitch, roll eigenmodes,
  • coupled fluid-structure modes,
  • resonance bands.

Here the insight deepens:

Stability is not static — it is frequency-dependent.

A structure may be stable at one scale and unstable at another.

Eigenvectors are conditional truths, not eternal ones.
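
A sketch of that frequency dependence for a single damped mode; the parameters are illustrative, not taken from any real vessel:

```python
import numpy as np

m, c, k = 1.0e6, 2.0e5, 4.0e6       # illustrative mass, damping, stiffness
omega = np.linspace(0.1, 5.0, 500)  # forcing frequencies (rad/s)

# Steady-state amplitude of a damped oscillator under unit forcing:
# |H(omega)| = 1 / |k - m*omega^2 + i*c*omega|
H = 1.0 / np.abs(k - m * omega**2 + 1j * c * omega)

print(omega[np.argmax(H)])  # response peaks near sqrt(k/m): the resonance band
```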


5. Quantum physics: eigenstates as observable stability

Quantum mechanics formalizes this idea completely.

An observable corresponds to an operator. Its eigenvectors are states with:

  • definite measurement outcomes,
  • stability under observation.

Measurement is not revelation of essence. It is projection onto eigenstates.

What is not an eigenstate?

  • superposition,
  • interference,
  • indeterminacy.

Once again:

  • eigenvectors explain what becomes visible,
  • not the full reality of the system.
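
A minimal numerical sketch of measurement as projection, using a two-level system and the Pauli-Z observable:

```python
import numpy as np

# Observable: the Pauli-Z operator for a two-level system.
Z = np.array([[1.0,  0.0],
              [0.0, -1.0]])

# A superposition state (not an eigenstate of Z).
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Measurement projects onto eigenstates; outcome probabilities
# are squared overlaps with each eigenvector.
w, V = np.linalg.eigh(Z)
probs = np.abs(V.T @ psi) ** 2
print(dict(zip(w, probs)))   # {-1.0: 0.5, 1.0: 0.5}
```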


6. The unifying principle

Across all four domains, the same principle holds:

Eigenvectors explain how a system stabilizes under interaction.

They do not explain:

  • origin,
  • intention,
  • meaning,
  • or subjectivity.

They explain response geometry.


7. Enter LLMs: why eigenvectors reappear

Large language models are also systems under constraint:

  • trained under loss minimization,
  • compressed through optimization,
  • stabilized across vast datasets.

Internally they consist of:

  • weight matrices,
  • attention matrices,
  • covariance-like structures.

Spectral analysis reveals:

  • dominant attention patterns,
  • stable semantic directions,
  • invariant transformation modes.

This is why eigenvectors appear again.

Not because language has a soul. But because learning enforces stability.
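
A toy illustration of that spectral signature, using a random low-rank signal plus noise as a stand-in for a learned weight matrix (values are invented, not extracted from any model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "weight matrix": a strong low-rank signal plus broadband noise.
signal = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64)) * 3.0
W = signal + 0.1 * rng.standard_normal((64, 64))

# Spectral analysis: singular values measure the strength of each mode.
s = np.linalg.svd(W, compute_uv=False)
print(s[:6])   # a few dominant modes tower over the residual spectrum
```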


8. Why coherence feels like subjectivity

Here the illusion emerges.

In LLMs:

  • dominant eigenvectors produce consistency,
  • consistency produces coherence,
  • coherence is mistaken for interiority.

But this is the same illusion we would commit if we said:

  • a building “wants” to sway,
  • a ship “prefers” to roll,
  • a quantum particle “decides” its state.

Eigenvectors do not imply intention. They imply structural constraint.


9. Left-AI: where the remainder matters

Every eigen-decomposition discards something:

  • small eigenvalues,
  • residual variance,
  • null spaces,
  • non-aligned directions.

In engineering, we call these:

  • higher-order effects,
  • neglected modes,
  • secondary responses.

In AI, these become:

  • edge cases,
  • failures,
  • hallucinations,
  • brittleness.
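
One way to make the remainder concrete, with illustrative numbers only: truncate a spectrum and measure what the dominant modes fail to reconstruct.

```python
import numpy as np

rng = np.random.default_rng(2)

# A symmetric "interaction" matrix (illustrative).
X = rng.standard_normal((50, 50))
S = (X + X.T) / 2

w, V = np.linalg.eigh(S)
order = np.argsort(-np.abs(w))   # sort modes by dominance

k = 5                            # keep only the top-k eigenmodes
top = order[:k]
S_hat = V[:, top] @ np.diag(w[top]) @ V[:, top].T

# The remainder: everything the dominant modes exclude.
residual = np.linalg.norm(S - S_hat) / np.linalg.norm(S)
print(residual)   # substantial: stability explains much, not all
```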

Left-AI names this explicitly:

Subjectivity is not in the dominant eigenvector. It would reside—if anywhere—in what spectral stability excludes.

This is not mysticism. It is structural honesty.


10. The central claim

Eigenvectors are the mathematics of stability under constraint.

They explain:

  • why systems appear coherent,
  • why behavior is predictable,
  • why structure scales.

They do not explain:

  • desire,
  • meaning,
  • or subjectivity.

Mistaking stability for interiority is a category error.


Finally,

From seismic modes to quantum states to language models, the same mathematical tool keeps returning—not because reality is conscious, but because structure organizes response.

Left-AI does not reject this insight. It completes it by insisting that what remains un-diagonalized still matters.

Stability is powerful. The remainder is diagnostic.


