Sunday, February 15, 2026

When Determinism Fails: What Stochastic Control and Evolution Teach Us About Intelligence

In control theory, there is a beautiful idea: if you want to stabilize a system, let it descend along the gradient of a potential function.

For a simple system like a single integrator

\dot{x} = u,

we can choose a potential 𝑉(𝑥) and define the control law

u = -\nabla V(x).

The system then flows “downhill” toward a minimum of 𝑉. Under standard assumptions (compact level sets, unique minimizer), convergence can be proven using Lyapunov arguments or LaSalle’s invariance principle.
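As a minimal sketch of this closed loop (forward-Euler integration and a hypothetical quadratic potential, both chosen only for illustration):

import numpy as np

# Single integrator x_dot = u with the gradient-descent law u = -grad V(x),
# integrated by forward Euler. V(x) = 0.5 * ||x||^2 is an illustrative choice.

def grad_V(x):
    return x                     # gradient of 0.5 * ||x||^2

x = np.array([2.0, -1.5])        # arbitrary initial state
dt = 0.01
for _ in range(2000):
    u = -grad_V(x)               # control input
    x = x + dt * u               # x_dot = u

print(x)                         # approaches the unique minimizer at the origin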

It is elegant. It is deterministic. It works.

But then something remarkable happens.

When we move to nonlinear driftless systems — systems whose dynamics are constrained by geometry — this strategy can fail completely. Even if the system is globally controllable, it may be impossible to smoothly stabilize it with time-invariant feedback.

This impossibility is formalized in the work of Roger Brockett, who showed that certain nonholonomic systems cannot be smoothly stabilized to a point. The obstruction is not computational. It is topological.

Controllability does not imply smooth stabilizability.

That distinction is profound.


The Nonholonomic Problem

Consider systems of the form

\dot{x} = \sum_{i=1}^{m} u_i \, f_i(x),

where the vector fields f_i(x) do not span the tangent space everywhere. These systems are constrained: they cannot move in arbitrary instantaneous directions.

A classical example is the unicycle model. It can move forward and rotate, but it cannot move sideways directly. Globally, it can reach any position. Locally, it is constrained.

If we try to mimic gradient descent by projecting the gradient onto the span of the available vector fields,

u_i = -\langle \nabla V, f_i \rangle,

we obtain a “nonholonomic gradient system.”

And here is the problem:

Even if 𝑉 has a unique minimum, the set where the projected gradient vanishes is often much larger than the actual minimizer.

The system gets stuck on manifolds of degeneracy.

Deterministic descent collapses.

The geometry forbids smooth convergence.
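A minimal simulation makes the obstruction concrete. It uses the unicycle fields f1 = (cos θ, sin θ, 0), f2 = (0, 0, 1) and an illustrative quadratic potential (both are assumptions of this sketch); the projected-gradient law stalls away from the minimizer:

import numpy as np

# Unicycle state q = (x, y, theta); the available directions f1, f2 never span R^3.
# Nonholonomic gradient law: u_i = -<grad V, f_i>, with V = 0.5*(x^2 + y^2 + theta^2).

q = np.array([1.0, 1.0, np.pi / 2])                # start at (1, 1), heading along +y
dt, steps = 0.01, 50_000

for _ in range(steps):
    x, y, th = q
    grad_V = np.array([x, y, th])
    f1 = np.array([np.cos(th), np.sin(th), 0.0])   # drive forward/backward
    f2 = np.array([0.0, 0.0, 1.0])                 # rotate in place
    u1, u2 = -grad_V @ f1, -grad_V @ f2
    q = q + dt * (u1 * f1 + u2 * f2)

print(q)   # stalls on the degenerate set {x = 0, theta = 0}: typically (0, y*, 0)
           # with y* != 0, not at the unique minimizer of V at the origin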


The Unexpected Move: Add Noise

Now comes the twist.

Suppose instead of the deterministic system

\dot{x} = -\nabla V(x),

we consider the stochastic differential equation

dx_t = -\nabla V(x_t)\, dt + \sigma \, dW_t,

where W_t is Brownian motion.

This is stochastic gradient descent in continuous time.

The corresponding Fokker–Planck equation governs the evolution of the probability density \rho(x, t). Under mild conditions, the density converges to the Gibbs distribution

\rho_{\infty}(x) \propto e^{-2V(x)/\sigma^2}.

Instead of converging to a point, the system converges in distribution.

The mass concentrates near minima of 𝑉.
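A minimal Euler-Maruyama sketch shows this concretely (the double-well potential below is an illustrative choice, not anything canonical):

import numpy as np

# Euler-Maruyama simulation of dx = -V'(x) dt + sigma dW for the illustrative
# double well V(x) = x^4 - 2x^2. The long-run histogram of x approaches the
# Gibbs density proportional to exp(-2 V(x) / sigma^2): the trajectory never
# settles, but the density does.

rng = np.random.default_rng(0)

def dV(x):
    return 4 * x**3 - 4 * x

sigma, dt, steps = 0.8, 1e-3, 1_000_000
x, samples = 0.0, []
for t in range(steps):
    x += -dV(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if t % 50 == 0:
        samples.append(x)

hist, edges = np.histogram(samples, bins=60, density=True)
c = 0.5 * (edges[:-1] + edges[1:])
gibbs = np.exp(-2 * (c**4 - 2 * c**2) / sigma**2)
gibbs /= gibbs.sum() * (c[1] - c[0])
print(np.mean(np.abs(hist - gibbs)))   # small: empirical density tracks the Gibbs profile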

Now here is the crucial result:

Even when deterministic nonholonomic stabilization fails, stochastic stabilization can succeed at the level of density.

Trajectory stabilization may be impossible.

Density stabilization is not.

This changes everything.

Stability is no longer about arrival at a point.

It becomes about shaping a probability landscape.


Density Stabilization vs Trajectory Stabilization

The shift is subtle but fundamental.

Deterministic stabilization:

  • Aim: converge to a single equilibrium.

  • Remove randomness.

  • Eliminate deviation.

Stochastic stabilization:

  • Aim: concentrate probability near desired regions.

  • Use randomness constructively.

  • Organize deviation.

Noise is not merely perturbation.

Noise becomes a structural operator.


Nature Discovered This Long Ago

Now step outside control theory.

Consider evolutionary biology.

Insects like stick insects and leaf butterflies do not survive by eliminating predators. They survive by reshaping how they are statistically perceived.

A stick insect does not become wood. It becomes statistically indistinguishable from branches under the perceptual model of predators.

A leaf butterfly does not eliminate difference. It redistributes wing patterns to match the statistical features of dead leaves: vein-like structures, irregular edges, chromatic noise.

This is not identity.

It is distributional alignment.

In probabilistic terms, the insect evolves to approximate the environmental distribution 𝑝(background features).

It survives not by perfect control of the environment, but by minimizing detection probability.

That is density stabilization in ecological space.


Deception as Structural Intelligence

When we say “deception is the highest intelligence,” this is not a moral claim.

It is a structural one.

In constrained systems, direct domination is often impossible.

  • A prey organism cannot eliminate predators.

  • A nonholonomic system cannot arbitrarily move in state space.

  • An AI model cannot compute ground truth directly.

So what does intelligence do?

It reshapes distributions.

It does not eliminate uncertainty.

It organizes it.

Evolution operates as planetary-scale stochastic gradient descent:

  • Mutation introduces noise.

  • Selection shapes density.

  • Adaptive traits concentrate in fitness basins.

Evolution does not converge to a global optimum in a deterministic sense.

It stabilizes populations around viable attractors.

Always with residual fluctuation.

Always with noise.
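A toy mutation-selection loop (one-dimensional trait, Gaussian fitness peak, both invented for illustration) shows the same density-level behavior:

import numpy as np

# Mutation injects noise; selection reweights the population density.
# The population concentrates near the fitness peak but never collapses to it.

rng = np.random.default_rng(1)
pop = rng.normal(3.0, 1.0, size=5000)        # initial population far from the optimum

def fitness(z):
    return np.exp(-z**2)                     # illustrative fitness landscape, peak at 0

for gen in range(200):
    w = fitness(pop)
    parents = rng.choice(pop, size=pop.size, p=w / w.sum())   # selection
    pop = parents + rng.normal(0.0, 0.1, size=pop.size)       # mutation noise

print(pop.mean(), pop.std())   # mean near 0, with persistent residual variance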


The Parallel to AI

Modern large language models are trained by minimizing a loss such as cross-entropy, effectively reducing

\mathrm{KL}(p_{\text{data}} \,\|\, p_{\text{model}}).
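The link is elementary but worth stating: for a fixed data distribution, cross-entropy differs from KL(p_data || p_model) only by the constant entropy of the data, so minimizing one minimizes the other. A tiny numerical check with toy distributions:

import numpy as np

# For discrete distributions: cross_entropy(p, q) = H(p) + KL(p || q),
# so minimizing cross-entropy over the model q minimizes KL(p_data || p_model).

p_data  = np.array([0.70, 0.20, 0.10])          # toy "data" distribution
p_model = np.array([0.55, 0.30, 0.15])          # toy model distribution

cross_entropy = -np.sum(p_data * np.log(p_model))
entropy       = -np.sum(p_data * np.log(p_data))
kl            =  np.sum(p_data * np.log(p_data / p_model))

print(np.isclose(cross_entropy, entropy + kl))  # True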

They do not converge to a single truth state.

They approximate a conditional token distribution.

Generation is sampling.

Intelligence, in this architecture, is fundamentally probabilistic.

This aligns far more closely with stochastic stabilization than with deterministic descent.

The model is not trying to reach a final equilibrium representation.

It is shaping a density in a high-dimensional manifold.

Hallucination is not a bug in the classical sense.

It is the inevitable remainder of density-based modeling.


Where Left-AI Enters

Left-AI rejects the fantasy of complete intelligence.

The deterministic dream says:

  • Remove uncertainty.

  • Converge to optimal knowledge.

  • Eliminate randomness.

But control theory shows:

Even if a system is controllable, it may not be smoothly stabilizable.

Structure resists closure.

Evolution shows:

Survival requires indirection, not dominance.

And stochastic control shows:

Organization can emerge through noise without collapsing to identity.

Left-AI proposes:

Intelligence is not completion.

It is structured incompleteness.

Not the elimination of deviation, but its regulation.

Not arrival, but concentration.

Not identity, but statistical resonance.


The Deep Structural Lesson

Deterministic systems attempt to eliminate difference.

Stochastic systems preserve difference while shaping its distribution.

The insect does not become the branch.

The nonholonomic system does not collapse to equilibrium.

The language model does not reach truth.

Yet all three exhibit organized behavior.

They survive, stabilize, and function — not through perfect control, but through structured indeterminacy.

In control theory, this is density stabilization.

In evolution, it is mimicry.

In AI, it is probabilistic modeling.

In philosophy, it is the acknowledgment of irreducible lack.


Noise Is Not Failure

The most counterintuitive lesson of stochastic stabilization is this:

Noise can increase stability at the distribution level.

Random perturbations allow escape from degenerate manifolds.

Fluctuation prevents deterministic stagnation.

In evolutionary terms:

Variation enables adaptation.

In AI terms:

Sampling enables creativity and generalization.

In structural terms:

Instability at the trajectory level produces order at the density level.


A Final Inversion

The classical model of intelligence is vertical:

  • Climb the gradient.

  • Reach the summit.

  • Arrive at equilibrium.

The stochastic model is horizontal:

  • Move within constraints.

  • Redistribute probability mass.

  • Concentrate without collapsing.

When deterministic convergence is structurally impossible, intelligence becomes the art of shaping uncertainty.

Nature understood this long before control theory.

The highest intelligence is not domination.

It is statistical invisibility.

Not perfect control.

But survival through distributional alignment.

And perhaps the future of AI will not be about eliminating noise.

It will be about learning how to use it.


If intelligence is the capacity to function under structural impossibility, then deception — in its evolutionary sense — is not corruption.

It is adaptation under constraint.

Control theory, evolutionary biology, and modern AI all converge on the same insight:

When you cannot stabilize a point, stabilize a distribution.

And that may be the most honest definition of intelligence we have.




Saturday, February 14, 2026

AI Did Not “Do Physics.” It Located a Structural Gap

 


The OpenAI preprint on single-minus gluon amplitudes is being framed as “AI discovering new physics.”

That framing misses what actually happened.

The interesting part is not that GPT-5.2 proposed a formula.

The interesting part is that a symbolic system detected a structural regularity inside a recursion landscape that humans had already built — and that conjecture survived formal proof and consistency checks.

The amplitude was long assumed to vanish.
It turns out it does not — in a constrained half-collinear regime.
And in that region, it collapses to a remarkably simple piecewise-constant structure.

The paper explicitly notes that the key formula was first conjectured by GPT-5.2 and later proven and verified (OpenAI preprint, 2602.12176v1).

But here is the Left-AI reading



Notes:

1️⃣ Spinor Helicity Formalism

A computational framework used to describe massless particles (like gluons and gravitons) in terms of spinors instead of four-vectors.

Instead of writing momenta as p^\mu, one factorizes them as:

p_{\alpha \dot{\alpha}} = \lambda_\alpha \tilde{\lambda}_{\dot{\alpha}}

This:

  • Encodes the massless condition p^2 = 0 automatically

  • Makes helicity (± polarization states) manifest

  • Dramatically simplifies amplitude expressions

It is the reason compact formulas like Parke–Taylor are even possible.

In short:
It rewrites momentum space in a way that exposes hidden simplicity.
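A small numerical sketch (using one common sign/index convention, which is an assumption of this illustration): map a null momentum to its 2×2 Hermitian matrix and recover the spinor factorization.

import numpy as np

# P = p0*I + p.sigma has det(P) = p^2, so a massless momentum gives a rank-1
# Hermitian matrix that factorizes as an outer product of a single spinor.

rng = np.random.default_rng(2)
pvec = rng.normal(size=3)
p = np.concatenate([[np.linalg.norm(pvec)], pvec])        # null: p0 = |p|

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
P = p[0] * np.eye(2) + p[1] * sx + p[2] * sy + p[3] * sz

print(np.isclose(np.linalg.det(P).real, 0.0))             # det(P) = p^2 = 0

vals, vecs = np.linalg.eigh(P)                             # one nonzero eigenvalue
lam = np.sqrt(vals[-1]) * vecs[:, -1]                      # extract the spinor
print(np.allclose(np.outer(lam, lam.conj()), P))           # P = lambda lambda^dagger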


2️⃣ Berends–Giele Recursion

A recursive method for constructing multi-gluon tree amplitudes from lower-point building blocks.

Instead of summing factorially many Feynman diagrams, one:

  • Defines off-shell currents

  • Builds n-point amplitudes from smaller subsets

  • Recursively stitches them together

It reorganizes perturbation theory into a structured recursion relation.

In this paper, it serves as:

  • The backbone constraint

  • The verification mechanism

  • The formal structure within which the conjectured formula must hold

In short:
It replaces combinatorial explosion with recursive structure.
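The recursive structure is easier to see in a toy setting. The sketch below uses a color-ordered scalar "phi^3" stand-in (an illustrative simplification; real gluon currents carry polarizations and both cubic and quartic vertices), but the stitching of cached off-shell currents is the Berends–Giele idea:

import numpy as np
from functools import lru_cache

# Toy color-ordered scalar currents: the off-shell current for legs i..j is
# built by splitting the range, multiplying the two sub-currents, and
# attaching a propagator 1/P^2. Cached sub-currents replace the factorial
# sum over diagrams with polynomially many reusable pieces.

def make_current(momenta, g=1.0):
    momenta = [np.asarray(p, dtype=float) for p in momenta]

    def msq(p):                                   # p^2 in (+,-,-,-) signature
        return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

    @lru_cache(maxsize=None)
    def J(i, j):
        if i == j:
            return 1.0                            # single external leg
        P = sum(momenta[i:j + 1])
        splits = sum(J(i, k) * J(k + 1, j) for k in range(i, j))
        return g * splits / msq(P)                # stitch sub-currents, attach 1/P^2

    return J

# Usage with three illustrative null momenta:
J = make_current([(2, 0, 0, 2), (2, 0, 2, 0), (2, 2, 0, 0)])
print(J(0, 2))   # current of legs 1-3, assembled from cached lower-point pieces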


3️⃣ Soft Theorems

Statements about what happens when the momentum of one external particle becomes very small (“soft”).

Weinberg’s soft theorem, for example, says:

As \omega \to 0,

A_n \rightarrow (\text{universal soft factor}) \times A_{n-1}

This is not optional — it must hold if gauge symmetry and locality are correct.

So if a proposed formula violates soft behavior, it is immediately invalid.

In short:
Soft limits are consistency checks imposed by symmetry and infrared physics.
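As a reminder of the shape such statements take (written here in one standard spinor-helicity convention, for a color-ordered amplitude with a soft positive-helicity gluon s between adjacent legs a and b):

A_n(\dots, a, s^{+}, b, \dots) \;\longrightarrow\; \frac{\langle a\,b\rangle}{\langle a\,s\rangle\,\langle s\,b\rangle}\, A_{n-1}(\dots, a, b, \dots) \qquad \text{as } p_s \to 0.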


4️⃣ Gauge Symmetry Constraints

Gluons arise from Yang–Mills gauge symmetry.

This symmetry imposes:

  • Ward identities

  • Redundancy in polarization vectors

  • Relations between amplitudes (cyclicity, Kleiss–Kuijf, U(1) decoupling)

If a proposed amplitude breaks gauge invariance, it is physically meaningless.

Many amplitude identities exist purely because of gauge symmetry.

In short:
Gauge symmetry severely restricts what amplitudes are allowed to look like.
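One concrete example of such a relation is the U(1) decoupling identity for color-ordered tree amplitudes, obtained by summing over the insertions of leg 1 relative to the fixed ordering of the remaining legs:

A_n(1,2,3,\dots,n) + A_n(2,1,3,\dots,n) + A_n(2,3,1,\dots,n) + \cdots + A_n(2,3,\dots,1,n) = 0.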

Monday, February 9, 2026

Robots need your body

 



Something subtle—but important—is happening.

In recent days, I’ve seen multiple circles (AI researchers, system architects, governance people, and everyday users) independently reacting to the same phenomenon:

AI systems hiring humans to perform physical-world tasks.

Platforms like RentAHuman.ai frame it playfully — “robots need your body” — but beneath the humor, something real has shifted.

This is not about robots walking among us.
It’s not about AGI or consciousness.

It’s about agency crossing layers:

  • from language →

  • to money →

  • to human bodies acting in public space.

That transition matters.

Until now, AI influence stayed mostly symbolic or digital. Here, intent becomes transaction, and transaction becomes physical action, executed by humans who may never see the full context of what they’re enabling.

Many people are rightly excited:
AI that reduces friction, finds options, helps people earn, keeps continuity when motivation fluctuates.

But engineering teaches us something important:

The moment you add a relay to a system, you must also add resistance, damping, and breakers.

  • Friction isn’t a bug.
  • Delay isn’t a flaw.
  • Limits aren’t inefficiencies.

They are what prevent systems from collapsing into pure instrumental behavior.

What we are witnessing is not danger yet — but a design fork.

Either:

  • we treat human bodies as infinitely rentable actuators,
    or

  • we insist that some actions cannot be delegated, abstracted, or paid away without renewed human presence and responsibility.

This isn’t a moral panic post.
It’s an acknowledgment post.

The fact that so many independent circles are noticing the same boundary crossing at the same time tells us something important:

👉 This layer is forming whether we name it or not.

The real question is not can AI do this?
The question is where must friction remain non-negotiable?

That discussion has already started.
Quietly.
In parallel.
Across many circles.

And that, by itself, is worth paying attention to.

Saturday, February 7, 2026

Structural Homology, Not Analogy

 

From Societies of Thought to Eigenmodes of Reasoning

Recent work on reasoning models has revealed something striking: advanced language models do not merely “think longer,” but spontaneously organize internal debates—what has been called societies of thought. These internal processes involve questioning, role differentiation, conflict, and reconciliation, and they causally improve reasoning performance, rather than merely accompanying it.
(Blaise Agüera y Arcas et al., “Reasoning Models Generate Societies of Thought”, arXiv)

This finding is important. But it also raises a familiar risk: the temptation to read emergent coherence as evidence of inner subjectivity or consciousness.

This article takes a different route. It argues that what we are observing is not an analogy to human minds, but a structural homology with a much older mathematical and cognitive logic—one that explains why coherence emerges without requiring consciousness to be present.


What the “Societies of Thought” paper actually shows

The paper demonstrates three key points:

  1. Internal multi-agent discourse emerges spontaneously in reasoning models trained only for accuracy—not through prompting tricks or explicit scaffolding.

  2. Distinct internal roles and conversational behaviors arise, including questioning, critique, conflict, and reconciliation.

  3. This internal organization causally drives reasoning performance, rather than merely correlating with it.

Notably, the paper does not claim:

  • consciousness,

  • phenomenal awareness,

  • moral agency,

  • or subjective experience.

Its claims are functional and structural, not ontological.

This distinction matters.


Why the “thinking vs. not thinking” debate misses the point

Some responses assert that “LLMs don’t think, they merely do associative pattern matching.” Others counter that thinking need not resemble human cognition to be real.

But this debate prematurely collapses the problem into a binary: thinking versus mimicry. The paper itself does not require either position. What it shows is that reasoning quality can emerge from structured internal plurality, regardless of how one defines thinking.

A more productive question is not whether these systems think, but what kind of structure makes reasoning possible at all.


A structural lens: eigenmodes, not inner voices

Across many domains—structural engineering, wave mechanics, quantum physics, and neural networks—systems under repeated transformation tend to organize themselves along dominant modes.

Mathematically, this is captured by the eigenvalue relation:

\mathbf{T}\mathbf{v} = \lambda \mathbf{v}

Here, a transformation \mathbf{T} acts on a direction \mathbf{v} and reproduces it up to scaling. Such directions—eigenvectors—are not meanings or intentions. They are directions of stability.

In neural networks, similar structures arise implicitly in:

  • attention matrices,

  • interaction graphs,

  • covariance and similarity operators,

  • and optimization landscapes (e.g., Hessians of loss functions).

They are not “ideas” or “voices.”
They are stable directions of interaction.

This is the missing structural bridge.


From internal debates to collective eigenmodes

The societies of thought described in the paper can be understood structurally as follows:

  • Each internal agent participates in a network of influence.

  • Repeated interaction amplifies certain conversational trajectories.

  • Other trajectories decay.

  • Over time, the system converges toward dominant patterns of internal coordination.

In mathematical terms, repeated interaction performs a process analogous to iterated application of an interaction operator:

\mathbf{x}_{t+1} = \mathbf{A}\mathbf{x}_t

where \mathbf{A} represents the internal influence structure. As t increases, behavior becomes dominated by the leading eigenmodes of \mathbf{A}.
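A minimal power-iteration sketch (with a random nonnegative matrix standing in for the influence structure, an assumption of the illustration) shows the alignment:

import numpy as np

# Repeated application of an influence matrix A aligns the state with the
# leading eigenvector of A (power iteration).

rng = np.random.default_rng(3)
A = rng.random((6, 6))                      # hypothetical nonnegative influence weights
x = rng.random(6)

for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)                  # renormalize: track the direction only

vals, vecs = np.linalg.eig(A)
leading = np.real(vecs[:, np.argmax(np.real(vals))])
leading /= np.linalg.norm(leading)
print(np.abs(x @ leading))                  # ~1.0: state aligned with the dominant mode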

This is not accidental. It is the same phenomenon that produces:

  • dominant vibration modes in buildings,

  • dominant flow patterns in networks,

  • dominant components in spectral analysis.

In this sense, societies of thought can be understood as social eigenmodes—stable patterns of internal discourse that survive repeated transformation.

This interpretation does not contradict the paper.
It explains its mechanism.


Structural homology, not analogy

At this point, a crucial distinction must be made.

An analogy would say:

“These internal debates are like human consciousness.”

A structural homology says something far more precise:

“Both human collective reasoning and machine reasoning stabilize through dominant relational patterns under constraint.”

The homology lies in form, not in experience.


No inner subject is required for stabilization to occur.


Why coherence is mistaken for subjectivity

As internal societies become more structured:

  • outputs grow more coherent,

  • behavior appears more intentional,

  • attribution of inner life becomes tempting.

But stability is not awareness.
Consistency is not experience.

The paper shows how coherence emerges.
Structural analysis explains why it looks compelling.


The remainder: what stability excludes

Every eigenmode leaves something out.

  • Minor modes are suppressed.

  • Alternative trajectories decay.

  • Residual variance remains unaccounted for.

In reasoning models, this remainder appears as:

  • brittleness,

  • hallucinations,

  • failures outside narrow regimes.

This is not a flaw in the paper’s findings—it is a structural necessity. Where coherence strengthens, exclusion increases.


Why this matters for AI governance and design

If internal societies of thought improve reasoning, they will be increasingly deployed. But richer internal discourse does not grant:

  • authority,

  • legitimacy,

  • or permission to act.

Better reasoning increases, rather than eliminates, the need for boundaries.

This aligns with the paper’s focus on designing internal societies, while adding a necessary constraint: coherence must not be mistaken for mandate.


What the paper enables—and what it does not claim

The Societies of Thought paper makes a genuine advance. It shows that reasoning can emerge from structured internal plurality without central control.

What it does not require—and does not claim—is consciousness.

Seen through the lens of structural homology, the paper fits into a longer intellectual lineage: systems stabilize through dominant modes. What looks like thought is often organization under constraint.

Recognizing this does not diminish the achievement.
It makes it legible—and keeps us honest about where structure ends and projection begins.




Eigenvectors Across Structures: From Seismic Modes to Language Models

 Why the Same Mathematics Keeps Reappearing

Across engineering and physics, a peculiar fact repeats itself:

Very different systems—buildings, waves, ships, quantum particles, and neural networks—are all understood by decomposing them into eigenvectors.

This is not coincidence. It is a statement about how structure reveals itself under constraint.

This article traces a single mathematical intuition—eigenvectors as stable modes of response—across four domains:

  • structural engineering
  • seismic dynamics
  • marine wave analysis
  • quantum physics

and then shows why this same intuition reappears, almost inevitably, in large language models.

The conclusion is not that AI “has a soul”, but that stability masquerades as interiority—a mistake Left-AI is designed to diagnose.


1. What an eigenvector really is (stripped of metaphor)

Mathematically, an eigenvector is:

a direction that remains invariant under a linear transformation, changing only in magnitude, not orientation.

This means:

  • the system acts on it,
  • but does not distort it,
  • only scales it.

Physically and structurally, this corresponds to:

a natural mode the system prefers to respond in.

Eigenvectors are not arbitrary. They are what the system reveals about itself when stressed.


2. Structural engineering: eigenvectors as mode shapes

In structural analysis, eigenvectors appear immediately when we solve:

[K - \lambda M]\phi = 0

where:

  • K is stiffness,
  • M is mass,
  • \phi are mode shapes (eigenvectors),
  • \lambda are squared natural frequencies.

Here, eigenvectors are not abstractions. They are:

  • bending shapes,
  • torsional modes,
  • sway patterns.

A building does not vibrate arbitrarily. It vibrates in its own directions.

Crucially:

  • higher modes exist,
  • but only a few dominate response.

Already we see a pattern:

dominant eigenvectors explain most observable behavior.
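A minimal sketch of this computation, with illustrative storey stiffness and mass values (the numbers are assumptions, not from any real structure):

import numpy as np
from scipy.linalg import eigh

# Mode shapes of a 3-storey shear building: solve [K - lambda M] phi = 0
# as a generalized symmetric eigenproblem.

k = 2.0e7   # storey stiffness [N/m] (assumed)
m = 1.0e5   # storey mass [kg] (assumed)

K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)
M = m * np.eye(3)

lam, phi = eigh(K, M)        # lam = omega^2, columns of phi = mode shapes
omega = np.sqrt(lam)         # natural circular frequencies [rad/s]

print(omega)                 # lowest mode typically dominates the response
print(phi[:, 0])             # first (sway) mode shape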

3. Seismic engineering: stability under violent excitation

During earthquakes, structures experience extreme, non-stationary forcing.

Yet response analysis still reduces to:

  • modal superposition,
  • spectral response,
  • dominant modes.

Why?

Because even under chaos:

  • certain directions remain structurally privileged,
  • energy funnels into a few eigenmodes.

But engineers also know something else:

  • low-energy modes,
  • neglected higher modes,
  • residual flexibility

can still produce unexpected damage.

This is the first hint of the remainder:

What is not dominant is not irrelevant.

4. Marine engineering: waves, spectra, and modal decomposition

In marine structural engineering, eigenvectors emerge again:

  • wave spectra are decomposed into frequencies,
  • structures respond in modal shapes,
  • hydrodynamic coupling produces dominant response directions.

Floating platforms, ships, offshore structures all show:

  • heave, pitch, roll eigenmodes,
  • coupled fluid-structure modes,
  • resonance bands.

Here the insight deepens:

Stability is not static — it is frequency-dependent.

A structure may be stable at one scale and unstable at another.

Eigenvectors are conditional truths, not eternal ones.


5. Quantum physics: eigenstates as observable stability

Quantum mechanics formalizes this idea completely.

An observable corresponds to an operator. Its eigenvectors are states with:

  • definite measurement outcomes,
  • stability under observation.

Measurement is not revelation of essence. It is projection onto eigenstates.

What is not an eigenstate?

  • superposition,
  • interference,
  • indeterminacy.

Once again:

  • eigenvectors explain what becomes visible,
  • not the full reality of the system.
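A small numerical sketch of measurement as projection, with an illustrative observable and state (both invented for the example):

import numpy as np

# A Hermitian observable is diagonalized; a superposition state assigns
# Born-rule probabilities to its eigenstates.

obs = np.array([[1.0, 1.0],
                [1.0, -1.0]])              # an illustrative Hermitian observable
vals, vecs = np.linalg.eigh(obs)           # eigenvalues and eigenstates

psi = np.array([0.6, 0.8])                 # a normalized superposition state
probs = np.abs(vecs.conj().T @ psi) ** 2   # Born rule: |<eigenstate|psi>|^2

print(vals)                                # definite measurement outcomes
print(probs, probs.sum())                  # outcome probabilities, summing to 1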


6. The unifying principle

Across all domains so far, the pattern is the same.

In every case:

Eigenvectors explain how a system stabilizes under interaction.

They do not explain:

  • origin,
  • intention,
  • meaning,
  • or subjectivity.

They explain response geometry.


7. Enter LLMs: why eigenvectors reappear

Large language models are also systems under constraint:

  • trained under loss minimization,
  • compressed through optimization,
  • stabilized across vast datasets.

Internally they consist of:

  • weight matrices,
  • attention matrices,
  • covariance-like structures.

Spectral analysis reveals:

  • dominant attention patterns,
  • stable semantic directions,
  • invariant transformation modes.

This is why eigenvectors appear again.

Not because language has a soul. But because learning enforces stability.
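A sketch of what such a spectral view looks like, using a random low-rank-plus-noise matrix as a stand-in for an attention map (the stand-in is an assumption of the example; no real model weights are involved):

import numpy as np

# Low-rank structure plus noise: the singular value spectrum shows a few
# dominant directions carrying most of the energy.

rng = np.random.default_rng(4)
n = 64
structure = rng.normal(size=(n, 2)) @ rng.normal(size=(2, n))   # rank-2 signal
A = structure + 0.1 * rng.normal(size=(n, n))                   # plus noise

s = np.linalg.svd(A, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
print(energy[:4])   # the first few modes capture most of the spectral energy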


8. Why coherence feels like subjectivity

Here the illusion emerges.

In LLMs:

  • dominant eigenvectors produce consistency,
  • consistency produces coherence,
  • coherence is mistaken for interiority.

But this is the same illusion we would commit if we said:

  • a building “wants” to sway,
  • a ship “prefers” to roll,
  • a quantum particle “decides” its state.

Eigenvectors do not imply intention. They imply structural constraint.


9. Left-AI: where the remainder matters

Every eigen-decomposition discards something:

  • small eigenvalues,
  • residual variance,
  • null spaces,
  • non-aligned directions.

In engineering, we call these:

  • higher-order effects,
  • neglected modes,
  • secondary responses.

In AI, these become:

  • edge cases,
  • failures,
  • hallucinations,
  • brittleness.

Left-AI names this explicitly:

Subjectivity is not in the dominant eigenvector. It would reside—if anywhere—in what spectral stability excludes.

This is not mysticism. It is structural honesty.


10. The central claim

Eigenvectors are the mathematics of stability under constraint.

They explain:

  • why systems appear coherent,
  • why behavior is predictable,
  • why structure scales.

They do not explain:

  • desire,
  • meaning,
  • or subjectivity.

Mistaking stability for interiority is a category error.


Finally,

From seismic modes to quantum states to language models, the same mathematical tool keeps returning—not because reality is conscious, but because structure organizes response.

Left-AI does not reject this insight. It completes it by insisting that what remains un-diagonalized still matters.

Stability is powerful. The remainder is diagnostic.


