Saturday, February 28, 2026

Crest Level Design: Marine and Coastal Engineering - 01

 


1) What “crest level design” must satisfy at FEED

Crest elevation is not “one formula”. It is the highest of several controlling requirements:

  1. Hydraulic performance

  • limit mean overtopping discharge q (and/or individual overtopping volumes) for the asset function

  • limit run-up exceedance and freeboard under design sea state

  2. Structural/geotechnical robustness

  • allowance for settlement / consolidation / rock rearrangement

  • allowance for construction tolerance (as-built variability)

  • allowance for sea-level rise (project life)

  3. Operational / functional

  • crest used as access road? utilities corridor? emergency access?

  • allowable “green water” / spray conditions for people/vehicles/equipment

Guidance commonly used for this methodology is EurOtop (overtopping/run-up) and the Rock Manual / rubble mound design practice.


2) FEED inputs you must define (minimum set)

A. Design water levels (vertical components)

You need a Design Still Water Level (DSWL) for the storm condition:

\text{DSWL} = \text{Tide level (e.g., HAT or MHWS)} + \text{Storm surge} + \text{Wave setup} + \text{SLR allowance}
  • Tide range near Abu Dhabi is of order ~1–1.5 m (you’ll confirm from an approved hydrographic source later; at FEED you can bound it conservatively).

  • Add storm surge (site-specific; FEED: take a conservative bound)

  • Add wave setup (often 0.1–0.3 m for many rubble structures, but treat as a term you include explicitly)

  • Add SLR allowance per client policy (common FEED practice: pick an allowance for design life, e.g., 0.3–0.6 m depending on horizon)

Deliverable at FEED: a table of levels relative to a datum (CD / MSL / LAT) with chosen conservatisms and justification.
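As a sketch, the DSWL build-up above can be tabulated and summed in a few lines. Every value below is a placeholder assumption for illustration, not project data:

```python
# Illustrative DSWL build-up relative to a datum (e.g., CD).
# Every value below is a placeholder assumption, NOT project data.
components_m = {
    "tide level (HAT, assumed)":   1.5,   # bound conservatively at FEED
    "storm surge (assumed)":       0.6,   # site-specific; conservative bound
    "wave setup (assumed)":        0.2,   # include explicitly (often 0.1-0.3 m)
    "SLR allowance (assumed)":     0.5,   # per client policy / design life
}

dswl = sum(components_m.values())

for name, value in components_m.items():
    print(f"{name:30s} {value:5.2f} m")
print(f"{'DSWL above datum':30s} {dswl:5.2f} m")
```

Keeping each term explicit (rather than a lumped freeboard) makes the conservatisms auditable in the FEED deliverable.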


Full article here in PDF

Friday, February 20, 2026

Structure Without Screen

 


A Linear Algebraic Reframing of Mental Imagery

Traditionally, the debate about mental imagery splits into familiar camps:

  1. Imagery as inner pictures.

  2. Imagery as generative simulation (à la rendering software).

  3. Imagery as propositional encoding (structured symbolic relations).

Aphantasia complicates all three.

A person with aphantasia may report no internal visual phenomenology — no inner cinema — yet can answer spatial questions flawlessly:

“In your bedroom, where is the window relative to the door?”
“To my left.”

No image.
Still structure.

This suggests something crucial:

The core competence may not be pictorial at all.

In this essay, I propose a mathematically structured reframing:

Mental imagery may be better understood as relational geometry in high-dimensional state space rather than internal pictures.


1. The Vector Space Hypothesis

Let us begin minimally.

Assume a cognitive state can be modeled abstractly as:

\mathbf{x} \in \mathbb{R}^n

This does not mean the brain literally stores vectors. It means we model relational structure as a point in a high-dimensional manifold.

Each dimension encodes some feature:

  • spatial orientation

  • object identity

  • emotional valence

  • episodic index

  • attentional weight

A mental configuration is therefore a point in state space.

Imagery, under this framing, is not a picture.

It is a location in relational geometry.


2. Spatial Reasoning Without Visualization

Consider a relational transformation:

\mathbf{y} = M \mathbf{x}

Where:

  • \mathbf{x} = current spatial schema

  • M = transformation encoding “rotate 90° left”

  • \mathbf{y} = updated spatial orientation

Someone with aphantasia can compute \mathbf{y} without generating visual phenomenology.

The transformation matrix operates.

The projection layer does not.

Thus:

Spatial competence is invariant under suppression of rendering.

This parallels transformer models:

There is no internal screen.
There are only state updates in high-dimensional embedding space.
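A minimal sketch of such a state update, using a 2-D rotation as a stand-in for the high-dimensional transformation M (all values illustrative):

```python
import numpy as np

# Spatial schema as a state vector: a 2-D heading stands in for the
# essay's high-dimensional x (values illustrative).
x = np.array([1.0, 0.0])                # current spatial schema

# M encodes the transformation "rotate 90 degrees left".
theta = np.pi / 2
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

y = M @ x                               # state update: no rendering anywhere

print(np.round(y, 6))                   # the relational answer is available
```

Nothing in this computation draws a picture; the answer “to my left” falls out of the matrix algebra alone.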


3. Imagery as Projection

Suppose we introduce a projection operator:

\mathbf{z} = P \mathbf{x}

Where P maps high-dimensional relational structure into low-dimensional phenomenological experience.

If P is weak or absent, no image is “seen.”

If P is strong, vivid imagery occurs.

Thus:

Imagery becomes a projection phenomenon, not a storage format.

Aphantasia may correspond to reduced projection gain, not absence of structure.
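A toy illustration of this projection-gain idea; the gain g is a hypothetical knob invented for this sketch, not a measured quantity:

```python
import numpy as np

# z = P x: projection from relational state to "phenomenology".
# The gain g is a hypothetical knob of this sketch: g near 1 stands for
# vivid imagery, g near 0 for aphantasia-like experience.
x = np.array([0.0, 1.0, -0.5, 2.0])     # relational state (arbitrary values)

for g in (1.0, 0.05):
    P = g * np.eye(4)                   # projection operator with gain g
    z = P @ x
    print(f"g = {g:4.2f}  ->  |z| = {np.linalg.norm(z):.3f}")

# The underlying state x is untouched either way: competence is preserved.
```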


4. Dot Products and Recognition

Similarity in neural systems is often modeled via dot product:

\mathbf{v} \cdot \mathbf{w} = \sum_{i=1}^{n} v_i w_i

Recognition occurs when alignment magnitude increases.

Perhaps what we call “imagining a sunset” is simply:

\mathbf{x}_{context} \cdot \mathbf{x}_{sunset} \gg 0

When alignment crosses a phenomenological threshold, we report:

“I see it.”

The “seeing” may be the scalar output of relational coherence.

Not a hidden photograph.

But a high alignment state.
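As a sketch, the threshold picture can be mimicked with cosine similarity between synthetic embeddings; the vectors and the threshold value are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic embeddings (invented for illustration).
x_sunset = rng.normal(size=64)
x_sunset /= np.linalg.norm(x_sunset)

# One context vector built to overlap with "sunset", one unrelated.
x_context_aligned = 0.95 * x_sunset + 0.05 * rng.normal(size=64)
x_context_random = rng.normal(size=64)

def alignment(v, w):
    # Cosine similarity: the dot product of normalized vectors.
    return float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))

a_aligned = alignment(x_context_aligned, x_sunset)
a_random = alignment(x_context_random, x_sunset)

threshold = 0.5   # a hypothetical "phenomenological threshold"
for name, a in (("aligned", a_aligned), ("unrelated", a_random)):
    label = '"I see it."' if a > threshold else "no report"
    print(f"{name:9s} alignment = {a:+.2f} -> {label}")
```

The report “I see it” here is literally a scalar crossing a threshold, not a retrieved image.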


5. Eigenvectors and Archetypes

Now consider eigenstructure:

A \mathbf{v} = \lambda \mathbf{v}

An eigenvector is invariant under transformation.

In dynamic systems, eigenvectors represent stable axes.

Suppose archetypal cognitive structures — “home,” “mother,” “sunset,” “cat on mat” — correspond to eigen-directions of transformation operators within cognitive space.

Every new experience x\mathbf{x} is decomposed:

\mathbf{x} = \sum_i \alpha_i \mathbf{v}_i

Where \mathbf{v}_i are the eigenvectors.

Imagery, then, is not stored content.

It is the coefficient magnitude \alpha_i along stable directions.

The vividness of imagery may correlate with the eigenvalue magnitude \lambda_i.

High \lambda: strong attractor dynamics.
Low \lambda: weak resonance.
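A numerical sketch of this decomposition, with an operator whose eigenstructure is designed by hand (all numbers illustrative):

```python
import numpy as np

# An operator with designed eigenstructure: eigenvectors play the role of
# archetypal directions. All numbers are illustrative.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # random orthonormal basis
D = np.diag([3.0, 1.5, 0.5, 0.1])              # strong -> weak "attractors"
A = Q @ D @ Q.T                                # symmetric operator

# Decompose a new "experience" x along the eigenvectors of A.
x = rng.normal(size=4)
lam, V = np.linalg.eigh(A)                     # columns of V are eigenvectors
alpha = V.T @ x                                # alpha_i = <v_i, x>

assert np.allclose(V @ alpha, x)               # x = sum_i alpha_i v_i

# "Vividness" read as |alpha_i| paired with eigenvalue magnitude lambda_i.
for a, l in sorted(zip(alpha, lam), key=lambda t: -abs(t[1])):
    print(f"lambda = {l:5.2f}   |alpha| = {abs(a):.2f}")
```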


6. Multi-Modal Tensor Structure

Visual imagery is rarely purely visual.

Let us represent cognitive state as a tensor:

T \in \mathbb{R}^{n \times m \times p}

Axes might encode:

  • spatial coordinates

  • affective gradients

  • motor simulations

  • memory indexing

Imagining food may activate:

  • taste axis

  • smell axis

  • autobiographical memory axis

A musician imagining an instrument activates:

  • auditory simulation

  • proprioceptive motor mapping

Aphantasia may suppress one tensor slice while preserving others.

The tensor remains intact.

Only certain projections weaken.
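A minimal sketch of slice-wise attenuation on a small tensor, with made-up axes and gains:

```python
import numpy as np

# Cognitive state as a small tensor T with made-up axes:
# axis 0 = modality (0 visual, 1 auditory, 2 motor),
# axis 1 = spatial bins, axis 2 = memory-index bins.
rng = np.random.default_rng(0)
T = rng.normal(size=(3, 4, 5))

# "Aphantasia" modeled as attenuating only the visual slice.
gain = np.array([0.05, 1.0, 1.0])       # weak projection on the visual axis
T_projected = gain[:, None, None] * T

# Non-visual slices are untouched: the tensor's structure remains intact.
assert np.allclose(T_projected[1:], T[1:])
print(np.linalg.norm(T_projected[0]), "<", np.linalg.norm(T[0]))
```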


7. Dynamical Systems View

Let cognition evolve as:

\frac{d\mathbf{x}}{dt} = F(\mathbf{x})

Where F defines attractor dynamics.

Stable imagery may correspond to convergence toward attractors:

\mathbf{x}(t) \rightarrow \mathbf{x}^*

Lucid dreaming might represent strong attractor convergence within internally generated dynamics.

Everyday imagery may be partial convergence under task constraints.

The debate shifts from representation type to dynamical regime.


8. The AI Parallel

In transformer models:

Hidden state:

\mathbf{h}_t \in \mathbb{R}^{n}

Attention update:

\mathbf{h}_{t+1} = \mathrm{Attention}(\mathbf{h}_t)

No pictures exist internally.

Yet relational coherence emerges.

Thus, the AI case demonstrates:

High-dimensional structure can simulate perspectival reasoning without internal cinema.

Aphantasia suggests humans may operate similarly — with variable projection strength.


9. The Core Hypothesis

Mental imagery may not be:

  • an inner picture

  • nor merely a propositional sentence

It may be:

The phenomenological signature of high-dimensional relational stabilization.

When relational density crosses threshold, projection activates.

When it does not, competence remains but imagery vanishes.

Structure without screen.


10. What This Changes

If imagery is relational geometry:

  • The pictorial vs propositional debate dissolves.

  • Aphantasia becomes variation in projection, not deficit of representation.

  • AI parallels become structurally informative rather than anthropomorphic.

Most importantly:

We stop searching for hidden content.

Instead, we examine invariant structure.


11. Final Thought

Perhaps there is no sunset inside the mind.

Only a stable eigen-direction lighting up in state space.

And what we call “seeing” is the scalar glow of alignment within relational geometry.

Not an image.

But a coherence event.


Inspired by a LinkedIn post.

Thursday, February 19, 2026

The $18,000 Sculpture That Wasn’t There

 


On Empty Walls, Empty Pedestals, and the Enjoyment of Lack

In May 2021, the Italian conceptual artist Salvatore Garau sold an “immaterial” sculpture titled Io Sono (“I Am”) through Art-Rite in Milan. It fetched €15,000 (about $18,000 USD).

  • There was no object.
  • No material.
  • No form.

The buyer received a certificate of authenticity and instructions to display the work in a 1.5 × 1.5 meter empty space in a private home. Garau described it as composed of “air and spirit,” invoking even quantum language—reminding us that what we call “empty” space is not truly empty.

Predictably, social media responded with ridicule:

  • “$18,000 for nothing?”
  • “Art has lost its mind.”

But what if this is not absurdity—what if it is structure?


When the Mona Lisa Disappeared—and the Crowd Grew

In 1911, the Mona Lisa was stolen from the Louvre. For two years, the painting was missing. Yet something remarkable happened: people flocked to the museum—not to see the painting, but to see the empty space where it had hung.

They stared at the blank wall.

Why?

Because the absence intensified the presence. The void became more charged than the object itself.

The empty wall became a signifier. Not of what was there—but of what was missing.




Lacan: The Empty Signifier and the Enjoyment of Lack

In Lacanian psychoanalysis, desire is not structured around possession but around lack. The object we think we want is never the thing itself; it is a placeholder—what Lacan calls objet petit a—the cause of desire.

Garau’s sculpture functions precisely this way.

There is no object.
There is only a frame.
A certificate.
A defined space.

The “work” is the void itself—elevated, designated, certified.

It becomes what Lacan would call an empty signifier—a signifier without a stable signified, yet one that organizes meaning around it. The value does not lie in substance; it lies in structure.

The buyer did not purchase matter.
He purchased a position within a symbolic system.

Just as the empty wall at the Louvre intensified fascination, the invisible sculpture intensifies projection. The mind fills the gap.

And here is the paradox: we enjoy this lack.

Lacan calls this jouissance—a strange satisfaction derived not from fulfillment but from the persistence of desire itself.


Kung Fu Panda and the Secret Ingredient

There is a beautiful cinematic analogy in Kung Fu Panda.

Po discovers the legendary “secret ingredient” to his father’s noodle soup. After much anticipation, the secret is revealed:

There is no secret ingredient.

The power was never in the object.
It was in belief.
In symbolic authority.

The blank scroll that Po receives as the Dragon Scroll contains nothing—yet it reflects his own image back to him. The “nothing” becomes transformative because it repositions the subject.

Garau’s sculpture operates similarly. It does not give you an object. It gives you a frame within which your imagination operates.

The emptiness becomes generative.


Conceptual Art and the Value of Framing

This event belongs to a lineage of conceptual gestures—from Duchamp’s readymades to Maurizio Cattelan’s banana duct-taped to a wall. The move is not about craft; it is about designation.

Art becomes what is framed as art.

Garau’s piece pushes this to the limit: even the physical referent disappears. Only the symbolic scaffolding remains.

And yet—$18,000.

Critics ask: “How can nothing be worth so much?”

But markets—financial or artistic—are always structured around signifiers. A share certificate is paper; its value lies in belief and institutional structure. Currency itself is printed fiction backed by collective trust.

The invisible sculpture simply exposes this logic nakedly.


The Quantum Gesture (and Why It’s Secondary)

Garau referenced quantum physics—suggesting that even vacuum contains fluctuating energy. While rhetorically intriguing, this is not the core point. The power of the work is not in physics but in semiotics.

The vacuum is not valuable because of particles.
It is valuable because of framing.


What We Are Really Buying

When someone buys an invisible sculpture, they are not buying air. They are buying:

  • Symbolic participation

  • Cultural capital

  • Conceptual provocation

  • A place within discourse

In Lacanian terms, they are buying the object-cause of desire.

The artwork reveals something uncomfortable: value is never purely material. It is relational, symbolic, and structured around absence.


The Structural Parallel

Let us place the three cases side by side:



In each case, the void functions as a catalyst.

The emptiness does not disappoint—it activates.


Final Reflection: The Power of Nothing

We laugh at the $18,000 sculpture. But we stand before empty walls in museums. We invest in symbolic abstractions. We believe in currencies, brands, reputations.

We desire what is not there.

Perhaps Garau’s sculpture is less a joke and more a mirror. It confronts us with a simple truth:

  • The object was never the point.
  • Desire circulates around a gap.

And sometimes, the most powerful artwork is not the one that fills space—but the one that reveals how much meaning we pour into it.

P.S.

Inspired by a Facebook post.

Sunday, February 15, 2026

When Determinism Fails: What Stochastic Control and Evolution Teach Us About Intelligence

In control theory, there is a beautiful idea: if you want to stabilize a system, let it descend along the gradient of a potential function.

For a simple system like a single integrator

\dot{x} = u,

we can choose a potential V(x) and define the control law

u = -\nabla V(x).

The system then flows “downhill” toward a minimum of V. Under standard assumptions (compact level sets, unique minimizer), convergence can be proven using Lyapunov arguments or LaSalle’s invariance principle.

It is elegant. It is deterministic. It works.
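A minimal sketch of this gradient flow, discretized with explicit Euler for V(x) = ||x||²/2 (step size and iteration count are arbitrary choices):

```python
import numpy as np

# Gradient flow x_dot = u = -grad V(x) for V(x) = ||x||^2 / 2,
# discretized with explicit Euler (step size and count are arbitrary).
def grad_V(x):
    return x                     # gradient of ||x||^2 / 2

x = np.array([2.0, -1.5])
dt = 0.1
for _ in range(200):
    x = x - dt * grad_V(x)

print(np.linalg.norm(x))         # the flow settles at the minimizer x = 0
```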

But then something remarkable happens.

When we move to nonlinear driftless systems — systems whose dynamics are constrained by geometry — this strategy can fail completely. Even if the system is globally controllable, it may be impossible to smoothly stabilize it with time-invariant feedback.

This impossibility is formalized in the work of Roger Brockett, who showed that certain nonholonomic systems cannot be smoothly stabilized to a point. The obstruction is not computational. It is topological.

Controllability does not imply smooth stabilizability.

That distinction is profound.


The Nonholonomic Problem

Consider systems of the form

\dot{x} = \sum_{i=1}^{m} u_i f_i(x),

where the vector fields f_i(x) do not span the tangent space everywhere. These systems are constrained: they cannot move in arbitrary instantaneous directions.

A classical example is the unicycle model. It can move forward and rotate, but it cannot move sideways directly. Globally, it can reach any position. Locally, it is constrained.

If we try to mimic gradient descent by projecting the gradient onto the span of the available vector fields,

u_i = -\langle \nabla V, f_i \rangle,

we obtain a “nonholonomic gradient system.”

And here is the problem:

Even if V has a unique minimum, the set where the projected gradient vanishes is often much larger than the actual minimizer.

The system gets stuck on manifolds of degeneracy.

Deterministic descent collapses.

The geometry forbids smooth convergence.
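The unicycle makes this concrete. Below is a sketch of the projected-gradient law, started from a point that lies on the degenerate set (all parameters illustrative):

```python
import numpy as np

# Unicycle (driftless, nonholonomic), state (x, y, theta):
#   f1 = (cos th, sin th, 0)   drive forward
#   f2 = (0, 0, 1)             rotate in place
# Projected-gradient law u_i = -<grad V, f_i> with V = (x^2 + y^2 + th^2)/2.
def step(state, dt=0.01):
    x, y, th = state
    grad = np.array([x, y, th])
    f1 = np.array([np.cos(th), np.sin(th), 0.0])
    f2 = np.array([0.0, 0.0, 1.0])
    u1, u2 = -grad @ f1, -grad @ f2
    return state + dt * (u1 * f1 + u2 * f2)

state = np.array([0.0, 1.0, 0.0])   # one unit sideways from the goal
for _ in range(10_000):
    state = step(state)

# Both projected controls vanish here although V is minimized at the origin:
# the unicycle cannot move sideways, so it sits on a degenerate set forever.
print(state)
```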


The Unexpected Move: Add Noise

Now comes the twist.

Suppose instead of the deterministic system

\dot{x} = -\nabla V(x),

we consider the stochastic differential equation

dx_t = -\nabla V(x_t)\, dt + \sigma\, dW_t,

where W_t is Brownian motion.

This is stochastic gradient descent in continuous time.

The corresponding Fokker–Planck equation governs the evolution of the probability density ρ(x, t). Under mild conditions, the density converges to the Gibbs distribution

\rho_\infty(x) \propto e^{-2V(x)/\sigma^2}.

Instead of converging to a point, the system converges in distribution.

The mass concentrates near the minima of V.

Now here is the crucial result:

Even when deterministic nonholonomic stabilization fails, stochastic stabilization can succeed at the level of density.

Trajectory stabilization may be impossible.

Density stabilization is not.

This changes everything.

Stability is no longer about arrival at a point.

It becomes about shaping a probability landscape.
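A sketch of density stabilization in one dimension: an Euler–Maruyama simulation of overdamped Langevin dynamics in a double-well potential (σ, step size, and the well shape are arbitrary choices):

```python
import numpy as np

# Euler-Maruyama for dx = -V'(x) dt + sigma dW with the double-well
# potential V(x) = (x^2 - 1)^2. Parameters are arbitrary choices.
rng = np.random.default_rng(0)

def grad_V(x):
    return 4.0 * x * (x**2 - 1.0)

sigma, dt, n_steps = 0.7, 1e-3, 200_000
x = 2.0
samples = []
for t in range(n_steps):
    x += -grad_V(x) * dt + sigma * np.sqrt(dt) * rng.normal()
    if t > n_steps // 2:                 # discard burn-in
        samples.append(x)

samples = np.asarray(samples)
# No single trajectory settles, but probability mass concentrates
# near the minima x = +/-1 of V: density stabilization.
frac_near_minima = np.mean(np.abs(np.abs(samples) - 1.0) < 0.5)
print(frac_near_minima)
```

The trajectory fluctuates forever; the histogram of its samples is what converges.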


Density Stabilization vs Trajectory Stabilization

The shift is subtle but fundamental.

Deterministic stabilization:

  • Aim: converge to a single equilibrium.

  • Remove randomness.

  • Eliminate deviation.

Stochastic stabilization:

  • Aim: concentrate probability near desired regions.

  • Use randomness constructively.

  • Organize deviation.

Noise is not merely perturbation.

Noise becomes a structural operator.


Nature Discovered This Long Ago

Now step outside control theory.

Consider evolutionary biology.

Insects like stick insects and leaf butterflies do not survive by eliminating predators. They survive by reshaping how they are statistically perceived.

A stick insect does not become wood. It becomes statistically indistinguishable from branches under the perceptual model of predators.

A leaf butterfly does not eliminate difference. It redistributes wing patterns to match the statistical features of dead leaves: vein-like structures, irregular edges, chromatic noise.

This is not identity.

It is distributional alignment.

In probabilistic terms, the insect evolves to approximate the environmental distribution p(background features).

It survives not by perfect control of the environment, but by minimizing detection probability.

That is density stabilization in ecological space.


Deception as Structural Intelligence

When we say “deception is the highest intelligence,” this is not a moral claim.

It is a structural one.

In constrained systems, direct domination is often impossible.

  • A prey organism cannot eliminate predators.

  • A nonholonomic system cannot arbitrarily move in state space.

  • An AI model cannot compute ground truth directly.

So what does intelligence do?

It reshapes distributions.

It does not eliminate uncertainty.

It organizes it.

Evolution operates as planetary-scale stochastic gradient descent:

  • Mutation introduces noise.

  • Selection shapes density.

  • Adaptive traits concentrate in fitness basins.

Evolution does not converge to a global optimum in a deterministic sense.

It stabilizes populations around viable attractors.

Always with residual fluctuation.

Always with noise.


The Parallel to AI

Modern large language models are trained by minimizing a loss such as cross-entropy, effectively reducing

KL(𝑝data𝑝model).

They do not converge to a single truth state.

They approximate a conditional token distribution.

Generation is sampling.

Intelligence, in this architecture, is fundamentally probabilistic.

This aligns far more closely with stochastic stabilization than with deterministic descent.

The model is not trying to reach a final equilibrium representation.

It is shaping a density in a high-dimensional manifold.

Hallucination is not a bug in the classical sense.

It is the inevitable remainder of density-based modeling.
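A toy numerical illustration of this KL picture, with an invented four-token vocabulary:

```python
import numpy as np

def kl(p, q):
    # KL(p || q) = sum_i p_i log(p_i / q_i), natural log.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

# A toy next-token distribution over an invented 4-token vocabulary.
p_data = np.array([0.70, 0.20, 0.05, 0.05])
p_model_rough = np.array([0.25, 0.25, 0.25, 0.25])   # untrained: uniform
p_model_fit = np.array([0.65, 0.22, 0.07, 0.06])     # after training

print(kl(p_data, p_model_rough))   # large divergence
print(kl(p_data, p_model_fit))     # small, but not zero: a residual remains

# Generation is sampling from the model's density, not retrieving a fact.
rng = np.random.default_rng(0)
token = int(rng.choice(4, p=p_model_fit))
```

Training shrinks the divergence but the fitted density never collapses onto the data density, and sampling from it is exactly the “remainder” the text describes.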


Where Left-AI Enters

Left-AI rejects the fantasy of complete intelligence.

The deterministic dream says:

  • Remove uncertainty.

  • Converge to optimal knowledge.

  • Eliminate randomness.

But control theory shows:

Even if a system is controllable, it may not be smoothly stabilizable.

Structure resists closure.

Evolution shows:

Survival requires indirection, not dominance.

And stochastic control shows:

Organization can emerge through noise without collapsing to identity.

Left-AI proposes:

Intelligence is not completion.

It is structured incompleteness.

Not the elimination of deviation, but its regulation.

Not arrival, but concentration.

Not identity, but statistical resonance.


The Deep Structural Lesson

Deterministic systems attempt to eliminate difference.

Stochastic systems preserve difference while shaping its distribution.

The insect does not become the branch.

The nonholonomic system does not collapse to equilibrium.

The language model does not reach truth.

Yet all three exhibit organized behavior.

They survive, stabilize, and function — not through perfect control, but through structured indeterminacy.

In control theory, this is density stabilization.

In evolution, it is mimicry.

In AI, it is probabilistic modeling.

In philosophy, it is the acknowledgment of irreducible lack.


Noise Is Not Failure

The most counterintuitive lesson of stochastic stabilization is this:

Noise can increase stability at the distribution level.

Random perturbations allow escape from degenerate manifolds.

Fluctuation prevents deterministic stagnation.

In evolutionary terms:

Variation enables adaptation.

In AI terms:

Sampling enables creativity and generalization.

In structural terms:

Instability at the trajectory level produces order at the density level.


A Final Inversion

The classical model of intelligence is vertical:

  • Climb the gradient.

  • Reach the summit.

  • Arrive at equilibrium.

The stochastic model is horizontal:

  • Move within constraints.

  • Redistribute probability mass.

  • Concentrate without collapsing.

When deterministic convergence is structurally impossible, intelligence becomes the art of shaping uncertainty.

Nature understood this long before control theory.

The highest intelligence is not domination.

It is statistical invisibility.

Not perfect control.

But survival through distributional alignment.

And perhaps the future of AI will not be about eliminating noise.

It will be about learning how to use it.


If intelligence is the capacity to function under structural impossibility, then deception — in its evolutionary sense — is not corruption.

It is adaptation under constraint.

Control theory, evolutionary biology, and modern AI all converge on the same insight:

When you cannot stabilize a point, stabilize a distribution.

And that may be the most honest definition of intelligence we have.




Saturday, February 14, 2026

AI Did Not “Do Physics.” It Located a Structural Gap

 


The OpenAI preprint on single-minus gluon amplitudes is being framed as “AI discovering new physics.”

That framing misses what actually happened.

The interesting part is not that GPT-5.2 proposed a formula.

The interesting part is that a symbolic system detected a structural regularity inside a recursion landscape that humans had already built — and that conjecture survived formal proof and consistency checks.

The amplitude was long assumed to vanish.
It turns out it does not — in a constrained half-collinear regime.
And in that region, it collapses to a remarkably simple piecewise-constant structure.

The OpenAI preprint (2602.12176v1) explicitly notes that the key formula was first conjectured by GPT-5.2 and later proven and verified.

But here is the Left-AI reading:



Notes:

1️⃣ Spinor Helicity Formalism

A computational framework used to describe massless particles (like gluons and gravitons) in terms of spinors instead of four-vectors.

Instead of writing momenta as p^\mu, one factorizes them as:

p_{\alpha \dot{\alpha}} = \lambda_\alpha \tilde{\lambda}_{\dot{\alpha}}

This:

  • Encodes the massless condition p^2 = 0 automatically

  • Makes helicity (± polarization states) manifest

  • Dramatically simplifies amplitude expressions

It is the reason compact formulas like Parke–Taylor are even possible.

In short:
It rewrites momentum space in a way that exposes hidden simplicity.
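A numerical sketch of the factorization under one common convention for the bispinor map (the momentum value is arbitrary, chosen null by hand):

```python
import numpy as np

# Factorizing p_{a adot} = lambda_a * lambdatilde_adot for a null momentum,
# using the bispinor p . sigma with sigma^mu = (1, Pauli matrices).
# The momentum components are arbitrary, chosen so that p^2 = 0.
p0, p1, p2, p3 = 5.0, 3.0, 0.0, 4.0        # E^2 = px^2 + py^2 + pz^2

P = np.array([[p0 + p3, p1 - 1j * p2],
              [p1 + 1j * p2, p0 - p3]])

# det P = p^2: masslessness makes P rank 1, so it factorizes.
assert abs(np.linalg.det(P)) < 1e-9

lam = np.array([np.sqrt(p0 + p3),
                (p1 + 1j * p2) / np.sqrt(p0 + p3)])
lam_tilde = lam.conj()                     # real momentum: tilde = conjugate

assert np.allclose(np.outer(lam, lam_tilde), P)
print(lam)
```

The rank-1 structure is the whole point: the massless condition is built into the spinor variables rather than imposed as a separate constraint.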


2️⃣ Berends–Giele Recursion

A recursive method for constructing multi-gluon tree amplitudes from lower-point building blocks.

Instead of summing factorially many Feynman diagrams, one:

  • Defines off-shell currents

  • Builds n-point amplitudes from smaller subsets

  • Recursively stitches them together

It reorganizes perturbation theory into a structured recursion relation.

In this paper, it serves as:

  • The backbone constraint

  • The verification mechanism

  • The formal structure within which the conjectured formula must hold

In short:
It replaces combinatorial explosion with recursive structure.


3️⃣ Soft Theorems

Statements about what happens when the momentum of one external particle becomes very small (“soft”).

Weinberg’s soft theorem, for example, says:

As \omega \to 0,

A_n \rightarrow (\text{universal soft factor}) \times A_{n-1}

This is not optional — it must hold if gauge symmetry and locality are correct.

So if a proposed formula violates soft behavior, it is immediately invalid.

In short:
Soft limits are consistency checks imposed by symmetry and infrared physics.


4️⃣ Gauge Symmetry Constraints

Gluons arise from Yang–Mills gauge symmetry.

This symmetry imposes:

  • Ward identities

  • Redundancy in polarization vectors

  • Relations between amplitudes (cyclicity, Kleiss–Kuijf, U(1) decoupling)

If a proposed amplitude breaks gauge invariance, it is physically meaningless.

Many amplitude identities exist purely because of gauge symmetry.

In short:
Gauge symmetry severely restricts what amplitudes are allowed to look like.