A Linear Algebraic Reframing of Mental Imagery
Traditionally, the debate about mental imagery splits into familiar camps:
- Imagery as inner pictures.
- Imagery as generative simulation (à la rendering software).
- Imagery as propositional encoding (structured symbolic relations).
Aphantasia complicates all three.
A person with aphantasia may report no internal visual phenomenology — no inner cinema — yet can answer spatial questions flawlessly:
“In your bedroom, where is the window relative to the door?”
“To my left.”
No image.
Still structure.
This suggests something crucial:
The core competence may not be pictorial at all.
In this essay, I propose a mathematically structured reframing:
Mental imagery may be better understood as relational geometry in high-dimensional state space rather than internal pictures.
1. The Vector Space Hypothesis
Let us begin minimally.
Assume a cognitive state can be modeled abstractly as:

s ∈ ℝⁿ
This does not mean the brain literally stores vectors. It means we model relational structure as a point in a high-dimensional manifold.
Each dimension encodes some feature:
- spatial orientation
- object identity
- emotional valence
- episodic index
- attentional weight
A mental configuration is therefore a point in state space.
Imagery, under this framing, is not a picture.
It is a location in relational geometry.
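As a toy sketch of this framing (the axis names and numeric values below are invented for illustration, not empirical claims about neural coding), a cognitive state can be written as a NumPy vector, and similarity between two configurations becomes ordinary distance in the space:

```python
import numpy as np

# Hypothetical feature axes; purely illustrative.
axes = ["spatial_orientation", "object_identity", "emotional_valence",
        "episodic_index", "attentional_weight"]

state = np.array([0.8, 0.3, -0.2, 0.5, 0.9])   # one mental configuration
other = np.array([0.7, 0.3, -0.1, 0.5, 0.8])   # a nearby configuration

# "Imagery" under this framing is location, so relatedness between two
# configurations is just distance between points.
distance = float(np.linalg.norm(state - other))
```

Nothing in this sketch is picture-like; the state is only a point, and every question about it is a question about geometry.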
2. Spatial Reasoning Without Visualization
Consider a relational transformation:

s' = T s

Where:

- s = current spatial schema
- T = transformation encoding "rotate 90° left"
- s' = updated spatial orientation

Someone with aphantasia can compute s' without generating visual phenomenology.
The transformation matrix operates.
The projection layer does not.
Thus:
Spatial competence is invariant under suppression of rendering.
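A minimal sketch of this point, assuming a 2-D spatial schema: a rotation matrix updates the relational structure, and no rendering step appears anywhere in the computation.

```python
import numpy as np

# Spatial schema: the window lies along the viewer's right (x-axis).
window = np.array([1.0, 0.0])

# T: "rotate 90° left" (counter-clockwise) as a standard rotation matrix.
theta = np.pi / 2
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rotated = T @ window   # the state update; no image is produced
```

The answer to "where is the window now?" is read off the updated vector directly, which is exactly the kind of competence that survives when rendering is absent.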
This parallels transformer models:
There is no internal screen.
There are only state updates in high-dimensional embedding space.
3. Imagery as Projection
Suppose we introduce a projection operator:

P : ℝⁿ → ℝᵏ, with k ≪ n

Where P maps high-dimensional relational structure into low-dimensional phenomenological experience.
If P is weak or absent, no image is "seen."
If P is strong, vivid imagery occurs.
Thus:
Imagery becomes a projection phenomenon, not a storage format.
Aphantasia may correspond to reduced projection gain, not absence of structure.
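One way to sketch the projection-gain idea (the gain parameter and the random operator here are assumptions of this toy model, not claims about cortex): the same high-dimensional state passes through the projection, and only the output gain differs.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 128, 3                              # relational vs. phenomenal dimensions
s = rng.normal(size=n)                     # high-dimensional relational state
P = rng.normal(size=(k, n)) / np.sqrt(n)   # toy projection operator

def experience(state, gain):
    """Phenomenal readout: identical structure, scaled projection."""
    return gain * (P @ state)

vivid = experience(s, gain=1.0)        # strong projection: imagery
aphantasic = experience(s, gain=0.0)   # zero gain: no image, same s
```

In both calls the relational state s is untouched; only the low-dimensional readout changes, which is the sense in which structure can persist while the image vanishes.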
4. Dot Products and Recognition
Similarity in neural systems is often modeled via dot product:

sim(a, b) = a · b
Recognition occurs when alignment magnitude increases.
Perhaps what we call "imagining a sunset" is simply:

s · v_sunset > θ

When alignment crosses a phenomenological threshold θ, we report:
“I see it.”
The “seeing” may be the scalar output of relational coherence.
Not a hidden photograph.
But a high alignment state.
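A sketch of this threshold picture, with made-up vectors and an arbitrary threshold value:

```python
import numpy as np

def alignment(state, concept):
    # Cosine similarity: a scalar measure of coherence between vectors.
    return float(np.dot(state, concept) /
                 (np.linalg.norm(state) * np.linalg.norm(concept)))

sunset = np.array([0.9, 0.8, 0.1, 0.0])       # hypothetical "sunset" direction
state_near = np.array([0.8, 0.7, 0.2, 0.1])   # state aligned with it
state_far = np.array([-0.1, 0.0, 0.9, 0.8])   # state orthogonal to it

THRESHOLD = 0.7  # arbitrary phenomenological threshold

def reports_seeing(state, concept, threshold=THRESHOLD):
    return alignment(state, concept) > threshold
```

The "report" is literally a comparison on a scalar: no stored picture is consulted, only the degree of alignment.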
5. Eigenvectors and Archetypes
Now consider eigenstructure:

T v = λ v
An eigenvector is invariant under transformation.
In dynamic systems, eigenvectors represent stable axes.
Suppose archetypal cognitive structures — “home,” “mother,” “sunset,” “cat on mat” — correspond to eigen-directions of transformation operators within cognitive space.
Every new experience is decomposed:

s = Σᵢ cᵢ vᵢ

Where the vᵢ are eigenvectors.
Imagery, then, is not stored content.
It is coefficient magnitude along stable directions.
The vividness of imagery may correlate with eigenvalue magnitude |λ|.
High |λ|: strong attractor dynamics.
Low |λ|: weak resonance.
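This can be made concrete in a few lines (the operator below is an arbitrary symmetric matrix, chosen only so that its eigenvalues are real and its eigenvectors orthonormal; nothing about it is empirical):

```python
import numpy as np

# A toy symmetric operator on a 3-D cognitive space.
T = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 0.5]])

eigenvalues, eigenvectors = np.linalg.eigh(T)   # columns are eigen-directions

# Decompose an "experience" s into coefficients along eigen-directions.
s = np.array([1.0, -0.5, 0.25])
c = eigenvectors.T @ s            # c_i = <v_i, s>

reconstructed = eigenvectors @ c  # s = sum_i c_i v_i

# "Vividness" on this story: coefficient magnitude scaled by |lambda_i|.
vividness = np.abs(c) * np.abs(eigenvalues)
```

The reconstruction recovers s exactly, which is the sense in which nothing is "stored" beyond coefficients along stable directions.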
6. Multi-Modal Tensor Structure
Visual imagery is rarely purely visual.
Let us represent cognitive state as a tensor:

𝒯 ∈ ℝ^(d₁ × d₂ × ⋯ × d_k)
Axes might encode:
- spatial coordinates
- affective gradients
- motor simulations
- memory indexing
Imagining food may activate:
- taste axis
- smell axis
- autobiographical memory axis
A musician imagining an instrument activates:
- auditory simulation
- proprioceptive motor mapping
Aphantasia may suppress one tensor slice while preserving others.
The tensor remains intact.
Only certain projections weaken.
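A sketch of the slice idea, with hypothetical mode names: zeroing the "visual" slice leaves every other slice of the tensor untouched.

```python
import numpy as np

rng = np.random.default_rng(1)

# Modes: (modality, spatial coordinate, memory index) -- all hypothetical.
modalities = ["visual", "auditory", "motor", "taste"]
state_tensor = rng.normal(size=(len(modalities), 4, 5))

suppressed = state_tensor.copy()
suppressed[modalities.index("visual")] = 0.0   # aphantasia as a zeroed slice

# Every non-visual slice of the tensor is preserved exactly.
```

Food imagery, on this picture, is activation spread across the taste, smell, and memory slices; suppressing one slice degrades one projection, not the tensor.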
7. Dynamical Systems View
Let cognition evolve as:

ds/dt = F(s)

Where F defines attractor dynamics.
Stable imagery may correspond to convergence toward attractors:

s(t) → s* as t → ∞
Lucid dreaming might represent strong attractor convergence within internally generated dynamics.
Everyday imagery may be partial convergence under task constraints.
The debate shifts from representation type to dynamical regime.
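A discrete-time sketch of the regime idea (the linear contraction below stands in for F, and the fixed point s* is chosen by hand):

```python
import numpy as np

s_star = np.array([1.0, -1.0, 0.5])   # hand-picked attractor state

def F(s, rate=0.3):
    # One update step of a contraction toward s_star.
    return s - rate * (s - s_star)

s = np.array([5.0, 5.0, 5.0])         # arbitrary initial state
for _ in range(100):
    s = F(s)

# After many updates, s sits on the attractor to numerical precision.
```

"Stable imagery" here is just the trajectory having converged; a weaker rate or fewer steps would give the partial convergence of everyday, task-constrained imagining.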
8. The AI Parallel
In transformer models:
Hidden state: hₜ ∈ ℝᵈ
Attention update: h' = softmax(Q Kᵀ / √d_k) V
No pictures exist internally.
Yet relational coherence emerges.
Thus, the AI case demonstrates:
High-dimensional structure can simulate perspectival reasoning without internal cinema.
Aphantasia suggests humans may operate similarly — with variable projection strength.
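The attention update above can be sketched in a few lines (random Q, K, V; a single head with no learned weights, purely to show that the computation is a state update and nothing more):

```python
import numpy as np

rng = np.random.default_rng(2)
seq_len, d_k = 4, 8

Q = rng.normal(size=(seq_len, d_k))   # queries
K = rng.normal(size=(seq_len, d_k))   # keys
V = rng.normal(size=(seq_len, d_k))   # values

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# h' = softmax(Q K^T / sqrt(d_k)) V
weights = softmax(Q @ K.T / np.sqrt(d_k))
h_new = weights @ V
```

Each row of `weights` is a probability distribution over positions; `h_new` is a reweighted mixture of value vectors. Nowhere in this pipeline does anything picture-like appear.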
9. The Core Hypothesis
Mental imagery may not be:
- an inner picture
- nor merely a propositional sentence
It may be:
The phenomenological signature of high-dimensional relational stabilization.
When relational density crosses threshold, projection activates.
When it does not, competence remains but imagery vanishes.
Structure without screen.
10. What This Changes
If imagery is relational geometry:
- The pictorial vs. propositional debate dissolves.
- Aphantasia becomes variation in projection, not a deficit of representation.
- AI parallels become structurally informative rather than anthropomorphic.
Most importantly:
We stop searching for hidden content.
Instead, we examine invariant structure.
11. Final Thought
Perhaps there is no sunset inside the mind.
Only a stable eigen-direction lighting up in state space.
And what we call “seeing” is the scalar glow of alignment within relational geometry.
Not an image.
But a coherence event.
Inspired by a LinkedIn post.




