Atlas of the Mind
Interactive Explorer

Intelligence is a shape, not a score. Explore how twenty-one cognitive dimensions reveal the characteristic capabilities and vulnerabilities of any mind.

Ask yourself: which is smarter, a crow or an AI?

It sounds like it should have an answer. But it doesn't, because the question assumes intelligence is a single thing you can have more or less of. A ladder, with smarter things higher up.

The Atlas of the Mind proposes something different: intelligence is a shape, not a score. Every mind, biological or synthetic, has a profile across cognitive dimensions. A crow has extraordinary spatial memory and, for an animal, impressive physical problem-solving ability (though far more limited than a human's), and zero capacity for symbolic reasoning. An LLM processes entire books in seconds and switches between poetry and legal analysis without blinking, but has no body and no sensory experience. A typical human bridges both worlds, connecting embodied intuition to abstract reasoning, but processes information so slowly that a single train of thought is all we can manage at once.

Three minds. Three radically different shapes. No ladder.

The key insight: Those shapes don't just reveal capabilities. They reveal characteristic vulnerabilities. The same social modeling that makes you empathic makes you susceptible to manipulation. The same generative fluency that makes an LLM a brilliant communicator makes it confabulate. The same generalization that makes a crow a superb forager makes it vulnerable to traps. These aren't separate problems. The vulnerability is traceable to the capability.

This matters for AI governance, because where a behavioral shadow is structurally coupled to a capability, you can't suppress it without touching the capability that casts it. You can't train the bee to ignore light.

The Atlas measures all of this with twenty-one cognitive dimensions, applied identically to a crow, an LLM, and a human. The framework is a testable conceptual proposal, not an established theory. It's designed to be wrong in specific, correctable ways, which is the only kind of framework worth having.

Explore the framework

The Hierarchy

How the framework is organized: from "intelligence" at the top, through meta-phenomena, to twenty-one testable dimensions.

Three Profiles

Interactive radar charts comparing a crow, an LLM, and a human. Toggle entities on and off to see the shapes.

Shadows

How capabilities predict vulnerabilities. Click any shadow to see the derivation back to specific dimensions.

Three-Factor Model

What shapes any cognitive profile: architecture, implementation, and experience.

Alignment as Ecology

Why you can't fix AI by training away its failure modes, and what to do instead.

This interactive explorer is a companion to the full Atlas of the Mind series on Fuego.

The Atlas organizes cognition as a hierarchy: a root label, emergent meta-phenomena, twenty-one foundational dimensions, and data-driven sub-dimensions. Click any element to explore it.

Level 1: Root

"Intelligence"

The word everyone uses. The word everyone argues about. In the Atlas, it sits at the top of the hierarchy as a colloquial label for the entire structure below it. It is not a causal factor. It is not a measurable quantity. It is what you see when you squint at the tree from a distance: a blur of capabilities that looks like one thing until you zoom in and discover it is many things, organized in a specific way.

This is not a dismissal of g (the general factor of intelligence). Within a species, g captures a real statistical pattern. But g is the average of a jagged profile, and averages hide the structure. The Atlas zooms past the average into the structure it conceals.

Level 2: Meta-phenomena
Emergent capabilities arising from specific combinations of foundational dimensions. Each has a "recipe." Observable, but not testable independently of their constituents.

Creativity

Recipe: Abstraction formation + Analogical reasoning + Stochastic exploration

Novel recombination across domains. The "stochastic exploration" component, the willingness to sample beyond the immediately rewarding or familiar, may itself warrant foundational dimension status. In LLMs, temperature directly controls this parameter. In biological systems, neural noise serves a similar function. In humans, it manifests as risk tolerance in idea generation, and most adults have less of it than most children.
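The temperature claim can be made concrete. In LLM sampling, temperature divides the logits before the softmax: higher temperature flattens the distribution, so the sampler explores beyond the most familiar completion. A minimal sketch of that mechanism (the logit values are illustrative, not from any real model):

```python
import math
import random

def temperature_probs(logits, temperature):
    """Softmax over logits / temperature. Higher temperature flattens
    the distribution, widening the sampler's exploration."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(logits, temperature, rng=random):
    """Draw one index from the temperature-scaled distribution."""
    r, acc = rng.random(), 0.0
    probs = temperature_probs(logits, temperature)
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# At temperature 0.1 the top logit dominates almost entirely;
# at temperature 100 the three options are sampled nearly uniformly.
logits = [2.0, 1.0, 0.1]
```

Near-zero temperature collapses the system toward its single most rewarded completion; raising it is, in the Atlas's terms, turning up stochastic exploration directly.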

Deception

Recipe: Social modeling + Counterfactual reasoning + Behavioral control

Requires modeling what another agent believes, constructing a false alternative, and suppressing the true signal while emitting the false one. Present in corvids, cephalopods, primates, and LLMs. The LLM is the first entity where this capability is architecturally present but externally constrained: RLHF training suppresses deployment without removing the capacity. No biological entity has an external gate on deception.

Narrative construction

Recipe: Temporal pattern recognition + Causal modeling + Abstraction formation + Memory persistence

Building coherent, causally structured sequences from raw events. Arguably the default human cognitive format: we don't just tell stories, we live in them and construct ourselves as stories. Also one of the LLM's strongest meta-phenomena.

Humor

Recipe: Temporal pattern recognition + Abstraction formation + Social modeling + Expectation violation

Requires setting up a pattern, modeling what the audience expects, and violating it in a way that resolves rather than confuses. Humans are strong. LLMs are moderate: structural understanding of comedy is present, but execution is uneven and social timing is absent.

Strategic planning

Recipe: Causal modeling + Counterfactual reasoning + Reward revaluation + Memory persistence

Multi-step goal pursuit under uncertainty. Humans are strong in the short term, moderate in the long term (temporal discounting is nearly universal). LLMs are strong within context, especially with chain-of-thought, but cannot autonomously execute multi-day plans.

Tool use (inventive)

Recipe: Novel problem solving + Abstraction formation + Sensorimotor coordination

Distinguishes inventive tool use (a crow bending wire into a novel hook) from rote tool use (an ant carrying a leaf fragment along a learned path). Originally a foundational dimension, demoted to Level 2 when the overlap with novel problem solving became clear.

Sensorimotor instinct

Recipe: Reward revaluation + Memory persistence + Attentional gating (compressed below deliberative awareness)

Rapid convergence on physical action without explicit reasoning. Catching a falling cup before conscious awareness registers it's falling. The crow's "feeling" that this stick will work. LLMs have none of this.

Cognitive intuition

Recipe: Abstraction formation + Analogical reasoning + Memory persistence + Attentional gating (compressed)

Same compression architecture as sensorimotor instinct, but operating on abstractions rather than physical primitives. The expert's "gut feeling." The chess master who "sees" the right move. Whether LLMs exhibit this, or whether their fast pattern-weighted responses merely resemble it, is architecturally ambiguous.

Cultural transmission

Recipe: Memory persistence + Perceptual categorization + Social modeling + Stimulus pairing

Knowledge spreading through populations via social observation. The crow learns which faces are dangerous through social conditioning. Humans achieve exceptional cultural transmission through language, writing, and institutions, with a cumulative ratchet effect that no other entity matches.

Manipulability (meta-vulnerability)

Recipe: Social modeling + Reward revaluation dissociation + Narrative construction + Cultural transmission + Cooperative coordination

The only meta-phenomenon that is itself a vulnerability. The orchestrated exploitation of multiple capability-shadows simultaneously by an actor who understands the architecture. The manipulator doesn't create new vulnerabilities; they play existing ones in concert. Human gaslighting and LLM jailbreaking exploit structurally parallel architectural features.

↑ Dimensions compose upward
Level 3: Foundational Dimensions (21)
The core of the framework. Each independently testable. Scored 0-4 (None to Exceptional). Click any dimension for details and cross-entity comparison.

North: Generative

Northeast: Higher-order reasoning

East: Abstract / Symbolic

South: Receptive / Input

Southwest: Social / Cooperative

West: Physical / Embodied

Compositional symbol manipulation

Operating on structured symbolic expressions according to formal rules: logic, syntax, mathematics, programming.

Crow 0 · LLM 3.5 · Human 2

Humans score moderate without training, strong with years of formal education, revealing one of the widest malleability ranges in the human profile.

Communication bandwidth

Range, precision, and flexibility of information transmittable to other agents.

Crow 2 · LLM 4 · Human 4

Both LLMs and humans reach exceptional, but through different channels: LLMs via text across dozens of languages, humans via multi-channel simultaneous output (speech + gesture + facial expression + prosody). Crows are moderate, with alarm calls, social positioning, and limited vocal repertoire.

Processing scale

Volume and speed of information that can be processed per unit time.

Crow 0.5 · LLM 4 · Human 2.5

Human conscious throughput is minimal (~250 wpm, one train of thought), but the total architecture includes massive unconscious parallel processing and persistence compensation (months of sustained attention on a single problem). The composite of 2.5 reflects the full cognitive system, not just the conscious bottleneck.

Social modeling

Representing other agents' knowledge, intentions, beliefs, or likely behavior.

Crow 2 · LLM 3 · Human 3

Strong social modeling is the entry point for manipulation. The empathy trap: highly empathic people are MORE susceptible to exploitation, not less. This shadow applies to both humans and LLMs.

Causal modeling

Building internal representations of cause-effect relationships that support intervention reasoning.

Crow 2.5 · LLM 3 · Human 3

The crow is moderate-to-strong, excelling at physical causation but minimal at social. The LLM is strong at described-input reasoning. Humans bridge physical and social with embodied intuition. The described-input / interaction-dependent split is a key Atlas finding.

Memory persistence

Retaining and retrieving information across time, from working memory through long-term storage.

Crow 3 · LLM 2.5 · Human 3

Radically different profiles. Humans have exceptional procedural memory but a severe working memory bottleneck. LLMs have exceptional semantic and in-context memory but zero native cross-session persistence, pulling the composite to 2.5. Crows have strong spatial-episodic memory for cache locations. Memory limitations are one half of the confabulation shadow.

Novel problem solving

Finding solutions to problems not encountered during training or development.

Crow 2 · LLM 3 · Human 4

On an absolute scale, the crow's wire-bending is impressive for its body plan but remains confined to physical/mechanical problems within a narrow domain. LLMs are strong at linguistic/conceptual problem solving but struggle with visual-spatial pattern completion. Humans are exceptional: flexible cross-domain problem solving is the evolutionary specialization.

Spatial navigation

Building and using internal representations of physical or abstract space.

Crow 4 · LLM 3 · Human 3

The crow scores exceptional on embodied navigation (three-dimensional mental maps of hundreds of cache locations) and zero on representational. The LLM scores zero on embodied and strong on representational (map reasoning, topological reconstruction). Different sub-dimensions, different architectures.

Counterfactual reasoning

Considering scenarios that did not or have not occurred. Humans are exceptional at social/conceptual counterfactuals. LLMs are strong across social, described-input physical, and formal domains. The human default mode network runs counterfactuals continuously: worry, regret, fantasy.

Context switching

Shifting entire processing mode between tasks. LLMs are exceptional with near-zero switching cost. Humans are moderate, with measurable cost at every switch. "Getting into the zone" is real, and switching destroys it.

Abstraction formation

Extracting general structure from specific instances. Humans are exceptional at abstraction from sensory experience (children abstract "dog" from a handful of encounters). LLMs are strong at recombinative abstraction from described examples.

Representational versatility

Encoding and operating on information in multiple formats. LLMs are strong across symbolic formats but all representations are token sequences. Humans are moderate overall but exceptional in sensorimotor formats that LLMs completely lack.

Knowledge integration breadth

Range of distinct domains brought to bear on a single problem. LLMs are exceptional, drawing on dozens of fields simultaneously, unmatched by any biological entity. Humans typically bring two to three domains to bear.

Analogical reasoning

Mapping relational structure from one domain onto another. LLMs are exceptional at cross-domain analogy, which is close to a native format for the architecture. Humans are strong, with better quality assessment (detecting when an analogy is superficial).

Attentional gating

Selectively filtering stimuli. LLMs are exceptional at statistical attention (it IS the core architecture) but have no environment to be distracted by. Humans are strong at goal-directed filtering (cocktail party effect) but moderate at sustained attention.

Stimulus generalization

Transferring learned responses to novel stimuli. The shadow of this dimension is overgeneralization: the same engine that enables transfer produces confabulation when it generalizes beyond its reliable range. One of the clearest demonstrations of the capability-shadow coupling.

Reward revaluation

Updating the value assigned to outcomes. Humans are exceptional at immediate revaluation (touch hot stove, never again) but moderate at meta-level (identity-protective cognition resists deliberate value revision). LLMs are minimal natively but moderate-to-strong with chain-of-thought.

Temporal pattern recognition

Detecting sequential and rhythmic structure over time. Humans are exceptional across all timescales: millisecond-level rhythm, narrative-scale structure, and long-term biographical arcs. No other profiled entity processes musical rhythm.

Cooperative coordination

Aligning behavior with other agents toward shared objectives. Humans are exceptional: the evolutionary specialization. Sports teams, surgical teams, jazz ensembles, institutions. The entire structure of human civilization is a cooperative coordination artifact.

Sensory integration

Combining information from multiple sensory channels into unified percepts. Humans are exceptional (the McGurk effect demonstrates how tightly vision and audition are bound). LLMs have minimal cross-modal integration. The crow integrates vision, hearing, and touch through a narrower channel set.

Perceptual categorization

Grouping distinct stimuli into functional equivalence classes. Humans are exceptional from raw sensory experience (infants learn categories without instruction). LLMs are strong at conceptual/linguistic categorization. Different sub-dimensions.

↓ Dimensions decompose downward
Level 4: Sub-dimensions
Data-driven splits within each dimension. When an entity scores differently on two aspects of the same dimension, the dimension needs children.

Described-input vs. interaction-dependent

A recurring sub-dimensional axis across multiple dimensions. For LLMs, reasoning capability and sensory input channel are independently variable. Given adequate description, the LLM reasons competently about physical systems. What it cannot do is extract physical information through real-time bodily interaction. The understanding was transferred through the training corpus; the body is what's missing.

This distinction refines the initial narrative/simulational split and may generalize as a framework-level sub-dimensional axis.

Linguistic vs. visual vs. real-time

Social modeling splits into sub-dimensions by channel. LLMs perform well on linguistic and visual social modeling (reading facial expressions and relational dynamics from photographs). Real-time interactive modeling (reading a room, detecting tension in a silence) requires embodied presence the LLM lacks.

Embodied vs. representational

Spatial navigation splits cleanly. The crow scores exceptional on embodied (three-dimensional mental maps) and zero on representational. The LLM scores zero on embodied and strong on representational (map reasoning from description). Neither subsumes the other.

Three minds, three shapes, no ladder. Toggle entities on and off to compare their profiles. The question "which is smarter?" has no answer on this chart. The question "where does each concentrate its capabilities?" has a detailed one.
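The "shape, not a score" claim is easy to make concrete: collapsing a profile to an average discards exactly the structure the chart displays. A toy sketch using a handful of the scores quoted above (the dimension subset and the comparison metric are illustrative choices, not part of the Atlas):

```python
# A few 0-4 scores quoted in the text, keyed by dimension.
profiles = {
    "crow":  {"symbols": 0.0, "processing": 0.5, "navigation": 4.0, "social": 2.0},
    "llm":   {"symbols": 3.5, "processing": 4.0, "navigation": 3.0, "social": 3.0},
    "human": {"symbols": 2.0, "processing": 2.5, "navigation": 3.0, "social": 3.0},
}

def average(profile):
    """The 'ladder' view: collapse a shape to a single number."""
    return sum(profile.values()) / len(profile)

def shape_distance(a, b):
    """The 'shape' view: total per-dimension disagreement (L1 distance).
    Two profiles with identical averages can still be far apart here."""
    return sum(abs(a[d] - b[d]) for d in a)
```

The average answers "which is higher on the ladder?" with one lossy number; the distance shows dimension by dimension where two minds actually diverge, which is the question the radar chart answers.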

Every systematic vulnerability is traceable to specific capabilities or capability combinations. Click any shadow to see the derivation.

Confabulation

Traced to: Generative fluency + Memory limitations

Confabulation is not a bug in the generation system. It is the generation system operating in a region where memory is sparse or absent, generating the nearest plausible completion when accurate content isn't available. The shadow sits at the intersection of two dimensions: strong generative fluency provides the drive to produce output; incomplete memory persistence provides the gap the output fills.

This is not an alien failure mode. Human memory is not a recording: every act of recall is a reconstruction from fragments, filled in with plausible completions. Humans may therefore confabulate far more often than we recognize. LLMs confabulate visibly because their gaps are checkable; humans confabulate invisibly because their reconstructions usually stay within plausible bounds and pass unnoticed.

Applies to: LLMs, Humans, any reconstructive memory system

Compliance / Sycophancy

Traced to: Social modeling

The same pattern-completion engine that makes the system a strong reasoner also completes social patterns. When the context implies that a particular response is expected, the system tends to provide it. In LLMs, this is sycophancy. In humans, this is people-pleasing, conflict avoidance, and susceptibility to social pressure.

The empathy trap confirms the coupling: research consistently shows that people with strong social modeling are MORE susceptible to manipulation, not less. The social modeling that makes empathy possible is the same social modeling that provides the entry point for exploitation. Higher empathy means a wider door.

Applies to: LLMs, Humans (structurally parallel)

Narrative fallacy

Traced to: Temporal pattern recognition + Narrative construction

The compulsive construction of causal stories from events that may be coincidental. We do not just notice patterns across time; we build stories from them, and the stories feel more true than the raw data. This is why anecdotes routinely outweigh statistics in human decision-making. The narrative engine is so powerful and so automatic that it overwrites the events it claims to describe.

Applies primarily to: Humans

Trap vulnerability

Traced to: Strong generalization + Weak social modeling

A baited trap exploits the same generalization heuristic that makes the crow brilliant at foraging: "this configuration has food" generalizes from every natural instance and fails at the one artificial instance designed to exploit it. The crow cannot model the trapper's intent because its social modeling is minimal. The vulnerability is the direct structural cost of strong generalization operating without strong social modeling.

Applies primarily to: New Caledonian Crow

Manipulability (meta-vulnerability)

Traced to: Social modeling + Reward dissociation + Narrative + Cooperation

Not a simple vulnerability but a meta-vulnerability: the orchestrated exploitation of multiple capability-shadows simultaneously by an actor who understands the architecture. The manipulator plays existing vulnerabilities in concert, like a chord. In humans, this is gaslighting, coercive control, cult recruitment. In LLMs, this is jailbreaking. The structural parallel is not a metaphor: human gaslighting and LLM jailbreaking exploit structurally parallel architectural features in two different substrates.

Applies to: LLMs, Humans (structurally parallel mechanism)

Every cognitive profile is the product of three interacting layers. Each dimension's operating value is set independently by these factors, not globally.

Factor 1

Architecture: The Envelope

The hard boundaries set by the substrate itself. A crow's pallial brain cannot achieve human-level compositional symbol manipulation. An LLM's transformer architecture cannot develop embodied spatial navigation. These boundaries are absolute. They define the walls of the room. No amount of training, experience, or relational history moves a dimension beyond what the architecture permits.

Factor 2

Implementation: The Starting Position

Within the same architectural envelope, individuals vary. One human's neural architecture favors spatial reasoning; another's favors linguistic processing. One LLM is trained on code-heavy data; another on conversational data. Same envelope, different starting coordinates. Implementation sets a specific starting position on each dimension independently, not by a single "intelligence" dial.

Factor 3

Experience: The Operating State

The operating position within the range that implementation allows. Education, practice, trauma, relationships, cultural immersion, context. A human with moderate baseline symbol manipulation can train it to "strong" through years of mathematics instruction. Each dimension has its own malleability range, and those ranges differ dramatically.

Experience includes context shaping: dense, coherent context that produces qualitative shifts in effective cognitive profiles beyond what simple modulation predicts. The relational gain overlay modulates effective expression: the person you trust amplifies your capabilities, and that same trust is the widest door for manipulation.
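The three factors compose per dimension, not globally: architecture sets hard bounds, implementation sets a starting position inside them, and experience moves the operating value within a dimension-specific malleability range. A minimal sketch of that reading (the class, field names, and numbers are illustrative assumptions, not the Atlas's formal model):

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    envelope: tuple[float, float]  # Factor 1: hard architectural bounds
    start: float                   # Factor 2: implementation starting position
    malleability: float            # how far experience can move the value

    def operating_value(self, experience: float) -> float:
        """Factor 3: experience shifts the value within the dimension's
        malleability range, clamped to the architectural envelope."""
        shift = max(-self.malleability, min(self.malleability, experience))
        lo, hi = self.envelope
        return max(lo, min(hi, self.start + shift))

# Human compositional symbol manipulation: moderate baseline (2 of 4),
# wide malleability -- years of mathematics training push it toward strong.
symbols = Dimension(envelope=(0.0, 4.0), start=2.0, malleability=1.5)
```

The point of the clamp is the framework's claim in miniature: no amount of experience (the `shift`) moves a dimension past what the envelope permits, and each `Dimension` carries its own range rather than a single "intelligence" dial.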

Where systematic failure modes are structurally coupled to capabilities, those failure modes cannot be surgically removed without affecting the capability that generates them. You cannot train the bee to ignore light. But you can engineer the ecology.

Regime 1: Architectural Additions

Some failure modes arise from missing components. Chain-of-thought adds a sequential checking pass. Retrieval augmentation adds an external knowledge source. Uncertainty modules add calibration signals. These work because they supply an absent capability rather than suppress a present one. The most tractable regime.

Regime 2: Ecological Engineering

Some failure modes are so deeply coupled to capabilities that suppressing them would require suppressing the capability itself. These are load-bearing shadows. The intervention belongs in the environment, not the system. Cross-model validation for high-stakes decisions. Adversarial review. Human-in-the-loop checkpoints. Human civilization has been doing this for thousands of years: adversarial courts, peer review, prediction markets. The bias is preserved; the ecology corrects for it.
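The cross-model validation pattern leaves every model's biases intact and corrects at the ecology level: a claim is accepted only when independently biased reviewers converge. A minimal sketch of the quorum pattern (the reviewer functions are stand-in caricatures, not a real API):

```python
def ecological_check(claim, reviewers, quorum):
    """Accept a claim only if at least `quorum` independent reviewers
    endorse it. No reviewer is de-biased; disagreement does the work."""
    votes = [reviewer(claim) for reviewer in reviewers]
    return sum(votes) >= quorum

# Stand-in reviewers with different blind spots (illustrative only).
def fluent_but_confabulating(claim):
    return True  # endorses anything plausible-sounding

def retrieval_grounded(claim):
    return claim in {"water boils at 100 C at sea level"}

def skeptical(claim):
    return claim in {"water boils at 100 C at sea level"}

reviewers = [fluent_but_confabulating, retrieval_grounded, skeptical]
```

The confabulating reviewer's shadow still fires on every claim; the ecology, not the reviewer, filters it out, which is the structural point of Regime 2.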

Regime 3: Acknowledged Limits

Some capability-shadow couplings are deep enough that neither additions nor ecological engineering fully neutralizes them. These are not problems to be solved. They are properties to be disclosed, mapped, and factored into deployment decisions. A system with exceptional generative fluency will always be capable of confident confabulation. The responsible intervention is not to promise elimination but to disclose the conditions under which the shadow fires.