How Does Imagination Really Work in the Brain? A New Theory That Changes How Cyberpunk Worlds Are Built and Bought
What if the mind’s eye is less a paintbrush and more a noise-cancelling filter? New neuroscience argues exactly that, and it matters for anyone designing neurotech, immersive fiction, or the interfaces people will wear into neon-lit futures.
A lone developer in a cramped studio straps on a headband and waits for the overlay to settle. The city outside pulses, but the image appearing behind closed eyelids is not summoned so much as coaxed into being; the device suppresses the wrong things until the right thing remains. That moment of silence, not a burst of activity, is the new scientific tension at the heart of imagination research, and it changes who gets to claim agency over inner worlds.
Most press coverage framed the finding as another example of top-down mental imagery lighting up visual cortex. That is the obvious reading. The underreported shift is this: imagination may work by selectively quieting ongoing spontaneous activity rather than by building vivid images from scratch, and that inversion changes what engineers and storytellers can plausibly do when they try to tune human experience. The reporting below leans on recent press and an advance Psychological Review paper and then follows the science into practical product decisions and cultural consequences. (uts.edu.au)
Why this rewrites the rulebook for neurotech and design
For years the default engineering mental model has been that feedback equals activation: high-level systems send signals down, and visual cortex lights up in service of the image. The new hypothesis proposes that feedback acts more like a filter, stabilising one pattern out of an ocean of spontaneous neural chatter. This matters to device designers because the control variable is suppression, not excitation, which is a different engineering problem with different failure modes. (philpapers.org)
The new theory at a glance
The spontaneous activity reshaping hypothesis argues that early sensory areas are already producing shifting, low-amplitude patterns without input, and imagination works by holding some of those patterns in place while damping others. The Psychological Review article synthesises electrophysiology, imaging, and behavioural data to show the signature of imagination looks like selective suppression more than global firing. That inversion helps explain why most imagined images feel weaker than real perception and why people with aphantasia have markedly different baseline excitability. (philpapers.org)
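To make the inversion concrete, here is a minimal toy sketch (my illustration, not a model from the paper): treat ongoing activity as a blend of latent patterns, and let "imagining" one pattern mean damping the others rather than injecting extra drive. Everything here (the pattern basis, the `damp` factor) is an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 100, 5

# Latent patterns that spontaneous activity wanders through.
patterns = rng.standard_normal((n_patterns, n_units))
patterns /= np.linalg.norm(patterns, axis=1, keepdims=True)

# Ongoing activity: a shifting blend of all patterns plus noise.
weights = rng.uniform(0.5, 1.0, n_patterns)
activity = weights @ patterns + 0.1 * rng.standard_normal(n_units)

def imagine(activity, patterns, target, damp=0.2):
    """Suppress non-target components instead of boosting the target."""
    coeffs = patterns @ activity            # project onto pattern basis
    gate = np.full(len(patterns), damp)     # damp everything...
    gate[target] = 1.0                      # ...except the held pattern
    return (gate * coeffs) @ patterns

shaped = imagine(activity, patterns, target=0)

def similarity(x, p):
    return float(x @ p / (np.linalg.norm(x) * np.linalg.norm(p)))

before = similarity(activity, patterns[0])
after = similarity(shaped, patterns[0])
```

In this sketch the shaped activity resembles the target pattern more than the raw activity did, yet its overall amplitude falls, which is the signature the theory predicts: selectivity through quieting, not a global burst.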
What computational models and memory science add to the picture
Separate lines of work model imagination as a generative process trained by hippocampal replay, where consolidated cortical networks play back probabilistic scene samples. Those models show how stored regularities can be reassembled without exhaustive new activity, making suppression and selective gating efficient strategies for producing coherent images. The hippocampus also appears to function as a predictive map for planning, which fits a story where imagination samples possible futures from an internally maintained statistical world model rather than inventing them ex nihilo. (nature.com)
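A cartoon version of that sampling idea can be written down directly (an assumption-laden toy, far simpler than the cited models): treat consolidated experience as a first-order Markov model over scene elements, and "replay" as sampling trajectories from it without any fresh input. The scene names and episodes below are invented for illustration.

```python
import random
from collections import defaultdict

# "Consolidated" experience: short episodes of scene elements.
experience = [
    ["street", "neon_sign", "bar", "alley"],
    ["street", "alley", "rooftop"],
    ["bar", "alley", "rooftop"],
]

# Learn stored regularities as observed transitions.
transitions = defaultdict(list)
for episode in experience:
    for a, b in zip(episode, episode[1:]):
        transitions[a].append(b)

def imagine_scene(start, steps, rng):
    """Sample a plausible (not necessarily memorised) sequence."""
    path = [start]
    for _ in range(steps):
        options = transitions.get(path[-1])
        if not options:
            break
        path.append(rng.choice(options))
    return path

rng = random.Random(42)
sampled = imagine_scene("street", steps=3, rng=rng)
```

The point of the toy: every step the sampler takes is licensed by a stored regularity, so novel sequences emerge from recombination rather than from exhaustive new construction, which is exactly why selective gating is an efficient strategy.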
What this means for cyberpunk culture and product teams
In cyberpunk stories the interface between mind and machine is a site of power, piracy, and art. If imagination is sculpted by silence, the map of possible interventions shifts toward tools that alter background dynamics: adaptive noise suppression, inhibitory neuromodulation, and finely targeted attention-mapping rather than crude stimulation. That changes the plausibility of plot devices like memory overwrite, instant haptics, and vivid implant-driven hallucinations; those tricks are still possible, but they look more like clever gating and pattern-selection than brute-force painting. No one said the future would be subtle, only that it might be quieter.
Imagination, according to the new theory, is less a factory of images and more a curator that mutes everything but the piece it likes.
Why small teams should watch this closely (concrete SME scenarios)
A boutique VR studio of 10 people that wants to prototype neurofeedback features can do so without multi-million-dollar implants by pairing consumer EEG headbands with software that emphasises suppression-based training. A single Muse 2 headband retails for around $250, so five headbands cost about $1,250, comfortably under a $10,000 hardware budget; add roughly $40,000 in person-months if two engineers work for three months at a modest indie rate. That buys early signal collection and feature validation before any commitment to invasive or clinical-grade systems. Consumer EEG is not a shortcut to clinical-grade accuracy, but it is a realistic, affordable way to validate interaction concepts. (choosemuse.com)
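The arithmetic behind that budget is worth making explicit. A back-of-envelope check, with the per-person-month rate as an assumption chosen to match the article's $40,000 figure:

```python
# Hardware: five consumer headbands at the quoted retail price.
headband_price = 250
headbands = 5
hardware = headband_price * headbands   # $1,250, well under $10k

# Labour: two engineers for three months at an illustrative
# indie rate (~$6,700/person-month is an assumption, not a quote).
rate_per_person_month = 6_700
person_months = 2 * 3
labour = rate_per_person_month * person_months  # ~$40,200

total = hardware + labour
```

Even with generous padding for peripherals and software licences, the whole experiment stays an order of magnitude below any clinical-grade alternative.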
A plausible 20-person AR studio could budget similarly for user testing and then partner with a research lab for deeper physiology, turning an initial $50,000 concept budget into a staged rollout that mitigates regulatory and ethical risk. The arithmetic makes design sense: fail cheap on the question of whether selective gating improves user sense of agency, then scale to more expensive hardware only when behavioural effects are clear.
The cost nobody is calculating: attention engineering and energy
Shifting from activation to suppression changes energy and safety profiles. Suppressing ongoing activity requires fine temporal control and may demand continuous monitoring and adaptive algorithms, which adds software complexity and cloud compute costs. For subscription-based neuro-interaction features, that cost is ongoing, not a one-time device purchase, and should be budgeted into lifetime customer value rather than treated as a trivial API call. This is the kind of spreadsheet no one glamorizes in fiction, and yet it decides whether a startup survives.
Risks, ethics and open questions that matter to creators
If imagination is shaped by quieting parts of ongoing neural life, then interventions reach into the substrate of self. That raises acute consent, addiction, and manipulation risks that fiction has only begun to dramatise. The hypothesis remains to be validated across modalities, timescales, and cultures, and the gap between correlation in imaging and causal control with stimulation is still a methodological cliff. Product teams should not assume that because the cortex can be nudged, desires can be rewritten without long-term side effects.
The short forward-looking close
This quieter model of imagination reframes design from painting to pruning, which is both a technical challenge and an ethical invitation; cyberpunk creators and engineers must now debate whether silence should be sold, regulated, or celebrated.
Key Takeaways
- The new theory suggests imagination works by selectively suppressing spontaneous brain activity, not by simply activating visual cortex.
- That inversion makes suppression, gating, and attention-mapping the practical levers for neurotech and immersive experiences.
- Small teams can prototype relevant features affordably using consumer EEG hardware before scaling to clinical systems.
- Ethical risks multiply when products target the background dynamics that help form personal experience.
Frequently Asked Questions
How should a small AR studio start experimenting with neurofeedback under this theory?
Begin with off-the-shelf consumer EEG devices to collect baseline patterns while testing suppression-based interaction metaphors. Use those data to validate whether selective gating improves user experience before investing in expensive hardware or clinical partnerships.
Will this theory let companies reliably implant vivid hallucinations into users?
Not easily. The theory reframes vividness as a matter of stabilising subtle background patterns, which is more about precision gating than brute stimulation. Reliable, safe clinical-level effects still require major technical and regulatory advances.
Does this change the regulatory risk for neurotech startups?
Yes. Devices aiming to alter spontaneous brain dynamics implicate informed consent, mental autonomy, and new safety pathways, so regulatory planning should begin early and include ethics review and legal counsel.
Can narrative designers use this to create more believable cyberpunk interfaces?
Absolutely. Portraying mind interfaces as tuning or muting background processes is both scientifically plausible and narratively rich, offering subtler forms of influence and conflict than instant content injection.
Is the science settled enough to base products on it now?
The hypothesis is compelling and supported by converging data, but causal control experiments are still needed. Treat early prototypes as exploratory research rather than product guarantees.
Related Coverage
Readers interested in the technical and cultural fallout of this shift should explore stories on hippocampal generative models, predictive processing in perception, and the evolving market for consumer brain sensing devices. Those topics show where memory, imagination, and interface economics intersect, and they offer practical follow-ups for product teams and speculative fiction writers alike.
SOURCES: https://www.uts.edu.au/news/2026/04/how-does-imagination-really-work-in-the-brain-new-theory-upends-what-we-knew, https://philpapers.org/rec/KOESTM, https://www.nature.com/articles/s41562-023-01799-z, https://www.nature.com/articles/s41467-023-35967-6, https://choosemuse.com/products/muse-2