
Hallucination v. imagination

15 min read · May 28, 2025

Some surprising connections between dream states and deep learning

WORK62-W (1962) by Onoda Minoru, oil, gofun and glue on plywood. Permanent collection of Tate Modern, London, photo by me.

I discovered something truly new using AI, represented by five white-on-black scatterplots. But what exactly? Like the classic lucid dream technique of inspecting text as a reality check, repeated inspection of these showed only vague images, and no clarity about the specific concepts discovered. It seems I am doing lucid dreaming without conscious intention.

I wrote that in a dream diary on April 29, 2025. I’m an AI researcher, I had been trying to create something new, and had also been looking at a lot of scatterplots, so the dream itself wasn’t surprising. At the same time, in the evenings I had recently finished reading Dreams of Awakening, a 2013 book about lucid dreaming by Charlie Morley that someone gave me. I had no particular interest in the topic and had no intention of doing any lucid dreaming.

I loved the book. I still have no intention of actively practicing lucid dreaming — I’m content to just sit back and enjoy the absurdity of one crazy adventure after the next in dreams. But after reading the book I realized that I was doing it anyway. I was probing, asking questions about what I was experiencing, pushing things further and sometimes in particular directions. Mainly when in that state balanced between wakefulness and dreaming — which is exactly where Morley places lucid dreaming.

I took a more active interest in my dreams (hence the dream diary), started enjoying them more — particularly their striking realism. And then I started to find some unexpected links with the machine learning work that I and others were doing. More importantly, Morley convinced me that what we normally call being “awake” is actually just another level of dreaming. What he regards as being fully awake is when you realize that the “real” world is just a dream — a consensual one we all see and create in our own heads. If you believe this, it fundamentally changes your outlook.

So in this article I will share some amazing things I have found out so far about dreams, AI, and the construction of reality. Dreams are a rich area, everyone has their own obviously, and there is way too much research as well as pseudo-science and speculation to cover everything. I will focus specifically on:

* World models in the brain and in AI;
* The emptiness of things, both real and virtual;
* How dreams and AI systems make predictions;
* How both the brain and AI rely on partial connections; and
* The surprising role of the body and movement in dreams, and in robotics.

It goes without saying that this is all partial knowledge in progress — my current understanding, subject to change, shared here informally and not scientifically tested.

Reality check!

I mentioned this above. It is one of Morley’s suggested techniques for testing whether you might be in a dream. Look at your hands, he says, then look again. Do they look the same? Are there any distortions or extra fingers? (Note similarities with early generative AI. His book was written before the six-fingered generated human was a thing.)

Or, if you see any text, look closely. Can you read it? When you look again, is it the same? That was my tip-off on the scatterplots — they didn’t look the same, and they didn’t make any sense. You can do a reality check, Morley points out, when you’re awake too. Walking down the street, washing the dishes, any time of day. By normalizing it in “waking” life, you’re more likely to try it in a dream.

Matrix Multiplication Series 29 (1967) by Frieder Nake, photo by me.

World models

A fundamental assumption in neuroscience is that we develop a mental model of the world based on what we perceive with our senses, which then guides our decisions and actions. It is flexible enough so that we can react quickly and intuitively to new situations as they arise, by subconsciously comparing past and present states.

This understanding has been transferred over to AI — specifically reinforcement learning, where a model of the world can be used to train a system to make predictions as we do in daily life. See https://worldmodels.github.io/.

Morley’s book on lucid dreaming is supported by plenty of neuroscience research, and one insight that surprised me was that one part of the brain generates the physical spaces within dreams, which you then navigate using another part of the brain, just like a videogame character.

This was unwittingly replicated for AI by the authors of the above world modelling paper: “We first train a large neural network to learn a model of the agent’s world in an unsupervised manner, and then train the smaller controller model to learn to perform a task using this world model.”
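To make that two-stage recipe concrete, here is a minimal sketch in PyTorch. To be clear, this is my own illustrative code, not the paper’s: the layer sizes, the plain autoencoder (the paper uses a VAE plus a mixture-density RNN), and the fake rollout data are all stand-in assumptions.

```python
import torch
import torch.nn as nn

# Stage 1: learn a compressed "world model" from observations alone
OBS_DIM, LATENT_DIM, ACTION_DIM = 64, 8, 2

encoder = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.ReLU(), nn.Linear(32, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, OBS_DIM))
dynamics = nn.GRU(LATENT_DIM + ACTION_DIM, LATENT_DIM, batch_first=True)

wm_params = list(encoder.parameters()) + list(decoder.parameters()) + list(dynamics.parameters())
wm_opt = torch.optim.Adam(wm_params, lr=1e-3)

obs = torch.randn(16, 10, OBS_DIM)        # fake rollouts: batch x time x observation
actions = torch.randn(16, 10, ACTION_DIM)

z = encoder(obs)                                        # compress each observation
recon_loss = ((decoder(z) - obs) ** 2).mean()           # can we reconstruct the world?
z_pred, _ = dynamics(torch.cat([z[:, :-1], actions[:, :-1]], dim=-1))
pred_loss = ((z_pred - z[:, 1:]) ** 2).mean()           # can we predict the *next* state?

wm_opt.zero_grad()
(recon_loss + pred_loss).backward()
wm_opt.step()

# Stage 2: a much smaller controller acts on the compressed latent state only
controller = nn.Linear(LATENT_DIM, ACTION_DIM)  # trained separately, on the world model
action = controller(encoder(obs[:, 0]))
```

The point of the split is that the world model does the heavy lifting once, while the controller, small enough in the original paper to be trained by evolution strategies, only ever sees the compressed latent state.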

Dream worlds, as you know, are not complete or coherent. So maybe a better comparison is this research in which AI generates a game world in real time, not ahead of time. This means that (like your dreaming brain), the AI has some understanding of the physics and construction of physical spaces. In the case of the AI-generated worlds in this research, a friend pointed out to me, “They don’t yet work as persistent-world games because if you turn back, the world has changed.” Just as in a dream.

Yin-Yang (1969) by Betty Danon, collage on paper, photo by me.

Everything is empty

That brings us neatly to the concept of emptiness. You know that games are constructed (whether by AI or humans) out of textured surfaces with nothing behind, like a Hollywood set. Textures can be rich or dynamic, and volumetric 3D modelling exists, but the latter is too processor-intensive and overkill for games; in any case it’s equally empty inside, being constructed from geometric digital points and not the wood, dirt, drywall and metal of our physical environment.

But, while the “real” world has some sort of material structure, what if it is as empty as a videogame?

This is backed up by physics. If you zoom in far enough, what you see are particles without determined positions, which appear very far apart at that scale. You’ve seen diagrams of atoms — electrons in orbit around a nucleus, with plenty of space between. Atoms are made up mainly of empty space.

It’s more nuanced than that: what binds electrons to the nucleus is the electromagnetic force (the strong force, in turn, holds the nucleus itself together), and in quantum theory the position of each particle at a given time is described as a cloud of probabilities. So, more specifically, we could say that material things are made primarily of energy. When you touch something, it feels solid because those electron clouds push back against your finger.
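For the curious, the “cloud of probabilities” has a precise form. As a standard textbook illustration (nothing specific to Morley’s argument): the probability of finding the hydrogen atom’s single electron at a point is the squared magnitude of its wavefunction, and in the ground state that density falls off exponentially with distance from the nucleus.

```latex
% Born rule: probability density is the squared magnitude of the wavefunction
P(\mathbf{r}) = |\psi(\mathbf{r})|^2

% Hydrogen ground state, with Bohr radius a_0 \approx 5.3 \times 10^{-11}\,\text{m}
\psi_{1s}(r) = \frac{1}{\sqrt{\pi a_0^3}} \, e^{-r/a_0}
```

The nucleus itself is roughly 100,000 times smaller than the atom, which is the sense in which atoms are mostly empty.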

This is where things get all metaphysical. See, Morley is also a Buddhist monk. In the book he draws from both neuroscience and quantum physics, but also from Buddhism. And one of the core ideas of Buddhism is about the emptiness of things — you, your chair, the world at large. But emptiness here is not nothingness — in Buddhism it represents infinite potential.

This actually matches up with the physics: If each of those trillions of particles is represented by a cloud of probabilities, there are practically infinite positions it can take at any given time. Nothing is fixed, but always in a state of flux. Even seemingly solid things are in constant motion at the quantum level.

Now, let’s apply this back to dreams and waking life. If everything is empty but also in constant motion, with infinite potential, how does it appear so fixed, so detailed? Isn’t it amazing that your brain can construct whole dream worlds that seem perfectly real? What if your brain equally constructs how we perceive the “real” world?

Following this (admittedly counterintuitive) logic, this would mean that our lives are not necessarily conditioned or constrained by the external world, but primarily by our inner life. It follows that everything “out there” is connected to the world “in here”.

Again, there is some physics to support this: every persistent, seemingly solid object exerts a gravitational pull, however small. Everything really is connected in this way. The illusion of dualistic reality is that things seem solid and separate from us. If everything is an illusion and nothing exists independently, everything becomes dreamlike.

Still with me? I should add at this point that I am not a Buddhist myself. I became interested in some of its ideas for the fascinating reasons being explored here. I had read a bit about Buddhism previously, but nothing really resonated until I read Morley’s book, which taught me more about Buddhism without even being explicitly about it.

I remain a non-Buddhist. However, if you accept, at a minimum, that our internal state affects how we see and act in the world, this might change the way that we view other people, things and situations. Inside and outside are connected in precisely this way: my attitude toward the world influences how I see and treat things and people, and this might cause real changes in the “real” world. All that talk by the Dalai Lama about showing compassion toward others; Yoko Ono’s suggestion that simply visualizing world peace can bring it about — it all suddenly makes sense.

It’s not difficult to apply this to AI. Chatbots, generated images and video — you already know that they are similarly “empty”, not grounded in “reality” but simply the result of next-token prediction. In creating believably realistic worlds, generative AI starts to approach the dreaming mind.
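To ground the phrase: next-token prediction is, at heart, a loop that scores every possible next token, turns the scores into probabilities, samples one, and repeats. A minimal sketch in Python, where the trained network is replaced by a stand-in that returns random scores:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "dream", "world", "is", "empty", "."]

def fake_model(tokens):
    """Stand-in for a trained network: returns one score per vocabulary word.
    A real LLM would condition these scores on the tokens so far."""
    return rng.normal(size=len(VOCAB))

tokens = ["the"]
for _ in range(5):
    logits = fake_model(tokens)
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
    tokens.append(rng.choice(VOCAB, p=probs))      # sample the next token
print(" ".join(tokens))
```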

More generally, it’s about attention. Attention is all you need in machine learning — this idea almost singlehandedly launched the current wave of AI. And for humans, too, what we choose to attend to at any given time directly influences the future — our own and others’. (I wrote about this in detail here.)
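The mechanism behind that slogan fits in a few lines. Here is a minimal NumPy sketch of scaled dot-product attention, with toy dimensions and none of the learned weight matrices a real transformer would have:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query 'attends' to every key."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                 # weighted mixture of values

Q = np.random.randn(4, 8)  # 4 tokens asking "what is relevant to me?"
K = np.random.randn(4, 8)  # 4 tokens advertising what they contain
V = np.random.randn(4, 8)  # the content that actually gets mixed together
print(attention(Q, K, V).shape)  # (4, 8)
```

Each output row is a blend of the values, weighted by how much that token attends to every other one.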

Rotorelief (1935) by Marcel Duchamp, cardboard disks printed on both sides, photo by me.

Predicting the future

Next-token prediction brings us to something weirder, and even more interesting. You know that most AI systems exist for this very purpose — predicting what comes next, or what something is, based on the past data they were trained on. Returning to the concept of world models, I discussed at the start how the human brain does something similar. What about dreams? Can they predict the future?

This idea has been explored for a long, long time — probably since humans started wondering about things in general, thousands of years ago. J.W. Dunne, a British engineer and philosopher, speculated in 1927 that early humans may have come up with the idea of the soul based on their experience of dreams. His book, An Experiment with Time, documents his efforts to determine whether dreams show visions of things to come, after he had a few uncanny experiences in this regard.

I won’t go into detail (you can access the book for free here), except to say that he became convinced, and provided examples from his own as well as others’ dream experiences. He noted however that very few dreams predict the future; many are simply the mind making sense of its experiences of the recent past, as is generally accepted.

Morley, in Dreams of Awakening, also believes dreams can foretell the future, and the way he explains it is convincing. He points to psychologist Carl Jung, who said that the brain can easily predict future events not through some mystical property, but simply due to the amount of information it stores. We are simply unable to see all this information consciously, and some of it emerges in dreams. I don’t know about you, but I see things in dreams that I had completely forgotten about. No wonder Sigmund Freud looked to dreams for repressed desires and anxieties.

Think about it: How much sensory information do you take in during a single day? Without even taking into account screens and social media, that’s already a lot. But predicting the future? Morley goes back to quantum theory to explain a nonlinear relation between past, present and future — something about “nonlocal communication through the vast quantum interconnectivity of reality.”

This actually matches with Dunne’s explanation — that time is not linear, but we fail to see its nonlinearity in waking life due to the overriding preference of the conscious, rational mind for linearity and causality. And Dunne was an engineer, not a Buddhist monk.

Personally, though, I don’t buy it. I prefer my much more parsimonious explanation above: that the quality and quantity of attention we devote to certain things simply makes them more likely to happen. Or something like that. Dunne actually said something similar — that just knowing about something or actively devoting attention to it, like writing down a dream, makes it more likely to occur in waking life, because we’re more attuned to it.

According to Tiffany D’Elia, it’s like that scene in The Matrix when Neo visits The Oracle, and a monastic child shows him how to bend a spoon with his mind because, as the child says, “There is no spoon.” In the interconnectedness of things, the spoon is not separate, and reality is a projection of our own consciousness. This, D’Elia says, goes back to the Hermetic school of philosophy: “The mind is not inside the universe, the universe is inside the mind.” It’s all a bit too mystical for me, but her argument here accords with what I’ve detailed above.

Let’s go back to world models again. The authors of that paper cite evidence suggesting that “what we perceive at any given moment is governed by our brain’s prediction of the future based on our internal model”. And similarly, in AI reinforcement learning, “an artificial agent also benefits from having a good representation of past and present states, and a good predictive model of the future”.

Sonder (Ana) (2024) by Felipe Baeza, ink, acrylic, cut paper, varnish, twine on paper, photo by me.

Partial connections

So this model of the world, whether in our head or in an AI model, isn’t coherent, or complete. Memories/data are not stored whole but in parts. Through a similar kind of next-token prediction, both generative AI systems and dreams seem to construct things by combining these partial memories, and that’s why both generative AI and dreams have a similar, seemingly random quality — because there is some randomness involved. Dunne, 100 years ago, called it the train of ideas.

When you saw the title of this article, maybe the first thing you thought of was Google’s DeepDream. Born in the fairly early days of the current AI wave, it was developed by Google researcher Alexander Mordvintsev, who was working on computer vision.

Most researchers were building image recognition systems by training them on labeled data, then feeding in an image and having the system classify it based on the training set. Mordvintsev instead fed in an image and stopped the inference process partway through, before the system decided what the image was. He then adjusted the image to amplify whatever the intermediate layers had detected, and fed it back in, over and over. You’ve seen the resulting DeepDream imagery.
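Roughly, the trick looks like this. The sketch below is my own minimal reconstruction, not Mordvintsev’s code; the choice of network (VGG16), the cutoff layer, the learning rate and the iteration count are all arbitrary assumptions:

```python
import torch
from torchvision import models

# Take a trained classifier, but stop "partway through" the inference process
model = models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from any image
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(50):
    opt.zero_grad()
    activations = model(img)
    loss = -activations.norm()  # gradient *ascent*: amplify whatever the layer "sees"
    loss.backward()
    opt.step()
# img has now drifted toward the textures and shapes the network hallucinates
```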

This article explains nicely:

Our human perceptual system behaves in a similar way, making us “see” things that aren’t really there, like the face on the moon or pictures in clouds or the happy face in the Galle crater on Mars, an illusion called pareidolia. The dream-like images generated by Mordvintsev’s algorithm were almost hallucinogenic, startlingly akin to those experienced by people on LSD. Did that mean that an artificial neural net was not that artificial? Could we say that Mordvintsev had found a way to look into the machine’s unconscious, into its inner life, its dreams?

Artist Pierre Huyghe followed this train of ideas a few years later. Working with researchers who recorded people’s neural activity when they were thinking of particular objects, he then had an AI system play back this data to reconstruct mental representations of the objects. The results were as weird as the DeepDream imagery — abstract, constantly shifting, but with some clearly recognizable elements. Dreamlike, in other words.

Because of these partial connections in the brain, we shouldn’t read too much into the imagery that dreams throw up. Here, subsequent psychology research has shown that Freud probably overreached in this regard. Author Martin Amis, in his novel London Fields, writes, “We are all poets or babies in the middle of the night, struggling with being.”

The partial connections in generative AI systems, however, can throw up some surprising things. Artists Holly Herndon and Mat Dryhurst explain:

As AI models are trained, they compress data into representations of “concepts”. These concepts (a person, a thing, an idea, etc.) are understood by the model in multiple dimensions and in relation to all of the other concepts the model recognises. The place in mathematical space that a model assigns to a given concept is called its “embedding”. In theory, every possible thing a model can know — all of the infinite connections that can be drawn from the data it contains — already exists within it. (From their 2024 exhibition catalog All Media is Training Data, p. 79)

If that is the case, they wondered, could they create an artwork and then find it already existing in a model’s latent space? Sounds crazy, and yet this is exactly what they did. They created a physical sculpture of a horse, and then searched for it in a model that had been trained before they made the sculpture. And there it was. OMG! We might be wary of what AI might predict or create, but again, this depends on what we attend to and feed into it.
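In practice, “searching latent space” usually means embedding a query and measuring its distance to everything else. A hedged sketch of the general technique (the `embed` function is a placeholder for a real encoder; this is not Herndon and Dryhurst’s actual pipeline):

```python
import numpy as np

def embed(item: str) -> np.ndarray:
    """Placeholder for a real encoder (image or text -> unit vector).
    Random here, so results are arbitrary; a real model places similar
    concepts at nearby points in the space."""
    rng = np.random.default_rng(abs(hash(item)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

# The "latent space": embeddings of everything the model already contains
corpus = ["horse sculpture", "white noise", "ocean at dusk", "bronze horse"]
vectors = np.stack([embed(x) for x in corpus])

# Embed the new artwork and find its nearest neighbor by cosine similarity
query = embed("a physical sculpture of a horse")
scores = vectors @ query              # dot product of unit vectors = cosine similarity
print(corpus[int(np.argmax(scores))])
```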

Twilight (1975) by Gwenn Thomas / Joan Jonas, silver gelatin print, photographed by me.

The body strikes back

Everything I’ve discussed so far has been purely mental, except where our mental conceptions influence how we see and act in the physical world. But you and I both know that human “intelligence” is vastly different from AI, owing particularly to the fact that we have a body and exist “in the world”, in relation to and interconnected with other physical and biological things. Therefore, our relation to dreams can’t be all that similar to an AI model’s.

The body is typically paralyzed during dreams, presumably so that we don’t wake up or hurt ourselves while acting out those dreams. The occasional twitch, not to mention the rapid eye movements (REM), tips off the outside observer that a sleeper is dreaming.

But the surprising thing I discovered is that it actually works the other way around: our mental activity doesn’t give rise to physical activity; rather, bodily movement drives our dreams.

Morley discusses this, noting for example that if you want to remember a dream you just had, stay in exactly the same position — when you move, the memory fades. We’re still lying down, of course; the point is not that we physically run or fly and then dream it, but that dreams arise from physical sensations rather than prompting them.

Research by neuroscientist Mark Blumberg, carried out over a couple of decades, confirms this by comparing the order in which neural activity and physical activity occur. His results, spelled out in this article, were clear: “The body and brain weren’t disconnected. The brain was listening to the body.” The body is paralyzed during sleep not to suppress the twitches but to isolate them, so that their signals come through clearly. The article reports:

In a series of papers, Blumberg articulated his theory that the brain uses REM sleep to “learn” the body. You wouldn’t think that the body is something a brain needs to learn, but we aren’t born with maps of our bodies; we can’t be, because our bodies change by the day, and because the body a fetus ends up becoming might differ from the one encoded in its genome. “Infants must learn about the body they have,” Blumberg told me. “Not the body they were supposed to have.”

What’s more, “Memories, too, have long been thought a product of the brain, but are increasingly understood as also tied to the body.”

You might think there couldn’t possibly be a link with AI, but one word: robotics. Blumberg speculated that the same idea could apply: by randomly twitching, a robot might be able to learn how to move in the world. Indeed, such work was already underway: researchers had already found that their robot “could essentially learn to walk from scratch by systematically twitching to map the shape and function of its body.”

What happens next? You guessed it:

Watching the robot twitch, a fellow-researcher commented that it looked like it was dreaming. The team laughed and thought nothing of it until the fall of 2013, when Bongard met Blumberg when he gave a talk on adaptive robots. Suddenly, the idea of a dreaming robot didn’t seem so far-fetched. “Dreaming is a safe space, a time to try things out and retune or debug your body,” Bongard told me. (source)
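As a toy version of twitch-to-learn (my own sketch, not Bongard’s system): give a simulated body an unknown motor-to-movement mapping, issue small random twitches, record what moves, and fit a self-model from the data. Everything here (the linear body, the noise level, the number of twitches) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_BODY = rng.normal(size=(3, 5))  # unknown mapping: 5 motors -> 3 sensed movements

# "Twitch": send small random motor commands and record what the body senses
twitches = rng.normal(size=(200, 5))
sensed = twitches @ TRUE_BODY.T + 0.01 * rng.normal(size=(200, 3))

# Fit a body map from the twitch data (least squares: the simplest self-model)
body_map, *_ = np.linalg.lstsq(twitches, sensed, rcond=None)

print(np.abs(body_map.T - TRUE_BODY).max())  # near zero: the body has been "learned"
```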

Some conclusions

Dreams are a safe space for humans too — maybe the last place we are truly free (though paradoxically not in control). For now. In the novel The Dream Hotel, Laila Lalami describes a near future in which AI companies monitor our dreams via a device that optimizes a good night’s sleep, but also reports disturbing dreams to the government. Isn’t your sleep tracking device already halfway there?

Now we know that the body also reports to the subconscious mind. But is everything somehow already present in the mind, as in the AI model? Maybe, but only in pieces. Both we and AI string these pieces together to create a train of ideas. (Sometimes, a crazy train.)

Neither dreams nor AI know where they’re going, but they keep going in the same direction. That box full of pieces is endless, representing all the data that minds and models collect. So, paradoxically, world models are somehow empty of meaning, but with infinite potential connections. And attention is all you need to see the world, maybe create it, maybe predict the future.

Don’t forget to do a periodic reality check: There is no spoon.
