The Technium

Dreams are the Default for Intelligence

I have a proto-theory: that our brains tend to produce dreams at all times, and that during waking hours our brains tame the dream machine into perception and truthiness. At night we let it run free to keep those brain areas occupied. The foundational mode of intelligence is therefore dreaming.

Here’s how I got there: for a while I’ve been intensely exploring generative AI systems, creating both text and visual images almost daily, and I am increasingly struck by their similarity to dreams. The AIs seem to produce dream images, dream stories, and dream answers. The technical term is “hallucinations,” but I think they are close to dreams. I’ve come to suspect that this similarity between dreams and generative AI is not superficial, poetic, or coincidental. My unexpected hunch is that we’ll discover that the mechanism that generates dreams in our own heads is the same as (or very similar to) the one that current neural-net AIs use to generate text and images.

When I inspect my own dreams, I am struck by several things. One is that their creativity seems to be beyond me, as in, I don’t recognize it as something I could have thought of. This is very similar to the kind of synthetic creativity produced in a flash by the neural nets. Their creations are produced by the system itself rather than by individual will power or choice. When I am dreaming, I am receiving images and stories that are produced for me, not really by me. The same is true of generative AI, whose images go “beyond” the power of the prompt words and depend much more on the universe it has been trained on.

Secondly, dream images are often impressionistic, but yield details when given attention. In my dream, my brain produces child-like figures marching toward a school-building-ish structure along a road-ish image. There is just enough detail in these “things-ish” forms to suggest the thing. This is also like neural-net diffusion models, which produce things that resemble other things rather than an actual specific memory of a thing. When my dreaming mind focuses on some part of the picture, new details are produced on the spot. Greater detail is rendered only if needed, and often it is not needed. When they come, the rendered details are also impressionistic (despite their specificity) and not tied to anything real. This too is how neural nets work. Their incredibly specific results are like memories that are produced rather than recalled.

Finally, dreams seem realistic only in short spurts. Their details are almost hyperreal, as in current AI systems. But as our dreams proceed, they sway in their logic, quickly veering into surreal territory. One of the defining signatures of dreams is this dream logic: this unrealistic sequence of events, this alien disjuncture between cause and effect, which is 100% true of AI systems today. For short snips AIs are very realistic, but they quickly become surreal over any duration. A scene, a moment, a paragraph will be incredibly realistic, and the next moment too, by itself, but the connecting narrative between the pieces is absent, or absurd, and without realism. At any length, the AI stuff feels like dreams.

My conjecture is that they feel like dreams because our heads are using the same methods, the same algorithms, so to speak. Our minds, of course, are using wet neurons, in much greater numbers and with far more connections than a GPU cluster, but algorithmically they will be doing similar things.

It is possible that this whole apparatus of generation is actually required for perception itself. The “prompt” in ordinary sight may be the stream of data bits from the optic nerve in the eyeballs, which go on to generate the “vision” of what we see. The same algorithms that generate the hallucinations for AI art — and for human dreams — may also be the heavy-duty mechanisms we use to perceive (vs. just “see”). If that were so, then we’d need additional mechanisms to tamp down and tame the innate tendency of our visual system to hallucinate. That mechanism might be the constant flow of data from our senses, which keeps correcting the dream engine, like a steady stream of prompts. To be clearer, it may be that the perception engine in our eyes/mind is built very much like a generative AI engine. It throws up guesses, suggestions of chair-ish notions (this is a chair), which are checked against new data a half-second later (yes, more chair-like), second-guessed, and eventually confirmed, until everything in view shifts a full second later, when it regenerates another vision of what it is seeing.
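To make the loop concrete, here is a deliberately toy sketch of the idea, in Python. Everything here is hypothetical and illustrative, not a real model of vision or of any AI system: a free-running generator proposes a guess, and each tick of incoming sensory data either confirms the guess or corrects it, the way a steady stream of prompts might tame a dream engine.

```python
import random

def perceive(sense_stream, vocabulary):
    """Toy sketch of perception as a tamed dream engine.

    The generator starts by "dreaming" a free guess from its
    vocabulary; each sensory observation then either confirms
    the guess or forces a regeneration. All names and mechanics
    here are invented for illustration only.
    """
    guess = random.choice(vocabulary)  # the engine's first free-running guess
    for observation in sense_stream:
        if observation == guess:
            continue              # "yes, more chair-like": guess confirmed
        guess = observation       # sensory data corrects the hallucination
    return guess

# With a steady sensory stream, the loop converges on what is
# actually there, whatever the engine first dreamed up:
print(perceive(["chair", "chair", "chair"], ["chair", "table", "dog"]))
```

Cut off the `sense_stream` (as in sleep) and the function simply returns whatever the generator dreamed first, which is the inversion the essay is pointing at.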

During waking moments, with the full river of data from all our senses, plus the oversight of our conscious attention, the tendency of the generative engine to hallucinate is kept in check. But during the night, when the prompting from the senses diminishes, the dreams take over with a different kind of prompt, which may simply be the points where our subconscious is paying attention. The generative algos produce these lavish images, sounds, and stories that in some way regenerate in response to our subconscious attention.

Neuroscientist David Eagleman has a theory that the evolutionary purpose of dreaming is to protect our visual apparatus. Our brains are so plastic and malleable that their processing power can be quickly taken over by other brain functions. So if the huge visual/auditory department closed down at night, for a third of the day, other brain functions would begin to colonize this unused resource. To prevent that hijacking, the brain keeps its sensory department busy 24/7 by running dreams. That keeps it occupied and fully staffed for daytime.

A generative perception dream engine is the flip of this. Instead of a sensory engine that is allowed to dream at night to keep it robust, I suggest that the default state of this engine is to dream, and that it is managed during the day so it does not hallucinate. To dream, then, is not a higher-order function but the most primeval one, refined only by more sophisticated functions that align it with reality. (This will also be the developmental path of AI: to go from Deepdream and hallucinations to reliable perception and answers.)

A corollary of this theory — that dreaming is the raw state of perception — is that all animals with eyeballs will dream. Without language they will not have access to their dreams the same way, but dream they would. A second corollary of this dream inversion theory would be that as AIs become more complex and sophisticated, able to perceive in ways we humans can’t, they will retain the tendency to hallucinate at their very core. The dreaminess of AI won’t go away; it will just be educated, compensated, managed, and suppressed toward rationality and realism.


© 2023