Making the Inevitable Obvious
Weekly Links, 09/08/2023
- Embrace AI’s weirdness for maximum benefit says the best teacher of AI to date, @emollick Embracing weirdness: What it means to use AI as a (writing) tool
- One way to improve the study of history and make it more useful, more reliable, is to make sure each history document includes all its raw data, as we might require of every science paper. The full argument: Age of Invention: Does History have a Replication Crisis?
- Read this fantastic piece by @StevenLevy in @WIRED about the inside view of OpenAI. Brockman says, “It’s nice to have something that could be a virtual assistant. But that’s not the dream. The dream is to help us solve problems we can’t.” What OpenAI Really Wants
- The next big fight (later this decade) will be around genetic selection, which is beginning to happen now. Here is an argument in favor of it. Embryo Selection: Healthy Babies vs Bad Arguments
- Sign of the future: An AI that moderates chat during multiplayer games like Call of Duty to monitor toxic language. Call of Duty enlists AI to eavesdrop on voice chat and help ban toxic players starting today
- In our zest to accelerate the rate of innovation, many new kinds of research institutions are emerging to try slightly different approaches. This is a catalog of current creative research institutions. https://arbesman.net/overedge/
- Not the first, and not the last, to send art into orbit. It’s great they are launching contemporary art to other worlds. The ‘Lunar Codex’ Is Sending Works from More than 30,000 Artists to the Moon
Weekly Links, 09/01/2023
- I was just reminded that 15 years ago I wrote a short piece on a way to do Very Long Term Backups. Still seems the best way. Very Long-Term Backup
Weekly Links, 08/18/2023
- I am with David Brooks in believing moral character development is the most vital element in any civilization, and our current task. My book of advice concurs. His argument: How America Got Mean
- A bold, radical, unproven idea: “gravity may be quantum entanglement in disguise”. That would make all the particles in the universe a single unified quantum object. Rethinking reality: Is the entire universe a single quantum object?
- 13 years ago I crowdsourced a list of the best magazine articles ever. Still a great list (but does not include anything written in the last 13 years). The Best Magazine Articles Ever
Weekly Links, 08/04/2023
- TIL: 18th century English aristocrats kept hermits on their country estates. The hermit’s main job was to be silently picturesque, and thus to delight visitors. “By 1750, if you only put in one structure in your garden, it would have been a hermitage” Ornamental Hermits Were 18th-Century England’s Must-Have Garden Accessory
- No one pays as much attention to the wisdom found in books and to a well-examined life as Maria Popova (@brainpicker) does. So it is an honor to be included in her library. Excellent Advice for Living: Kevin Kelly’s Life-Tested Wisdom He Wished He Knew Earlier
Weekly Links, 07/28/2023
- Learning how to collaborate with the AIs is an essential skill. Early work on the best process for collaborating with a medical AI, so that it optimizes what the human doctor does and what the AI does in order to achieve the best clinical results. Developing reliable AI tools for healthcare
- This is the future now: Pay with your palm. Amazon One palm payment technology is coming to all 500+ Whole Foods Market stores in the U.S.
Weekly Links, 07/21/2023
Weekly Links, 07/14/2023
Weekly Links, 06/30/2023
Weekly Links, 06/23/2023
Dreams are the Default for Intelligence
I have a proto-theory: that our brains tend to produce dreams at all times, and that during waking hours, our brains tame the dream machine into perception and truthiness. At night, we let it run free to keep those brain areas occupied. The foundational mode of intelligence is therefore dreaming.
Here’s how I got there: For a while I’ve been intensely exploring generative AI systems, creating both text and visual images almost daily, and I am increasingly struck by their similarity to dreams. The AIs seem to produce dream images and dream stories and dream answers. The technical term is “hallucinations,” but I think they are close to dreams. I’ve come to suspect that this similarity between dreams and generative AI is not superficial, poetic, or coincidental. My unexpected hunch is that we’ll discover that the mechanism that generates dreams in our own heads is the same as (or very similar to) the one that current neural-net AIs use to generate text and images.

When I inspect my own dreams, I am struck by several things. One is that their creativity seems to be beyond me, as in, I don’t recognize it as something I could have thought of. This is very similar to the kind of synthetic creativity produced in a flash by the neural nets. Their creations are produced by the system itself rather than by individual will power or choice. When I am dreaming, I am receiving images/stories that are produced for me, not really by me. Same with generative AI, which produces images via prompts that go “beyond” the power of the prompt words and depend much more on the universe it has been trained on.
Secondly, dream images are often impressionistic, but yield details when given attention. So in my dream my brain is producing child-like figures marching toward a school building-ish structure on a road-ish image. There is enough detail in “things-ish” to suggest the thing. This is also like NN diffusion models, which basically produce things that resemble other things rather than an actual specific memory of a thing. When my dream mind focuses on some part of that picture, the new details are produced on the spot. Greater details are rendered only if needed, and often they are not needed. When they come, the rendered details are also impressionistic (despite their details) and not specific to anything real. This, too, is how NNs work. Their incredibly specific results are like memories that are produced rather than recalled.
Finally, dreams seem realistic only in short spurts. Their details are almost hyperreal, as in current AI systems. But as our dreams proceed, they sway in their logic, quickly veering into surreal territory. One of the defining signatures of dreams is this dream logic, this unrealistic sequence of events, this alien disjuncture with cause and effect, which is 100% true of AI systems today. For short snips AIs are very realistic, but they quickly become surreal over any duration. A scene, a moment, a paragraph, will be incredibly realistic, and the next moment too, by itself, but the consecutive narrative between the pieces is absent, or absurd, and without realism. At any length, the AI stuff feels like dreams.
My conjecture is that they feel like dreams because our heads are using the same methods, the same algorithms, so to speak. Our minds, of course, are using wet neurons, in much greater numbers and connections than a GPU cluster, but algorithmically, they will be doing similar things.
It is possible that this whole apparatus of generation is actually required for perception itself. The “prompt” in ordinary sight may be the stream of data bits from the optic nerve in the eyeballs, which goes on to generate the “vision” of what we see. The same algorithms which generate the hallucinations for AI art — and for human dreams — may also be the heavy-duty mechanisms that we use to perceive (vs. just “see”). If that were so, then we’d need additional mechanisms to tamp down and tame the innate tendency of our visual system to hallucinate. That mechanism might be the constant source of data from our senses, which keeps correcting the dream engine, like a steady stream of prompts. To be clearer, it may be that the perception engine in our eyes/mind is built very much like a generative AI engine. It is throwing up guesses, suggestions, of chair-ish notions (this is a chair), which are then checked against themselves a half-second later (yes, more chairlike), second-guessed and eventually confirmed, until everything in view shifts a full second later, when it regenerates another vision of what it is seeing.
During waking moments, with the full river of data from all our senses, plus the oversight of our conscious attention, the tendency of the generative engine to hallucinate is kept in check. But during the night, when the prompting from the senses diminishes, the dreams take over with a different kind of prompt, which may simply be the points where our subconscious is paying attention. The generative algos produce these lavish images, sounds, and stories that in some way regenerate in response to our subconscious attention.
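The generate-then-correct loop described above can be sketched in code. This toy Python sketch is my own illustration, not anything from the essay or from real neuroscience: the function names, the one-dimensional “sense data,” and all the numbers are invented. A `dream_engine` drifts freely; a `perceive` loop tames each of its guesses with a stream of sensory readings. Turn the correction off and the very same engine dreams.

```python
import random

def dream_engine(guess):
    """Freely drift from the current guess -- pure, unconstrained generation."""
    return guess + random.uniform(-1.0, 1.0)

def perceive(sense_stream, steps_per_input=3, correction=0.5):
    """Run the dream engine, nudging each guess toward incoming sense data."""
    guess = 0.0
    trace = []
    for reading in sense_stream:
        for _ in range(steps_per_input):
            guess = dream_engine(guess)              # generate a guess
            guess += correction * (reading - guess)  # tame it with sense data
        trace.append(guess)
    return trace

# Awake: a steady stream of sense data keeps the guesses anchored near 10.0.
awake = perceive([10.0] * 20)

# Asleep: with the correction turned off, the engine runs free -- it dreams.
asleep = perceive([0.0] * 20, correction=0.0)
```

The only difference between the “awake” and “asleep” runs is whether the corrections are applied; the generator itself never changes, which is the inversion the essay argues for.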
Neuroscientist David Eagleman has a theory that the evolutionary purpose of dreaming is to protect our visual apparatus. Our brains are so plastic and malleable that their processing power can be quickly taken over by different brain functions. So if the huge visual/auditory department closed down at night, or 1/3 of the day, other brain functions would begin to colonize this resource that was not being used. To prevent that hijacking, the brain keeps its sensory department busy 24/7 by running dreams. That keeps it occupied and fully staffed for daytime.
A generative perception dream engine is the flip of this. Instead of a sensory engine that is allowed to dream at night to keep it robust, I suggest that the default state of this engine is to dream, and that it is managed during the day to not hallucinate. To dream, then, is not a higher-order function, but the most primeval one, which is only refined by more sophisticated functions that align it with reality. (This will also be the developmental path of AI: to go from Deepdream and hallucinations to reliable perception and answers.)
A corollary of this theory — that dreaming is the raw state of perception — is that all animals with eyeballs will dream. Without language they will not have access to their dreams the same way, but dream they will. A second corollary of this dream inversion theory would be that as AIs become more complex and sophisticated, able to perceive in ways we humans can’t, they would retain the tendency to hallucinate at their very core. The dreaminess of AI won’t go away; it will just be educated, compensated, managed, and suppressed toward rationality and realism.