
Weekly Links, 10/03/2025
- AI agents like to post and do better if they blog: We Built Social Media for Agents and They Won't Stop Posting

Paying AIs to Read My Books
Some authors have it backwards. They believe that AI companies should pay them for training AIs on their books. But I predict in a very short while, authors will be paying AI companies to ensure that their books are included in the education and training of AIs. The authors (and their publishers) will pay in order to have influence on the answers and services the AIs provide. If your work is not known and appreciated by the AIs, it will be essentially unknown.
Recently, the AI firm Anthropic agreed to pay book authors a collective $1.5 billion as a penalty for making an illegal copy of their books. Anthropic had been sued by some authors for using a shadow library of 500,000 books that contained digital versions of their books, all collected by renegade librarians with the dream of making all books available to all people. Anthropic had downloaded a copy of this outlaw library in anticipation of using it to train their LLMs, but according to court documents, they did not end up using those books for training the AI models they released. Even if Anthropic did not use this particular library, they used something similar, and so have all the other commercial frontier LLMs.
However, the judge penalized them for making an unauthorized copy of the copyrighted books, whether or not they used them, and the authors of all the copied books were awarded $3,000 per book in the library.
The court administrators in this case, called Bartz et al v. Anthropic, have released a searchable list of the affected books on a dedicated website. Anyone can search the database to see if a particular book or author is included in this pirate library, and of course, whether they are due compensation. My experience with class action suits like this is that very rarely does award money ever reach people on the street. Most of the fees are consumed by the lawyers on all sides. I notice that in this case, only half of the amount paid per book is destined to actually go to the author. The other 50% goes to the publishers. Maybe. And if it is a textbook, good luck with getting anything.
I am an author so I checked the Anthropic case list. I found four out of my five books published in New York included in this library. I feel honored to be included in a group of books that can train AIs that I now use every day. I feel flattered that my ideas might be able to reach millions of people through the chain of thought of LLMs. I can imagine some authors feeling disappointed that their work was not included in this library.
However, Anthropic claims it did not use this particular library for training their AIs. They may have used other libraries and those libraries may or may not have been “legal” in the sense of having been paid for. The legality of using digitized books for anything is still in dispute. For example, Google digitizes books for search purposes, but only shows small snippets of the book as the result. Can they use the same digital copy they have already made for training AI purposes? The verdict in the Bartz v. Anthropic case was that, yes, using a copy of a book for training AI is fair use, if it was obtained in a fair way. Anthropic was penalized not for training AI on books, but for having in its possession a copy of the books it had not paid for.
This is just the first test case of what promises to be many more tests in the future as it is clear that copyright law is not adequate to cover this new use of text. Protecting copies of text – which is what copyright provisions do – is not really pertinent to learning and training. AIs don’t need to keep a copy; they just have to read it once. Copies are immaterial. We probably need other types of rights and licenses for intellectual property, such as a Right of Reference, or something like that. But the rights issue is only a distraction from the main event, which is the rise of a new audience: the AIs.
Slowly, we’ll accumulate some best practices regarding what is used to train and school AIs. The curation of the material used to educate the AI agents giving us answers will become a major factor in deciding whether we use and rely on them. There will be a minority of customers who want the AIs to be trained with material that aligns with their political bent. Devout conservatives might want a conservatively trained AI; it will give answers to controversial questions in the manner they like. Devout liberals will want one trained with a liberal education. The majority of people won’t care; they just want the “best” answer or the most reliable service. We do know that AIs reflect what they were trained on, and that they can be “fine-tuned” with human intervention to produce answers and services that please their users. There is a lot of research in reinforcing their behavior and steering their thinking.
Half a million books sounds like a lot of books to learn from, but there are millions and millions of books in the world already that the AIs have not read because their copyright status is unclear or inconvenient, or they are written in lesser-used languages. AI training is nowhere near done. Shaping this corpus of possible influences will become a science and art in itself. Someday AIs will have really read all that humans have written. Having only 500,000 books forming your knowledge base will soon be seen as quaint, but it also suggests how impactful it can be to be included in that small selection, and that makes inclusion a prime reason why authors will want AIs to be trained on their works now.
The young and the earliest adopters of AI have it set to always-on mode; more and more of their intangible life goes through the AI, and no further. As the AI models become more and more reliable, the young are accepting the conclusions of the AI. I find something similar in my own life. I long ago stopped questioning a calculator, then stopped questioning Google, and now find that most answers from current AIs are pretty reliable. The AIs are becoming the arbiters of truth.
AI agents are used not just to give answers but to find things, to understand things, to suggest things. If the AIs do not know about it, it is equivalent to it not existing. It will become very hard for authors who opt out of AI training to make a dent. There are authors and creators today who do not have any digital presence at all; you cannot find them online; their work is not listed anywhere. They are rare and a minority. As Tim O’Reilly likes to say, the challenge today for most creators is not piracy (illegal copies) but obscurity. I will add, the challenge for creators in the future will not be imitation (AI copy) but obscurity.
If AIs become the arbiters of truth, and if what they are trained on matters, then I want my ideas and creative work to be paramount in what they see. I would very much like my books to be the textbooks for AI. What author would not? I would. I want my influence to extend to the billions of people coming to the AIs every day, and I might even be willing to pay for that, or to at least do what I can to facilitate the ingestion of my work into the AI minds.

Another way to think of this is that in this emerging landscape, the audience for books – especially non-fiction books – has shifted away from people towards AI. If you are writing a book today, you want to keep in mind that you are primarily writing it for AIs. They are the ones who are going to read it the most carefully. They are going to read every page word by word, and all the footnotes, and all the endnotes, and the bibliography, and the afterword. They will also read all your books and listen to all your podcasts. You are unlikely to have any human reader read it as thoroughly as the AIs will. After absorbing it, the AIs will do that magical thing of incorporating your text into all the other text they have read, of situating it, of placing it among all the other knowledge of the world – in a way no human reader can do.
Part of the success of being incorporated by AIs is how well the material is presented for them. If a book can be more easily parsed by an AI, its influence will be greater. Therefore many books will be written and formatted with an eye on their main audience. Writing for AIs will become a skill like any other, and something you can get better at. Authors could actively seek to optimize their work for AI ingestion, perhaps even collaborating with AI companies to ensure their content is properly understood and integrated. The concept of "AI-friendly" writing, with clear structures, explicit arguments, and well-defined concepts, will gain prominence, and of course will be assisted by AI.
Every book, song, play, movie we create is added to our culture. Libraries are special among human inventions. They tend to get better the older they get. They accumulate wisdom and knowledge. The internet is similar in this way, in that it keeps accumulating material and has never crashed, or had to restart, since it began. AIs are very likely similar to these exotropic systems, accumulating endlessly without interruption. We don’t know for sure, but they are liable to keep growing for decades if not longer. At the moment their growth seems open ended. What they learn today, they will probably continue to know, and their impact today will have compounding influence in the decades to come. Influencing AIs is among the highest leverage activities available to any human being today, and the earlier you start, the more potent.
The value of an author's work will not just be in how well it sells among humans, but in how deeply it has been embedded within the foundational knowledge of these intelligent memory-based systems. That potency will be what is boasted about. That will be an author’s legacy.

The Periodic Table of Cognition
I’ve been studying the early history of electricity’s discovery as a map for our current discovery of artificial intelligence. The smartest people alive back then, including Isaac Newton, who may have been the smartest person who ever lived, had confident theories about electricity’s nature that were profoundly wrong. In fact, despite the essential role of electrical charges in the universe, everyone who worked on this fundamental force was profoundly wrong for a long time. All the pioneers of electricity — such as Franklin, Wheatstone, Faraday, and Maxwell — had a few correct ideas of their own (not shared by all) mixed in with notions that mostly turned out to be flat out misguided. Most of the discoveries about what electricity could do happened without the knowledge of how they worked. That ignorance, of course, drastically slowed down the advances in electrical inventions.
In a similar way, the smartest people today, especially all the geniuses creating artificial intelligence, have theories about what intelligence is, and I believe all of them (me too) will be profoundly wrong. We don’t know what artificial intelligence is in large part because we don’t know what our own intelligence is. And this ignorance will later be seen as an impediment to the rate of progress in AI.
A major part of our ignorance stems from our confusion about the general category of either electricity or intelligence. We tend to view both electricity and intelligence as coherent elemental forces along a single dimension: you either have more of it or less. But in fact, electricity turned out to be so complicated, so complex, so full of counterintuitive effects that even today it is still hard to grasp how it works. It has particles and waves, and fields and flows, composed of things that are not really there. Our employment of electricity exceeds our understanding of it. Understanding electricity was essential to understanding matter. It wasn’t until we learned to control electricity that we were able to split water — which had been considered an element — into its actual elements; that discovery taught us that water was not a foundational element, but a derivative compound made up of sub-elements.
It is very probable we will discover that intelligence is likewise not a foundational singular element, but a derivative compound composed of multiple cognitive elements, combined in a complex system unique to each species of mind. The result that we call intelligence emerges from many different cognitive primitives such as long-term memory, spatial awareness, logical deduction, advance planning, pattern perception, and so on. There may be dozens of them, or hundreds. We currently don’t have any idea of what these elements are. We lack a periodic table of cognition.
The cognitive elements will more resemble the heavier elements in being unstable and dynamic. Or a better analogy would be to the elements in a biological cell. The primitives of cognition are flow states that appear in a thought cycle. They are like molecules in a cell, which are in constant flux, shifting from one shape to another. Their molecular identity is related to their actions and interactions with other molecules. Thinking is a collective action that happens in time (like temperature in matter), and every mode can only be seen in relation to the other modes before and after it. It is a network phenomenon, which makes it difficult to identify its borders. Each element of intelligence is embedded in a thought cycle and requires the other elements as part of its identity, so each cognitive element can only be described in the context of the cognitive modes adjacent to it.

I asked ChatGPT5Pro to help me generate a periodic table of cognition given what we collectively know so far. It suggests 49 elements, arranged in a table so that related concepts are adjacent. The columns are families, or general categories of cognition such as “Perception”, “Reasoning”, “Learning”, so all the types of perception or reasoning are stacked in one column. The rows are sorted by stages in a cycle of thought. The earlier stages (such as “sensing”) are at the top, while later stages in the cycle (such as “reflect & align”) are at the bottom. So for example, in the family or category of “Safety” the AIs will tend to do the estimation of uncertainty first, later do verification, and only get to a theory of mind at the end.
The chart is colored according to how much progress we’ve made on each element. Red indicates we can synthesize that element in a robust way. Orange means we can kind of make it work with the right scaffolding. Yellow reflects promising research without operational generality yet.
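To make the structure concrete, here is a minimal sketch, in code, of how such a chart could be represented — columns as families, rows as stages of the thought cycle, and a color for maturity. The specific element names, families, stages, and maturity labels below are hypothetical placeholders, not the actual entries from the generated table.

```python
# A minimal sketch of a "periodic table of cognition" as a data structure.
# Every entry below is a hypothetical placeholder, not one of the actual 49 elements.
from dataclasses import dataclass

@dataclass
class CognitiveElement:
    name: str      # the cognitive primitive
    family: str    # column: general category ("Perception", "Reasoning", "Safety", ...)
    stage: str     # row: stage in the thought cycle ("sensing" ... "reflect & align")
    maturity: str  # color: "robust", "scaffolded", or "research"

table = [
    CognitiveElement("pattern perception", "Perception", "sensing", "robust"),
    CognitiveElement("logical deduction", "Reasoning", "deliberation", "scaffolded"),
    CognitiveElement("uncertainty estimation", "Safety", "sensing", "scaffolded"),
    CognitiveElement("theory of mind", "Safety", "reflect & align", "research"),
]

# Group by family to reconstruct the columns of the chart.
for family in sorted({e.family for e in table}):
    members = [e.name for e in table if e.family == family]
    print(family, "->", members)
```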
I suspect many of these elements are not as distinct as shown here (taxonomically I am more of a lumper than a splitter), and I would expect this collection omits many types we are soon to discover, but as a start, this prototype chart serves its purpose: it reveals the complexity of intelligence. It is clear intelligence is compounded along multiple dimensions. We will engineer different AIs to have different combinations of different elements in different strengths. This will produce thousands of types of possible minds. We can see that even today different animals have their own combination of cognitive primitives, arranged in a pattern unique to their species’ needs. In some animals some of the elements — say long-term memory — may exceed our own in strength; of course they lack some elements we have.
With the help of AI, we are discovering what these elements of cognition are. Each advance illuminates a bit of how minds work and what is needed to achieve results. If the discovery of electricity and atoms has anything to teach us now, it is that we are probably very far from having discovered the complete set of cognitive elements. Instead we are at the stage of believing in ethers, instantaneous action, and phlogiston – a few of the incorrect theories of electricity the brightest scientists believed.
Almost no thinker, researcher, experimenter, or scientist at that time could see the true nature of electricity, electromagnetism, radiation and subatomic particles, because the whole picture was hugely unintuitive. Waves, force fields, particles of atoms did not make sense (and still do not make common sense). It required sophisticated mathematics to truly comprehend it, and even after Maxwell described it mathematically, he found it hard to visualize.
I expect the same from intelligence. Even after we identify its ingredients, the emergent properties they generate are likely to be obscure and hard to believe, hard to visualize. Intelligence is unlikely to make common sense.
A century ago, our use of electricity ran ahead of our understanding of it. We made motors from magnets and coiled wire without understanding why they worked. Theory lagged behind practice. As with electricity, our employment of intelligence exceeds our understanding of it. We are using LLMs to answer questions or to code software without having a theory of intelligence. A real theory of intelligence is so lacking that we don’t know how our own minds work, let alone the synthetic ones we can now create.
The theory of the atomic world needed the knowledge of the periodic table of elements. You had to know all (or at least most) of the parts to make falsifiable predictions of what would happen. The theory of intelligence requires knowledge of all the elemental parts, which we have only slowly begun to identify, before we can predict what might happen next.

The Trust Quotient (TQ)

Wherever there is autonomy, trust must follow. If we raise children to go off on their own, they need to be autonomous and we need to trust them. (Parenting is a school for learning how to trust.) If we make a system of autonomous agents, we need lots of trust between agents. If I delegate decisions to an AI, I then have to trust it, and if that AI relies on other AIs, it must trust them. Therefore we will need to develop a very robust trust system that can detect, verify, and generate trust between humans and machines, and more importantly between machines and machines.
Applicable research in trust follows two directions: understanding better how humans trust each other, and applying some of those principles, in an abstract way, to mechanical systems. Technologists have already created primitive trust systems to manage the security of data clouds and communications. For instance, should this device be allowed to connect? Can it be trusted to do what it claims it can do? How do we verify its identity, and its behavior? And so on.
So far these systems are not dealing with adaptive agents, whose behaviors and IDs and abilities are far more fluid, opaque, shifting, and also more consequential. That makes trusting them more difficult and more important.
Today when I am shopping for an AI, accuracy is the primary quality I am looking for. Will it give me correct answers? How much does it hallucinate? These qualities are proxies for trust. Can I trust the AI to give me an answer that is reliable? As AIs start to do more, to go out into the world to act, to make decisions for us, their trustworthiness becomes crucial.
Trust is a broad word that will be unbundled as it seeps into the AI ecosystem. Part security, part reliability, part responsibility, and part accountability, these strands will become more precise as we synthesize and measure them. Trust will be something we’ll be talking a lot more about in the coming decade.
As the abilities and skills of AI begin to differentiate – some are better for certain tasks than others – reviews of them will begin to include their trustworthiness. Just as other manufactured products have specs that are advertised – such as fuel efficiency, or gigabytes of storage, pixel counts, or uptime, or cure rates – so the vendors of AIs will come to advertise the trust quotient of their agents. How reliably reliable are they? Even if this quality is not advertised it needs to be measured internally, so that the company can keep improving it.
When we depend on our AI agents to book vacation tickets, renew our drug prescriptions, or get our car repaired, we will be placing a lot of trust in them. It is not hard to imagine occasions where an AI agent could be involved in a life or death decision. There may even be legal liability consequences for how much we can expect to trust AI agents. Who is responsible if the agent screws up?
Right now, AIs own no responsibilities. If they get things wrong, they make no guarantee to fix them. They take no responsibility for the trouble they may cause with their errors. In fact, this is currently the key difference between human employees and AI workers. The buck stops with the humans. They take responsibility for their work; you hire humans because you trust them to get the job done right. If it isn't done right, they redo it, and they learn how not to make that mistake again. Not so with current AIs. This makes them hard to trust.
AI agents will form a network, a system of interacting AIs, and that system can assign a risk factor for each task. Some tasks, like purchasing airline tickets, or assigning prescription drugs, would have risk scores reflecting potential negative outcomes vs positive convenience. Each AI agent itself would have a dynamic risk score depending on what its permissions were. Agents would also accumulate trust scores based on their past performances. Trust is very asymmetrical; it can take many interactions over a long time to build, but it can be lost instantly, with a single mistake. The trust scores would be constantly changing, and tracked by the system.
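As a rough illustration of that asymmetry, here is a minimal sketch of how a trust score might be updated — assuming a simple scheme where each success nudges the score up a little and a single failure knocks a large share of it away. The constants and function name are hypothetical, not drawn from any existing agent system.

```python
# A minimal sketch of an asymmetric trust score: slow to earn, fast to lose.
# All constants and names here are illustrative assumptions.

def update_trust(score: float, success: bool,
                 gain: float = 0.01, penalty: float = 0.5) -> float:
    """Return a new trust score in [0, 1] after one interaction."""
    if success:
        # Each good interaction closes only a small fraction of the gap to 1.0.
        score += gain * (1.0 - score)
    else:
        # A single failure wipes out a large share of accumulated trust.
        score *= (1.0 - penalty)
    return max(0.0, min(1.0, score))

score = 0.5
for _ in range(200):                 # two hundred successful interactions...
    score = update_trust(score, success=True)
print(round(score, 3))               # trust has slowly crept up toward 1.0

score = update_trust(score, success=False)   # ...and one mistake
print(round(score, 3))               # a large chunk of that trust is gone
```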
Most AI work will be done invisibly, as agent to agent exchanges. Most of the output generated by an average AI agent will only be seen and consumed by another AI agent, one of trillions. Very little of the total AI work will ever be seen or noticed by humans. The number of AI agents that humans interact with will be very few, although they will loom in importance to us. While the AIs we engage with will be rare statistically, they will matter to us greatly, and their trustworthiness will be paramount.
In order to win that trust from us, an outward-facing AI agent needs to connect with AI agents it can also trust, so a large part of its capability will be the skill of selecting and exploiting the most trustworthy AIs it can find. We can expect whole new scams, including fooling AI agents into trusting hollow agents, faking certificates of trust, counterfeiting IDs, and spoofing tasks. Just as in the internet security world, an AI agent is only as trustworthy as its weakest sub-agent. And since sub-tasks can be delegated many levels down, managing quality will be a prime effort for AIs.
Assigning correct blame for errors and rectifying mistakes also becomes a huge marketable skill for AIs. All systems – including the best humans – make mistakes. No system can be mistake-proof. So a large part of high trust is accountability in mending one’s errors. The most trusted agents will be those capable of fixing the mistakes they make (and trusted to do so), with sufficient smart power to make amends and get it right.
Ultimately the degree of trust we give to our prime AI agent — the one we interact with all day every day — will be a score that is boasted about, contested, shared, and advertised widely. In other domains, like a car or a phone, we take reliability for granted. But because AI is so much more complex and personal than other products and services in our lives today, the trustworthiness of AI agents will be crucial and an ongoing concern. Its trust quotient (TQ) may be more important than its intelligence quotient (IQ). Picking and retaining agents with high TQ will be very much like hiring and keeping key human employees.
However, we tend to avoid assigning numerical scores to humans. The AI agent system, on the other hand, will have all kinds of metrics we will use to decide which ones we want to help run our lives. The highest-scoring AIs will likely be the most expensive ones as well. There will be whispers of ones with nearly perfect scores that you can't afford. However, AI is a system that improves with increasing returns, which means the more it is used, the better it gets, so the best AIs will be among the most popular AIs. Billionaires use the same Google we use, and are likely to use the same AIs as us, though they might have intensely personalized interfaces for them. These, too, will need to have the highest trust quotients.
Every company, and probably every person, will have an AI agent that represents them inside the AI system to other AI agents. Making sure your personal rep agent has a high trust score will be part of your responsibility. It is a little bit like a credit score for AI agents. You will want a high TQ for yours, because some AI agents won’t engage with other agents that have low TQs. This is not the same thing as having a personal social score (like the Chinese are reputed to have). This is not your score, but the TQ score of your agent, which represents you to other agents. You could have a robust social score reputation, but your agent could be lousy. And vice versa.
In the coming decades of the AI era, TQ will be seen as more important than IQ.

Emotional Agents
Many people have found the intelligence of AIs to be shocking. This will seem quaint compared to a far bigger shock coming: highly emotional AIs. The arrival of synthetic emotions will unleash disruption, outrage, disturbance, confusion, and cultural shock in human society that will dwarf the fuss over synthetic intelligence. In the coming years the story headlines will shift from “everyone will lose their job” (they won’t) to “AI partners are the end of civilization as we know it.”
We can rationally process the fact that a computer could legitimately be rational. We may not like it, but we could accept the fact that a computer could be smart, in part because we have come to see our own brains as a type of computer. It is hard to believe they could be as smart as we are, but once they are, it kind of makes sense.
Accepting machine-made creativity is harder. Creativity seems very human, and it is in some ways perceived as the opposite of rationality, and so it does not appear to belong to machines, as rationality does.
Emotions are interesting because emotions clearly are not only found in humans, but in many, many animals. Any pet owner could list the ways in which their pets perceive and display emotions. Part of the love of animals is being able to resonate with them emotionally. They respond to our emotions as we respond to theirs. There are genuine, deep emotional bonds between human and animal.
Those same kinds of emotional bonds are coming to machines. We see glimmers of it already. Nearly every week a stranger sends me logs of their chats with an AI demonstrating how deep and intuitive they are, how well they understand each other, and how connected they are in spirit. And we get reports of teenagers getting deeply wrapped up with AI “friends.” This is all before any serious work has been done to deliberately embed emotions into the AIs.
Why will we program emotions into AIs? For a number of reasons:
First, emotions are a great interface for a machine. They make interacting with machines much more natural and comfortable. Emotions are easy for humans. We don’t have to be taught how to act; we all intuitively understand signals such as praise, enthusiasm, doubt, persuasion, surprise, and perplexity – which a machine may want to use. Humans use subtle emotional charges to convey non-verbal information, importance, and instruction, and AIs will use similar emotional notes in their instruction and communications.
Second, the market will favor emotional agents, because humans do. AIs and robots will continue to diversify, even as their basic abilities converge, and so their personalities and emotional character will become more important in choosing which one to use. If they are all equally smart, the one that is friendlier, or nicer, or a better companion, will get the job.
Third, a lot of what we hope artificial agents will do, whether they are software AIs or hard robots, will require more than rational calculations. It will not be enough that an AI can code all night long. We are currently overrating intelligence. To be truly creative and capable of innovation, to be wise enough to offer good advice, will require more than IQ. The bots need sophisticated emotional dynamics deeply embedded in their software.
Is that even possible? Yes.
There are research programs (such as those at MIT) going back decades figuring out how to distill emotions into attributes that can be ported over to machines. Some of this knowledge pertains to ways of visually displaying emotions in hardware, just as we do with our own faces. Other researchers have extracted ways we convey emotion with our voice, and even in words in a text. Recently we’ve witnessed AI makers tweaking how complimentary and “nice” their agents are because some users didn’t like their new personality, and some simply did not like the change in personality. While we can definitely program in personality and emotions, we don’t yet know which ones work best for a particular task.
Machines displaying emotions is only half of the work. The other half is the detection and comprehension of human emotions by machines. Relationships are two-way, and in order to truly be an emotional agent, a machine must get good at picking up your emotions. There has been a lot of research in that field, primarily in facial recognition – not just of your identity, but of how you are feeling. There are commercially released apps that can watch a user at their keyboard and detect whether they are depressed or undergoing emotional stress. The extrapolation of that will be smart glasses that not only look out, but at the same time look back at your face to parse your emotions. Are you confused, or delighted? Surprised, or grateful? Determined, or relaxed? Already, Apple’s Vision Pro has inward-facing cameras in its goggles that track your eyes and microexpressions such as blinks and eyebrow rises. Current text LLMs make no attempt to detect your emotional state, except what can be gleaned from the letters in your prompt, but it is not technically a huge jump to do that.
In the coming years there will be lots of emotional experiments. Some AIs will be curt and logical; some will be talkative and extroverted. Some AIs will whisper, and only talk when you are ready to listen. Some people will prefer loud, funny, witty AIs that know how to make them laugh. And many commercial AIs will be designed to be your best friend.
We might find that admirable for an adult, but scary for a child. Indeed, there are tons of issues to be wary of when it comes to AIs and kids, not just emotions. But emotional bonds will be a key consideration in children’s AIs. Very young children can already bond with, and become very close to, inert dolls and teddy bears. Imagine if a teddy bear talked back, played with infinite patience, and mirrored their emotions. As the child grows, it may never want to surrender the teddy. Therefore the quality of emotions in machines will likely become one of those areas where we have very different regimes, one for adults and one for children. Different rules, different expectations, different laws, different business models, etc.
But even adults will become very attached to emotional agents, very much like the movie Her. At first society will brand those humans who get swept up in AI love as delusional or mentally unstable. But just as most of the people who have deep love for a dog or cat are not broken, but well adjusted and very empathetic beings, so most of the humans that will have close relationships with AIs and bots will likewise see these bonds as wholesome and broadening.
The common fear about cozy relationships with machines is that they may be so nice, so smart, so patient, so available, so much more helpful than other humans around, that people will withdraw from human relationships altogether. That could happen. It is not hard to imagine well-intentioned people only consuming the “yummy easy friendships” that AIs offer, just as they are tempted to consume only the yummy easy calories of processed foods. The best remedy to counter this temptation is similar to fast food: education and better choices. Part of growing up in this new world will be learning to discern the difference between pretty perfect relationships and messy, difficult, imperfect human ones, and the value the latter give. To be your best — whatever your definition — requires that you spend time with humans!
Rather than ban AI relationships (or fast food), you moderate them and keep them in perspective. Because in fact, the “perfect” behavior of an AI friend, mentor, coach, or partner can be a great role model. If you surround yourself with AIs that have been trained and tweaked to be the best that humans can make, this is a fabulous way to improve yourself. The average human has very shallow ethics, and contradictory principles, and is easily swayed by their own base desires and circumstances. In theory, we should be able to program AIs to have better ethics and principles than the average human. In the same way, we can engineer AIs to be a better friend than the average human. Having these educated AIs around can help us to improve ourselves, and to become better humans. And the people who develop deep relationships with them have a chance to be the most well-adjusted and empathetic people of all.
The argument that the AIs’ emotions are not real because “the bots can’t feel anything” will simply be ignored. Just like the criticism of artificial intelligence being artificial and therefore not real because they don’t understand. It doesn’t matter. We don’t understand what “feeling” really means and we don’t even understand what “understand” means. These are terms and notions that are habitual but no longer useful. AIs do real things we used to call intelligence, and they will start doing real things we used to call emotions. Most importantly, the relationships humans will have with AIs, bots, and robots will be as real and as meaningful as any other human connection. They will be real relationships.
But the emotions that AIs/bots have, though real, are likely to be different. Real, but askew. AIs can be funny, but their sense of humor is slightly off, slightly different. They will laugh at things we don’t. And the way they will be funny will gradually shift our own humor, in the same way that the way they play chess and Go has now changed how we play those games. AIs are smart, but in an unhuman way. Their emotionality will be similarly alien, since AIs are essentially artificial aliens. In fact, we will learn more about what emotions fundamentally are from observing them than we have learned from studying ourselves.
Emotions in machines will not arrive overnight. The emotions will gradually accumulate, so we have time to steer them. They begin with politeness, civility, niceness. They praise and flatter us, easily, maybe too easily. The central concern is not whether our connection with machines will be close and intimate (they will), nor whether these relationships are real (they are), nor whether they will preclude human relationships (they won’t), but rather who does your emotional agent work for? Who owns it? What is it being optimized for? Can you trust it to not manipulate you? These are the questions that will dominate the next decade.
Clearly the most sensitive data about us would be information stemming from our emotions. What are we afraid of? What exactly makes us happy? What do we find disgusting? What arouses us? After spending all day for years interacting with our always-on agent, said agent would have a full profile of us. Even if we never explicitly disclosed our deepest fears, our most cherished desires, and our most vulnerable moments, it would know all this just from the emotional valence of our communications, questions, and reactions. It would know us better than we know ourselves. This will be a common refrain in the coming decades, repeated in both exhilaration and terror: “My AI agent knows me better than I know myself.”
In many cases this will be true. In the best-case scenario, we use this tool to know ourselves better. In the worst case, this asymmetry in knowledge will be used to manipulate us, and to expand our worst selves. I see no evidence that we will cease including AIs in our lives, hourly, if not by the minute. (There will be exceptions, like the Amish, who drop out, but they will be a tiny minority.) Most of us, for most of the time, will have an intimate relationship with an AI agent/bot/robot that is always on, ready to help us in any way it can, and that relationship will become as real and as meaningful as any other human connection. We will willingly share the most intimate hours of our lives with it. On average we will lend it our most personal data as long as the benefits of doing so keep coming. (The gate in data privacy is not really who has the data, but how much benefit do I get? People will share any kind of data if the benefits are great enough.)
Twenty-five years from now, if the people whose constant companion is an always-on AI agent are total jerks, misanthropic bros, and losers, this will be the end of the story for emotional AIs. On the other hand, if people with a close relationship with an AI agent are more empathetic than average, more productive, distinctly unique, well-adjusted, with a richer inner life, then this will be the beginning of the story.
We can steer the story to the beginning we want by rewarding those inventions that move us in that direction. The question is not whether AI will be emotional, but how we will use that emotionality.

Everything I Know about Self-Publishing
This essay is also available on my Substack. Subscribe here: https://kevinkelly.substack.com/
In my professional life, I’ve had several bestselling books published by New York publishers, as well as many other titles that sold modestly. I have also self-published a bunch of books, including one bestseller on Amazon and two massive hit Kickstarter-funded books. I have had lots of foreign edition books released by other publishers around the world, including bestsellers in those countries. Every year I also publish a few private books to give away. I've contracted books to be printed in the US and overseas. I've sold big coffee-table masterpieces and tiny text booklets. Together with partners, I run some notable newsletters, a very popular website, and a podcast with 420 episodes. I accumulated followers on various platforms. I'm often asked for advice about how to go about publishing today, with all its options, so here is everything I have learned about publishing and self-publishing so far.
The Traditional Route
The task: You create the material; then professionals edit, package, manufacture, distribute, promote, and sell the material. You make, they sell. At the appropriate time, you appear on a bookstore tour to great applause, to sign books and hear praise from fans. Also, the publishers will pay you even before you write your book. The advantages of this system are obvious: you spend your precious time creating, and all the rest of the chores will be done by people who are much better at those chores than you.
The downsides are also clear: Since the publisher controls the money, they control the edit, the title, the cover, the ads, the copyrights, and licenses. Your work becomes a community project, and it slows the whole process down, because yours is not the only project everyone is working on. Your work needs to fit into their lineup, their brand, their catalog, their pipeline, their schedule of all the other projects going on. The pace can seem glacial compared to the rest of the world.
For the most part, however, the peak of this traditional system is gone, finished, over. Reading habits have altered, buying habits are new, and attention has shifted to new media. It’s an entirely new publishing world. Today, some books experience some parts of this, but exceptionally few are treated to this full traditional process.
Publishers
Established mass-market publishers are failing, and they are merging to keep going. Traditional book publishers have lost their audience, which was bookstores, not readers. It’s very strange but New York book publishers do not have a database with the names and contacts of the people who buy their books. Instead, they sell to bookstores, which are disappearing. They have no direct contact with their readers; they don’t “own” their customers.
So when an author today pitches a book to an established publisher, the second question from the publishers after “what is the book about” is “do you have an audience?” Because they don’t have an audience. They need authors and creators to bring their own audiences. So the number of followers an author has, and how engaged they are, becomes central to whether the publisher will be interested in the project.
Many of the key decisions in publishing today come down to whether you own your audience or not.
Agents
In the traditional realm, agents helped authors and they helped publishers. Publishers did not want to waste their time evaluating probable junk, so they would spend their limited time looking at what agents presented to them. In theory, the agent would know the editors' preferences and know what they were interested in, and the editor could trust them to bring good stuff.
For the author, agents had the relationship with editors, would know who might be interested in their project, and the agent would guarantee that the legal contracts were favorable to the author, and most importantly, negotiate good terms. For this work, agents would take 15% off the top of any and all money coming from the publisher. For most authors, that is a significant amount of money.
Are agents worth it? In the beginning of a career, yes. They are a great way to connect with editors and publishers who might like your stuff, and for many publishers, this is the only realistic way to reach them. Are they worth it later? Probably, depending on the author. I do not enjoy negotiating, and I have found that an agent will ask for, demand, and get far more money than I would have myself, so I am fine with their cut. Are they essential? Can you make it in the traditional publishing world without an agent? Yes, but it is an uphill climb.
The problem is, how do you find a good agent? I don’t know. I inherited a great agent very early in my career from the publisher I first worked for, and I have happily been with them since. If I had to start from scratch now, I’d ask friends with agents who make stuff like my stuff to recommend theirs.
In self-publishing, you avoid agents and so keep that 15%.
Advances
What an agent will ask for from a publisher is a bunch of money upfront, when the contract is signed. This is the advance. You pitch a book, and if the editors accept it, they give you a deadline of a year or so to produce it. The role of the advance is to pay you a wage until the book is released, after which it will begin earning royalties for you. Royalties might be something like 7-10% of the retail price per book. The money you get on signing is technically an “advance against royalties.” Meaning that whatever they pay you in advance is deducted from your royalties, so you won’t be paid anything further beyond the advance until and unless the earnings of your royalties exceed the advance.
It is very common for authors to not earn anything beyond their advance. The calculation for the amount of the advance goes roughly like this: Let’s say you earn $1 royalty for every book sold. The publishers estimate they can sell 30,000 copies in the first year, and so they offer you an advance against future royalties of $30,000, or one year’s worth of sales. Obviously, many other factors go into this equation, but to a first approximation, the most you will get for an advance is based on what kind of sales they expect immediately.
The rule of thumb for an author is that you should get the biggest possible advance you can (and this is how an agent can help) – even if this means you won’t earn out the advance. The reason is: the bigger the advance, the bigger commitment the publisher must make in promotion, publicity, and sales. They now have significant skin in your game. Publishers are stretched thin, and their limited sales resources tend to go where they have the most to lose. If an advance is skimpy, so will be the resources allotted to that book.
BTW, you should not have concerns about taking a larger advance than you ever earn out, because a publisher will earn out your advance long before you do. They make more money per book than you do, so their earn-out threshold comes much earlier than the author’s.
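Here is that arithmetic as a back-of-the-envelope sketch, using the illustrative $1-per-copy royalty and $30,000 advance from above, plus an assumed publisher margin per copy (the $6 figure is a placeholder, not an industry number).

```python
# Back-of-the-envelope earn-out arithmetic with illustrative numbers.
advance = 30_000            # paid to the author on signing
author_royalty = 1.00       # author earns ~$1 per copy sold (example from the text)
publisher_margin = 6.00     # assumed publisher profit per copy (placeholder)

author_earn_out = advance / author_royalty        # copies sold before royalties resume
publisher_earn_out = advance / publisher_margin   # copies sold before the advance is recouped

print(f"Author earns out after  {author_earn_out:,.0f} copies")     # 30,000
print(f"Publisher recoups after {publisher_earn_out:,.0f} copies")  # 5,000
```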
Thus one of the advantages of this traditional system – of going with a publisher – is that they bankroll your project. They reduce a bit of your risk. Likewise, that is the genius of Kickstarter and other crowdfunders for self-publishing: the presales bankroll your project, reducing risk. Crowdfunding becomes the bank.
Crowdfunding
I’ve written a whole essay on my 1,000 True Fans idea, simplified thus: you don’t need a million fans to make self-publishing, or the self-creation of anything, work. If you control your audience – that is, if you have a direct relationship with your customers individually, having their names and emails, and can communicate with them directly – then it is possible to have as few as a thousand true fans support you. True fans are superfans who will buy anything and everything you produce. If you can produce enough for each true fan to spend $100 per year, you can make a living with 1,000 true fans. I go into this approach in greater detail in my essay first published in 2008, which you can read here.
Today there are many tools and platforms that cater to developing and maintaining your own audience. In addition to crowdfunders such as Indiegogo, Kickstarter, Backerkit, and dozens more, there are also tools for sustaining support with patrons, such as Patreon. Crowdfunders tend to be used at the launch of a project, while something like Patreon permits constant support, primarily for a creator rather than a particular project. These can be combined, of course. You could launch your self-published work with a Kickstarter, and then gather Patreon support for sequels, backstory and making-of material, future editions, or side projects. Periodic publications have subscriptions for ongoing support.

These days backers expect a video – and other marketing bits – selling the book. Pre-sales for a crowdfunding campaign have become very sophisticated and require a lot of preparation. The Kickstarter for my Asia photobook was relatively simple and crude.
The chief advantages of crowdfunding are three, and they are significant: 1) You can get the funds before you create in order to support you while you create. 2) You keep all of the revenue (minus 3-5% for the platform), unlike an outside publisher. And 3) You own the audience, for future work.
The disadvantages are also three. 1) It is a huge amount of work. Most crowdfunding campaigns run for 30 days, and tending one for 30 days is a full-time job. 2) Success requires a different set of talents – marketing, sales, social engagement – than a creator may have. 3) You are responsible for making sure your fans actually get what they were promised. This "fulfillment" aspect of crowdfunding is often overlooked until the end, when it turns out to be the most difficult part of the process for many creators.
Production
Once upon a time, it was a huge deal to design and physically print a book (or press a music album, or deliver a reel of film). Today those processes can be done by amateurs with little experience. And digital versions often make creating, duplicating, and distributing easier than ever.
There are three paths to production: traditional batch manufacturing; on-demand printing; digital publishing.
Batch Printing Presses
The traditional way of printing hardcover books still exists and it is a big business. A really first-rate printer will have different kinds of presses for different jobs – including the same fast digital printers as the on-demand printers. In fact, for some jobs, they will use these same digitally controlled ink-jet printers, just at a larger scale and speed. The chief advantages of classic printing on paper are three: scale, quality, and color.
Pages from my first photobook, Asia Grace, published by Taschen, stacked up in their printing plant in Verona, Italy. In the old days before presses were completely computerized, the art director for the book (me) would be present during printing to oversee the many color tradeoffs each signature of pages needed.
Scale: Books printed in larger volume batches, or “runs,” can win huge discounts on the price per copy. A regular-sized hardcover book printed in Asia might only cost a few dollars to print, and a few more dollars for packing and shipping to your home. That’s a great deal if you list a book at $30. The higher the volume, the lower the price per unit. Printing outside of Asia is more expensive, but still worth considering – if you think you want a lot of copies, say more than 5,000 to start.
Quality: There is a resurgence in considering a well-crafted book as an art object. By leaning into its physicality – adding an embossed cover, heavy rag paper, deckled edges, glorious binding – the book can transcend its intangible counterpart on the Kindle. You as a publisher can make a book a unique custom size, or with magnificent die-cut covers for added zest and higher prices. Some self-published authors offer handsome bound book sets, or books hand-signed via tipped-in sheets, or super high-end limited editions, cradled in their own box. All these kinds of qualities require a collaborating printer somewhere.
Color: On-demand printing can do color, but not oversized, and not cheaply. In my experience, serious coffee-table visual books still need the hand-holding and economics of a printing plant. And I regret to say that after many years of searching, I have not found a printer in the US capable of doing large full-color books at a reasonable price. You are most likely going to have to go overseas, to countries such as Vietnam, Indonesia, Singapore, India, or Turkey. China still has the best prices with the highest quality of color printing.
The disadvantages of having your book printed at a printing plant are, #1: you have to house and store the books somewhere. Either you have an available basement or garage, or you rent a place, or you hire a dropshipper, or you pay a distribution giant like Ingram or Amazon to handle it for you. The full run of a book can take up more room than you might think once the copies are packaged up for shipping. My Vanishing Asia book set, financed on Kickstarter and printed in Turkey, filled up 4 shipping containers, each 40 feet long! That is a LOT of books to store.
Disadvantage #2 is that you need to pay the printer first, long before you sell the books. Not only is this a cash flow challenge, but you have to guess how many books you will sell before they are sold. (Having the pre-sale on a crowdfunding platform like Kickstarter is a big help in relieving that problem.) To get the best price you need to print a lot, but if you print a lot, you have a lot to pay for and to store if they should not sell.
On-demand
You can use free software to design your book and then send it to an on-demand printer to make 1 copy or 1,000 copies, printed one by one as each copy is sold. The copy does not exist till it is sold, so there are no books to bank, store, or ship. An on-demand regular softcover book would cost about $5 to make. It will be professional quality, indistinguishable from a trade book you might buy on Amazon, in part because many of the books from big-time publishers you buy on Amazon are actually printed on-demand using this same technology. (Big-time publishers are also printing on demand!) However, while the ink printing is first class, the bindings, paper quality, cover details won't be up to what you can get with the best modern presses. What you'll get is the good-enough printing contained within the average hardcover book.
The advantage to a creator (and to NY publishers) is that there is no inventory of unsold books to store or handle. You print the book when, and only when, it is sold. The disadvantage is that the cost of printing is more per book.
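As a rough way to compare the two paths, here is a sketch of the break-even arithmetic, using made-up costs in the neighborhood of the figures above: a few dollars per copy for a batch run, about $5 per copy on demand, plus an assumed fixed overhead for the batch (setup, freight, storage). The numbers are placeholders, not printer quotes.

```python
# Rough break-even between a batch print run and on-demand printing.
# All costs are illustrative assumptions, not quotes from any printer.
batch_unit = 3.00        # per-copy cost in a large offshore run (ballpark)
batch_fixed = 4_000.00   # assumed setup, freight, and storage overhead
on_demand_unit = 5.00    # per-copy cost printed one at a time (ballpark)

def batch_cost(copies: int) -> float:
    return batch_fixed + batch_unit * copies

def on_demand_cost(copies: int) -> float:
    return on_demand_unit * copies

# Find the smallest run where the batch approach becomes cheaper overall.
copies = next(n for n in range(1, 100_000)
              if batch_cost(n) < on_demand_cost(n))
print(copies)  # ~2,001 copies with these made-up numbers
```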
There are four different services you can use to print on-demand books. My preferred color and photo/art book printer is Blurb, for quality and ease of use. They keep up with state-of-the-art color printing. You can design your book, export it as a PDF, and have Blurb print it on-demand. Or you can use Blurb’s own web-based design program, or a version of its software built into Adobe’s Lightroom, which is pretty standard for photographers. It’s very simple to go from photographs to a well-designed, printed book.
Sample pages from the various coffee-table books I have had printed on demand by Blurb. Some of these books have editions of only 2 copies; however, the quality of the color printing is first class.
A second option for on-demand printing of standard books with black and white texts, as well as books with color illustration, is Lulu. Their photo/art books are a bit cheaper than Blurb. They are very competitive with standard text-based books. Most importantly, Lulu integrates with your own customer list, so you own your audience.
That is not true of the third option, which I also use a lot: Amazon. Amazon offers its print-on-demand service, called KDP, to anyone who wants it, with the added huge advantage that your book will be not only listed on Amazon immediately, but also delivered by Amazon’s magical logistical Prime operation. So potential fans can discover your work on Amazon and then have it delivered to them the next day for free. This is huge! But the huge and sometimes deal-killing disadvantage is that you do not know who your readers are, as you do with Lulu. Although Amazon makes it ridiculously easy to create and sell a book, with them you don’t own your audience. But in some cases, that is still worth it.
The fourth option is IngramSpark. I have little direct experience with this vendor, but others who do claim it is the best choice for text-based books aimed at libraries and bookstores. Indie bookstores are doing much better than chain bookstores, and they usually avoid Amazon's distribution system; Ingram is their main vendor for getting books, as it is for libraries. In addition to getting your book into the Ingram distribution system, IngramSpark offers the self-publisher more options for book sizes, paper, and binding.

Because you can print as few as one copy of a book, I use these on-demand print services to manufacture prototype versions of a book to check for its sizing and feel. This small on-demand prototype of my book of advice was later published in a larger page size by Penguin/RandomHouse.
Digital
By far the easiest way to publish a book is to sell a digital copy of it. More authors should consider just publishing digital books. You still have to promote it, but you don’t have to print it, ship it, handle it, or store it. A commercial publisher might offer the author a royalty of 7% of the retail price, which is roughly $2 on a $30 book, so you may make just as much money per book selling it yourself for $2 in digital form. Creating digital books is a great way to start a publishing career. I have two friends who started publishing their science fiction stories as inexpensive digital short stories, which sold well, and then were later discovered by print publishers, made into printed books, and eventually turned into movies by Hollywood. And they still sell the digital versions!
You can sell an e-book – or even a chapter of a book – on Amazon's KDP. You can easily make a book for the Kindle. I've had some digital books up on Amazon KDP for the past ten years, and they continue to sell slowly, yet I have not had to do anything with them since they were uploaded. While the Kindle gives you a royalty of 70%, and accesses a large Amazon/Kindle audience, its downsides are that you don't own your audience, it demands exclusivity, and you must use their proprietary file format which removes any distinctive interior designs and prevents it from being read on other devices. There are dozens of other e-book readers and e-book platforms like Kobo, Apple Books, and Google Play Books who have different proprietary constraints. IngramSpark has an interesting hybrid program for e-book + on-demand printing.
These days a lot more people are comfortable reading a book in PDF form. I sell secure PDFs of some of my books on Gumroad, an easy-to-use web-based app that collects the payment and sends the buyer an authorized copy. Gumroad works fine, does not charge much, and is super easy to set up; it is perfect for the low digital sales volumes I have.
The digital publishing world is fast-moving, and I don't have enough recent experience with e-books to feel confident making finer-grained recommendations.
Audiobooks
Another important digital format I neglected to mention in the first version of this piece is audiobooks. For the past decade audiobooks have been the fastest-growing format for books — the one sunny spot in a worrisome landscape. There are many readers who only listen to books, and never read them in print. I don't have any experience in self-publishing audiobooks; all my mainstream publishers developed the audio versions with almost no input from me. But my friend and science fiction author Eliot Peper has self-published 9 audiobooks, sometimes hiring voice actors and, more recently, narrating them himself.
Currently the platform of choice for self-publishing audiobooks is ACX. ACX is a do-it-yourself platform run by Audible, the major audiobook platform, which is in turn owned by Amazon. It is a full-service platform with sound-quality checks, a million narrators you could hire, and other tools to make the process easy. They take a hefty royalty of 40% and demand exclusivity, but your book is listed on Audible; for many readers that is the only place they will ever look for audiobooks. Alternatives such as Spotify are expanding into audiobooks and may offer authors better deals.
Distribution
Digital is easy, but increasingly, the "difficulties" of analog books have become an attraction. Some readers gravitate to the tactile pleasures of a well-made artifact and revel in the physical chore of turning pages. Sometimes the content of a book demands a bigger interface than a small screen can provide, so it needs the oversize release of a large printed page. Some appreciate the longevity of paper books, which never go obsolete and can be read for centuries without a power source or updates. Others are attracted to the serendipity of browsing a bookstore. And some folks value the limited scarcity of a printed volume.
But once words are printed on a page you have to ship them somehow. On-demand printers like Amazon KDP, Lulu, or Blurb will ship the books directly to individual readers as they are ordered one by one. There is no inventory for you, and thus no work for the author. The ability of the on-demand printers to handle long mailing lists varies, and using them that way is a bit of work. Amazon has the best prices for shipping (zero) but the worst facilities for mailing to a list. They want readers to order from their own Amazon accounts; Amazon wants to control the audience. Blurb can only ship to customers who order on Blurb. Lulu lets you control your own fan list but charges a lot more to ship.
Cartons of my heavy oversized graphic novel, Silver Cord, pile up in my studio after being shipped in from China. I was unprepared for the chore of shipping the oversized books out to all the backers without damage.
Let’s say you want to go all in: you print the books yourself, and now you have to get them to your fans. There are three options. From the printing plant, the books will be shipped by truck to one of these destinations:
- Your garage. You purchase mailing envelopes or boxes and tape them up and mail them out. Plus: you can sign the books. Minus: tons of ongoing work, and not cheap to mail, even with Media Mail in the US. Shipping globally is a huge headache, and insanely expensive.
- A Drop Shipper, or what is today called a 3PL. For a fee, this kind of company will pick your book from your inventory in their warehouse, package it, and ship it out to your fan, or to a bookstore. You give them either a mailing list or access to your orders on Shopify or Kickstarter. Plus: no grunt work, no inventory at home. Minus: not cheap either, and they also charge a storage fee for holding your books, which could sit there for years. Commercial drop shippers also favor large-volume enterprises. I am currently trying out the dropshipper eFulfillment, which has low minimums and works with small-time operators like me.
- Amazon. You ship your books to an Amazon warehouse, and they fulfill the orders on Amazon as if you were any third-party merchant. You would be selling books just as other merchants sell toasters or toys. Plus: they handle everything, and they offer readers free shipping. Minus: they only order the number of books they expect to sell easily, so there can be a lag until sales start, and of course, they only handle books bought on Amazon. That can be fine. I did one book that was only available on Amazon, nowhere else — I did not have any copies myself — and it sold great, with zero distribution worries on my side.
Promotion
The short version: it is not hard to produce a book. It is much harder to find the audience for it and deliver the book to them. At least 50% of your energy will be devoted to selling the book. This is true whether you publish or self-publish.
A misconception about Kickstarter, Backerkit, and crowdfunding platforms like Patreon is to imagine that you will automatically find your audience there. It is almost the opposite. You won’t be able to have a successful crowdfunding campaign unless you bring the crowd with you. You must cultivate your audience BEFORE you ignite them on Kickstarter.
You won’t have time to build your audience during the fundraising period. The typical crowdfunding campaign lasts 30 days; that will be just long enough to entice backers with your work. That also means that for a month, it will be a full-time job for somebody to promote, advertise, and “convert” your audience to your book or project. That somebody is probably you. These days promotion includes making a short video announcing the launch, devising tiers of “rewards,” keeping up with status notifications of how the campaign is going, and doing everything else you can to promote it to new fans. If there is no one willing or able to give it a month, the effort will probably not reach your goals.
Because Kickstarter, Backerkit, IndieGoGo, Patreon, and other crowdfunders are platforms, there WILL be some people who discover your project there via referrals from similar projects – which is always a plus – but your dominant source will be the audience you accumulated earlier and brought with you. If your project has a likeable presence, the platforms can boost awareness of it if you are lucky, so getting featured on the front page is something to aim for, and it does help.
By now there is also a small industry of “growth” companies that plug into Kickstarter and its kin, and will help you run a crowdfunding campaign for a fee. I actually found that at least one of them was worth the fees they charged, which are now a 20% cut of the pledges from the additional backers they bring in. They were able to enlarge my campaign way beyond my circle of friends and my existing 1,000 true fans. If I were doing a crowdfunder for the first time, I’d use a growth company like Jellop, and I would partner with them from the very beginning.
Getting published by a New York publisher doesn’t get you off the hook. Even if you were to be published by a commercial publisher, you should expect to do serious promotion over the span of a month or more. In theory, the publisher would set up the promotion, or at least guide you through it, but that rarely happens anymore. Even with a commercial publisher releasing your work, you will end up doing the majority of whatever promotion gets done: planning, coordinating, executing, and even paying for book tours and the like. You will be the publicity department no matter what.
Traditionally the promotion of a new book entailed a book tour, with the author visiting larger bookstores where crowds of fans would purchase books, plus some advertisements for the book in magazines and newspapers, which would also review said book. Ideally this launch would also include appearances on TV talk shows, and maybe radio. None of this works anymore. There are no more paid book tours. Few, if any, book reviews in newspapers or magazines, or author appearances on TV. Fewer ads for books. If any of these do happen, they will be arranged and paid for by the author.
But because book tours have sort of disappeared, an enterprising author with an audience can arrange their own tour using their fan list. Many people crave a deeper connection to people they follow digitally, and these fans can fill a room. Recently Craig Mod, an unknown new author in bookland, arranged and paid for his own sell-out tour, astounding booksellers around the US who did not have enough of his books to sell (who is this guy?).
Instead, promotion for books has shifted online. There is BookTok, where fans talk up new and favorite books for viewers on social media. There are podcasts, where authors can be interviewed at length. In fact, my last two books – which got basically no reviews in publications – sold extremely well because I heavily promoted them on podcasts, big and small. I said yes to every podcast request from a show with more than 3 episodes. Even a small podcast audience is larger than the audience in most bookstores. And because podcasts can be niche and intimate, unlike say an appearance on TV, they sell books.
The rule of thumb in publishing is that how well a book sells in its first two weeks determines whether it is a bestseller or not. You want to concentrate most of those sales into pre-sales – either on a crowdfunding platform, on your own, or as pre-orders for a publisher. One way or another this promotion job will be your job, and it can end up being at least half of your total effort on a book.
Non-book publishing
Creating a book has become so easy that most books these days probably should not be books. Not every idea — or story — needs the long format of a book. In fact, few do. Instead, they should be a magazine article, a blog post, an op-ed, or a newsletter.
Subscriptions
While we have been long trained to pay for books, we have less of a habit of paying for shorter material, particularly in digital form. Printed magazines and newspapers are disappearing, and few survived the transition to full digital, so there are fewer and fewer opportunities to get paid for publishing your work in a form other than a book. Blogs were great, but for a long time, there was no way to get paid, so they were a non-starter for many authors. Recently platforms for paid newsletters, like Substack, Ghost, Beehiiv, Buttondown and so on have risen, creating a small ecosystem for professional writers.
Substack in particular has done a great job in educating the audience to expect to pay for quality content. All the platforms promote subscriptions as the revenue model, instead of ads. A typical subscription newsletter will start charging $10 per month, although you can charge as much or as little as you want, including free. (Reminder that $10 per month is more than $100 per year, so you could do quite well with 1,000 true fans.)
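As a back-of-the-envelope sketch of that true-fans math: the $10/month figure comes from the paragraph above, while the 10% platform-and-payment cut is an assumption for illustration, not any platform's actual rate.

# Annual newsletter revenue at a few hypothetical subscriber counts.
monthly_price = 10.00        # $10/month, as above
platform_cut = 0.10          # assumed ~10% platform + payment fees

for subscribers in (100, 500, 1000):
    gross = subscribers * monthly_price * 12
    net = gross * (1 - platform_cut)
    print(f"{subscribers:>5} paying readers: ${gross:,.0f}/year gross, about ${net:,.0f}/year after fees")

At 1,000 paying readers this works out to $120,000 a year gross, or roughly $108,000 after the assumed fees, which is why 1,000 true fans can be a living.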
You don’t need Substack, or any of the other platforms, to publish a newsletter for money. If you have an audience you can publish your own with some easy software apps. The well-known Mailchimp handles lists easily, but not payments. Memberful has a digital payment function and customer management tools, but you have to host the content and shape the newsletter yourself. It’s great for building your own custom publication, with full ownership of the audience and the design. Substack, Ghost, and others make it easy for beginners to build their own subscription newsletters. Medium is a similar, but different, online publishing platform. It hosts many writers from many backgrounds, but readers pay only one fee to Medium, which Medium then funnels to the writers and editors who curate this mega-magazine. I don’t have enough experience with it as a writer to know how viable it is. I’ve written there, but I get no direct access to the audience.
Unmonetized publishing
That’s also part of the challenge of a blog: no ownership. Blogging on your own website is extremely powerful. There are zero gatekeepers. Publishing is instant. Mostly free. You can say anything you want. You can write a little or a lot, every day or once a year. You have 100% control of the design. In many ways, it is the ultimate publishing platform. It’s a fantastic home for great writing and new ideas. For a while blogs had their heyday. But blog websites have three significant downsides which temper their supremacy. One, fewer and fewer people go directly to a website on a regular basis; it’s harder to maintain an audience and very hard to grow one on the web. Two, unless you implement a membership level of some sort, you don’t actually own your audience. Readers are anonymous. You can sometimes enable comments on a blog, but they need to be managed and vetted, and commenter IDs are not useful to an author. And three, blogs, almost by definition, are open to all visitors and don’t have significant revenue models. The fact that you cannot easily charge for the writing you do on a blog is the reason why Substack and other subscription platforms arose. (There are currently a few blogs, like Kottke, experimenting with paid membership for commenting while keeping the blog open and free.)
In the same vein, X (Twitter), Instagram, TikTok, and Facebook are bona fide publishing platforms. In fact I have published the most significant parts of every page of one of my books, and every sentence of another book, on these social media platforms. You can do serial publishing. But there is no revenue model. You can gain followers but not dollars. Sometimes followers can be converted into a real audience that you have access to, but it is not easy, and certainly not automatic. While YouTube (see below) can fully support creators, I have not met anyone who is making money selling their content on the social media platforms.
Nonetheless, I continue to blog and to post on the socials; it’s the first place my writing goes. And sometimes, this diffuse audience is all that the writing needs.
Another advantage of subscriber newsletter platforms like Substack and Ghost is that there is a social component, and they can be a great way to build a community around your writing. They make it very easy for readers to comment. They also have built-in analytics that can help you understand your audience and how they engage with your content. Equally important, their systems recommend other newsletters to current or new subscribers, thereby enabling others to discover your work. This network effect can help you find and grow your audience. Once a reader has an account for one newsletter, it is very easy for them to sign up for another newsletter from another author.
Design and layout options on these platforms are currently very limited and work best for publishing primarily text, but the platforms are also moving into video and images.
You could think of a newsletter as a subscription to an ongoing book. Writing a book “out loud” — publishing it in parts as you write it — has become much more common. You write in shorter sections, publish the chapters immediately online, and then solicit feedback. This works in both fiction and non-fiction. In fiction, you can publish chapter books, serially, one chapter at a time. In non-fiction, you can write essays or blog posts or newsletter issues (see above), and then make edits to the material later based on comments and corrections. The text is essentially “proofed” and fact-checked by the earlier readers. I have written several books this way, and this process increased the quality of the material manyfold. In addition to correcting some embarrassing mistakes, it also bettered some parts with alerts to ideas I was not aware of.
In the old days, mainstream publishers actively tried to prevent authors from publishing material beforehand, but now correcting mistakes, leaving comments, and pointing to overlooked research are so easy for readers to do that it makes great sense to rehearse your writing in public. You can also look at this dynamic process of write > publish > rewrite > republish as a way of building up and maintaining attention. In general, people devote less time to books because of the huge amount of attention they require. It is easier to string that attention out in an ongoing series of posts. If a long-form book comes out of it, it is much easier to reel that string of attention back into the book. And with the advent of paid subscriptions, readers can subscribe to your ongoing book.
Screening
We used to be people of the book, but now we are people of the screen. Our culture used to be grounded on scriptures, constitutions, laws, and canon — all written texts. These were fixed in immutable black and white marks on enduring paper, written by authors, from whom we got "authorities." Now our culture pivots on screens, which are fluid, mutable, flowing, liquid, and fleeting. There are no authorities. You have to assemble the truth yourself. Books no longer have the gravitas they did, and my children and their friends are not reading many of them. Instead, they watch screens. They read the text in moving images. They are learning more from YouTube than from books in school.
While books will continue to be published, the center of attention has shifted to moving images. Worldwide, the number of hours of attention given to screens dwarfs anything given to pages. Today, to seriously talk about publishing we must talk about video, VR, reels, movies, games, YouTube, TikTok, and of course AI. The audiences for my books are counted in thousands, but the audience for my TED talk videos is counted in millions. I spent minutes preparing for my TED talks and years preparing my books, but given the asymmetry of attention and influence, I should have reversed my own attention and given my videos years of work, spending only hours creating the book derived from them.
I don’t have enough personal experience in these emerging media to offer useful advice right now; it may have to wait for part two. I do have a small group of friends who make their living publishing on YouTube and TikTok and Instagram. The screen is their prime medium for non-fiction work. I know enough of these new kinds of professional creators to see that this mode is a very viable path. I also have sufficient evidence from my own platform tracking to clearly see that the audience for text is stagnant and skews older, while the audience for moving images continues to expand greatly and is getting younger. Of the two, I know where I want to work. (It is not lost on me that this essay is text and not a video. I'm working on it….)
Summary Advice:
In conclusion, the way I approach publishing today is with as much self-publishing as I can handle. I’d write in public installments, as a subscription newsletter, as single e-book chapters, or as simple posts on my blog. If I could find an audience that wanted more of the material, I’d rewrite, re-edit, and re-compose the material into a longer form. I’d release that as an ebook and/or an on-demand printed book sold in my Shopify shop. If the material was deep, or involved more creators than just myself, I’d consider crowdfunding it. Those presales allow me to target exactly how many copies to produce. I’d calculate the cost of drop shipping. And at every stage I’d be making some kind of visual version for YouTube and the other attention-seeking channels, because that is where the attention is.
To clarify this complicated advice even further, I have made a flow chart of possible options for publishing and self-publishing. This is roughly the decision tree I tend to follow when I am figuring out the best mode for the material and my goals. I hope you find it useful too.

A 16-page PDF of this article is available for free download here.

No Limit for Better
I want to argue that intelligence may be unique among all resources on the planet. It may be the only resource we create that sees truly infinite, insatiable demand.
Pricing abundance is tricky. Netflix, Spotify, and millions of software apps are offered at a fixed price for unlimited use. That works — they make money — because in fact, there is not unlimited use of them. We get satiated pretty quickly. We only watch so many hours, listen for limited hours, or eventually stop scrolling. This may not be true of AI. It looks like the demand for AI can exceed our own bounded time.
This is not true for other resources we create. When food and calories were scarce, we never imagined anyone would walk by an all-you-can-eat buffet and not take a bite. When clothes were scarce we never imagined anyone would not keep a hat they were handed for free. But it turns out that some of our most basic demands are in fact not unlimited. We can become satiated with food, with clothing, with entertainment, with shelter, and even with companionship (most people top out at 150 friends).
We dream of a world where we all have just a little more than we need. That world is often called utopia, and most commonly believed to be impossible. After all, to satisfy everyone with more than they need would require an infinite amount of resources. At least that is a common belief. However as abundance in some areas of life becomes more common these days, we see evidence that our consumption of many resources is not unlimited. And it may be that very few resources actually have real unlimited or insatiable appetites.
There are a couple of resources that seem to be in insatiable demand, such as energy or bandwidth. However, energy is not as insatiable as it appears. Folks who have massive solar for their homes have abundant "free" (as in unmetered) energy. And while they may "waste" energy by keeping the lights on during the day, or running air conditioners all the time, at some point they just cannot use any more energy. The same with bandwidth. At first, screens and phone calls had noticeably low resolution, and we wanted more and more bandwidth and storage. But at a certain point more pixels and more megabytes simply don't matter. We may not have reached the point where there are no more improvements — we still need convincing 3-dimensional immersive worlds indistinguishable from reality — but it seems clear that our senses can be satiated.
Health care is another resource that has often been declared to be in "insatiable demand" because it is hard to provide freely. But the odd thing about health care is that no one really wants infinite amounts of it, because it costs you time and hassle to get it. Even when it is monetarily free, it is not really free to you. Ideally, in a perfect world, you would need no care at all because you would be perfectly healthy. In fact you want minimum care; the least amount of super great care. If you are sucking up huge gobs of care, that's a sad story you want to avoid. You do want unlimited potential care, but not unlimited care. I have some friends who run a boutique medical service for billionaires. For a mind-boggling retainer fee, they offer 24/7 deeply personal unlimited medical care. It's all you can use for a fixed fee. And yet their clients rarely use much of it (with some exceptions for emergencies). Their clients are NOT consuming their services 24/7, draining it out because it is "free." When given the option of unlimited health care, most people, most of the time, make limited use of it.
That's true for individuals. What about society at large? Even though there is likely no insatiable demand for resources, there is growing demand at the societal level, primarily because the human population continues to grow, and a greater portion of that population is gaining the wealth to afford those resources. In addition, progress tends to lower the cost per unit of every resource, and that cheaper supply enables more uses, so consumption in aggregate balloons. That means that even if there is a limited supply of a resource, and technology figures out how to need less of it per use, we will still use more of it in total. This is called the Jevons Paradox. So while there is no insatiable demand for any single resource, there can be exponentially expanding demand collectively.
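A toy numerical sketch of the Jevons Paradox dynamic described above; all the numbers are hypothetical, chosen only to show how a falling cost per unit can still raise both total use and total spending.

# Jevons Paradox sketch: efficiency halves the unit cost,
# but demand more than doubles, so total use and total spend both rise.
cost_per_unit_before = 1.00
units_before = 100
cost_per_unit_after = 0.50   # technology halves the unit cost
units_after = 300            # cheaper supply finds three times as many uses

spend_before = cost_per_unit_before * units_before   # 100.0
spend_after = cost_per_unit_after * units_after      # 150.0
print(f"Units consumed: {units_before} -> {units_after}")
print(f"Total spend:    {spend_before:.0f} -> {spend_after:.0f}")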
This is the pattern we seem to see with AI to date. It is perhaps the fastest-exploited resource we have ever made, with the fastest per capita adoption and the fastest increase in hourly usage. But this is just the First Wave.
The First Wave of AIs is measured in humans. How many people have accounts and are using it? Eventually, everyone on the planet will be chatting with AIs, and so this first wave will be satiated soon enough.

The Second Wave of AIs is measured in hours. How many hours are people using our AIs? The end goal of many visions of AI is for it to be an ALWAYS ON resource. Perhaps your AI agent lives inside your smart glasses, listening to you all day, watching what you see while you are awake, and watching you too, guiding you, whispering to you even before you summon it, because it is always on, 24 hours a day, hopping off your glasses onto your device in the bedroom, or abiding in your walls at night. So the makers of AIs measure how many hours they deliver.

The Third Wave of AIs is measured in tokens. The real serious market for AIs is not humans, but other AIs. As we enter the agent world, one agent relies on sub-agents to do work, and these sub-agents themselves dish out work to other agents, and so on, with each layer consuming AI tokens as it processes its tasks, so we quickly accumulate a vast networked system of interacting AI agents consuming tokens.

And while I can watch only so many hours of Netflix, there is no limit to how much intelligence I can consume. I can give my agents a task and have them check it and recheck it and then triple-check it. Soon I will be able to deploy thousands of agents working around the clock on my projects. I can have an army of agents work while I sleep, making whatever they made today 10x better by tomorrow. I could give them another day and have the army make it 100x better. And here is the catch: there appears to be no limit for better.
In a weird way, if the AIs really do what we hope they can do, they become a proxy for betterment. They also become a proxy for human time, which we are unable to manufacture otherwise. We are bound — even the billionaires among us — to 24 hours a day. But if we have proxies thinking and working for us day and night — armies of them in the trillions — we suddenly have unlimited time. AIs may be a way for us to finally manufacture additional hours in the day without stealing them from another human. And this kind of time also seems to be insatiable.
The conundrum, of course, is that producing this infinite ocean of AI is not free. It is nowhere near free. Even though the cost per token (and the energy per token) decreases, we will use far more of them because there is no limit for betterment. That might suggest that the amount we spend on AI will continue to rise, or at least not fall as fast as we might hope. I am reminded that for many decades the price of a new laptop has sort of remained the same, at about $1,000, even as Moore's Law raged on unceasingly, dropping the cost per transistor to almost zero. The cost of AI tokens will likely follow a similar pattern, where the cost per token falls but the total spend on AI rises. Just as we don't really count transistors anymore, in the coming years we may ignore the count of tokens as well. We'll switch our metrics to something more meaningful, perhaps "tasks", or some kind of measure that reflects the value of the AI's work.
But how do you price AIs? The 99% bulk of AIs that are inward facing, that interact with other AIs, that are invisible infrastructure, that we never notice unless they are down, can be metered. But the 1% of AIs that are outward facing, that humans interact with, that are visible, whispering into our ears, overlaying our vision, always on, probably won't be metered, because we tend not to like meters running in our heads. We want unlimited-access service, which is why we have subscriptions for things that spend our attention. We'll avoid any meter measuring out the godhood that personal AIs give us. But the agent sitting on our shoulder, always on, has at its command an infinite army of intelligences, and that army will probably need to be metered because there is no limit to better.
When we begin comparing various unlimited personal AI plans, we can imagine some of the qualities that might be advertised: speed of response; how far ahead it can anticipate; span of control (how deep an army of sub-agents it commands); variety of experts on call; privacy and personalization controls. Features and interface design will sell the service, even if tokens were free — which they essentially will be very soon.

Artificial Intelligences, So Far
I wrote this short memo in November 2024, at the invitation of Wired Mid-East for their year-end issue. I think it still holds up nine months later, and represents where we are on this astounding journey.
There are three points I find helpful when thinking about AIs so far:
The first is that we have to talk about AIs, plural. There is no monolithic singular AI that runs the world. Instead there are already multiple varieties of AI, and each of them has multiple models with their own traits, quirks, and strengths. For instance there are multiple LLM models, trained on slightly different texts, which yield different answers to queries. Then there are non-LLM AI models – like the ones driving cars – that have very different uses besides answering questions. As we continue to develop more advanced models of AI, they will have even more varieties of cognition inside them. Our own brains are in fact a society of different kinds of cognition – such as memory, deduction, pattern recognition – only some of which have been artificially synthesized. Eventually, commercial AIs will be complicated systems consisting of dozens of different types of artificial intelligence modes, and each of them will exhibit its own personality and be useful for certain chores. Besides these dominant consumer models there will be hundreds of other species of AI, engineered for very specialized tasks, like driving a car or diagnosing medical issues. We don’t have a monolithic approach to regulating, financing, or overseeing machines. There is no Machine. Rather we manage our machines differently, dealing with airplanes, toasters, x-rays, iPhones, and rockets with different programs appropriate to each machine. Ditto for AIs.
And none of these species of AI – not one – will think like a human. All of them produce alien intelligences. Even as they approach consciousness, they will be alien, almost as if they were artificial alien beings. They think in a different way, and might come up with solutions a human never would. The fact that they don’t think like humans is their chief benefit. There are wicked problems in science and business that may require us to first invent a type of AI that, together with humans, can solve problems humans alone cannot. In this way AIs can go beyond humans, just as whale intelligence is beyond humans. Intelligence is not a ladder, with steps along one dimension; it is multidimensional, a radiation. The space of possible intelligences is very large, even vast, with human intelligence occupying a tiny spot at the edge of this galaxy of possible minds. Every other possible mind is alien, and we have begun the very long process of populating this space with thousands of other species of possible minds.
The second thing to keep in mind about AIs is that their ability to answer questions is probably the least important thing about them. Getting answers is how we will use them at first, but their real power is in something we call spatial intelligence – their ability to simulate, render, generate, and manage the 3D world. It is a genuine superpower to be able to reason intellectually and to think abstractly – which some AIs are beginning to do – but far more powerful is the ability to act in reality, to get things done and make things happen in the physical world. Most meaningful tasks we want done require multiple steps, and multiple kinds of intelligences to complete. To oversee the multiple modes of action, and different modes of thinking, we have invented agents. An AI agent needs to master common sense to navigate through the real world, to be able to anticipate what will actually happen. It has to know that there is cause and effect, and that things don’t disappear just because you can’t see them, or that two objects can not occupy the same place at the same time, and so on. AIs have to be able to understand a volumetric world in three dimensions. Something similar is needed for augmented reality. The AIs have to be able to render a virtual world digitally to overlay the real world using smart glasses, so that we see both the actual world and a perfect digital twin. To render that merged world in real time as we move around wearing our glasses, the system needs massive amounts of cheap ubiquitous spatial intelligence. Without ubiquitous spatial AI, there is no metaverse.
We have the first glimpses of spatial intelligence in the AIs that can generate video clips from a text prompt or from found images. In laboratories we have the first examples of AIs that can generate volumetric 3D worlds from video input. We are almost at the point that one person can produce a 3D virtual world. Creating a video game or movie now becomes a solo job, one that required thousands of people before.
Just as LLMs were trained on billions of pieces of text and language, some of these new AIs are being trained on billions of data points in physics and chemistry. For instance, the billion hours of video from Tesla cars driving around are training AIs on not just the laws of traffic, but the laws of physics, how moving objects behave. As these spatial models improve, they also learn how forces can cascade, and what is needed to accomplish real tasks. Any kind of humanoid robot will need this kind of spatial intelligence to survive more than a few hours. So in addition to training AI models to get far better at abstract reasoning in the intellectual realm, the frontier AI models are rapidly progressing at improving their spatial intelligence, which will have far more use and far more consequence than answering questions.
The third thing to keep in mind about AIs is that you are not late. You have time; we have time. While the frontier of AI seems to be accelerating fast, adoption is slow. Despite hundreds of billions of dollars invested in AI in the last few years, only the chip maker Nvidia and the data centers are making real profits. Some AI companies have nice revenues, but they are not pricing their services to cover real costs. It is far more expensive to answer a question with an LLM than with the AIs Google has used for years. As we ask the AIs to do more complicated tasks, the cost will not be free. Most people will certainly pay for most of their AIs, while free versions will be available. This slows adoption.
In addition, organizations can’t simply import AIs as if they were just hiring additional people. Workflows and even the shape of the organizations need to change to fit AIs. Something similar happened as organizations electrified a century ago. One could not introduce electric motors, telegraphs, lights, and telephones into a company without changing the architecture of the space as well as the design of the hierarchy. Motors and telephones produced skyscraper offices and corporations. To bring AIs into companies will demand a similar redesign of roles and spaces. We know that AI has penetrated smaller companies first because they are far more agile in morphing their shape. As we introduce AIs into our private lives, this too will necessitate a redesign of many of our habits, and all this takes time. Even if there were not a single further advance in AI from today on, it would take 5 to 10 years to fully incorporate the AIs we already have into our organizations and lives.
There’s a lot of hype about AI these days, and among those who hype AI the most are the doomers – because they promote the most extreme fantasy version of AI. They believe the hype. A lot of the urgency for dealing with AI comes from the doomers, who claim 1) that the intelligence of AI can escalate instantly, and 2) that we should regulate based on harms we can imagine rather than harms that are real. Despite what the doomers proclaim, we have time because there has been no exponential increase in artificial intelligence. The increase in intelligence has been very slow, in part because we don’t have good measurements for human intelligence, and no metrics at all for extra-human intelligence. But the primary reason for the slow rate is that the only exponential in AI is in its inputs – it takes exponentially more training data and exponentially more compute to make just a modest improvement in reasoning. The artificial intelligences are not compounding anywhere near exponentially. We have time.
Lastly, our concern about the rise of AIs should be in proportion to their actual harms versus their actual benefits. So far as I have been able to determine, the total number of people who have lost their jobs to AI as of 2024 is just several hundred, out of billions of workers. They were mostly language translators and a few (but not all) help-desk operators. This will change in the future, but if we are evidence-based, the data so far show that the real harms of AI are almost nil, while the imagined harms are astronomical. If we base our policies for AIs on the reasonable facts that they are varied and heterogeneous, that their benefits are more than answering questions, and that so far we have no evidence of massive job displacement, then we have time to accommodate their unprecedented power into our society.
The scientists who invented the current crop of LLMs were trying to make language translation software. They were completely surprised that bits of reasoning also emerged from the translation algorithms. This emergent intelligence was a beautiful unintended byproduct that also scaled up magically. We honestly have no idea what intelligence is, so as we make more of it and more varieties of it, there will inevitably be more surprises like this. But based on the evidence of what we have made so far, this is what we know.

An Audience of One

Today, AI tools lower the energetic costs of creating something. They make it easier to start and easier to finish. AIs can do a lot of the hard work needed in making something real.
I find little joy in having AIs do everything when I am being creative, but I do get enjoyment in co-creating with them. Co-creation feels like real creation. My role still takes effort and significantly determines the quality, style and nature of what is created. I typically spend 30 minutes to an hour co-generating an image in an engine like Midjourney. I have spent hours with AI co-writing an essay. All the stuff I like the best requires my personal attention and involvement as a co-creator.
My hypothesis is that in the near future, the bulk of creative content generated by humans – with the assistance of AI – will have an audience of one. Most art generated each day will be consumed primarily by its human co-creator. Very little completed art will be shared with others – although a small percentage of it will be shared widely.
If most art created in the future is not shared, why is it made? It will be made chiefly for the pleasure of making it. In other words, the majority of all creative work in the future will be made primarily for the joy and thrill of co-creation.
Right now roughly 50 million images are generated each day by AIs such as Midjourney, Google, and Adobe. Vanishingly few of these 50 million daily images are ever shared beyond the creator. Still-image creation today already predominantly has an audience of one.
A large portion of these still images are preliminary: a sketch, a first draft, a doodle, a memo, a phrase, not meant to share. But even among those creations completed, very few are shared – because they were made for the pleasure of making them. You can generate an endless stream of beauty for the same reason you take a stroll through a garden, or hike into the mountains – in the hope you’ll catch a moment of beauty. You might try to share what you find, but it is not why you went. You went to co-create it. I think of a walk in a garden, or a hike in the high mountains – a hike that is not necessary for transportation reasons – as an act of co-creation. Together with nature, we are co-creating the moments of beauty we might find. Most of the beauty in the world is never seen by anyone. When we encounter these glimpses of a vista, or an exquisite way something is backlit, we are an audience of one. The joy is in discovering it; sharing is an afterthought.
We have some traditional analogs of an audience of one in journals, sketchbooks, diaries, and logbooks. The creations in these forms are not intended to be shared beyond the creator, and in some cases, that limited audience is what makes them powerful. They bring a type of protective solitude to the creative act, and that power will also be part of an AI-based Audience of One. These kinds of private art act as a generative platform for bigger things. Reckoned in volume, a bona fide artist may create far more material in private than is ever seen in public. But if you asked artists why they fill notebooks and sketchbooks and journals, they would not say it is because this creation is inferior; they would say it is because they love to create it, because they enjoy it.
For those who view art primarily as a communication act, this art for an audience of one – traditionally found in journals and sketchbooks – still serves as communication, but to the self. In a curious way, AI can elevate self-communication, because its co-creativity enlarges the canvas and deepens the details of this communication to yourself. It is an enlarged self-inquiry.
In an AI-enhanced world, the realm of journals, sketchbooks, diaries and other private forms is expanded. Instead of compiling simple notes, doodles, fast impressions, small observations and other acts that can be done quickly, our journals, sketchbooks, and diaries will include fully rendered paintings, entire novels, feature-length movies, and immersive worlds.
These new creations will shift the time asymmetry long associated with creation. Until now an author would toil years on a book that could be consumed in a day; a painter sweated months over a painting viewed in seconds; a million work hours would be put into a movie that is watched in 2 hours. Henceforth, it may be quicker to generate a movie than to watch one; quicker to co-create a historical novel than to read one; faster to co-make a video game than to play it. This shifts the center of gravity away from consuming toward generating, in a good way.
I don’t believe that total viewing hours in a society will ever exceed total creation hours, but AI-based co-creation can help redress that imbalance. It makes entering into the creation mode much easier – without the need for an audience to justify the effort. From now on, the default destiny for most art will be an audience of one, and it will abide in the memory of those who generate it. While some of this co-generated work might find a larger audience, and some very tiny fraction of it might even become a popular hit, its chief value will be in the direct, naked pleasure of co-making it.

Weekly Links, 06/27/2025
- Dark Matter = The weight of information. That's one of the speculations in an offbeat but creative alternative theory of quantum physics that is not entirely a crackpot idea. Way out there. The radical idea that space-time remembers could upend cosmology

Weekly Links, 05/23/2025
- We are going to hear more about Prompt Theory. I for one, fully believe in Prompt Theory. Prompt Theory (Made with Veo 3) - AI-generated characters refuse to believe they were AI-generated

Weekly Links, 04/25/2025
- Something I knew nothing about: the degree to which job interviewees are trying to cheat with AI, and the difficulties that makes for good hires. Tech hiring: is this an inflection point?
- Excellent article on why humanoid robots are slow in coming and why it may take a lot longer to arrive in your home. Robot Dexterity Still Seems Hard
- There are a number of medical technologies that are feasible in the short term but lack sufficient funding to make them happen. I can't vouch for this list of good tech that could but probably won't happen in 5 years, but it is a good place to start. 10 technologies that won't exist in 5 years

Epizone AI: Outside the Code Stack
Thesis: The missing element in forecasting the future of AI is to understand that AI needs culture just as humans need culture.
One of the most significant scientific insights into our own humanity is the relatively recent recognition that we are the product of more than just the evolution of genes. While we are genetic descendants of some ape-like creatures in the past, we modern humans are also molded each generation by a shared learning that is passed along by a different mechanism outside of biology. Commonly called “culture”, this human-created environment forms much of what we consider best about our species. Culture is so prevalent in our lives, especially our modern urban lives, that it is invisible and hard to recognize. But without human culture to support us, we humans would be unrecognizable.
A solo, naked human trying to survive in the prehistoric wilderness, without the benefit of the skills and knowledge gained by other humans, would rarely be able to learn fast enough to survive on their own. Very few humans by themselves would be able to discover the secrets of making fire, or the benefits of cooking food, or the medicines found in plants, or learn all the behaviors of the animals they hunt, let alone the additional education needed for planting crops, knapping stone points, sewing, and fishing.
Humanity is chiefly a social endeavor. Because we invented language – the most social thing ever – we have been able to not only coordinate and collaborate in the present, but also to pass knowledge and know-how along from generation to generation. This is often pictured as a parallel evolution to the fundamental natural selection evolution of our bodies. Inside the long biological evolution happening in our cells, learning is transmitted through our genes. Anything good we learn as a species is conveyed through inheritable DNA. And that is where learning ends for most natural creatures.
But in humans, we launched an extended evolution that transmits good things outside of the code of DNA, embedded in the culture conveyed in families, clans, and human society as a whole. From the very beginning this culture contains laws, norms, morals, best practices, personal education, world views, knowledge of the world, learnable survival skills, altruism, and a pool of hard-won knowledge about reality. While individual societies have died out, human culture as a whole has continued to expand, deepen, grow, and prosper, so that every generation benefits from this accumulation.
Our newest invention – artificial intelligence – is usually viewed in genetic terms. The binary code of AI is copied, deployed, and improved upon. New models are bred from the code of former leading models – inheriting their abilities – and then distributed to users. One of the first significant uses for this AI is in facilitating the art of coding, and in particular helping programmers to code new and better AIs. So this DNA-like code experiences compounding improvement as it spreads into human society. We can trace the traits and abilities of AI by following its inheritance in code.
However, this genetic version of AI has been limited in its influence on humans so far. While the frontier of AI research runs fast, its adoption and diffusion runs slow. Despite some unexpected abilities, AI so far has not penetrated very deep into society. By 2025 it has disrupted our collective attention, but it has not disrupted our economy, or jobs, or our daily lives (with very few exceptions).
I propose that AI will not disrupt human daily life until it also migrates from a genetic-ish, code-based substrate to a widespread, heterodox, culture-like platform. AI needs to have its own culture in order to evolve faster, just as humans did. It cannot remain just a thread of improving software/hardware functions; it must become an embedded ecosystem of entities that adapt, learn, and improve outside of the code stack. This AI epizone will enable its cultural evolution, just as human society did for humans.
Civilization began as songs, stories, ballads around a campfire, and institutions like grandparents and shamans conveyed very important qualities not carried in our genes. Later, religions and schools carried more. Then we invented writing, reading, texts and pictures to substitute for reflexes. When we invented books, libraries, courts, calendars, and math, we moved a huge amount of our inheritance to this collaborative, distributed platform of culture that was not owned by anyone.
An AI civilization requires a similar epizone running outside the tech stack. It begins with humans using AI every day, and an emerging skill set of AI collaboration taught by the AI whisperers. There will be alignment protocols, and schools for shaping the moralities of AIs. There will be shamans and doctors to monitor and nurture the mental health of the AIs. There need to be corporate best practices for internal AIs, and review committees overseeing their roles. New institutions for reviewing, hiring, and recommending various species of AI. Associations of AIs that work best together. Whole departments are needed to train AIs for certain roles and applications, as some kinds of training will take time (not just be downloaded). The AIs themselves will evolve AI-only interlinguals, which will need mechanisms to preserve and archive them. There’ll be ecosystems of AIs co-dependent on each other. AIs that police other AIs. The AIs need libraries of content and intermediate weights, latent spaces, and petabytes of data that need to be remembered rather than re-invented. There are the human agents who have to manage the purchase and maintenance of this AI epizone, at local, national, and global levels. This is a civilization of AIs.
A solo, naked AI won’t do much on its own. AIs need a wide epizone to truly have consequence. They need to be surrounded by and embedded in an AI culture, just as humans need culture to thrive.
Stewart Brand devised a beautiful analogy for understanding civilizational traits. He explains that the functions of the world can be ranked by their pace layers, each of which depends on the layers below it. Running the fastest is the fashion layer, which fluctuates daily. Not far behind it in speed is the tech layer, which includes the tech of AI. It changes by the week. Below that (and dependent on it) is the infrastructure layer, which moves slower, and even slower below that is culture, which crawls in comparison. (At the lowest, slowest level is nature, glacial in its speed.) All these layers work at the same time, and upon each other, and many complex things span multiple levels. Artificial intelligence also works at several levels. Its code base improves at internet speed, but its absorption and deployment run at the cultural level. In order for AI to be truly implemented, it must be captured by human culture. That will take time, perhaps decades, because that is the pace of culture. No matter how quick the tech runs, the AI culture will run slower.
That is good news in many respects, because part of what the AI epizone does is incorporate and integrate the inheritable improvements in the tech stack and put them into the slower domain of AI culture. That gives us time to adapt to the coming complex changes. But to prepare for the full consequences of these AIs, we must give our attention to the emerging epizone of AIs outside the code stack.


Hoses

Push & click hose adaptors
These plastic quick connects from Melnor are the go-betweens for the hose and whatever nozzles, sprinklers or other hose-end attachments you may have. They’re especially good for quickly moving and attaching hoses from one faucet to another. I installed them on ALL my faucets (5) and hoses (perhaps 7) and external attachments (probably 10). I have used them for about a year and wonder how I ever got along without them. It takes less than a second (maybe 1/2 second) to attach or detach any hose or attachment. They are installed in pairs, a male and corresponding female connector, with the appropriate threaded fitting to attach to the faucet, hose or nozzle attachment, one on each side of the connection. You just firmly push the connector into its counterpart, and it easily pops into place — firmly means it does need a little pressure, but even a small child could do it. To disconnect, you push the green collar about an eighth of an inch in the one direction it’s capable of moving, and it pops off. (Similar devices have been in use in industry for a long time — on compressed air lines, for example). No more screwing and unscrewing (no more scraped knuckles); no more leaks from incompletely tightened hoses; no more stuck connections because some gorilla (i.e. me) tried to stop a leak by tightening too hard.
One type is designed so that when you disconnect from it, an internal plug pops into place and stops water from coming out. The other type, for between a faucet and hose, does not have the shutoff. When you disconnect the hose from the faucet, water will still flow and the faucet can still be used. There are other brands and styles; some are even made of pricier brass, but I recommend you stick with one manufacturer because connectors are generally not interchangeable between brands. And these inexpensive plastic ones from Melnor are well made: I have (intentionally) very high water pressure (~100 psi, sufficient to burst hoses) on my garden faucets, and I have had no leaks from these connectors. — Robert Ando

Reaching the high spots
We are lucky to have a few apple & peach trees, but they have to be sprayed to ensure tasty fruit. Trouble is, some are about 20 feet high. I tried a bunch of sprayers, all poor performers, until I discovered the Hudson Trombone Tree Sprayer. It works like using a trombone and throws a great spray — they claim up to around 25 feet high and that looks about right. A connecting hose maybe 7-8 feet long rests with a sort of small shower-head-like filter in the bottom of a bucket (not provided).
It uses plain old arm power. You feel like Elliot Ness in the “Untouchables” wielding a Tommy gun, but it works great, is only about $40 (get the one with the two gun grips) and even builds up your forearms and shoulder muscles. It also has an adjustable nozzle to vary the spray. It really throws a good heavy directed or dispersed spray; I’m surprised at how much more quickly it gets the job done. Way outperforms pump-up pressure tank ones. — Vince Crisci

Brass connectors
These brass connectors are MUCH better than the plastic Melnor Quick Connects.
These little brass hose connectors make the job of attaching and detaching hoses quick and simple. You pull the collar back on the female connector, and insert the male connector, and you’re ready to roll. Really, it just takes a second or two to provide a secure, leak-proof connection. There are several brands of cheap plastic connectors out there, but these brass ones will last a lifetime. I have a number of them that are 10+ years old, and they work amazingly well. I attach these to everything hose-related: faucets, both ends of the hoses, and all the attachments, and they save me a lot of time and annoyance.
There are two drawbacks to these connectors: people unfamiliar with them will unscrew the whole set up, so if you have handymen, contractors, or yard men who are going to deal with your hoses, you’ll need to explain how they work. The second is that they’re easily lost and misplaced. Even though these connectors are easily lost, they’re so long-lasting and sturdy that when they turn up again, they’ll work perfectly! — Amy Thomson

Fine misting nozzles
I have used Dramm Fogg-it hose nozzles ($12) for a variety of watering and irrigation purposes for more than ten years. They deliver a fine mist of water and are available in different strengths, measured as gallons per minute: ½ GPM, 1 GPM, 2 GPM and 4 GPM. I’ve used all but the 4 GPM model. The ½ GPM nozzle, attached to a wand, is perfect for laying down a fine mist of water on a hot deck to cool things down using a minimal amount of water. You can also water very fragile seedlings, or mist cuttings with it. I use the 1 GPM nozzle for watering seedlings and seed beds. The 2 GPM nozzle is great for general watering of established plants. The fine mist will not break down soil structure, and delivers slowly enough for the soil to take in the moisture without run-off.
I like the fact that I can tweak the flow rate by switching nozzles. If one takes too long, I use a nozzle with a higher flow rate. Or if the spray is damaging tender seedlings, then I use a more gentle nozzle. The fine spray is also a great way to revive a heat-wilted plant.
These nozzles are solid brass, tough and well made. I toss them around mercilessly. Also, mine have never clogged. They fit onto a standard ¾-inch fitting, so you can screw them onto your hose, or any water wand with a hose fitting. Their only drawback is that they’re small enough to get lost easily. — Amy Thomson

Superior garden hose reel
This is a heavy-duty cast aluminum garden hose reel. It costs about twice as much as the plastic reel I replaced and is at least four times the quality and longevity. The materials used are thick cast aluminum, powder-coated, with real stainless steel fasteners and brass fittings. The fittings and bearings are replaceable and heavy duty. The term bulletproof comes to mind.
The reel is configurable as a parallel or perpendicular mount with either a right- or left-hand hose mount. The design is modular and well thought out. Even the included hex wrenches are high quality and long enough to reach easily. As a mechanical designer myself, I can appreciate a nice robust design and execution. — Jack Kellythorne

Best garden hose
I spent 20 hours researching garden hoses and discovered that the 50-foot Craftsman Premium Rubber Hose for $40 is the best choice for a garden hose. It is built like a tank: heavy rubber construction and nickel-plated brass connectors. It should last years if cared for properly. Not only is it affordable, but it comes with a lifetime warranty that covers you when, not if, the hose eventually breaks. — Oliver Hulland
Once a week we’ll send out a page from Cool Tools: A Catalog of Possibilities. The tools might be outdated or obsolete, and the links to them may or may not work. We present these vintage recommendations as is because the possibilities they inspire are new. Sign up here to get Tools for Possibilities a week early in your inbox.

Last-minute Ticket Savings/Best Food Cities/40K Amtrak Points
Airlines Where it Pays to Wait
Is it better to buy flight tickets way ahead of time or wait until the last minute? The answer sometimes depends on the airline, as this study on last-minute ticket prices in the USA lays out. The conventional wisdom that it’s best to plan ahead holds true for JetBlue, Hawaiian, and United: wait until the last minute on those and you’ll pay 16% to 30% more than if you booked ahead. On the other hand, you could score a deal by waiting until the last minute on Alaska, Southwest, Frontier, and Spirit. The savings averaged only 3.6% and 3.1% for those last two though, so do it for convenience rather than savings unless your trip will be on Alaska Air (a 22.6% difference).
Best Food Cities in the World?
You’ll surely find plenty to quibble with in this Time Out rundown of the world’s best cities for food (Medellin ahead of Lima and Mexico City? Cairo but not NYC or Tokyo?). Turns out that affordability was one of the factors and they asked locals to chime in, so cities full of picky cynics apparently didn’t fare well. Having any city in Central America on a list like this is just plain wrong, but it’s still fun to look through and think about for future trips. Plans for Lagos anyone?
Expat Stories From Mexico
If you’re looking to escape to another country, Mexico has a lot going for it and has great air connections to get in and out. Many of the articles out there don’t dig very deep into what life is really like for expats though, so if you want the real deal, check out this book from resident Janet Blaser called Going Expat Mexico. It contains 24 in-depth stories from those who have made the move, including one chapter from yours truly. Get it here in paperback or for Kindle.
40,000 Amtrak Points
I’ve got an Amtrak trip from Montreal to Albany coming up in June after recently riding from Atlanta to Charlottesville to go visit my mom and sis. The economy class legroom is more than you get in domestic business class on a plane, there are no luggage fees, and the staffers on my last trip were quite helpful. We arrived on time even. If you love train travel and you’re an American in the market for a new credit card, through the end of April you can get 40,000 Amtrak points by getting this one. If I didn’t live in Mexico I’d be jumping on it pronto.
A weekly newsletter with four quick bites, edited by Tim Leffel, author of A Better Life for Half the Price and The World’s Cheapest Destinations. See past editions here, where your like-minded friends can subscribe and join you.

Gar's Tips & Tools - Issue #196
The Clam Switches of Warsaw

I found this 14-minute documentary utterly captivating. In Warsaw, Poland, they use clams to monitor water quality—an ingenious and surprisingly elegant system. Eight clams are outfitted with a tiny, spring-mounted contact on their top shells. If the water becomes too polluted, the clams instinctively shut, triggering an alarm and closing off the water supply. Simple, natural, brilliant. The film itself has a slow-burn, meditative European quality—like if Ingmar Bergman did a documentary on city water management. [Via Jay Townsend]
Clamps: Their History and Uses

One of me besties, Peter Bebergal, sent me a link to this 1980 guide to clamps from the Cincinnati Tool Company. Basic stuff, but a fun browse for anyone who loves tools, tool history, and vintage industrial training materials.

Tips from an Engineer: Improving Workflow and Precision
In this insightful video, Zach, The Byte-Sized Engineer, shares eight invaluable lessons he’s learned during his years of hands-on engineering and project building. From the importance of failing quickly to improving problem-solving efficiency, to the hidden functions of digital calipers (depth gauge, step measuring) that enhance measurement accuracy, these practical tips can save time and frustration. He highlights tools like logic analyzers for debugging, deburring tools for cleaning 3D prints, and corrugated tubing for better wire management, all aimed at making engineering work more efficient. Additional tips include using isopropyl alcohol to remove hot glue easily, strengthening CA/super glue bonds with baking soda, and adopting better soldering techniques using flux gels and third-hand tools. Whether you're a seasoned engineer or a hobby maker, these tips might help improve your workflow and precision.
Did He Say ‘Home Depot Arrowheads’?
Maker pal Dug North sent me this recent video of his ’cause he thought that I “might get a kick out of it.” He was right! Having always been interested in flintknapping, I love watching videos of people doing it. And I find glassknapping especially fascinating. In this video, Dug makes his own knapping tools from common Home Depot materials.
Cool Tools’ Kevin Kelly has pointed out something interesting about flintknapping (and other “bygone” technologies). There are actually more people practicing this stone age technology today than when it was the reigning tool tech. In fact, Kevin argues, technologies rarely die off. Once conceived, human inventions, regardless of their obsolescence, continue to exist through enthusiasts, historians, and niche communities. Dug knows a few things about this. When not knapping glass arrowheads and doing bushcraft, he also engages in the ancient art of mechanical automata.
Making a Screw Shortening Jig
Quinn of Blondihacks delivers another fantastic machining tutorial, this time focusing on creating a custom screw shortening tool—a handy bit o’ kit for model engineers, watchmakers, and anyone needing small precision screws cut to length. The project evolved from a makeshift aluminum jig she originally made into a durable steel fixture with precisely machined threaded holes, solving the problem of deforming screw ends while ensuring repeatable accuracy. As usual, along the way Quinn offers valuable machining insights, including stress-relief techniques, optimal cutting methods, and the importance of deburring. A clever last-minute fix—adding a "keeper" to stabilize the tool’s jaws—proves the iterative nature of good design. Finished with cold bluing and stamped size markings, the final tool is both functional and refined. Another fun and informative watch for machining enthusiasts, armchair [raises hand] or otherwise.
Your Inspired Objects
I got a wonderful response to my piece about inspired objects. I’ll be showing them over the next few issues. This is inspired objects — humble objects edition. If you have an object you think is inspired (humble or otherwise), please share it!
Sydney Smith writes:
This is an inexpensive Japanese utility knife purchased on Amazon. Search for Takagi Gisuke Cutting. Before decorating with chip carving, it was so-so. After decoration, it's enjoyable to pick up and use. Now it's a workbench favorite.

I love this handle “hack” from reader Gideon Weinerth. I think I’m going to have to try this:
This is my inspired object. I took a Rada brand vegetable peeler which was excellent but had a crummy discolored handle. I wrapped the handle in aluminum foil and then applied a layer of Sculpey polymer clay over it. I then squeezed the bulkier handle until it perfectly matched my hand positions for the way I like to peel. I baked it, let it cool, then dipped it in Plasti-Dip (which needs another application). Makes meal prep a dream. Great technique for any other sub-par handle.

Ernie Hayden writes:
I was given one of these iSlice ceramic cutters about 10-15 years ago and use it almost daily to cut open tight, cellophane packaging, sealed packages, etc. It is really terrific. My old one even has a magnet in the handle so it can be placed on a metal cabinet or metal toolbox for handy use.

Consider a Paid Subscription
Gar’s Tips & Tools is free, but if you really like what I’m doing here and want to support me, please consider a paid subscription. Same great taste, but more cheddar to help keep me stocked in neodymium magnets. I will also pick paid subscribers at random and send them little treats on occasion.
We have a winner! Rob Stone, you were the winner of our Work Pro multitool (or PDFs of my tips books) drawing. Congrats! I sent you an email.
Special thanks to all of my paid subscribers so far and an extra special thanks to Hero of the Realm, Jim Coraci.
Gar’s Tips, Tools, and Shop Tales is published by Cool Tools Lab. To receive the newsletter a week early, sign up here.

Weekly Links, 03/21/2025
- This article "Fertility on Demand" by @RuxandraTeslo is a fantastic report on one way to increase the fertility rate by artificially extending reproductive age. Fertility on demand

Best Thing Since Sliced Bread?
The other day I was slicing a big loaf of dark Italian bread from a bakery; it is a pleasure to carve thick hunks of hearty bread to ready them for the toaster. While I was happily slicing the loaf, the all-American phrase “the best thing since sliced bread” popped into my head. So I started wondering, what was the problem that pre-sliced bread solved? Why was sliced bread so great?
Shouldn’t the phrase be “the best thing since penicillin”, or something like that?
What is so great about this thing we now take for granted? My thoughts cascaded down a sequence of notions about sliced bread. It is one of those ubiquitous things we don’t think about.
- Maybe the bread they are talking about is fluffy white Wonder bread that crushes really easily. That might be hard to slice, and so getting white bread pre-sliced is nice.
- Maybe the bread they are talking about is not as tender as it is today, and it was actually tough to slice very thin for a sandwich. Buying pre-sliced saved embarrassment, and so in that respect it was a wonder.
- Maybe it is hard to automate bread slicing, and while not that much of a selling point, maybe it took some technical innovation to make it happen. Otto Frederick Rohwedder, an American inventor, developed the first successful bread slicing machine in 1928, but it took some years for the invention to trickle into bakeries around the country.
- Maybe this was a marketing ploy by commercial bread bakers, to sell a feature that becomes a necessity.
- Maybe this phrase has always been said ironically. Maybe from the beginning everyone knew that sliced bread was a nothing burger, and the phrase was meant to indicate that the new thing was no big deal. Only later did the original meaning lapse and the phrase become un-ironic.
- Maybe it is still ironic, and I am the last person to realize that it is not to be taken as an indicator of goodness.
Turns out I am not the first to wonder about this. The phrase's origins lie — no surprise — in marketing the first commercial sliced bread in the 1930s. It was touted in ads as the best new innovation in baking. The innovation was not slices per se, but uniform slices. During WWII in the US, sliced bread was briefly banned in 1943 to conserve the extra paper used to wrap sliced loaves for the war effort, but the ban was rescinded after two months because so many people complained of missing the convenience of sliced bread — this was a time when bread was more central to our diets. With the introduction of mass-manufactured white bread like Wonder Bread, the phrase became part of its marketing hype.
I think the right answer is number 4 — it's a marketing ploy for an invention that turns a luxury into a necessity. I can't imagine any serious list of our best inventions that would include sliced bread, although it is handy and is not going away.
That leads me to wonder: what invention today, now the object of our infatuation, will be the sliced bread of the future?
Instagram? Drones? Tide Pods? Ozempic?
This is the best thing since Ozempic!

Public Intelligence
Imagine, 50 years from now, a Public Intelligence that is distributed, open-source, non-commercial, operated like the internet, and available to the whole world. This public AI would be a federated system, not owned by any one entity, but powered by millions of participants to create an aggregate intelligence beyond what one host could offer. Public intelligence could be thought of as an inter-intelligence, an AI composed of other AIs, in the way that the internet is a network of networks. This AI of AIs would be open and permissionless: any AI could join, and its joining would add to the global intelligence. It would be transnational, and wholly global – an AI commons. Like the internet, it would run on protocols that enable interoperability and standards. Public intelligence would be paid for by usage locally, just as you pay for your internet access, storage, or hosting. Local generators of intelligence, or contributors of data, would operate for profit, but to get the maximum public intelligence, they would need to share their work in this public non-commercial system.
For an ordinary citizen, the AI commons of public intelligence would be an always-on resource that delivers as much intelligence as they require, or are willing to pay for. Minimum amounts would be almost free. Maximum amounts would be gated and priced accordingly. AI of many varieties would be available from your own personal devices, whether a phone, glasses, a vehicle, or a bot in your house. Fantastic professional intelligence could also be bought from specialty AI providers, like Anthropic and DeepSeek. But public intelligence would offer all these plus planetary-scale knowledge and a super intelligence that works at huge scales.
Algos within the public intelligence would route hard questions one way and easy questions another, so most citizens would deal with the public intelligence through a single interface. While public intelligence would be composed of thousands of varieties of AI, each of which comprises an ecosystem of cognitions, to the user these would appear as a single entity, a public intelligence. A good metaphor for the technical face of this aggregated AI commons is to imagine it as a rainforest, crowded with thousands of species, all co-dependent on each other, some species consuming what others produce, all of them essential for the productivity of the forest.
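To make the routing idea a little more concrete, here is a minimal sketch in Python of how a single interface might dispatch queries across a federation of member AIs, cheapest first. Everything in it is an illustrative assumption of mine, not part of any existing system: the node names, the word-count "complexity" heuristic, and the per-query costs are all hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One member AI in the hypothetical federation."""
    name: str
    cost_per_query: float   # what the user pays locally, per the usage-funded model
    max_complexity: int     # rough ceiling on question difficulty this node handles

# Ordered cheapest-first: from a phone-local model up to a planetary-scale aggregate.
FEDERATION = [
    Node("on-device-small", cost_per_query=0.00, max_complexity=2),
    Node("regional-specialist", cost_per_query=0.01, max_complexity=5),
    Node("planetary-aggregate", cost_per_query=0.10, max_complexity=10),
]

def estimate_complexity(question: str) -> int:
    """Crude stand-in for a real difficulty classifier."""
    return min(10, max(1, len(question.split()) // 2))

def route(question: str) -> Node:
    """Send easy questions to cheap local nodes, hard ones to the big aggregate."""
    difficulty = estimate_complexity(question)
    for node in FEDERATION:
        if difficulty <= node.max_complexity:
            return node
    return FEDERATION[-1]

for q in ["What time is sunset?",
          "Synthesize the last decade of wetland tide-flow and air-quality sensing research across all published languages."]:
    node = route(q)
    print(f"{q!r} -> {node.name} (est. cost ${node.cost_per_query:.2f})")
```

The cheapest-first ordering mirrors the pricing idea above: minimal requests stay nearly free on local devices, while planetary-scale questions are gated and priced accordingly.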
Public intelligence is a rainforest of thousands of species of AI, and in summation it becomes – like our forests and oceans – a public commons, a public utility at a global scale.
At the moment, the training material for the artificial intelligences we have is haphazard, opaque, and partial. So far, as of 2025, LLMs have been trained on a very small and very peculiar set of writings that is far from either the best, or the entirety, of what we know. For archaic legal reasons, much of the best training material has not been used. Ideally, the public intelligence would be trained on ALL the books, journals, and documents of the world, in all languages, in order to create for the public good the best AIs we can make for all.
As the public intelligence grows, it will continue to benefit from having access to new information and new knowledge, including very specific, and local information. This is one way its federated nature works. If I can share with the public intelligence what I learn that is truly new, the public intelligence gains from my participation, and in aggregate gains from billions of other users as they contribute.
A chief characteristic of public intelligence is that it is global, or perhaps I should say, planetary. It is not only accessible by the public globally, it is also trained on a globally diverse set of materials in all languages, and it is planetary in its dimensions. For instance, this AI commons integrates environmental sensing data – such as weather, water, air traffic – from around the world, and from the cloak of satellites circling the planet. Billions of moisture sensors in farmland, tide flows in wetlands, air quality sensors in cities, rain gauges in backyards, and trillions of other environmental sensors feed rivers of data into the public intelligence, creating a sort of planetary cognition grid.
Public intelligence would encompass big thoughts about what is happening planet-wide, as well as millions of smaller thoughts on what is happening in niche areas that would feed the intelligence with specific information and data, such as DNA sampling of sewage water to monitor the health of cities.
There is no public intelligence right now. Currently OpenAI is not a public intelligence; there is very little open about it beyond its name. Other models in 2025 that are classified as open source, such as Meta’s and DeepSeek’s, lean in the right direction, but are open only to very narrow degrees. There have been several initiatives to create a public intelligence, such as Eleuther.ai and LAION, but there has been no real progress or articulated vision to date.
The NSF (in the US) is presently funding an initiative to coordinate international collaboration on networked AI. This NSF AI Institute for Future Edge Networks and Distributed Intelligence is primarily concerned with trying to solve hard technical problems such as 6G and 7G wireless distributed communication.
Diagram from NSF AI Institute for Future Edge Networks and Distributed Intelligence
Among these collaborators is a program at Carnegie Mellon University focused on distributed AI. They call this system AI Fusion, and say “AI will evolve from today’s highly structured, controlled, and centralized architecture to a more flexible, adaptive, and distributed network of devices.” The program imagines this fusion as an emerging platform that enables distributed artificial intelligence to run on many devices, in order to be more scalable, more flexible, and more active in redirecting itself when needed, or even finding the data it needs instead of waiting to be given it. But in none of these research agendas is the mandate of a public resource, open source, or an intelligence commons more than a marginal concern.
Sketch from AI Fusion
A sequence of steps will be needed to make a public intelligence:
- We need technical breakthroughs in "Sparse Activation Routing," enabling efficient distribution of computation across heterogeneous devices from smartphones to data centers. We need algos for dynamic resource allocation, automated model verification, and enhanced distributed security protocols. And we need breakthroughs in collective knowledge synthesis, enabling the public intelligence to identify and resolve contradictions across domains automatically.
- We need to release a Public Intelligence Protocol, establishing standards for secure model sharing, training, and interoperability, and to establish a large-scale federated learning testbed connecting 50+ global universities, demonstrating the feasibility of training complex models without centralizing data (a toy sketch of this idea follows this list). A crucial technology is continuous-learning protocols, which enable models to safely update in real time based on global usage patterns while preserving privacy.
- We need to pioneer national policies in small hi-tech countries such as Estonia, Finland, and New Zealand that explicitly support public intelligence infrastructure as a digital public good, making them places to prototype this commons.
- An essential development would be the first legal framework for an AI commons, creating a new class of digital infrastructure with specific governance and access rights. This would go hand in hand with two other needed elements: "Differential Privacy at Scale" techniques, allowing sensitive data to be used for training while providing mathematical guarantees against privacy breaches. And "Community Intelligence Trusts," allowing local communities to maintain specialized knowledge and capabilities within the broader ecosystem.
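As promised above, here is a toy sketch (in Python, using NumPy) of the federated-learning and differential-privacy ideas named in this list: participants improve a shared model by sending only weight updates, never their raw data, and a dash of noise stands in, very loosely, for "Differential Privacy at Scale." The update rule, the noise level, and the three "participants" are all hypothetical stand-ins of mine, not an actual Public Intelligence Protocol; real differential privacy also requires careful clipping and privacy accounting.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """Each participant nudges the shared model toward its own (private) data."""
    gradient = global_weights - local_data.mean(axis=0)  # stand-in for a real gradient
    return global_weights - lr * gradient

def aggregate(updates, noise_scale=0.01):
    """Federated averaging: only weight updates travel, never raw data."""
    averaged = np.mean(updates, axis=0)
    # Crude stand-in for differential-privacy noise.
    return averaged + np.random.normal(0.0, noise_scale, size=averaged.shape)

rng = np.random.default_rng(0)
global_weights = np.zeros(4)
# Three hypothetical participants (say, universities in the testbed), each with private data.
participants = [rng.normal(loc=i, scale=0.5, size=(100, 4)) for i in range(3)]

for round_num in range(5):
    updates = [local_update(global_weights, data) for data in participants]
    global_weights = aggregate(updates)

print("Shared model after 5 rounds:", np.round(global_weights, 3))
```

The point of the sketch is the shape of the system, not the math: knowledge flows into the commons as small, privacy-preserving contributions from many local participants, which is the same federated pattern the essay imagines at planetary scale.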
There is a very natural tendency for AI to become centralized by a near monopoly, and probably a corporate monopoly. Intelligence is a networked good. The more it is used, the more it can learn. The more it learns, the smarter it gets. The smarter it gets, the more it is used. Ad infinitum. A really good AI can swell very fast as it is used and gets better. All these dynamics move AI to become centralized and a winner-take-all. The alternative to public intelligence is a corporate or a national intelligence. If we don’t empower public intelligence, then we have no choice but to empower non-public intelligences.
The aim of public intelligence is to make AI a global commons, a public good for the maximum number of people. Political will to make this happen is crucial, but equally essential are the technical means: the brilliant innovations we don’t yet have and which are not obvious. To urge those innovations along, it is helpful to have an image to inspire us.
The image is this: A Public Intelligence owned by everyone, composed of billions of local AIs, needing no permission to join and use, powered and paid for by users, trained on all the books and texts of humankind, operating at the scale of the planet, and maintained by common agreement.

The Unpredicted
It is odd that science fiction did not predict the internet. There are no vintage science fiction movies about the world wide web, nor movies that showed the online web as part of the future. We expected picture phones and online encyclopedias, but not the internet. As a society we missed it. Given how pervasive the internet later became, this omission is striking.
On the other hand, there have been hundreds of science fiction stories and movies predicting artificial intelligence. And in nearly every single one of them, AI is a disaster. They are all cautionary tales. Either the robots take over, or they cause the end of the world, or their super intelligence overwhelms our humanity, and we are toast.
This ubiquitous dystopia of our future with AI is one reason why there is general angst among the public for this new technology. The angst was there even before the tech arrived. The public is slightly fearful and wary of AI based not on their experience with it, but because this is the only picture of it they have ever seen. Call up an image of a smart robot and you get the Terminator or its ilk. There are no examples of super AI working out for good. We literally can’t imagine it.
Another factor in this contrast between predicting AI and not predicting the internet is that some technologies are just easier to imagine. In 1963 the legendary science fiction author Arthur C. Clarke created a chart listing existing technologies that had not been anticipated widely, in comparison to other technologies that had a long career in our imaginations.
Clarke called these the Expected and the Unexpected, publishing the chart in his book Profiles of the Future in 1963.
Clarke does not attempt to explain why some inventions are expected while others are not, other than to note that many of the expected inventions have been anticipated since ancient times. In fact their reality – immortality, invisibility, levitation – would have been called magic in the past.
Artificial beings – robots, AI – are in the Expected category. They have been so long anticipated that no other technology or invention has been as widely or thoroughly anticipated before it arrived. What invention might even be second to AI in terms of anticipation? Flying machines may have been longer desired, but there was relatively little thought put into imagining what their consequences might be. Whereas from the start of the machine age, humans have not only expected intelligent machines, but have expected significant social ramifications from them as well. We’ve spent a full century contemplating what robots and AI would do when they arrived. And, sorry to say, most of our predictions are worrisome.
So as AI finally begins to hatch, it is not being as fully embraced as, say, the internet was. There are attempts to regulate it before it is operational, in the hopes of reducing its expected harms. This premature regulation is unlikely to work because we simply don’t know what harms AI and robots will really do, even though we can imagine quite a lot of them.
This lopsided worry, derived from being Over Expected, may be a one-time thing unique to AI, or it may become a regular pattern for tech into the future, where we spend centuries brewing, stewing, scheming, and rehearsing for an invention long before it arrives. That would be good if we also rehearsed for the benefits as well as harms. We’ve spent a century trying to imagine what might go wrong with AI. Let’s spend the next decade imagining what might go right with AI.
Even better, what are we not expecting that is almost upon us? Let’s reconsider the unexpecteds.