Public Intelligence
Imagine, 50 years from now, a Public Intelligence: a distributed, open-source, non-commercial artificial intelligence, operated like the internet and available to the whole world. This public AI would be a federated system, owned by no one entity but powered by millions of participants, creating an aggregate intelligence beyond what any one host could offer. Public intelligence could be thought of as an inter-intelligence, an AI composed of other AIs, in the way that the internet is a network of networks. This AI of AIs would be open and permissionless: any AI could join, and its joining would add to the global intelligence. It would be transnational, and wholly global – an AI commons. Like the internet, it would run on protocols that enable interoperability and standards. Public intelligence would be paid for by usage locally, just as you pay for your internet access, storage, or hosting. Local generators of intelligence, or contributors of data, would operate for profit, but to realize the maximum public intelligence, they would need to share their work in this public non-commercial system.
For an ordinary citizen, the AI commons of public intelligence would be an always-on resource that delivers as much intelligence as they require, or are willing to pay for. Minimum amounts would be almost free. Maximum amounts would be gated and priced accordingly. AI of many varieties would be available from your own personal devices, whether a phone, glasses, a vehicle, or a bot in your house. Fantastic professional intelligence could also be bought from specialty AI providers, like Anthropic and DeepSeek. But public intelligence offers all these plus planetary-scale knowledge and a superintelligence that works at huge scales.
Algorithms within public intelligence would route hard questions one way and easy questions another, so most citizens would deal with the public intelligence through a single interface. While public intelligence is composed of thousands of varieties of AI, and each of those comprises an ecosystem of cognitions, to the user they appear as a single entity, a public intelligence. A good metaphor for the technical face of this aggregated AI commons is to imagine it as a rainforest, crowded with thousands of species, all co-dependent on each other, some species consuming what the others produce, all of them essential for the productivity of the forest.
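The routing idea can be sketched in a few lines of code. This is a toy dispatcher, not any real system: the difficulty heuristic, the tier names, and the thresholds are all illustrative assumptions.

```python
# A minimal sketch of difficulty-based query routing across a
# hypothetical federation of AIs. The heuristic, tier names, and
# thresholds are illustrative inventions, not a real protocol.

def estimate_difficulty(query: str) -> float:
    """Crude proxy: longer, question-dense queries score as harder."""
    words = len(query.split())
    question_marks = query.count("?")
    return min(1.0, words / 100 + 0.2 * question_marks)

def route(query: str) -> str:
    """Dispatch a query to a tier of the (hypothetical) federation."""
    difficulty = estimate_difficulty(query)
    if difficulty < 0.3:
        return "local-device-model"    # free or nearly free
    elif difficulty < 0.7:
        return "regional-federation"   # metered, low cost
    else:
        return "planetary-aggregate"   # gated and priced accordingly

print(route("What time is it?"))  # → local-device-model
```

In a real public intelligence, the difficulty estimate would itself be learned, and routing would weigh cost, latency, and privacy as well as difficulty; the point of the sketch is only that a single interface can front many tiers.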
Public intelligence is a rainforest of thousands of species of AI, and in summation it becomes – like our forests and oceans – a public commons, a public utility at a global scale.
At the moment, the training material for the artificial intelligences we have is haphazard, opaque, and partial. So far, as of 2025, LLMs have been trained on a very small and very peculiar set of writings that are far from either the best, or the entirety, of what we know. For archaic legal reasons, much of the best training material has not been used. Ideally, the public intelligence would be trained on ALL the books, journals, and documents of the world, in all languages, in order to create for the public good the best AIs we can make for all.
As the public intelligence grows, it will continue to benefit from having access to new information and new knowledge, including very specific, and local information. This is one way its federated nature works. If I can share with the public intelligence what I learn that is truly new, the public intelligence gains from my participation, and in aggregate gains from billions of other users as they contribute.
A chief characteristic of public intelligence is that it is global, or perhaps I should say, planetary. It is not only accessible by the public globally; it is also trained on a globally diverse set of training materials in all languages, and it is planetary in its dimensions. For instance, this AI commons integrates environmental sensing data – such as weather, water, air traffic – from around the world, and from the cloak of satellites circling the planet. Billions of moisture sensors in farmland, tide flows in wetlands, air quality sensors in cities, rain gauges in backyards, and trillions of other environmental sensors feed rivers of data into the public intelligence, creating a sort of planetary cognition grid.
Public intelligence would encompass big thoughts about what is happening planet-wide, as well as millions of smaller thoughts about what is happening in niche areas, fed by specific information and data, such as DNA sampling of sewage water to monitor the health of cities.
There is no public intelligence right now. Currently OpenAI is not a public intelligence; there is very little open about it beyond its name. Other models in 2025 that are classified as open source, such as Meta’s and DeepSeek’s, lean in the right direction, but are open only to very narrow degrees. There have been several initiatives to create a public intelligence, such as EleutherAI and LAION, but there is no real progress or articulated vision to date.
The NSF (in the US) is presently funding an initiative to coordinate international collaboration on networked AI. This NSF AI Institute for Future Edge Networks and Distributed Intelligence is primarily concerned with trying to solve hard technical problems such as 6G and 7G wireless distributed communication.
Diagram from NSF AI Institute for Future Edge Networks and Distributed Intelligence
Among these collaborators is a program at Carnegie Mellon University focused on distributed AI. They call this system AI Fusion, and say “AI will evolve from today’s highly structured, controlled, and centralized architecture to a more flexible, adaptive, and distributed network of devices.” The program imagines this fusion as an emerging platform that enables distributed artificial intelligence to run on many devices, in order to be more scalable, more flexible, and more active in redirecting itself when needed, or even finding the data it needs instead of waiting to be given it. But in none of these research agendas is the mandate of a public resource, open source, or an intelligence commons more than a marginal concern.
Sketch from AI Fusion
A sequence of steps will be needed to make a public intelligence:
- We need technical breakthroughs in “Sparse Activation Routing,” enabling efficient distribution of computation across heterogeneous devices, from smartphones to data centers. We need algorithms for dynamic resource allocation, automated model verification, and enhanced distributed security protocols. And we need breakthroughs in collective knowledge synthesis, enabling the public intelligence to automatically identify and resolve contradictions across domains.
- We need to release a Public Intelligence Protocol, establishing standards for secure model sharing, training, and interoperability, and establish a large-scale federated learning testbed connecting 50+ global universities, demonstrating the feasibility of training complex models without centralizing data. A crucial technology is continuous-learning protocols, which enable models to safely update in real time based on global usage patterns while preserving privacy.
- We need to pioneer national policies in small hi-tech countries, such as Estonia, Finland, and New Zealand, that explicitly support public intelligence infrastructure as a digital public good, making those countries places to prototype this commons.
- An essential development would be the first legal framework for an AI commons, creating a new class of digital infrastructure with specific governance and access rights. This would go hand in hand with two other needed elements: “Differential Privacy at Scale” techniques, allowing sensitive data to be used for training while providing mathematical guarantees against privacy breaches. And “Community Intelligence Trusts,” allowing local communities to maintain specialized knowledge and capabilities within the broader ecosystem.
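The federated-learning and differential-privacy elements above can be combined in one toy sketch: each participant trains locally, its update is clipped and noised, and only the aggregate ever leaves the edge. The clipping bound, noise scale, and two-dimensional "model" here are illustrative assumptions, not a vetted privacy mechanism.

```python
# Toy federated averaging with per-participant Gaussian noise, in the
# spirit of differentially private federated learning. CLIP, NOISE_SCALE,
# and the tiny model are illustrative assumptions, not a production recipe.
import random

CLIP = 1.0          # max L2 norm of any one participant's update
NOISE_SCALE = 0.1   # std-dev of Gaussian noise added per coordinate

def clip_update(update, bound=CLIP):
    """Bound one participant's influence on the aggregate."""
    norm = sum(u * u for u in update) ** 0.5
    if norm > bound:
        update = [u * bound / norm for u in update]
    return update

def noisy_update(update, rng):
    """Clip, then add noise so no single contribution is recoverable."""
    return [u + rng.gauss(0, NOISE_SCALE) for u in clip_update(update)]

def federated_average(local_updates, rng):
    """Average clipped, noised updates; raw data never leaves a device."""
    n = len(local_updates)
    dim = len(local_updates[0])
    noised = [noisy_update(u, rng) for u in local_updates]
    return [sum(u[i] for u in noised) / n for i in range(dim)]

rng = random.Random(0)
updates = [[0.5, -0.2], [0.4, -0.1], [3.0, 4.0]]  # third exceeds CLIP
print(federated_average(updates, rng))
```

The design choice worth noting is that privacy comes from the protocol, not from trust: because clipping and noising happen before aggregation, a Community Intelligence Trust could contribute its specialized knowledge while holding mathematical guarantees against any one member's data being exposed.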
There is a very natural tendency for AI to become centralized by a near monopoly, and probably a corporate monopoly. Intelligence is a networked good. The more it is used, the more it can learn. The more it learns, the smarter it gets. The smarter it gets, the more it is used. Ad infinitum. A really good AI can swell very fast as it is used and gets better. All these dynamics push AI toward centralization and a winner-take-all outcome. The alternative to public intelligence is a corporate or a national intelligence. If we don’t empower public intelligence, then we have no choice but to empower non-public intelligences.
The aim of public intelligence is to make AI a global commons, a public good for the maximum number of people. Political will to make this happen is crucial, but equally essential are the technical means – brilliant innovations we don’t yet have, and which are not obvious. To urge those innovations along, it is helpful to have an image to inspire us.
The image is this: A Public Intelligence owned by everyone, composed of billions of local AIs, needing no permission to join and use, powered and paid for by users, trained on all the books and texts of humankind, operating at the scale of the planet, and maintained by common agreement.