Blog / Company · 6 Nov 2024

Humans + Time

Semilattice believes significant advances in decision-making, management, and governance are possible, but only if we can accurately model the complex human systems which run the world.
Joseph Wright, Co-Founder, Product & Engineering
[Header image: a zoomed-out globe made up of intricate, seemingly moving parts, encircled by a network of nodes and connections]

High failure rates are the norm in human systems

This startup has a 90% chance of failure. That level of risk is unacceptable for modes of transport, methods of energy production, and medical procedures, but with startups it’s normal and investors will take the bet. Businesses suffer similar failure rates in most major endeavours: 95% of new products fail, half of marketing spend is wasted, and 50% of the Fortune 500 has dropped out in the last 20 years. That churn might be healthy at the macro level, but no CMO or CEO is aiming for failure. The public sector is similar: 80% of government change efforts fail, 90% of large infrastructure projects go over budget (often by more than 50%), and at the time of writing history seems to be starting back up again.
The common thread behind all of these examples is human systems. New products, new campaigns, new policies, and new infrastructure all rely on making things happen with large groups of people: organisations, markets, economies, communities, countries, and cultures. This means they depend fundamentally on understanding and predicting human behaviour, and we are not, and have never been, very good at that.
“Twenty-five hundred years ago it might have been said that man understood himself as well as any other part of his world. Today he is the thing he understands least. Physics and biology have come a long way, but there has been no comparable development of anything like a science of human behavior.” ― B.F. Skinner, Beyond Freedom and Dignity
The reason our ability to achieve things with human systems is so stagnant compared to our ability to achieve things with physical and biological systems is that human systems have much higher levels of complexity.

Complexity fundamentally limits progress

A system is a set of components which interact to perform a function. A complex system is one where the interaction of the components produces emergent behaviour: behaviour which cannot be predicted by looking at those components in isolation. Emergent behaviour only happens while the system is in operation and only happens at the macro level of the components’ interaction. It must be observed as it unfolds, and the behaviour must be considered holistically to be understood.
A power station is a complicated system, not a complex system. Predicting what will happen when you press a button in the control room is a tractable problem. You can follow the cabling, check the monitors, do tests and work out exactly what will happen. A network of individuals in communication is a complex system. Working out what happens when you say something to someone is an intractable problem. There are no cables, no monitors, and tests won’t always yield the same results. The behaviour of the network emerges from the individual decisions of its members, continually unfolding unpredictably. Even if you could decode how a message propagates through the network, you would still need to consider the system holistically to make sure you’re not missing other emergent phenomena, such as secondary messages getting sent which contradict your original message.
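The non-determinism of such a network can be sketched in a few lines. The following toy simulation (an illustrative assumption on our part, not Semilattice’s model: a random contact graph and a fixed per-contact forwarding probability) shows that identical inputs produce different reach every run, because the outcome emerges from many individual decisions:

```python
import random

def spread(n_people=50, p_edge=0.1, p_forward=0.3, seed=None):
    """Simulate one message spreading through a random social network.

    Each person who hears the message independently decides whether to
    forward it to each of their contacts. Returns how many people
    eventually hear it.
    """
    rng = random.Random(seed)
    # Random undirected "who talks to whom" graph.
    contacts = {i: set() for i in range(n_people)}
    for i in range(n_people):
        for j in range(i + 1, n_people):
            if rng.random() < p_edge:
                contacts[i].add(j)
                contacts[j].add(i)

    heard, frontier = {0}, [0]  # person 0 sends the message
    while frontier:
        sender = frontier.pop()
        for friend in contacts[sender]:
            if friend not in heard and rng.random() < p_forward:
                heard.add(friend)
                frontier.append(friend)
    return len(heard)

# Same parameters, different random seeds: the reach of the message
# varies run to run, like repeated tests on a real social network.
print([spread(seed=s) for s in range(5)])
```

Even this drastically reduced model exhibits the key property of the real system: you cannot trace the outcome through any fixed wiring; you can only run it and observe.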
All human systems are complex, with all the good and bad they create coming about through complex processes. Collaboration, infrastructure, trade, and knowledge, along with war, poverty, inequality, and dogma are all emergent phenomena. And human systems never exist in isolation. An organisation overlaps with the city, with the market, and with the cultural lives of its members. Human systems are complex systems of complex systems of complex systems (…), multiplying their complexity and unpredictability.
The core problem of complexity is the uncertainty it creates. Beyond very simple processes and very short timeframes, understanding what will happen, where it will happen, and why is near-impossible. Human complex systems are black boxes shrouded in fog. We do not know how to consistently do things like keep a team on schedule, design the right product, improve employee morale, generate word of mouth, outmanoeuvre a competitor, or change a market perception. We can barely do better than random. We simply do not understand how these things work in the same way we understand how complicated systems like power stations, smartphones, and MRI scanners work. There is a fundamental limit on human efficacy when it comes to human complex systems and this affects decision-making and outcomes at all levels, from freelancers to world leaders.
Unsurprisingly, the scale and nature of this problem is a function of technology, which means it’s growing—and accelerating. It took landlines 40 years to grow tenfold to their peak of 1.25 billion in 2008. Mobile subscriptions overtook landlines in 2002, growing eightfold in just 20 years to number over 8 billion today. The more interconnected people are, the more information they have access to (or cannot avoid), and the more capabilities they have for creating, processing, and distributing that information, the higher the complexity of our social systems. Society used to resemble something like a nicely wired-up early computer. Complex, but somewhat graspable. Most people read the same news, watched the same channels, trusted the same institutions, and lived in geographic communities. Power was neatly centralised and statecraft was 9-5. Now it’s more like the brain than a vacuum tube computer: impenetrable. YouTube, TikTok, and whatever’s next are qualitatively different from what they replaced—sprawling, real-time, democratised, interactive—not merely faster. Journalism has become a decentralised, non-stop words machine, by turns desperate for clicks, in a complicated relationship with the truth, and unflinchingly willing (and able) to speak truth to power. Geography is both an annoying dimension in which our bodies are stuck and a happier, simpler place to which we’d like to return. And statecraft is either in permacrisis mode or the one thing that could really make a difference. On top of all this we’ve now got generative AI, giving humans the ability to synthesise and create even more information and introducing semi-autonomous agentic nodes into our complex social systems.

The human element has held back modelling solutions

The problem of complexity is caused by a lack of both data and the knowledge of how to compute those data. The data required to pierce the uncertainty of a network might include information on the members of the network, how they are connected, by what medium they are connected, the history of messages they’ve sent through those connections, and so on. The knowledge of how to compute those data involves knowing how those data, or the events and actions they record, interact to generate the system’s emergent behaviour. To have knowledge of how to compute the data means having a model—a model of reality. For every message there are probabilities of forward communication to any node in the network, determined by the internal state of the individual who receives the message and then further by the internal states of the others in the network, all affected by myriad other factors which vary through time, such as what they had for breakfast, what they were just looking at in another tab, and what’s going on in the rest of their lives.

While the human-built elements of human complex systems are typically non-complex and relatively easy to model (the features and functionality of the network’s messaging systems), the human elements have so far been much more elusive. Individual humans are complex systems themselves. Their internal biologies, mental processes, and individual histories continuously generate new and emergent feelings, thoughts, emotions, and actions. Knowing that a message will arrive instantly on the phones of certain people in the network is one thing. Knowing which of those recipients will read it, when they’ll read it, what they’ll think, what they’ll do, when they’ll do it, and why is a much harder problem.
This unpredictability, combined with the high cost of human time and with ethical concerns, severely limits the application of the most powerful tool for understanding systems, and complex systems in particular: experimentation. Since a complex system’s behaviour can only be observed as it unfolds, and since complex systems do not behave deterministically, being able to experimentally see what happens for different inputs is incredibly valuable in developing understanding. This can be done either with the actual system under study or with a physical or virtual model of the system. When it’s impractical or impossible to experiment with the actual system, modelling is crucial. It’s how we predict the weather and prevent skyscrapers from falling down during earthquakes. But with human systems, modelling doesn’t work because individual humans are too hard to predict, and you can’t experiment with the real thing because it’s too expensive and risks causing harm.

LLMs are universal human simulators

LLMs are neural networks, loosely modelled on the brain, and their self-supervised training mirrors how humans learn: through the raw experience of reality, continually observing what happens next and learning predictive patterns. LLMs are trained on text from the internet. Most of the text on the internet was written by humans and most of it is about humans: humans talking about themselves, humans talking about each other, humans talking about events in their lives, in their communities, and in the human world more broadly. The internet is arguably the largest set of anthropological data ever assembled. And text is a modelling framework: it can be used to describe, or simulate, the real and unreal at some level of fidelity. All human stories, fictional and nonfictional, are simulations. In addition to being chatbots, we believe LLMs are universal human simulators. Through the modelling framework of human language, they can generate and simulate humans and human behaviour, aligned with the patterns of reality they’ve learnt from the text on the internet.
If you say to an LLM “It’s 9am on Monday morning in London, you’ve just woken up, what do you do next?” it’ll come back with something along the lines of “First I’d check my phone to see if I have any messages and to see what the weather is doing. Then I’ll get up, shower, brush my teeth, and get dressed. Then I’ll make a cup of tea.” The “answer” is obvious to some extent, but the LLM is mentioning the weather and tea because those are probabilistically common concerns if you live in London. You’ll get similarly accurate representations of people’s reality if you ask the same question for different locations. If you ask the question in Greek and for Athens, the LLM will be more likely to omit the weather and opt for coffee over tea.
Historically, agent-based modelling (ABM) has been one of the best approaches to modelling emergent behaviour. Tools like AnyLogic and NetLogo allow you to define the behaviour of agents (humans, cars, animals) and then simulate their interactions in a virtual environment. Beyond AnyLogic’s narrow applications in things like supply chain modelling and NetLogo’s use in academia, they haven’t had much impact on the problem of understanding human systems because of the core problem of the unpredictability of the individual. Having to define the behaviour of simulated humans by hand limits you to only the most reductive or abstract models. It simply takes too long, and no hand-written ruleset can realistically capture all the possible things humans might do in different situations. The ability of LLMs to generate probable human behavioural responses to different situations completely changes how models like these can work.
A number of social scientists and economists have had similar ideas. From replicating political partisanship studies and behavioural economics experiments, to modelling public opinion, journalism, social networking, and even an “AI Town” which directly implemented an ABM architecture (albeit at great cost, and only to game-like realism), a number of research papers have explored using LLMs to simulate human behaviour since the introduction of GPT-3. Going deeper, a pseudonymous alignment researcher named janus wrote a lengthy post titled Simulators which comprehensively developed the idea. It points to the long-term potential of Semilattice: the possibility of mapping, modelling, and predicting complex human systems. It argues that by learning to predict what token comes next, what the LLM is really learning is the underlying “conditional structure of the universe”—the rules and patterns which generate the emergent complexity which currently shrouds us in uncertainty.

Semilattice is a human world simulation platform

The mission of Semilattice is to simulate the human world to advance society. We believe significant advances in decision-making, management, and governance are possible, yielding much higher success rates for ambitious projects, but only if we can understand how complex human systems really work. Given we can’t experiment with these systems in reality, and given their complexity is far beyond the capacity of the human brain, the only way we will be able to develop this deeper understanding is through accurate computational simulation.
The accurate computational simulation of systems at this level of complexity is extremely hard. The combinatorial possibilities of even the simplest human systems explode within a few timesteps, easily exceeding the number of atoms in the universe. Difficult technical challenges abound: the necessary reductions and abstractions, the temporal depth of the simulation, the modelling of the physical and social environments, the mental processes of the simulated people, and the real-time alignment with the moving target of reality. We may ultimately need computational paradigm shifts (e.g. quantum or biological) to get there.
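A back-of-the-envelope sketch makes the scale concrete. The numbers here (10 people, 10 possible actions each per timestep, and 10^80 as the usual order-of-magnitude estimate for atoms in the observable universe) are illustrative assumptions, not figures from Semilattice:

```python
# Toy model: 10 people each pick one of 10 actions per timestep, so the
# system has 10**10 joint choices per step, and the number of distinct
# trajectories after t steps is (10**10)**t = 10**(10*t).
PEOPLE, ACTIONS = 10, 10
ATOMS_IN_UNIVERSE = 10 ** 80  # common order-of-magnitude estimate

states_per_step = ACTIONS ** PEOPLE
for t in range(1, 10):
    trajectories = states_per_step ** t
    note = " <- more trajectories than atoms" if trajectories > ATOMS_IN_UNIVERSE else ""
    print(f"t={t}: 10^{10 * t} possible trajectories{note}")
```

Nine timesteps of this toy system already yield 10^90 possible trajectories, ten billion times the number of atoms in the universe, which is why brute-force enumeration is off the table and reductions and abstractions are unavoidable.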
The most important aspect of this vision is not the exciting technical challenges though. It’s the primacy of accuracy. Semilattice is in the business of modelling and predicting reality. Reality is the goal. It’s trivial to get a model to output a prediction, but that prediction is useless unless it tells you something true about reality. The difficulty of maintaining accuracy when predicting complex emergent behaviour over longer time spans is why we picked a more tractable use case to get started: simulating how groups of people answer questions, i.e. surveys and polls.
Our first product is both a proof of concept of the ability to model and predict complex humans using LLMs and a breakthrough research tool. Developed through months of research and thousands of experiments, its prompting system simulates humans and deterministically yields highly accurate results. It enables decision-makers to create models of specific target audiences using a small sample of survey data. Those models can then predict how those target audiences would answer any multiple choice question in around 30 seconds. Benchmarked at >87% accuracy compared to surveys and polls, it predicts roughly 7-8 questions out of 10 correctly, reducing the time and cost of primary market research by 3-5 orders of magnitude and enabling new information about the user and the market to be generated for any decision.
Primary research is mostly used today for periodic market insights and big, strategic decisions—it doesn’t serve the multiplicity of day-to-day decisions operators have to make. We envision every user-focused person in product, design, marketing, and sales having access to Semilattice; able to open a tab and get answers from their audience in seconds. And we see a clear technical path to expand the product to cover all research use cases, from qualitative interviews and focus groups to multi-modal creative feedback and user testing. By modelling humans with AI, new programmatic use cases are also unlocked, such as direct integrations into creative tools (“Ask your audience about this design, paragraph, UX”), automatic evaluations triggered on deploy (“Branch onboarding-v2 is predicted to reduce new user conversions by 3%”), or simply as net new information for existing data pipelines.
New technologies change the fundamentals. Today’s research methods were developed to solve the problem of the high cost of finding things out about humans. It’s technically infeasible, and a privacy nightmare, to observe human behaviour directly, so the research industry developed sampling and questioning techniques. These were revolutionary, but what humans say about themselves does not always match what they actually do. Semilattice will expand to model their actual behaviour, unlimited by physics and privacy constraints. Our second wave of products will enable the simulation of real-world phenomena: actual purchasing, actual product usage and adoption, actual word of mouth, actual launch impact, actual policy adoption. We anticipate building these applications with custom models that are designed to compute the dynamics and outcomes of these specific processes.
If a decision-maker asks the question “What will the market impact of this product change be over the next month?”, they can get an answer to that question—but should they be asking other questions? A positive market effect could occur along with a negative perception change amongst a market segment which the company isn’t targeting yet. And this might suddenly flip positive years later as a result of other trends or competitor behaviour. Semilattice’s third phase products will be built around a fuller, more holistic model which simulates recursive human behaviour in nested social, physical, and informational environments. An operator seeking insight on market impact will also be given proactive reporting on the second and third and fourth order effects of their decision. At this stage, Semilattice will continually improve and align a singular human world model in which decision-makers can ask many questions, exploring decision counterfactuals and inspecting complex dynamics. What we’re really talking about is seizing control of time. The world model will be real-time synced with reality, always ready to generate simulations into possible futures. Decision-makers will branch time, defining interventions they are considering in reality, simulating their effects, comparing their outcomes, pausing, rewinding, zooming in on the causal dynamics, iterating their interventions and re-simulating—all towards making better, less uncertain, more complexity-aware decisions.
We believe humanity is stuck; stuck at the level of advancement achievable without the ability to break through the complexity of human systems. As a technology company, we are primarily interested in building the tools that can answer what-if questions about human society: our customers and users will be the ones asking the questions. But we are building Semilattice because we believe those questions will unavoidably, though not necessarily directly or immediately, lead to the discovery of better ways of doing things. We are not blind defenders of the status quo. We do not know the answers, but we know there are answers. Answers to questions about how to collaborate and coexist—about how to organise society.