Blog / Product · 2026-04-02

Context

Fabian Teichmueller, Co-Founder, Commercial & Operations
About a year ago we started building an API for audience simulation. We were focused on accuracy: how close could simulated responses get to real-world ground truth? We reached 91% across 207 questions against the FCA Financial Lives Survey, and we were proud of that.

Then we started putting it in front of people.

Here's how it works. When you sign up, you give us a URL. We pull your site and extract what we can see about your product, your positioning, your competitors, your pricing. We put it into a structured document and show it back to you. Does this look right? This is the context we use to frame every question you ask and every simulation you run.

We expected users to glance at it and move on to the simulation. Nobody did.

A head of product at a consumer lending company spent ten minutes correcting product names and updating competitor rates before he'd run a single question. When he was done, the simulation results were specific to his actual market. A growth lead at a European payments company saw us list a US-only competitor as dominant in her region. She corrected it, reran the extraction, and the questions the system generated shifted noticeably. When context is right, users trust the output. When it's wrong, nothing downstream matters.

The pattern held across ten calls. Users interrogated the context like it was the product. We realised it is the product.

Here's what we think is happening.

Product managers used to hold all the context. User research, competitive intelligence, market data, engineering constraints, business goals. It passed through them. That was the job: synthesise what the company knows about its users with what the market is doing, and point the team in the right direction.

That job depended on an information pipeline. Quarterly research studies fed insights into planning cycles. Analytics dashboards tracked what happened last month. Competitive reports landed once a quarter. The PM synthesised it, built the case, and the team built the thing. It worked because the build cycle was slow enough for the information to stay current.

That pipeline no longer works. Teams ship in days. One PM told us his product team is struggling to keep up with his development team. "That's always been the other way around," he said. PMs come to us partly because they struggle to justify the cost of tools whose insights don't arrive when decisions actually happen. The reasoning is fine. The picture of reality is stale.

But PMs are more important now, not less. Every company still has one set of users. Someone has to decide what to build for them, and that decision is higher-stakes when building is cheap. You can ship ten things in a week. Which ones should exist? That's a context problem.

We built the simulation first. We're now starting to think that simulation is what makes a context engine useful, and context is what makes simulation useful. They need each other.

When you put in your URL, we generate a structured document: what your product does, how it's positioned, who your competitors are, what they charge, where the market is moving. You correct what's wrong, add what's missing. That corrected context feeds every question you ask and every simulation you run. Better context in, better questions out, better answers back.
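As a rough illustration of that loop (our actual schema is internal; every field and function name below is hypothetical), the extracted context can be thought of as a small structured record that users correct, with each correction feeding back into question generation:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the extracted context document.
# Field names are illustrative, not Semilattice's actual schema.
@dataclass
class ProductContext:
    product_name: str
    positioning: str
    competitors: list[str] = field(default_factory=list)
    pricing: dict[str, str] = field(default_factory=dict)  # plan -> price
    corrections: list[str] = field(default_factory=list)   # user edits, kept as a trail

    def apply_correction(self, field_name: str, value, note: str = "") -> None:
        """Record a user correction so downstream questions use the fixed value."""
        setattr(self, field_name, value)
        self.corrections.append(note or f"updated {field_name}")

# Example mirroring the growth lead above: remove a US-only competitor
# that the extraction wrongly listed as dominant in her region.
ctx = ProductContext(
    product_name="ExamplePay",
    positioning="payments for European SMBs",
    competitors=["US-only Corp", "EU Rival"],
)
ctx.apply_correction("competitors", ["EU Rival"], "removed US-only competitor")
```

The point of the sketch is the feedback edge: corrections are not just edits to a document, they are inputs to everything generated afterwards.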

Without context, the simulation doesn't know enough about your situation to be specific. Without simulation, the context just sits there and drifts out of date.

Together, the context tells the simulation what to pay attention to, and the simulation turns context into something you can act on.

The hard part is where these two things touch. The audience model already captures what individuals know about a product. That's a clear modelling target. But the PM's context shapes how questions get asked, and asking "what do you understand about X" risks telling the model what X is. The challenge isn't conceptual. It's practical: how do you use rich context to generate good questions without biasing the answers? Every time we got the context right, the product worked. Every time we got it wrong, nothing else helped. That's where our attention is now.
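The biasing problem can be made concrete with a toy contrast (these templates and names are our illustration, not the production question generator): a question that restates context facts hands the simulated respondent knowledge they may not have, while a question that uses context only to pick the topic leaks far less.

```python
# Toy illustration of the biasing problem. Product name and templates
# are hypothetical, not Semilattice's actual system.
context = {
    "product": "ExamplePay",
    "positioning": "a payments app for European SMBs",
}

# Leaky: the question restates the positioning, so the simulated
# respondent "knows" it whether or not the real audience would.
leaky = (
    f"What do you understand about {context['product']}, "
    f"{context['positioning']}?"
)

# Safer: context chooses the topic, but no facts about it cross over.
topic_only = f"What, if anything, do you know about {context['product']}?"

assert context["positioning"] in leaky
assert context["positioning"] not in topic_only
```

Even the safer form isn't free: the product name itself is context. The open question is how much framing you can use before the answer stops measuring the audience and starts measuring the prompt.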