A look at this platform through the lens of a product-led engineer. A diagnosis and a proposal from someone who has spent three years building its core, and the last year working closely with researchers, with frontier AI at the centre of that work.
Every conversation in software today starts with AI. This one starts somewhere else: with the two engines that have always been at the heart of this platform, and an honest look at whether they are ready for the era the industry is walking into.
Strip away everything else. At its core, Rival is two things. Always has been.
Researchers use it to design and build surveys: questions, logic, routing, quotas, translations. From brief to live study. The authoring engine is where research intent becomes a deployed instrument.
Respondents use it to answer surveys. The responding engine is the runtime that delivers the right question to the right person, records what they say, and keeps the session alive. This is where data is born.
"What if the platform at the centre of everything being built here was not designed for the speed, flexibility, and intelligence that modern research demands? And never could be, without rebuilding it?"
The current platform was built correctly for its time. But that time is not now. AI-era research requires things the core structurally cannot do.
AI needs to write and deploy surveys without a programmer in the loop. The current authoring engine was built for humans clicking through a UI. It was not built for agents generating survey logic from a brief.
Modern research needs surveys that adapt in real time: pulling live data, rewriting questions mid-session, running entirely different instruments for different respondents. The current engine runs fixed scripts.
AI-moderated interviews are long-running, stateful sessions. The current infrastructure was built for short stateless completions. Running 1,000 AI interviews simultaneously is not a configuration problem. It is an architecture problem.
Researchers should be able to ask questions of their data in plain language and get answers immediately. That requires data that is self-describing from the moment it is written, not data that needs an ETL job and a BI tool before it is queryable.
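To make that concrete: a minimal sketch, in TypeScript, of what a self-describing answer record could look like. Every field name here is illustrative, not the platform's actual schema; the point is that the row carries the question metadata an agent needs to interpret it, with no data dictionary or ETL step in between.

```ts
// Illustrative sketch only. The answer row embeds the question it answers,
// so any agent reading the data can interpret it without an external
// data dictionary, ETL job, or BI semantic layer.
type SelfDescribingAnswer = {
  studyId: string;
  sessionId: string;
  question: {
    id: string;
    text: string;                        // the wording the respondent saw
    type: "single" | "multi" | "scale" | "open";
    options?: string[];                  // choice labels, if any
  };
  value: string | string[] | number;     // the answer itself
  answeredAt: string;                    // ISO-8601 timestamp
};

const record: SelfDescribingAnswer = {
  studyId: "study_123",
  sessionId: "sess_456",
  question: {
    id: "q7",
    text: "How satisfied are you with your current provider?",
    type: "scale",
    options: ["1", "2", "3", "4", "5"],
  },
  value: 4,
  answeredAt: "2025-01-15T14:03:22Z",
};
```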
None of these work on the current platform. Most don't work on any survey platform. But every one of them runs on what has been built here. And every study was authored by an AI agent from a research brief, not a programmer.
A researcher takes a study from brief to toplines without leaving the platform. No handoffs. No exports. One conversation.
No handoff to a programmer. No export to a separate BI tool. No waiting for an analyst. Research brief, deployed survey, crosstabs, verbatim analysis, client-ready report: one conversation.
No dashboard to learn. No query to write. Ask a question in plain language, get an answer backed by actual data.
Toplines in seconds. Segmentation cuts on demand. Ask one study or ask across dozens. The data answers back.
The same ClickHouse instance that powers the DataTalk MCP and the knowledge base also powers a custom analytics dashboard, replacing Sisense entirely. One source of truth. No ETL pipeline between the survey and the chart. A respondent submits an answer; it is in the dashboard.
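A minimal sketch of that write-then-read path, assuming the official @clickhouse/client Node package. Table, column, and environment-variable names are illustrative, not the platform's actual schema.

```ts
import { createClient } from "@clickhouse/client";

// Illustrative names throughout. The responding engine writes to the same
// ClickHouse instance the dashboard and DataTalk MCP read from, so an
// answer is queryable the moment it lands. No ETL pipeline in between.
const clickhouse = createClient({ url: process.env.CLICKHOUSE_URL });

// Responding engine: persist an answer as it is submitted.
await clickhouse.insert({
  table: "answers",
  values: [{ study_id: "study_123", session_id: "sess_456", question_id: "q7", value: "4" }],
  format: "JSONEachRow",
});

// Dashboard / DataTalk: read it back immediately.
const result = await clickhouse.query({
  query: `SELECT question_id, count() AS n
          FROM answers
          WHERE study_id = {study:String}
          GROUP BY question_id`,
  query_params: { study: "study_123" },
  format: "JSONEachRow",
});
console.log(await result.json());
```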
Authoring and data access are first-class primitives, exposed through MCP servers and CLIs. The platform is not an island. It is infrastructure.
Any AI agent, any internal tool, any client workflow can call them. Build on top of the platform without touching the platform. Integrate into existing pipelines without ETL.
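As a sketch of what "first-class primitive" means in practice, here is a hypothetical DataTalk-style tool built with the MCP TypeScript SDK. The server name, tool name, parameters, and helper are invented for illustration; the mechanism, that any MCP-speaking agent can call it, is the point.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical helper: turns a plain-language question into a query against
// the answers table. Stubbed here; only the tool surface matters.
async function runAgainstClickHouse(studyId: string, question: string): Promise<unknown[]> {
  return [];
}

const server = new McpServer({ name: "datatalk", version: "0.1.0" });

// Any AI agent, internal tool, or client workflow that speaks MCP can call
// this. The platform does not care who the caller is.
server.tool(
  "query_study",
  {
    studyId: z.string(),
    question: z.string().describe("a plain-language question about the data"),
  },
  async ({ studyId, question }) => {
    const rows = await runAgainstClickHouse(studyId, question);
    return { content: [{ type: "text", text: JSON.stringify(rows) }] };
  },
);

await server.connect(new StdioServerTransport());
```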
A voice interview is the same survey with a different modality. Same isolation model. Same authoring flow. 1,000 respondents in parallel.
Running a 1,000-respondent voice study is not a different product or a different team. The program is the same. The infrastructure is the same. Voice is a modality, not a separate platform.
Every completed study feeds a structured data layer. What used to live in slide decks becomes queryable, connected, alive.
Future studies can draw from past findings. Analysis agents reference prior work automatically. The platform does not just run research. It accumulates it.
Every study runs in its own sandboxed environment. Each respondent gets a fully isolated session. Scale from hundreds to tens of thousands without infrastructure changes.
Their data, their language, their path. A breach in one session cannot reach another. No capacity planning. No cost that grows linearly with volume.
Survey Guard, Transcription, Crosstabs, Toplines, and Rival authoring. Separate tools today. One connected platform tomorrow.
Reach3's moat is its researchers. Until now, the tools they rely on have been disconnected: Survey Guard for data quality, Transcription Assistant for voice IDIs, Note-taking, Crosstabs, Mobile Toplines. Each a separate workflow. None connected to a live data layer. And on the other side: Rival for authoring surveys, fielding them, and pulling or exporting data. Two separate worlds, two separate contexts, no shared foundation.
The new platform makes every one of these a first-class feature. Survey Guard runs directly on collected data through the DataTalk MCP. Crosstabs and toplines build from real connected data, not exports. Transcription and note-taking become native to semi-guided voice interviews. Bot detection through IP, geolocation, VPN fingerprinting, and honey-trap questions ships as a responding-layer primitive, not a bolt-on.
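A sketch of how those responding-layer signals might combine into a single decision. The weights and threshold are invented, not the platform's actual rules; what matters is that the check runs per submission, inside the session, rather than as post-field cleanup.

```ts
// Illustrative scoring only. Signal names mirror the ones above; the
// weights and cut-off are invented for the sketch.
type SessionSignals = {
  ipReputationBad: boolean;  // known datacentre or proxy IP range
  geoMismatch: boolean;      // claimed location vs IP geolocation
  vpnFingerprint: boolean;   // network fingerprint consistent with a VPN
  honeyTrapFailed: boolean;  // answered a question only a bot would see
};

function fraudScore(s: SessionSignals): number {
  let score = 0;
  if (s.ipReputationBad) score += 0.3;
  if (s.geoMismatch) score += 0.2;
  if (s.vpnFingerprint) score += 0.2;
  if (s.honeyTrapFailed) score += 0.5; // strongest single signal
  return Math.min(score, 1);
}

// Runs in the responding layer on every submission, so a flagged session
// can be terminated mid-survey instead of scrubbed from an export later.
const suspect = fraudScore({
  ipReputationBad: true,
  geoMismatch: false,
  vpnFingerprint: false,
  honeyTrapFailed: true,
}) >= 0.6;
```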
A researcher plans a study in a conversational interface, authors it through that same interface, fields it, and gets insights without switching tools or waiting for a data export. The sky is the limit.
The platform absorbs QA, DevOps, and observability as built-in properties. A small team delivers what larger teams with more functions cannot.
Automated validation catches logic errors before any respondent sees the survey. Synthetic respondents run every path. Deployment is a single command with no infrastructure to provision, no capacity to plan, no on-call rotation to staff. Near-zero idle infrastructure cost means the team is not maintaining servers between studies.
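A minimal sketch of the validation idea, assuming the survey is a JSON program of questions with explicit branch targets. Field names are illustrative; the point is that every path can be walked mechanically before a single respondent is invited.

```ts
// Walks every branch of a (simplified) survey program, flagging branches
// that point at missing questions and questions no path can reach.
type Question = { id: string; next: string[] }; // branch targets

function validatePaths(questions: Question[], startId: string): string[] {
  const byId = new Map(questions.map((q) => [q.id, q]));
  const errors: string[] = [];
  const seen = new Set<string>();
  const stack = [startId];

  while (stack.length > 0) {
    const id = stack.pop()!;
    if (seen.has(id)) continue;
    seen.add(id);
    const q = byId.get(id);
    if (!q) {
      errors.push(`branch points at missing question: ${id}`);
      continue;
    }
    stack.push(...q.next);
  }

  for (const q of questions) {
    if (!seen.has(q.id)) errors.push(`unreachable question: ${q.id}`);
  }
  return errors;
}
```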
A team of 3-4 engineers absorbs what currently requires separate QA, DevOps, and infrastructure functions and still ships faster. The organisation gets more leverage per headcount than is possible on the current platform.
Observability and structured logging are built in from day one. PII is redacted at the logging layer before anything is written: not as a retrofit, but as a default. Developers have direct access to production logs without security reviews, ticketing systems, or Ops gatekeeping. Because the logs never contain sensitive data, the security concern that justified the gatekeeping does not exist. Issues are diagnosed in minutes. The bureaucracy disappears.
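A sketch of redaction at the logging layer, with an invented deny-list. The mechanism is what the paragraph above describes: sensitive fields are masked before the record is serialised, so nothing downstream ever sees them.

```ts
// Invented deny-list; a real one would be broader and policy-driven.
const PII_KEYS = new Set(["email", "phone", "name", "ip", "address"]);

// Recursively masks any field whose key is on the deny-list, before the
// record is ever serialised or written.
function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        PII_KEYS.has(k.toLowerCase()) ? [k, "[REDACTED]"] : [k, redact(v)],
      ),
    );
  }
  return value;
}

function log(event: string, payload: Record<string, unknown>): void {
  console.log(JSON.stringify({ event, ts: new Date().toISOString(), data: redact(payload) }));
}

log("answer.submitted", { sessionId: "sess_456", email: "jane@example.com", value: 4 });
// => {"event":"answer.submitted","ts":"...","data":{"sessionId":"sess_456","email":"[REDACTED]","value":4}}
```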
Pass a persona definition and the survey schema to an LLM. Get a complete response back. Run it a thousand times. No live connection. No browser automation.
Because the survey is a single JSON schema with a defined program structure, any LLM can generate responses for it directly: no live survey connection required, no browser automation, no WebSocket tapping. Pass an elaborate persona definition and the survey schema to a model. Get a complete response back in one call. Run that call a thousand times across different personas.
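A minimal sketch of that single call, using the OpenAI Node SDK as a stand-in for whichever model actually runs it. The persona, schema shape, and model name are all illustrative.

```ts
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// One synthetic respondent = one call: persona plus schema in, a JSON map
// of question id to answer out. Fan this out across a thousand personas.
async function syntheticResponse(persona: string, surveySchema: object): Promise<Record<string, unknown>> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content: `You are a survey respondent. Persona: ${persona}. ` +
          "Answer every question in the schema. Reply with a JSON object mapping question id to answer.",
      },
      { role: "user", content: JSON.stringify(surveySchema) },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}

const answers = await syntheticResponse(
  "38-year-old nurse in Manchester, price-sensitive, does not own a car",
  { questions: [{ id: "q1", text: "Do you own a car?", type: "single", options: ["Yes", "No"] }] },
);
```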
Those responses run against the headless survey on the same infrastructure. No AI in the response loop at execution time. A fraction of the cost of computer-use approaches.
Then go a step further: fine-tune a model on Rival Group's actual response data, mapped to anonymised respondent profiles. That model takes the survey schema and generates responses that behave like real people. Trained on them. Synthetic fieldwork that gets closer to reality the more real data feeds it.
Seven impossible things, working today. Here is the platform that makes them run.
Survey session state is an unsolved problem for every platform in the market. This solves it by recognising that a survey is a program, and building a runtime where the program's own execution holds the state. No serialisation. No coordination. No ceiling.
Each respondent session is completely isolated: its own runtime, its own memory, its own stored answers. They have no awareness of each other. A crash in one affects no one else.
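One way to picture "the program's own execution holds the state": a sketch where the survey is a generator function. Locals live in the suspended frame between questions, so nothing is serialised or coordinated; one instance per respondent gives the isolation described above. Question shapes are illustrative, and a production runtime would need durable suspension behind this.

```ts
type Q = { id: string; text: string };

// The survey is a program. Its local variables, `answers` included, live in
// the suspended generator frame between questions: no session store, no
// serialisation, no coordination with any other respondent's session.
function* survey(): Generator<Q, Record<string, string>, string> {
  const answers: Record<string, string> = {};
  answers.q1 = yield { id: "q1", text: "Do you own a car?" };
  if (answers.q1 === "yes") {
    answers.q2 = yield { id: "q2", text: "What make is it?" };
  }
  answers.q3 = yield { id: "q3", text: "Anything else to add?" };
  return answers; // the stored record for this session
}

// One isolated session: the runtime resumes the frame with each answer.
const session = survey();
let step = session.next();                // first question
step = session.next("yes");               // answer q1, receive q2
step = session.next("Toyota");            // answer q2, receive q3
const record = session.next("No.").value; // session complete
```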
"The ceiling of what a survey can do is not what the platform team built controls for. It is what an AI can express in a program against the question schema. Which is effectively unbounded."
100 respondents and 100,000 respondents use identical infrastructure. Load spreads to the nearest data centre automatically.
This is not a feature list. Each capability exists because the layer below it made it possible. Remove any one layer and multiple capabilities above it collapse.
Traditional survey infrastructure bills for time: servers running, connections held open, sessions kept alive. A respondent reading for 30 seconds is 30 seconds of cost. The new responding engine inverts that: the session is a suspended program, idle time costs close to nothing, and spend tracks activity rather than elapsed seconds.
Rival has panel infrastructure, enterprise relationships, methodology credibility, and compliance track record that AI-native competitors are spending years and hundreds of millions to acquire. What has been missing is a platform that matches those assets.
The new authoring engine and responding engine change that. The platform becomes what the moat always deserved.