Rival

Reimagined.

A look at this platform through the lens of a product-led engineer. A diagnosis and a proposal from someone who has spent three years building its core, and the last year working closely with researchers, with frontier AI at the centre of that work.

Rohan Gore
Lead Software Engineer  ·  Product-led Engineer
The case for change

Before we talk about AI,
let's talk about what
Rival actually is.

Every conversation in software today starts with AI. This one starts somewhere else: with the two engines that have always been at the heart of this platform, and an honest look at whether they are ready for the era the industry is walking into.

Authoring 2.0 Responding 2.0 Data 2.0 Platform 2.0
What Rival is

Two engines.
Everything else is downstream.

Strip away everything else. At its core, Rival is two things. Always has been.

01
Authoring Engine

Researchers use it to design and build surveys: questions, logic, routing, quotas, translations. From brief to live study. The authoring engine is where research intent becomes a deployed instrument.

02
Responding Engine

Respondents use it to answer surveys. The responding engine is the runtime that delivers the right question to the right person, records what they say, and keeps the session alive. This is where data is born.

💡
Data is not a third engine. It is the output of responding. The quality, richness, and speed of data are entirely determined by what the responding engine can do. If responding is constrained, data is constrained. Fix responding and everything downstream follows: analysis, reporting, dashboards, cross-wave tracking. All of it flows from what responding produces.
The challenge

The core was built
for a different era.

"What if the platform at the centre of everything being built here was not designed for the speed, flexibility, and intelligence that modern research demands? And never could be, without rebuilding it?"

The current platform was built correctly for its time. But that time is not now. AI-era research requires things the core structurally cannot do.

Authoring

AI needs to write and deploy surveys without a programmer in the loop. The current authoring engine was built for humans clicking through a UI. It was not built for agents generating survey logic from a brief.

Responding

Modern research needs surveys that adapt in real time: pulling live data, rewriting questions mid-session, running entirely different instruments for different respondents. The current engine runs fixed scripts.

Scale

AI-moderated interviews are long-running, stateful sessions. The current infrastructure was built for short stateless completions. Running 1,000 AI interviews simultaneously is not a configuration problem. It is an architecture problem.

Data

Researchers should be able to ask questions of their data in plain language and get answers immediately. That requires data that is self-describing from the moment it is written, not data that needs an ETL job and a BI tool before it is queryable.

Just scratching the surface

Let's look at some interesting things
Rival Reimagined can solve.

None of these work on the current platform. Most don't work on any survey platform. But every one of them runs on what has been built here. And every study was authored by an AI agent from a research brief, not a programmer.

Responding 2.0
01
Live catalog conjoint
A CPG client wants concepts tested against their actual live inventory. Not static images uploaded in advance. The catalog changes overnight.
Responding 2.0
02
Adaptive MaxDiff
40 drug attributes. The study learns which are well-differentiated mid-field and concentrates on the uncertain ones. No two respondents see the same item sets.
Responding 2.0
03
Live event study
Political research during a live policy announcement. Quota cells rebalance in real time. The final question is written from what actually happened.
Responding 2.0
04
CRM-personalised concept test
5 store concepts. Which one a respondent sees is determined by their actual purchase history, pulled live from the client's CRM.

Responding 2.0
05
Live voice interview
The same survey program delivered as a live bidirectional voice conversation. The AI speaks and listens. Coverage objectives, not a fixed question list.
Authoring 2.0
06
Brief to deployed survey through claude.ai
A researcher describes the study in plain language. Claude authors, validates, and deploys it. The researcher never opens Studio.
Data 2.0
07
DataTalk MCP
A researcher asks a question. ClickHouse answers in seconds. No dashboard, no export, no analyst in the loop. Cross-study comparisons in a conversation.
What this unlocks  1 of 3

A researcher who can
do everything.

🌟
Plan, build, field, and analyse. One platform.

A researcher takes a study from brief to toplines without leaving the platform. No handoffs. No exports. One conversation.

No handoff to a programmer. No export to a separate BI tool. No waiting for an analyst. Research brief, deployed survey, crosstabs, verbatim analysis, client-ready report: one conversation.

Authoring 2.0 Data 2.0
🔗 Unlocked by one schema driving headless authoring, UI, and responding
🗣
Talk to the data. Any of it.

No dashboard to learn. No query to write. Ask a question in plain language, get an answer backed by actual data. And replace Sisense entirely.

Toplines in seconds. Segmentation cuts on demand. Ask one study or ask across dozens. The data answers back.

The same ClickHouse instance that powers the DataTalk MCP and the knowledge base also powers a custom analytics dashboard, replacing Sisense entirely. One source of truth. No ETL pipeline between the survey and the chart. A respondent submits an answer; it is in the dashboard.

Data 2.0
🔗 Unlocked by the DataTalk MCP
🔗 Unlocked by a single large-scale OLAP database (ClickHouse)
🔗
Connect to any tool, any workflow.

Authoring and data access are first-class primitives through MCP servers and CLIs. The platform is not an island. It is infrastructure.

Any AI agent, any internal tool, any client workflow can call them. Build on top of the platform without touching the platform. Integrate into existing pipelines without ETL.

Authoring 2.0 Platform 2.0
🔗 Unlocked by the Authoring MCP
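To make "first-class primitive" concrete, here is a minimal sketch of an authoring tool exposed over MCP, using the public @modelcontextprotocol/sdk. The tool name, its contract, and the deploySurvey call are hypothetical stand-ins, not the platform's actual API.

```js
// Sketch: authoring exposed as an MCP tool that any agent can call.
// The tool contract and deploySurvey are hypothetical stand-ins.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical stand-in for the platform's real deploy call.
async function deploySurvey(schema) {
  return { studyId: "study_123", url: "https://surveys.example.com/study_123" };
}

const server = new McpServer({ name: "rival-authoring", version: "0.1.0" });

server.tool(
  "deploy_survey",
  { schema: z.string().describe("Survey program and schema as JSON") },
  async ({ schema }) => {
    const { studyId, url } = await deploySurvey(JSON.parse(schema));
    return { content: [{ type: "text", text: `Deployed ${studyId}: ${url}` }] };
  }
);

await server.connect(new StdioServerTransport());
```

Any MCP-capable agent, Claude included, can now discover and call deploy_survey without bespoke integration work.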
What this unlocks  2 of 3

A researcher who can
do everything.

🎤
Voice at scale. Native, not bolted on.

A voice interview is the same survey with a different modality. Same isolation model. Same authoring flow. 1,000 respondents in parallel.

Running a 1,000-respondent voice study is not a different product or a different team. The program is the same. The infrastructure is the same. Voice is a modality, not a separate platform.

Responding 2.0 Platform 2.0
🔗 Unlocked by long-running worker agent infrastructure
📚
The knowledge base that builds itself.

Every completed study feeds a structured data layer. What used to live in slide decks becomes queryable, connected, alive.

Future studies can draw from past findings. Analysis agents reference prior work automatically. The platform does not just run research. It accumulates it.

Data 2.0 Platform 2.0
🔗 Unlocked by a single large-scale OLAP database (ClickHouse)
🔒
Secure, sandboxed, multilingual. For every respondent. At any scale.

Every study runs in its own sandboxed environment. Each respondent gets a fully isolated session. Scale from hundreds to tens of thousands without infrastructure changes.

Their data, their language, their path. A breach in one session cannot reach another. No capacity planning. No cost that grows linearly with volume.

Responding 2.0 Platform 2.0
🔗 Unlocked by long-running worker agent infrastructure
What this unlocks  3 of 3

A researcher who can
do everything.

🤝
Reach3 researchers. One technology stack. For the first time.

Survey Guard, Transcription, Crosstabs, Toplines, and Rival authoring. Separate tools today. One connected platform tomorrow.

Reach3's moat is its researchers. Until now, the tools they rely on have been disconnected: Survey Guard for data quality, Transcription Assistant for voice IDIs, Note-taking, Crosstabs, Mobile Toplines. Each a separate workflow. None connected to a live data layer. And on the other side: Rival for authoring surveys, fielding them, and pulling or exporting data. Two separate worlds, two separate contexts, no shared foundation.

The new platform makes every one of these a first-class feature. Survey Guard runs directly on collected data through the DataTalk MCP. Crosstabs and toplines build from real connected data, not exports. Transcription and note-taking become native to semi-guided voice interviews. Bot detection through IP, geolocation, VPN fingerprinting, and honey-trap questions ships as a responding-layer primitive, not a bolt-on.

A researcher plans a study in a conversational interface, authors it through the same, fields it, and gets insights without switching tools or waiting for a data export. The sky is the limit.

Authoring 2.0 Responding 2.0 Data 2.0
3-4 developers. No QA team. No DevOps. Delivering at pace.

The platform absorbs QA, DevOps, and observability as built-in properties. A small team delivers what larger teams with more functions cannot.

Automated validation catches logic errors before any respondent sees the survey. Synthetic respondents run every path. Deployment is a single command with no infrastructure to provision, no capacity to plan, no on-call rotation to staff. Near-zero idle infrastructure cost means the team is not maintaining servers between studies.

A team of 3-4 engineers absorbs what currently requires separate QA, DevOps, and infrastructure functions and still ships faster. The organisation gets more leverage per headcount than is possible on the current platform.

Observability and structured logging are built in from day one. PII is redacted at the logging layer before anything is written: not as a retrofit, as a default. Developers have direct access to production logs without security reviews, ticketing systems, or Ops gatekeeping. Because the logs never contain sensitive data, the security concern that justified the gatekeeping does not exist. Issues are diagnosed in minutes. The bureaucracy disappears.

Platform 2.0
🔗 Unlocked by one schema driving headless authoring, UI, and responding
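A hedged sketch of what redaction-at-the-logging-layer can look like. The field list, patterns, and logger shape are illustrative, not the platform's actual implementation.

```js
// Sketch: PII never reaches disk because the logger redacts before writing.
// The PII key list and the email pattern are illustrative.
const PII_KEYS = new Set(["email", "phone", "name", "ip", "address"]);
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;

function redact(value, key = "") {
  if (PII_KEYS.has(key.toLowerCase())) return "[REDACTED]";
  if (typeof value === "string") return value.replace(EMAIL_RE, "[REDACTED]");
  if (Array.isArray(value)) return value.map((v) => redact(v));
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, redact(v, k)])
    );
  }
  return value;
}

function log(event, payload) {
  // Redaction happens here, before anything is written anywhere.
  console.log(JSON.stringify({ event, ...redact(payload) }));
}

// log("answer_recorded", { respondent: "resp_a3f8", email: "a@b.com" })
// -> {"event":"answer_recorded","respondent":"resp_a3f8","email":"[REDACTED]"}
```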
🧪
Synthetic respondents at scale. Without the cost.

Pass a persona definition and the survey schema to an LLM. Get a complete response back. Run it a thousand times. No live connection. No browser automation.

Because the survey is a single JSON schema with a defined program structure, any LLM can generate responses for it directly: no live survey connection required, no browser automation, no WebSocket tapping. Pass an elaborate persona definition and the survey schema to a model. Get a complete response back in one call. Run that call a thousand times across different personas.
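As a sketch of the mechanics, assuming the survey is a plain JSON schema and using Anthropic's Messages API as the model endpoint (any chat-completion API would do). The persona shape, prompt, and model choice are illustrative.

```js
// Sketch: one synthetic response from one LLM call. No live session,
// no browser automation. Persona shape and prompt are illustrative.
async function syntheticResponse(persona, surveySchema) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-5", // any capable model
      max_tokens: 2048,
      messages: [{
        role: "user",
        content:
          `You are this survey respondent: ${JSON.stringify(persona)}\n` +
          "Answer every question in the survey schema below, in character.\n" +
          "Return only JSON mapping question id to answer.\n" +
          JSON.stringify(surveySchema),
      }],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.content[0].text); // { q1_brand: "...", ... }
}

// Run it a thousand times across different personas:
// const runs = await Promise.all(personas.map((p) => syntheticResponse(p, schema)));
```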

Those responses run against the headless survey on the same infrastructure. No AI in the response loop at execution time. A fraction of the cost of computer-use approaches.

Then go a step further: fine-tune a model on Rival Group's actual response data, mapped to anonymised respondent profiles. That model takes the survey schema and generates responses that behave like real people. Trained on them. Synthetic fieldwork that gets closer to reality the more real data feeds it.

Responding 2.0 Platform 2.0
🔗 Unlocked by one schema driving headless authoring, UI, and responding
🔗 Unlocked by long-running worker agent infrastructure
What makes this possible

Now let's open
the black box.

Seven impossible things, working today. Here is the platform that makes them run.

The architectural insight

Everyone built a pen for zero gravity.
This is a pencil.

Survey session state is an unsolved problem for every platform in the market. This solves it by recognising that a survey is a program, and building a runtime where the program's own execution holds the state. No serialisation. No coordination. No ceiling.
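A minimal sketch of the idea. The survey is an ordinary async JavaScript function; the runtime supplies an ask primitive that suspends the program until the respondent answers, and resolves instantly from stored answers on resume. Every identifier here is illustrative, not the platform's actual API.

```js
// Sketch: the survey is a program, and its own execution holds the state.
// `ask` suspends at each question; on resume it replays saved answers.
// All names are hypothetical.

async function survey(ask, ctx) {
  const brand = await ask({
    id: "q1_brand",
    type: "single_choice",
    text: "Which of these brands have you purchased this month?",
    options: ["Acme", "Globex", "None of these"],
  });

  // Routing is plain control flow, not a routing DSL.
  if (brand === "None of these") return ctx.screenOut("no_purchase");

  const nps = await ask({
    id: "q2_nps",
    type: "rating",
    text: `How likely are you to recommend ${brand} to a friend?`,
    scale: [0, 10],
  });

  if (nps <= 6) {
    await ask({
      id: "q3_probe",
      type: "open_text",
      text: "What would have to change for that score to improve?",
    });
  }
}

// Runtime side: replay answered questions invisibly, suspend on new ones.
function makeAsk(saved, waitForAnswer, record) {
  return async (question) => {
    if (question.id in saved) return saved[question.id]; // invisible replay
    const answer = await waitForAnswer(question);        // program pauses here
    record(question, answer);                            // one row per answer
    return answer;
  };
}
```

Nothing is serialised: pausing is just an unresolved await, and resuming after days is re-running the program against the saved answers until it reaches the first unanswered question.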

Every Other Platform
Survey Format
Proprietary DSL. No AI training data. Cannot be written or debugged by an agent.
Session State
Database column tracking "current step". Coordination required between every request. Complex resume logic.
Scale
Add servers, tune connection pools, manage sticky sessions, plan capacity in advance.
Dynamic Logic Impossible
No live API calls inside a survey. No adaptive card generation. No real-time external data.
New Question Type
Update DSL spec, compiler, VM, renderer. Weeks of work across multiple teams.
Rival Reimagined
Survey Format
JavaScript. Claude writes it, debugs it, extends it. 30 years of training data. Every developer on earth knows it.
Session State Native
The program's own execution is the state. Pause at a question. Resume when the answer arrives. Nothing to serialise.
Scale
One isolated session per respondent. 300+ data centres globally. Nothing to configure. No ceiling.
Dynamic Logic Unbounded
Live API calls, adaptive card generation, real-time external data. It is JavaScript. Write it (sketched below).
New Question Type
Add to schema. Add renderer. Claude uses it immediately. One afternoon.
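To make "write it" concrete, a hedged sketch of the live-catalog conjoint from example 01: the program pulls the client's current inventory mid-session and builds choice tasks from whatever is in stock. The endpoint and the ask primitive are illustrative placeholders.

```js
// Sketch: dynamic logic is ordinary JavaScript inside the survey program.
// The catalog endpoint and `ask` are illustrative placeholders.
async function liveCatalogConjoint(ask) {
  // Pull the client's live inventory at session time, not upload time.
  const res = await fetch("https://client.example.com/api/catalog?active=true");
  const products = await res.json();

  // Build each choice task from whatever is in stock right now.
  for (let screen = 0; screen < 8; screen++) {
    const concepts = sampleWithoutReplacement(products, 3);
    await ask({
      id: `conjoint_${screen}`,
      type: "choice_task",
      text: "Which of these would you buy?",
      options: concepts.map((p) => ({ label: p.name, price: p.price })),
    });
  }
}

function sampleWithoutReplacement(items, n) {
  const pool = [...items];
  const picked = [];
  while (picked.length < n && pool.length > 0) {
    picked.push(pool.splice(Math.floor(Math.random() * pool.length), 1)[0]);
  }
  return picked;
}
```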
How it runs

1,000 respondents.
1,000 independent sessions.

Each respondent session is completely isolated: its own runtime, its own memory, its own stored answers. They have no awareness of each other. A crash in one affects no one else.

Active: waiting at a question  ·  Hibernated: between answers, zero cost

resp_a3f8  ·  Q7 of 24  ·  nps question
resp_b2c1  ·  Hibernated  ·  Q4 answered
resp_c9d4  ·  Q12 of 24  ·  ai probe
resp_d7e2  ·  Q3 of 24  ·  screener
resp_e1f5  ·  Hibernated  ·  Q18 answered
resp_f6a8  ·  Q19 of 24  ·  price question
resp_g3b7  ·  Complete  ·  data written
resp_h5c2  ·  Q1 of 24  ·  brand question
resp_i9d6  ·  Hibernated  ·  Q11 answered
resp_j2e4  ·  Q22 of 24  ·  ranking task

"The ceiling of what a survey can do is not what the platform team built controls for. It is what an AI can express in a program against the question schema. Which is effectively unbounded."

Rival Reimagined: Platform Architecture
Scale & availability

No capacity planning.
No ceiling.

100 respondents and 100,000 respondents use identical infrastructure. Load spreads to the nearest data centre automatically.

~1s
resume after days away
Always resumable
A respondent who closes their tab and returns two days later picks up exactly where they left off. Answered questions replay invisibly; they land on the next unanswered question.
🌍
300+
data centres globally
Global by default
A respondent in Tokyo hits Tokyo infrastructure. In Lagos, Lagos. In São Paulo, São Paulo. No configuration. No regional deployment work. Under 50ms from anywhere on earth.
💰
You pay for execution, not existence
A respondent reading a question for 30 seconds costs nothing. The billing model charges only for the milliseconds code is actually running. Infrastructure cost scales with completed work, not elapsed time. The dominant cost line is AI usage, which is where the value is created.
🔒
Privacy by architecture
Each respondent session is completely isolated with its own storage. There is no shared memory between sessions. Data isolation is structural, not an access control policy that can be misconfigured.
🗑️
GDPR deletion in 3 operations
Delete the session storage. Delete the study files. Delete the database rows. No hunting through shared tables. No risk of partial deletion. When a study expires, everything associated with it is cleaned up automatically. (A sketch follows these cards.)
📋
SOC 2, ISO 27001, PCI DSS
The platform runs on infrastructure that holds SOC 2, ISO 27001, ISO 27701 (privacy-specific), and PCI DSS certifications. Compliance is structural, not a configuration layer bolted on after the fact.
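A hedged sketch of the three-operation delete. The three handles are hypothetical stand-ins; only the ClickHouse mutation syntax is real.

```js
// Sketch: GDPR erasure for one study is three scoped deletes.
// `sessions`, `files`, and `db` are hypothetical handles.
async function eraseStudy(studyId, { sessions, files, db }) {
  await sessions.deleteAllForStudy(studyId);            // 1. session storage
  await files.deletePrefix(`studies/${studyId}/`);      // 2. study files
  await db.query(                                       // 3. database rows
    "ALTER TABLE answers DELETE WHERE study_id = {id:String}",
    { id: studyId }
  );
}
```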
How the layers compound

Every layer amplifies
every other layer.

This is not a feature list. Each capability exists because the layer below it made it possible. Remove any one layer and multiple capabilities above it collapse.

1
JavaScript as the survey program
AI can write it, debug it, extend it. 30 years of training data. Every capability above this layer depends on it.
Foundation
2
AI authors the survey from a research brief
Because the program is JavaScript, the authoring agent writes real executable code, not a translation layer. Brief to deployed survey in a conversation.
AI Authoring
3
Automated validation before any respondent sees the survey
Multiple execution paths run before the researcher reviews. Logic errors, quota gaps, and routing issues are caught automatically. Studies that pass validation work. No exceptions.
Quality Gate
4
One isolated session per respondent: stateful, globally distributed
The program runs in its own isolated runtime per respondent. 10,000 concurrent respondents means 10,000 independent sessions, each unaware of the others. No coordination overhead. No shared state to corrupt.
Execution
5
Self-describing data: one row per answer, instantly queryable
No wide table per study. No ETL job. No schema migration when a new study is published. Every row carries the question text, question type, answer value, and respondent demographics. Queryable from the moment it is written.
Data Layer
6
Topline report in seconds. Cross-tabs on demand. No analyst in the loop.
Because the data is self-describing, a topline is a loop over every question with one query per question. A 1,000-respondent topline runs in under a second. Segmentation cuts, cross-tabs, verbatim analysis. A conversation, not a dashboard to navigate. (Layers 5 through 7 are sketched after this list.)
Analysis
7
Live client dashboards. Charts that know what they are rendering.
Chart type is deterministic from question type. Single choice → bar chart. Rating scale → histogram with mean. Price sensitivity → four-line cumulative curve. No manual configuration. The data carries the semantics. The dashboard knows what it is rendering.
Delivery
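To ground layers 5 through 7, a hedged sketch of the row shape, the topline loop, and the deterministic chart mapping. Table and column names are illustrative, and the query helper stands in for a ClickHouse client.

```js
// Layer 5 sketch: one self-describing row per answer. Columns illustrative.
// { study_id, respondent_id, question_id, question_text, question_type,
//   answer_value, demographics, answered_at }

// Layer 6 sketch: a topline is one aggregate query per question.
async function topline(studyId, questions, ch /* hypothetical CH client */) {
  const report = {};
  for (const q of questions) {
    report[q.id] = await ch.query(
      `SELECT answer_value, count() AS n
         FROM answers
        WHERE study_id = {study:String} AND question_id = {q:String}
        GROUP BY answer_value
        ORDER BY n DESC`,
      { study: studyId, q: q.id }
    );
  }
  return report;
}

// Layer 7 sketch: chart type is deterministic from question type.
const CHART_FOR = {
  single_choice: "bar",
  rating: "histogram_with_mean",
  price_sensitivity: "cumulative_curves",
  open_text: "verbatim_list",
};
const chartFor = (q) => CHART_FOR[q.type] ?? "table";
```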
Economics

You pay for execution,
not existence.

Traditional survey infrastructure bills for time: servers running, connections held open, sessions kept alive. A respondent reading for 30 seconds is 30 seconds of cost.

Traditional Platform
Wall-clock
You pay for every second the respondent is alive in the system: reading, thinking, typing, distracted, gone to make tea.

Plus: session database, load balancer management, connection pooling, capacity planning, on-call rotation for infrastructure.
Infrastructure cost scales with respondent count and session duration. Both are outside your control.
Rival Reimagined
CPU-time
You pay only when code is executing: the milliseconds it takes to process an answer and deliver the next question. A respondent reading for 30 seconds costs nothing.

No servers. No session database. No capacity planning. No on-call for infrastructure.
The dominant cost line is AI usage: authoring, probing, analysis. That is the same for any AI-native research solution, and it is where the value is created.
The opportunity

Fix the core.
Everything unlocks.

Rival has panel infrastructure, enterprise relationships, methodology credibility, and compliance track record that AI-native competitors are spending years and hundreds of millions to acquire. What has been missing is a platform that matches those assets.

The new authoring engine and responding engine change that. The platform becomes what the moat always deserved.

Full before/after breakdown →