Rival 2.0
Where the current platform creates friction, and how Rival 2.0 removes it.
Authoring 2.0
Seven places where today's authoring workflow slows teams down, and what changes in Rival 2.0.
Researchers adjust their methodology to fit available card types. If the study needs something the library does not have, it is either dropped or run outside Rival entirely.
The authoring agent can wrap any combination of cards in JavaScript logic to produce interactions the card library never anticipated. Example: an image grid that fetches images from an authenticated external endpoint, displays them 10 at a time, lets the respondent pick 1 to 5 per batch, and keeps going until 10 total are selected or images are exhausted. Impossible to build in current Rival, even for a programmer. In Rival 2.0 a researcher describes it and the agent builds it.
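The image-grid example reduces to a small piece of state-machine logic. The sketch below shows what the agent might generate; the names (`ImageGridState`, `nextBatch`, `select`) are illustrative, not a real Rival API, and fetching from the authenticated endpoint is assumed to have already happened.

```typescript
type ImageUrl = string;

class ImageGridState {
  private cursor = 0;
  private selected: ImageUrl[] = [];

  constructor(
    private images: ImageUrl[],   // already fetched from the external endpoint
    private batchSize = 10,
    private maxPerBatch = 5,
    private maxTotal = 10,
  ) {}

  // Next batch of up to 10 images, or null when images are exhausted
  // or the respondent has already picked 10 in total.
  nextBatch(): ImageUrl[] | null {
    if (this.selected.length >= this.maxTotal) return null;
    if (this.cursor >= this.images.length) return null;
    const batch = this.images.slice(this.cursor, this.cursor + this.batchSize);
    this.cursor += this.batchSize;
    return batch;
  }

  // Record the respondent's picks for one batch: 1 to 5 per batch,
  // capped so the running total never exceeds 10.
  select(picks: ImageUrl[]): void {
    if (picks.length < 1 || picks.length > this.maxPerBatch) {
      throw new Error(`pick between 1 and ${this.maxPerBatch} images`);
    }
    const room = this.maxTotal - this.selected.length;
    this.selected.push(...picks.slice(0, room));
  }

  done(): boolean {
    return this.selected.length >= this.maxTotal || this.cursor >= this.images.length;
  }

  result(): ImageUrl[] {
    return this.selected;
  }
}
```

The point is not this particular class; it is that the interaction lives in ordinary JavaScript the agent can write, rather than in a card type that has to pre-exist.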
A card type that sees repeated demand still has to go through product requirements, engineering scoping, sprint planning, and compete with other priorities. It may land next quarter. It may not.
One schema drives everything. Adding a card type means one TypeScript union entry and one React component. The same schema that drives the authoring UI drives the responding runtime. No changes needed across six separate systems. What used to span a sprint now takes an afternoon.
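A minimal sketch of the single-schema idea, with made-up names (`CardDef`, `renderCard`) rather than the real Rival schema. Adding a card type is one new union member plus one renderer; in the real runtime the renderer would be a React component, sketched here as a plain function so the example stays self-contained.

```typescript
// The discriminated union that drives authoring UI and responding runtime alike.
type CardDef =
  | { type: "single-choice"; textKey: string; options: { labelKey: string; value: number }[] }
  | { type: "open-end"; textKey: string; maxLength?: number }
  // Adding a new card type is one more union member:
  | { type: "image-grid"; textKey: string; imageUrls: string[]; maxSelect: number };

// One renderer per card type; the switch is exhaustive, so the compiler
// flags any card type that lacks a renderer.
function renderCard(card: CardDef): string {
  switch (card.type) {
    case "single-choice":
      return `[choice] ${card.textKey} (${card.options.length} options)`;
    case "open-end":
      return `[open] ${card.textKey}`;
    case "image-grid":
      return `[grid] ${card.textKey} (pick up to ${card.maxSelect})`;
  }
}
```

Because the union is the single source of truth, forgetting to handle a new card type anywhere is a compile error, not a runtime surprise in one of six systems.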
Conditional display, answer piping, and option masking are built into each card's own implementation. A card only supports the orchestration logic its programmer added to it. If a study needs a behaviour that was not anticipated when the card was built, it cannot be done, even if the logic itself is straightforward.
In Rival 2.0, which cards show, in what order, with what piped values and masked options: that is the JavaScript program's concern, not each card's. Cards still own their own configuration (min/max, required, options). But the orchestration layer is shared, consistent, and not limited by what any individual card was built to support.
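Orchestration in the survey program looks like ordinary JavaScript over the session's answers. The helpers below are a sketch, not platform API: `answers` stands in for session state, and the question identifiers are hypothetical.

```typescript
type Answers = Record<string, string | number | string[]>;

// Answer piping: replace {{q1}}-style tokens with earlier answers.
function pipe(template: string, answers: Answers): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => String(answers[key] ?? ""));
}

// Option masking: only show brands the respondent said they were aware of in q2.
function maskOptions(options: string[], answers: Answers): string[] {
  const aware = (answers["q2"] as string[]) ?? [];
  return options.filter((o) => aware.includes(o));
}

// Conditional display: only ask q4 if q3 scored below 5.
function shouldShow(cardId: string, answers: Answers): boolean {
  if (cardId === "q4") return Number(answers["q3"]) < 5;
  return true;
}
```

None of this logic is baked into any card; a card that was never built with masking in mind can still have its options masked, because the program does it before the card renders.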
Anything beyond a straightforward study requires a survey programmer to translate the researcher's intent into the platform's format. The researcher writes the brief; someone else builds it.
The authoring agent takes a natural language brief, designs the study with the researcher in conversation, generates the FlowDefinition, runs it through the validation pipeline, and deploys it. No programmer in the loop unless the researcher wants one. Complex studies are a conversation, not a handoff.
To author a survey, you open Rival. There is no API, no programmatic access, no way to integrate survey creation into another workflow or tool.
The authoring pipeline is exposed as an MCP tool (studio-mcp). Claude, Cursor, custom agents, or any MCP-enabled client can author a survey through conversation. A researcher working in Claude can describe a study and deploy it without ever opening Studio. Authoring becomes infrastructure, not just a UI.
Multilingual support is handled separately: after the survey is built and validated, translations are produced and integrated as a follow-on task.
The authoring schema uses textKey and labelKey throughout: every piece of respondent-facing text is a key, never hardcoded. At publish time, translations for any set of languages are generated in parallel. Five languages takes the same time as one. The study is multilingual by design, not by retrofit.
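Key-based text resolution might look like the sketch below, assuming a per-language translation table generated at publish time. The key names and translations are made up for illustration.

```typescript
// lang -> textKey -> translated string, generated at publish time.
type Translations = Record<string, Record<string, string>>;

const translations: Translations = {
  en: { "q1.text": "How often do you shop online?" },
  fr: { "q1.text": "À quelle fréquence faites-vous des achats en ligne ?" },
};

// Resolve a textKey for a language, falling back to English, then to the
// key itself so a missing translation is visible rather than a crash.
function resolveText(key: string, lang: string, t: Translations): string {
  return t[lang]?.[key] ?? t["en"]?.[key] ?? key;
}
```

Because the schema never contains literal respondent-facing text, adding a sixth language is just another column in this table, generated in parallel with the other five.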
The current Rival platform authors and publishes text-based surveys. Voice interviews are not supported. They require a separate tool or approach outside Rival entirely.
Rival 2.0 authors both text surveys and voice interviews from a single authoring flow. The same FlowDefinition, the same pipeline, the same publish step. The researcher selects the modality; everything else is handled by the platform.
Responding 2.0
Five places where the current responding stack limits what a survey can do, and what changes in Rival 2.0.
Studies share the same runtime. A spike in one study degrades others. Scaling requires ops intervention. AI-moderated interviews make this worse: each session is a long-running agent that stays alive for the duration of a conversation, and sessions like that are not supportable on this architecture at any meaningful scale.
Each respondent session runs in its own Durable Object: a persistent, stateful compute instance that lives for the duration of the interview and hibernates between turns at zero cost. 1,000 simultaneous AI-moderated interviews means 1,000 independent agents, each with their own state, none affecting the others. No Redis, no sticky sessions, no ops team. Cloudflare runs across 330+ cities; capacity is not a planning exercise.
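A per-respondent session object might look like the sketch below. It is modelled on Cloudflare Durable Objects but with the platform types stubbed out so the example is self-contained; in the real runtime the class would extend the Durable Object base and use its storage API.

```typescript
// Stub for the Durable Object storage interface (assumption, not the real type).
interface SessionStorage {
  get(key: string): Promise<unknown>;
  put(key: string, value: unknown): Promise<void>;
}

class RespondentSession {
  constructor(private storage: SessionStorage) {}

  // One turn of the interview: load state, apply the answer, save state.
  // Between turns the object can hibernate at zero cost; state survives
  // because it is written to storage, not held only in memory.
  async answer(questionId: string, value: unknown): Promise<number> {
    const answers =
      ((await this.storage.get("answers")) as Record<string, unknown>) ?? {};
    answers[questionId] = value;
    await this.storage.put("answers", answers);
    return Object.keys(answers).length; // questions answered so far
  }
}
```

The isolation property falls out of the model: 1,000 interviews are 1,000 instances of this class, each with its own storage, so no session can contend with another.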
Current Rival probe cards are LLM-generated follow-ups, but each probe is a stateless call from the card: one answer in, one follow-up out. The probe has no memory of the broader conversation and cannot run across multiple turns as a continuous agent.
In Rival 2.0, a probe card runs inside the respondent's Durable Object session. It has full conversation context, can run as many turns as needed, and maintains state across the entire exchange. The probe is not a card-level LLM call; it is an agent that lives for as long as the conversation warrants.
The current platform supports text interactions. Voice interviews are not supported.
The same survey program runs across three modalities. Text: cards and typed responses. Voice: bidirectional audio, the AI speaks and listens. AI-moderated: no fixed script, Claude guides the conversation toward the researcher's coverage objectives, adapting based on what the respondent says. The researcher picks the modality at study level; the program runs unchanged.
Survey logic and quota behaviour can only be validated once real respondents are in the field. Problems with routing, quota logic, or question flow are discovered post-launch.
Before fielding, a set of AI personas, one per quota cell, runs through the study automatically. Quota logic, branching, and flow are validated against synthetic data before a single real respondent sees the survey. Early signal without the cost of full fieldwork. Planned feature: the core mechanism is built, the calibration layer is in progress.
The current responding stack chains six proprietary systems. A new card type requires changes across all of them. A bug anywhere in the chain affects every live study. The DSL (chasm) has no LLM training data, so AI cannot generate or modify survey programs.
A survey is a JavaScript program. It runs in a Durable Object: one per respondent session. The same schema drives authoring, responding, and analytics. Adding a card type means one TypeScript union entry and one React component, not eight changes across six systems. Claude writes JavaScript fluently, so the authoring agent can generate, modify, and debug survey programs without constraint.
Data 2.0
Current Rival produces data that requires two separate pipeline layers and a human analyst before it becomes useful. Rival 2.0 makes the data self-describing from the moment it is written, and puts it directly in reach of the researcher.
Current Rival emits respondent events through a multi-stage pipeline that lands in OpenSearch, which Rival calls analytics. In practice it can produce toplines and not much else. Cross-study queries, segmentation cuts, and anything beyond basic counts require engineering effort or are simply not possible. The data exists but it is not accessible in any meaningful way.
Rival 2.0 stores one row per respondent per question. Every row carries everything needed to understand it: the question text as shown, the card type, the raw answer, the human-readable label, the numeric value, and the respondent's demographics. No joins. No ETL. No schema that needs updating when a new card type is added. 1,000 respondents across 30 questions is 30,000 rows, all in one table, queryable from the moment they land. Any segmentation cut (responses from females 25-34 who spend over £100/month) is a single query with no joins required.
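The self-describing row shape can be sketched as below. Field names are illustrative, not the shipped schema; the point is that demographics are denormalised onto every row, so no query ever needs a join.

```typescript
interface ResponseRow {
  respondentId: string;
  questionId: string;
  questionText: string;                 // as shown to the respondent
  cardType: string;
  rawAnswer: unknown;
  label: string;                        // human-readable
  numericValue: number | null;
  demographics: Record<string, string>; // repeated on every row: no joins
}

// Flatten one respondent's answers into rows; 30 answers -> 30 rows.
function toRows(
  respondentId: string,
  demographics: Record<string, string>,
  answers: {
    questionId: string; questionText: string; cardType: string;
    raw: unknown; label: string; numeric: number | null;
  }[],
): ResponseRow[] {
  return answers.map((a) => ({
    respondentId,
    questionId: a.questionId,
    questionText: a.questionText,
    cardType: a.cardType,
    rawAnswer: a.raw,
    label: a.label,
    numericValue: a.numeric,
    demographics,
  }));
}
```

Repeating demographics on every row trades storage (cheap in a columnar store, where repeated values compress well) for query simplicity (every cut is a filter, never a join).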
A separate pipeline converts raw EAV values into wide tables for Redshift and Sisense. Every new study needs a new table and a new ETL run before analysis can begin. Adding a question to a running study breaks the schema. Cross-study queries require joining incompatible schemas. When the study closes, producing a topline means someone runs queries, pastes numbers into a deck, and formats it manually. That process takes hours.
There is no ETL job in Rival 2.0. No per-study table. No schema migration when a new card type is added. The table structure never changes regardless of what the survey contains. A topline is a loop over every question in the study, one query per question, running in parallel. The full topline for a 1,000-respondent study is produced in under a second of query time. Reachy can then write that topline as a narrative or publish it as a live web page. No analyst, no deck, no waiting.
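The topline-as-a-loop idea is small enough to sketch directly. `runQuery` below is a stub standing in for a per-question ClickHouse call; because the queries are independent, they are all issued at once and total time is roughly the slowest single query.

```typescript
// question -> answer label -> count
type Topline = Record<string, Record<string, number>>;

async function topline(
  questionIds: string[],
  runQuery: (questionId: string) => Promise<Record<string, number>>,
): Promise<Topline> {
  // One query per question, all in flight in parallel.
  const counts = await Promise.all(questionIds.map((q) => runQuery(q)));
  return Object.fromEntries(questionIds.map((q, i) => [q, counts[i]]));
}
```

A 30-question study is 30 parallel queries over one table with a fixed schema, which is why the full topline lands in sub-second query time rather than in an analyst's afternoon.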
In current Rival, every research domain and every new study requires new tables to be created in Redshift and new fields to be mapped in Sisense. The schema is the constraint. As studies evolve, questions get added or changed, and keeping the Sisense field definitions in sync with the actual data becomes a persistent source of errors and delays. Cross-study or cross-wave dashboards mean joining tables with incompatible schemas. The data infrastructure does not grow with the programme. It has to be rebuilt for it.
Because every study's answers land in the same ClickHouse table with the same structure, a dashboard layer can be built once and work across every study, every wave, every market. Chart types are deterministic from card type, no manual configuration. Filters by segment, quota group, language, and wave are available on every question without any per-study setup. A tracking programme dashboard that compares brand metrics across six waves and four markets is the same query structure as a single-study topline, just with additional filters. Clients get configurable, live dashboards that update as fieldwork progresses, not a static deck delivered after the study closes.
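"Chart types are deterministic from card type" is a one-line lookup. The mapping below is an assumption for illustration, not the shipped one; the design point is that it is total (every card type has exactly one chart), so no study ever needs manual chart configuration.

```typescript
type CardType = "single-choice" | "multi-choice" | "rating-scale" | "open-end";
type ChartType = "bar" | "stacked-bar" | "distribution" | "word-cloud";

// Total mapping: the compiler rejects a card type without a chart,
// which is what makes zero-configuration dashboards safe.
const chartFor: Record<CardType, ChartType> = {
  "single-choice": "bar",
  "multi-choice": "stacked-bar",
  "rating-scale": "distribution",
  "open-end": "word-cloud",
};
```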
In current Rival, asking a question about your data means finding an analyst, explaining what you need, waiting for them to run queries in Sisense or Redshift, and getting a chart back hours or days later. Cross-tabs require a template. Segmentation cuts require configuration. A researcher who wants to know "how did the high-spend segment answer Q3" cannot find that out themselves.
Rival 2.0 exposes study data through a DataTalk MCP server backed by ClickHouse. Any MCP-enabled client (Reachy, Claude, Cursor, or a custom agent) can query study data in plain English. Ask a question. The client writes the query, runs it against ClickHouse, and returns the answer as a number, a narrative, a chart, or a cross-tab. "Break Q3 by spend tier." "Which price point had the highest acceptable range?" "Show me open-end responses from respondents who rated below 5." No export. No analyst. No Sisense configuration. The researcher directs the analysis from whatever tool they are already in. The platform does the work.
Platform 2.0
Four foundational capabilities that were bolted onto current Rival as afterthoughts, and how Rival 2.0 builds them properly from the ground up.
In current Rival, organisations and role-based access were not designed in from the start. They were layered on as needs arose. Enforcement is uneven across the platform: some areas check permissions, others do not. Orgs, research domains, and studies are not modelled as proper entities with consistent access rules. It works for a single internal team but breaks down the moment you add external clients or need genuine data siloing between accounts.
Rival 2.0 models Users, Organisations, and Memberships as first-class entities. Every piece of data belongs to an org. Roles (owner, admin, analyst, viewer) are enforced at the application layer on every route, and at the database layer via Postgres row-level security as a last line of defence. A client PM sees only their org's studies. An analyst cannot trigger jobs they are not permitted to run. Adding a new external client is a data change, not a code change.
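The application-layer check might look like the sketch below, assuming an ordered role hierarchy. The role names come from the text above; the helper itself is illustrative, and the database adds row-level security underneath it as the second layer.

```typescript
type Role = "viewer" | "analyst" | "admin" | "owner";

// Ordered hierarchy: each role includes the permissions of those below it.
const rank: Record<Role, number> = { viewer: 0, analyst: 1, admin: 2, owner: 3 };

interface Membership {
  userId: string;
  orgId: string;
  role: Role;
}

// Every route runs a check like this before touching data:
// right org, and a role at least as privileged as the route requires.
function canAccess(m: Membership, orgId: string, required: Role): boolean {
  return m.orgId === orgId && rank[m.role] >= rank[required];
}
```

"Adding a new external client is a data change" follows directly: a new org row and new membership rows make this check pass or fail correctly with no new code.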
Current Rival has no pricing model at the platform level. Billing is handled entirely outside the platform, either manually or through separate agreements. The platform has no concept of consumption, no metering of what was used, and no connection between research value delivered and what a client pays.
Rival 2.0 prices at the research programme level. A client engages for a tracking programme with a defined scope: number of waves, markets, modalities. Within it, consumption is measured by outcomes: studies completed, synthetic panel runs, DataTalk query sessions, translations published. Seats do not appear in this model. A researcher who authors a study in 20 minutes via Claude is not penalised for being efficient. The platform charges for research value produced, not for time spent logged in.
Current Rival has logs, but DevOps holds the keys. When something breaks in production, the developer who built the system posts in a Slack channel, waits for the right DevOps contact to respond, and gets a log dump back over Slack. DevOps did not build the system and cannot diagnose it. The developer who can diagnose it is waiting on Slack while the incident is live. The justification is security. The practical effect is a gatekeeping layer between the people who understand the code and the evidence they need to fix it.
In an AI-first, agent-driven platform, logs are not just for debugging. They are how you evaluate whether the agents are making good decisions, spot where generation quality is drifting, and improve the system over time. Without logs you cannot debug. Without logs you cannot evaluate. Without logs you cannot improve. Rival 2.0 treats observability as a product requirement from day one: every pipeline run is traceable end-to-end, every LLM call is logged with token usage and latency, every agent decision is auditable. Developers own their logs directly. PII is masked at the point of writing, not blocked at the point of access. Security and developer visibility are not in conflict because the problem is designed out at the source.
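"Masked at the point of writing" means the masking step sits inside the log writer itself, so raw PII never reaches storage. The sketch below uses naive regexes for emails and phone numbers purely to show the placement; a real implementation would use a vetted detector, and the function names are illustrative.

```typescript
// Naive PII masking for illustration only: real detection is harder than two regexes.
function maskPii(message: string): string {
  return message
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[email]")
    .replace(/\+?\d[\d\s-]{7,}\d/g, "[phone]");
}

// The only write path: every message is masked before it is stored,
// so there is nothing sensitive to gate access to afterwards.
function writeLog(sink: string[], message: string): void {
  sink.push(maskPii(message));
}
```

Because the invariant is enforced at the single write path rather than at every read path, giving developers direct log access stops being a security question.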
Current Rival is spread across more than ten repositories and an equivalent number of services. Deploying a change that touches multiple layers requires coordinating across separate codebases, separate CI pipelines, and separate deployment targets. A dedicated DevOps function manages infrastructure. Engineers who want to trace a problem end-to-end need access to multiple systems they do not own.
Rival 2.0 lives in a single monorepo. The infrastructure is chosen to be simple and managed: compute that scales automatically, databases that do not need a DBA, no idle servers to patch and monitor. The engineering team operates the application; the infrastructure operates itself. There is no dedicated DevOps function in the team structure because none is needed. Any engineer on the team can hold the full system in their head, trace a failure end-to-end, and ship a fix without waiting for another team to grant access.
In the current platform, both authoring and responding are bounded by what was built into the system in advance. Rival 2.0 moves that boundary. The platform becomes programmable infrastructure: a schema, a runtime, and an agent that can work with both. The researcher's intent, not the card library, defines what a study can do.