Rival 2.0
AI-First Conversational Research Platform
From Authoring to Analytics — powered by Reachy. Not AI-integrated. AI-first.
Reachy ✦ Built for Rival 2.0
Rival's Agentic Platform Layer
Lives in Studio (AWS)
Reachy is the agentic layer that runs across the entire Rival product surface — not just research. She already powers multiple builders in Studio today.
✍️
Study Authoring
Brief → survey (this flow)
📊
Talk to Data
Plain English → ClickHouse
📱
Mobile Topline
Website deliverables
🔢
Crosstabs Builder
Automated cross-tabs
More builders
Extensible platform
"Run me a 10-minute voice study on AI sentiment — 4 objectives, global audience, multilingual."
"On it. Generating your study brief now — I'll route this to the authoring agent with voice config."
Triggers Authoring Agent
Reachy
Author
Authoring Agent
GPT-4o Validation
Verification
Cloudflare Edge
✦ Built for Rival 2.0 · everything except Panel & Distribution
AWS — Studio (Authoring Plane)
Reachy · Authoring Agent · Publishing · Researcher UI
SOC 2 Type II · ISO 27001 · GDPR
Cloudflare — Execution Plane
Workers · Durable Objects · R2 · D1 · Queues · AI Gateway
SOC 2 Type II · ISO 27001/27701 · GDPR · 300+ edge cities
✍️
Step 1
Research Brief + Modality
Author writes a natural-language brief and picks modality — text survey or voice interview. This choice routes the entire generation path.
POST /generate · brief + modality · 🤖 or via Reachy
Modality?
Text Survey
🤖 Authoring Agent ✦ Built for Rival 2.0 · AWS
🧠
Step 2a
Claude Generation
Generates full FlowDefinition JSON via streamObject. Tokens stream live to author.
Claude Sonnet 4.6
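A minimal sketch of this step on the Vercel AI SDK, whose streamObject call the deck names. The FlowDefinition schema here is a stripped-down stand-in, and the model id string is an assumption:

```ts
import { streamObject } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

// Stand-in schema; Rival's real FlowDefinition is richer.
const FlowDefinition = z.object({
  title: z.string(),
  cards: z.array(
    z.object({
      id: z.string(),
      type: z.enum(["single_choice", "multi_select", "open_text_short", "ai_probe"]),
      prompt: z.string(),
      options: z.array(z.string()).optional(),
    })
  ),
});

export async function generateFlow(brief: string, onPartial: (draft: unknown) => void) {
  const result = streamObject({
    model: anthropic("claude-sonnet-4-6"), // exact model id string is an assumption
    schema: FlowDefinition,
    prompt: `Generate a survey flow for this research brief:\n${brief}`,
  });
  // Partial objects arrive as tokens stream; this is what the author watches live.
  for await (const draft of result.partialObjectStream) onPartial(draft);
  return await result.object; // final, schema-validated FlowDefinition
}
```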
🔬
Step 2b
Semantic Validation
3 parallel GPT-4o passes: coverage, flow logic, AI probes.
GPT-4o × 3 parallel
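The three passes are independent, so they can fan out concurrently. A sketch on the same SDK; the per-pass instructions paraphrase the three checks named above:

```ts
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const Verdict = z.object({ pass: z.boolean(), issues: z.array(z.string()) });

// One instruction per pass; all three run in parallel against the same flow.
const PASSES: Record<string, string> = {
  coverage: "Does every brief objective map to at least one question?",
  flow_logic: "Are all branch and skip rules consistent and reachable?",
  ai_probes: "Are AI probe placements and guardrails sensible?",
};

export async function semanticValidation(brief: string, flowJson: string) {
  return Promise.all(
    Object.entries(PASSES).map(async ([check, instruction]) => {
      const { object } = await generateObject({
        model: openai("gpt-4o"),
        schema: Verdict,
        prompt: `${instruction}\n\nBRIEF:\n${brief}\n\nFLOW:\n${flowJson}`,
      });
      return { check, ...object }; // e.g. { check: "coverage", pass: true, issues: [] }
    })
  );
}
```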
Step 2c
Verify + Loop Decision
Compiles to JS, verifies graph. Issues → fix prompt back to Claude (max 3 iterations).
verifyProgram max 3 iter
↺ fix & retry
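The repair loop itself is short. A sketch with Rival's internals stubbed out: compileFlow, verifyProgram, and askClaudeToFix are placeholders, not real APIs:

```ts
type Flow = Record<string, unknown>;

// Placeholders for Rival's compiler, graph verifier, and Claude fix call.
declare function compileFlow(flow: Flow): string;
declare function verifyProgram(program: string): string[];
declare function askClaudeToFix(flow: Flow, issues: string[]): Promise<Flow>;

const MAX_ITERATIONS = 3;

export async function verifyWithRepair(flow: Flow) {
  let current = flow;
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const program = compileFlow(current);            // FlowDefinition → JS
    const issues = verifyProgram(program);           // unreachable nodes, dead ends, bad refs
    if (issues.length === 0) return { program, iterations: i };
    current = await askClaudeToFix(current, issues); // targeted fix prompt back to Claude
  }
  throw new Error(`Flow still failing verification after ${MAX_ITERATIONS} repair iterations`);
}
```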
Voice Interview
🎯
Step 2 (voice)
Voice Brief Generation
Single Claude call shapes the brief into an AI interview config — coverage checklist, opening message, time limit, language.
Claude Sonnet 4.6 single call
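Because the voice path produces a small config object rather than a full program, one non-streaming call suffices. A sketch assuming the same SDK, with the config shape inferred from the four fields named above:

```ts
import { generateObject } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

// Shape inferred from the deck: checklist, opener, time limit, language.
const VoiceConfig = z.object({
  coverageChecklist: z.array(z.string()),
  openingMessage: z.string(),
  timeLimitMinutes: z.number().int(),
  language: z.string(), // BCP 47 tag, e.g. "en-GB"
});

export async function generateVoiceConfig(brief: string) {
  const { object } = await generateObject({
    model: anthropic("claude-sonnet-4-6"), // exact model id string is an assumption
    schema: VoiceConfig,
    prompt: `Shape this research brief into an AI voice-interview config:\n${brief}`,
  });
  return object; // single call, no repair loop needed for a config this small
}
```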
📋
Step 3
Draft Ready
Generated study saved to disk. Author reviews and approves. Publish is a separate explicit action — nothing goes to CF until the author says so.
SSE stream · author review · POST /publish
🧪 Automated Testing — Pre-launch QA Gate
🤖
Test Generator + Headless Runner
study.json is the source of truth for test generation. Every card type maps deterministically to a Playwright interaction (sketched below). Open-text answers are generated by a lightweight model (Haiku / Flash) per persona. No AI required at run time — the test data file is pure JSON and runs at any scale.
Phase 1 — Deterministic
single_choice → pick from options
multi_select → pick N from options
rank_order → ordered permutation
rating / nps → integer in range
slider_grid → per-row integer
emotion_dial → valence + arousal point
Phase 2 — AI fills gaps
open_text_short / long → Haiku / Flash generates realistic answer given question + persona
ai_probe → same model answers whatever Claude generated at runtime
AI invoked once at generation time — not during the run
Output
sessions-*.json with fully specified per-persona answers
Playwright drives real browser against live survey URL
DO isolation validated — zero cross-contamination
Every card type covered, every answer asserted
✓ DO isolation verified ✓ All card types exercised ✓ Answer fidelity asserted ⚡ Any study, any schema — zero manual test writing
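A sketch of the Phase 1 mapping as one switch: each card type resolves to a concrete answer with no model in the loop, sampling randomly inside the card's legal answer space. Card shapes are illustrative, not Rival's actual schema:

```ts
type Card =
  | { id: string; type: "single_choice" | "multi_select" | "rank_order"; options: string[] }
  | { id: string; type: "rating" | "nps"; min: number; max: number }
  | { id: string; type: "slider_grid"; rows: string[]; min: number; max: number }
  | { id: string; type: "emotion_dial" };

const randInt = (min: number, max: number) => min + Math.floor(Math.random() * (max - min + 1));

function deterministicAnswer(card: Card): unknown {
  switch (card.type) {
    case "single_choice":
      return card.options[randInt(0, card.options.length - 1)];
    case "multi_select":
      return card.options.filter(() => Math.random() < 0.5); // pick N from options
    case "rank_order":
      return [...card.options].sort(() => Math.random() - 0.5); // crude shuffle, fine for test data
    case "rating":
    case "nps":
      return randInt(card.min, card.max); // integer in range
    case "slider_grid":
      return Object.fromEntries(card.rows.map((r) => [r, randInt(card.min, card.max)]));
    case "emotion_dial":
      return { valence: Math.random() * 2 - 1, arousal: Math.random() * 2 - 1 };
  }
}
```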
🤖 Synthetic Respondents ✦ New Capability
An army of AI agents — each embodying a distinct persona — participates in the study headlessly as if they were real panel members. Same DO, same endpoints, same analytics pipeline. The only difference: the respondent is an agent.
When to use
Hard-to-reach demographics · Quota gaps mid-fieldwork · Markets with no panel coverage · Sensitive topics — self-censorship risk · Faster pilot before real fieldwork · Cost-capped studies
study.json + persona description → LLM generates answers once → sessions-*.json → headless runner at any scale → DO records as real data
🤖
18–24 F
🤖
35–44 M
🤖
55+ F
🤖
25–34 M
🤖
...
run 10,000 in parallel
Survey URL — headless agent
Completion — same DO, same analytics
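A sketch of the shared runner: the QA gate and synthetic respondents both replay a pregenerated sessions file through a real headless browser, one isolated context per respondent. Selectors, the file name, and the survey URL are illustrative:

```ts
import { chromium } from "playwright";
import { readFileSync } from "node:fs";

type Session = { persona: string; answers: Record<string, string> };

async function runSession(surveyUrl: string, session: Session) {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage(); // fresh context per respondent
  await page.goto(surveyUrl);
  for (const [cardId, answer] of Object.entries(session.answers)) {
    await page.fill(`[data-card="${cardId}"] input`, answer); // illustrative selector
    await page.click('button:has-text("Next")');
  }
  await browser.close();
}

// Hypothetical instance of the sessions-*.json output described above.
const sessions: Session[] = JSON.parse(readFileSync("sessions-luxury.json", "utf8"));
await Promise.all(sessions.map((s) => runSession("https://study.example.com/s/abc123", s)));
```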
👥
Rival Panel & Distribution Existing Infrastructure
Built over years. Millions of opted-in respondents, demographic targeting, compliance track record. Rival 2.0 relies on this — it does not replace it. Clean API boundary: Rival 2.0 takes over between link click and completion webhook. Everything before and after stays exactly as it is today.
Survey URL distributed
via SMS / email (Twilio)
Completion webhook
incentive triggered
Opted-in respondents · Demographic targeting · Quota matching · Years of trust · 🔌 MCP integration — agents talk to panel directly
☁️ Step 4 — Cloudflare Edge Deploy ✦ Built for Rival 2.0 · ☁ Cloudflare · One WfP UserWorker per study · One DO per session
Text Study Worker
📄
survey.js → R2
WfP UserWorker study-{id}
D1 row — modality: text
Session DO — respondent A sandboxed
Session DO — respondent B sandboxed
... per engagement isolated
Voice Study Worker
🎙️
stub survey.js → R2
WfP UserWorker study-{id}
D1 row — modality: voice + voiceConfig
Voice DO — respondent A agent loop
Voice DO — respondent B agent loop
... per engagement isolated
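A minimal sketch of the routing path using the real Workers for Platforms dispatch namespace and Durable Object APIs; binding names and the URL scheme are assumptions:

```ts
export interface Env {
  DISPATCH: DispatchNamespace;        // Workers for Platforms: one UserWorker per study
  SESSION_DO: DurableObjectNamespace; // one sandboxed DO per respondent session
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const [, studyId, sessionId] = new URL(request.url).pathname.split("/"); // /{studyId}/{sessionId}

    if (sessionId) {
      // Respondent traffic goes straight to their own isolated Durable Object.
      const id = env.SESSION_DO.idFromName(`${studyId}:${sessionId}`);
      return env.SESSION_DO.get(id).fetch(request);
    }
    // Everything else dispatches to the per-study UserWorker.
    return env.DISPATCH.get(`study-${studyId}`).fetch(request);
  },
};
```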
🌍
At Scale — Cloudflare Global Network
Every study = its own isolated Worker. Every session = its own sandboxed DO. Zero shared state.
300+ edge nodes
concurrent DOs
<5ms auth overhead
0 shared servers
study-brandtrack-q2
text
study-luxury-voice-apr
voice
study-cx-nps-global
text
study-ai-sentiment-q3
voice
study-concept-test-fr
text
study-pharma-pulse-de
text
study-ux-diary-ar
voice
+ hundreds more
any org
Toronto
London
Frankfurt
Singapore
São Paulo
Dubai
Tokyo
Sydney
+290 more
🏁
When a Session Completes
Three things happen simultaneously the moment a respondent finishes — guaranteed delivery to ClickHouse, real-time events to D1, and Reachy ready to answer questions about the data.
1
Completion Pipeline
Answers → ClickHouse, guaranteed
+
2
Live Events Agent
Real-time quota & drop-off to D1
+
3
Reachy Analytics
Ask questions, get answers instantly
📬 Completion Pipeline
🗄️
Session DO
Flushes answers + transcript. Clears in-memory state.
📨
CF Queue
Per-study queue. Guaranteed delivery, no data loss on Worker failure.
Queue.send()
⚙️
Queue Consumer Worker
Batches completions → ClickHouse upsert. Retries on failure. Writes D1 completion record.
batch upsert
🔔
Panel Webhook
POST to Rival panel system — mark complete, trigger incentive, update quota counts.
panel.rival.com/complete
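A sketch of both ends of the queue, using the real Queues producer and consumer APIs; binding names, the payload shape, and the ClickHouse helper are assumptions:

```ts
interface Env {
  COMPLETIONS: Queue; // per-study completion queue
}

type Completion = { sessionId: string; answers: unknown[]; transcript?: string };

// Producer, called from the Session DO on completion:
async function flushCompletion(env: Env, payload: Completion) {
  await env.COMPLETIONS.send(payload); // guaranteed delivery even if the Worker dies afterwards
}

// Hypothetical batch-upsert helper over ClickHouse's HTTP interface.
declare function clickhouseUpsert(table: string, rows: Completion[]): Promise<void>;

// Consumer Worker:
export default {
  async queue(batch: MessageBatch<Completion>, env: Env) {
    await clickhouseUpsert("responses", batch.messages.map((m) => m.body));
    for (const m of batch.messages) m.ack(); // unacked messages are redelivered, so failures retry
  },
};
```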
Live Events Agent
📡
DOs fan out events
Each DO emits structured events as the interview progresses — answer recorded, quota cell filled, AI probe triggered.
🔄
Event Router Worker
Receives fan-out from all active DOs. Normalises and writes to D1 in real time.
low-latency path
🗃️
D1 — Live State
Quota fill rates, partial completions, drop-off points. Available immediately — no batch delay.
<100ms latency
📊
Live Dashboard
Researcher watches quota cells fill in real time. Study DO polls D1 to gate over-quota respondents instantly.
SSE to Studio UI
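A sketch of the router's write path on the real D1 API; the event shape, table, and binding names are assumptions:

```ts
interface Env {
  DB: D1Database;
}

type LiveEvent = {
  studyId: string;
  sessionId: string;
  kind: "answer_recorded" | "quota_cell_filled" | "ai_probe_triggered";
  at: number; // epoch ms
};

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const event = (await request.json()) as LiveEvent; // fan-out POST from a Session DO
    await env.DB
      .prepare("INSERT INTO live_events (study_id, session_id, kind, at) VALUES (?1, ?2, ?3, ?4)")
      .bind(event.studyId, event.sessionId, event.kind, event.at)
      .run();
    return new Response(null, { status: 204 }); // keep the hot path cheap
  },
};
```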
Reachy — Talk to Your Data
🏛️
ClickHouse Cloud
Full interview transcripts, card answers, AI probe responses, quota cells — structured and queryable.
source of truth
Reachy
Rival's Agentic Platform Layer
"What were the top themes from the luxury study?"
"Compare NPS across age groups"
"Which objectives had the lowest coverage?"
📈
Analysis Agent
Plain English → ClickHouse SQL → synthesised answer. Topline report, cross-tabs, verbatim excerpts — generated on demand.
Studio dashboard
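A sketch of the Talk-to-Data loop: one model call drafts SQL against a known schema, ClickHouse executes it, a second call narrates the rows. Model choice and the schema document are assumptions, and a read-only ClickHouse user is assumed so model-drafted SQL cannot mutate anything:

```ts
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { createClient } from "@clickhouse/client";

const clickhouse = createClient({ url: process.env.CLICKHOUSE_URL }); // read-only credentials assumed

export async function askReachy(question: string, schemaDoc: string) {
  // 1. Plain English → ClickHouse SQL.
  const { text: sql } = await generateText({
    model: anthropic("claude-sonnet-4-6"), // exact model id string is an assumption
    prompt: `Schema:\n${schemaDoc}\n\nWrite one ClickHouse SELECT that answers: ${question}\nReturn SQL only.`,
  });

  // 2. Execute against the source of truth.
  const rows = await (await clickhouse.query({ query: sql, format: "JSONEachRow" })).json();

  // 3. Rows → synthesised answer.
  const { text: answer } = await generateText({
    model: anthropic("claude-sonnet-4-6"),
    prompt: `Question: ${question}\nRows: ${JSON.stringify(rows)}\nSynthesise a short answer with key numbers.`,
  });
  return answer;
}
```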
🗃️
D1
Study metadata · live quota state · completion records
+
📦
R2
Survey programs · media uploads · voice recordings
+
🏛️
ClickHouse
Full transcripts · card answers · AI probe data · analysis
Reachy
The Cycle Closes
Back to Reachy
Insights from this study feed Reachy's memory. She surfaces patterns, flags follow-up questions, and can kick off the next study — all from the same conversation. And this is just one of her roles. The same agent layer powers Mobile Topline deliverables, Crosstabs Builder, Website Builder, and every new capability Rival ships.
✍️ Study Authoring · 📊 Talk to Data · 📱 Mobile Topline · 🔢 Crosstabs Builder · + Every new builder
"Your luxury study is done. Promoter segment over-indexed on experience, not price. Want me to run a follow-up?"
"Yes — dig deeper on the experience theme. Same audience."
"Generating brief now. Routing to authoring agent ↗"
New study
API-First · Agent Economy
Rival 2.0 as Platform Infrastructure
Every capability exposed as an interface. External agents, MCP clients, and third-party tools can author studies, field research, and query results — without opening a browser. Rival becomes infrastructure for the research layer of the agent economy, not just a survey tool.
🔌
MCP Server
Model Context Protocol
External Agents & Clients
🤖
Claude Desktop / Claude.ai
Researcher asks Claude to run a study directly from their workflow
⌨️
Cursor / VS Code
Dev teams field quick concept tests without leaving their editor
🏢
Custom Internal Tools
CRM, brand tracker, NPS platform — trigger research programmatically
Automation Workflows
Study auto-launches when NPS drops, competitor announces, or quota shifts
MCP / REST
Rival 2.0 MCP Tools
create_study
Brief → authored study via Reachy
publish_study
Deploy to CF edge, get survey URL
get_results
Toplines, verbatims, quota status
query_responses
Plain English → ClickHouse SQL
get_quota_status
Live quota fill rates per cell
pause_study
Halt recruitment on quota full
reachy.crosstabs
Crosstabs builder via agent
reachy.topline
Mobile topline deliverable
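A sketch of two of these tools on the official MCP TypeScript SDK (@modelcontextprotocol/sdk); the rivalApi helper and its endpoints are assumptions:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical thin wrapper over Rival's REST surface.
declare function rivalApi(method: string, path: string, body?: unknown): Promise<any>;

const server = new McpServer({ name: "rival", version: "2.0.0" });

server.tool(
  "create_study",
  { brief: z.string(), modality: z.enum(["text", "voice"]) },
  async ({ brief, modality }) => {
    const study = await rivalApi("POST", "/generate", { brief, modality });
    return { content: [{ type: "text", text: `Draft ready: ${study.id}` }] };
  }
);

server.tool("get_quota_status", { studyId: z.string() }, async ({ studyId }) => {
  const cells = await rivalApi("GET", `/studies/${studyId}/quota`);
  return { content: [{ type: "text", text: JSON.stringify(cells) }] };
});

await server.connect(new StdioServerTransport()); // any MCP client above can now call these
```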
Security & Compliance
SOC 2 Type II · ISO 27001 · ISO 27701 (Privacy) · GDPR — by architecture · ISO 27018 (PII) · EU-U.S. Data Privacy Framework · AES-256 at rest, TLS in transit · Per-session isolation — no shared memory · 300+ edge cities — data stays near respondent