Rival Reimagined
A feature-by-feature comparison in the context of Responding: the heart and soul of the Rival platform.
| Area | AWS | Cloudflare | Cost | Winner |
|---|---|---|---|---|
| Deployment | | | | |
| Deployment speed | Multi-step CI/CD pipeline: ECR push, ECS task definition update, 5–15 min | wrangler deploy, 10–30 seconds globally | AWS: included in CodePipeline (~$1/pipeline/mo). CF: Workers free tier includes 100K req/day; paid $5/mo flat. | CF |
| Staging environments | Separate ECS cluster or environment, manual config duplication | Preview deployments per branch, automatic, free | AWS: separate cluster cost. CF: Pages preview deployments free, unlimited. | CF |
| Rollback | Re-deploy previous task definition, 5–10 min | Instant. Previous deployment always available. | AWS: no extra cost to redeploy. CF: free, instant. | CF |
| Infrastructure-as-code | CloudFormation/Terraform required for any infra change | Zero infra to define. Workers/Pages are the deployment unit. | AWS: CloudFormation free; Terraform Cloud ~$20/user/mo. CF: no infra to define, $0. | CF |
| Service bindings | Connecting services requires ARNs, IAM roles, VPC peering or endpoints, and explicit policy documents in CloudFormation or Terraform. Every service-to-service connection is a configuration artifact you write, review, and maintain. | Workers connect to R2, KV, DO, Queues, Vectorize, AI, and other Workers via bindings: a single line in wrangler.toml. No ARNs, no IAM policies, no network config. The binding is the connection. | AWS: IAM + policy management is engineering overhead, not billed directly. CF: bindings are free; pay only for the bound service usage. | CF |
| Compute & Scaling | | | | |
| Idle cost | EC2/ECS instances running 24/7 even between studies | Zero. Workers and DOs only run when a request comes in. | AWS: t3.medium ~$30/mo always-on. CF: $0 when idle. | CF |
| Cold start | ECS: none if always-on (but you pay). Lambda: 100–500ms cold start | Workers: sub-millisecond, no cold start concept | AWS: Lambda ~$0.20/1M invocations + duration. EC2: always-on cost. CF: Workers $5/mo flat for 10M requests. | CF |
| Scaling | Auto-scaling groups, target tracking policies, warm-up time | Automatic, instantaneous, no configuration | AWS: Auto Scaling free; ALB ~$16/mo base. CF: included in Workers pricing. | CF |
| Per-respondent isolation | Shared EC2 process, session state in Redis/DB | One Durable Object per respondent, completely isolated | AWS: Redis ~$15/mo (cache.t3.micro) + EC2. CF: DO $0.15/M invocations + $0.20/GB-month storage. | CF |
| Long-running sessions | ALB idle timeout (60s default), keep-alive hacks needed | DOs hold state for hours natively, no timeout issues | AWS: ALB ~$16/mo + WebSocket data transfer. CF: DO WebSocket included in DO pricing. | CF |
| Global distribution | Multi-region requires multi-region deployment, Route 53, replication | Single deployment, 300+ PoPs, respondent hits nearest edge | AWS: multi-region doubles infra cost. CF: single deploy, all PoPs, same price. | CF |
| Per-study isolation at scale (500+ live) | No equivalent primitive. Isolated compute per tenant at this count means separate Lambda versions, separate ECS tasks, or a complex multi-tenant routing layer. Provisioning and IAM overhead scales linearly with study count. | Workers for Platforms (WfP): each study gets its own Worker script and Durable Object namespace, deployed dynamically via the CF API at publish time. No per-study provisioning, no IAM, no cross-study bleed. 500+ live studies is routine. Rival uses WfP as the core isolation primitive for the responding plane. | AWS: per-study Lambda/ECS multiplies baseline cost. CF: WfP is $25/mo flat (includes 20M requests, 60M CPU-ms, 1,000 scripts) + $0.02/script and $0.30/M requests beyond that. No per-study overhead beyond script count. | CF |
| Storage & Data | | | | |
| Asset storage egress | S3 + CloudFront: ~$0.09/GB egress from S3, CloudFront has its own pricing | R2: zero egress fees, flat storage cost | AWS: S3 ~$0.023/GB storage + $0.09/GB egress + CloudFront ~$0.0085/10K requests. CF: R2 $0.015/GB storage, $0 egress. | CF |
| Survey response storage | RDS (always-on, sized for peak) or DynamoDB (per-read/write) | DO SQLite per respondent, ClickHouse for analytics. Pay for what you use. | AWS: RDS db.t3.micro ~$25/mo always-on. CF: DO SQLite included in DO pricing; ClickHouse separate. | CF |
| Analytics / BI | Separate RDS + ETL pipeline + Sisense ($50–100k/yr license) | ClickHouse on CF, DataTalk MCP. Sisense replaced entirely. | AWS: Sisense $50–100K/yr. CF: ClickHouse on Workers (self-hosted) + DataTalk MCP, fraction of the cost. | CF |
| Real-time data availability | ETL job runs on schedule, reports lag behind by hours | Respondent submits answer, it is in ClickHouse immediately | AWS: ETL pipeline adds engineering cost. CF: no ETL pipeline needed. | CF |
| DevOps & Operations | | | | |
| Infrastructure management | EC2 patching, AMI updates, security groups, VPC configuration | Zero. No servers to manage. | AWS: 0.5–1 FTE DevOps cost. CF: near-zero operational overhead. | CF |
| On-call burden | Server health, ECS task failures, DB connections, memory leaks | Mostly application-level. Platform failures are CF's problem. | AWS: more alert surface area, more pages. CF: mostly application-level incidents. | CF |
| Capacity planning | Must pre-provision for peak study load | Not required. Scales to zero and to millions automatically. | AWS: over-provisioning cost typical. CF: not applicable. | CF |
| Observability setup | CloudWatch + custom dashboards, log groups, metric filters | Workers Analytics, Logpush, built-in. Zero setup. | AWS: CloudWatch ~$0.30/GB ingested + dashboard costs. CF: Workers Analytics free; Logpush ~$0.05/GB. | CF |
| PII / secure logging | Custom log filtering pipeline needed, or risk PII in CloudWatch | Redact at Worker before any log is written. Built into the code. | AWS: custom Lambda filter adds cost. CF: handled in Worker code, no extra service. | CF |
| Team size needed | 0.5–1 FTE just for infra/DevOps | Near-zero. Engineers focus on product. | AWS: 0.5–1 FTE. CF: near-zero dedicated infra time. | CF |
| Networking & Performance | | | | |
| Latency | Respondent → CloudFront → Origin (a single region, e.g., us-east-1) | Respondent → nearest CF PoP (300+ globally) | AWS: CloudFront $0.0085–0.012/10K requests. CF: Workers/Pages included in flat pricing. | CF |
| WebSocket support | ALB WebSocket support, connection limits, sticky sessions needed | Native in Workers/DOs, no config | AWS: ALB WebSocket ~$0.008/GB data processed. CF: included in DO pricing. | CF |
| DDoS protection | Shield Standard free, Shield Advanced $3k/month | Built-in at all tiers, no extra cost | AWS: Shield Standard free; Shield Advanced $3,000/mo. CF: built-in, no extra cost. | CF |
| SSL/TLS | ACM certificates, manual renewal if not careful | Automatic, managed, zero config | AWS: ACM certs free. CF: managed TLS free. | CF |
| Cost Structure | | | | |
| Pricing model | Pay for capacity (always-on instances + data transfer + storage) | Pay for usage (requests + active compute + storage) | AWS: capacity-based, pay even at zero load. CF: usage-based, $0 at zero load. | CF |
| Egress charges | S3→internet: ~$0.09/GB, EC2→internet: ~$0.09/GB | Workers/R2/Pages: zero or near-zero egress | AWS: ~$0.09/GB out to internet. CF: $0 egress from Workers, R2, Pages. | CF |
| Idle infra cost | Significant. EC2 runs 24/7 whether studies are fielding or not. | Zero. No requests, no cost. | AWS: ~$30–100+/mo for always-on instances. CF: $0. | CF |
| Sisense replacement | ~$50–100k/year license | Replaced by ClickHouse + DataTalk MCP, fraction of the cost | AWS: $50–100K/yr Sisense license. CF: $0 license cost. | CF |
| Estimated total savings | Baseline | 40–60% infra reduction + 0.5–1 FTE DevOps redirected to product | CF saves 40–60% on infra + recaptures 0.5–1 FTE in engineering capacity. | CF |
| Developer Experience | | | | |
| New service setup | VPC, security groups, IAM roles, task definitions, load balancer rules | New .ts file + wrangler.toml entry | AWS: VPC/IAM/ECS config adds hours. CF: minutes. | CF |
| Local development | Docker Compose, LocalStack for AWS services, complex setup | wrangler dev: full local emulation in one command | AWS: LocalStack not free for all services. CF: wrangler dev free. | CF |
| Secrets management | AWS Secrets Manager or Parameter Store, IAM policy needed | wrangler secret put: one command | AWS: Secrets Manager $0.40/secret/mo. CF: Wrangler secrets free. | CF |
| CI/CD pipeline | CodePipeline or GitHub Actions with AWS credentials, ECR, ECS steps | GitHub Actions: npx wrangler deploy. Two lines. | AWS: CodePipeline ~$1/pipeline/mo + GitHub Actions minutes. CF: GitHub Actions + free wrangler deploy. | CF |
| AI & Agent-Native (2023–2025) | | | | |
| Edge inference | Bedrock: managed LLM inference from a handful of regional endpoints. SageMaker for custom model hosting. Both are regional, not edge. | Workers AI: serverless GPU inference in 180+ cities, sub-millisecond to respondent. Open-source model catalog. No infra to manage. Launched Sept 2023, GA April 2024. | AWS: Bedrock per-token pricing (Claude Sonnet ~$3/1M input, $15/1M output). CF: Workers AI $0.011–0.19/1K neurons depending on model; ~$0.056/1K neurons for Llama 3.1 70B. | CF |
| AI observability & gateway | Bedrock has basic logging. Cross-provider observability requires custom Lambda wrappers or third-party tools. | AI Gateway: a proxy layer for any AI provider (OpenAI, Anthropic, Workers AI, etc.) with logging, caching, rate limiting, retries, and model fallback. GA May 2024, 500M+ requests proxied. | AWS: CloudWatch + custom wrappers. CF: AI Gateway free up to 100K logged requests/day. | CF |
| Vector database | OpenSearch Serverless (vector mode) or Bedrock Knowledge Bases with OpenSearch. Operational overhead, separate billing, separate service. | Vectorize: serverless vector DB built into the Workers platform, designed to pair with Workers AI embeddings. Open beta Sept 2023, GA August 2024. Up to 5M vectors per index. | AWS: OpenSearch Serverless ~$0.24/OCU-hr. CF: Vectorize $0.04/1M queried vectors + $0.05/1M stored dims. | CF |
| Managed RAG pipeline | Bedrock Knowledge Bases: managed ingestion, chunking, embedding, and retrieval. Integrated with S3 and Bedrock models. Mature but AWS-ecosystem locked. | AI Search (formerly AutoRAG): upload docs to R2, Cloudflare handles chunking, embedding, Vectorize indexing, retrieval, and generation. Supports external models via AI Gateway. Open beta April 2025. | AWS: Bedrock Knowledge Bases: S3 + Bedrock embedding + retrieval costs stack up. CF: AI Search (AutoRAG) pricing TBD; in open beta. | Tie |
| Agent state & persistence | Lambda + DynamoDB + Step Functions wired together manually. No single primitive gives you co-located compute and state. Complex to orchestrate. | Durable Objects: one DO per agent, co-located compute + SQLite storage + WebSocket connections in a single entity with a globally unique address. SQLite in DOs entered public beta Sept 2024. Free tier added April 2025. | AWS: DynamoDB ~$1.25/M writes + Step Functions ~$0.025/1K transitions. CF: DOs $0.15/M invocations + $0.20/GB-month storage. | CF |
| Agents SDK | No equivalent managed SDK. Agents on AWS require composing Bedrock Agents, Lambda, Step Functions, and DynamoDB manually. | @cloudflare/agents: TypeScript SDK for building persistent AI agents on Durable Objects. Each agent gets its own SQLite DB, WebSocket connections, scheduling, and AI chat support. Launched February 2025. | AWS: Lambda + Step Functions + DynamoDB billed separately. CF: @cloudflare/agents runs on DOs, same DO pricing. | CF |
| MCP server hosting | No managed MCP offering. Building an MCP server on AWS requires Lambda + API Gateway + manual transport implementation. | Native MCP support via the McpAgent class in the Agents SDK, launched April 2025. Stateful per-session MCP servers backed by Durable Objects with SSE and Streamable HTTP transports. MCP Server Portals (managed hosting) in open beta Sept 2025. | AWS: Lambda + API GW ~$3.50/M requests + data transfer. CF: Workers $5/mo flat (10M req included). | CF |
| Durable execution / workflows | Step Functions: mature, visual, well-documented. But verbose state machine definitions and per-state-transition pricing adds up. | Cloudflare Workflows: durable multi-step execution built on Workers. Auto-retry, persist state across steps, survive failures and long pauses. Open beta Oct 2024, GA 2025. | AWS: Step Functions ~$0.025/1K state transitions. CF: Workflows pricing TBD; based on Workers + DO pricing. | Tie |
| Browser automation at edge | No managed browser execution. Requires EC2 + Puppeteer or a third-party service. | Browser Rendering (Browser Workers): headless Chromium accessible from Workers. Screenshots, PDFs, dynamic scraping, browser automation at the edge. GA April 2024. | AWS: EC2 + Puppeteer: instance cost + maintenance. CF: Browser Rendering $2/1K sessions (paid tier). | CF |
| Fine-tuned inference (LoRA) | SageMaker supports fine-tuned model hosting but requires significant infrastructure setup and per-instance billing. | LoRA adapters on Workers AI: upload lightweight adapters (few MB vs. tens of GB) and serve fine-tuned inference with milliseconds of added latency. Launched April 2024. | AWS: SageMaker per-instance billing + model storage. CF: LoRA adapters on Workers AI, same per-neuron pricing as base model. | CF |
| LLM security / Firewall for AI | Bedrock Guardrails: content filtering, PII redaction, topic blocking. Integrated but Bedrock-only. | Firewall for AI: WAF-style protection for any LLM endpoint. Detects prompt injection, PII leakage, unsafe content. Scores injection likelihood 1–99. Works across any provider via AI Gateway. Open beta March 2025. | AWS: Bedrock Guardrails ~$0.15/1K text units. CF: Firewall for AI pricing via AI Gateway; currently in open beta. | CF |
| Inference engine performance | Bedrock uses managed inference. No visibility into the underlying engine. | Infire: Cloudflare's own Rust-built inference engine (replacing vLLM). Continuous batching, paged KV-cache, multi-GPU tensor parallelism. Up to 7% faster than vLLM on H100s. Launched Birthday Week 2025. | AWS: Bedrock per-token pricing. CF: same Workers AI per-neuron pricing; Infire delivers more throughput per dollar. | CF |
| Real-time TTS/STT at edge | Polly (TTS) and Transcribe (STT) are regional services. Not edge-native, adds latency for globally distributed users. | Deepgram Nova 3 (STT) and Aura 2 (TTS) available directly via Workers AI. Runs at the edge. Relevant for Rival's voice interview modality. Added AI Week 2025. | AWS: Polly $4/1M chars (neural); Transcribe $0.024/min. CF: Deepgram via Workers AI; pricing TBD in open beta. | CF |
| MCP as a platform API layer | 66 separate MCP servers across services (Lambda, ECS, Bedrock, etc.), each scoped to one service. No unified server. Requires manual orchestration across servers for cross-service workflows. In preview as of March 2026. No code execution primitive. | One unified MCP server (mcp.cloudflare.com) exposing all 2,500+ Cloudflare APIs via two tools: search() for progressive semantic discovery and execute() for server-side JavaScript execution in an isolated V8 sandbox. Cross-API workflows are composed in code, not by chaining 2,500 individual tools. ~1,000 tokens vs. 1M+ for traditional enumeration. Launched February 2026. | AWS: no extra charge; pay for underlying AWS resources. CF: no extra charge; pay for Workers/API usage. | CF |
| Static Hosting (Responding UI, client-facing deliverables, and any frontend) | | | | |
| Subdomain routing | No native subdomain-to-subfolder mapping. Requires Lambda@Edge to route {client}.domain.com to the right S3 path. Custom code you own and maintain forever. | Built-in. A branch deploy is a subdomain. Zero config. One CLI call assigns the subdomain automatically. | AWS: Lambda@Edge $0.60/1M requests + $0.00001/100ms compute. CF: Pages subdomain routing free. | CF |
| SSL/TLS per subdomain | ACM wildcard cert (free) but must be provisioned in us-east-1 specifically for CloudFront. Manually associated with the distribution. | Automatic per subdomain. Zero-touch. No region constraints. | AWS: ACM free but requires provisioning steps. CF: automatic, free. | CF |
| Deploy workflow | S3 upload + CloudFront cache invalidation + wait for propagation. Multiple SDK calls, multiple failure modes. | wrangler pages deploy ./output --project-name project --branch {slug}. One command. Done. | AWS: S3 + CloudFront invalidation ($0.005/path). CF: wrangler pages deploy, no invalidation cost. | CF |
| Republish latency | CloudFront cache invalidation takes 5–15 minutes. Costs $0.005 per path invalidated. | Seconds. Instant global propagation. No invalidation concept. | AWS: $0.005/path for cache invalidation. CF: $0, instant. | CF |
| Infrastructure code | ~250–300 lines of CloudFormation: S3 bucket, CloudFront distribution, Route 53 wildcard, ACM cert, Lambda@Edge, IAM roles. Code you write, review, test, debug, and maintain. | Zero. One CLI command created the project. No infrastructure code exists. | AWS: ~250–300 lines CloudFormation to write and maintain. CF: zero. | CF |
| Ongoing maintenance | Lambda runtime updates, CloudFront behavior changes, S3 bucket policies, cert monitoring. Every component is another thing that can break. | None. Platform managed entirely by Cloudflare. | AWS: engineering time for runtime updates, cert monitoring. CF: none. | CF |
| Time to implement | Estimated 2–3 days to build, test, and validate the full stack. | Already working. Took 2 hours. | AWS: est. 2–3 days engineering. CF: 2 hours. | CF |
| Scale ceiling | Unlimited publishes, pay-per-use. No hard ceiling. | 5,000 deploys/month on Pages Pro (~160/day). Not a constraint at current scale but worth noting. | AWS: unlimited publishes. CF: 5,000 deploys/mo on Pages Pro. | AWS |
| Vendor lock-in | Low. Deliverables are static HTML files. Move anywhere. | Low. Same. Static HTML is static HTML. | Both: static HTML is portable. Cost neutral. | Tie |
| AWS Wins | | | | |
| Managed databases | RDS and Aurora have decades of operational maturity, read replicas, point-in-time recovery. The gold standard for relational workloads. | D1 and DO SQLite are purpose-built for the edge: lightweight, co-located, zero connection overhead. Not designed for heavy concurrent writes. For those workloads, Hyperdrive connects any Worker to any PostgreSQL-compatible database (RDS, Aurora, Neon, Supabase, PlanetScale) with connection pooling and edge-side query caching. You can even point it at an existing AWS RDS or Aurora instance with no migration required. | AWS: RDS db.t3.micro ~$25/mo; Aurora Serverless v2 from ~$0.12/ACU-hr. CF: Hyperdrive $0.90/M queries + point to existing RDS at no migration cost. | AWS |
| Ties | | | | |
| Ecosystem breadth | 200+ services, SageMaker, Bedrock, Rekognition, full ML platform. Unmatched breadth. | Growing fast. The AI/agent surface (Workers AI, Agents SDK, MCP, AutoRAG, Vectorize, Workflows, Browser Rendering) has closed the gap significantly since 2023. For the Rival workload specifically, CF covers everything needed. | AWS: costs vary widely per service. CF: Workers AI + Agents + Vectorize all on flat/usage pricing. | Tie |
| Vendor familiarity | Most engineers have AWS experience. Large talent pool. | Edge model (DOs, Workers) requires a mental model shift, but easier to ramp up in practice. Fewer services to learn, excellent documentation, and a much simpler deployment model means engineers are productive faster. | AWS: existing talent pool (no cost). CF: faster ramp due to simpler model; minimal training cost difference. | Tie |
| Compliance certifications | SOC 2 Type II, ISO 27001, ISO 27701, ISO 27018, PCI DSS Level 1, FedRAMP Moderate (authorized), FedRAMP High (in process), HIPAA (BAA-based), HITRUST, C5, IRAP, MTCS, FINMA, and more. One of the most extensive compliance portfolios in cloud. | SOC 2 Type II, ISO 27001, ISO 27701, ISO 27018, PCI DSS Level 1, FedRAMP Moderate (authorized), FedRAMP High (in process), HIPAA (BAA-based), C5 2020 (Germany), ENS High (Spain), EU Cloud Code of Conduct (GDPR), IRAP PROTECTED (Australia), ISMAP (Japan), EU-US DPF, Swiss-US DPF, UK DPF extension, Global CBPR Forum (first org ever certified, Jan 2025). Stronger than commonly assumed. | AWS: compliance programs typically included in enterprise tiers. CF: same. Both enterprise agreements needed for full compliance coverage. | Tie |
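To make the service-bindings row concrete, here is a minimal hypothetical wrangler.toml sketch: each connection is one declaration, with no ARNs, IAM policies, or network config. The binding names and resource names (RESPONSES, STUDY_CONFIG, SESSIONS, ANALYTICS) are illustrative, not taken from the Rival codebase.

```toml
name = "responding-worker"
main = "src/index.ts"
compatibility_date = "2024-09-01"

# R2 bucket binding: the Worker sees env.RESPONSES, no ARN or IAM policy
[[r2_buckets]]
binding = "RESPONSES"
bucket_name = "survey-responses"

# KV namespace for lightweight config lookups
[[kv_namespaces]]
binding = "STUDY_CONFIG"
id = "<namespace-id>"

# One Durable Object class; each respondent gets its own instance
[[durable_objects.bindings]]
name = "SESSIONS"
class_name = "RespondentSession"

# Service binding: call another Worker directly, no network config
[[services]]
binding = "ANALYTICS"
service = "analytics-worker"
```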
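The "two lines" CI/CD claim maps to a GitHub Actions workflow along these lines; this is a sketch assuming a CLOUDFLARE_API_TOKEN repository secret is configured, not Rival's actual pipeline.

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The entire deploy step: no ECR push, no task definition update
      - run: npx wrangler deploy
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
```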
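The Workers for Platforms pricing quoted in the per-study isolation row ($25/mo including 20M requests and 1,000 scripts, then $0.02/script and $0.30/M requests) can be sanity-checked with a small cost function. This models only the figures quoted in the table; it is not an official calculator.

```typescript
// Monthly WfP cost using the figures quoted in the table:
// $25 base covers 20M requests and 1,000 scripts; overage is
// $0.02 per extra script and $0.30 per extra million requests.
function wfpMonthlyCost(requestsMillions: number, scripts: number): number {
  const base = 25;
  const scriptOverage = Math.max(0, scripts - 1_000) * 0.02;
  const requestOverage = Math.max(0, requestsMillions - 20) * 0.30;
  return base + scriptOverage + requestOverage;
}

// 600 live studies at 15M requests/month: still the flat $25
console.log(wfpMonthlyCost(15, 600));
// 1,500 studies at 30M requests: 25 + 500 * 0.02 + 10 * 0.30 = $38
console.log(wfpMonthlyCost(30, 1_500));
```

Note how the per-study cost stays flat until the script count passes 1,000, which is the "no per-study overhead" claim in the table.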
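The PII / secure logging row says redaction happens in Worker code before any log is written. A minimal illustrative helper could look like the following; the regexes and placeholder tokens are assumptions for the sketch, not Rival's actual redaction rules.

```typescript
// Redact obvious PII before a log line leaves the Worker.
// Illustrative only: real rules would follow the study's PII policy.
const EMAIL_RE = /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g;
const PHONE_RE = /\+?\d[\d\s().-]{7,}\d/g;

function redact(message: string): string {
  return message.replace(EMAIL_RE, "[email]").replace(PHONE_RE, "[phone]");
}

function safeLog(message: string): void {
  // Nothing unredacted ever reaches the log sink.
  console.log(redact(message));
}

safeLog("respondent jane.doe@example.com called +1 (555) 123-4567");
// logs: respondent [email] called [phone]
```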
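The per-respondent isolation rows rest on the Durable Object addressing model: the same name always routes to the same single object, and objects never share state. The sketch below simulates that model in plain TypeScript (the Map stands in for idFromName + get); it is not actual DO runtime code.

```typescript
// Plain-TypeScript sketch of the Durable Object addressing model:
// the same respondent id always resolves to the same isolated session,
// and sessions never share state. Not actual Workers runtime code.
class RespondentSession {
  private answers = new Map<string, string>();
  record(question: string, answer: string): void {
    this.answers.set(question, answer);
  }
  count(): number {
    return this.answers.size;
  }
}

// Stand-in for env.SESSIONS.idFromName(respondentId) followed by .get(id)
const sessions = new Map<string, RespondentSession>();
function getSession(respondentId: string): RespondentSession {
  let s = sessions.get(respondentId);
  if (!s) {
    s = new RespondentSession();
    sessions.set(respondentId, s);
  }
  return s;
}

getSession("r-001").record("q1", "yes");
getSession("r-001").record("q2", "no");
getSession("r-002").record("q1", "maybe");
// r-001 holds 2 answers, r-002 holds 1: no cross-respondent bleed
console.log(getSession("r-001").count(), getSession("r-002").count());
```

In the real platform the session would live in a DO with co-located SQLite storage, so the isolation also survives restarts and holds WebSocket connections, per the long-running sessions row.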