Statement of Direction: Where Switch Is Going and Why
What this is (and what it isn't)
This is Switch's living narrative — the single source of truth for where we are, what we're building, and where we're headed.
Use it for sales calls, investor updates, customer conversations, internal planning. Everyone tells the same story. Nobody guesses. Nobody over-promises.
It is not a feature roadmap with dates. It's not a product spec. It's a statement of direction — what we believe, what we're doing about it, and how far along we are.
The big picture (zoom way out)
Here's the mental model. For the last 20 years, every website was a library built for one type of visitor: humans. We designed pages for eyes. We optimized for clicks. We A/B tested button colors.
Now a billion robots just walked into that library. And they read 10,000x faster than us.
Some of them are helpful — shopping agents comparing prices on behalf of real customers, AI assistants researching answers, search engine crawlers making your content discoverable. Some are parasitic — scrapers stealing your pricing data, competitors mining your catalog, bots burning your bandwidth. And most are somewhere in between: gray-area visitors that nobody quite knows what to do with.
Here's the thing that most people miss: these two audiences — humans and agents — don't want the same experience. Not even close.
Think of it like a restaurant with two types of customers. Humans walk through the front door, sit down, read the menu, and order. Agents pull up to the loading dock and need the inventory manifest in a spreadsheet. If you hand the agent a leather-bound menu, it can't parse it. If you hand the human a spreadsheet, they leave.
Same business. Same products. Two completely different front doors.
And it gets weirder. Sometimes you want agents at the loading dock — they're placing bulk orders on behalf of real buyers. Sometimes you want to lock the loading dock entirely — they're photographing your inventory for a competitor. And sometimes you want them there, but only if they follow the rules — show your credentials, don't take more than you need, and don't come back 500 times in an hour.
The answer is almost never "block everything" or "allow everything." The answer is: design two webs, deliberately, on the same site. We explored this framework in our Agent-Ready Website Playbook — the practical guide to making your site work for both audiences.
That's what Switch makes possible.
The "Two Webs" framework
Here's a concrete way to think about this. Every important page on your site actually needs to serve two different experiences:
Checkout
- The human web: Trust badges, customer photos, influencer quotes, "98% on-time delivery!"
- The agentic web: delivery_sla: 2_business_days, on_time_rate: 0.98 — structured, parseable, comparable
Product pages
- The human web: Rich media, lifestyle imagery, reviews, social proof
- The agentic web: Clean specs, pricing, availability in Markdown or structured data
Pricing
- The human web: Visual plan comparison grid, feature highlights
- The agentic web: Structured data feed for commercial agents. Nothing for competitor scrapers — Content Gate replaces the page before it loads
Blog
- The human web: Full reading experience, embedded media, related posts
- The agentic web: Markdown optimized for accurate AI citation. Unauthorized scrapers get blocked
API docs
- The human web: Interactive examples, code playgrounds
- The agentic web: Machine-readable endpoint specs. Rate-limited access based on agent identity
Give an agent the human version and it fumbles — hallucinating prices, misreading specs, losing context. Give a human the agent version and they bounce. Serve both audiences the same thing and you're optimizing for neither.
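The table above can be sketched as a content-negotiation step. This is a minimal illustration of the idea, not Switch's actual API: the visitor-class names, the `chooseRepresentation` helper, and the sample payloads are all hypothetical.

```javascript
// Hypothetical "two webs" sketch: pick a representation per visitor class.
// Class names and this helper are illustrative, not Switch's real API.

function renderHtmlPage() {
  // What a person sees: trust badges, social proof, rich copy.
  return "<h1>Free shipping!</h1><p>98% on-time delivery</p>";
}

function renderMarkdown() {
  // What an agent sees: structured, parseable, comparable fields.
  return "delivery_sla: 2_business_days\non_time_rate: 0.98";
}

function chooseRepresentation(visitorClass) {
  switch (visitorClass) {
    case "commercial_agent":
    case "browser_agent":
      return { contentType: "text/markdown", body: renderMarkdown() };
    case "scraper":
      // Scrapers are handled by blocking, not by content.
      return { contentType: null, body: null };
    default:
      // Humans, gray bots, and unknowns fall back to the human page.
      return { contentType: "text/html", body: renderHtmlPage() };
  }
}
```

The point of the sketch: the fork happens per page and per visitor class, so the same URL can serve both audiences without either one noticing the other.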
Traditional bot detection asks one question: bot or not?
Switch asks five: Who is this? What are they doing? What should they see? Under what conditions? And what happens if they break the rules?
Where we are today — what's live
The foundation is built. Here's what's working right now.
The detection system
Not binary. Six visitor classes: human, commercial agent, gray bot, browser agent, scraper, and unknown.
Think of it as a bouncer who doesn't just check IDs — they read body language, notice who came in through the window, and remember faces from last week. We use 50+ environment probes, behavioral analysis, fingerprinting, honeypot traps, and protocol detection (including Chrome's WebMCP protocol). The system doesn't just ask what something is — it asks what it's doing: indexing, scraping, automating, browsing, monitoring, or API probing.
Why this matters: the same agent can behave differently across sessions. A shopping agent browsing product pages is welcome. The same agent hammering your API 500 times per minute is not. The right response depends on intent, not just identity. You can see every agent we detect — from GPTBot to ClaudeBot to Googlebot — in our Agents Directory.
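To make the probe idea concrete, here is a toy classifier. Real detection combines 50+ signals with behavioral analysis and learned patterns; the three checks, the score thresholds, and the confidence values below are illustrative assumptions only.

```javascript
// Toy version of a few environment probes. The checks and thresholds are
// illustrative — not Switch's actual heuristics.

function probeEnvironment(env) {
  const signals = [];
  // Selenium, Playwright, and Puppeteer typically set navigator.webdriver.
  if (env.webdriver) signals.push("webdriver-flag");
  // Headless Chrome often reports an empty plugin list on desktop.
  if (env.plugins === 0 && !env.isMobile) signals.push("no-plugins");
  // Some headless builds announce themselves in the user-agent string.
  if (/HeadlessChrome/.test(env.userAgent || "")) signals.push("headless-ua");
  return signals;
}

function classify(env) {
  const signals = probeEnvironment(env);
  if (signals.length >= 2) return { class: "browser_agent", confidence: 0.9, signals };
  if (signals.length === 1) return { class: "unknown", confidence: 0.5, signals };
  return { class: "human", confidence: 0.7, signals };
}
```

Note that a single weak signal lands in "unknown" rather than forcing a binary call — that is the difference between six classes and "bot or not."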
Journey Builder (the workflow engine)
Drag-and-drop builder for defining how each visitor type experiences your site. Triggers based on visitor class, confidence threshold, URL pattern, specific agent name, or rate limits. Actions include block, challenge, popup, redirect, log, and flag for review. Branching logic. A/B testing. We ship 20+ pre-built journey recipes to get you started.
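To show what a journey might look like under the hood, here is a hypothetical definition plus a minimal evaluator. The field names (`trigger`, `steps`, `rateLimit`, and so on) mirror the triggers and actions described above but are not the builder's real schema.

```javascript
// Hypothetical journey definition — field names are illustrative,
// not the Journey Builder's actual schema.
const journey = {
  name: "Protect pricing page",
  trigger: {
    urlPattern: "/pricing*",
    visitorClass: ["scraper", "unknown"],
    minConfidence: 0.8,
  },
  steps: [
    { if: { visitorClass: "scraper" }, action: "block" },
    { if: { rateLimit: { requests: 500, perMinutes: 60 } }, action: "challenge" },
    { else: true, action: "log" }, // unknowns pass through, but get recorded
  ],
};

// Minimal evaluator: return the action of the first matching step.
function evaluate(journey, visitor) {
  for (const step of journey.steps) {
    if (step.else) return step.action;
    if (step.if.visitorClass && step.if.visitorClass === visitor.class) return step.action;
    if (step.if.rateLimit && visitor.requestsLastHour > step.if.rateLimit.requests) return step.action;
  }
  return "allow";
}
```

The branching order matters: known scrapers are blocked outright, heavy unknowns get challenged, and light unknowns are merely logged — three different doors from one rule set.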
This is where "two webs" starts becoming real. You're not choosing block-or-allow. You're building the rules for which door each visitor walks through.
Content Gate
The nuclear option for headless browsers. When a known bad actor is detected during page load, the entire page is replaced before the real content enters the DOM. Nothing in the source. Nothing in the DOM. Nothing to scrape. Popups don't work on headless browsers. This does.
Think of it as the loading dock having a steel shutter that drops before anyone can look inside. This is especially effective against automation frameworks like Puppeteer, Playwright, and Selenium.
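The mechanism can be sketched as a guard that runs before content reaches the DOM. This is a simplified stand-in: the real pipeline uses the full detection system, and `isHeadless` plus the replacement markup below are purely illustrative.

```javascript
// Sketch of the Content Gate idea: if a headless client is detected while
// the page is loading, swap the document before real content enters the DOM.
// isHeadless() is a stand-in for the full detection pipeline.

function isHeadless(env) {
  return env.webdriver === true || /HeadlessChrome/.test(env.userAgent || "");
}

function gateContent(pageHtml, env) {
  if (!isHeadless(env)) return pageHtml;
  // Replace everything: nothing from the original page survives in the DOM,
  // so there is nothing left to scrape.
  return "<html><body><h1>403</h1></body></html>";
}
```

In a browser this swap would target `document.documentElement` during load; the pure-function form here just makes the before/after contrast explicit.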
Dashboard and reporting
Real-time traffic overview. Agent leaderboard. Session explorer with per-session classification timelines. Journey analytics. Agent identification lures.
You can't design two webs if you can't see both audiences. This is the x-ray vision.
Self-learning
Every detection contributes to learned patterns. Manual reclassifications are remembered. No blocklist maintenance. The landscape of who visits and what they want changes every week — the system adapts without you babysitting it.
Installation
Five minutes. ~10KB SDK. Script tag, Google Tag Manager, or npm. No server changes.
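For the script-tag route, installation looks something like this — the snippet URL and data attribute are placeholders, not the real embed code:

```html
<!-- Hypothetical embed snippet: URL and attribute names are placeholders -->
<script async src="https://cdn.example.com/switch.min.js" data-site-id="YOUR_SITE_ID"></script>
```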
What's in motion — actively building and validating
These aren't plans on a whiteboard. Work is underway. Early customers are testing them. Designs are in progress.
LLM-ization: CRO for the agentic web
This is the big one. The bridge from "manage agent traffic" to "convert agent traffic."
CRO personalized the web for humans. LLM-ization personalizes it for agents. Page by page, visitor type by visitor type — deliberately designing what agents see, separate from what humans see. This is what the agentic web demands.
Not treating agents as lesser visitors. Treating them as different visitors who deserve a deliberately designed experience. An agent shopping on behalf of a customer who gets clean structured data will convert better than one fumbling through your human-optimized hero image carousel.
The implications are different for every industry. For SaaS companies, it means AI assistants accurately recommending your product. For e-commerce sites, it means shopping agents successfully comparing your products. For publishers, it means getting cited instead of scraped.
Status: In pilot with early customers. Core infrastructure live. Expanding content format options and per-page configuration.
Agency multi-site reporting
Free or low-cost reporting for SEO and marketing agencies managing multiple client sites. "Show your clients that 34% of their traffic is non-human — and what it's costing them." Data nobody else surfaces clearly.
Status: Validating with agency partners. Scoping multi-tenant dashboard.
WordPress and Webflow integrations
From "add a script tag" to "install a plugin." Native integrations for the two most common site platforms.
Status: On the roadmap. Scoping technical requirements.
GEO-optimized content serving
For sites focused on Generative Engine Optimization — serve AI agents structured content formats optimized for how they parse and represent information. Relates closely to the emerging llms.txt standard. Per-page or site-wide.
Status: Scheduled for validation this quarter with GEO practitioners.
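For reference, the proposed llms.txt standard is a Markdown file at the site root: an H1 title, a blockquote summary, and H2 sections listing agent-friendly resources. The site name and URLs below are placeholder examples, not a real deployment:

```markdown
# Acme Store

> Direct-to-consumer outdoor gear. Structured product and pricing data for agents.

## Products

- [Catalog](https://example.com/products.md): specs, pricing, availability

## Docs

- [API reference](https://example.com/api.md): endpoint specs and rate limits
```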
What's being explored — research and early thinking
These are directions we're investigating. They represent our conviction about where the market is heading. But they're not committed work yet — and we'll say so plainly.
Session recording for agent traffic
The agent equivalent of Hotjar. Replay what AI agents actually see and do on your site. Understand behavior patterns, identify content that agents struggle to parse, debug classification decisions. Right now you can see that agents visit. This would let you see how they experience your site.
Pricing evolution
As Switch moves from reporting to active traffic management, pricing evolves with it. Our guidelines:
- Core reporting stays free or very cheap. Visibility should never be gated. You can't fix what you can't see.
- Advanced capabilities (LLM-ization, Content Gate, custom journeys) will be priced as they mature.
- Beta participants get extended free access and discounted rates when pricing formalizes.
- New products get priced separately. Incremental features may be add-ons.
No surprises. No sticker shock. We'll give you the trajectory before it arrives.
Agent compliance and standards
Protocols like WebMCP are creating a divide between compliant agents (using sanctioned channels) and rogue scrapers. As standards mature, Switch becomes the enforcement layer — rewarding agents that follow the rules, penalizing those that don't. Automatically. This directly relates to what we're seeing with agents like OpenAI Operator and Claude Computer Use — browser agents that don't identify via user-agent strings and can't be managed with robots.txt alone.
Cross-platform intelligence
Aggregate agent traffic patterns across the Switch network. Benchmarking. Early warning. "Your competitor's site saw a 3x spike in GPTBot traffic last month — here's what that means for you."
The long-term thesis (the "3-Phase Cake")
Here's the framework. Three layers. Each one is valuable on its own. Each one makes the next one possible.
Layer 1 — See (Now)
Show site owners the reality of their non-human traffic.
Free reporting. Who's visiting, what they're doing, what it's costing. Most site owners have never seen this data. It's the moment of "wait, what?" that changes how they think about their site. This is the entry point — and it sells itself. Start here — it takes five minutes.
Layer 2 — Control (In motion)
Decide what each visitor type gets.
Block bad actors. Challenge unknowns. Route traffic intelligently. Start making deliberate choices about which agents get access, under what conditions, and to what content. The defensive use case — but also the first step toward designing the agentic web on your site. If you need a primer on blocking specific agents, our how-to-block guides cover every agent in the directory with step-by-step instructions.
Layer 3 — Design (Building toward)
Build two webs on one site.
Full LLM-ization. Different content, different formats, different journeys for different visitor types. A human browsing your site gets one experience. An agent shopping on behalf of a customer gets another — optimized for how agents process, compare, and relay information. Not blocking agents. Not ignoring them. Designing for them.
If Layer 1 is mushy (you can't see who's visiting), the whole cake collapses. If Layer 2 is missing (you see agents but can't act), you're just watching the problem. Layer 3 is where the real value compounds — but only if the first two layers are solid.
Each phase builds on the last. And each phase changes how site owners think about the one after it.
How to talk about this
The language guide
When anyone customer-facing discusses Switch's direction, use these phrases:
- Live: "This is available today. Let me show you." — Confidence. Demo it.
- In pilot: "In pilot with early customers — here's what we're seeing." — Real, but evolving.
- Validating: "Scheduled for validation this quarter." — Committed, not shipped.
- Scoping: "Under active development." — Serious intent, no date.
- Exploring: "Under active research." — Direction, not promise.
Never promise a date. Never demo a mockup as if it's live. Always ground the future in what's real today.
So if a customer asks "Can Switch serve different Markdown formats per agent type?" — the answer isn't "yes" or "no." It's: "The infrastructure for that is in pilot with early customers right now. Here's what the detection and workflow system can do today, and here's where the content customization is heading."
Inviting customers in
Customers should shape this — but we don't lose our own conviction.
- Share this direction openly with customers who ask where we're headed
- Invite power users into validation conversations for in-motion features
- Ask them what they'd build if the two-webs framework were fully in their hands
- Let their input sharpen priorities. But the long-term thesis is ours to own.
What makes this believable
A direction statement without evidence is a marketing deck. Here's what makes this real:
- The detection engine is live and learning. Not a prototype. Real traffic, real classifications, real patterns being learned every day across 45+ agent types.
- The research foundation is published. Switch is built on VOIX (declarative agent-web frameworks), Video-Browser (multi-modal agent evolution), WebMCP (Chrome's agent protocol, shipped February 2026), and ASTRA (intent classification). Explore the papers in our Research Hub.
- The market is moving toward us. Chrome shipped WebMCP. Agents are evolving from text crawlers to full browser agents. Every month, the problem Switch solves gets bigger and more obvious. See what's changed in the AI training crawler landscape.
- Layer 1 is the proof. Every site that installs Switch for free reporting sees data that makes Layer 2 obvious. The product sells its own next step.
Start here
Install Switch in five minutes and see what's really visiting your site. Layer 1 is free. The "wait, what?" moment is immediate. And once you see your non-human traffic for the first time, you'll understand exactly why the next two layers matter.
Last updated: Q1 2026. Reviewed quarterly. Next review: Q2 2026.