How to Block Meta Crawler

Complete guide to blocking Meta Crawler from crawling your website using robots.txt, server configuration, and Switch workflows.

Operated by Meta

Should You Block Meta Crawler?

Caution: Blocking Meta Crawler will break link previews when your URLs are shared on Meta's platforms.

Only block if you have specific reasons. Use Switch to serve appropriate content rather than blocking entirely.

Blocking Methods

1. robots.txt

Effectiveness: High for cooperative crawlers

Add a Disallow rule for Meta Crawler's user-agent string in your robots.txt file. This is the standard, cooperative method that well-behaved crawlers respect.

2. Server-side UA filtering

Effectiveness: High

Configure your web server (nginx, Apache, Cloudflare) to reject requests matching Meta Crawler's user-agent patterns. This blocks at the network level before your application processes the request.
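As a sketch of this approach in nginx, the config below flags requests whose User-Agent matches any of Meta's crawler patterns and rejects them with a 403. The variable name and server details are illustrative, not a drop-in configuration:

```nginx
# In the http context: map Meta crawler user-agents to a flag.
# The ~* prefix makes the regex match case-insensitive, so one
# pattern covers meta-webindexer and Meta-WebIndexer alike.
map $http_user_agent $is_meta_crawler {
    default               0;
    ~*facebookexternalhit 1;
    ~*Facebot             1;
    ~*FacebookBot         1;
    ~*meta-externalagent  1;
    ~*meta-webindexer     1;
}

server {
    listen 80;
    server_name example.com;

    # Reject Meta crawler requests before they reach the application.
    if ($is_meta_crawler) {
        return 403;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Using `map` keeps the matching logic in one place and avoids repeating `if` blocks per location; the equivalent in Apache would use `SetEnvIfNoCase User-Agent` with a `Deny` rule, and Cloudflare offers the same check via a WAF custom rule on the `http.user_agent` field.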

3. Switch Journey Workflows

Effectiveness: Highest — granular, real-time control

Create a custom journey in Switch that detects Meta Crawler and routes it to a block action, challenge, redirect, or modified content — without touching your server configuration.

robots.txt — Block Meta Crawler

Add the following to your robots.txt file (at the root of your domain) to block Meta Crawler:

User-agent: facebookexternalhit
Disallow: /

User-agent: Facebot
Disallow: /

User-agent: FacebookBot
Disallow: /

User-agent: meta-externalagent
Disallow: /

User-agent: meta-webindexer
Disallow: /

User-agent: Meta-WebIndexer
Disallow: /

robots.txt — Allow with Restrictions

Alternatively, allow Meta Crawler on most pages while blocking specific directories:

User-agent: facebookexternalhit
Disallow: /private/
Allow: /

User-agent: Facebot
Disallow: /private/
Allow: /

User-agent: FacebookBot
Disallow: /private/
Allow: /

User-agent: meta-externalagent
Disallow: /private/
Allow: /

User-agent: meta-webindexer
Disallow: /private/
Allow: /

User-agent: Meta-WebIndexer
Disallow: /private/
Allow: /

Meta Crawler User-Agent Strings

Use these patterns to identify Meta Crawler in your server logs or firewall rules:

facebookexternalhit
Facebot
FacebookBot
meta-externalagent
meta-webindexer
Meta-WebIndexer
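To spot these crawlers in your access logs, you can test each request's User-Agent against the patterns above. A minimal sketch in Python (the function name and the sample user-agent strings are illustrative):

```python
import re

# User-agent substrings used by Meta's crawlers. A case-insensitive
# match covers both meta-webindexer and Meta-WebIndexer.
META_UA_PATTERNS = [
    "facebookexternalhit",
    "Facebot",
    "FacebookBot",
    "meta-externalagent",
    "meta-webindexer",
]

META_UA_RE = re.compile(
    "|".join(re.escape(p) for p in META_UA_PATTERNS),
    re.IGNORECASE,
)

def is_meta_crawler(user_agent: str) -> bool:
    """Return True if the user-agent string matches a Meta crawler pattern."""
    return bool(META_UA_RE.search(user_agent))

# Example checks against typical user-agent strings.
print(is_meta_crawler("facebookexternalhit/1.1"))              # True
print(is_meta_crawler("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # False
```

The same substring patterns work in firewall rules or log-analysis queries; match case-insensitively so you don't need separate entries for each casing.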

Frequently Asked Questions

Does blocking Meta Crawler affect my Google search rankings?

No. Meta's crawlers are separate from Google's, so blocking them does not affect your Google search rankings. Google Search visibility is impacted only if you block Google's own crawlers, such as Googlebot.

Does Meta Crawler respect robots.txt?

Yes, Meta states that its crawlers honor robots.txt directives, so adding a Disallow rule for each user-agent will prevent them from crawling blocked paths. Note that facebookexternalhit fetches URLs on demand when a user shares a link, so link-preview fetches may still occur.

Can I allow Meta Crawler on some pages but not others?

Yes. Use robots.txt to disallow specific directories, or use Switch journey workflows for granular page-level control with conditional logic.

Go beyond robots.txt

Switch detects Meta Crawler in real-time and lets you build custom journey workflows — block, challenge, redirect, or serve modified content. No server changes required.

Get Started Free