How to Block AI/LLM Crawlers on Your Website in 2025: The Complete Guide

Denis K

Author

Over the past two years, large language models (LLMs) like ChatGPT, Claude, and Google Gemini have turned into constant readers of the web. Their bots scan billions of pages to feed AI training and power assistants.

For a single site, this can mean dozens — sometimes hundreds — of hits each day, not from people but from machines. Over time, this creates visible strain: slower loading, wasted bandwidth, and higher server load.

Which leads to the key question: who is crawling my site, and how can I block AI and LLM crawlers before they cause damage?


When was the last time you checked your server logs? Statistics show that nearly 50% of traffic comes from LLM crawling and other bots; automated access now rivals real user visits.
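If you want the same estimate for your own site, a couple of shell one-liners against the access log will do. A minimal sketch, assuming an Nginx combined-format log at /var/log/nginx/access.log (adjust the path and field position for your setup):

Bash
# Share of requests from self-identified bots; in a combined-format log line,
# the user agent is field 6 when splitting on double quotes
total=$(wc -l < /var/log/nginx/access.log)
bots=$(awk -F'"' '{print $6}' /var/log/nginx/access.log | grep -ciE "bot|crawler|spider|scraper")
echo "Bot requests: $bots of $total"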

Major Networks Behind LLM Crawling

Currently, most of the large AI companies operate their own dedicated crawlers. The most active include:

  • OpenAI – GPTBot: collects text data for models like ChatGPT.
  • Anthropic – ClaudeBot: similar purpose, gathering content for the Claude assistant.
  • Google – Google-Extended (Gemini data crawler): used to improve Gemini (formerly Bard).
  • Perplexity – PerplexityBot: powers the Perplexity AI search engine.
  • Common Crawl: not an LLM vendor itself, but its open crawl datasets are widely used by AI labs.
  • Others: Meta, Apple, Mistral, and smaller research labs increasingly run their own.

These bots present themselves with different user agents, and sometimes rotate IPs through cloud providers like AWS, GCP, or Azure.
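When a user agent looks legitimate but the traffic pattern doesn't, check who actually owns the source address. A quick sketch (203.0.113.45 is a documentation placeholder; substitute an IP from your own logs):

Bash
# Reverse-resolve the IP, then inspect its WHOIS ownership records
host 203.0.113.45
whois 203.0.113.45 | grep -iE "orgname|org-name|netname"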

LLM Crawlers and Their User Agents You’ll See in Logs

Understanding how these crawlers enter your site and how they identify themselves is essential. When LLM crawlers visit your website, they announce themselves through their LLM User-Agent strings. With that technical insight, you can build a smarter blocking strategy.
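For instance, grepping your access log for the user-agent names covered below shows which AI crawlers already visit you, and from which IPs. A sketch assuming the default Nginx log path:

Bash
# Per-IP hit counts for a few well-known AI crawler user agents
grep -iE "GPTBot|ClaudeBot|PerplexityBot|CCBot|Google-Extended|Bytespider" \
  /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head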

OpenAI Crawlers

GPTBot is the official crawler used by OpenAI to fetch and index publicly available web pages for use in training or providing references in ChatGPT and related systems. It tends to respect robots.txt directives.

Bash
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; OAI-SearchBot/1.0; +https://openai.com/searchbot
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; ChatGPT-User/1.0; +https://openai.com/bot
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; GPTBot/1.1; +https://openai.com/gptbot
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; GPTBot/1.2; +https://openai.com/gptbot)

DeepSeek Crawlers

DeepSeek is an AI assistant that fetches web content and indexes pages for its search and knowledge tasks. Public documentation describes the specific user agents it presents when accessing content.

Bash
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Deepseek/1.0; +https://www.deepseek.com)
Mozilla/5.0 (compatible; DeepseekBot/1.0; +https://www.deepseek.com/bot)

CCBot (Common Crawl)

CCBot is the web crawler operated by Common Crawl, a non-profit organization that produces open crawl data sets of the web. It is part of efforts to archive large portions of the public web, making the data freely available for research, machine learning, and AI training.

Bash
Mozilla/5.0 (compatible; CCBot/2.0; +http://commoncrawl.org/faq/)

Grok (xAI by Elon Musk)

Grok is a conversational AI from xAI, the company linked to X (formerly Twitter), and it also pulls external web content.

Bash
Mozilla/5.0 (compatible; xAI-Grok/1.0; +https://x.ai/)

Anthropic (Claude) Crawlers

Anthropic is an AI company founded by ex-OpenAI researchers and headquartered in San Francisco. Claude is an LLM (like ChatGPT) trained on large amounts of web data. Anthropic uses its own crawler to fetch pages for model training and fine-tuning.

Bash
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)

Perplexity

Perplexity AI is a U.S. startup founded in 2022, backed by investors such as Nvidia and Jeff Bezos. It crawls the web (like a search engine) and uses LLMs to generate direct answers with sources.

Bash
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; PerplexityBot/1.0; +https://www.perplexity.ai/bot

Google AI Crawlers

Google now runs several AI-related crawlers in addition to its classic Googlebot. These are the most important ones in 2025 that relate specifically to AI and training.

Bash
GoogleOther-Image/1.0
GoogleOther-Video/1.0
Google-Extended/1.0

FYI: Google-Extended is the main AI crawler flag: it tells Google whether your site can be used for AI model training (e.g., Gemini).
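In practice this means you can opt out of AI training while leaving classic search indexing untouched; a minimal robots.txt sketch:

Bash
# Opt out of Gemini/AI training; regular Googlebot crawling is unaffected
User-agent: Google-Extended
Disallow: /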

Meta AI Crawlers

These crawlers are owned by Meta Platforms, Inc. (formerly Facebook). Their purpose is primarily to fetch web content for social features (link previews, metadata) and increasingly to support Meta’s AI / language model training and retrieval functions.

Bash
Mozilla/5.0 (compatible; meta-externalagent/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler))
Mozilla/5.0 (compatible; FacebookBot/1.0; +http://www.facebook.com/bot.html)

MistralBot / Mistral AI

MistralAI-User fetches pages on demand, when a user's query needs them, rather than continuously crawling all sites. Mistral also publishes a list of IP addresses used by MistralAI-User so that site owners can block or allow them explicitly.

Bash
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; MistralAI-User/1.0; +https://docs.mistral.ai/robots)

Stability AI Scraper

Stability AI is best known for Stable Diffusion, but it also collects text and web data for training LLMs.

Bash
Mozilla/5.0 (compatible; StabilityAI/1.0; +https://stability.ai/)

YouBot (You.com)

YouBot is the crawler behind You.com's AI and search features. It indexes web content so that You.com can use those pages in its answers.

Bash
Mozilla/5.0 (compatible; YouBot (+http://www.you.com))

Other Notable LLM Crawlers

Bash
Mozilla/5.0 (compatible; SerpApiBot/1.0; +https://serpapi.com/bot)
Mozilla/5.0 (compatible; KagiBot/1.0; +https://kagi.com/)
Mozilla/5.0 (compatible; StabilityAI/1.0; +https://stability.ai/)
Mozilla/5.0 (compatible; DuckAssistBot/1.0; +http://www.duckduckgo.com/bot.html)
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 (.NET CLR 3.5.30729; Diffbot/0.1; +http://www.diffbot.com)
Diffbot/1.0 (+http://www.diffbot.com/bot)
Diffbot/5.0 (diffbot.com)
Mozilla/5.0 (compatible; omgili/1.0; +http://www.omgili.com/bot.html)
Mozilla/5.0 (compatible; cohere-ai/1.0; +http://www.cohere.ai/bot.html)
Mozilla/5.0 pineapple (compatible; Onespot-ScraperBot/1.0; +https://www.onespot.com/identifying-traffic.html)
ImagesiftBot/1.0 (+http://imagesift.com/bot)
DataForSeoBot/1.0 (+https://dataforseo.com/dataforseo-bot)

FYI: Some of these change names or rotate between “stealth” user agents. That’s why regularly analyzing your access logs is critical.
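A simple habit that helps: periodically surface the most frequent bot-like user agents so new names stand out. A sketch, again assuming an Nginx combined-format log:

Bash
# Most frequent bot-like user agents in the access log
awk -F'"' '{print $6}' /var/log/nginx/access.log \
  | grep -iE "bot|crawler|spider|scraper" \
  | sort | uniq -c | sort -rn | head -20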

Should You Block All AI/LLM Crawlers?

The question doesn’t have a simple yes or no. AI crawlers are a new reality on the web, and each webmaster has to weigh the trade-offs.

Arguments For Blocking

  • Content protection – prevents your text, images, and data from being harvested for AI training without consent
  • Bandwidth conservation – reduces server strain and keeps resources available for real users
  • Competitive advantage – stops proprietary information from flowing into rival AI models
  • Copyright control – ensures you decide how and where your content is reused

Arguments Against Blocking

  • Traffic source – AI platforms (ChatGPT, Perplexity, Gemini) can send new visitors your way
  • Visibility in AI search – blocking could remove you from future AI-driven discovery channels
  • SEO benefits – some AI crawlers may eventually influence ranking and indexing alongside Googlebot
  • Future-proofing – if AI assistants become the main gateway to information, blocking now could leave your site invisible later

Block AI Crawlers: LLM Crawler Best Practices & Priorities

Instead of blocking everything or allowing all bots, the most practical strategy today is selective blocking. Treat each crawler by its purpose and impact on your site; a short robots.txt sketch follows the list below.

  • Block training crawlers – shut down access for GPTBot, CCBot, ClaudeBot, and other bots whose main purpose is to harvest data for AI training.
  • Allow search-facing crawlers selectively – keep bots like PerplexityBot or OAI-SearchBot if you want visibility in AI search results and referral traffic.
  • Protect sensitive content – lock down premium areas, members-only sections, or proprietary material, while leaving marketing or public pages accessible.
  • Use analytics & adjust – track bot activity, bandwidth usage, and incoming referrals to see which bots help and which only drain resources.
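Put together, the first two priorities might look like the following robots.txt sketch; the bot names are examples from this article, and which ones you allow is your call:

Bash
# Deny training crawlers
User-agent: GPTBot
User-agent: CCBot
User-agent: ClaudeBot
Disallow: /

# Allow search-facing crawlers (an empty Disallow permits everything)
User-agent: OAI-SearchBot
User-agent: PerplexityBot
Disallow: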

Methods to Block AI/LLM Crawlers

robots.txt Method

The simplest way to block LLM crawlers is by using the robots.txt file. This small text file, placed in the root of your site, tells crawlers which parts of your site they are allowed to access and which are off-limits. Most legitimate AI crawlers — like GPTBot, ClaudeBot, PerplexityBot, and CCBot — officially state that they respect robots.txt.

Bash
# Block OpenAI crawlers
User-agent: GPTBot
User-agent: OAI-SearchBot
User-agent: ChatGPT-User
Disallow: /

# Anthropic (Claude)
User-agent: ClaudeBot
User-agent: Claude-Web
User-agent: Claude-User
User-agent: Claude-SearchBot
User-agent: anthropic-ai
Disallow: /

# Perplexity
User-agent: PerplexityBot
User-agent: Perplexity-User
Disallow: /

# Common Crawl
User-agent: CCBot
Disallow: /

# Google AI
User-agent: Google-Extended
User-agent: Google-CloudVertexBot
User-agent: Google-Research
Disallow: /

# Meta (Facebook / Instagram)
User-agent: Meta-ExternalAgent
User-agent: Meta-ExternalFetcher
User-agent: FacebookBot
User-agent: facebookexternalhit
Disallow: /

# Apple
User-agent: Applebot-Extended
Disallow: /

# Other AI Crawlers
User-agent: YouBot
User-agent: Diffbot
User-agent: ImagesiftBot
User-agent: Omgilibot
User-agent: Omgili
User-agent: DataForSeoBot
User-agent: cohere-ai
User-agent: cohere-training-data-crawler
User-agent: AlephAlphaBot
User-agent: ArxivBot
User-agent: DuckAssistBot
User-agent: NeevaAI
User-agent: AndiBot
User-agent: MistralAI-User
User-agent: StabilityAI
Disallow: /

# AI / Research Bots
User-agent: Bytespider
User-agent: TikTokSpider
User-agent: Amazonbot
User-agent: SemrushBot-OCOB
User-agent: Petalbot
User-agent: VelenPublicWebCrawler
User-agent: TurnitinBot
User-agent: Timpibot
User-agent: ICC-Crawler
User-agent: AI2Bot
User-agent: AI2Bot-Dolma
User-agent: AwarioBot
User-agent: AwarioSmartBot
User-agent: AwarioRssBot
User-agent: PanguBot
User-agent: KangarooBot
User-agent: Sentibot
User-agent: img2dataset
User-agent: Meltwater
User-agent: Seekr
User-agent: peer39_crawler
User-agent: Scrapy
User-agent: Cotoyogi
User-agent: aiHitBot
User-agent: Factset_spyderbot
User-agent: FirecrawlAgent
Disallow: /

Grouping several User-agent lines above a single Disallow rule is equivalent to repeating the rule for each bot individually.

Important: These directives will block indexing attempts from nearly all AI-related bots — and potentially from other crawlers that read this file. While this provides strong protection against unauthorized use of your content for AI training, it may also reduce visibility in certain search or discovery platforms.

In short: apply these rules selectively and at your own risk. Balance protection with discoverability.
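Once the file is published, confirm it is actually served from the site root (example.com is a placeholder for your domain):

Bash
curl -s https://example.com/robots.txt | head -n 20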

Server-Level User Agent Blocking

Apache .htaccess:

Another effective way to stop LLM crawlers is by blocking them directly at the server level. In Apache, you can use your .htaccess file to detect requests by their User-Agent string and deny access instantly — before the page even loads.

Apache
# Block LLM crawlers by User-Agent
RewriteEngine On

# Block OpenAI
RewriteCond %{HTTP_USER_AGENT} "GPTBot|OAI-SearchBot|ChatGPT-User" [NC]
RewriteRule .* - [F,L]

# Block Anthropic
RewriteCond %{HTTP_USER_AGENT} "ClaudeBot|Claude-Web|anthropic-ai" [NC]
RewriteRule .* - [F,L]

# Block Perplexity
RewriteCond %{HTTP_USER_AGENT} "PerplexityBot" [NC]
RewriteRule .* - [F,L]

# Block Common Crawl
RewriteCond %{HTTP_USER_AGENT} "CCBot" [NC]
RewriteRule .* - [F,L]

# Block Google Extended
RewriteCond %{HTTP_USER_AGENT} "Google-Extended" [NC]
RewriteRule .* - [F,L]

# Block Meta AI
RewriteCond %{HTTP_USER_AGENT} "Meta-ExternalAgent|FacebookBot" [NC]
RewriteRule .* - [F,L]

Nginx Configuration

In Nginx, you can block unwanted LLM bots directly in your server configuration. This method inspects the User-Agent string and rejects matching requests before they hit your application.

Nginx
# Block LLM crawlers
if ($http_user_agent ~* "GPTBot|OAI-SearchBot|ChatGPT-User|ClaudeBot|Claude-Web|anthropic-ai|PerplexityBot|CCBot|Google-Extended|Meta-ExternalAgent|FacebookBot") {
    return 403;
}

This rule denies access with a 403 Forbidden response whenever the User-Agent contains the listed bot names. Lightweight, efficient, and stops requests early.
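You can verify either server-level block by spoofing a crawler user agent with curl; a 403 means the rule fired (example.com is a placeholder):

Bash
# Expect 403 for the blocked user agent, 200 for a normal request
curl -s -o /dev/null -w "%{http_code}\n" -A "GPTBot/1.1" https://example.com/
curl -s -o /dev/null -w "%{http_code}\n" https://example.com/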

Nginx Rate Limiting

Not every crawler needs to be fully blocked. Sometimes it’s enough to slow them down so they don’t overload your server. Nginx has a built-in rate limiting module (limit_req) that controls how many requests per second a client can make.

Nginx
# Map bot-like user agents to a rate-limit key; an empty key is never limited
map $http_user_agent $bot_ua {
    default "";
    "~*(bot|crawler|spider|scraper)" $http_user_agent;
}

# Define rate-limiting zones (http context)
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req_zone $bot_ua zone=bots:10m rate=1r/s;

server {
    # General per-IP limit for all clients
    limit_req zone=general burst=20 nodelay;

    # Stricter limit that only counts suspected bots ($bot_ua is empty otherwise)
    limit_req zone=bots burst=5 nodelay;
}

Note: limit_req cannot be used inside an if block, so the bot check is expressed as a map. Nginx does not count requests whose zone key is empty, which keeps normal visitors out of the stricter bots zone.
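To see the limiter in action, fire a quick burst of requests and count the response codes; throttled requests return 503 by default (example.com is a placeholder):

Bash
for i in $(seq 1 30); do
  curl -s -o /dev/null -w "%{http_code}\n" https://example.com/
done | sort | uniq -c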

Fail2ban Configuration

Fail2ban is a security tool that scans your server logs for suspicious patterns and then automatically bans IPs that break your rules. While it’s most often used against brute-force attacks, it can also be adapted to control aggressive AI crawlers.

INI
# Jail definition (e.g. in /etc/fail2ban/jail.local)
[llm-crawlers]
enabled = true
port = http,https
filter = llm-crawlers
logpath = /var/log/nginx/access.log
maxretry = 3
bantime = 86400
findtime = 300
action = iptables-multiport[name=llm-crawlers, port="http,https", protocol=tcp]

# Filter file: /etc/fail2ban/filter.d/llm-crawlers.conf
[Definition]
failregex = ^<HOST>.*"(GPTBot|ClaudeBot|PerplexityBot|CCBot|OAI-SearchBot)".*$
ignoreregex =
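Before enabling the jail, it's worth dry-running the filter against your real log to confirm the regex matches what you expect:

Bash
# Show which access-log lines the llm-crawlers filter would catch
fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/llm-crawlers.conf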

ProTip: The strongest strategy is a layered defense: start with robots.txt, enforce rules with server-level blocking, add rate limiting to prevent overload, and let Fail2ban catch whatever slips through.

Conclusion

Not every crawler is an enemy. Some bots genuinely help: they power search engines, generate previews when your links are shared, or even bring you new visitors through AI search platforms. Blocking them blindly could mean losing out on valuable exposure.

And if you do decide to restrict LLM crawlers, here’s the final set of recommendations to guide you.

  • robots.txt – Your first line of defense; it clearly states your rules. Compliant AI crawlers (OpenAI, Anthropic, Perplexity, Google-Extended, etc.) will typically honor it.
  • User-agent blocking – Filters out most bots that identify themselves honestly. Works well against crawlers like GPTBot, ClaudeBot, PerplexityBot, CCBot.
  • Rate limiting – Protects server resources by slowing down crawlers that send too many requests in a short time.
  • Monitoring & analytics – Tracks what gets through, which bots bring referrals, and which cause unnecessary load.
  • Advanced methods (JavaScript / CAPTCHAs / Fail2ban) – Catch stealthy or persistent crawlers that try to evade detection.

What Each Method Catches

  • robots.txt → Compliant crawlers (≈40–60% of AI bots).
  • User-agent blocking → Honest but persistent crawlers (≈30–40%).
  • Rate limiting → Aggressive crawlers hitting too often (≈10–15%).
  • Advanced techniques → Sophisticated or stealth crawlers that spoof user-agents (≈5–10%).

Think of it as building a layered defense system: each method catches a different slice of unwanted traffic, and together they give you the best balance between protection and discoverability.

Frequently Asked Questions

How can I block AI crawlers from my website?
You can block AI crawlers by combining methods: use robots.txt to disallow them, block their user agents in your server (Apache/Nginx), apply rate limiting, and ban persistent offenders with tools like Fail2ban.

What is the simplest way to block AI bots?
The simplest method is adding rules to your robots.txt file. For example:

User-agent: GPTBot
Disallow: /

Do all AI crawlers respect robots.txt?
No. Major crawlers like GPTBot, ClaudeBot, PerplexityBot, and Google-Extended claim to follow robots.txt. But many smaller or less transparent bots ignore it.

Is it legal to block AI crawlers?
Yes. Website owners control access to their servers. Blocking bots is legal and common practice. Just remember that blocking some crawlers (like search engines) can affect your visibility.

Can I stop AI crawlers across the whole web?
You can’t stop all crawlers globally, but you can block them on your site. Use robots.txt, block their user agents at the server level, or filter their IP ranges. Persistent crawlers may require advanced methods like Fail2ban.

How do I block Googlebot?
Add this to your robots.txt:

User-agent: Googlebot
Disallow: /

Does blocking AI crawlers hurt my SEO?
Blocking LLM crawlers (like GPTBot, CCBot, ClaudeBot) does not affect your Google SEO rankings. But if you block search crawlers (like Googlebot or Bingbot), your site will disappear from search results.

Are some crawlers worth keeping?
Yes. Search engine crawlers like Googlebot, Bingbot, DuckDuckBot, and social preview bots like facebookexternalhit or Twitterbot help users discover and share your content. Blocking them will reduce visibility.

How often should I update my blocklist?
At least every few months. New AI crawlers appear constantly (DeepSeek, Grok, Manus, etc.). Review your server logs, track new user agents, and update your blocklist to stay ahead.

