IndexFlow
Free Robots.txt Generator

Robots.txt Visual Builder

Build a production-ready robots.txt file in seconds — with one-click AI bot blocking and live preview.

1. Default rules

2. Disallowed paths

3. Allowed exceptions

4. Sitemap URL

5. Crawl delay (optional, seconds)

6. Per-bot rules

Generated robots.txt

User-agent: *
Disallow: /admin/
Disallow: /private/

Sitemap: https://example.com/sitemap.xml

Upload as robots.txt at your site root.
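For reference, here's a sketch of the kind of file all six steps can produce together, with per-bot rules, an allow exception, and a crawl delay added. The domain and the /admin/public-docs/ path are placeholders:

User-agent: *
Disallow: /admin/
Disallow: /private/
Allow: /admin/public-docs/
Crawl-delay: 10

User-agent: GPTBot
Disallow: /

Sitemap: https://example.com/sitemap.xml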

Visual builder · AI bot presets · 100% private

What You Get

Visual Builder

Click-to-add disallow paths. No syntax to memorize.

Allow / Block / Custom

One-click presets for common scenarios, plus full custom mode.

Block AI Crawlers

One-click blocks for GPTBot, ClaudeBot, Google-Extended, PerplexityBot, CCBot.

Sitemap Declaration

Add your sitemap URL so crawlers find every page.

Crawl Delay

Throttle aggressive bots with an optional Crawl-delay directive.

Live Preview & Download

See your robots.txt update as you type. Copy or download instantly.

Why Robots.txt Matters for SEO

robots.txt is the first file every search engine reads when it visits your site. A single misplaced line can de-list your entire domain. We've seen sites accidentally ship Disallow: / on launch and lose months of organic traffic before anyone noticed.
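The failure fits in two lines. This file tells every compliant crawler to stay away from the entire site:

User-agent: *
Disallow: /

It's a common staging-environment setting; the danger is shipping it to production unchanged.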

Beyond crawl control, robots.txt is also the official place to declare your sitemap. A Sitemap: line gives crawlers direct access to your URL list, including engines like Bing, DuckDuckGo, and Yandex that never see your Google Search Console submissions.

In 2026, the new battleground is AI crawlers. GPTBot, ClaudeBot, PerplexityBot, and Google-Extended all respect robots.txt directives. Your decision to allow or block them shapes whether your content trains AI models and appears in AI-powered search.

After deploying your robots.txt, verify Google can still crawl your important pages with IndexFlow's Bulk Index Checker; broken robots rules are among the most common causes of sudden de-indexing.
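If you want to script that check, Python's standard-library robots.txt parser can replay your rules against a list of must-crawl URLs. A minimal sketch, assuming your file is live and using placeholder URLs:

from urllib.robotparser import RobotFileParser

# Fetch and parse the deployed robots.txt (example.com is a placeholder)
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Placeholder list of URLs that must stay crawlable
important_urls = [
    "https://example.com/",
    "https://example.com/pricing",
    "https://example.com/blog/launch-post",
]

for url in important_urls:
    # can_fetch() applies the parsed rules for the given user agent
    status = "OK" if rp.can_fetch("Googlebot", url) else "BLOCKED"
    print(status, url)

# site_maps() returns declared Sitemap URLs, or None if absent (Python 3.8+)
print("Sitemaps:", rp.site_maps())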

Frequently Asked Questions

Where do I put the robots.txt file?

robots.txt must be at your site root: https://yourdomain.com/robots.txt. Subdirectories don't work — Google only checks the root URL. If your site uses subdomains (blog.example.com), each subdomain needs its own robots.txt at its root.

Should I block GPTBot, ClaudeBot, and other AI crawlers?

It depends on your strategy. Blocking AI bots keeps your content out of future training data, but it can also keep you out of AI-powered search results (ChatGPT browsing, Perplexity, and similar). For most content sites, allowing AI crawlers brings referral traffic and is worth it; block them only if you publish proprietary or paid content.
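If you do decide to block, each bot gets its own group. A file that blocks all five of the presets named above looks like this:

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: CCBot
Disallow: /

Your regular Googlebot rules are unaffected: Google-Extended only controls AI training use, not search indexing.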

Does Disallow remove pages from Google?

No — Disallow only blocks crawling, not indexing. Pages can still appear in Google search results without a snippet if other sites link to them. To remove a page from the index entirely, use a noindex meta tag (and let Google crawl it once to see the tag) or use the Search Console removal tool.
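The tag itself is a single line in the page's <head>:

<meta name="robots" content="noindex">

For non-HTML files like PDFs, the same signal can be sent as an X-Robots-Tag: noindex HTTP response header. Either way, the page must stay crawlable so Google can actually see the directive.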

What is Crawl-delay and should I use it?

Crawl-delay tells bots to wait N seconds between requests. Google ignores the directive and manages its crawl rate automatically, while Bing, Yandex, and most other crawlers respect it. It's useful only when aggressive crawlers are overloading your server.
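The directive lives inside a user-agent group. For example, asking Bingbot to wait ten seconds between requests:

User-agent: Bingbot
Crawl-delay: 10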

Why include the Sitemap directive?

Sitemap: https://example.com/sitemap.xml in robots.txt is read by every major crawler (Google, Bing, DuckDuckGo, etc.) without requiring separate submission. It's the single highest-leverage line in robots.txt for SEO — make sure it's included even if you also submit through Search Console.
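The URL must be absolute, and multiple Sitemap lines are allowed, which is useful if you split large sites into separate sitemaps (URLs below are placeholders):

Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/news-sitemap.xml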

Make Sure Google Indexes Your Pages

A perfect robots.txt is only step 1. Verify every important URL is actually in Google's index.

Free forever. No credit card required.