Is a noindex tag, X-Robots-Tag header, or robots.txt rule preventing your page from appearing in Google?
Detect all indexing blockers in seconds. Get a clear indexability score with actionable fixes.
Paste any page URL. We fetch the page and analyze its HTML, headers, and robots.txt.
Noindex meta tags, X-Robots-Tag headers, robots.txt rules, and canonical tag issues.
See a 0-100 indexability score with pass/fail for each check and detailed issue explanations.
Detects <meta name="robots" content="noindex"> and <meta name="googlebot" content="noindex"> tags in your HTML
Checks HTTP response headers for X-Robots-Tag: noindex, which blocks indexing at the server level
Fetches your robots.txt and checks if Googlebot is blocked from crawling the URL path
Detects missing or mismatched canonical tags that can confuse Google about which URL to index
Get a 0-100 score showing how indexable your page is, with clear pass/fail for each check
Identifies critical, warning, and informational issues with detailed explanations and fixes
A single noindex tag can make an entire page invisible to Google. Developers often add noindex during staging and forget to remove it before launch. CMS plugins can accidentally set it. Server configurations may add X-Robots-Tag headers you never intended.
The tricky part: your page looks perfectly normal to visitors, but Google will never show it in search results. You could have amazing content, strong backlinks, and perfect on-page SEO — but if noindex is set, none of it matters.
Canonical tag mismatches are equally dangerous. If your canonical points to a different URL, Google treats your page as a duplicate and indexes the canonical URL instead. This is especially common with HTTP/HTTPS mismatches, trailing slash inconsistencies, and CMS pagination.
Once you have confirmed your page is free from indexing blockers, use IndexFlow to check if Google has actually indexed it and submit it for faster crawling.
A noindex tag is an HTML meta tag (<meta name="robots" content="noindex">) that tells search engines not to include a page in their search results. When Google encounters this tag, it will remove the page from search results or prevent it from being added. This is different from robots.txt, which blocks crawling — noindex blocks indexing specifically.
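As a rough illustration of how such a detection can work, here is a minimal stdlib-only Python sketch (the class and function names are our own, not part of any tool's API). It scans HTML for noindex directives in both robots and googlebot meta tags:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from <meta name="robots"> and <meta name="googlebot"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        if (a.get("name") or "").lower() in ("robots", "googlebot"):
            # Directives are comma-separated, e.g. "noindex, nofollow"
            content = a.get("content") or ""
            self.directives += [d.strip().lower() for d in content.split(",")]

def has_noindex(html: str) -> bool:
    """Return True if any robots/googlebot meta tag carries a noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives
```

A real checker would also handle edge cases such as multiple meta tags and the none directive (shorthand for noindex, nofollow), but the core idea is the same.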
X-Robots-Tag is an HTTP response header that provides the same directives as meta robots tags, but at the server level. It is useful for non-HTML files (PDFs, images) or when you want to set indexing rules without modifying HTML. Example: X-Robots-Tag: noindex. It can be set in your web server configuration (Apache, Nginx) or CDN settings. When both a header and a meta robots tag are present, Google follows the most restrictive directive, so a noindex in either place blocks indexing.
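A header check can be sketched in a few lines of Python (function names are illustrative, not any tool's API; note that a server may send several X-Robots-Tag headers, so each value is inspected):

```python
import urllib.request

def headers_block_indexing(header_values) -> bool:
    """True if any X-Robots-Tag value contains a noindex directive."""
    return any("noindex" in v.lower() for v in header_values)

def fetch_x_robots_tags(url: str):
    """Fetch a URL with a HEAD request and return all X-Robots-Tag header values."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "indexability-check"})
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get_all("X-Robots-Tag") or []
```

Usage would be headers_block_indexing(fetch_x_robots_tags("https://example.com/")); splitting fetching from evaluation keeps the decision logic easy to test without a network.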
To remove a noindex tag: 1) Check your HTML source for <meta name="robots" content="noindex"> and remove it (or change it to "index, follow"; simply removing it is enough, since indexing is the default). 2) Check your CMS settings — WordPress SEO plugins (Yoast, Rank Math) have per-page indexing toggles. 3) Check your server configuration for X-Robots-Tag headers in .htaccess, nginx.conf, or your CDN settings. 4) After removing it, request re-indexing in Google Search Console. Pages typically get re-indexed within a few days.
No — robots.txt blocks crawling, not indexing. If you use Disallow: /page in robots.txt, Google cannot crawl the page but may still index the URL if it discovers it through links. The page would appear in search results with a "No information is available for this page" message. To block indexing, use the noindex meta tag or X-Robots-Tag header. A common mistake is using robots.txt to block pages you want de-indexed — this actually prevents Google from seeing the noindex tag.
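The crawl-vs-index distinction can be checked programmatically. Python's standard library ships a robots.txt parser; this small sketch (the wrapper function name is ours) tests whether a given robots.txt would block Googlebot from crawling a URL:

```python
from urllib.robotparser import RobotFileParser

def googlebot_can_crawl(robots_txt: str, url: str) -> bool:
    """Check whether Googlebot may crawl `url` under the given robots.txt rules."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())  # parse rules from text instead of fetching
    return rp.can_fetch("Googlebot", url)
```

Remember that a False result here only means Googlebot cannot crawl the page; as explained above, the URL may still end up indexed if other sites link to it.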
Canonical tag issues occur when the canonical URL differs from the page URL. This signals to Google that another URL is the preferred version, so Google typically consolidates ranking signals there and indexes the canonical URL instead of the current page. Common causes: 1) CMS auto-generating wrong canonicals 2) HTTP vs HTTPS or www vs non-www mismatches 3) Trailing slash inconsistency 4) Query parameters being stripped. Always ensure your canonical tag matches the exact URL you want indexed.
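A canonical mismatch check amounts to extracting the canonical href and comparing it, component by component, against the page URL. Here is a minimal stdlib-only Python sketch (class and function names are our own); it deliberately compares scheme, host, path, and query exactly, so HTTP/HTTPS and trailing-slash differences are flagged:

```python
from html.parser import HTMLParser
from urllib.parse import urlsplit

class CanonicalParser(HTMLParser):
    """Grabs the href of the first <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "canonical" \
                and self.canonical is None:
            self.canonical = a.get("href")

def canonical_mismatch(html: str, page_url: str) -> bool:
    """True if the page declares a canonical URL that differs from its own URL.
    A missing canonical tag is not treated as a mismatch here."""
    p = CanonicalParser()
    p.feed(html)
    if p.canonical is None:
        return False
    def parts(u):
        s = urlsplit(u)
        return (s.scheme.lower(), s.netloc.lower(), s.path, s.query)
    return parts(p.canonical) != parts(page_url)
```

A production checker might also resolve relative canonical hrefs against the page URL before comparing; that step is omitted here for brevity.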
The most common reasons are: 1) Noindex meta tag left from development 2) X-Robots-Tag header set in server config 3) Robots.txt blocking the URL 4) Canonical pointing to a different URL 5) Low-quality or duplicate content 6) The page is too new and has not been crawled yet 7) The page has no internal links pointing to it. Use this tool to check for technical blockers first, then ensure your content is high-quality and linked from other pages.