
Discovered — Currently Not Indexed: 9 Proven Fixes That Actually Work

14 min read
Updated March 28, 2026

You open Google Search Console, navigate to the Pages report, and see it: "Discovered — currently not indexed." Dozens — maybe hundreds — of your pages sitting in limbo. Google knows they exist but hasn't bothered to crawl them. Here's exactly what's going on and 9 proven ways to fix it.

62% of web pages across the internet are not indexed by Google (Ahrefs study).

What "Discovered — Currently Not Indexed" Actually Means

When Google Search Console shows "Discovered — currently not indexed," it means Google knows your URL exists but has not crawled it yet. Your page is sitting in Google's crawl queue, waiting for its turn.

Google discovers URLs through three main channels: your XML sitemap, internal links on your site, and external links from other websites. Once discovered, the URL enters a prioritization queue. Google decides when (or if) to crawl it based on the page's perceived importance.

This is not a penalty. It's a priority decision. Google has limited crawl resources, and it allocates them to pages it considers most valuable. If your page is stuck here, Google simply hasn't decided it's worth crawling yet.

"Discovered" vs. "Crawled — Currently Not Indexed"

Discovered — Not Indexed

Google found the URL but hasn't crawled it. This is a priority problem. Google hasn't even looked at your content yet. Fix: increase the page's importance signals (internal links, sitemap priority, crawl triggers).

Crawled — Not Indexed

Google crawled the page but chose not to index it. This is a quality problem. Google saw your content and decided it wasn't good enough. Fix: improve content quality, uniqueness, and E-E-A-T signals.

The distinction matters because the fixes are completely different. "Discovered" pages need stronger crawl signals. "Crawled" pages need better content. This guide focuses on "Discovered — currently not indexed."

9 Reasons Google Won't Crawl Your Page (And How to Fix Each One)

1. Weak Internal Linking

Internal links are the single most important signal Google uses to discover and prioritize pages. If your page has zero or very few internal links pointing to it, Google treats it as low priority. Think of internal links as votes of importance within your own site — pages with more votes get crawled first.

A study by Botify found that pages with 5+ internal links are 2.5x more likely to be crawled than pages with only 1 internal link. If your page is orphaned (zero internal links), it may never get crawled even if it's in your sitemap.

Fix

Add 3-5 internal links from your most important indexed pages to the stuck page. Use descriptive anchor text (not "click here"). Prioritize links from your homepage, category pages, and highest-traffic pages. Use IndexFlow's Bulk Index Checker to identify which of your pages are already indexed and can serve as link sources.
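If you manage many pages, you can audit internal link counts programmatically before deciding where to add links. Below is a minimal Python sketch using only the standard library; the page set, URLs, and the `count_internal_links` helper are illustrative, not part of any real tool.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class LinkCollector(HTMLParser):
    """Collects href values from <a> tags in an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def count_internal_links(pages, target_url):
    """pages: {page_url: html_source}. Returns how many pages link to target_url."""
    target = urlparse(target_url)._replace(fragment="").geturl()
    count = 0
    for page_url, html in pages.items():
        parser = LinkCollector()
        parser.feed(html)
        # Resolve relative hrefs against the page they appear on
        resolved = {urljoin(page_url, href) for href in parser.links}
        if target in resolved:
            count += 1
    return count
```

A page that comes back with a count of zero is orphaned and should be the first to receive links from your indexed pages.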

2. Low Domain Authority

New or weak domains get significantly less crawl budget from Google. If your site has few backlinks and limited content, Google allocates fewer resources to crawling it. Established sites like Forbes or GitHub get crawled thousands of times per day. A brand-new site might get crawled once a week.

This creates a frustrating chicken-and-egg problem: you need indexed pages to build authority, but you need authority to get pages indexed. The solution is to start small and build momentum gradually.

Fix

Build quality backlinks from reputable sites. Create more high-quality content (Google crawls active sites more frequently). Submit your most important pages first via Search Console. Be patient — new domains typically need 2-4 months to build enough authority for reliable crawling. Use the Google Indexing API to accelerate discovery.

3. Thin or Duplicate Content

Google's crawl scheduler uses predictive signals to estimate page quality before crawling. If your URL pattern looks similar to pages Google already knows are thin (short product descriptions, tag pages, paginated archives), it may deprioritize crawling entirely.

Pages with fewer than 300 words, auto-generated content, or content that's substantially similar to other pages on your site are prime candidates for the "Discovered — currently not indexed" queue. If Google has already crawled 50 similar pages and found them all thin, it won't rush to crawl the 51st.

Fix

Add unique, comprehensive content to each page — aim for 1,500+ words with original insights, data, or examples. Consolidate thin pages into fewer, more comprehensive ones. Use canonical tags to handle near-duplicates. Check your pages with a meta tag checker to ensure proper indexing directives.

4. Crawl Budget Exhaustion

Every site has a limited crawl budget — the number of pages Googlebot will crawl in a given time period. Large sites with thousands or tens of thousands of pages often exhaust their crawl budget on high-priority pages, leaving newer or lower-priority pages perpetually stuck in the "discovered" state.

This is especially common on e-commerce sites with extensive product catalogs, sites with paginated archives, or sites that generate many URL variations through filters and sorting parameters.

Fix

Clean up low-value pages: noindex tag pages, thin archive pages, and faceted navigation URLs. Use robots.txt to block sections Google shouldn't waste crawl budget on. Reduce URL parameters and consolidate similar pages. Check your robots.txt with a robots.txt checker to make sure you're not accidentally blocking important pages.

5. Poor Site Structure (Deep Pages)

Pages that require 4 or more clicks from the homepage to reach are significantly less likely to be crawled. Google interprets page depth as an importance signal — the deeper a page sits in your site architecture, the less important Google assumes it is.

If your blog posts are buried under /blog/2026/03/28/category/subcategory/post-title, Google may never reach them. Flat architectures where every page is 2-3 clicks from the homepage perform dramatically better for crawling and indexing.

Fix

Flatten your site architecture so every important page is within 3 clicks of the homepage. Add breadcrumb navigation with structured data. Create hub/category pages that link to all related content. Use HTML sitemaps on your site (not just XML sitemaps) to create additional crawl paths.

6. Slow Server Response Time

If your server takes more than 2-3 seconds to respond, Googlebot automatically reduces its crawl rate to avoid overloading your server. This means fewer pages get crawled per session, and lower-priority pages (the ones stuck in "Discovered") get pushed further back in the queue.

Google has confirmed that server response time directly affects crawl budget allocation. A server that responds in 200ms can be crawled 10x faster than one that takes 2 seconds. Over time, this compounds — fast sites get crawled more thoroughly.

Fix

Optimize server response time to under 500ms. Use a CDN (Cloudflare, Fastly) to serve cached content. Upgrade hosting if you're on shared hosting. Enable server-side caching (Redis, Varnish). Test your page speed with a site speed checker to identify bottlenecks.

7. Too Many New Pages Published at Once

Publishing hundreds of pages simultaneously is a red flag for Google. It looks like auto-generated content or a spam attack, and Google responds by throttling your crawl rate and deprioritizing the new pages.

This is particularly common with e-commerce sites launching new product lines, programmatic SEO pages, or sites migrating to new URL structures. Even if every page is high-quality, the sudden volume spike triggers Google's caution mechanisms.

Fix

Drip-release new pages gradually — 30-50 per week instead of 500 at once. Submit new URLs to your sitemap incrementally. Give Google time to crawl and evaluate each batch before adding more. If you need to publish many pages at once (site migration), use 301 redirects to preserve crawl equity.

8. No XML Sitemap (or Broken Sitemap)

Without a properly formatted XML sitemap submitted to Google Search Console, you're relying entirely on link-based discovery. While Google can find pages through links alone, a sitemap acts as a direct request to crawl specific URLs. It's like handing Google a roadmap vs. hoping it stumbles onto your pages.

Common sitemap issues: the sitemap exists but isn't submitted in Search Console, it contains URLs that return 404 errors, it's missing <lastmod> dates, or it hasn't been updated since the new pages were published.

Fix

Create a valid XML sitemap with all indexable URLs. Include <lastmod> dates for every URL. Submit the sitemap in Google Search Console. Use a sitemap checker to validate your sitemap is properly formatted and contains no broken URLs. Update your sitemap whenever you publish new content.
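To show what a valid entry looks like, here is a short Python sketch that generates a sitemap with `<lastmod>` dates using only the standard library. The `build_sitemap` helper and the example URL are hypothetical; the namespace and element names follow the sitemaps.org protocol.

```python
import xml.etree.ElementTree as ET
from datetime import date

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"


def build_sitemap(entries):
    """entries: list of (url, lastmod_date) tuples. Returns sitemap XML as a string."""
    ET.register_namespace("", SITEMAP_NS)
    urlset = ET.Element(f"{{{SITEMAP_NS}}}urlset")
    for loc, lastmod in entries:
        url = ET.SubElement(urlset, f"{{{SITEMAP_NS}}}url")
        ET.SubElement(url, f"{{{SITEMAP_NS}}}loc").text = loc
        # <lastmod> uses W3C date format (YYYY-MM-DD)
        ET.SubElement(url, f"{{{SITEMAP_NS}}}lastmod").text = lastmod.isoformat()
    return ET.tostring(urlset, encoding="unicode")


print(build_sitemap([("https://example.com/stuck-page", date(2026, 3, 28))]))
```

Regenerate and resubmit the file whenever you publish, so the `<lastmod>` dates stay accurate rather than static.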

9. Robots.txt Accidentally Blocking Googlebot

This is more common than you'd think. A misconfigured robots.txt file can prevent Googlebot from crawling entire sections of your site. Some CMS platforms add default Disallow rules that block blog categories, tag pages, or even entire directories. Development environments sometimes have robots.txt set to block all crawlers, and this setting can carry over to production.

Even if the URL is in your sitemap, a robots.txt Disallow rule overrides it. Google will discover the URL via the sitemap but won't crawl it, so the page stays unindexed (Search Console may report this as "Blocked by robots.txt" rather than "Discovered — currently not indexed").

Fix

Check your robots.txt at yourdomain.com/robots.txt. Look for Disallow rules that might block important page sections. Use the robots.txt report in Search Console or check your robots.txt online. Make sure your robots.txt includes a Sitemap directive pointing to your XML sitemap.
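You can test a robots.txt rule set offline with Python's standard library. The rules below are a made-up example of the kind of CMS-added Disallow that silently blocks a whole section:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: a default CMS rule that blocks every tag page
ROBOTS_TXT = """\
User-agent: *
Disallow: /blog/tag/

Sitemap: https://example.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("Googlebot", "https://example.com/blog/tag/seo/"))   # blocked
print(parser.can_fetch("Googlebot", "https://example.com/blog/my-post/"))   # allowed
```

Running the same check against your live file (via `RobotFileParser.set_url` plus `read()`) quickly reveals whether a stuck URL falls under a Disallow rule.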

Are Your Pages Actually Indexed?

Stop guessing. Paste your URLs into IndexFlow and see exactly which pages are indexed and which are stuck. Then auto-submit the unindexed ones for crawling. 100 free credits/month.

How to Fix "Discovered — Currently Not Indexed" (Step by Step)

Follow this checklist in order. Each step builds on the previous one. Most pages will start getting crawled within 1-2 weeks after completing steps 1-5.

1. Find your stuck pages in Search Console

Go to Search Console → Pages → filter by 'Discovered — currently not indexed'. Export the full list. Sort by importance.

2. Add internal links from indexed pages

For each stuck page, add 3-5 internal links from your highest-traffic indexed pages. This is the single most effective fix.

3. Update your sitemap with lastmod dates

Ensure every stuck URL is in your sitemap.xml with an accurate <lastmod> date. Resubmit the sitemap in Search Console.

4. Request indexing via URL Inspection

In Search Console, use the URL Inspection tool to manually request indexing. Limited to ~10-15 per day — prioritize your most important pages.

5. Share URLs on social media

Post the URLs on Twitter/X, LinkedIn, and relevant forums. Links from busy, frequently crawled pages give Googlebot extra discovery paths and can prompt a crawl within hours.

6. Use IndexNow for Bing and Yandex

Submit URLs via the IndexNow API for instant discovery on Bing, Yandex, and other supported search engines.
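The IndexNow protocol is just an HTTP POST with a small JSON body. Here is a hedged Python sketch: the host, key, and URLs are placeholders, and the key file must actually be hosted at the keyLocation you declare.

```python
import json
import urllib.request


def build_indexnow_payload(host, key, urls):
    """JSON body for a batch IndexNow submission, per the IndexNow protocol."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": list(urls),
    }


def submit_indexnow(payload, endpoint="https://api.indexnow.org/indexnow"):
    """POST the payload; a 200 or 202 status means the batch was accepted."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


payload = build_indexnow_payload(
    "example.com",
    "your-indexnow-key",  # placeholder: generate a real key and host the .txt file
    ["https://example.com/stuck-page"],
)
# submit_indexnow(payload)  # network call; uncomment to actually submit
```

One submission notifies all participating engines (Bing, Yandex, Seznam, and others), so there is no need to post to each endpoint separately.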

7. Use Google Indexing API for faster discovery

The Google Indexing API (~200 requests/day by default) can accelerate crawling. Note that it is officially limited to pages with job-posting or livestream structured data; using it for other content types is unsupported.
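For reference, a notification to the Indexing API is a single authenticated POST. The endpoint and body shape below follow Google's documentation; the OAuth2 access token (obtained from a service-account flow, e.g. with the google-auth library) is left as a placeholder in this sketch.

```python
import json
import urllib.request

ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish"


def build_notification(url, url_type="URL_UPDATED"):
    """Request body; url_type is URL_UPDATED or URL_DELETED."""
    return {"url": url, "type": url_type}


def publish_notification(url, access_token):
    """POST one notification. access_token is a placeholder for a real
    service-account OAuth2 token (not shown here)."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_notification(url)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {access_token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The service account you authenticate with must also be added as an owner of the property in Search Console, or the API returns a permission error.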

8. Wait 2-4 weeks and recheck

Some pages simply need time. Recheck after 2-4 weeks. If still not indexed, the issue is structural — revisit steps 1-3.

Doing this manually for dozens of URLs is time-consuming. IndexFlow automates steps 2-8: bulk check index status, auto-submit unindexed URLs via multiple channels (IndexNow, Google API, ping services), and monitor until they're indexed.

"Crawled — Currently Not Indexed" vs. "Discovered — Currently Not Indexed": Different Problems, Different Fixes

Aspect              | Discovered — Not Indexed                 | Crawled — Not Indexed
Root cause          | Priority problem                         | Quality problem
What happened       | Google found the URL but didn't crawl it | Google crawled but rejected the content
Severity            | Moderate — fixable                       | Higher — content issue
Primary fix         | Internal links, sitemap, crawl signals   | Improve content quality, uniqueness
Expected resolution | 1-4 weeks with fixes                     | Variable — depends on content rewrite

If you're seeing both statuses across different pages, tackle "Discovered" pages first — they're usually easier and faster to fix. "Crawled but not indexed" pages need a content overhaul that takes more time.

How Long Does It Take to Get Indexed?

There's no guaranteed timeline, but here's what we've observed across thousands of URLs:

Domain Type                      | Typical Indexing Time | With Active Submission
Established domains (DA 40+)     | 1-7 days              | Often within 24-48 hours
Mid-authority domains (DA 15-40) | 7-21 days             | 3-7 days
New domains (<6 months old)      | 14-42 days            | 7-14 days
New domains with zero backlinks  | 30-60+ days           | 14-30 days

When to Worry

If a page has been stuck in "Discovered — currently not indexed" for more than 4-6 weeks despite having internal links and being in your sitemap, there's likely a structural issue. Review the 9 reasons above systematically. If more than 50% of your pages show this status, the problem is site-wide (usually crawl budget, server speed, or domain authority).

Tools That Help Fix "Discovered — Currently Not Indexed"

You don't need to do everything manually. These tools can dramatically speed up the process:

IndexFlow (Recommended)

IndexFlow is purpose-built for this exact problem. Bulk check which of your pages are indexed, auto-submit unindexed pages through multiple channels (Google Indexing API, IndexNow, ping services, crawl network), and monitor until they're indexed. Set up auto-resubmission so pages that drop out of the index get resubmitted automatically. 100 free credits per month.

Key features for this problem: Bulk Index Checker, Multi-Channel Submission, Index Monitoring with alerts, and Auto Re-indexing.

Google Search Console (Free)

The source of truth. Use URL Inspection to manually request indexing (~10-15/day). The Pages report shows all indexing issues. Limited but essential.

Robots.txt Checker

Verify your robots.txt isn't accidentally blocking Googlebot from crawling important pages.

Sitemap Validator

Use a sitemap checker to confirm your XML sitemap is valid, contains all important URLs, and has correct lastmod dates.

Page Speed Testing

Test your server response time with a site speed checker. If your server is slow, it directly reduces how many pages Googlebot will crawl.

Meta Tag Analyzer

Check your pages for noindex tags, canonical issues, and other meta directives with a meta tag checker.

Stop Waiting. Start Indexing.

IndexFlow checks, submits, and monitors your pages automatically. Multi-channel submission through Google API, IndexNow, ping services, and crawl network. Free to start.

Indexing Issues Aren't Just for Big Sites

"Discovered — currently not indexed" affects every type of website. Whether you've converted your website into a mobile app and need the landing page indexed, or you run a specialized B2B tool like an industrial automation simulator, the same crawl-priority rules apply.

Small sites, new sites, and niche sites are affected disproportionately because Google allocates less crawl budget to them. The fixes in this guide work regardless of your industry — internal linking, proper sitemaps, and active crawl signals are universal.

Frequently Asked Questions

Is 'Discovered — currently not indexed' bad?

Not necessarily. It means Google knows your URL exists but hasn't crawled it yet. For new pages on new sites, this is normal and usually resolves within 2-4 weeks. However, if pages have been stuck for more than 4-6 weeks, or if a large percentage of your site shows this status, it indicates structural issues that need fixing — weak internal linking, crawl budget problems, or server performance issues.

How do I force Google to index my page?

You can't truly force Google, but you can significantly accelerate indexing. Use Google Search Console's URL Inspection tool to request indexing (limited to ~10-15 pages/day). For bulk submission, use the Google Indexing API (~200 requests/day), IndexNow for Bing/Yandex, and ping services. Add internal links from your most-visited pages to the stuck page. Share the URL on social media, where posts can trigger fast crawling. Tools like IndexFlow automate this entire process across multiple channels.

Does submitting a URL in Search Console guarantee indexing?

No. Submitting a URL in Search Console is a request, not a command. Google will crawl the URL faster, but it still evaluates the page's quality, uniqueness, and value before deciding whether to index it. If the content is thin, duplicate, or low-value, Google may crawl it but still choose not to index it (moving it to 'Crawled — currently not indexed'). Submission speeds up discovery, but doesn't bypass quality checks.

Why is Google not indexing my new pages?

The most common reasons are: 1) Weak internal linking — your new pages have no or few internal links pointing to them. 2) Low domain authority — new or small sites get less crawl budget. 3) Thin content — pages with very little unique content get deprioritized. 4) Server speed — slow servers cause Google to reduce crawl rate. 5) Too many pages published at once — sudden spikes trigger spam filters. Check all 9 reasons in this guide for a complete diagnosis.

How many pages can remain 'discovered but not indexed' before it's a problem?

As a general rule: if fewer than 10-15% of your pages show 'Discovered — currently not indexed', it's normal — Google naturally queues some pages. If 20-50% are stuck, you have a crawl budget or internal linking issue. If more than 50% are stuck, there's a fundamental problem with your site's authority, server performance, or content quality that needs immediate attention. Use IndexFlow's Bulk Index Checker to monitor this ratio over time.


Get Your Pages Out of "Discovered" Limbo

IndexFlow checks index status in bulk, submits through 5+ channels, and automatically re-submits pages that aren't indexed. Free plan includes 100 checks per month.