The URL Inspection tool in Google Search Console is the most powerful single-URL diagnostic available to SEO professionals. It provides real-time insight into how Google sees individual pages, why they do or don’t appear in search results, and what technical issues prevent indexing. Accessible via the search bar at the top of any Google Search Console page, the tool replaced the deprecated “Fetch as Google” feature in 2019 with significantly expanded functionality: live URL testing, JavaScript rendering analysis, structured data detection, and the ability to request priority indexing for updated content.
For technical SEO practitioners, URL Inspection bridges the gap between implementing fixes and confirming Google recognizes those changes. The tool reveals canonical tag conflicts, identifies robots.txt blocks preventing crawling, exposes noindex tags accidentally preventing indexing, and shows JavaScript rendering issues causing content invisibility to Googlebot. Understanding how to interpret results—distinguishing between crawl dates and index dates, recognizing when indexed versions lag behind live content, and comparing mobile versus desktop inspection—transforms this tool from simple status checker to comprehensive diagnostic platform.
Bottom Line Up Front
Who needs this: SEO professionals troubleshooting indexing issues, validating technical changes, or managing time-sensitive content requiring faster discovery than normal crawl cycles provide.
Primary use cases: Diagnosing why specific URLs won’t index, verifying fixes before requesting reindexing, checking canonical tag implementation, validating structured data, analyzing JavaScript rendering, requesting priority crawling for updated content.
Key limitation: Single-URL tool (not for bulk analysis); daily quota limits on the Request indexing feature (estimated 10-50 requests per property per day); shows historical indexed data unless you use “Test live URL” for real-time server testing.
Expected outcome: Clear understanding of individual URL indexing status, specific technical issues preventing indexing, ability to request priority crawling for important pages, confirmation that implemented fixes work correctly.
Time investment: 2-3 minutes per URL inspection, immediate for basic status checks, additional time for comparing indexed vs live versions or analyzing rendering issues.
Quick Start: URL Inspection Workflow
When inspecting URLs for indexing issues:
1. Access URL Inspection Tool
- Click search bar at top of any GSC page
- Paste full URL (must include https:// or http://)
- Press Enter
- Wait 5-10 seconds for results
2. Check Primary Status
"URL is on Google" (Green checkmark):
- Indexed successfully
- Check indexed date for freshness
- Verify canonical is correct
- Proceed to step 4 if validating updates
"URL is not on Google" (Red X):
- Not indexed, proceed to step 3
- Check reason in Coverage section
3. Diagnose "Not on Google" Issues
Common reasons and quick actions:
"Discovered - currently not indexed"
- Quality or crawl-priority issue; improve content, internal links, or backlinks
"Crawled - currently not indexed"
- Google crawled but chose not to index (quality/duplicate)
"Excluded by 'noindex' tag"
- Remove noindex meta tag or X-Robots-Tag header
"Blocked by robots.txt"
- Update robots.txt to allow crawling
"Not found (404)"
- Fix URL or redirect to correct page
"Server error (5xx)"
- Fix server issue, check error logs
"Redirect error"
- Fix redirect chain or loop
4. Test Live Version (If Verifying Fixes)
- Click "Test live URL" button
- Wait 30-90 seconds for live test
- Compare live vs indexed version
- Verify fix is working on live server
5. Request Indexing (If Appropriate)
- Only for important URLs needing fast discovery
- Click "Request indexing" button
- Wait for confirmation (added to priority queue)
- Don't use for bulk requests (quota limits)
When to Request Indexing:
- New high-priority content published
- Critical bug fixed on important page
- Major content update completed
- Time-sensitive content (news, events)
When NOT to Request:
- Low-quality pages (won't index regardless)
- Minor updates (normal crawl sufficient)
- Bulk requests across many URLs
- Already indexed pages with no changes
6. Monitor Results
- Check back in 1-3 days
- Re-inspect URL to see if status changed
- If still not indexed, investigate deeper issues
Critical Rule: Use Request indexing sparingly due to daily quota limits (estimated 10-50 requests). Reserve for most important URLs only.
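The triage in steps 2-3 can be sketched as a simple lookup. This is an illustrative helper, not an exhaustive mapping; the reason strings mirror the ones used in this guide, and GSC’s exact wording may vary between UI versions.

```python
# Illustrative triage helper: maps a GSC coverage reason to the suggested
# next action from the workflow above. Reason strings follow this guide;
# GSC's UI wording may differ slightly.
NEXT_ACTION = {
    "Discovered - currently not indexed": "Improve content or internal links, then wait",
    "Crawled - currently not indexed": "Address quality/duplication; requesting indexing won't help",
    "Excluded by 'noindex' tag": "Remove the noindex meta tag or X-Robots-Tag header",
    "Blocked by robots.txt": "Update robots.txt to allow crawling",
    "Not found (404)": "Fix the URL or redirect to the correct page",
    "Server error (5xx)": "Fix the server issue; check error logs",
    "Redirect error": "Fix the redirect chain or loop",
}

def triage(status: str, coverage_reason: str = "") -> str:
    """Return the suggested next step for an inspected URL."""
    if status == "URL is on Google":
        return "Indexed: verify canonical and freshness, then validate updates if needed"
    return NEXT_ACTION.get(coverage_reason, "Investigate the coverage reason in GSC")

print(triage("URL is not on Google", "Blocked by robots.txt"))
```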
What Is the URL Inspection Tool and How to Access It?
The URL Inspection tool provides granular, URL-level diagnostic information about how Google interacts with specific pages on your site. Unlike aggregate reports showing site-wide patterns, this tool focuses on individual URL analysis, revealing why specific pages index or fail to index and what Google sees when crawling those pages.
Primary access method:
Click the search bar at the very top of any Google Search Console page. The search bar appears on every page in GSC—Performance, Page indexing, Sitemaps, Links—making URL Inspection universally accessible. Enter the complete URL you want to inspect including protocol (https://example.com/page, not just example.com/page or /page). Press Enter or click the magnifying glass icon. GSC loads inspection results within 5-10 seconds.
Alternative access methods:
From Page indexing report: Click any URL in the report. GSC automatically opens URL Inspection for that URL.
From Performance report: Click any URL in the “Pages” tab. URL Inspection opens showing that specific page’s indexing status.
From Links report: Click any URL in top linked pages or any linking page. Opens URL Inspection.
These alternative methods provide convenient shortcuts when you’re already analyzing URLs in other reports and want deeper diagnostic information.
URL format requirements:
URLs must be complete including protocol. https://example.com/page works. example.com/page fails. /page fails. The URL must belong to the currently selected GSC property. You cannot inspect URLs from different domains or properties without switching properties first.
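A quick pre-check that a URL meets these requirements (complete protocol and host, inside the selected property) can be scripted. This sketch models a URL-prefix property; the property URL below is a hypothetical example.

```python
from urllib.parse import urlparse

def inspectable(url: str, property_url: str) -> bool:
    """True if the URL is complete (scheme + host) and belongs to the
    given URL-prefix property, so URL Inspection will accept it."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return False  # "example.com/page" or "/page" fails: no protocol/host
    return url.startswith(property_url)  # must be inside the selected property

prop = "https://example.com/"  # hypothetical GSC property
print(inspectable("https://example.com/page", prop))  # True
print(inspectable("example.com/page", prop))          # False: missing protocol
print(inspectable("https://other.com/page", prop))    # False: different property
```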
You can inspect URLs not yet discovered by Google. Entering a brand new URL shows “URL is not on Google” with a reason such as “URL is unknown to Google,” or “Not found (404)” if the page doesn’t exist. This lets you pre-check pages before publishing or requesting indexing.
You can inspect URLs blocked by robots.txt. URL Inspection shows the block status and explains which robots.txt rule blocks the URL, even though Google can’t crawl the actual content.
What URL Inspection shows:
The tool provides four main information sections visible immediately after inspection completes:
Indexing section: Core status information including whether URL is indexed, when Google last crawled and indexed, what canonical URL Google selected, how Google discovered the URL, whether crawling is allowed (robots.txt status), and whether indexing is allowed (meta robots/X-Robots-Tag status).
Coverage section: Detailed explanation of why URL is or isn’t indexed. For indexed URLs, shows “Valid” status. For non-indexed URLs, shows specific reason with explanation and documentation links.
Enhancements section: Additional page features detected including mobile usability status, breadcrumb structured data, all structured data types found (Article, Product, FAQ, etc.), AMP status if applicable, and specific Schema markup implementations.
Page experience section: Core Web Vitals status (Good, Needs improvement, Poor), mobile-friendly verdict, HTTPS status, and intrusive interstitial issues if detected.
Two inspection modes:
URL Inspection operates in two distinct modes that serve different purposes and show different data.
Indexed version (default): Shows information about the URL as it currently exists in Google’s index. This is historical data from Google’s last successful crawl—could be hours old for frequently crawled sites or weeks/months old for rarely crawled sites. The indexed date displayed shows exactly when this snapshot was taken. This mode answers “What does Google currently know about this URL and show in search results?”
Live URL test: Click the “Test live URL” button to fetch the page from your server in real-time. Google makes a fresh request to your server, renders any JavaScript, and analyzes the current state of the page. This takes 30-90 seconds to complete. This mode answers “What would Google see if it crawled this URL right now?” Critical for verifying fixes before requesting indexing—you can confirm the noindex tag is removed, canonical is updated, or content change is live before asking Google to recrawl.
When to use URL Inspection:
Use URL Inspection when diagnosing why a specific important page won’t index. The Page indexing report shows broad patterns (“500 pages have noindex tags”), but URL Inspection shows exactly which noindex tag on which specific page and what the complete robots meta tag says.
Use it after implementing technical fixes. Changed canonical tag? Test live URL to verify Google sees the new canonical. Removed noindex? Live test confirms it’s gone. Fixed 5xx error? Live test shows 200 status. This verification step prevents requesting indexing for pages where fixes aren’t actually live yet.
Use it for priority indexing of time-sensitive content. New product launch? Inspect the URL, verify it’s live and crawlable via live test, then request indexing to add to priority queue rather than waiting weeks for normal discovery.
Use it to understand canonical conflicts. When your declared canonical differs from Google’s selected canonical, URL Inspection shows both values and why they differ, guiding troubleshooting of conflicting signals.
When NOT to use URL Inspection:
Don’t use it for bulk analysis. The tool is single-URL only. Checking 100 URLs requires 100 manual inspections. For bulk analysis, rely on Page indexing report, Sitemaps report, or third-party tools that aggregate data.
Don’t use Request indexing for every page. Daily quota limits (exact number undisclosed, estimated 10-50 requests) mean you can’t request indexing for hundreds of pages. Reserve for genuinely important, time-sensitive URLs only.
Don’t expect instant indexing. Request indexing adds URLs to priority queue but doesn’t guarantee immediate indexing. Processing still takes hours to days depending on site authority and page quality.
Understanding URL Inspection as both diagnostic tool (shows current index status and crawl information) and verification tool (tests live versions before requesting indexing) clarifies its role in technical SEO workflows. It complements site-wide reports by providing the URL-specific detail needed to troubleshoot individual indexing failures that aggregate data can’t explain.
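For programmatic single-URL checks, Search Console also exposes a URL Inspection API (v1, `urlInspection/index:inspect`), which some of the third-party tools mentioned above use under the hood. The sketch below only builds the request body the API documents; authentication and the HTTP call are omitted, and the property and page URLs are assumptions. Note the API has its own quota (roughly 2,000 inspections per day per property), so it still isn’t a whole-site crawler.

```python
import json

def inspection_request(site_url: str, page_url: str) -> str:
    """Build the JSON body for Search Console's URL Inspection API
    (POST https://searchconsole.googleapis.com/v1/urlInspection/index:inspect).
    Auth (OAuth/service account) is required in practice and omitted here."""
    return json.dumps({
        "inspectionUrl": page_url,  # the page to inspect
        "siteUrl": site_url,        # the verified property it belongs to
    })

body = inspection_request("https://example.com/", "https://example.com/page")
print(body)
```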
Understanding Indexed vs Live URL Testing
URL Inspection’s two modes—indexed version and live test—serve fundamentally different purposes and show different data. Confusion between these modes leads to misdiagnosing issues or failing to verify fixes properly.
Indexed version (default view):
This shows Google’s current knowledge about the URL based on the last successful crawl. The data is historical—a snapshot from whenever Google last crawled and processed the page. For high-authority pages crawled daily, indexed data may be recent (hours to days old). For low-priority pages, indexed data can be weeks or months old.
Key indexed version information:
Crawl date: When Googlebot last visited this URL. Format: “Last crawl: Oct 15, 2025, 3:42 PM.” This is the timestamp of the data you’re viewing.
Index date: When Google last updated its index with this URL’s information. Usually matches crawl date but can differ if Google crawled without finding significant changes worth reindexing.
Indexed canonical: The canonical URL Google selected for this page. If your page specifies rel="canonical", this should match. Discrepancies indicate problems.
Referring page: How Google discovered this URL. Common values: sitemap (found in XML sitemap), internal link from specific page URL, external link from another site, or already known (was previously indexed).
Robots meta tag: Exact value of robots meta tag from last crawl. Shows index, follow (allows indexing), noindex (blocks indexing), or specialized directives like noarchive, nosnippet.
User-declared canonical: What your page’s canonical tag says. Extracted from <link rel="canonical" href="..."> in HTML.
Google-selected canonical: What Google actually chose as canonical. When this differs from user-declared canonical, you have a conflict requiring investigation.
Sitemaps: Which sitemaps contain this URL. Shows “None” if URL isn’t in any submitted sitemap, or lists specific sitemap(s) like “sitemap.xml, sitemap-blog.xml.”
Structured data: All Schema.org markup Google detected during last crawl. Lists types found (Article, Product, FAQPage, etc.) with validation status.
The indexed version answers: “What does Google currently show in search results for this URL?” It reflects the live search index—what users see when searching.
Live URL test (click “Test live URL” button):
This performs a real-time fetch from your server. Google makes a fresh HTTP request to your URL, downloads the HTML, executes JavaScript, and analyzes the current state. The test takes 30-90 seconds to complete and uses significant resources (renders full JavaScript execution).
What live test shows:
Current HTTP status: 200 OK, 404 Not Found, 301 Moved Permanently, 5xx Server Error, etc. Shows what your server returns right now, not weeks ago.
Current content: Rendered HTML after JavaScript execution. Shows what Googlebot would see if crawling today.
Current canonical tag: What the canonical tag says on the live page right now, which may differ from what the indexed version shows if you recently changed it.
Current robots meta tag: Live robots meta value. If you just removed noindex, live test shows this immediately while indexed version still shows old noindex until next crawl.
Current structured data: Schema markup in current live version. If you just added Product schema, live test detects it immediately.
Resource loading: Shows all resources (CSS, JavaScript, images) Googlebot loads, which are blocked by robots.txt, and any 404 errors for missing resources.
JavaScript rendering: Screenshots of rendered page and comparison between raw HTML and rendered HTML, revealing client-side rendering issues.
The live test answers: “If Google crawled this URL right now, what would it see?” This is crucial for verifying fixes work before requesting indexing.
When to use indexed version:
Checking current search index status: “Is this page indexed? When was it last crawled?” Indexed version shows this directly.
Investigating why page isn’t ranking: Indexed version shows what content, canonical, and structured data Google currently has for ranking purposes.
Understanding discovery path: Referring page shows how Google originally found the URL (sitemap, internal link, external link).
Diagnosing stale index: Large gap between crawl date and today suggests Google hasn’t recrawled recently. Might need request indexing or improved internal linking.
When to use live test:
Verifying fixes before requesting indexing: Changed canonical tag? Removed noindex? Fixed 5xx error? Test live URL confirms fix is actually live on server before wasting request indexing quota.
Debugging JavaScript rendering: Live test shows raw HTML vs rendered HTML comparison. If critical content only appears after JavaScript execution, you’ll see the discrepancy here.
Checking new pages pre-publishing: Test a URL before announcing it. Verify structured data works, canonical is correct, page loads without errors.
Understanding indexed vs live differences: After testing live URL, GSC shows side-by-side comparison of indexed version vs live version. Any differences appear highlighted (new structured data, changed canonical, different content).
Interpreting discrepancies:
Indexed canonical ≠ live canonical: You changed canonical tag but Google hasn’t recrawled yet. Indexed version shows old canonical, live test shows new canonical. Solution: Request indexing to trigger recrawl.
Indexed shows noindex, live test shows index: You removed noindex tag but Google hasn’t recrawled. Indexed version reflects old crawl with noindex. Live test confirms noindex is gone. Solution: Request indexing.
Indexed content differs from live content: Page content changed since last crawl. Normal for dynamic sites. If changes are significant and time-sensitive, request indexing.
Indexed structured data missing, live test shows it: You recently added Schema markup. Google hasn’t recrawled to discover it. Request indexing to accelerate detection.
Live test fails (5xx error) but indexed shows 200: Server issues affecting live test but page was accessible during last crawl. Fix server issue before requesting indexing.
Common mistakes:
Assuming indexed version is real-time: Indexed version shows last crawl data, potentially weeks old. Always check crawl date to understand data age.
Requesting indexing without testing live: Developers say “we fixed it,” but live test still shows the problem. Verify fixes work via live test before requesting indexing to avoid wasting quota.
Over-relying on live test: Live test is snapshot from single request. It doesn’t guarantee future crawl results will be identical (server load, network conditions, or time-based content may vary).
Comparing wrong versions: Analyzing indexed version to diagnose current problems when live version is significantly different. Check live test if indexed data is more than a few days old.
The indexed vs live distinction is URL Inspection’s most powerful feature. Indexed shows Google’s current knowledge (search index reality), while live test shows current server state (fix verification). Using both together—checking indexed status, testing live version, comparing differences, then requesting indexing if appropriate—creates complete diagnostic workflow.
How to Read URL Inspection Results
URL Inspection presents multiple information sections, each revealing specific aspects of how Google processes the URL. Understanding what each section means and which signals matter most guides effective troubleshooting.
Primary status indicator:
The first thing displayed is the overall status: “URL is on Google” with a green checkmark, or “URL is not on Google” with a red X. This binary indicator immediately tells you whether the URL can appear in search results.
“URL is on Google” means the page is indexed and eligible to appear in search. It doesn’t mean the page ranks well or receives traffic—only that it exists in Google’s index and can potentially show up for relevant queries.
“URL is not on Google” means the page is not indexed and cannot appear in search results. The Coverage section below this indicator explains why (crawl error, noindex, quality issue, robots.txt block, etc.).
Indexing allowed vs Crawling allowed:
These two separate indicators show different permission levels:
Crawling allowed: Shows whether robots.txt permits Googlebot to request the URL. Values: “Yes” (robots.txt allows crawling) or “No” (robots.txt blocks crawling). If “No,” shows which robots.txt directive blocks it.
Indexing allowed: Shows whether meta robots tags or X-Robots-Tag HTTP headers permit indexing. Values: “Yes” (no noindex directive found) or “No” (noindex present). If “No,” shows exact noindex directive (<meta name="robots" content="noindex"> or X-Robots-Tag header).
Critical understanding: Crawling and indexing are separate permissions. A page can be crawlable but not indexable (noindex tag but no robots.txt block), or not crawlable but indexed (already indexed before robots.txt block added—Google can’t recrawl to see changes but keeps existing index entry until it eventually drops out).
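These two permissions can also be checked independently outside GSC. A minimal sketch using Python’s standard robots.txt parser plus a deliberately naive scan for noindex; a real check would parse the HTML and HTTP headers properly and handle directive lists like `noindex, follow`.

```python
from urllib import robotparser

# Sample robots.txt rules for illustration
rp = robotparser.RobotFileParser()
rp.parse("""User-agent: Googlebot
Disallow: /private/
""".splitlines())

def permissions(url: str, html: str, headers: dict) -> dict:
    """Return both permission levels for a URL, mirroring GSC's
    'Crawling allowed' and 'Indexing allowed' indicators (naive scan)."""
    crawlable = rp.can_fetch("Googlebot", url)
    noindex = ('content="noindex"' in html.lower()
               or "noindex" in headers.get("X-Robots-Tag", "").lower())
    return {"crawling_allowed": crawlable, "indexing_allowed": not noindex}

print(permissions("https://example.com/private/page", "<html></html>", {}))
print(permissions("https://example.com/page",
                  '<meta name="robots" content="noindex">', {}))
```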
Sitemaps section:
Shows which submitted sitemaps contain this URL. Possible values:
Sitemap name(s): Lists specific sitemaps like “https://example.com/sitemap.xml” or “https://example.com/sitemap-products.xml.” Multiple sitemaps may list the same URL.
None: URL not in any submitted sitemap. Not necessarily a problem—Google discovers URLs via internal links too. But important URLs should be in sitemaps for discovery insurance.
Referring page (how Google discovered URL):
Shows the discovery source. Common values:
Sitemap: Found via XML sitemap submission. Most common for pages without strong internal linking.
Internal link from [specific URL]: Googlebot followed link from another page on your site. Shows exact page that linked here.
External link from [domain]: Discovered via backlink from another site.
Already known: URL was previously indexed, so Google already knows about it from past crawls.
Understanding the referring page helps diagnose discovery issues. If an important page shows no referring page or hasn’t been discovered at all, add it to a sitemap or improve internal linking.
User-declared vs Google-selected canonical:
Two separate fields show canonical signals:
User-declared canonical: What your page’s <link rel="canonical"> tag says. Shows exact URL from canonical tag in HTML.
Google-selected canonical: What Google chose as the canonical after considering all signals (your canonical tag, sitemaps, internal links, external links, URL structure).
When these match: Good. Google agrees with your canonical declaration.
When these differ: Conflict requiring investigation. Google overrode your declared canonical due to stronger conflicting signals. Common causes:
Canonical points to 404 or redirected URL (Google can’t use non-existent canonical).
Canonical chain exists (A canonicals to B, B canonicals to C—Google simplifies to direct canonical).
Internal links point to different URL than canonical declares (mixed signals).
Sitemap contains different URL than canonical declares.
External links all point to different URL, signaling that URL is more important.
When conflict exists, investigate which signals contradict your canonical declaration and align them.
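Extracting the user-declared canonical to compare against Google’s selection is straightforward with the standard-library HTML parser. A sketch:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Capture the href of the first <link rel="canonical"> in a page."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

def declared_canonical(html: str):
    """Return the user-declared canonical URL, or None if absent."""
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical

html = '<head><link rel="canonical" href="https://example.com/page"></head>'
print(declared_canonical(html))  # https://example.com/page
```

Compare the returned value against the Google-selected canonical shown in URL Inspection; any mismatch is the conflict described above.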
Crawl date vs Index date:
Last crawl: When Googlebot last visited this URL and downloaded content. Format: “Oct 15, 2025, 3:42 PM.”
Indexed date: When Google last updated index with this URL’s information. Usually same as crawl date.
When dates differ: Google crawled but didn’t reindex because content hadn’t changed significantly. Normal for stable content. Concerning if you updated content but index date doesn’t update—suggests Google doesn’t consider changes significant enough to warrant reindexing.
Large gap between crawl date and today: Page hasn’t been crawled recently. Low-priority page, weak internal linking, or low site crawl budget. Consider improving internal links or adding to sitemap.
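Flagging stale crawl dates is easy to script once you parse the timestamp format shown in this guide (e.g. “Oct 15, 2025, 3:42 PM”); the 30-day staleness threshold below is an assumption, not a Google rule.

```python
from datetime import datetime

def crawl_age_days(last_crawl: str, today: datetime) -> int:
    """Days since Googlebot last crawled, parsed from GSC's displayed format."""
    crawled = datetime.strptime(last_crawl, "%b %d, %Y, %I:%M %p")
    return (today - crawled).days

today = datetime(2025, 11, 20)  # fixed date for a reproducible example
age = crawl_age_days("Oct 15, 2025, 3:42 PM", today)
print(age)  # 35
if age > 30:  # illustrative threshold
    print("Stale: consider stronger internal links or requesting indexing")
```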
Coverage status details:
The Coverage section shows exactly why URL is or isn’t indexed:
“Valid” (green): Successfully indexed, eligible for search results.
“Valid with warnings” (yellow): Indexed but issues detected (e.g., indexed despite a robots.txt block, a soft 404 indexed with a 200 status, an unusual redirect pattern).
“Error” (red): Not indexed due to error. Common errors:
- “Server error (5xx)”: Fix server to return 200 status
- “Not found (404)”: URL doesn’t exist, fix URL or redirect
- “Redirect error”: Fix redirect chain or loop
- “Submitted URL returned soft 404”: Page returns 200 but shows 404-like content
“Excluded” (gray): Not indexed by design, not an error. Common exclusions:
- “Excluded by ‘noindex’ tag”: Remove noindex if page should be indexed
- “Blocked by robots.txt”: Update robots.txt if page should be crawlable
- “Duplicate, Google chose different canonical”: This URL is non-canonical variant
- “Discovered – currently not indexed”: Google found URL but hasn’t indexed (quality/priority issue)
- “Crawled – currently not indexed”: Google crawled but chose not to index (quality/duplicate issue)
Enhancements section:
Shows additional page features:
Mobile usability: “Page is mobile-friendly” or specific mobile issues (text too small, clickable elements too close, viewport not set).
Breadcrumbs: “Valid breadcrumbs detected” if BreadcrumbList Schema found, or “No breadcrumbs detected.”
Structured data: Lists all Schema types found with counts. Example: “Article (1 item found), BreadcrumbList (1 item found).” Click any to see detailed validation via Rich Results Test.
AMP: AMP URL if applicable, AMP validation status if AMP implemented.
Page experience section:
Core Web Vitals: “Good,” “Needs improvement,” or “Poor” based on field data from Chrome User Experience Report. Links to detailed Core Web Vitals report.
Mobile-friendly: “Yes” or “No” with issues listed if problems detected.
HTTPS: “Yes” or “No” with warning if not using HTTPS.
Intrusive interstitials: “No issues detected” or specific interstitial problems.
View crawled page:
Click “View crawled page” to see how Googlebot rendered the page during last crawl. Shows:
Screenshot: Visual rendering of page as Googlebot saw it. Useful for identifying rendering issues (blank sections, missing images, layout problems).
HTML comparison: Raw HTML (what server sent) vs Rendered HTML (after JavaScript execution). Critical for debugging JavaScript SEO issues—if content only appears in rendered HTML but not raw HTML, client-side rendering may cause problems.
More info: HTTP response code, page load time, list of all resources loaded (CSS, JS, images), blocked resources (robots.txt blocks), and JavaScript console errors.
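The raw-vs-rendered comparison boils down to diffing visible text. A rough sketch, assuming you already have both HTML versions as strings (e.g. raw from your server, rendered from a headless browser); the extraction is deliberately crude.

```python
import re

def visible_words(html: str) -> set:
    """Crude visible-text extraction: drop scripts and tags, split into words."""
    text = re.sub(r"<script.*?</script>|<[^>]+>", " ", html, flags=re.S)
    return set(text.split())

def js_only_content(raw_html: str, rendered_html: str) -> set:
    """Words that appear only after JavaScript execution. If critical content
    shows up here, client-side rendering may be hiding it from crawlers."""
    return visible_words(rendered_html) - visible_words(raw_html)

raw = "<html><body><div id='app'></div></body></html>"
rendered = "<html><body><div id='app'>Product reviews loaded by JS</div></body></html>"
print(js_only_content(raw, rendered))
```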
Reading URL Inspection results systematically—checking overall status, verifying crawling/indexing permissions, examining canonical signals, reviewing coverage reason, validating structured data, and comparing raw vs rendered HTML when needed—provides complete diagnostic picture of why a URL does or doesn’t index and what technical issues exist.
How and When to Use Request Indexing
The “Request indexing” button adds URLs to Google’s priority crawl queue, accelerating discovery compared to waiting for normal crawling cycles. However, daily quota limits require strategic use for only the most important URLs.
How Request indexing works:
After inspecting a URL, click the “Request indexing” button at the bottom of the URL Inspection results. Google shows a confirmation message: “Indexing requested.” The URL is added to a priority queue for crawling.
Processing time varies significantly: hours for high-authority sites with strong crawl demand, days for lower-authority sites. Google doesn’t provide queue position or estimated processing time—you simply wait and re-inspect the URL later to check if status changed.
Request indexing does not guarantee indexing. It only prioritizes crawling. If the page has quality issues (thin content, duplicate content, noindex tag), it still won’t index even after being crawled. Request indexing accelerates discovery, not indexing decisions.
Daily quota limits:
Google doesn’t publish exact quota numbers. Limits vary by site authority—higher authority sites receive larger quotas. Anecdotal evidence suggests 10-50 requests per property per day for most sites.
When quota is exceeded, clicking “Request indexing” shows error message: “Quota exceeded. Try again tomorrow.” Quota appears to reset around midnight Pacific Time, though exact timing isn’t documented.
No visible quota counter exists. You only discover limits by hitting them. This necessitates conservative usage—reserve requests for genuinely important URLs.
When to use Request indexing:
New high-priority content: Just published important blog post, product page, or service page. Request indexing to accelerate discovery from weeks to hours/days.
Critical fixes completed: Fixed major technical issue (removed noindex tag from important page, resolved 5xx error, corrected canonical tag conflict). Request indexing to trigger recrawl and update index with fixed version.
Time-sensitive content: News articles, event announcements, limited-time promotions requiring fast visibility. Request indexing since waiting days/weeks for natural discovery defeats the purpose.
Major content updates: Significantly rewrote existing content, added substantial new sections, or completely restructured page. Request indexing to trigger reindexing with fresh content.
Competitive timing: Launching product same day as competitors. Request indexing to ensure your page appears in search results simultaneously rather than lagging by days/weeks.
When NOT to use Request indexing:
Low-priority pages: Author bios, tag archives, old blog posts, footer pages. Normal crawling is sufficient—no need to prioritize these.
Minor content updates: Fixed typo, updated date, added single sentence. These minor changes don’t warrant request indexing—wait for natural recrawl.
Bulk requests: Just published 50 new blog posts. Requesting indexing for all 50 exceeds quota and wastes priority slots. Instead, request indexing for top 5-10 most important posts, let others be discovered via sitemap.
Already indexed pages with no changes: Checking if page is indexed, find it is, but request indexing anyway “just to be safe.” This wastes quota. Request indexing only when changes warrant recrawling.
Low-quality pages: Thin content, duplicate content, doorway pages. Request indexing won’t help—these won’t index regardless due to quality issues. Fix quality first, then consider requesting.
Pages with technical problems: Page still has noindex tag, still returns 5xx error, still has canonical pointing to 404. Fix the problem first, verify via “Test live URL,” then request indexing. Requesting indexing for broken pages wastes quota—Google crawls and finds same problems.
Best practices for quota management:
Verify fixes via “Test live URL” before requesting indexing. Don’t waste quota on pages where fixes aren’t actually live yet.
Prioritize ruthlessly. If you can only request 20 URLs per day, choose the 20 most business-critical pages.
Track requests. Keep a spreadsheet noting which URLs you requested indexing for and when. Re-inspect 2-3 days later to verify whether the status changed.
Use sitemaps for broad discovery. Request indexing for specific high-priority URLs, rely on XML sitemaps for general content discovery across hundreds/thousands of URLs.
Don’t request repeatedly. If you requested indexing yesterday and page hasn’t indexed yet, don’t request again today. Wait 3-5 days before considering a second request. Repeated requests for same URL don’t accelerate processing and waste quota.
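A minimal local tracker can enforce these practices: a conservative daily cap and a per-URL cooling-off window. The 10-request cap and 3-day cooldown below are assumptions, since Google doesn’t publish the real quota.

```python
from datetime import date, timedelta

class IndexingRequestLog:
    """Track Request-indexing usage locally: a conservative daily cap and a
    per-URL cooldown, since GSC shows no quota counter of its own."""
    def __init__(self, daily_cap=10, cooldown_days=3):  # assumed limits
        self.daily_cap = daily_cap
        self.cooldown = timedelta(days=cooldown_days)
        self.log = {}  # url -> date of last request

    def can_request(self, url: str, today: date) -> bool:
        used_today = sum(1 for d in self.log.values() if d == today)
        if used_today >= self.daily_cap:
            return False  # assume quota exhausted for the day
        last = self.log.get(url)
        return last is None or today - last >= self.cooldown

    def record(self, url: str, today: date):
        self.log[url] = today

log = IndexingRequestLog()
today = date(2025, 10, 15)
print(log.can_request("https://example.com/launch", today))  # True
log.record("https://example.com/launch", today)
print(log.can_request("https://example.com/launch", today + timedelta(days=1)))  # False: cooldown
```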
Monitoring request indexing results:
After requesting indexing, wait 1-3 days (longer for lower-authority sites). Return to the URL Inspection tool, enter the same URL, and check whether the status changed.
Success indicators: “URL is on Google” status changes from red X to green checkmark. Crawl date updates to recent date (today or yesterday). Index date updates. Coverage status changes from “Discovered – currently not indexed” to “Valid.”
If still not indexed after 5-7 days: Problem isn’t crawl timing, it’s content quality or technical issues. Re-inspect with “Test live URL” to verify no technical problems exist. If technically sound but still not indexed, content quality likely insufficient—improve content depth, add unique value, build backlinks to signal importance.
Request indexing is powerful for priority URLs but limited quota makes strategic selection critical. Use it for genuinely time-sensitive, business-critical pages where faster discovery provides real value—not reflexively for every page you inspect.
Common URL Inspection Issues and Solutions
URL Inspection reveals specific technical problems preventing indexing. Understanding common issues and their solutions streamlines troubleshooting.
| Issue | Cause | Solution | Verification |
|---|---|---|---|
| Canonical Conflict | User-declared canonical differs from Google-selected canonical | Investigate conflicting signals: Check internal links point to declared canonical, verify sitemap contains declared canonical, ensure no redirect chains, confirm external links don’t overwhelmingly point to different URL | Test live URL after fixes, verify both canonicals match |
| Excluded by noindex | Meta robots tag or X-Robots-Tag header contains noindex | Remove <meta name="robots" content="noindex"> from HTML, or remove X-Robots-Tag: noindex from HTTP headers | Test live URL shows “Indexing allowed: Yes” |
| Blocked by robots.txt | Disallow directive in robots.txt prevents crawling | Update robots.txt to allow crawling: Change Disallow: /page/ to allow, or add Allow: /page/ before broader Disallow rule | Test live URL shows “Crawling allowed: Yes” |
| Discovered – currently not indexed | Google found URL but chose not to index due to quality/priority | Improve content quality (add depth, unique value), build internal links to page, acquire backlinks to signal importance, add to XML sitemap | Monitor in Page indexing report, may take weeks to index |
| Crawled – currently not indexed | Google crawled but decided not to index (quality/duplicate issue) | Check for duplicate content (canonical conflicts, parameter variations), improve content uniqueness and value, consolidate thin pages | May require content overhaul, not quick fix |
| Server error (5xx) | Server returning 500, 502, 503, or 504 errors | Fix server issues: Check error logs, increase server resources, fix application bugs, verify database connections | Test live URL shows 200 status |
| Not found (404) | URL doesn’t exist or returns 404 status | Fix URL spelling if typo, implement 301 redirect to correct URL, or remove broken internal links if page intentionally removed | Test live URL shows 200 status or proper redirect |
| Redirect error | Redirect chain (A→B→C) or redirect loop (A→B→A) | Fix internal links to point directly to final destination, remove redirect chains, fix redirect loops by correcting redirect rules | Test live URL shows single 301 or 200 status |
| Soft 404 | Page returns 200 status but displays “not found” content | Return proper 404 status code for missing content, or add substantial content if page should exist | Test live URL shows 404 status or real content with 200 |
| Mobile usability issues | Text too small, clickable elements too close, viewport not set | Add viewport meta tag, increase font sizes, add spacing between clickable elements | Check Mobile usability section shows “No issues” |
| Structured data errors | Invalid Schema markup | Use Rich Results Test to identify specific errors, fix JSON-LD syntax, ensure required properties present | Test live URL shows “Valid” structured data |
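A robots.txt fix from the table above can be sanity-checked locally before re-running “Test live URL.” A minimal sketch using Python’s standard-library robotparser, with a hypothetical robots.txt (note that Python’s parser uses first-match semantics while Googlebot uses longest-match, so placing the specific Allow before the broader Disallow, as the table suggests, keeps both interpretations in agreement):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt after the fix: the specific Allow rule
# appears before the broader Disallow rule.
robots_txt = """\
User-agent: *
Allow: /private/page/
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())
parser.modified()  # mark rules as loaded so can_fetch evaluates them

print(parser.can_fetch("Googlebot", "https://example.com/private/page/"))  # True
print(parser.can_fetch("Googlebot", "https://example.com/private/other"))  # False
```

This only approximates Googlebot’s behavior; “Test live URL” remains the authoritative check.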
Canonical conflict deep dive:
When user-declared canonical differs from Google-selected canonical, systematically check each signal:
Step 1 – Internal links: Use Screaming Frog or site crawl to find all internal links to this URL. Do most links point to URL A (your declared canonical) or URL B (Google’s selected canonical)? If most links point to B, this signal overpowers your canonical tag. Fix: Update internal links to consistently point to A.
Step 2 – XML sitemaps: Check all submitted sitemaps. Do they include URL A or URL B? If sitemaps include B while canonical tag points to A, mixed signals exist. Fix: Update sitemaps to include only A.
Step 3 – Redirect status: Does either URL redirect? If A redirects to B, Google can’t use A as canonical (canonicals must be accessible). Fix: Remove redirect or change canonical to point to final destination.
Step 4 – External links: Use Ahrefs or similar to check backlinks. Do most backlinks point to A or B? If external links overwhelmingly point to B, this signal may override your canonical tag. Consider if B should actually be canonical, or build more backlinks to A to strengthen signal.
Step 5 – URL parameters: Does one URL have parameters (?id=123) while the other doesn’t? Google may prefer the parameter-free version. Use canonical tags to consolidate; note that GSC’s legacy URL Parameters tool was deprecated in 2022, so canonical tags are now the primary control.
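The internal-link tally in Step 1 is easy to automate from a crawl export. A sketch with hypothetical URLs and link data, assuming a simple list of (source, target) pairs such as Screaming Frog’s “All Inlinks” export provides:

```python
from collections import Counter

# Hypothetical inlinks export: each tuple is (source_page, link_target).
inlinks = [
    ("https://example.com/blog/1", "https://example.com/b"),
    ("https://example.com/blog/2", "https://example.com/b"),
    ("https://example.com/blog/3", "https://example.com/a"),
    ("https://example.com/nav",    "https://example.com/b"),
]

# Count how many internal links point at each candidate canonical.
targets = Counter(target for _, target in inlinks)
dominant, count = targets.most_common(1)[0]

declared_canonical = "https://example.com/a"  # assumption: your rel=canonical
if dominant != declared_canonical:
    print(f"Internal links favor {dominant} ({count} of {len(inlinks)}); "
          f"this conflicts with the declared canonical {declared_canonical}.")
```

If the dominant link target disagrees with the declared canonical, that is the mixed signal Step 1 describes.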
Noindex + Crawl allowed confusion:
Pages with noindex tag but allowed in robots.txt confuse many SEOs. Understanding the interaction:
Scenario: Page has <meta name="robots" content="noindex,follow"> but robots.txt allows crawling.
Result: Google can crawl (robots.txt permits), reads the page, sees noindex tag, and doesn’t index. URL Inspection shows “Crawling allowed: Yes” but “Indexing allowed: No.”
Why this setup exists: Allows link equity to flow through the page (follow links) without indexing the page itself. Useful for pagination, filter pages, or low-value pages that should pass authority but not appear in search.
If page should be indexed: Remove noindex tag. Crawling is already allowed, so once noindex is removed and Google recrawls, indexing will occur.
Robots.txt block + Already indexed:
Paradoxical situation: URL Inspection shows “Blocked by robots.txt” but also “URL is on Google.”
Explanation: The page was crawlable and indexed before the robots.txt block was added. Google can’t recrawl to check current status (robots.txt blocks it), so it keeps the existing index entry. Eventually (weeks to months), Google drops the entry since it can’t verify the page still exists.
If you want the page to remain indexed: Remove the robots.txt block: delete the Disallow rule, or add a more specific Allow: /page/ directive that overrides it.
If you want page removed from index: Robots.txt block alone is insufficient. Use noindex tag (requires temporarily allowing crawl so Google can see noindex), request removal via Removals tool in GSC, or implement 404/410 status.
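Before choosing between the removal options above, it helps to confirm what the page currently signals. A minimal sketch that checks both noindex mechanisms (the header and HTML samples are hypothetical, and the tag matching is deliberately simple):

```python
import re

def indexing_allowed(headers: dict, html: str) -> bool:
    """Return False if an X-Robots-Tag header or meta robots tag says noindex."""
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return False
    # Look for a meta robots tag containing noindex (naive but illustrative).
    for tag in re.findall(r"<meta[^>]+>", html, flags=re.IGNORECASE):
        if 'name="robots"' in tag.lower() and "noindex" in tag.lower():
            return False
    return True

print(indexing_allowed({"X-Robots-Tag": "noindex"}, "<html></html>"))         # False
print(indexing_allowed({}, '<meta name="robots" content="noindex,follow">'))  # False
print(indexing_allowed({}, '<meta name="robots" content="index,follow">'))    # True
```

“Test live URL” performs the same check authoritatively; this is only a quick local pre-check.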
Discovered but not indexed volume:
Large numbers of URLs in “Discovered – currently not indexed” status indicate systematic issues, not individual page problems.
Common causes:
Crawl budget exhaustion: Site too large for allocated crawl budget. Google discovers URLs via sitemaps or internal links but doesn’t prioritize crawling them. Solution: Improve site architecture, remove low-value pages, strengthen internal linking to important pages.
Low-quality content at scale: E-commerce sites with thin product descriptions, blogs with short posts, category/tag pages with minimal content. Google discovers but determines not worth indexing. Solution: Improve content quality, consolidate thin pages, add unique value.
Faceted navigation explosion: Filter combinations creating thousands of URLs (products?color=blue&size=large&brand=nike&…). Google discovers but recognizes as duplicate/low-value variations. Solution: Use canonical tags, noindex filters, or robots.txt blocking for infinite filter combinations.
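One systematic fix for filter explosions is computing a parameter-stripped canonical for every faceted URL. A sketch with hypothetical filter parameter names (your site’s duplicate-creating parameters will differ):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical: parameters that only create duplicate filter variations.
FILTER_PARAMS = {"color", "size", "brand", "sort"}

def canonical_for(url: str) -> str:
    """Drop known filter parameters, keeping any that change the core content."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in FILTER_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

url = "https://example.com/products?color=blue&size=large&page=2"
print(canonical_for(url))  # https://example.com/products?page=2
```

The computed value would populate each variation’s rel=canonical tag, consolidating the filter combinations onto one indexable URL.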
JavaScript rendering issues:
URL Inspection’s “View crawled page” feature reveals rendering problems:
Symptom: Screenshot shows blank sections or missing content. Rendered HTML contains content but raw HTML doesn’t.
Cause: Client-side JavaScript renders critical content. If JavaScript fails or loads slowly, Googlebot may not see content.
Diagnosis: Compare “Raw HTML” vs “Rendered HTML” tabs in “View crawled page.” If content only appears in rendered version, JavaScript dependency exists.
Solutions:
Implement server-side rendering (SSR) so content exists in raw HTML before JavaScript executes.
Use static site generation (SSG) pre-rendering HTML with content already present.
Ensure critical content renders quickly; Googlebot will not wait indefinitely for JavaScript, and the widely cited 5-second budget is a rule of thumb, not a documented timeout.
Check “More info” tab for JavaScript errors preventing execution.
Troubleshooting URL Inspection issues systematically—identifying specific error messages, understanding what each means, implementing targeted fixes, and verifying via “Test live URL”—transforms vague “page won’t index” problems into actionable technical corrections.
Advanced URL Inspection Techniques
Beyond basic status checking, URL Inspection enables sophisticated diagnostic workflows for complex technical issues.
JavaScript rendering analysis:
Modern sites rely heavily on JavaScript, creating rendering challenges. URL Inspection provides tools to diagnose these issues.
Raw vs Rendered HTML comparison: In “View crawled page,” URL Inspection shows both raw HTML (what the server sends) and rendered HTML (after JavaScript execution). Significant differences reveal JavaScript dependency.
Example diagnosis: Page content wrapper in raw HTML is <div id="content"></div> (empty), but rendered HTML shows full article text inside. This indicates JavaScript populates content client-side. If JavaScript fails or loads slowly, Googlebot sees empty div.
Solution verification: After implementing SSR, re-test and confirm the content now appears in the raw HTML tab. When both tabs show identical content, the JavaScript dependency has been eliminated.
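The raw-versus-rendered comparison can be approximated offline by stripping tags from both versions and diffing the visible words. A rough sketch (the HTML samples are hypothetical and the tag stripping is deliberately naive; real pages need a proper HTML parser):

```python
import re

def visible_text(html: str) -> set:
    """Naively strip tags and return the set of words a crawler would see."""
    return set(re.sub(r"<[^>]+>", " ", html).split())

raw_html = '<div id="content"></div>'                        # what the server sends
rendered_html = '<div id="content">Full article text</div>'  # after JavaScript runs

js_only = visible_text(rendered_html) - visible_text(raw_html)
if js_only:
    print("Content that depends on JavaScript:", sorted(js_only))
```

A non-empty difference mirrors what the empty-div example above shows: the content exists only after client-side rendering.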
Resource loading analysis: “More info” tab lists all resources Googlebot loaded: CSS files, JavaScript files, images, fonts. Blocked resources (robots.txt blocks) appear highlighted. Missing resources (404 errors) are flagged.
Critical resources blocked: If main.css or app.js is blocked by robots.txt, the page can’t render properly. Solution: Update robots.txt to allow these resources.
Console errors: JavaScript errors appear in “More info” section. Example: “Uncaught TypeError: Cannot read property ‘x’ of undefined” indicates JavaScript bug preventing proper execution. Fix code errors before requesting indexing.
Structured data validation workflow:
URL Inspection detects structured data but provides limited validation detail. Comprehensive validation requires a combined workflow:
Step 1: Inspect URL in URL Inspection tool. Check “Enhancements” section for detected structured data types.
Step 2: If structured data detected, note the types (Article, Product, FAQPage, etc.) and count.
Step 3: Click structured data type or copy URL to Rich Results Test (search.google.com/test/rich-results) for detailed validation.
Step 4: Rich Results Test shows:
- Which specific properties are present/missing
- Validation errors (required properties missing, wrong value types)
- Warnings (recommended properties missing, deprecated markup)
- Preview of how rich result might appear in search
Step 5: Fix identified errors in your Schema markup. Common issues:
- Missing required properties (Product needs name, image, offers)
- Wrong value type (price as text instead of number)
- Invalid property values (datePublished in wrong format)
Step 6: Test live URL in URL Inspection to verify fixes. Re-check Rich Results Test.
Step 7: Request indexing to trigger recrawl with corrected structured data.
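A pre-flight check for Step 5’s common issues can be scripted against a page’s JSON-LD before re-testing. A sketch using the Product example above (the required-property set here is an illustrative subset; the Rich Results Test and Google’s documentation are the source of truth):

```python
import json

# Illustrative subset of required properties per type, per the examples above.
REQUIRED = {"Product": {"name", "image", "offers"}}

# Hypothetical JSON-LD extracted from the page (missing "image").
jsonld = json.loads("""
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "offers": {"@type": "Offer", "price": "19.99"}
}
""")

missing = REQUIRED.get(jsonld.get("@type"), set()) - set(jsonld)
print("Missing required properties:", sorted(missing))  # ['image']
```

Anything this flags would also surface as an error in the Rich Results Test at Step 4.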
Mobile vs Desktop inspection comparison:
URL Inspection defaults to mobile version (Googlebot Smartphone) due to mobile-first indexing. Desktop version available via dropdown at top of results.
When to check both:
Sites serving different content to mobile vs desktop (responsive designs that hide content on mobile, separate mobile URLs, dynamic serving based on user agent).
What to compare:
Content parity: Does the mobile version include all important content? Under mobile-first indexing, content entirely absent from the mobile version won’t count toward rankings (content in tabs or accordions still counts, but content removed from mobile is invisible to Google).
Structured data: Some implementations only add Schema to desktop version. Check mobile version has identical structured data.
Canonical tags: Ensure mobile and desktop both specify correct canonicals. With separate mobile URLs (m.example.com), the mobile page should declare the desktop URL as canonical, while the desktop page declares rel="alternate" pointing to the mobile URL.
Internal links: Mobile navigation may differ from desktop. Verify important pages are accessible on mobile version.
Canonical configuration scenarios:
Desktop version (example.com/page):
- User-declared canonical: example.com/page
- Google-selected canonical: example.com/page
- Status: Match, good
Mobile version (m.example.com/page):
- User-declared canonical: example.com/page (points to desktop)
- Google-selected canonical: example.com/page
- Status: Match, correct for separate mobile URLs
Batch URL inspection strategies:
URL Inspection is a single-URL tool, but workflows exist for analyzing multiple URLs efficiently:
Prioritized manual inspection: Export URL list from Page indexing report (click “Discovered – currently not indexed” status, export URLs). Sort by business importance. Manually inspect top 20-50 URLs to identify patterns. If all show same issue (all have noindex, all blocked by robots.txt), fix is systematic rather than URL-by-URL.
URL Inspection API: Google offers the URL Inspection API (part of the Search Console API) for programmatic access. It requires API credentials and coding, and enables bulk inspection of URLs, automated reporting, and integration with dashboards. Check the Search Console API documentation for current usage limits (roughly 2,000 inspections per property per day at the time of writing).
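A request to the URL Inspection API can be sketched with the standard library alone. The endpoint and body shape below follow the public API documentation at the time of writing, but verify against current docs; real calls also need a valid OAuth token (a placeholder here):

```python
import json
import urllib.request

def build_inspection_request(site_url: str, page_url: str, token: str):
    """Build (but don't send) a POST request for the URL Inspection API."""
    endpoint = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"
    body = json.dumps({"siteUrl": site_url, "inspectionUrl": page_url}).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_inspection_request("https://example.com/",
                               "https://example.com/page",
                               "YOUR_OAUTH_TOKEN")
# urllib.request.urlopen(req) would return the inspection result
# (index status, coverage state, canonical data) as JSON.
print(req.full_url)
```

Looping this over an exported URL list gives the bulk inspection and dashboard integration described above, subject to the API’s quota.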
Spreadsheet tracking: Create spreadsheet with columns: URL, Inspection Date, Status, Coverage Reason, Actions Taken. Systematically inspect important URLs, document findings, track fixes. Re-inspect weekly to monitor status changes.
Pattern recognition: Instead of inspecting all 1,000 “Discovered – currently not indexed” URLs, inspect 10-20 representative samples. If all show same pattern (all are thin category pages, all are parameter variations, all are deep in site structure with weak internal linking), implement systematic fix rather than addressing individually.
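Representative sampling is easier when URLs are first grouped by structural pattern. A sketch with a hypothetical export (the grouping heuristic here, first path segment plus a parameter flag, is one simple choice among many):

```python
from collections import defaultdict
from urllib.parse import urlsplit

# Hypothetical export of "Discovered – currently not indexed" URLs.
urls = [
    "https://example.com/tag/red",
    "https://example.com/tag/blue",
    "https://example.com/products?color=green",
    "https://example.com/products?size=xl",
    "https://example.com/blog/post-1",
]

def pattern(url: str) -> str:
    """Group by first path segment, flagging parameterized variations."""
    parts = urlsplit(url)
    first_segment = parts.path.strip("/").split("/")[0] or "(root)"
    return f"/{first_segment}/" + ("?params" if parts.query else "")

groups = defaultdict(list)
for u in urls:
    groups[pattern(u)].append(u)

# Inspect one or two representatives per group instead of every URL.
for pat, members in sorted(groups.items()):
    print(f"{pat}: {len(members)} URLs, sample: {members[0]}")
```

Large groups (tag pages, parameter variations) point at the systematic fixes described above rather than URL-by-URL work.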
Canonical chain diagnosis:
Canonical chains (A → B → C) waste crawl budget and dilute ranking signals.
Detection via URL Inspection:
Inspect URL A. Check canonical signals:
- User-declared canonical: URL B
- Google-selected canonical: often URL C rather than URL B (Google follows the chain to its final destination)
Inspect URL B. Check canonical:
- User-declared canonical: URL C
- Google-selected canonical: URL C
Chain confirmed: A → B → C.
Solution: Update URL A’s canonical to point directly to C, eliminating the chain:
```html
<!-- Before (chain) -->
<!-- Page A --> <link rel="canonical" href="https://example.com/B">
<!-- Page B --> <link rel="canonical" href="https://example.com/C">

<!-- After (direct) -->
<!-- Page A --> <link rel="canonical" href="https://example.com/C">
<!-- Page B --> <link rel="canonical" href="https://example.com/C">
```
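At scale, chains can be detected from a crawl’s canonical map rather than inspecting URLs one by one. A sketch (the mapping is hypothetical) that also guards against loops:

```python
def resolve_canonical(url: str, canonicals: dict) -> tuple:
    """Follow declared canonicals to the final target; report hop count."""
    seen = [url]
    while url in canonicals and canonicals[url] != url:
        url = canonicals[url]
        if url in seen:
            raise ValueError(f"Canonical loop: {' -> '.join(seen + [url])}")
        seen.append(url)
    return url, len(seen) - 1  # more than 1 hop means a chain exists

# Hypothetical declared canonicals from a crawl export.
canonicals = {
    "https://example.com/A": "https://example.com/B",
    "https://example.com/B": "https://example.com/C",
}

final, hops = resolve_canonical("https://example.com/A", canonicals)
print(final, hops)  # https://example.com/C 2 -> point A directly at C
```

Any URL resolving with more than one hop should have its canonical rewritten to point directly at the final target, as shown in the before/after markup above.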
Advanced URL Inspection techniques—systematically analyzing JavaScript rendering, validating structured data comprehensively, comparing mobile and desktop versions, diagnosing canonical chains, and implementing batch inspection strategies—transform the tool from simple status checker to sophisticated technical diagnostic platform enabling deep troubleshooting of complex indexing issues.
URL Inspection Tool Audit Checklist
For important URLs requiring indexing:
- [ ] Access URL Inspection via search bar (paste full URL with https://)
- [ ] Check primary status (“URL is on Google” or “URL is not on Google”)
- [ ] Verify crawling allowed (Robots.txt permits access)
- [ ] Verify indexing allowed (No noindex tag present)
- [ ] Check user-declared canonical matches Google-selected canonical
- [ ] Review crawl date (recent crawl suggests good priority)
- [ ] Review index date (should match crawl date)
- [ ] Confirm URL appears in appropriate XML sitemap
- [ ] Check referring page (how Google discovered URL)
- [ ] Review Coverage status (Valid, Error, or Excluded with reason)
Test Live URL (if verifying fixes):
- [ ] Click “Test live URL” button
- [ ] Wait 30-90 seconds for live test completion
- [ ] Compare live version to indexed version
- [ ] Verify fixes are live (noindex removed, canonical corrected, content updated)
- [ ] Check HTTP status is 200 (not 404, 5xx, or redirect)
- [ ] Review “View crawled page” for rendering issues
- [ ] Compare raw HTML vs rendered HTML for JavaScript issues
- [ ] Check resources loaded (CSS, JS) with no critical blocks
- [ ] Review console for JavaScript errors
Structured Data Validation:
- [ ] Check Enhancements section for detected structured data
- [ ] Note all Schema types found (Article, Product, FAQPage, etc.)
- [ ] Copy URL to Rich Results Test for detailed validation
- [ ] Fix any validation errors or warnings
- [ ] Re-test live URL to confirm structured data corrections
Mobile vs Desktop:
- [ ] Switch to Googlebot Smartphone view (default for mobile-first indexing)
- [ ] Verify mobile version includes all important content
- [ ] Compare mobile canonical to desktop canonical (should match for responsive sites)
- [ ] Check mobile usability section for issues
- [ ] If serving different content, inspect desktop version separately
Request Indexing (use selectively):
- [ ] Only for important, time-sensitive URLs
- [ ] Verify via “Test live URL” that page is ready for indexing
- [ ] Confirm no technical issues exist (noindex, 5xx, robots.txt block)
- [ ] Click “Request indexing” button
- [ ] Note in tracking spreadsheet (URL, date requested)
- [ ] Re-inspect in 2-3 days to verify status change
- [ ] Don’t request repeatedly for same URL (wait 5-7 days between requests)
Troubleshooting Checklist:
- [ ] Canonical conflict: Investigate internal links, sitemaps, redirects for conflicting signals
- [ ] Noindex issue: Remove meta robots noindex tag or X-Robots-Tag header
- [ ] Robots.txt block: Update robots.txt to allow crawling
- [ ] Server error: Fix server to return 200 status consistently
- [ ] 404 error: Fix URL or implement 301 redirect to correct destination
- [ ] Redirect error: Remove redirect chains, fix redirect loops
- [ ] Soft 404: Return proper 404 status or add substantial content
- [ ] Discovered not indexed: Improve content quality, add internal links, build backlinks
- [ ] JavaScript rendering: Implement SSR or verify content in raw HTML
Quota Management:
- [ ] Track Request indexing usage (estimate 10-50 per day max)
- [ ] Prioritize business-critical URLs only
- [ ] Don’t request indexing for low-value pages
- [ ] Don’t request repeatedly for same URL
- [ ] Use XML sitemaps for broad discovery, Request indexing for priority URLs
Monitoring:
- [ ] Re-inspect URLs after 2-3 days (Request indexing processing time)
- [ ] Track status changes in spreadsheet
- [ ] Document patterns across multiple URLs
- [ ] Implement systematic fixes for widespread issues
Use this checklist when diagnosing indexing issues, validating technical implementations, or requesting priority crawling for important pages.
Related Technical SEO Resources
Deepen your indexing and diagnostic expertise:
- Google Search Console Indexing Issues Guide – Understand site-wide indexing patterns revealed in the Page indexing report, learn how URL-level diagnostics from URL Inspection complement aggregate indexing data, and master the relationship between discovery, crawling, and indexing that URL Inspection reveals for individual pages.
- Crawl Budget Optimization Guide – Explore how Request indexing affects crawl budget allocation by prioritizing specific URLs, understand why frequently requesting indexing for low-value pages wastes both quota and crawl resources, and learn strategic URL selection that maximizes crawl efficiency alongside URL Inspection verification workflows.
- Structured Data and Schema Markup Guide – Master comprehensive structured data implementation that URL Inspection’s Enhancements section validates, learn the complete workflow combining URL Inspection detection with Rich Results Test validation, and understand how proper Schema markup influences the information URL Inspection displays about your pages.
- JavaScript SEO and Rendering Guide – Understand the JavaScript rendering issues URL Inspection’s “View crawled page” feature exposes through raw vs rendered HTML comparison, learn server-side rendering and static site generation strategies that eliminate client-side dependencies URL Inspection reveals, and implement solutions ensuring content appears in both raw and rendered views for optimal indexing.
The URL Inspection tool represents Google Search Console’s most granular diagnostic capability, transforming abstract indexing concepts into concrete, URL-specific technical data that guides precise troubleshooting and validates that implemented fixes actually work as intended. Unlike aggregate reports showing site-wide patterns—500 pages with noindex tags, 1,000 pages discovered but not indexed, 200 pages with canonical conflicts—URL Inspection reveals exactly which noindex tag on which specific page, precisely what the canonical conflict involves, and definitively whether that particular URL is crawlable, indexable, and appearing in Google’s index. The tool’s dual modes serve complementary purposes: indexed version shows historical reality (what Google currently has in its index and displays in search results), while live URL testing shows present server state (what Google would see if crawling right now), enabling verification that fixes are actually deployed before wasting Request indexing quota on pages where changes aren’t yet live. Understanding the distinction between crawling permission (robots.txt) and indexing permission (meta robots tags) prevents confusion when pages are crawlable but not indexable or vice versa, while recognizing when user-declared canonicals differ from Google-selected canonicals exposes the conflicting signals that prevent Google from honoring your canonical preferences. The Request indexing feature accelerates discovery for time-sensitive content—new product launches requiring same-day visibility, critical bug fixes needing immediate index updates, breaking news demanding instant search presence—but daily quota limitations (estimated 10-50 requests per property) necessitate strategic prioritization, reserving requests for genuinely business-critical URLs rather than reflexively requesting indexing for every inspected page regardless of importance or changes. 
Effective URL Inspection workflows combine systematic checking of crawling/indexing permissions, canonical verification, structured data validation, JavaScript rendering analysis, and live testing before requesting indexing, creating comprehensive diagnostic processes that identify not just whether pages index but precisely why they don’t and exactly what technical corrections will enable successful indexing. The tool’s integration with other GSC features—accessible from Page indexing report URLs, Performance report pages, and Links report entries—embeds URL-level diagnosis directly into broader analytical workflows, enabling seamless transition from identifying site-wide patterns to diagnosing specific URL failures to validating individual fixes to monitoring whether status changes after interventions. Advanced techniques including JavaScript rendering comparison via raw vs rendered HTML tabs, structured data validation combining URL Inspection detection with Rich Results Test detailed analysis, mobile versus desktop inspection for content parity verification, and canonical chain diagnosis reveal sophisticated technical issues that basic status checking never exposes. Common mistakes—assuming indexed version shows real-time data when it’s actually weeks-old snapshot, requesting indexing without testing live version to verify fixes are deployed, expecting instant indexing after requesting when processing takes hours to days, or burning through daily quota on low-value pages leaving no requests available for genuinely important URLs—diminish URL Inspection effectiveness, while understanding these pitfalls and following best practices transforms the tool from occasionally useful into systematically valuable for managing indexing across sites of any scale or complexity.