If you’ve launched a fresh website or started publishing on a new domain and you see “Discovered – currently not indexed” inside Google Search Console, it can feel like Google is ignoring you. You’ve done the work, you’ve published the page, you’ve even submitted a sitemap, and still… nothing shows up in search. The good news is that this status is not a penalty, and it’s not a final verdict. It’s a signal about where your URLs are sitting in Google’s pipeline, and on new sites, it’s one of the most common early-stage states you’ll see.
The key is to understand what “discovered” really means, what’s actually blocking the next step, and how to make the right moves in the right order. Most people waste weeks doing random “SEO fixes” because they treat this issue like a mystery. It’s usually not a mystery. It’s a prioritisation problem combined with a trust and quality problem, and you can solve it systematically.
What “Discovered – currently not indexed” actually means
In plain terms, Google knows your URL exists, but it hasn’t crawled it yet, and therefore it hasn’t indexed it. Google can discover a URL in many ways: from your XML sitemap, from internal links, from external links, from your RSS feed, from redirects, from canonical tags, even from pages it has already crawled that reference your new page. Discovery is cheap. Crawling is not. Crawling costs Google resources, and Google is constantly deciding what deserves attention next.
This status is different from “Crawled – currently not indexed.” With “Discovered,” Google hasn’t fetched the page content yet. With “Crawled,” Google has fetched the content and decided not to index it (at least for now). That difference matters because it changes what you should focus on. “Discovered” is primarily about crawl prioritisation, crawl pathways, and perceived value. It’s Google saying: “I see it’s there, but it’s not high on my to-do list.”
On a new site, that usually happens for a few predictable reasons: your site doesn’t yet have much authority, Google doesn’t yet trust it, you’re producing more URLs than Google wants to crawl early on, or the internal linking and architecture aren’t giving Google a strong reason to pull those pages forward.
Why this happens most often on new sites
New sites are basically unknown entities. Google has to be careful about crawling too aggressively, because a large portion of the web is low quality, duplicated, or intentionally spammy. Early on, Google tests you. It will crawl some pages, watch what happens, evaluate signals, and gradually allocate more crawl capacity if the site looks legitimate and valuable.
Here are the most common causes I see behind “Discovered – currently not indexed” on fresh domains, especially content hubs and blogs:
1) Google doesn’t see enough value yet to prioritise the URL
If you publish content that looks similar to thousands of existing pages on the web, or content that reads like generic SEO filler, Google has no urgency to crawl it. This is especially true if your site is new and there’s no brand demand pulling Google toward your pages. The harsh truth is that many new sites publish pages Google can safely ignore, and Google learns quickly.
2) Weak internal linking and unclear architecture
If your page exists, but it’s buried, not linked from prominent pages, or only reachable through a path that Google doesn’t crawl often, it can sit in “discovered” limbo for a long time. Sitemaps help discovery, but they do not replace strong internal linking. Google still relies heavily on your site’s structure to decide what matters.
3) Index bloat from tags, filters, and thin archives
Magazine-style WordPress setups can generate many low-value URLs: tag archives, author archives, date archives, attachment pages, search pages, parameter URLs, and so on. When Google sees a new site producing a large number of URLs that look similar or thin, it becomes conservative. Your high-value posts get lumped into the same crawl queue as low-value pages, and everything slows down.
4) Technical friction that reduces crawl efficiency
This doesn’t always show up as a clear error. Sometimes the site is technically “fine,” but slow, unstable, blocked in certain ways, or heavy enough that Google doesn’t crawl deep. Things like slow TTFB, misconfigured caching, inconsistent canonical tags, or messy redirect patterns can reduce crawl appetite. None of these trigger a red flag in a report; they just make the site a less inviting crawl environment.
5) Publishing velocity is too high for the site’s current trust level
If a new site publishes dozens of posts quickly, Google may discover them (via sitemap or internal links), but not crawl them all right away. This is normal. The fix is not to stop publishing; the fix is to make sure you’re publishing the right types of posts, while building crawl pathways and trust signals that help Google upgrade your site’s priority.
How to diagnose it properly (without guessing)
Before changing anything, you want to confirm what’s actually happening. Here’s a practical workflow that doesn’t rely on superstition.
Step 1: Check a few affected URLs in URL Inspection
Pick 5–10 URLs showing “Discovered – currently not indexed.” Use the URL Inspection tool in Search Console and look for patterns. Are these pages new posts? Are they tag archives? Are they duplicates? Do they share a template? If you see that tag archives or other non-essential pages are heavily represented, you have an index bloat problem. If it’s primarily your best posts, you likely have a trust/prioritisation problem.
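If you have more than a handful of affected URLs, you can batch this check through the Search Console URL Inspection API instead of clicking through the UI. Below is a minimal sketch using google-api-python-client; it assumes a service account that has been added as a user on the property, and the service-account.json path, property URL, and post URLs are all placeholders.

```python
# Minimal sketch: batch-check coverage state via the Search Console URL
# Inspection API. Assumes google-api-python-client and google-auth are
# installed, and that the service account behind "service-account.json"
# (placeholder path) has been granted access to the property in Search Console.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE_URL = "https://www.example.com/"              # placeholder property
URLS_TO_CHECK = [
    "https://www.example.com/blog/post-one/",      # placeholder URLs
    "https://www.example.com/blog/post-two/",
]

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

for url in URLS_TO_CHECK:
    response = service.urlInspection().index().inspect(
        body={"inspectionUrl": url, "siteUrl": SITE_URL}
    ).execute()
    index_status = response.get("inspectionResult", {}).get("indexStatusResult", {})
    # coverageState should carry the same wording as the report,
    # e.g. "Discovered - currently not indexed"
    print(f"{url} -> {index_status.get('coverageState')}")
```

Group the output by status and by template, and the patterns described above usually jump out quickly.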
Step 2: Confirm the pages are actually reachable through internal links
Open the post and ask a simple question: can a user reach this page by navigating from the homepage without using search, and without clicking through a stack of archives? If the answer is “not really,” Google is probably treating it the same way. Your homepage and key category pages are crawl hotspots. You want your most important posts linked there, at least early on.
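If you’d rather not click through the site by hand, a small crawl answers the same question. The sketch below (Python with the requests package; the homepage and target URLs are placeholders) does a breadth-first walk from the homepage and reports whether the post is reachable within a few clicks.

```python
# Minimal sketch: breadth-first crawl from the homepage to check whether a
# post is reachable within a few clicks. Assumes the "requests" package;
# HOMEPAGE and TARGET are placeholders. Only same-host links are followed.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

import requests

HOMEPAGE = "https://www.example.com/"                # placeholder
TARGET = "https://www.example.com/blog/new-post/"    # placeholder
MAX_DEPTH = 3                                        # clicks from the homepage


class LinkParser(HTMLParser):
    """Collects href values from anchor tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href")
        if tag == "a" and href:
            self.links.append(href)


def links_on(url):
    """Return the set of absolute, fragment-free links found on a page."""
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        return set()
    parser = LinkParser()
    parser.feed(html)
    return {urljoin(url, href).split("#")[0] for href in parser.links}


host = urlparse(HOMEPAGE).netloc
seen = {HOMEPAGE}
queue = deque([(HOMEPAGE, 0)])
found = False

while queue and not found:
    page, depth = queue.popleft()
    if depth >= MAX_DEPTH:
        continue
    for link in links_on(page):
        if urlparse(link).netloc != host or link in seen:
            continue
        if link.rstrip("/") == TARGET.rstrip("/"):
            print(f"Reachable in {depth + 1} click(s), linked from {page}")
            found = True
            break
        seen.add(link)
        queue.append((link, depth + 1))

if not found:
    print(f"Not reachable within {MAX_DEPTH} clicks of the homepage")
```

Keep MAX_DEPTH small; on a new site, anything your best posts need more than three clicks to reach is worth fixing anyway.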
Step 3: Inspect sitemap and canonical consistency
Make sure your sitemap includes your real canonical URLs. If you have both trailing slash and non-trailing slash versions, http/https inconsistencies, or mixed “www” and non-“www” signals, Google can waste time discovering variants. Also check that your canonical tag on each post points to itself (unless you have a deliberate canonical strategy).
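A quick script makes this check less tedious. The sketch below pulls every URL from the sitemap and flags pages whose canonical tag is missing or points somewhere else, which is where trailing-slash, protocol, and www mismatches tend to surface. It assumes the requests package and a single flat sitemap rather than a sitemap index; the sitemap URL is a placeholder.

```python
# Minimal sketch: check that every sitemap URL has a self-referencing
# canonical tag. Assumes the "requests" package and a single flat sitemap
# (not a sitemap index); SITEMAP_URL is a placeholder.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

import requests

SITEMAP_URL = "https://www.example.com/sitemap.xml"   # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}


class CanonicalParser(HTMLParser):
    """Grabs the href of the first <link rel="canonical"> tag."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = self.canonical or a.get("href")


root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)
urls = [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

for url in urls:
    parser = CanonicalParser()
    parser.feed(requests.get(url, timeout=10).text)
    if parser.canonical is None:
        print(f"MISSING canonical: {url}")
    elif parser.canonical.rstrip("/") != url.rstrip("/"):
        # Trailing-slash, http/https, and www mismatches show up here
        print(f"MISMATCH: sitemap has {url}, canonical says {parser.canonical}")
```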
Step 4: Look at server performance and response codes
You don’t need full log analysis to catch common issues. Use a simple crawl tool or even manual checks to confirm that the pages return a clean 200 status, load quickly, and don’t trigger odd behaviours like intermittent 5xx errors or slow first byte. If your site is slow or unstable, Google will crawl less and delay crawling new URLs.
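Here is a minimal spot-check along those lines, using the requests package; the URLs are placeholders. requests reports the time until response headers arrive, which is a reasonable proxy for TTFB, and allow_redirects=False surfaces redirect hops instead of silently following them.

```python
# Minimal sketch: spot-check status codes and approximate TTFB for a few
# URLs. Assumes the "requests" package; the URLs are placeholders.
import requests

URLS = [
    "https://www.example.com/",                    # placeholder URLs
    "https://www.example.com/blog/new-post/",
]

for url in URLS:
    try:
        # .elapsed measures request start until response headers are parsed,
        # which is close enough to TTFB for a quick sanity check.
        resp = requests.get(url, timeout=15, stream=True, allow_redirects=False)
        ttfb_ms = resp.elapsed.total_seconds() * 1000
        print(f"{resp.status_code}  {ttfb_ms:6.0f} ms  {url}")
        resp.close()
    except requests.RequestException as exc:
        print(f"ERROR  {url}: {exc}")
```

Run it a few times at different hours; intermittent 5xx responses and wildly varying first-byte times are exactly the kind of instability that dampens crawl appetite.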
Step 5: Confirm you’re not accidentally telling Google to back off
This sounds obvious, but it’s surprisingly common. Confirm you’re not using “noindex” on templates that matter, that your robots.txt isn’t blocking essential paths, and that you’re not limiting crawling through aggressive security rules or bot blocks. Many “lightweight” setups accidentally block Googlebot through WAF settings or plugin configurations.
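The sketch below covers the basic checks: whether robots.txt allows Googlebot, and whether a page carries noindex in a robots meta tag or an X-Robots-Tag header (site and URLs are placeholders). Note that it fetches with a normal Python user agent, so a WAF rule that blocks only Googlebot’s user-agent string or IP range won’t show up here; for that, use the live test in URL Inspection.

```python
# Minimal sketch: check that robots.txt allows Googlebot and that key pages
# carry no "noindex" in the robots meta tag or X-Robots-Tag header.
# Assumes the "requests" package; SITE and URLS are placeholders.
import urllib.robotparser
from html.parser import HTMLParser

import requests

SITE = "https://www.example.com"                     # placeholder
URLS = ["https://www.example.com/blog/new-post/"]    # placeholder


class RobotsMetaParser(HTMLParser):
    """Flags a noindex directive in a robots or googlebot meta tag."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() in ("robots", "googlebot"):
            self.noindex = self.noindex or "noindex" in a.get("content", "").lower()


rp = urllib.robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()

for url in URLS:
    resp = requests.get(url, timeout=10)
    parser = RobotsMetaParser()
    parser.feed(resp.text)
    print(url)
    print("  robots.txt allows Googlebot:", rp.can_fetch("Googlebot", url))
    print("  noindex meta tag:           ", parser.noindex)
    print("  noindex X-Robots-Tag header:", "noindex" in resp.headers.get("X-Robots-Tag", "").lower())
```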
What to do to move URLs from “Discovered” to crawled and indexed
Now the execution part. The goal is not to “force” Google. The goal is to make your best URLs impossible to ignore, while reducing noise that competes for crawl attention.
1) Reduce index bloat immediately (high impact on new sites)
If you’re running WordPress with a news-style theme, you can accidentally generate a lot of low-value archives. On a fresh site, you should be conservative. The fastest improvement I see on new content hubs comes from tightening what Google is encouraged to crawl and index.
In practical terms, that means reviewing tag archives, author archives, date archives, and internal search results pages. If those pages are thin or duplicative, they should not be indexed. You can keep them for users if you want, but don’t present them as “valuable search destinations” until they have real depth. This single change often improves crawl efficiency because Google stops discovering thousands of “almost the same” pages and starts focusing on actual articles.
If you’ve built out a strong tag set, that’s great, but don’t let tags become a second “content site” made of thin archive pages. Use tags as organisational metadata first, and only let tag archives become indexable when they are genuinely useful and have enough posts.
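One quick way to see how much bloat your sitemap is feeding Google is to classify its URLs by path pattern. The sketch below assumes default WordPress-style paths (/tag/, /author/, /category/, date and pagination patterns), a single flat sitemap rather than a sitemap index, and the requests package; the sitemap URL and patterns are placeholders to adjust for your own permalink structure.

```python
# Minimal sketch: classify sitemap URLs by path pattern to see how much of
# the sitemap is archives versus actual articles. Assumes the "requests"
# package, a single flat sitemap, and default WordPress-style paths;
# SITEMAP_URL and the patterns are placeholders.
import re
import xml.etree.ElementTree as ET
from collections import Counter

import requests

SITEMAP_URL = "https://www.example.com/sitemap.xml"   # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

PATTERNS = [
    ("tag archive", re.compile(r"/tag/")),
    ("author archive", re.compile(r"/author/")),
    ("category archive", re.compile(r"/category/")),
    ("date archive", re.compile(r"/\d{4}/(\d{2}/)?$")),
    ("paginated", re.compile(r"/page/\d+")),
]

root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)
urls = [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

counts = Counter()
for url in urls:
    label = next((name for name, rx in PATTERNS if rx.search(url)), "content page")
    counts[label] += 1

total = len(urls) or 1
for label, count in counts.most_common():
    print(f"{label:17} {count:5}  ({count / total:.0%})")
```

If “content page” isn’t the clear majority on a young site, that is usually the first thing to fix.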
2) Strengthen internal linking like you mean it
For a new site, internal linking is your strongest lever because it costs nothing and it directly influences prioritisation. Your homepage is the most important crawler entry point. Make sure the homepage links to your newest and best content in a way that is obvious, not hidden behind sliders or lazy-loaded widgets.
Then, on each article, add a “Related articles” section and manually link to 2–4 relevant posts. Don’t wait until you have hundreds of posts. Start now, even if it’s only a few. Google follows internal links, but it also learns what you consider important based on where you place links and how frequently you connect topics.
A simple rule that works: every new post should link to one “parent” topic in its category and two “siblings” that are directly related. Over time, this forms clusters naturally, without needing dedicated pillar pages.
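If you want to audit this at scale, a rough heuristic is to count each post’s internal links after subtracting the links that also appear on the homepage, since those are usually navigation. The sketch below does exactly that; the homepage, post URLs, and the threshold of three links (the parent-plus-two-siblings rule) are placeholders, and it assumes the requests package.

```python
# Minimal sketch: rough audit of in-content internal links per post. Links
# that also appear on the homepage are treated as navigation and subtracted,
# so what remains approximates editorial links inside the article body.
# Assumes the "requests" package; HOMEPAGE, POSTS, and MIN_LINKS are placeholders.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

import requests

HOMEPAGE = "https://www.example.com/"             # placeholder
POSTS = [
    "https://www.example.com/blog/post-one/",     # placeholder URLs
    "https://www.example.com/blog/post-two/",
]
MIN_LINKS = 3   # one "parent" link plus two "siblings", per the rule above


class LinkParser(HTMLParser):
    """Collects href values from anchor tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href")
        if tag == "a" and href:
            self.links.append(href)


def internal_links(url):
    """Return the set of same-host, fragment-free links found on a page."""
    parser = LinkParser()
    parser.feed(requests.get(url, timeout=10).text)
    host = urlparse(url).netloc
    return {
        urljoin(url, href).split("#")[0]
        for href in parser.links
        if urlparse(urljoin(url, href)).netloc == host
    }


nav_links = internal_links(HOMEPAGE)

for post in POSTS:
    editorial = internal_links(post) - nav_links - {post, post.rstrip("/")}
    flag = "OK " if len(editorial) >= MIN_LINKS else "LOW"
    print(f"{flag}  {len(editorial):2} in-content internal links  {post}")
```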
3) Publish content that earns crawl priority, not content that just exists
If your first 10–20 posts are generic, Google will treat the entire site as generic. For a new SEO insights site, you want problem-led content that demonstrates expertise immediately. Posts that perform best early are usually specific troubleshooting guides, practical playbooks, and content that answers “what do I do now?” questions.
A good first-month publishing pattern is to focus on Search Console statuses, technical issues, and specific platform problems (Next.js rendering pitfalls, Shopify duplicate-content patterns, Core Web Vitals root causes). These topics tend to get discovered and crawled faster because they match clear intent and have ongoing demand.
4) Improve technical crawl friendliness (without turning it into a performance project)
You don’t need to spend a month chasing perfect Lighthouse scores, but you do need a clean crawl environment. Make sure pages load consistently, the server responds quickly, canonical tags are stable, and you are not producing confusing URL variants. If your theme is lightweight, you’re already ahead. Now keep it that way by being disciplined with plugins and third-party scripts.
5) Use sitemap submission correctly (and don’t over-trust it)
Submitting a sitemap is necessary, but it’s not magic. Sitemaps are a discovery tool and a hint. If Google doesn’t see value or doesn’t have enough confidence in the site, it will still delay crawling. Keep your sitemap clean. Include only canonical, index-worthy URLs. Avoid stuffing it with tag archives, thin categories, or anything you don’t actually want in search.
6) Earn a few real external signals (small, legitimate, and enough)
You do not need aggressive link building at this stage. What helps most is a handful of legitimate mentions or links that tell Google: “This site is real, and people reference it.” If you have a LinkedIn audience or professional network, use it. Share your best troubleshooting posts. If you have a main site with service pages (in our case, ramfaseo.se), link to your insights site prominently and contextually. This is one of the cleanest ways to pass trust signals early.
7) Be patient, but not passive
On new sites, some URLs sit in “discovered” status for days or weeks even when everything is fine. What you should look for is momentum: are more URLs moving from discovered → crawled → indexed over time? If yes, you’re building trust. If nothing changes after a few weeks, that’s when you treat it as a priority problem and tighten quality, structure, and noise reduction more aggressively.
What not to do (common mistakes that waste time)
A lot of people panic and start doing things that either don’t help or actively hurt.
One common mistake is blasting “Request indexing” for dozens of URLs. It can be useful for a few important pages, but it’s not a strategy. Overusing it doesn’t make Google crawl your entire site. Another mistake is mass-producing more posts to “give Google more.” If the first batch isn’t earning crawl priority, adding more of the same just increases the queue of low-priority URLs.
A third mistake is making major URL structure changes too early. If your permalinks are clean and stable, don’t change them because of one Search Console status. Fix the fundamentals: internal linking, index bloat, and content value. Finally, don’t confuse “discovered” with a manual action or algorithmic penalty. It’s almost always a prioritisation signal, not a punishment.
A simple, practical checklist for new sites
If you want one quick way to approach this without overthinking, use this checklist for the next 14 days:
1) Make sure only high-value URLs are indexable, especially early on.
2) Strengthen homepage and category links to your best posts.
3) Add related-article links inside every post.
4) Keep your sitemap clean and canonical-only.
5) Publish problem-led content that matches clear search intent.
6) Share a few posts publicly and earn real engagement and mentions.
7) Watch momentum in Search Console instead of obsessing over a single URL.
Do this consistently, and “Discovered – currently not indexed” will stop feeling like a wall and start behaving like what it really is: a temporary queue stage.
Want a fast win?
If you’re seeing this status on many important posts, the fastest diagnostic is to pick your top 10 articles, ensure they’re linked from the homepage, ensure tag/archive bloat is controlled, and ensure each post links to 2–4 related posts. That combination often shifts crawl behaviour within days on fresh sites because it clarifies what matters and reduces noise.
