On Twitter, Google’s John Mueller said that you have work to do on your website if pages that are blocked by your robots.txt file outrank the regular pages of your website in Google’s search results.
What is robots.txt?
A robots.txt file is a simple text file in the root directory of a website (www.example.com/robots.txt). The robots.txt file enables you to tell crawlers which areas of your website should not be crawled or processed. Not all bots comply with the standard.
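For example, a minimal robots.txt file that blocks all crawlers from one directory could look like this (the /private/ directory is just a placeholder):

User-agent: *
Disallow: /private/

The User-agent line specifies which crawlers the rule applies to (* means all of them), and each Disallow line lists a path that compliant crawlers should not request.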
If you block a page with robots.txt, Google cannot read the content of the page, but it may still index the URL, for example when other pages link to it. In that case, the URL can appear in search results without a description.
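If you want to verify whether a particular URL is blocked by your robots.txt file, a quick sketch with Python's standard urllib.robotparser module can do it (the example URLs are placeholders):

from urllib import robotparser

# Download and parse the site's robots.txt file
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

# can_fetch() returns False if the rules disallow the URL
# for the given user agent ("*" = any crawler)
print(rp.can_fetch("*", "https://www.example.com/private/page.html"))

Note that this only tells you whether crawling is disallowed; it does not tell you whether Google has indexed the URL.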
What’s the problem with robots.txt?
On Twitter, a webmaster complained that pages that he had blocked with his robots.txt file still showed up in Google’s search results:
I see this all the time. Content we purposely disallow in robots.txt because users from search likely won’t find it useful shows up in a SERP with the terrible “We can’t show a description because it’s blocked by robots.txt” snippet.
— Elmer Boutin (@rehor) March 28, 2019
Google’s John Mueller replied that this means the website owner has some work to do on their web pages:
If a robotted page from your site ranks instead of a page with content on it, for queries that users use to find your site, then I think you have work to do :-).
— John (@JohnMu) March 28, 2019
Optimize your web pages
If blocked pages rank higher on Google than your regular pages, you have to optimize the content of your pages and improve the links that point to them. The tools in SEOprofiler help you with that: