Google’s Gary Illyes announced on the company’s official blog that Google will drop support for the crawl-delay, noindex and nofollow rules in robots.txt files.
“In particular, we focused on rules unsupported by the internet draft, such as crawl-delay, nofollow, and noindex. Since these rules were never documented by Google, naturally, their usage in relation to Googlebot is very low.
Digging further, we saw their usage was contradicted by other rules in all but 0.001% of all robots.txt files on the internet. These mistakes hurt websites’ presence in Google’s search results in ways we don’t think webmasters intended.
In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019.”
You can continue to use the nofollow and noindex values in the meta robots tag on web pages. If your website delivers a 404 or 410 HTTP status code for a web page, Google will also remove that URL from its index. You can find Google’s robots.txt specifications here.
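If you want to verify these page-level signals yourself, you can fetch a URL and inspect both the X-Robots-Tag response header and the meta robots tag in the HTML. The following is a minimal sketch, assuming the third-party Python requests library; the URL is a placeholder.

```python
import re
import requests

# Placeholder URL; replace with the page you want to check.
url = "https://example.com/some-page"
response = requests.get(url, timeout=10)

# noindex can be sent as an X-Robots-Tag HTTP response header...
header_value = response.headers.get("X-Robots-Tag", "")
noindex_in_header = "noindex" in header_value.lower()

# ...or inside a <meta name="robots" content="noindex"> tag in the HTML.
meta_pattern = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\'][^"\']*noindex',
    re.IGNORECASE,
)
noindex_in_meta = bool(meta_pattern.search(response.text))

print(f"HTTP status code: {response.status_code}")
print(f"noindex via header: {noindex_in_header}")
print(f"noindex via meta tag: {noindex_in_meta}")
```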
Bing does the same
Bing’s Frédéric Dubut confirmed on Twitter that Bing also does not support these robots.txt directives:
The undocumented noindex directive never worked for @Bing so this will align behavior across the two engines. NOINDEX meta tag or HTTP header, 404/410 return codes are all fine ways to remove your content from @Bing. #SEO #TechnicalSEO https://t.co/ukKhfRPWzO
— Frédéric Dubut (@CoperniX) July 2, 2019
How to check the HTTP status codes of your web pages
Even if your web pages display correctly in a web browser, they might send the wrong HTTP status codes to search engine robots. Use the website audit tool to check the status codes of your pages and to find other issues on your site that prevent search engines from ranking them.
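If you only want to spot-check a handful of URLs, a short script can report the raw status codes a crawler receives. This is a minimal sketch, again assuming the third-party Python requests library; the URLs are placeholders, and a full site audit covers far more than this.

```python
import requests

# Placeholder URLs; replace with pages from your own site.
urls = [
    "https://example.com/",
    "https://example.com/old-page",
]

for url in urls:
    # allow_redirects=False reports the first status code a crawler sees
    # (e.g. 301 or 302) instead of silently following the redirect chain.
    response = requests.get(url, allow_redirects=False, timeout=10)
    print(f"{response.status_code}  {url}")
```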