Google has announced in a blog post that it wants to formalize the Robots Exclusion Protocol specification.
“For 25 years, the Robots Exclusion Protocol (REP) has been one of the most basic and critical components of the web. It allows website owners to exclude automated clients, for example web crawlers, from accessing their sites – either partially or completely. […]
The proposed REP draft reflects over 20 years of real world experience of relying on robots.txt rules, used both by Googlebot and other major crawlers, as well as about half a billion websites that rely on REP.”
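To illustrate the kind of rules the draft covers, here is a minimal, hypothetical robots.txt file that excludes all crawlers from one directory while leaving the rest of the site crawlable (the `/private/` path is just an example):

```
# Hypothetical robots.txt: block all crawlers from /private/, allow everything else
User-agent: *
Disallow: /private/
```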
How to check the robots.txt file of your website
Among many other checks, the website audit tool in SEOprofiler analyzes the robots.txt file of your website. If there is an issue with the robots.txt file, the website audit tool will reveal it:
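Independent of any audit tool, you can also test robots.txt rules programmatically with Python's standard-library `urllib.robotparser`. The sketch below parses an inline example rule set (the rules and the `example.com` URLs are placeholders, not taken from any real site) and asks whether a given user agent may fetch a path:

```python
from urllib import robotparser

# A minimal example robots.txt, supplied inline so no network access is needed
rules = """
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Crawlers matching "*" may fetch public pages but not anything under /private/
print(rp.can_fetch("Googlebot", "https://example.com/"))
print(rp.can_fetch("Googlebot", "https://example.com/private/page"))
```

For a live site you would instead call `rp.set_url("https://example.com/robots.txt")` followed by `rp.read()` to fetch and parse the real file.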