“What is it that you’re trying to achieve with the robots.txt file in your case?
First off, one thing that is likely wrong with your robots.txt file: crawlers obey only the most specific matching user-agent section, not all of them combined. So for Googlebot, that would be only the section for Googlebot, not the section for *.
The * section is very explicit in your case, so you’d probably want to duplicate those rules in the Googlebot section. Past that, why is there a section for Googlebot at all? Are these URL patterns that you perhaps want to disallow for all search engines?
The “*” section is likely much more complex than you really need. When possible, I’d really recommend keeping the robots.txt file as simple as possible, so that you don’t have trouble with maintenance and so that it’s really only disallowing resources that are problematic when crawled (or when their content is indexed).”
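You can see this group-selection behavior with Python’s standard-library `urllib.robotparser`. The robots.txt content below is a made-up sketch (the user-agent names and paths are illustrative, not taken from any real site): because Googlebot matches its own group, the Disallow rules in the `*` group do not apply to it.

```python
from urllib import robotparser

# Hypothetical robots.txt: the Googlebot group repeats only one of the
# rules from the * group, so the two groups disagree about /private/.
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /search-results/

User-agent: *
Disallow: /search-results/
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Googlebot obeys only its own group, so /private/ is NOT blocked for it:
print(rp.can_fetch("Googlebot", "https://example.com/private/page"))

# Any other crawler falls back to the * group, where /private/ is blocked:
print(rp.can_fetch("SomeOtherBot", "https://example.com/private/page"))
```

This is exactly the maintenance trap described above: a rule added to the `*` group silently stops applying to any crawler that has its own named group, which is why duplicating the rules (or dropping the extra group) matters.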
How to check your robots.txt file
Among many other things, the website audit tool in SEOprofiler also checks the robots.txt file of your website.
SEOprofiler also offers a free robots.txt creator.
The robots.txt creator can be accessed in the free demo version of SEOprofiler. If you haven’t done it yet, try SEOprofiler now. You can get the full version for just $1 (this offer is available once per customer).