A WordPress robots.txt file is a plain text file located at the root of your site that "tells search engine crawlers which URLs the crawler can access on your site".
Here is an example of a robots.txt file for a WordPress site that allows web crawlers to crawl the site and access all pages and files:
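The conventional way to write such a fully permissive file is an empty Disallow rule, which matches no URLs:

```
User-agent: *
Disallow:
```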
This robots.txt file includes an empty Disallow: directive, which tells web crawlers that they may access all pages and files on the site. The User-agent: * line specifies that the directive applies to all web crawlers.
If you want to prevent web crawlers from accessing certain sections of your WordPress site, you can use the Disallow directive to list those sections. For example, the following robots.txt file disallows crawling of the site's wp-admin and wp-includes directories:
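A typical version of that file adds one Disallow line per directory:

```
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
```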
It’s important to note that the robots.txt file is only a suggestion, not a guarantee that a particular page or file will or will not be crawled. Some web crawlers may choose to ignore the directives in the robots.txt file entirely.
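You can check how a well-behaved crawler interprets your rules with Python's standard-library robots.txt parser. The sketch below parses the wp-admin/wp-includes rules from above and tests two hypothetical URLs on an example.com domain:

```python
from urllib.robotparser import RobotFileParser

# The disallow rules from the example above, as a list of lines
rules = [
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Disallow: /wp-includes/",
]

parser = RobotFileParser()
parser.parse(rules)

# A compliant crawler may fetch an ordinary page...
print(parser.can_fetch("*", "https://example.com/sample-page/"))  # True
# ...but not anything under the disallowed directories.
print(parser.can_fetch("*", "https://example.com/wp-admin/"))     # False
```

This only models crawlers that respect the standard; as noted above, a crawler is free to ignore these rules.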
It’s also worth noting that security plugins such as "All In One WP Security" and similar tools can add further rules and blocks to the robots.txt file.