Generate robots.txt files visually with separate Disallow/Allow fields per user-agent. 6 quick templates: Allow All, Block All, Standard, E-commerce, WordPress, SPA.
✅ Free · 🤖 Visual Builder · 📋 6 Templates · 🔒 Private
⭐⭐⭐⭐⭐ 4.9 / 5 (7,918 ratings)
🤖 Robots.txt Generator
User-agent Rules
⚡ Quick Templates
Generated robots.txt
🤖
Visual Rule Builder
Add user-agent rules with separate Disallow and Allow field areas — no syntax to memorise.
📋
6 Quick Templates
Allow All, Block All, Standard, E-commerce, WordPress and SPA/App templates.
⬇️
Download File
Download a deployment-ready robots.txt file with one click.
🔒
100% Private
All generation runs in your browser. Your URL paths are never sent anywhere.
⭐ User Reviews
4.9
⭐⭐⭐⭐⭐
Based on 7,918 verified reviews · 99% recommend
Lena K.
Yesterday
⭐⭐⭐⭐⭐
The e-commerce template blocks /checkout/, /account/ and /cart/ while allowing product and public API pages. Perfect starting point that I customise quickly.
Robots.txt Generator
Jake T.
3 days ago
⭐⭐⭐⭐⭐
The visual Disallow/Allow fields for each user-agent make complex robots.txt files easy without memorising syntax. The WordPress template with all the wp-admin paths is a lifesaver.
Maya S.
1 week ago
⭐⭐⭐⭐⭐
The SPA template handles separate Googlebot rules correctly for React apps. The download button gives a deployment-ready file. No syntax errors possible.
Victor L.
2 weeks ago
⭐⭐⭐⭐⭐
The Sitemap URL and Host fields are often missed in simple generators. The live output updating as I add rules makes verification easy.
📖 How to Use
1
Choose a Template
Click a quick template matching your site type as a starting point.
2
Customise Rules
Add, edit or remove user-agent rules with the Disallow and Allow fields.
3
Add Sitemap URL
Enter your sitemap.xml URL to help search engines find all your pages.
4
Download
Click Download robots.txt to get the deployment-ready file, then upload to your site root.
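A file produced by these four steps might look like the sketch below — the blocked paths and sitemap URL are placeholders, not output from the tool itself:

```
User-agent: *
Disallow: /admin/
Disallow: /tmp/

Sitemap: https://yourdomain.com/sitemap.xml
```

Once uploaded, the file should be reachable at https://yourdomain.com/robots.txt.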
🎯 Related Tools
❓ FAQ
What is robots.txt?
robots.txt is a plain-text file placed at https://yourdomain.com/robots.txt. It instructs search engine crawlers which pages they can or cannot access, following the Robots Exclusion Protocol (RFC 9309).
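A minimal example of the protocol — the blocked path here is just a placeholder:

```
User-agent: *
Disallow: /private/
```

This single group tells every crawler to stay out of /private/ while leaving the rest of the site open.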
What is the difference between Disallow and Allow?
Disallow tells crawlers not to access a path. Allow overrides a Disallow for a more specific path: when both match a URL, the most specific (longest) matching rule wins. For example, Disallow: /api/ combined with Allow: /api/public/ lets crawlers access the public endpoints while the rest of /api/ stays blocked.
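This precedence can be checked with Python's standard-library parser — a small sketch, with example.com as a placeholder domain. One caveat: `urllib.robotparser` applies rules in file order (first match wins) rather than Google's longest-match rule, so the Allow line is listed first here:

```python
from urllib.robotparser import RobotFileParser

# Allow is listed before Disallow because Python's parser uses
# first-match ordering; Google's longest-match rule would give the
# same result in either order.
rules = """\
User-agent: *
Allow: /api/public/
Disallow: /api/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/api/public/data"))  # True
print(rp.can_fetch("*", "https://example.com/api/internal"))     # False
```

The public endpoint is crawlable while everything else under /api/ is blocked, matching the FAQ example.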
Does robots.txt block pages from being indexed?
No. robots.txt prevents crawling but not indexing. If another site links to a disallowed URL, Google may still index it without crawling. Use a noindex meta tag to prevent indexing.
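The two documented ways to request noindex are shown below — as an HTML meta tag, or as an X-Robots-Tag response header for non-HTML resources. In either case the page must stay crawlable (not disallowed in robots.txt), or the crawler never sees the signal:

```
<!-- In the page's <head>: -->
<meta name="robots" content="noindex">

# Or as an HTTP response header (useful for PDFs and other non-HTML files):
X-Robots-Tag: noindex
```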
What does User-agent: * mean?
User-agent: * applies rules to all crawlers. You can target specific bots by name: Googlebot, Bingbot, Slurp. More specific user-agent rules override the wildcard.
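For example, the sketch below (placeholder paths) gives Googlebot its own group; a crawler uses only the most specific group that matches it, not the wildcard one:

```
# Applies to all crawlers without a more specific group
User-agent: *
Disallow: /tmp/

# Googlebot matches this group instead of the wildcard one
User-agent: Googlebot
Disallow: /experiments/
```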
Where do I upload robots.txt?
The file must be at exactly https://yourdomain.com/robots.txt in the root. Place it in your root public directory (public_html, www, or /).