Q. Does robots.txt prevent a page from being indexed by search engines?
Solution:
robots.txt can stop a crawler from visiting a page, but if other sites link to it, the URL can still be indexed (though without content). To block indexing itself, you need a noindex directive or password protection.
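For illustration, here is a minimal sketch of the two mechanisms (the /private/ path is a hypothetical example):

```
# robots.txt — blocks crawling of /private/, but a linked URL can still be indexed
User-agent: *
Disallow: /private/

# To block indexing, serve a noindex directive instead, either in the page's HTML head:
<meta name="robots" content="noindex">

# or as an HTTP response header:
X-Robots-Tag: noindex
```

Note that a crawler must be able to fetch the page to see the noindex directive, so the same URL should not also be disallowed in robots.txt.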