Robots.txt is a file that contains instructions on how to crawl a website. It implements the Robots Exclusion Protocol, a standard that sites use to tell bots which parts of the website should be crawled and indexed.
You can also specify which areas you don't want these crawlers to process, such as sections with duplicate content or pages that are still under development. Keep in mind that malicious bots, such as malware scanners and email harvesters, don't follow this standard; they probe your site for security weaknesses, and there is a good chance they will start examining it from the very areas you asked not to be indexed. A minimal example of the file is shown below.
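As a rough illustration, a robots.txt file placed at the root of a site might look like the sketch below. The directory names are only placeholders for the kinds of areas you might want to keep out of search results, not recommendations for any particular site:

    User-agent: *
    Disallow: /under-development/
    Disallow: /duplicate-content/

    Sitemap: https://www.example.com/sitemap.xml

The User-agent line names which crawlers the rules apply to (the asterisk means all of them), each Disallow line lists a path that well-behaved bots should skip, and the optional Sitemap line points crawlers to the pages you do want discovered.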