The robots.txt file is a plain text file, placed at the root of your website or blog, that tells search engine crawlers which parts of the site they may and may not crawl. If there is content you do not want search engines to index, you can list it in robots.txt, and well-behaved crawlers will skip those pages. Keep in mind that robots.txt is a request, not an access control: it keeps compliant crawlers away from those URLs, but it does not actually protect the files. The robots.txt file itself is publicly readable, and crawlers must fetch it in order to follow its rules.
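As a minimal sketch, a robots.txt served at the site root (the domain and paths below are placeholders, not real rules) might look like this:

```text
# Rules for all crawlers
User-agent: *
Disallow: /private/     # ask crawlers to skip this directory
Allow: /

# Rules for one specific crawler
User-agent: Googlebot
Disallow: /drafts/
```

Each `User-agent` block targets a crawler by name, and `Disallow`/`Allow` lines describe which URL paths it should or should not crawl.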
Each search engine has its own crawler (or bot) with its own name: Google's is Googlebot, Yahoo's is Slurp, Bing's is Bingbot, and so on.
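You can check how a given crawler would interpret a robots.txt file using Python's standard-library `urllib.robotparser`. The rules and URLs below are illustrative assumptions, not taken from any real site:

```python
from urllib import robotparser

# A hypothetical robots.txt, parsed from a string instead of fetched over HTTP
rules = """User-agent: *
Disallow: /private/
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Ask whether a named crawler may fetch a given URL under these rules
print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/index.html"))         # True
```

In practice you would call `rp.set_url("https://example.com/robots.txt")` and `rp.read()` to fetch a live file; parsing a string as shown here just makes the example self-contained.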