A crawler is an automated program or script that systematically scans the Internet for new content. Other terms for Web crawlers include spiders, Web robots, and bots.
When a crawler visits a web page, it reads the visible text, the hyperlinks, and the other content used on the site. This process is called Web crawling or spidering.
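The following is a minimal sketch of that process in Python, assuming the third-party `requests` and `beautifulsoup4` libraries are installed; the URL is a placeholder, and a real crawler would add politeness controls such as robots.txt checks and rate limiting.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def crawl(url):
    """Fetch one page and return its visible text and its hyperlinks."""
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")

    # The visible text of the page, stripped of HTML markup.
    text = soup.get_text(separator=" ", strip=True)

    # Absolute URLs of every hyperlink on the page; a real crawler
    # would queue these to visit next, which is what makes it "crawl".
    links = [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]

    return text, links

text, links = crawl("https://example.com")  # placeholder URL
print(f"Found {len(links)} links")
```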
Crawlers often have a specific name that identifies them. For example, Google's crawler is called 'Googlebot' and Yahoo's is called 'Slurp'.
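Crawlers announce these names in the User-Agent header of each request, which is how a website can tell bot traffic from human visitors. Below is a hedged sketch of that check; the example User-Agent string follows the documented Googlebot format, but the `identify_crawler` helper and its token table are illustrative, not a standard API.

```python
# Well-known identifying tokens for the crawlers named above.
KNOWN_CRAWLERS = {
    "Googlebot": "Google",
    "Slurp": "Yahoo",
}

def identify_crawler(user_agent: str):
    """Return the crawler's operator if the User-Agent matches a known bot, else None."""
    for token, operator in KNOWN_CRAWLERS.items():
        if token in user_agent:
            return operator
    return None

ua = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
print(identify_crawler(ua))  # prints: Google
```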