Googlebot travels around the web by following links. A complex algorithm determines which webpages Googlebot will crawl, based on data from previous crawls. Information about new, altered or dead links is passed on to the Google index.
How often a website is crawled, and how many of its pages are crawled, differs from site to site. Good SEO practice will improve your chances of having more pages crawled more often.
Yes you can! There are a few options, but using robots.txt is the most common method – and it's also the method recommended by Google. The robots.txt is a file which tells Googlebot exactly how you would like it to crawl your site, including pages you would not like to be crawled. Here's some further information about how to implement a robots.txt file on your site.
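As an illustration, a minimal robots.txt might look like the sketch below. The directory names are placeholders – substitute the paths on your own site you'd like to keep out of the crawl:

```
# Rules for all crawlers, including Googlebot
User-agent: *

# Block crawling of these (hypothetical) directories
Disallow: /admin/
Disallow: /private/

# Everything not disallowed above may be crawled
```

The file must live at the root of your domain (e.g. http://www.example.com/robots.txt) for Googlebot to find it.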
Using the rel="nofollow" attribute on any links you would not like Googlebot to follow is another option, and may help prevent pages you link to from being indexed. Simply add the attribute to your HTML links:
<a href="http://www.dontfollowme.com" rel="nofollow">Please don't follow me</a>
The "nofollow" attribute can also be added at the meta level in the head of any document:
<meta name="robots" content="nofollow" />
This directive stops Googlebot from passing authority or anchor-text relevancy signals through any of the links on the page it is applied to.
Here's a fantastic article from Kissmetrics on how to optimise your site for Googlebot.