Bugzilla actually ships a robots.txt file that instructs search engines like Google not to index a Bugzilla site. Traditionally this was because Google's crawler would completely overwhelm a Bugzilla server when it tried to crawl it. Bugzilla itself has had a LOT of performance improvements since then, so it may very well be safe to let Google crawl the thing now, but we'd probably need a guinea pig to test it. :)
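For anyone unfamiliar with how that works: a robots.txt file at the site root tells well-behaved crawlers what they may fetch. A minimal sketch of a "keep everything out" policy (illustrative only, not necessarily the exact file Bugzilla ships) looks like this:

```
# Apply to all crawlers
User-agent: *
# Disallow crawling of the entire site
Disallow: /
```

Relaxing it to allow indexing while still blocking the expensive CGI endpoints would mean replacing the blanket `Disallow: /` with targeted rules, which is presumably what any "let Google crawl it" experiment would start with.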
Last time we discussed trying to allow Google to crawl bugzilla.mozilla.org, several people voiced concern that security bugs mis-filed without the security flag (or bugs reported as plain crashes that developers only discover are exploitable while trying to fix them) could get picked up and cached by Google before they were locked down. In real life, I don't think that actually happens very often, but that's one concern anyway.
© 2007-2010 Dag Wieërs | Powered by Drupal and RHEL. | No legal statement, haha.