robots.txt is a file that tells search engines what their spiders/web crawlers can and cannot crawl. If it's giving you that error, it's probably saying the page was specifically marked as "do not crawl". It's not really an error so much as an explanation of why the page is unavailable (I think, I never checked out that tool). Although, I'm not sure why WIS would exclude the forums like that.

On a random side note, Microsoft is notorious in that space for their crawler being too ****** to follow the robots.txt rules. Not really going anywhere with that, just hard for me to avoid the opportunity to give Microsoft a little kick when the chance comes along ;) But maybe there's a wayback-style archive based on the Microsoft search engines, which might include that stuff, since Microsoft was too stupid to follow basic instructions?
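For reference, a rule that blocks crawlers from a forum section usually looks something like this (the `/forums/` path here is just a made-up example, not necessarily what WIS actually uses):

```
# robots.txt at the site root
# "*" applies the rule to all crawlers
User-agent: *
# tell crawlers not to fetch anything under this path
Disallow: /forums/
```

Any well-behaved crawler that sees this is supposed to skip everything under that path, which is why an archive tool would report the page as excluded rather than just missing.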