Google's John Mueller addressed a question about why Google indexes pages that are disallowed from crawling by robots.txt, and why it's safe to ignore the related Search Console reports about those crawls.

Bot Traffic To Query Parameter URLs

The person asking the question documented that bots were creating links to non-existent query parameter URLs (?q=xyz) pointing to pages with noindex meta tags that are also blocked in robots.txt. What prompted the question is that Google crawls the links to those pages, gets blocked by robots.txt (without ever seeing the noindex robots meta tag), and the URLs then show up in Google Search Console as "Indexed, though blocked by robots.txt."

The person asked the following question:

"But here's the big question: why would Google index pages when they can't even see the content? What's the advantage in that?"

Google's John Mueller confirmed that if Google can't crawl a page, it can't see the noindex meta tag. He also made an interesting point about the site: search operator, advising to ignore those results because "average" users won't see them.

He wrote:

"Yes, you're right: if we can't crawl the page, we can't see the noindex. That said, if we can't crawl the pages, then there's not a lot for us to index. So while you might see some of those pages with a targeted site:-query, the average user won't see them, so I wouldn't fuss over it. Noindex is also fine (without robots.txt disallow), it just means the URLs will end up being crawled (and end up in the Search Console report for crawled/not indexed -- neither of these statuses cause issues to the rest of the site). The important part is that you don't make them crawlable + indexable."

Takeaways:

1. Mueller's answer confirms the limitations of using the site: advanced search operator for diagnostic purposes. One of those reasons is that it's not connected to the regular search index; it's a separate thing altogether.

Google's John Mueller commented on the site: search operator in 2021:

"The short answer is that a site: query is not meant to be complete, nor used for diagnostics purposes.

A site query is a specific kind of search that limits the results to a certain website. It's basically just the word site, a colon, and then the website's domain.

This query limits the results to a specific website. It isn't meant to be a comprehensive collection of all the pages from that website."

2. A noindex tag, without a robots.txt disallow, is fine for situations like this, where a bot is linking to non-existent pages that are getting discovered by Googlebot.

3. URLs with the noindex tag will generate a "crawled/not indexed" entry in Search Console, and those entries won't have a negative effect on the rest of the website.

Read the question and answer on LinkedIn:

Why would Google index pages when they can't even see the content?

Featured Image by Shutterstock/Krakenimages.com
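To make the two setups concrete, here is a minimal sketch of the configurations discussed above. The ?q= parameter pattern comes from the question; the exact robots.txt rule and where the noindex is applied are assumptions for illustration, not details from the thread.

A robots.txt disallow, which stops Googlebot from fetching the URLs at all (so it can never see a noindex tag on them):

    User-agent: *
    Disallow: /*?q=

Mueller's alternative, noindex without a robots.txt disallow, applied either as a meta tag in the page's HTML head or as an HTTP response header:

    <meta name="robots" content="noindex">

    X-Robots-Tag: noindex

With the second setup the URLs can be crawled, the noindex is seen, and the pages surface as "crawled/not indexed" in Search Console, which, per Mueller, doesn't cause issues for the rest of the site.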