• @[email protected]
    link
    fedilink
    1101 month ago

    Block? Nope, robots.txt does not block the bots. It’s just a text file that says: “Hey robot X, please do not crawl my website. Thanks :>”

    • ɐɥO · 59 points · 1 month ago

      I disallow a page in my robots.txt and IP-ban everyone who goes there. That’s pretty effective.
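
      The robots.txt side of it is just a disallow rule for a junk path that nothing on the site ever links to (the path here is only an example):

      # robots.txt: nothing legitimate links to this path, so only a bot that
      # reads robots.txt and disregards it will ever find and request it
      User-agent: *
      Disallow: /fdfjsidfjsidojfi43j435345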

        • bountygiver [any] · 18 points · edited · 1 month ago

          humans typically don’t visit [website]/fdfjsidfjsidojfi43j435345 when there’s no button that links to it

          • @[email protected]
            link
            fedilink
            English
            1530 days ago

            I used to do this on one of my sites that was moderately popular in the ’00s. I had a link hidden via JavaScript, so a user couldn’t click it (unless they disabled JavaScript and clicked it), though it was hidden pretty well even for that case.

            Hits on that link were logged, and my script would add the /24 subnet of each offending IP to my firewall. I allowed specific IP ranges for some search engines.

            Anyway, it caught a lot of bots. I really just wanted to stop automated attacks and spambots on the web front.

            I also had a honeypot port that basically did the same thing. If you sent packets to it, your /24 was added to the firewall for a week or so. I think I just used netcat to append to yet another log and wrote a script that added those /24s to iptables.
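
            The port honeypot was roughly this idea; here is a minimal sketch rather than the original script (the port number, log path, and iptables invocation are placeholders, and the week-long expiry is left out):

            # Sketch of the port honeypot: anything that connects gets its /24
            # dropped via iptables and logged. Port, path, and flags are placeholders.
            import socket
            import subprocess

            HONEYPOT_PORT = 2222                      # any otherwise unused port
            BAN_LOG = "/var/log/honeypot-bans.log"    # placeholder path

            def ban_subnet(ip: str) -> None:
                subnet = ".".join(ip.split(".")[:3]) + ".0/24"
                # drop the whole /24 at the top of the INPUT chain (needs root)
                subprocess.run(["iptables", "-I", "INPUT", "-s", subnet, "-j", "DROP"],
                               check=False)
                with open(BAN_LOG, "a") as log:
                    log.write(subnet + "\n")

            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", HONEYPOT_PORT))
            srv.listen(5)
            while True:
                conn, (ip, _port) = srv.accept()
                conn.close()
                ban_subnet(ip)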

            I did it because there was so much noise and spambot traffic in my logs; it was pretty crazy.

            • Mikelius · 10 points · 30 days ago

              This thread has provided genius ideas I somehow never thought of, and I’m totally stealing them for my sites lol.

          • JackbyDev · 14 points · edited · 1 month ago

            I LOVE VISITING FDFJSIDFJSIDOJFI435345 ON HUMAN WEBSITES, IT IS ONE OF MY FAVORITE HUMAN HOBBIES. 🤖👨

      • Onno (VK6FLAB) · 7 points · 29 days ago

        Is the page linked in the site anywhere, or just mentioned in the robots.txt file?

      • @[email protected]
        link
        fedilink
        51 month ago

        Not sure if that is effective at all. Why would a crawler check the robots.txt if it’s programmed to ignore it anyways?

      • Dizzy Devil Ducky · 4 points · 30 days ago

        I doubt it’d be possible in any meaningful way given the lack of server control, but I’m definitely gonna have to look this up and see if anything similar could be done on a Neocities site.

    • Cynicus Rex (OP) · 13 points · 1 month ago

      Unfortunate indeed.

      “Can AI bots ignore my robots.txt file? Well-established companies such as Google and OpenAI typically adhere to robots.txt protocols. But some poorly designed AI bots will ignore your robots.txt.”

      • @[email protected]
        link
        fedilink
        English
        231 month ago

        “Typically adhere”, but they don’t have to follow it.

        “Poorly designed AI bots”

        Is it poor design if ignoring robots.txt is an explicit design choice, made to scrape as much data as possible? I’d argue it’s more that these AI bots are designed to scrape everything regardless of robots.txt. That’s the intention. Asshole design vs. poor design.

    • @majestictechie · 7 points · 1 month ago

      This is why I block them in .htaccess:

      # Bot agent block rule: refuse requests from known bot user agents
      RewriteEngine On
      # [NC] makes the User-Agent match case-insensitive
      RewriteCond %{HTTP_USER_AGENT} (BOTNAME|BOTNAME2|BOTNAME3) [NC]
      # [F] returns 403 Forbidden, [L] stops processing further rules
      RewriteRule (.*) - [F,L]
      
      • @[email protected]
        link
        fedilink
        191 month ago

        This is still relying on the bot being nice enough to tell you that it’s a bot; it could just not.

        • @[email protected]
          link
          fedilink
          English
          71 month ago

          Exactly. The only truly effective way I’ve ever found to block bots is to use a service like Akamai. They have an add-on called Bot Manager that identifies requests as bots in real time. They have a library of over 1,000 known bots and can also identify unknown bots built on different frameworks, bots that impersonate well-known bots like Googlebot, etc. This service is expensive, but effective…

          • poVoq · 5 points · edited · 1 month ago

            I wonder if there is an AI scraper block list I could add to Suricata 🤔

          • @majestictechie · 2 points · 1 month ago

            How does this differentiate between a user and a bot if the User Agent doesn’t say it’s a bot?

            • @[email protected]
              link
              fedilink
              English
              9
              edit-2
              1 month ago

              When any browser, app, etc. makes an HTTP request, the request consists of a request line followed by a series of headers that define the details of the request and what is expected in the response. For example:

              
              GET /home.html HTTP/1.1
              Host: developer.mozilla.org
              User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:50.0) Gecko/20100101 Firefox/50.0
              Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
              Accept-Language: en-US,en;q=0.5
              Accept-Encoding: gzip, deflate, br
              Referer: https://developer.mozilla.org/testpage.html
              Connection: keep-alive
              Upgrade-Insecure-Requests: 1
              Cache-Control: max-age=0
              
              

              The thing is, many of these headers are optional, and there’s no requirement regarding their order. As a result, virtually every web browser, every programming framework, etc. sends different headers and/or orders them differently. So by looking at what headers are included in a request, the order of the headers, and in some cases the values of some headers, it’s possible to tell if a person is using Firefox or Chrome, even if you use a plug-in to spoof your User-Agent to look like you’re using Safari.
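
              As a toy illustration of the header-order idea (nowhere near a real bot-management ruleset; the two profiles below are invented for the example):

              # Toy header-order fingerprinting: the profiles are invented examples,
              # not real browser data, but the principle is the same.
              KNOWN_PROFILES = {
                  "firefox-like": ["host", "user-agent", "accept", "accept-language",
                                   "accept-encoding", "referer", "connection",
                                   "upgrade-insecure-requests", "cache-control"],
                  "requests-like": ["host", "user-agent", "accept-encoding", "accept",
                                    "connection"],
              }

              def classify(header_names):
                  """Guess the client from the order of the header names it sent."""
                  received = [h.lower() for h in header_names]
                  for label, profile in KNOWN_PROFILES.items():
                      present = [h for h in received if h in profile]
                      # the profile's headers must appear in the profile's order,
                      # and most of them should actually be present
                      if (present == sorted(present, key=profile.index)
                              and len(present) >= len(profile) - 2):
                          return label
                  return "unknown"

              # classify(["Host", "User-Agent", "Accept-Encoding", "Accept", "Connection"])
              # returns "requests-like" even if the User-Agent value claims to be Safari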

              Then there’s what is known as TLS fingerprinting, which can also be used to help identify a browser/app/programming language. Since so many sites use/require HTTPS these days it provides another way to collect details of an end user. Before the HTTP request is sent, the client & server have to negotiate the encryption to use. Similar to the HTTP headers, there are a number of optional encryption protocols & ciphers that can be used. Once again, different browsers, etc. will offer different ciphers & in different orders. The TLS fingerprint for Googlebot is likely very different than the one for Firefox, or for the Java HTTP library or the Python requests package, etc.
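
              One public formalization of this is JA3, which concatenates the fields a client offers in its ClientHello and hashes the result; a rough sketch, with placeholder numbers rather than a real handshake:

              # JA3-style fingerprint: join the fields a client offers in its ClientHello
              # and hash them, so identical client stacks produce identical values.
              # The numbers below are placeholders, not captured from a real handshake.
              import hashlib

              def ja3_like(version, ciphers, extensions, curves, point_formats):
                  fields = [
                      str(version),
                      "-".join(str(c) for c in ciphers),
                      "-".join(str(e) for e in extensions),
                      "-".join(str(g) for g in curves),
                      "-".join(str(p) for p in point_formats),
                  ]
                  return hashlib.md5(",".join(fields).encode()).hexdigest()

              # Same cipher suites offered in a different order -> different fingerprint:
              print(ja3_like(771, [4865, 4866, 4867], [0, 10, 11], [29, 23], [0]))
              print(ja3_like(771, [4866, 4865, 4867], [0, 10, 11], [29, 23], [0]))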

              On top of all this, Akamai uses other knowledge & tricks to determine bots vs. humans, not all of which is public knowledge. One thing they know, for example, is the set of IP addresses that Google’s bots operate out of (Google publishes these ranges). So if they see a User-Agent identifying itself as Googlebot, they know it’s fake if it didn’t come from one of Google’s IPs. Akamai also occasionally injects JavaScript, cookies, etc. into a response to see how the client handles it. Lots of bots don’t process JavaScript, or only support a subset of it. Some bots also ignore cookies, and others even modify cookies to try to trick servers.
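
              The Googlebot check is one of the few parts anyone can reproduce themselves: Google documents a reverse-then-forward DNS lookup for verifying its crawlers, roughly:

              # Sketch of the reverse/forward DNS check Google documents for Googlebot:
              # the IP must reverse-resolve to a googlebot.com or google.com host, and
              # that hostname must resolve back to the same IP.
              import socket

              def is_real_googlebot(ip: str) -> bool:
                  try:
                      host = socket.gethostbyaddr(ip)[0]                 # reverse DNS
                  except OSError:
                      return False
                  if not host.endswith((".googlebot.com", ".google.com")):
                      return False
                  try:
                      return ip in socket.gethostbyname_ex(host)[2]      # forward DNS
                  except OSError:
                      return False

              # A scraper claiming "User-Agent: Googlebot" from a random VPS fails this.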

              It’s through a combination of all the above plus other sorts of analysis that Akamai doesn’t publicize that they can identify bot vs human traffic pretty reliably.

              • Daemon Silverstein · 1 point · 1 month ago

                What if a bot/crawler drives a Chromium browser through Puppeteer instead of sending a direct HTTP request and somehow manages to set navigator.webdriver = false so that the browser doesn’t look automated? It’d be tricky to identify that as a bot/crawler.

                • @[email protected]
                  link
                  fedilink
                  English
                  31 month ago

                  Oh there are definitely ways to circumvent many bot protections if you really want to work at it. Like a lot of web protection tools/systems, it’s largely about frustrating the attacker to the point that they give up and move on.

                  Having said that, I know Akamai can detect at least some instances where browsers are controlled as you suggested. My employer (which is an Akamai customer and why I know a bit about all this) uses tools from a company called Saucelabs for some automated testing. My understanding is that our QA teams can create tests that launch Chrome (or other browsers) and script their behavior to log into our website, navigate around, test different functionality, etc. I know that Akamai can recognize this traffic as potentially malicious because we have to configure the Akamai WAF to explicitly allow this traffic to our sites. I believe Akamai classifies this traffic as a “headless” Chrome impersonator bot.