Reddit CEO Steve Huffman is standing by Reddit’s decision to block companies from scraping the site without an AI agreement.

Last week, 404 Media noticed that search engines other than Google were no longer listing recent Reddit posts in their results. This was because Reddit updated its robots.txt file (the Robots Exclusion Protocol) to block bots from scraping the site. The file reads: “Reddit believes in an open Internet, but not the misuse of public content.” Since the news broke, OpenAI has announced SearchGPT, which can show recent Reddit results.
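
As a rough sketch of the mechanism: a well-behaved crawler fetches robots.txt and checks it before indexing anything, which means the change only binds bots that choose to comply. Something like the following (the user agents and URL are illustrative, and the live file may have changed since this was written):

    # Minimal sketch: how a compliant crawler consults robots.txt before fetching.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.reddit.com/robots.txt")
    rp.read()  # fetch and parse the live file

    # A compliant crawler checks permission per user agent before crawling a URL.
    for agent in ("Googlebot", "Bingbot", "GPTBot"):
        ok = rp.can_fetch(agent, "https://www.reddit.com/r/technology/")
        print(agent, "allowed" if ok else "disallowed")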

The change came a year after Reddit began its efforts to stop free scraping, which Huffman initially framed as an attempt to keep AI companies from making money off Reddit content for free. That effort also led Reddit to start charging for API access (the high pricing drove many third-party Reddit apps to shut down).

In an interview with The Verge today, Huffman stood by the changes that led to Google temporarily being the only search engine able to show recent discussions from Reddit. Reddit and Google signed an AI training deal in February said to be worth $60 million a year. It’s unclear how much Reddit’s OpenAI deal is worth.

Huffman said:

Without these agreements, we don’t have any say or knowledge of how our data is displayed and what it’s used for, which has put us in a position now of blocking folks who haven’t been willing to come to terms with how we’d like our data to be used or not used.

“[It’s been] a real pain in the ass to block these companies,” Huffman told The Verge.

  • morgunkorn@discuss.tchncs.de

    Honestly, any platform hosting user-generated content that uses the legal argument that it only provides hosting and isn’t responsible for what its users post shouldn’t also be able to sell that same data and claim to own any of it.

    Otherwise, take away their legal immunity. Nazis or pedophiles post something awful? You get in front of the judge.

    edit: typo

    • givesomefucks@lemmy.world

      Can’t sell something you don’t own.

      So if they’re selling the parts people want, they need to own the parts no one wants.

      • Justin@lemmy.jlh.name

        Well, you can give money to Reddit for a piece of paper, but unless Reddit is claiming copyright to the content posted there, they can’t sue anyone for not paying. It would be very interesting to see the text of these “licensing agreements”.

        • lemmyvore@feddit.nl

          They’re not claiming copyright. They have a perpetual, irrevocable license to the content, granted by the people who use their site when they post the content.

    • Justin@lemmy.jlh.name

      Exactly this. You can claim that their scraping is abusing your servers, but the moment you claim copyright over the site’s content, you give up your Section 230 protections.

      • fuckwit_mcbumcrumble@lemmy.dbzer0.com

        You’d also probably burn a whole lot more processing power trying to stop the crawlers than by just giving them API access with some sort of limit on queries.
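
        As a rough sketch of that kind of query limit (the rate, burst size, and names here are made up for illustration, not anything Reddit actually uses), a per-key token bucket is the usual approach:

            # Illustrative token-bucket limiter; rate and burst size are arbitrary.
            import time

            class TokenBucket:
                def __init__(self, rate_per_sec: float, burst: int):
                    self.rate = rate_per_sec
                    self.burst = burst
                    self.tokens = float(burst)
                    self.updated = time.monotonic()

                def allow(self) -> bool:
                    now = time.monotonic()
                    # Refill in proportion to elapsed time, capped at the burst size.
                    self.tokens = min(self.burst, self.tokens + (now - self.updated) * self.rate)
                    self.updated = now
                    if self.tokens >= 1:
                        self.tokens -= 1
                        return True
                    return False

            # One bucket per API key: e.g. 10 requests/second, bursts up to 100.
            buckets = {}
            def allow_request(api_key: str) -> bool:
                bucket = buckets.setdefault(api_key, TokenBucket(10, 100))
                return bucket.allow()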

        • Admiral Patrick@dubvee.org

          Eh, not really.

          I block bot user agents to my Lemmy instance, and the overhead is pretty negligible for that (it’s all handled in my web firewall/load balancer).

          Granted, those are bots that correctly identify themselves via user agent and don’t spoof a browser’s.

          It’s also cheaper and easier to add another load balancer than to size up or scale out my DB server to handle the bot traffic.
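
          As a minimal sketch of that kind of filter (the user-agent tokens are examples of well-known crawlers, not this instance’s actual blocklist, and the real matching would live in the firewall/load balancer config), the core is just a pattern match:

              import re

              # Example tokens for crawlers that identify themselves honestly.
              BLOCKED_AGENTS = re.compile(r"GPTBot|CCBot|ClaudeBot|Bytespider|Amazonbot", re.I)

              def should_block(user_agent: str) -> bool:
                  # Only catches bots that announce themselves; anything spoofing
                  # a browser user agent passes straight through, as noted above.
                  return bool(BLOCKED_AGENTS.search(user_agent or ""))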

        • rbits@lemm.ee

          I don’t think they actually block malicious bots; the change they’ve made is just to robots.txt, so they don’t have to do anything.

  • JaymesRS@literature.cafe

    Robots.txt isn’t a binding agreement, and it isn’t stopping anyone whose drive for profit outweighs their ethics.

    Also, Fuck Spez.

    • Womble@lemmy.world

      Because if there’s one thing this world needs, it’s more rights for property.

  • spongebue@lemmy.world

    Honestly, my biggest issue with LLMs is how they source their training data to create “their own” stuff. A meme calling it a plagiarism machine struck a chord with me. Almost anyone else I’d sympathize with, but fuck Spez.

    • Wirlocke@lemmy.blahaj.zone

      What resonated with me is people calling LLMs and Stable Diffusion “copyright laundering”. If copyright ever swung in AI’s favor, it would be super easy to train an AI on the work you want to steal, mix in some generic training data, and end up with a “new” piece of art.

      LLMs and Stable Diffusion are just compression algorithms for abstract patterns, only one level above data.

      • Echo Dot@feddit.uk

        The real takeaway of all of this is that copyright law is massively out of date and not fit for purpose in the 21st century or frankly the late 20th.

        The current state of copyright law cannot deal with the internet, let alone AI.

    • markon@lemmy.world

      Yep, they now get paid for the data we gave them. I have no sympathy lol. At least these models can’t actually store it all losslessly by any stretch of the imagination; the compression factor would have to be 100-200x beyond anything we’ve ever achieved. The numbers don’t work out. The models do encode a lot, though, and some of that will include actual full-text data, but it’ll still be kinda fuzzy.
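
      As a back-of-envelope check on that point (figures are illustrative assumptions, roughly in line with publicly reported Llama-class models, not measurements):

          # Rough arithmetic: model weights vs. training text, all numbers approximate.
          params = 70e9           # ~70B parameters
          bytes_per_param = 2     # fp16/bf16 weights
          tokens = 15e12          # ~15T training tokens
          bytes_per_token = 4     # ~4 characters of text per token

          model_bytes = params * bytes_per_param    # ~140 GB of weights
          corpus_bytes = tokens * bytes_per_token   # ~60 TB of text
          print(corpus_bytes / model_bytes)         # ~430x, far beyond any lossless compression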

      I think we do need ALL OPEN SOURCE. Not just for AI, but I know on that point I’m preaching to the choir here lol

  • Sordid@lemmy.world

    The enshittification cycle:

    Phase one, attract users by providing a good service.
    Phase two, once the users are locked in, squeeze them for all they’re worth by selling them to business customers (advertisers and/or data buyers).
    Phase three, once the business customers are locked in, squeeze them for all they’re worth by threatening to deny them access to the users on whom they now depend.

    Spez seems to think Reddit has the pull to make phase 3 happen. I rather doubt it, but we’ll see.

    • Boozilla@lemmy.world

      My guess is that phase three will work for a while. But I think you’re right that eventually they are going to drive that thing into the ground. Because it’s never enough pure profit for rent-seeking scum, and there is no lower limit to the abuse they’ll inflict on their content creators (who they call users but think of as products).

    • lemmyvore@feddit.nl

      If he really had balls he’d restrict access to the site and improve the built-in search engine.

      If reddit’s own search worked well nobody would care. Engines like DDG even have bang codes that send you to a site’s own engine. So instead of having to add “site:reddit.com” to the search on DDG I’d just add “r!” and it would end up being the same thing. IF the internal search didn’t suck.

    • _haha_oh_wow_@sh.itjust.works

      Yeah, as soon as the API thing happened I switched to Lemmy for mobile browsing and like it more than Reddit (Connect is pretty good, but even the mobile browser site is solid).

      The more they squeeze, the more popular alternatives like Lemmy, Kbin/Mbin, Tildes, etc. will become.

  • boonhet@lemm.ee

    I never bothered to go edit or delete my comments after the API drama that caused me to move here, but now I might just go do that, because the entire point of keeping old comments up was that maybe someone would find one from a search engine and find it useful. If Reddit is going to monetize THAT, they can fuck right off.

    • palordrolap@kbin.run

      Save your effort. What’s already there is there forever. They can just roll back your comments, or even, if they’re in the mood for it, make them appear under an entirely different username.

      The only way to win is to not give them anything more. And that fight is already under way: they’ve already started recommending old comments over new ones because the quality isn’t as high any more.

      Think about it: The only people who contribute to Reddit now are the clueless and the sort of people who have willingly stayed.

      I like to imagine Spez stomping around saying “Hmph! Hmph! It’s not fair! Why did they all leave?! They’re stealing my revenue by not giving me anything for free!”. I mean, he’s probably not doing that, but I do like to imagine it.

  • Brkdncr@lemmy.world

    What if I had an agreement with MS that they can scrape my data and anything I post online?

    • Shdwdrgn@mander.xyz

      What if Microsoft updated their Windows EULA to state that all users agree to let MS scrape their online data (if they haven’t already), and then took that to court against Reddit? It would certainly be an interesting court case to watch, especially if they could get actual users to stand up in court and confirm that they did indeed approve of this. And it might settle once and for all whether companies can block freely visible internet content just because someone scraped it.

  • werefreeatlast@lemmy.world

    How about starting a company that gathers people’s CAD designs… grabCAD! Oh, you can’t scrape our design work, Microsoft, you gotta pay!.. Or how about a company that stores people’s records or drawings or movies… Adobe! Oh Microsoft, you can’t scrape our data! It’s our data!

  • ayyy@sh.itjust.works

    Fuck Spez. He’s probably editing the comments anyway, he literally can’t help himself.