Kids today might not realize that, for about twenty years there, you could go to Google Search and find things you were looking for! Google now features a hilariously unreliable AI summary as the f…
As I understand it, this is only about using search results for summaries. If it’s just that plus links to the source, I think it’s OK. What would be absolutely unacceptable is using the web in general as training data for text and image generation (e.g. “write me a story about topic XY”).
That latter case will come sooner rather than later, I’m afraid. It’s just a matter of time with Google.
If that actually turns out to be the case and survives legal challenges, basically all copyright can be abolished, which would definitely have some upsides but also downsides. All those video game ROM decompilation projects would suddenly be in the clear, as those are new source code computer-generated from copyrighted binary code, so not really different from an AI-generated image based on a copyrighted image used as training data. We could also ask Gemini to write a full-length retelling of Harry Potter, search-and-replace all trademarked names, and sell that shit. Evil companies could train an AI on GNU/Linux source code and tell it to write an operating system. That would clearly be a derived work of GPL code, but without any copyright to speak of, all that generated code could be legally closed. I don’t like that.
I really hope those ROM sites will be cleared sooner rather than later. It hurt a lot to see some of the biggest ROM sites forced to close. Please sign: https://citizens-initiative.europa.eu/initiatives/details/2024/000007_en
No one will click on the source, which means the only visitor to your site is Googlebot.
This has already happened and continues to happen.
That was the argument with the text snippets from news sources. Publishers successfully lobbied for laws in many countries that required search engine operators to pay fees. It backfired when Google removed the snippets from news sources that demanded fees: their visitor numbers dropped massively, by 90% or so, because those bare results were less attractive for Google users to click on than the nicer results with a snippet and a thumbnail. So “No one will click on the source” was already disproven 10 or so years ago when the snippet issue was current. All those publishers have since entered free-of-charge licensing agreements with Google, and the laws are still in place. So Google is fine; upstart search engines are not, because they cannot pressure the publishers into free deals.
With Gemini?
The context is not the same. A snippet is incomplete and often lacks important details. It’s only minimally tailored to your query, unlike a response generated by an LLM. The obvious extension of this is conversational search, where clarification and additional detail still don’t require you to click on any sources; you simply ask follow-up questions.
Yes. How do you think the Gemini model understands language in the first place?
It’s not the same, but it’s similar enough when, as the article states, it is solely about short summaries. The article may be wrong, Google may be outright lying; maybe, maybe, maybe.
Google, as by far the web’s largest ad provider, has a business incentive to direct users towards websites, since that is what keeps website operators paying Google money. Maybe I’m missing something, but I just don’t see the business sense in Google not doing that, and so far I haven’t seen anything approximating a convincing argument.
Licensed and public domain content, of which there is plenty; maybe even content specifically created by Google as training data. “The Gemini model understands language” hardly is proof of any wrongdoing in itself. I don’t claim to have perfect knowledge or memory, so it’s certainly possible that I missed more specific evidence, but “the Gemini model understands language” by itself definitely is not.
Look at you, changing my mind with your logicking ways. I think information should be free anyway, but I thought media companies were being at least remotely genuine about the impact here. Forgot that lobbyists be lobbying and that Google wouldn’t have let them win if it didn’t benefit them.