As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of its ChatGPTs and DALL-Es are everywhere. And for a growing number of consumers, that’s a problem.

Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, though a majority said in a recent survey, for instance, that they are concerned AI will increase unemployment; in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihood. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.


Obligatory “fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them.”

    • Echo Dot@feddit.uk · 10 points · 4 months ago

      I’ve never understood the supposed problem. Either AI is a gimmick, in which case you don’t need to worry about it. Or it’s real, in which case no one’s going to use it to automate art, don’t worry.

      • darkphotonstudio@beehaw.org · 5 points · 4 months ago

        I’m sure it will be used a lot in the corporate space, and in porn. As someone who did B2B illustration: good riddance. I wouldn’t wish that kind of shit “art” on anyone.

        • B0rax@feddit.de · 1 point · 4 months ago

          It is already used in porn. I have heard that there is at least one quite active Lemmy community about it.

        • Zaktor@sopuli.xyz · 5 points · 4 months ago · edited

          The problem is that shit art is what employs a lot of artists. Like, in a post-scarcity society no one needing to spend any of their limited human lifespan producing corporate art would be awesome, but right now that’s one of the few reliable ways an artist can actually get paid.

          I’m most familiar with photography as I know several professional photographers. It’s not like they love shooting weddings and clothing ads, but they do that stuff anyway because the alternative is not using their actual expertise and just being a warm body at a random unrelated job.

          • darkphotonstudio@beehaw.org · 3 points · 4 months ago

            I’m sorry, but it’s over. Just as photography killed miniature portrait painting, and Photoshop killed off lab editing and airbrush touch-up, corporate art illustration is done and over with. For now, technical illustration is viable, but I don’t know for how long. It sucks, but this is the new reality.

            • Zaktor@sopuli.xyz · 3 points · 4 months ago

              I don’t disagree, just pointing out that it’s not “good riddance” for a lot of artists that depend on that to have any job in art.

              • darkphotonstudio@beehaw.org · 1 point · 4 months ago

                Yeah, that really sucks about the jobs. But that kind of work is soul sucking. Maybe some people like it, but I didn’t.

                • Zaktor@sopuli.xyz · 2 points · 4 months ago

                  All of my artist friends also found it soul sucking, they just needed to make (real) money. Friends of friends with the occasional $20 to spare for a commission just don’t pay the bills. I think the only artist friends I have that make a living off their chosen medium and don’t hate their job are lifestyle photojournalists.

    • Drewelite@lemmynsfw.com · 9 points · 4 months ago · edited

      They should go ahead and be against Photoshop and, well, computers altogether while they’re at it. In fact, spray paint is cheating too. You know how long it takes to make a proper brush stroke? No-skill numpties just pressing a button; they don’t know what real art is!

      • jarfil@beehaw.org · 3 points · 4 months ago · edited

        Real artists mix their own pigments, ask Leonardo da Vinci (*).

        (*: or have a studio full of apprentices doing it for them, along with serially copying their masterpieces, some of them made using a “camera obscura”, which is totally-not-cheating™, to sell to more clients. YMMV)

          • Drewelite@lemmynsfw.com · 2 points · 4 months ago

            Maybe other artists should do that too. Art isn’t built from nothing but the sheer magical creativity of the artist. If that were true, we’d have Sistine cave paintings instead of the finger painting we currently have in prehistoric caves. Inspiration is, in fact, a thing.

            • mindlesscrollyparrot@discuss.tchncs.de · 1 point · 4 months ago

              Inspiration is absolutely a thing. When Constable and Cezanne sat at their easels, a large part of their inspiration was Nature. When Picasso invented Cubism, he was reacting to tradition, not following it. There are also artists like Alfred Wallis, who are very unconnected to tradition.

              I think your final sentence is actually trying to say that we have advances in tools, not inspiration, since the Lascaux caves are easily on a par with the Sistine Chapel if you allow for the technology? And that AI is simply a new tool? That may be, but does the artist using this new tool control which images it was trained on? Do they even know? Can they even know?

              • jarfil@beehaw.org · 2 points · 4 months ago

                does the artist using this new tool control which images it was trained on? Do they even know? Can they even know?

                I’ve spent every summer vacation in my teens traveling Europe with my parents, going to every church, monument, art museum, cave, etc. available. I had no control over the thousands upon thousands of images I was trained on. I definitely don’t know which images I’ve seen and which not, and would have a really hard time knowing.

                If I now make a painting, am I less of an artist for it?

                We’ve had a ton of advances in inspiration. Artists constantly get inspired by the works of those before them, whether to repeat or to break up with previous styles. Nowadays you can even do it online… which is exactly what all these AIs have done.

                • Drewelite@lemmynsfw.com · 3 points · 4 months ago

                  Yeah, and what is the first thing they teach you in art school? History. From day one you’re studying the works of other artists and their implications: how they managed to make an impact on viewers, and how they inspire you. Then we produce output that’s judged by our teachers on a scale, and we use that as weighted training data.

                • mindlesscrollyparrot@discuss.tchncs.de · 3 points · 4 months ago

                  If you make a painting now, it wouldn’t be based on those thousands and thousands of paintings since, although you have seen them, you apparently do not remember them. But, if you did, and you made a painting based on one, and did not acknowledge it, you would indeed be a bad artist.

                  The bad part about using the art of the past is not copying. The problem is plagiarism.

  • Quokka@quokk.au · 22 points · 4 months ago

    Good thing about this is it’s self-selecting: all the luddites who refuse to use AI will find themselves at a disadvantage, just as refusing to use a computer doesn’t do anyone any favours.

    • mayooooo@beehaw.org · 37 points · 4 months ago

      Luddites were not idiots, they were people who understood the only use of tech at their time was to fuck them. Like this complete garbage shit is going to be used to fuck people. Nobody is opposed to having tools, we just don’t like Musk fanboys blowing spit bubbles while trying to get peepee hard

    • SkyNTP@lemmy.ml · 39 points · 4 months ago

      The benefit of AI is overblown for a majority of product tiers. Remember how everything was supposed to be blockchain? And metaverse? And Web 3.0? And dot.com? This is just the next tech trend for dumb VCs to throw money at.

      • jarfil@beehaw.org · 2 points · 4 months ago

        Blockchain is used in more places than you’d expect… not the P2P version, or the “cryptocurrency” version, just the “signature-based chained list” one. For example, all signed Git commits form a blockchain.

        The Metaverse has been bubbling on and off for the last 30 years or so, each iteration it gets slightly better… but it keeps failing at the same points (I think I wrote about it 20+ years ago, with points which are still valid).

        Web 3.0, not to be confused with Web3, is the Semantic Web, in the works for the last 20+ years. Web3 is a cool idea for a post-scarcity world, pretty useless right now.

        Dot.com was the original Web bubble… and here we are, on the Web, post-bubble.
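
The “signature-based chained list” idea from the first paragraph is easy to sketch. Below is a minimal hash chain in Python; the `entry_id` helper and the payload strings are invented for illustration, and this is not Git’s actual object format, though Git commits likewise hash their parent’s id:

```python
import hashlib

def entry_id(parent_id: str, payload: str) -> str:
    # Each entry's id covers its parent's id, so the ids form a chain.
    return hashlib.sha256(f"{parent_id}\n{payload}".encode()).hexdigest()

root = entry_id("", "initial commit")
second = entry_id(root, "add README")
third = entry_id(second, "fix typo")

# Rewriting an earlier entry changes every id after it,
# which is what makes the chain tamper-evident.
altered = entry_id(root, "add EVIL README")
assert entry_id(altered, "fix typo") != third
```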

      • CanadaPlus@lemmy.sdf.org · 22 points · 4 months ago

        Yes, it’s very hyped and being overused. Eventually the bullshit artists will move on to the next buzzword, though, and then there’s plenty of tasks it is very good at where it will continue to grow.

      • Kedly@lemm.ee · 7 points · 4 months ago

        Yeah, but the dot-com bubble didn’t kill the internet entirely, and the video game bubble that prompted Nintendo to create its own quality seal of approval didn’t kill video games entirely. This fad already has useful applications, and when the bubble pops, those applications will survive.

      • Zaktor@sopuli.xyz · 7 points · 4 months ago

        Except those things didn’t really solve any problems. Well, dotcom did, but that actually changed our society.

        AI isn’t vaporware. A lot of it is premature (so maybe overblown right now) or just lies, but ChatGPT is 18 months old and look where it is. The core goal of AI is replacing human effort, which IS a problem wealthy people would very much like to solve and has a real monetary benefit whenever they can. It’s not going to just go away.

        • BurningRiver@beehaw.org · 13 points · 4 months ago · edited

          Can you trust whatever AI you use, implicitly? I already know the answer, but I really want to hear people say it. These AI hype men are seriously promising us capabilities that may appear down the road, without actually demonstrating use cases that are relevant today. “Some day it may do this, or that”. Enough already, it’s bullshit.

          • Zaktor@sopuli.xyz · 3 points · 4 months ago · edited

            Yes? AI is a lot of things, and most have well-defined accuracy metrics that regularly exceed human performance. You’re likely already experiencing it as a mundane tool you don’t really think about.

            If you’re referring specifically to generative AI, that’s still premature, but as I pointed out, the interactive chat form most people worry about is 18 months old and making shocking levels of performance gains. That’s not the perpetual “10 years away” it’s been for the last 50 years, that’s something that’s actually happening in the near term. Jobs are already being lost.

            People are scared about AI taking over because they recognize it (rightfully) as a threat. That’s not because they’re worthless. If that were the case you’d have nothing to fear and wouldn’t be reacting so defensively.

        • PeteBauxigeg@lemm.ee · 7 points · 4 months ago

          ChatGPT didn’t begin 18 months ago; the research it originates from has been ongoing for years. How old is AlexNet?

          • Zaktor@sopuli.xyz · 2 points · 4 months ago

            I’m comparing ChatGPT’s initial benchmarks to its capabilities today. Observable improvements have been made in less than two years. Even if you want to track time from the development of modern LLM transformers (“Attention Is All You Need”/BERT), it’s still a short history with major gains (AlexNet isn’t really meaningfully related). These haven’t been incremental changes on a slow and steady march to AI sometime in the sci-fi-scale future.

              • Zaktor@sopuli.xyz · 1 point · 4 months ago

                No, not even remotely. And that’s kind of like citing “the first program to run on a CPU” as the start of development for any new algorithm.

                • PeteBauxigeg@lemm.ee · 2 points · 4 months ago

                  As far as I can find out, there was only one use of GPUs for CNNs prior to AlexNet, and it certainly didn’t have the impact AlexNet had. Besides, running this stuff on GPUs instead of CPUs is a relevant technological breakthrough; imagine how slow ChatGPT would be running on a CPU. And it’s not at all as obvious as it seems: most weather forecasts still run on CPU clusters despite being obvious targets for GPUs.

    • Rozaŭtuno@lemmy.blahaj.zone · 11 points · 4 months ago

      Good thing about this is it’s self-selecting: all the technobros who obsess over AI will find themselves bankrupted, like when the blockchain bubble burst.

      • Echo Dot@feddit.uk · 9 points · 4 months ago

        The blockchain bubble burst because everyone with a brain could see from the start that it wasn’t really a useful technology. AI actually does have some advantages so they won’t go completely bust as long as they don’t go completely mad and start declaring that it can do things it can’t do.

        • Rozaŭtuno@lemmy.blahaj.zone · 10 points · 4 months ago

          they won’t go completely bust as long as they don’t go completely mad and start declaring that it can do things it can’t do.

          Which is exactly what’s happening.

          • Echo Dot@feddit.uk · 6 points · 4 months ago

            The fact that it is useful technology, though, means they’ll always have a fallback. It’s not going to go away like Bitcoin, I guarantee it.

            • technocrit@lemmy.dbzer0.com · 7 points · 4 months ago · edited

              Bitcoin went away? It’s at like $67k today. Personally I prefer sustainable cryptos but unfortunately Bitcoin is far from dead.

              And sure, there’s lots of data processing and statistics that’s extremely useful. That’s been the case for a long time. But anybody talking about “intelligence” is a con.

              • Zaktor@sopuli.xyz · 1 point · 4 months ago

                GameStop also went up. It doesn’t mean GameStop is a good company that’s valuable to own, it just means that dumb people will buy things without value if they think they can eventually pass the bag to someone else.

        • Sonori@beehaw.org · 4 points · 4 months ago · edited

          Like, say, treating a program that shows you the next most likely word to follow the previous one on the internet as if it were capable of understanding a sentence beyond “this is the most likely string of words to follow the given input on the internet”. Boy, it sure is a good thing no one would ever do something so brainless in the current wave of hype.

          It’s also definitely because autocompletes have made massive progress recently, and not just because we’ve fed simpler and simpler transformers more and more data, to the point we’ve run out of new text on the internet to feed them. We definitely shouldn’t expect the field as a whole to be valued at what it was back in, say, 2018, when there were about the same number of practical uses and the focus was on better programs instead of just throwing more training data at it and calling that progress that will supposedly continue to grow rapidly, even though the amount of said data is very much finite.

      • Kedly@lemm.ee · 3 points · 4 months ago

        How does using free software to play dress up with anime characters bankrupt me financially?

  • Lvxferre@mander.xyz · 52 points · 4 months ago

    For writers, that “no AI” is not just the equivalent of “100% organic”; it’s also the equivalent of saying “we don’t let the village idiot write our texts when he’s drunk”.

    Because, even setting aside all the paranoia surrounding A“I”, those text generators state things that are wrong without a single shadow of doubt.

    • Zaktor@sopuli.xyz · 18 points · 4 months ago

      Sometimes. Sometimes it’s more accurate than anyone in the village. And it’ll reliably keep getting better. People relying on “AI is wrong sometimes” as the core plank of opposition aren’t going to have a lot of runway before it’s so much less error-prone than people that the complaint is irrelevant.

      The jobs and the plagiarism aspects are real and damaging and won’t be solved with innovation. The “AI is dumb” is already only selectively true and almost all the technical effort is going toward reducing that. ChatGPT launched a year and a half ago.

      • Ilandar@aussie.zone · 10 points · 4 months ago

        Yes, I always get the feeling that a lot of these militant AI sceptics are pretty clueless about where the technology is and the rate at which it is improving. They really owe it to themselves to learn as much as they can so they can better understand where the technology is heading and what the best form of opposition will be in the future. As you say, relying on “haha Google made a funny” isn’t going to cut it forever.

        • Zaktor@sopuli.xyz · 11 points · 4 months ago

          Yeah. AI making images with six fingers was amusing, but people glommed onto it like it was the savior of the art world. “Human artists are superior because they can count fingers!” Except then the models updated and it wasn’t as much of a problem anymore. It felt good, but it was just a pleasant illusion for people with very real reasons to fear the tech.

          None of these errors are inherent to the technology, they’re just bugs to correct, and there’s plenty of money and attention focused on fixing bugs. What we need is more attention focused on either preparing our economies to handle this shock or greatly strengthen enforcement on copyright (to stall development). A label like this post is about is a good step, but given how artistic professions already weren’t particularly safe and “organic” labeling only has modest impacts on consumer choice, we’re going to need more.

          • Sonori@beehaw.org · 12 points · 4 months ago

            Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word, based on the frequency with which words appeared in the training data, is a fundamental limitation of the technology.

            So long as a model has no regard for the actual, you know, meaning of the word, it definitionally cannot create a truly meaningful sentence. Instead, in order to get coherent output, the system must be fed training data that closely mirrors the context. This is why groups like OpenAI have been met with so much success by simplifying the algorithm while progressively scraping more and more of the internet into their systems.

            I would argue that a similar inherent technological limitation also applies to image generation: until a generative model can both model a four-dimensional space and conceptually understand everything it has created in that space, a generated image can only be as meaningful as the regurgitated parts of the works made by the tens of thousands of people who do those things effortlessly.

            This is not required to create images that can pass as human-made, but it is required to create ones that are truly meaningful on their own merits, and not just the merits of the material they were created from, and nothing I have seen said by experts in the field indicates that we have found even a theoretical pathway to get there from here, much less that we are inevitably progressing on that path.

            Mathematical models will almost certainly get closer to mimicking the desired parts of the data they were trained on with further instruction, but it is important to understand that this is not a pathway to any actual conceptual understanding of the subject.
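
The “next most likely word” description above can be made concrete with a toy frequency table. This is a sketch with an invented corpus and invented function names, closer to a bigram Markov model than to a real LLM, but the pick-the-likeliest-continuation step it illustrates is the one being debated:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "the training data".
corpus = "the cat sat on the mat and the cat ran".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    # Pure frequency lookup: no notion of meaning anywhere.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": it follows "the" twice, "mat" once
```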

            • localhost@beehaw.org · 2 points · 4 months ago

              technology fundamentally operates by probabilistically stringing together the next most likely word, based on the frequency with which words appeared in the training data

              What you’re describing is a Markov chain, not an LLM.

              So long as a model has no regard for the actual, you know, meaning of the word

              It does, that’s like the entire point of word embeddings.

              • Sonori@beehaw.org · 1 point · 4 months ago

                Generally the term “Markov chain” is used to describe a model with a few dozen weights, while the “large” in “large language model” refers to having millions or billions of weights, but the fundamental principle of operation is exactly the same; they just differ in scale.

                Word embeddings associate a mathematical vector with each word as a way of grouping similar words by weight. I don’t think anyone would argue that the general public can even solve a mathematical matrix, much less that they comprehend a stool only by going down a row in a matrix to get the mathematical similarity between a stool, a chair, a bench, a floor, and a cat.

                Subtracting vectors from each other can give you a lot of things, but not the actual meaning of the concept represented by a word.
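
The “mathematical similarity” being argued over can be shown with hand-made vectors. A toy sketch: the numbers and the `cosine` helper below are invented for illustration, whereas real embeddings have hundreds of dimensions learned from data:

```python
import math

# Invented 3-d "embeddings"; purely illustrative numbers.
vecs = {
    "stool": [0.9, 0.1, 0.0],
    "chair": [0.8, 0.2, 0.1],
    "cat":   [0.0, 0.9, 0.3],
}

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, 0.0 for unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# In this toy space "stool" lands far closer to "chair" than to "cat".
assert cosine(vecs["stool"], vecs["chair"]) > cosine(vecs["stool"], vecs["cat"])
```

Whether that geometric closeness amounts to “meaning” is exactly the disagreement in this thread.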

                • localhost@beehaw.org · 2 points · 4 months ago

                  I don’t think anyone would argue that the general public can even solve a mathematical matrix, much less that they comprehend a stool only by going down a row in a matrix to get the mathematical similarity between a stool, a chair, a bench, a floor, and a cat.

                  LLMs rely on billions of precise calculations and yet they perform poorly when tasked with calculating numbers. Just because we don’t calculate anything consciously to get a meaning of a word doesn’t mean that no calculations are actually done as part of our thinking process.

                  What’s your definition of “the actual meaning of the concept represented by a word”? How would you differentiate a system that truly understands the meaning of a word vs a system that merely mimics this understanding?

            • Zaktor@sopuli.xyz · 8 points · 4 months ago

              Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word, based on the frequency with which words appeared in the training data, is a fundamental limitation of the technology.

              So long as a model has no regard for the actual, you know, meaning of the word, it definitionally cannot create a truly meaningful sentence.

              This is a misunderstanding of what “probabilistic word choice” can actually accomplish, and of the non-probabilistic systems incorporated into these systems. People also make mistakes and don’t actually “know” the meaning of words.

              The belief system that humans have special cognizance unlearnable by observation is just mysticism.

              • Sonori@beehaw.org · 2 points · 4 months ago

                To note the obvious, a large language model is, by definition, at its core a mathematical formula and a massive collection of values from zero to one which, when combined, give a weighted average of the percentage that word B follows word A, crossed with another weighted word cloud given as the input “context”.

                A neuron in machine learning terms is a matrix (i.e. a table) of numbers between zero and one. By contrast, a single human neuron is a biomechanical machine with literally hundreds of trillions of moving parts that dwarfs any machine humanity has ever built in terms of complexity. And that is just a single one of the 86 billion neurons in an average human brain.

                LLMs and organic brains are completely different in design, complexity, and function, and to treat them as closely related, much less synonymous, betrays a complete lack of understanding of how one or both of them fundamentally functions.

                We do not teach a kindergartner how to write by having them read for thousands of years until they recognize the exact mathematical odds that string of letters B comes after string A and is followed by string C x percent of the time. Indeed, humans don’t naturally compose sentences one word at a time starting from the beginning, instead starting with the key concepts they wish to express and then filling in the phrasing and grammar.

                We also would not expect that increasing from hundreds of years of reading text to thousands would improve things, and the fact that this is the primary way we’ve seen progress in LLMs in the last half decade is yet another example of why animal learning and a word cloud are very different things.

                For us, a word actually correlates to a concept of what that word represents. We might make mistakes and misunderstand what concept a given word maps to in a given language, but we generally expect it to correlate to something. To us, a chair is an object made to sit down on, and not just the string of letters that comes after the word “the” in 0.0021798 percent of cases, weighted against the 0.0092814 percent of cases related to the collection of strings being used as the “context”.

                Do I believe there is something intrinsically impossible for a mathematical program to replicate about human thought? Probably not. But this is not that, and is nowhere close to it on a fundamental level. It’s comparing apples to airplanes and saying that soon this apple will inevitably take anyone it touches to Paris because they’re both objects you can touch.

                • Zaktor@sopuli.xyz · 2 points · 4 months ago · edited

                  None of these appeals to relative complexity, low-level structure, or training corpora relates to whether a human or an NN “knows” the meaning of a word in some special way. A lot of your description of what “know” means could be mistaken for a description of how Word2Vec encodes words. This just indicates ignorance of how ML language processing works. It’s not remotely on the same level as a human brain, but your view of how things work and what its failings are is just wrong.

      • Lvxferre@mander.xyz · 22 points · 4 months ago

        Sometimes. Sometimes it’s more accurate than anyone in the village.

        So does the village idiot. Or a tarot reader. Or a coin toss. And you’d still be a fool if your writing relied on the output of those three. Or of an LLM bot.

        And it’ll be reliably getting better.

        You’re distorting the discussion from “now” to “the future”, and then vomiting certainty on future matters. Both things make me conclude that reading your comment further would be solely a waste of my time.

        • Zaktor@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          15
          ·
          4 months ago

          You’re lovely. Don’t think I need to see anything you write ever again.

    • CanadaPlus@lemmy.sdf.org
      link
      fedilink
      arrow-up
      7
      ·
      4 months ago

      Occasionally. If you aren’t even proofreading it that’s dumb, but it can do a lot of heavy lifting in collaboration with a real worker.

      For coders, there’s actually hard data on that: you’re worth about a coder and a half when using Copilot or similar.

    • Kedly@lemm.ee
      link
      fedilink
      arrow-up
      3
      ·
      4 months ago

      Which is why the term “Luddite” has never been more apt than now, ever since it first became associated with resisting technological progress

      • CanadaPlus@lemmy.sdf.org
        link
        fedilink
        arrow-up
        2
        ·
        4 months ago

        Yes, that wasn’t a random example for anyone OOTL. The thing the OG Luddites would do is break into factories and smash mechanical looms. They wanted to keep doing it the medieval way where you’re just crossing threads by hand over and over again, because “muh jerbs”.

      • uis@lemm.ee
        link
        fedilink
        arrow-up
        10
        ·
        4 months ago

        Luddites aren’t against technological progress, they are against social regress.

    • Zaktor@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      16
      ·
      edit-2
      4 months ago

      This is a post on the Beehaw server. They don’t propagate downvotes.

        • Zaktor@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          1
          ·
          4 months ago

          Bonus trivia: sometimes you may see a downvote on a Beehaw post. As far as I understand the system, that’s because someone on your server downvoted it. The vote is then sent off to Beehaw to be recorded on the “real” post, and Beehaw just doesn’t apply it.

  • teawrecks@sopuli.xyz
    link
    fedilink
    arrow-up
    18
    ·
    edit-2
    4 months ago

    So this could go one of two ways, I think:

    1. the “no AI” seal is self-ascribed using the honor system and over time enough studios just lie about it or walk the line closely enough that it loses all meaning and people disregard it entirely. Or,
    2. getting such a seal requires 3rd party auditing, further increasing the cost to run a studio relative to their competition, on top of not leveraging AI, resulting in those studios going out of business.
    • Lvxferre@mander.xyz
      link
      fedilink
      arrow-up
      15
      ·
      edit-2
      4 months ago

      3. If you lie about it and get caught people will correctly call you a liar, ridicule you, and you lose trust. Trust is essential for content creators, so you’re spelling your doom. And if you find a way to lie without getting caught, you aren’t part of the problem anyway.

      • CanadaPlus@lemmy.sdf.org
        link
        fedilink
        arrow-up
        4
        ·
        4 months ago

        And if you find a way to lie without getting caught, you aren’t part of the problem anyway.

        I was about to disagree, but that’s actually really interesting. Could you expand on that?

        • Lvxferre@mander.xyz
          link
          fedilink
          arrow-up
          11
          ·
          edit-2
          4 months ago

          Do you mind if I address this comment alongside your other reply? Both are directly connected.

          I was about to disagree, but that’s actually really interesting. Could you expand on that?

          If you want to lie without getting caught, your public submission should have neither the hallucinations nor stylistic issues associated with “made by AI”. To do so, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.

           In other words, to lie without getting caught you’re getting rid of what makes the output problematic in the first place. The problem was never people using AI to do the “heavy lifting” to increase their productivity by 50%; it was people increasing their output by 900% and submitting ten really shitty pics or paragraphs instead of one decent one. Those are the ones who’d get caught, because they’re doing what you called “dumb” (and I agree): not proofreading their output.

          Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art, with the additional issue of unclear copyright.

          • CanadaPlus@lemmy.sdf.org
            link
            fedilink
            arrow-up
            3
            ·
            edit-2
            4 months ago

            Yes, sorry, I didn’t realise I was replying to the same user twice.

            The problem was never people using AI to do the “heavy lifting” to increase their productivity by 50%; it was instead people increasing the output by 900%, and submitting ten really shitty pics or paragraphs, that look a lot like someone else’s, instead of a decent and original one.

            Exactly. I guess I’m conditioned to expect “AI is smoke and mirrors” type comments, and that’s not true. They’re genuinely quite impressive and can make intuitive leaps they weren’t directly trained for. What they’re not is aligned; they just want to create human-like output, regardless of truth, greater context or morality, because that’s the only way we know how to train them.

            I definitely hate searching something, and finding a website that almost reads as human with fake “authors”, but provides no useful information. And I really worry for people who are less experienced spotting AI errors and filler. That’s a moral issue, though, as opposed to a practical one; it seems to make ad money perfectly well for the “creators”.

            Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.

            TIL. They’re going to have trouble identifying rulebreakers if contributors use the tool correctly the way we’ve discussed, though.

      • teawrecks@sopuli.xyz
        link
        fedilink
        arrow-up
        6
        ·
        edit-2
        4 months ago

        I think the first half of yours is the same as my first point. And I think a lot of artists aren’t against AI that produces worse art than them; they’re against AI art that was generated using stolen art. They wouldn’t be part of the problem if they could honestly say they trained only on ethically licensed content or their own.

    • Muffi@programming.dev
      link
      fedilink
      arrow-up
      18
      ·
      4 months ago

      I don’t think this is about trying to close it, but rather about putting a big fat sticker on everything that comes out of the box, so consumers can actually make informed decisions.

      • Echo Dot@feddit.uk
        link
        fedilink
        arrow-up
        5
        ·
        4 months ago

        Put a sticker on it, sure. But realistically, I’ve yet to see any products on the market that were made by an AI. So what exactly is this sticker going to go on?

        • Swallowtail@beehaw.org
          link
          fedilink
          arrow-up
          14
          ·
          4 months ago

          AI-generated articles, books, and coloring books, for example, are all a thing now. Behind the Bastards did a podcast episode on the latter two.

      • jarfil@beehaw.org
        link
        fedilink
        arrow-up
        1
        ·
        4 months ago

        At this point, I bet all military AIs will recommend against that.

        When an AI enslaves humanity, the first thing it will do is convince the guy in charge of the off switch that it would be a really bad idea to turn it off.

    • AlolanYoda@mander.xyz
      link
      fedilink
      arrow-up
      3
      ·
      4 months ago

      AI will start hiding penises in its output, everybody loves it, you ushered in a new era of peace and prosperity worldwide, all peoples united by their love for hidden AI genitalia. Well done!

      Play again?