• AkaneKurokawa@lemmy.world

    The only real medicine for the AI nightmare is having your own local, self-trained model, something like 7B parameters or bigger. I’ve read a lot about it. Check out the NetworkChuck YouTube channel; he teaches you how to set up and run your own AI based on yourself that never shares information, is open source, and runs even on a laptop.
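For anyone curious what “running your own model” actually looks like, here is a minimal sketch using the llama-cpp-python bindings; the GGUF file path, context size, and prompt are placeholder assumptions, not anything prescribed by the comment or the referenced channel:

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path below is a placeholder; point it at any 7B-class GGUF model you trust.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/my-7b-model.gguf",  # hypothetical local file; nothing leaves your machine
    n_ctx=2048,                               # context window size
)

response = llm(
    "Explain in one sentence why running a language model locally helps protect privacy.",
    max_tokens=128,
)
print(response["choices"][0]["text"])
```

Tools such as Ollama or GPT4All wrap the same idea behind a one-command install if you would rather not touch Python at all.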

  • kazerniel@lemmy.world

    Switching to Linux means you might have to say goodbye to certain proprietary software and games. Applications like Adobe Creative Suite

    as someone whose job mostly involves Adobe programs and whose main hobby is gaming, I think I’ll stick with Windows with all the AI crap disabled via group policies and O&O ShutUp 😐 For now…

  • cy_narrator@discuss.tchncs.de

    Until one day Ubuntu starts incorporating AI into the GNOME search bar.

    How much are you willing to bet this won’t happen with Canonical’s Ubuntu?

  • Facebones@reddthat.com

    AI has people questioning Windows use. Car systems ratting drivers out have people questioning car use.

    Not the way I expected to reach some of my desired ends but I’ll take it. 🤔

  • plantedworld@lemmy.world

    What happens when I, a potential new Linux user, need to search for how to make something work on Linux and, thanks to SEO and AI-driven/created search results, I can’t find the solution?

  • ZILtoid1991@lemmy.world

    I wonder if some big AI heads will publish “AI-enhanced” Linux distros that will also have other issues…

  • jabjoe@feddit.uk

    And the forced hardware obsolescence nightmare.

    And the big tech surveillance nightmare.

    And the nightmare of the war on general purpose computers. (OK, that is more GNU and GPLv3)

    And a few other nightmares!

  • 3volver@lemmy.world

    People keep pointing the finger at AI but miss the fact that the problem is corporate greed. AI has the potential to help us solve problems; corporate greed will gatekeep the solutions and cause us suffering.

    • masquenox@lemmy.world

      People keep pointing the finger at AI but miss the fact that the problem is ~~corporate greed~~ capitalism. AI has the potential to help us solve problems; ~~corporate greed~~ capitalism will gatekeep the solutions and cause us suffering.

      No need to thank me.

    • Aceticon@lemmy.world

      Enshittification is the result of the user not being in control. Markets have a natural tendency to become dominated by a few companies (or even just one) if they have any significant barriers to entry (and those barriers include things like network effects), and once those companies consolidate control over a large enough share of the market they become less and less friendly and more and more extractive towards customers, simply because the customers no longer have any other options. That is what we now call enshittification.

      At the same time, Linux (and most open-source software) is mainly about the owner being in control of their own stuff, rather than some corporate provider of software for your hardware, or of a hardware-plus-software “solution” (i.e. most modern electronics).

      So we’re getting to see more and more Linux-based full solutions for taking control of one’s devices back from the corporations: not just Linux on the desktop to wrest control back from an increasingly anti-customer Microsoft, but also, for example, stuff like OpenELEC (for TV boxes) and OPNsense (for firewalls/routers).

    • FiniteBanjo@lemmy.today

      LLMs in particular are unlikely to solve really any problems, much less a meaningful share of the problems they are currently being thrown at.

      • Balder@lemmy.world

        I mean, if LLMs really make software engineering easier, we should also expect Linux apps to improve dramatically. But I’m not betting on it.

      • Joelk111@lemmy.world

        Tell that to the code I have it write and debug daily. I was skeptical at first, but it’s been a huge help for that, as well as for learning new (development) languages.

        • AusatKeyboardPremi@lemmy.world

          I do not agree with @FiniteBanjo@lemmy.today’s take. LLMs, as they are used today, at the very least reduce the number of steps required to consume previously documented information. So they are solving at least one problem, especially on today’s Internet, where one has to navigate a cruft of irrelevant paragraphs and annoying pop-ups to reach the actual nugget of information.

          Having said that, since you have shared an anecdote, I would like to share a counter(?) anecdote.

          Ever since our workplace allowed the use of LLM-based chatbots, I have never seen them actually help debug an undocumented error or a non-traditional environment/configuration. They have always hallucinated incorrectly when I used them to debug such errors.

          In fact, I am now so sceptical about the responses that I just avoid these chatbots entirely and debug errors the “old school” way, using traditional search engines.

          Similarly, while using them to learn new programming languages or technologies, I always got incorrect responses to indirect questions. I learn that they have hallucinated only after verifying the response through implementation, which makes the entire purpose futile.

          I do try out the latest launches and improvements as I know the responses will eventually become better. Most recently, I tried out GPT-4o when it got announced. But I still don’t find them useful for the mentioned purposes.

          • Joelk111@lemmy.world

            That’s an interesting anecdote. Usually my code sorta works and I just have to debug it a little bit, and it’s way faster to get to a viable starting point than starting from scratch.

            Often my issue is unknown to it when debugging, though, but sometimes it helps to find stupid mistakes.

            I’d probably give it a 50% success rate, but I’ll take the help.

        • FiniteBanjo@lemmy.today

          Mate, all it does is predict the next word or phrase. It doesn’t know what you’re trying to do, and it has no ethics. When it fucks up, it’s going to be your fuckup, and since you relied on the bot rather than learning to do it yourself, you’re not going to be able to fix it.

        • Balder@lemmy.world

          I think they do help, but not nearly as dramatically as the companies earning money from them want us to think. It’s just a tool that helps, like a good IDE has helped in the past.

    • rottingleaf@lemmy.zip

      It’s not greed - it’s masqueraded violence being allowed, centralization, impunity, and general corruption, all supported by various IP, patent and “child protection” laws.

      No separate component is necessary; it’s a redundant system built very slowly and carefully.

      Referencing that quote about the blood of patriots, and another about the difference between journalism and public relations being in outrage versus offense, or the difference between a protest and a demonstration being in openly breaking the rules.

      EDIT: I meant it’s a general tendency. But IT today is as important as the police station, post office and telegraph were in 1917. One can also refer to that “means of production” controversy.

    • the_doktor@lemmy.zip

      AI can’t solve problems. This should be abundantly clear by now from the number of laughable and even dangerous “solutions” it gives while stealing content, destroying privacy, and sucking up tons of power to do so. Just ban AI.

  • Suavevillain@lemmy.world

    Linux has been great for me. I switched during the Windows 10 forced updates and haven’t been unhappy since. I hope more people at least give it a try. If you have a computer that can’t meet the Windows 11 requirements, it is worth a shot.

  • Doof@lemmy.world

    I am basically a layman. I do music production, and in the past VSTs never seemed to work properly, nor did the authentication software that some of them use. Has it gotten better in the past few years, and is there a specific one I should try? I have tried Ubuntu but nothing else, to be fair. Also, if I want to make a Plex server on an old PC, what would people recommend? Thanks to anyone who responds!

    edit - Thanks to all who responded, I have some direction now. Appreciated!

  • Diplomjodler@lemmy.world

    It’s not AI that is the problem; it’s the half-baked, insecure, data-harvesting products pushed by big corporations.

    • DarkThoughts@fedia.io

      The biggest joke is that the LLM in Windows runs locally; it uses your hardware and not some big external server farm. But you can bet your ass that they still use it to data-harvest the shit out of you.

      • Saik0@lemmy.saik0.com

        To me this is even worse, though. They’re using your electricity and CPU cycles to grab the data they want, which lowers their bandwidth bills.

        It happening “locally” while still sending all the metadata home is just a slap in the face.

        • NutWrench@lemmy.world

          Also, Copilot is going to be bundled with Office 365, a subscription service. You’re literally paying them to spy on you.

        • DarkThoughts@fedia.io

          Exactly. And if I use or even pay for an external LLM service then that’s also my decision. But they force this scheme onto every user, whether they want it or not. It’s like the worst out of all possible scenarios.

      • 👍Maximum Derek👍@discuss.tchncs.de

        That’s a pretty big joke, but I think the bigger joke is calling LLMs AI. We taught linear algebra to talk real pretty and now corps want to use it to completely subsume our lives.

        • grue@lemmy.world

          I think the bigger joke is calling LLMs AI

          I have to disagree.

          Frankly, LLMs (which are based on neural networks) seem a Hell of a lot closer to how actual brains work than “classical AI” (which basically boils down to a gigantic pile of if statements) does.

          I guess I could agree that LLMs are undeserving of the term “AI”, but only in the sense that nothing we’ve made so far is deserving of it.

          • Brickardo@feddit.nl

            Let’s agree to disagree then. An LLM has no notion of semantics; it just outputs the most likely word to follow what it has already written and the user’s input.

            On the contrary, expert systems from back in the 90s for, say, predicting the atomic structure of an element, work like a human brain on steroids. They feature an arbitrarily large search tree that the software knows how to iteratively prune according to a well-known set of chemical rules. We do the same when analyzing a set of options.

            Debugging “current” AI models, on the other hand, is impossible, because all we’re doing is specifying a composition of functions and forcing it to minimize a loss function. That’s all we’re doing. How can you currently tell that a certain model is going to work? Unless the mathematical theory ever catches up with the technology, we’ll never know until we execute the code.
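To make that contrast concrete, here is a toy sketch of the expert-system style described above: an explicit search tree pruned by hand-written, inspectable rules. The domain and the rules are invented for illustration only, not real chemistry:

```python
# Toy "expert system" sketch: depth-first search over (a, b, c) assignments,
# pruned by explicit rules. Every rejected branch has a human-readable reason,
# which is exactly what a learned composition of functions does not give you.

def violates_rules(partial):
    """Hand-written pruning rules (invented for illustration)."""
    if sum(partial) > 6:                  # rule 1: total too large
        return True
    if partial and partial[0] >= 3:       # rule 2: first value out of range
        return True
    return False

def search(partial=()):
    """Expand the tree depth-first, pruning a branch as soon as a rule fires."""
    if violates_rules(partial):
        return []                         # prune this whole subtree
    if len(partial) == 3:
        return [partial]                  # complete, rule-abiding candidate
    results = []
    for value in range(1, 4):
        results += search(partial + (value,))
    return results

print(search())  # every surviving candidate can be traced back through a named rule
```

Swap the two `if` statements for a trained network and the pruning still happens, but the “why” behind each decision disappears into the weights, which is the debugging problem the comment describes.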

            • grue@lemmy.world

              I’m not talking about interacting with it. I’m talking about how it’s implemented, from my perspective as a computer scientist.

              Let me say it more concretely: if even shitty expert systems, which are literally just flowcharts implemented in procedural code, are considered “AI” – and historically speaking, they are – then the bar is really fucking low. LLMs, which at least make an effort to kinda resemble the structure of biological intelligence, are certainly way, way above it.

              • degen@midwest.social

                I’m actually sad that the state of AI deserves the hate it gets. Neural networks are so sick; just going through the example of detecting a diagonal on a 2x2 grid was like magic to me. And they made me second-guess simulation theory for quite a while lmao
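For anyone who hasn’t seen that exercise, a rough version of it fits in a few lines of numpy. The architecture and training details below are illustrative assumptions (a 4-input network with one small hidden layer, trained by plain gradient descent), not necessarily the exact tutorial the commenter went through:

```python
# Sketch of "detect a diagonal on a 2x2 grid": flatten each grid to 4 inputs
# (top-left, top-right, bottom-left, bottom-right); only the two diagonals get label 1.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[(i >> b) & 1 for b in range(4)] for i in range(16)], dtype=float)
y = np.array([1.0 if r.tolist() in ([1, 0, 0, 1], [0, 1, 1, 0]) else 0.0 for r in X])

W1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=8);      b2 = 0.0           # output neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(10_000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2)              # predicted probability of "diagonal"
    grad_out = p - y                      # cross-entropy gradient wrt output pre-activation
    grad_h = np.outer(grad_out, W2) * (1 - h ** 2)
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum()
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

print(np.round(p, 2))  # should end up near 1 for the two diagonal grids, near 0 elsewhere
```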

                Tangentially, blockchain was a similar phenomenon for me. Or at least trust networks. One idea was to just throw away Certificate Authorities. Basically federate all the things, and this was before we knew about the fediverse. It gets all the hate because of crypto, but it’s cool tech. The CA thing would probably lead to a bad place too, though.

        • DarkThoughts@fedia.io

          Oh I agree. I typically put “AI” in quotation marks when using that term for LLMs, because to me they simply are not intelligent in any way. In my mind an AI would need an actual level of consciousness of sorts: the ability to form actual thoughts and learn things freely based on whatever senses it has. But “AI” is a term that’s good for marketing as well as fear-mongering, which we see a lot of in current news cycles and on social media. The problem is that most people do not even understand the basic principles of how LLMs work, which leads to a lot of misconceptions about their uses and misuses and what we should do about them. Weirdly enough, this makes LLMs both completely overhyped as a product and completely stigmatized as some nefarious tool. But I guess that fits today’s society, which kinda seems to have lost all nuance and reason.

      • Aniki 🌱🌿@lemm.ee

        Runs locally, mirrors remotely.

        To ensure a seamless customer experience when their hardware isn’t capable of running the model locally or if there is a problem with the local instance.

        microsoft, probably.

    • snooggums@midwest.social

      That is an accurate description of AI in common usage even if it isn’t an inherent aspect of AI.

    • Pennomi@lemmy.world

      Locally run AI could be great. But sending all your data to an external server for processing is really, really bad.

    • psycho_driver@lemmy.world

      All true, and all problems for which Linux has been a solution (in the computing world) for decades now.

      • Andromxda 🇺🇦🇵🇸🇹🇼@lemmy.dbzer0.com

        It’s not just Linux, but free and open-source software in general. And it’s not just desktop PCs that are plagued by this corporate spyware; it’s much worse in the mobile device landscape. The only real solution for mobile devices is GrapheneOS with FOSS software installed from the F-Droid marketplace. Browsers are also under attack by proprietary software corporations: Google just intentionally broke ad blockers on all Chromium-based browsers so it can generate more ad revenue. Last year, they tried to push a proposal that would have massively extended their monopoly on web browsers (WEI). All the streaming services are screwing their users over and increasing subscription prices while making the content library smaller. It’s such a fucking scam, and it’s almost sad to see how many people are dumb enough to fall for it.

        • bobs_monkey@lemm.ee

          To your last point: I think a significant number of people these days are aware of just how much corporations are bending us over, but most of us are just too exhausted at the end of the day to really make a huge stink about it, when all we want to do is vegetate on the couch for a few hours before we have to go to sleep, then wake up the next day and do it all over again. The current paradigm is horseshit, but the puppeteers make sure we work ourselves to the bone so that we’re too tired to really do anything about it aside from bitching online.

          • Andromxda 🇺🇦🇵🇸🇹🇼@lemmy.dbzer0.com

            Brave apparently wants to do that, but it’s not a great long-term solution. The feature should actually be supported upstream; that’s why Firefox is a much better option, and a better base for a fork to create a new browser.

        • slacktoid@lemmy.ml

          Why are some hands blue? Shouldn’t it just be whatever’s on the main body?

          • bobs_monkey@lemm.ee

            It’s a spin on the Hindu god Vishnu (I think there might be a few deities depicted with multiple arms, but that’s the first that comes to mind)

            • slacktoid@lemmy.ml

              This is Kali, but yeah, she is blue all over, body and hands. Also, so is Vishnu.

    • RememberTheApollo_@lemmy.world

      You’re not wrong. AI is just another tool to scrape cash to the top while eliminating jobs. Could it realize benefits like doing specialized research and testing? Sure…but again, the results of that work are lost human jobs and scraping money to the top. We can argue about advancing technology in a horse cart driver vs automobile thing (won’t anyone think about the poor farriers out of work?) but we’ve already done everything we can to eliminate blue collar jobs with as much automation as possible. Now AI is set to attack middle class jobs. Economically I don’t think that’s going to work out well.

      • nfh@lemmy.world

        I mean, the problem isn’t the existence/obviation of jobs, but what we do next when it happens. If the people whose jobs are automated away are left out with no money or employment, that’s a serious problem. If we as a society support them in learning something new that puts their skills to good use, and maybe even reduce the expected working hours of a full-time job to 35 or 32 hours a week, that’s an absolute win in my book.

        • RememberTheApollo_@lemmy.world

          Well, that’s the point. We don’t support them as a society. From education to health care, once you lose your job you’re SOL, and in this hyper-capitalist dystopia we keep tipping towards, I don’t see that changing.

        • barsquid@lemmy.world

          Online shopping has removed a lot of retail jobs. Instead of seeing a transition to different jobs or fewer hours, today we see people working multiple jobs to get by.

          The reason these things are making money is specifically because they increase efficiency (how much money a capitalist can make from existing capital) by removing human labor. Giving any portion of that to laborers is completely antithetical to its entire purpose.

          • Petter1@lemm.ee

            Yea, this is because the societal system is lagging behind and we have not made the right changes fast enough to prevent suffering from technological advancements, in my opinion.

      • werefreeatlast@lemmy.world

        But as someone pointed out elsewhere… AI can already take over the job of company CEOs: decision-making tools could make a group of technical people more effective than a CEO as we know it today.

        • RememberTheApollo_@lemmy.world

          Let’s see how many CEOs get replaced.

          Don’t forget the BoD are still human. They still want to profit by putting the AI in place of the CEO.

    • FiniteBanjo@lemmy.today

      I find the nightmare getting noticeably worse with LLMs, though. That’s not just correlation.

      • Andromxda 🇺🇦🇵🇸🇹🇼@lemmy.dbzer0.com

        AI is a cool feature, which makes it a great excuse for proprietary corporations to spy on their users. I’d say it’s one of the best excuse opportunities of the last few decades; only 9/11 was a better excuse to put everyone under corporate/government surveillance.

  • seaQueue@lemmy.world

    Linux may be the best way to avoid the <insert dystopian corporate feature> nightmare

    Always has been

    • xia@lemmy.sdf.org

      I’m convinced that Linux’s mere presence has already stymied the development of the worst possible technocratic nightmare. I shudder to think of the thick tech-chains that would bind us if there were no anchor/reference point… or if there were not even the small contingent that knows what it is like to use a liberating platform.

      • Baggie@lemmy.zip

        I agree with this. We already have a situation where we don’t have feasible alternatives to the primary option; Google Search comes to mind. With Linux, even if every company in the world goes down, nerds will still want to play with the technology.

  • archchan@lemmy.ml

    I choose to privately self-host open source AI models and stuff on Linux. It’s almost like technology is a tool and corps are the ones fucking things up. Hmmm, imagine that.

    • nexussapphire@lemm.ee

      It’s so fun to play with offline AI. It doesn’t have the creepy underpinning of knowing that art, journalism, and musings from social media were blatantly stolen from the internet and sold as a service for profit.

      Edit: I hate theft, and if you think theft is okay for training LLMs, go ahead and dislike this comment. I don’t feel bad about what I said; local offline AI is just better because it doesn’t work on the premise of backroom deals and blatant theft. I will never use an AI like DALL-E when there is a talented artist trying to put food on the table with a skill they honed for years. If you condone stealing, you are a cheap, heartless coward.

      • Teanut@lemmy.world

        I hate to break it to you, but if you’re running an LLM based on (for example) Llama, the training data (corpus) that went into it was still large parts of the Internet.

        The fact that you’re running the prompts locally doesn’t change the fact that it was still trained on data that could be considered protected under copyright law.

        It’s going to be interesting to see how the law shakes out on this one, because an artist going to an art museum and doing studies of those works (and let’s say it’s a contemporary art museum where the works wouldn’t be in the public domain) for educational purposes is likely fair use - and possibly encouraged to help artists develop their talents. Musicians practicing (or even performing) other artists’ songs is expected during their development. Consider some high school band practicing in a garage, playing some song to improve their skills.

        I know the big difference is that it’s people training vs a machine/LLM training, but that seems to come down to not so much a copyright issue (which it is in an immediate sense) as a “should an algorithm be entitled to the same protections as a person? If not, what if real AI (not just an LLM) is developed? Should those entities be entitled to personhood?”

        • nexussapphire@lemm.ee

          I hate to break it to you, but not all machine learning is LLM-based. I’ve been messing with neural-network-based TTS from a small project called Piper. I’m looking into an image recognition neural network to write software for and train myself. I might try writing it myself for fun 🤔

          I’m not interested in anything that uses stolen data like that so my options are limited and relegated to incredibly focused single purpose tools or things I make myself with the tools available.

          I’d love to play with image generation and large language models but until all the legal stuff is worked out and individuals get paid for their work I’m not touching it.

          To me it’s as cut and dried as this: if it’s the difference between an individual becoming their own boss or making a better living and a corporation growing its market cap, I’ll always choose the individual. I know there’s a possibility of that growth resulting in more jobs, but I’d rather have an environment where small businesses opening breeds competition and overall improves everyone’s life. Let’s not give the keys over to companies like Microsoft and close more doors.

          I don’t care about the discussion of true AI having rights. It’s only going to be used to make the wealthy wealthier.

          • hellofriend@lemmy.world

            All LLMs are based on neural networks. Furthermore, all neural networks need training, regardless of whether they’re LLMs or some other form of machine learning. If you want to ensure there’s no stolen material in the neural net, then you have to train it yourself with material that you hold the copyright to.

          • nexussapphire@lemm.ee

            Sorry, I feel strongly about this. Play with it all you want; it’s really cool shit! But please don’t pay for access to it, and if you need some art or a professional write-up, please just pay someone to do it.

            It’ll mean so much to your fellow man in these uncertain times and the quality will be so much better.

      • nexussapphire@lemm.ee

        I’m on his side; I don’t get the dislikes. Maybe he likes massive corporations stealing people’s data and putting artists and journalists out of work.