A survey of more than 2,000 smartphone users by second-hand smartphone marketplace SellCell found that 73% of iPhone users and a whopping 87% of Samsung Galaxy users felt that AI adds little to no value to their smartphone experience.

SellCell only surveyed users with an AI-enabled phone – that’s an iPhone 15 Pro or newer, or a Galaxy S22 or newer. The survey doesn’t give an exact sample size, but more than 1,000 iPhone users and more than 1,000 Galaxy users took part.

Further findings show that most users of either platform would not pay for an AI subscription: 86.5% of iPhone users and 94.5% of Galaxy users would refuse to pay for continued access to AI features.

From the data listed so far, it seems that people just aren’t using AI much. For both iPhone and Galaxy users, only about two-fifths of those surveyed have even tried AI features – 41.6% for iPhone and 46.9% for Galaxy.

So, that’s a majority of users not even bothering with AI in the first place, and a general disinterest in AI features from the user base overall, despite both Apple and Samsung making such a big deal out of AI.

  • Dr. Moose@lemmy.world · 2 days ago

    Tbf most people have no clue how to use it, or even understand what “AI” is.

    I just taught my mom how to use circle to search and it’s a real game changer for her. She can quickly look up on-screen items (like plants she’s reading about) from an image, and the on-screen translation is incredible.

    Also, circle to search gets around link and text copy blocking, giving you back the same freedoms you had on a PC.

    Personally I’d never go back to a phone without circle to search – it’s so underrated and a giant shift in smartphone capabilities.

    It’s very likely that we’ll have full live screen-reading assistants in the near future, which can perform circle-to-search-like functions and even make visual modifications live. It’s easy to dismiss this as a gimmick, but there’s a lot of incredible potential here, especially for casual and older users.

    • Hoimo@ani.social · 2 days ago

      Google Lens already did that, though – all you need is decent OCR and an image classification model (which is a precursor to the current “AI” hype, but actually useful).
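      As a toy illustration of the classification step the commenter mentions (not how Lens actually works), here is a nearest-centroid classifier over hand-picked feature vectors; the classes and 2-D “features” are invented for the sketch. Real systems extract features with a neural network first, but the final decision can be this simple:

```python
from math import dist

# Toy nearest-centroid classifier: each class is summarized by the mean
# ("centroid") of its training feature vectors; a new vector is assigned
# to the class whose centroid is closest.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def train(labeled):
    """labeled: dict mapping class name -> list of feature vectors."""
    return {name: centroid(vecs) for name, vecs in labeled.items()}

def classify(model, vector):
    """Return the class whose centroid is nearest to `vector`."""
    return min(model, key=lambda name: dist(model[name], vector))

# Hypothetical 2-D "features" (e.g. leaf roundness, greenness) per class.
model = train({
    "fern":   [(0.2, 0.9), (0.3, 0.8)],
    "cactus": [(0.8, 0.3), (0.9, 0.2)],
})
print(classify(model, (0.25, 0.85)))  # -> fern
```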

        • Hoimo@ani.social · 1 day ago

          An image classification model isn’t really “AI” the way it’s marketed right now. If Google used an image classification model to give you holiday recommendations or answer general questions, everyone would immediately recognize it was being used wrong. But use a token prediction model for purposes totally unrelated to predicting the next token and people are like “ChatGPT is my friend who tells me what to put on pizza, and there’s nothing strange about that”.
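          The “token prediction model” framing can be made concrete with a toy bigram predictor (corpus and function names invented for the sketch). LLMs use vastly richer context and statistics, but the training objective is still “predict the next token”:

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a corpus,
# then always predict the most frequent successor.

def train_bigrams(text):
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(model, word):
    """Most frequent successor of `word`, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "i like pizza . i like pasta . i like pizza ."
model = train_bigrams(corpus)
print(predict_next(model, "like"))  # -> pizza
```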

            • Hoimo@ani.social · 1 day ago

              Neither LLMs nor ICMs are AI in any sense of the word, is my point. LLMs happen to give the illusion of intelligence because of their language-based nature, but they’re not fundamentally different from ICMs.

    • Ledericas@lemm.ee · 2 days ago

      Even for the above it isn’t useful. Professors have been abusing it because they’re too lazy to check someone’s writing themselves, and AI detectors have mistakenly flagged papers as written by AI. Medical use would be just as problematic – it would be weird if they used it to make a diagnosis without discerning and ruling out other diseases with similar symptoms or results.

  • Coreidan@lemmy.world · 2 days ago

    I don’t see how AI can benefit my phone experience.

    I use my phone to make phone calls and for text messaging. Where does AI fit in? It doesn’t.

    • graphene@lemm.ee · 2 days ago

      But imagine!!! What if AI could write your text messages for you and convincingly hold phone calls??? Then you wouldn’t have to use your phone to interact with human beings at all!!!

      ~Why does anyone want this?~

  • clonedhuman@lemmy.world · 21 hours ago

    The consumer-side AI that a handful of multi-billion-dollar companies keep peddling to us is just their attempt to justify AI to us. Otherwise, it consumes MASSIVE amounts of our energy capacity and is primarily being used in ways that harm us.

    And, of course, there’s nothing they direct at us that isn’t ultimately (and solely) for their benefit – our every use of their AI helps train their models, and eventually it will simply be groups of billionaires competing against one another to form the most powerful model that allows them to dominate us and their competitors.

    As long as this technology remains controlled by those whose entire existence is organized around domination, it will be a net harm to all of us. We’d have to free it from their grip to make it meaningful in our daily lives.

  • TylerBourbon@lemmy.world · 2 days ago

    I do not need it, and I hate how it’s constantly forced upon me.

    Current AI feels like the Metaverse. There’s no demand or need for it, yet they’re trying their damndest to shove it into anything and everything, like it’s a miracle answer to problems that don’t even exist.

    And all I see it doing is making things worse. People use it to write essays in school; that just makes them dumber, because they don’t have to show they understand the topic they’re writing about. And considering AI doesn’t exactly have a flawless record when it comes to accuracy, relying on it for anything is just not a good idea currently.

    • Akito@lemm.ee · 2 days ago

      If they write essays with it and the teacher isn’t checking their actual knowledge, the teacher is at fault, not the AI. AI is literally just a tool, like a pen or a ruler in school – except much, much bigger and much, much more useful.

      It is extremely important to teach children how to handle AI properly and responsibly, or else they will be fucked in the future.

      • TylerBourbon@lemmy.world · 1 day ago

        I agree it is a tool, and they should be taught how to use it properly, but I disagree that it is like a pen or a ruler. It’s more like a GPS or a Roomba. Yes, they are tools that can make your life easier, but it’s better to learn how to read a map and operate a vacuum or a broom than to be taught to rely on the tool doing the hard work for you.

  • SocialMediaRefugee@lemmy.world · 3 days ago

    I use ChatGPT for things like debugging error codes, but I have to be explicit with as much detail as possible or it will give me all sorts of inapplicable crap.
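    The commenter’s “be explicit with as much detail as possible” approach can be sketched as a small prompt-builder that forces you to supply every piece of context up front; the field names and example values here are invented for illustration:

```python
# Hypothetical debugging-prompt template: filling in every field gives the
# model the context it needs instead of a vague one-liner.

def debug_prompt(language, error, snippet, tried, environment):
    return (
        f"Language: {language}\n"
        f"Environment: {environment}\n"
        f"Exact error message:\n{error}\n"
        f"Minimal code that reproduces it:\n{snippet}\n"
        f"What I already tried: {tried}\n"
        "Explain the likely cause and a fix."
    )

prompt = debug_prompt(
    language="Python 3.12",
    error="KeyError: 'user_id'",
    snippet="row = payload['user_id']",
    tried="printed payload; key is actually 'userId'",
    environment="FastAPI request handler",
)
print(prompt.splitlines()[0])  # -> Language: Python 3.12
```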

  • QuarkVsOdo@lemmy.world · 2 days ago

    Generative AI is peaking in its ability to produce cringe boomer memes from a prompt. Everything else… MEH.

    • whaleiam@lemm.ee · 2 days ago

      Yeah, but the amount of energy these autocorrect search bars use is absolutely insane and disgusting, and people are going without because of it. And, going by the study, most people don’t even use it regularly. It’s a cool novel tool, but really it’s just fancy Google.

  • thingAmaBob@lemmy.world · 3 days ago

    Unless it can be a legit personal assistant, I’m not actually interested. Companies hyped AI way too much.

    • Ledericas@lemm.ee · 2 days ago

      Seems like they hyped it to themselves more than to the customers they tried to force-feed it to.

  • ATDA@lemmy.world · 2 days ago

    I’d take Bixby back over this forced AI crap.

    I mean I wouldn’t but you know…

  • Zak@lemmy.world · 3 days ago

    The AI thing I’d really like is an on-device classifier that decides with reasonably high reliability whether I would want my phone to interrupt me with a given notification or not. I already don’t allow useless notifications, but a message from a friend might be a question about something urgent, or a cat picture.

    What I don’t want is:

    • Ways to make fake photographs
    • Summaries of messages I could just skim the old-fashioned way
    • Easier access to LLM chatbots

    It seems like those are the main AI features bundled on phones now, and I have no use for any of them.
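    A rough sketch of the kind of on-device notification triage described above, using simple keyword scoring; the keyword list, threshold, and function name are all invented for illustration, and a real implementation would be a trained classifier rather than hand-written rules:

```python
# Hypothetical keyword-scoring triage: score each incoming message and
# interrupt only above a threshold. This only illustrates the decision a
# trained on-device model would make.

URGENT_WORDS = {"urgent", "emergency", "asap", "call", "hospital"}
THRESHOLD = 1  # invented cutoff for this sketch

def should_interrupt(sender_is_friend, text):
    if not sender_is_friend:
        return False  # in this sketch, non-friend notifications never interrupt
    score = sum(1 for w in text.lower().split() if w.strip("?!.,") in URGENT_WORDS)
    return score >= THRESHOLD

print(should_interrupt(True, "Can you call me ASAP? It's urgent"))  # -> True
print(should_interrupt(True, "look at this cat picture"))           # -> False
```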

    • drthunder@midwest.social · 2 days ago

      That’s useful AI that doesn’t take billions of dollars to train, though. (It’s also a great idea and I’d be down for it.)

      • dustyData@lemmy.world · 2 days ago

        You mean paying money to people to actually program it? In fair exchange for their labor and expertise, instead of stealing it from the internet? What are you, a socialist?

        /s