The tech mogul’s platform is the first to be hit with charges under the EU’s new social media law.

The European Union is calling Elon Musk to order over how he turned social media site X into a haven for disinformation and illegal content.

The EU Commission on Friday formally charged X with failing to respect EU social media law. The platform could face a sweeping multimillion-euro fine in a pioneering case under the bloc’s new Digital Services Act (DSA), a law designed to clamp down on toxic and illegal online content and algorithms.

Musk’s X has been in Brussels’ crosshairs ever since the billionaire took over the company, formerly known as Twitter, in 2022. X has been accused of letting disinformation and illegal hate speech run wild, rolling out misleading authentication features and blocking external researchers from the tools needed to scrutinize how malicious content on the platform spreads.

The European Commission oversees X and some two dozen of the world’s largest online platforms, including Facebook and YouTube. The EU executive’s probe into Musk’s firm, opened in December 2023, was the first formal investigation under the law, and Friday’s charges are the first ever issued under the DSA.

Infringements of the DSA could lead to fines of up to 6 percent of X’s global revenue.

  • blazera@lemmy.world · ↑12 ↓35 · 4 months ago

    Disinformation is words

    It spreads on Twitter, it spreads on Facebook, on TikTok, on YouTube, on Discord, in text messages, books, speeches, talking to coworkers. This is like the war on drugs, except it’s even easier to circumvent any bans. You’re not gonna beat disinformation by trying to block it.

    • fluxion@lemmy.world · ↑20 · 4 months ago

      You’re also not going to beat it by not trying to deal with it. Twitter’s transition from unreliable source to unbridled dumpster fire of disinformation and hate campaigns correlates directly with Musk taking specific steps to cater to those audiences while ripping out the facilities to filter it.

      It’s not all or nothing; like basically everything else in life, it requires balance. Just like you don’t have to “beat” drugs to help drug users find a better path, you don’t have to “beat” disinformation in order to help stop it from spreading. You can take steps, when and where they make sense, to limit the damage and give people a chance to pull their heads out of the cesspool and get enough air that society can function more or less in tune with reality.

      • blazera@lemmy.world · ↑2 ↓13 · 4 months ago

        Just like you don’t have to “beat” drugs to help drug users find a better path, you don’t have to “beat” disinformation in order to help stop it from spreading

        The war on drugs notably did not involve helping users find a better path; it only tried to block the path of drug use, with pretty disastrous results, as drug users became pariahs pushed toward more dangerous sources to get around the blocks.

        The only thing we are talking about here is blocking one path of disinformation. They’ll get pushed to the fringes and to more dangerous sources of misinformation.

        • fluxion@lemmy.world · ↑14 · 4 months ago

          I’m not talking about the war on drugs; I’m talking about the fact that rehab facilities, education, and counseling/medical aid are helpful in curtailing an out-of-control drug epidemic and reducing the negative impact on society.

          Just because the “war on drugs” failed doesn’t mean drug-related issues can’t be addressed to some degree. You focus on completely blocking misinformation so it doesn’t exist; I’m trying to point out other considerations: ranking, exposure, flagging/reviewing posts, community notes to provide additional context. These are all things that exist, that are used heavily, that impact our information feeds 24/7, and that will continue to be used to significant effect on the general population, whether for good or for bad. More likely the latter if everyone adopts perspectives like yours.

          • blazera@lemmy.world · ↑1 ↓9 · 4 months ago

            I am talking about the war on drugs, as that is what this is akin to: purely trying to block disinformation.

            All of the “other considerations” you’ve added, except for community context, are just tools to block. Like the war on drugs using drug tests, drug-sniffing dogs, report hotlines: methods to find drugs and punish people for them.

            Community context is a good example of something that does work; it’s akin to educating people about drugs rather than trying to block them. But Twitter has that tool, and Twitter is being punished for not blocking misinformation.

            • fluxion@lemmy.world · ↑11 · 4 months ago

              The specific charges noted in the article have nuances similar to the examples I gave. They are fixable, addressable and impactful. They do not require a full block on misinformation, which is obviously not something that’s possible to enforce effectively and not what’s being expected of X.

              • blazera@lemmy.world · ↑8 ↓2 · 4 months ago

                I just wrote out a long response, ending with the idea that if misinformation gets removed from Twitter, it’s only because it’s moved somewhere less visible to the public. And then I realized I was arguing disinformation would be less visible to the public.

                Kick Musk’s ass, EU

                • etuomaala@sopuli.xyz · ↑1 · 4 months ago

                  Bravo, blazera. It’s always nice to see some concern for the truth on the internet. I mean this very unsarcastically.

                  I don’t think I’d ever seen somebody publicly change their mind on the internet until I came here. Perhaps there is something special about lemmy.

                  The internet needs more of this. Maybe lemmy can amplify public changes of mind like this somehow…

    • floofloof@lemmy.ca · ↑4 · 4 months ago

      The article states that the EU is objecting to a couple of particular things:

      The EU said X’s blue checks policy was deceptive and had been abused by malicious actors. The checks were initially created as a way to verify users like government officials, public figures and journalists, in an effort to limit misinformation, but Musk changed that policy, allowing users to buy blue check accounts. The new policy has been abused by fraudsters to impersonate U.S. politician Hillary Clinton and author J.K. Rowling, among many other celebrities.

      The platform also didn’t respect an obligation to provide a searchable and reliable advertisement repository and limited access to its public data to researchers, the Commission said.

      This is not some amorphous campaign against disinformation; it’s a challenge to two specific policies of X.

    • Avid Amoeba@lemmy.ca · ↑35 ↓1 · 4 months ago

      When the vast majority of it spreads on a handful of platforms, you can very much beat it by blocking it there. We’re not doing that, not because we can’t, but because letting it spread is profitable. Prior to the invention of modern social media, the problem of misinformation was much smaller. Yes, of course it will never disappear, but we don’t need it to disappear.

    • eltrain123@lemmy.world · ↑8 · 4 months ago

      No, but he is finding out why Twitter had all of its policies on combatting misinformation before he took over and gutted the staff… to avoid getting sued. You can say anything you want in America and the government can’t tell you that you aren’t allowed to say it, but you are still accountable for the damages caused by what you say… just ask Alex Jones.

      But operating in other countries doesn’t afford the same protections from government scrutiny.

      Disinformation campaigns are part of the reason social media is causing so much social strife in the world. It is not outside a logical line of thought that governments will attempt to minimize the damage from platforms like Twitter when they can. You may not beat misinformation, but you can minimize the financial incentive to promote it if you fine the fuck out of it when you find it.

      • blazera@lemmy.world · ↑1 ↓12 · 4 months ago

        You’re mixing up the accusation that Twitter isn’t stopping misinformation with an accusation that Twitter itself is speaking misinformation. We’re talking about them being held responsible for what other people say.

        • Hacksaw@lemmy.ca · ↑8 · 4 months ago

          If you make a deal with someone to come onto your front porch every day yelling hate speech into your loudspeaker, I think you’ll find it’s pretty easy to be held accountable for what other people say.

          Second, if you’ll remember, Twitter makes money from showing ads alongside this speech. It’s not like they’re doing this out of the goodness of their hearts. Profiting from hate speech isn’t going to be looked at kindly.

                • Hacksaw@lemmy.ca · ↑3 · 4 months ago

                  That’s not what I said. In neither situation does the deal stipulate that the person HAS to use the loudspeaker for hate speech. I wish I could blame your reading comprehension, but it’s painfully obvious you’re arguing in bad faith, since this is the pedantic detail you’re stuck on instead of the rest of my argument.

                  Every Twitter user makes a deal with Twitter to get an account. This deal includes what’s acceptable behaviour. If Twitter’s policy allows hate speech, then it’s Twitter’s fault their platform is spreading hate speech. If Twitter’s policy prohibits hate speech, then it’s still their fault, because they’re not enforcing their policy. This is something Twitter had no problem with before their degenerate new owner fired the enforcement team.

                  Now let’s see what pedantic detail you get stuck on this time instead of facing the fact that Twitter is liable for enabling hate speech to spread faster than ever before!

                  • blazera@lemmy.world · ↑2 ↓4 · 4 months ago

                    All social media has had this problem for as long as it’s existed.

                    Musk is terrible, but he didn’t buy Twitter until after Trump’s presidency. After COVID. Don’t underestimate how much misinformation has occurred in the past.