• surph_ninja@lemmy.world · 1 month ago · +7/-8

    No. “Hate speech” is an intentionally broad term designed to be abused and weaponized against unpopular speech.

    A better approach would be more specific. No racist speech. No homophobic speech. No misogynistic speech. Etc. Leaving it open-ended and subjective is setting it up for failure.

  • pjwestin@lemmy.world · 30 days ago · +8/-1

    It depends on how much of an absolutist you want to be. No government allows absolute freedom of speech. Libel, slander, and incitement of violence are all forms of speech that are illegal in basically every country. If your platform refuses to remove these forms of speech, you would be protecting what is generally not considered to be free speech, and it’s possible you could even be held legally liable for allowing that kind of speech to spread on your platform.

    If you decide not to be a free speech absolutist, and instead define free speech as legal speech, then things get complicated. In the U.S., the Supreme Court has held multiple times that hate speech is protected under the First Amendment, so censoring hate speech would mean your platform wasn’t allowing all forms of, “free speech.” However, the U.S. has much broader protections on speech than most Western countries, and hate speech is illegal in much of Europe.

    So, TL;DR: free speech is a sliding scale, and many countries wouldn’t consider hate speech to be a protected form of speech. By those standards, you could have a platform that censors hate speech but still maintains what is considered free speech. However, by other countries’ standards, you would be censoring legal speech.

  • atro_city@fedia.io · 30 days ago · +6/-3

    No. Absolute free speech means allowing people to say whatever they like, and that means anything. You can spam somebody with messages telling them to kill themselves. You can put a loudspeaker in front of somebody’s house and play a message on loop telling them to kill themselves. You can openly call for somebody to kill another person and not get in trouble for inciting a murder. You can shout down anybody you like and tell them to shut up or threaten them; all you have to do is be louder and look like you have the means to kill them in order to intimidate. And that will all be fine, because if someone tries to stop you from expressing your opinion, they will be infringing on your right to absolute free speech.

    It does however create a paradox: if someone uses their free speech to infringe on somebody else’s free speech, what can be done? You can’t tell the person infringing to stop because that would infringe on their free speech. After all, they have a right to absolute free speech, don’t they? So, if you say “your right to free speech ends where the right of somebody else’s begins” then it’s not absolute anymore.

    It also opens a can of worms as to what counts as expressing free speech and what counts as suppressing it. Does blocking somebody on a platform infringe on their right? Does muting? If the rule is “right to speak, but no right to be heard”, what counts as speech? Does typing and hitting send count as free speech? Well, I could give you an app with a textbox and a send button, disconnect you from the internet, and you could write everything you want and hit send; it never leaves your computer, but you did express yourself, didn’t you? Or maybe the sounds coming out of your mouth count as speech/expression? Well, I could gag you; you can still make sounds, and that’s speech, right?

    So no. I don’t believe absolute free speech can exist.

    • Swordgeek@lemmy.ca · 30 days ago · +17/-1

      Definitely read the article, but TL;DR: it’s acceptable (and necessary) to shut down Nazis.

  • gedaliyah@lemmy.world · 1 month ago · +3/-1

    No. Free speech tends to mean that the most powerful group determines and enforces norms through aggression, harassment, etc. Speech has consequences, and some of those consequences include harms (threats, doxing, stalking, etc.).

    Mastodon is one of the freest online speech platforms I’ve been a part of, and yet also has the most rigorously enforced code of conduct. More people are free to say more things, and feel confident that doing so does not put them in danger.

    Before online platforms emerged, the ability to spread a message was dependent on your ability to support it financially and logistically. Anyone could publish a newspaper on any topic, but unless you had a racist millionaire backing you up, your message wouldn’t get very far (ahem, Dearborn Independent). Online publishing has been a haven for hate groups.

  • NeilBrü@lemmy.world · 30 days ago · +4/-2

    Short answer: no. But one should define one’s terms, especially terms with legal implications.

    “Hate Speech” always sounded a bit Orwellian to me. Just like “Homeland Security”. People should be allowed to speak about what they hate, even if it’s bigoted, racist, sexist, etc., if free thought and inquiry are valuable human rights.

    In general, I believe the jurisprudence of free speech in our country (USA) essentially says that beyond libel, slander, incitement to violence, or sedition, the government can’t imprison you for expression or forcibly silence you in a public forum.

    Private organizations and companies can regulate speech within their domains and property, to the extent that they don’t violate other laws or the rights of other parties within or outside those domains and property.

    I think that’s pretty fair.

  • masterspace@lemmy.ca · 1 month ago · +59/-5

    Free speech as in, the freedom to express valid political speech and criticize the current government? Sure. Easy.

    Free speech as in, the ability to say whatever the hell you want, including threatening, harassing, or inciting hatred and genocide against people? No. No you cannot.

      • lud@lemm.ee · 30 days ago · +7

        It’s pretty exhausting having to block everyone all the time, though. That’s one small benefit of Lemmy: you can block instances.

        • Demdaru@lemmy.world · 30 days ago (edited) · +2/-5

          I mean, yeah. But also not everyone.

          It worked well for so long because it is a good solution. Allow users to block and let everything fly as long as it’s not a personal attack. The community will relatively quickly sort itself out.

          Sadly, today there are exceptions to the block button working >:(

          Edit: Hell, isn’t Bluesky pretty much riding this today? People made blocklists and give fuck all about the less nice side of the site. And people who are interested can keep seeing stuff.

      • silly goose meekah@lemmy.world · 29 days ago · +4

        Another comment explained it pretty nicely:

        But you can’t have [a platform with absolute free speech] while allowing for hate speech either because hate speech silences the voices of its target.

        It’s basically the tolerance paradox but with free speech.

      • Darkenfolk@dormi.zone · 29 days ago · +1

        In general it’s the wider community that decides all that.

        There are consequences to holding and sharing views that are disagreeable to the community in which people share them.

        People are free to air those thoughts, but others are also free to shun them for those thoughts.

    • m0darn@lemmy.ca · 30 days ago · +5/-2

      I think it may be possible if you understand a difference between the right to speak and the right to be heard.

      I.e., the right to say something doesn’t create an obligation in others to hear it, nor to hear you in the future.

      If I stand up on a milk crate in the middle of a city park to preach the glory of closed source operating systems, it doesn’t infringe my right to free speech if someone posts a sign that says “Microsoft shill ahead” and offers earplugs at the park entrance. People can choose to believe the sign or not.

      A social media platform could automate the signs and earplugs by allowing users to set thresholds for the discourse acceptable to them on different topics; the platform could then evaluate (through data analysis or crowd-sourced feedback) whether comments and/or commenters met those thresholds.

      I think this would largely stop people from experiencing hate speech (once they had their thresholds appropriately dialed in) and disincentivize hate speech without actually infringing anybody’s right to say whatever they want.
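
      Something like this rough sketch of the idea (purely hypothetical; the per-topic scores could come from crowd feedback or a classifier, and none of these names are a real API):

        # Hypothetical sketch only: per-user thresholds instead of platform-wide removal.
        # The names and the source of the per-topic scores are invented for illustration.
        from dataclasses import dataclass, field

        @dataclass
        class Comment:
            author: str
            text: str
            scores: dict = field(default_factory=dict)  # per-topic objectionability in [0, 1]

        @dataclass
        class UserPrefs:
            thresholds: dict = field(default_factory=dict)  # per-topic tolerance in [0, 1]
            default_threshold: float = 0.8

        def visible_to(comment: Comment, prefs: UserPrefs) -> bool:
            """Hide a comment from this user only if it exceeds their tolerance on some topic."""
            for topic, score in comment.scores.items():
                if score > prefs.thresholds.get(topic, prefs.default_threshold):
                    return False
            return True

        # The comment is still said and stored; it just isn't shown to this particular user.
        feed = [
            Comment("alice", "Closed-source OSes are great, actually", {"flamebait": 0.3}),
            Comment("troll", "<hateful rant>", {"hate": 0.95}),
        ]
        prefs = UserPrefs(thresholds={"hate": 0.1})
        print([c.author for c in feed if visible_to(c, prefs)])  # -> ['alice']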

      There would definitely be challenges though.

      If a person wants to be protected from experiencing hate speech, they need to empower someone (or something) to censor media for them, which is a risk.

      Properly evaluating content for hate speech or otherwise objectionable speech is difficult. Upvotes and downvotes are an attempt to do this in a very coarse way. That system assumes that all users have a shared view of what content is worth seeing on a given topic and that all votes are equally credible. In a small community of people with similar values who aren’t trying to manipulate the system, it’s a reasonable approach. It doesn’t scale that well.

      • masterspace@lemmy.ca · 30 days ago (edited) · +7/-2

        I think you misunderstand the point of hate speech laws. It’s not about not hearing it; it’s because people rightly recognize that spreading ideas can in itself be dangerous, given how flawed human beings are and how some ideas can incite people towards violence.

        The idea that all ideas are harmless and that spreading them to others has no effect is flat-out divorced from reality.

        Spreading the idea that others are less than human and deserve to die is an act of violence in itself, just a cowardly one, one step divorced from action. But one that should still be illegal in itself. It’s the difference between ignoring Nazis and hoping they go away and going out and punching them in the teeth.

        • m0darn@lemmy.ca · 30 days ago (edited) · +2

          I support robust enforcement of anti-hate-speech laws. In fact, I’ve reported hate speech / a hate crime to the police before.

          We’re not talking about laws, we’re talking about social media platform policies.

          Social media platforms connect people from regions with different hate speech laws, so “enforcing hate speech laws” is impossible to do consistently.

          If users engage in crimes using the platform, they are subject to the laws that apply to them.

          I don’t care that it’s legal to advocate for genocide where a preacher is located, or in the corporation’s preferred jurisdiction; I don’t want my son reading it.

          The question was: is there a way a platform can be totally free-speech and stop hate speech? I think the answer is “kinda”.

  • AbouBenAdhem@lemmy.world · 1 month ago · +1

    Depends on whether you define a “free speech platform” as a platform that doesn’t impose its own constraints on speech, or a platform that enables speech without constraints. Because there are social pressures that also constrain speech, and hate speech can be a tool of those pressures.

  • sith@lemmy.zip · 28 days ago · +3

    Many good things have already been said, but I want to add one thing.

    If you want a coherent framework that allows free speech but no hate speech, look at consequentialist ethics, as opposed to rule-based ethics (free speech absolutists).

    I.e., we want to maximize the amount of common knowledge (the flow of information, or free speech) because it leads to the best results over time. If we want to maximize the flow of information, we cannot be absolutists, because that puts us in a local optimum (failing to optimize properly). In other words, we need to restrict certain speech to achieve maximum effective free speech.

  • Feathercrown@lemmy.world · 29 days ago · +4

    Others have brought up the inherent tension in the idea, but there are some potential avenues that try to avoid the pitfalls of the issue. For example, distributed moderation, where you subscribe to other users’ moderation actions, allows anyone to post something while also allowing anyone to ignore them based on the moderation actions of those that they trust. If you combine this with global moderation of outright illegal content and mandatory tagging of NSFW/NSFL posts, which are generally considered to be necessary or at least understandable restrictions, then you have a somewhat workable system. You could argue that a platform that allows community moderators to curate their own communities also allows free speech and blocks hate speech, but that only works if the mods are always fair, which… yeah, no lol
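
    As a very rough sketch of what subscribing to other users’ moderation actions could look like (all identifiers here are invented; this isn’t any real platform’s API):

      # Hypothetical sketch of distributed moderation: users publish blocklists,
      # and each reader's feed is filtered by the lists of moderators they trust.
      def effective_blocklist(own_blocks: set[str], subscribed_lists: list[set[str]]) -> set[str]:
          """Union of my own blocks and the blocklists I subscribe to."""
          combined = set(own_blocks)
          for trusted in subscribed_lists:
              combined |= trusted
          return combined

      def filter_feed(posts: list[dict], blocked: set[str]) -> list[dict]:
          """Nothing is removed platform-wide; blocked authors just aren't shown to me."""
          return [p for p in posts if p["author"] not in blocked]

      # Usage: subscribe to a trusted moderator's list on top of personal blocks.
      my_blocks = {"spammer@example.social"}
      trusted_mod_list = {"hate_account@example.social"}
      feed = [
          {"author": "friend@example.social", "text": "hello"},
          {"author": "hate_account@example.social", "text": "<abuse>"},
      ]
      print(filter_feed(feed, effective_blocklist(my_blocks, [trusted_mod_list])))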

    • Custodian1623@lemmy.world · 29 days ago · +3

      It depends on whether the user in question is bothered by seeing hateful things or by the existence of hateful speech. The latter tends to seek out and share hate speech (to complain about it), whereas the former would rather block it out completely. Both of these users may believe they want the same thing.

  • LouNeko@lemmy.world · 30 days ago · +2

    I think to achieve that you’ll have to redefine the upvote/downvote system. Currently, upvoting and downvoting are synonymous with “I agree” or “I disagree”, but what they should represent is whether a contribution adds or subtracts value from the conversation.
    This way, if somebody wants to troll, their contribution will be vanquished.
    Furthermore, hate speech is usually backed by topics that are indeed worthy of discussion, but are often ill-expressed and prevent any form of civil discussion.
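
    As a rough illustration of splitting the two meanings apart (hypothetical code, not any platform’s actual voting system):

      # Hypothetical sketch: record "agreement" and "adds value to the conversation"
      # as separate axes, and rank threads only by the second one, so disagreement
      # alone can't bury a comment.
      from collections import defaultdict

      votes = defaultdict(lambda: {"agree": 0, "disagree": 0, "adds_value": 0, "noise": 0})

      def cast_vote(comment_id, agree=None, adds_value=None):
          if agree is not None:
              votes[comment_id]["agree" if agree else "disagree"] += 1
          if adds_value is not None:
              votes[comment_id]["adds_value" if adds_value else "noise"] += 1

      def quality_score(comment_id):
          v = votes[comment_id]
          rated = v["adds_value"] + v["noise"]
          return v["adds_value"] / rated if rated else 0.5  # agreement is ignored for ranking

      # A comment many disagree with can still rank well if people find it worth discussing.
      cast_vote("c1", agree=False, adds_value=True)
      cast_vote("c1", agree=False, adds_value=True)
      cast_vote("c2", agree=True, adds_value=False)
      print(quality_score("c1"), quality_score("c2"))  # 1.0 0.0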

  • hendrik@palaver.p3x.de · 30 days ago · +2

    Well, if you allow everyone to say everything, the one yelling the loudest wins, and the quieter people don’t get to speak freely. It’s also going to send hate, violence, doxxing, state secrets, etc. into the world, harming other people and limiting their freedom. Or you limit free speech. So either way, there is no such thing as free speech. It contradicts itself.

      • deranger@sh.itjust.works · 30 days ago (edited) · +3

        It’s inherently exploitative due to the age difference. Free speech doesn’t cover violating someone else’s rights like that.