This could be a tool that works across the entire internet, but in this case I’m mostly thinking about platforms like Lemmy, Reddit, Twitter, Instagram, etc. I’m not necessarily advocating for such a thing, mostly just thinking out loud.

What I’m imagining is something like a truly competent AI assistant that filters out content based on your preferences. Filtering content by keyword and blocking users/communities is a blunt instrument; this would be the surgical alternative that lets you be extremely specific about what you want filtered out.

Some examples of the kinds of filters you could set:

  • No political threads. Applies only to threads, not comments. Filters out memes as well, based on the content of the media.
  • No political content whatsoever. Also hides political comments in non-political threads.
  • No right/left-wing politics. Self-explanatory.
  • No right/left-wing politics, with the exception of good-faith arguments. Filters out trolls and provocateurs but still exposes you to good-faith arguments from the other side.
  • No mean, hateful or snide comments. Self-explanatory.
  • No karma-fishing comments. Filters out comments with no real content.
  • No content from users who have said/done (something) in the past. Analyzes their post history and acts accordingly; for example, hides posts from people who have said mean things before.
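
The filters above could be expressed as rules handed to a classifier. A minimal sketch of that idea, with hypothetical names (`FilterRule`, `visible`) and a crude keyword matcher standing in for the “truly competent” AI classifier:

```python
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class FilterRule:
    description: str                 # the natural-language filter the user wrote
    applies_to: Set[str]             # {"thread"}, {"comment"}, or both
    matches: Callable[[str], bool]   # stand-in for the AI classifier

def visible(item_kind: str, text: str, rules: List[FilterRule]) -> bool:
    """An item is shown only if it survives every active rule."""
    for rule in rules:
        if item_kind in rule.applies_to and rule.matches(text):
            return False
    return True

# Crude keyword matcher standing in for the AI; the real thing would
# understand context, not just keywords.
def political(text: str) -> bool:
    return any(w in text.lower() for w in ("election", "senate", "tariff"))

rules = [FilterRule("No political threads", {"thread"}, political)]

visible("thread", "Senate passes new tariff bill", rules)    # hidden
visible("comment", "Senate passes new tariff bill", rules)   # shown: rule is thread-only
```

The point of the `applies_to` field is exactly the thread/comment distinction in the first two examples: the same rule text can be scoped to threads only, or to everything.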

Now, obviously, with a tool like this you could build yourself the perfect echo chamber where you’re never exposed to new ideas. That’s probably not optimal, but it’s also not obvious to me why it would be a bad thing if it’s something you want. There’s far too much content for you to pay attention to all of it anyway, so why not optimize your feed to only contain stuff you’re interested in?

With a tool like this you could quite easily take a platform that’s an absolute dumpster fire, like Twitter or Reddit, clean it up, and all of a sudden it’s usable again. It could possibly also discourage certain types of behaviour online, because trolls, for example, could no longer reach the people they want to troll.

  • 9point6@lemmy.world · 8 months ago

    The problem with filtering political content is that people are pretty bad at identifying their blind spots. It’s a common trap: what some people want as “non-political” conversation is actually just conversation that doesn’t challenge their political views.

    The same people will then conclude that the tool is making “political” choices in what it’s hiding from them.

    You’re also focusing a lot on left vs right. What about “third way” or centrist politics? What about fringe groups that people don’t really consider left or right? How does this work with different countries having different ideas of what’s left and what’s right (Overton window)? For example, I’d say the US doesn’t have a left and right, it has a centre-right and a far-right party.

    Finally, plenty of people are happy in their echo chambers, despite them being terrible for a person. Challenging and reflecting on the way you fundamentally think about the world (which is basically what politics boils down to) is hard and sometimes unpleasant. It’s easy to see why so many people take the easy road, doubling down on existing opinions and seeking out echo chambers.

    • Thorny_Insight@lemm.ee (OP) · 8 months ago

      I used the term truly competent AI because obviously something like “no politics” is quite a broad guideline, and the AI then has to figure out what you actually mean by it. An incompetent AI would filter out discussions about things like vegan food as well, which is obviously not what you meant. This is just a thought experiment about what it would be like if it actually worked as intended.

      What I’m imagining is something that also studies your own behaviour on the platform to learn what you’re actually into, and if it’s not sure, it could either ask you or do some A/B testing to see what you engage with and what that engagement looks like.
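
      A rough sketch of what that A/B probing could look like, with hypothetical names; the real system would feed the engagement stats back into the filter:

```python
import random

class EngagementProbe:
    """Occasionally surfaces items the filter would hide, and tracks engagement."""

    def __init__(self, explore_rate=0.1, seed=None):
        self.explore_rate = explore_rate
        self.rng = random.Random(seed)
        self.stats = {}  # topic -> (times_shown, times_engaged)

    def should_show(self, filter_says_hide):
        if not filter_says_hide:
            return True
        # Exploration: let a small fraction of filtered items through,
        # so the system can test whether the filter is right about you.
        return self.rng.random() < self.explore_rate

    def record(self, topic, engaged):
        shown, hits = self.stats.get(topic, (0, 0))
        self.stats[topic] = (shown + 1, hits + int(engaged))

    def engagement_rate(self, topic):
        shown, hits = self.stats.get(topic, (0, 0))
        return hits / shown if shown else None

# Usage: probe borderline "politics" items and see how the user reacts.
probe = EngagementProbe(explore_rate=0.1, seed=42)
probe.record("politics", engaged=False)
```

      If the engagement rate on probed items stays near zero, the filter was right; if it climbs, the filter is hiding things the user actually wants.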

      • The Stoned Hacker@lemmy.world · 8 months ago

        I think the idea that we need to be more efficient in consuming content is quite dystopian. I agree that we should be trying to reduce not only echo chambers but content consumption as a whole. As a chronically online person in cybersecurity, I do not see a tenable future where humans continue to consume content at the rate they are. There needs to be a reduction in internet integration and online consumption. You’re right that there’s too much content for one person to reasonably sift through; the reasonable decision, then, is to reduce the amount of content rather than try to create a sieve. The amount of information we try to consume on the internet is dangerous and harmful to us, and is destroying the foundations of society. I’m not some traditionalist nut or conspiracy theorist; it’s just easy to see that the benefits we get from globalized information sharing are heavily offset by the constant influx of shit. I think people should have easy and free access to information and knowledge; I also think the current hierarchy of the internet was a mistake and that the majority of people do not need, and in fact should not have, computers.

        Also what you’re asking for is an incredibly invasive AI that is used for massive data collection and aggregation to track and serve you the content that is most addictive for you. I see no reasonable world where that is a good thing. It is only a good idea in our current world, which I do not believe is reasonable.

        • Thorny_Insight@lemm.ee (OP) · 8 months ago (edited)

          Personally, the way I think about it is that since I’m going to spend a certain amount of time online anyway, why not at least enjoy that time? For example, I like discussing and debating ideas on platforms like Lemmy and Reddit, but too often I find myself wasting time with someone who’s not doing it in good faith; they’re not open to having their mind changed, and they’re not putting any effort into trying to change mine either. They just want to dunk on what they deem a stupid idea, and more often than not they’re performing for an imagined audience. It would probably be better for us both if I didn’t engage with people like that to begin with. I really don’t need more than one decent person to have an interesting discussion. If twenty others are shouting insults into the void because my content filter has blocked them, I think that’s better than relying on sheer willpower to resist the urge to reply.

    • perpetually_fried@lemmy.world · 8 months ago (edited)

      Idk why people expect viewers to want their ideas challenged when they just want a general idea of what’s happening in the world.

      Like, I want to know when the Houthis have hit another ship, causing an environmental disaster in the Red Sea (fertilizer). I DO NOT want to know about some bullshit law being passed in a no-name jurisdiction by some no-name judge in a no-name state that will get overturned in a month. It doesn’t affect me.

      attabit.com is a good example of this AI summary done right.

      But let me make myself clear. Nobody wants to be subjected to your ideology and echo chambers are fine. It’s not my responsibility to open up my attention to whatever it is you think is socially important at the time.