• 8 Posts
  • 393 Comments
Joined 3 years ago
Cake day: June 17th, 2023




  • But listen, epistemologically, you’ll encounter the is-ought problem, and without taking an “objective judge” into consideration morality will always be fought by corrupt scholars.

    You literally ignored the entire point behind my previous comment. You don’t need to establish an “objective judge” because the traditional ideas of morality are already observable as an optimal strategy to go through life, and we can observe it via experimentation.

    I don’t get why you insist on a nonsensical rant instead of just letting the other person have the last word when they prove you wrong. And at this point, I don’t care. You’re not worth wasting any more time on. If you insist on sticking your head in the sand and ignoring reality, then go ahead, but you’re not going to be bothering me with it because you’re getting blocked. Tata





  • And how can I talk about objective morality without God?

    Here it is! Here it fucking is! The single most overused thought-terminating fallacy that Jesus nuts like to pull out!

    The answer to your question is that we don’t need a deity to declare what objective right and wrong are. We can use game theory. If you want to watch an admittedly better explanation of it, Veritasium made a video on it last year, but I’ll recap it below.

    Decades ago, researchers set up an experiment where they paired various algorithms against each other, with each algorithm having different rules for approaching the prisoner’s dilemma. And each pairing went on for hundreds of turns. Then the researchers tallied up all the scores. They noticed that almost all of the “nice” algorithms scored higher than almost all of the “mean” algorithms. And they redid the experiment multiple times with tweaks, like randomizing the length of interactions between algorithms.

    The overall rules that produced the highest scores were:

    1. Start off picking the option to cooperate
    2. After the first exchange, respond with whatever the other player did in the previous round
    3. A decision to not cooperate only affects the next decision, it doesn’t continuously affect every decision after that
    4. On rare occasions (<10%), cooperate on the next turn even if the other algorithm chose to not cooperate
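
    Those four rules describe what’s usually called “generous tit-for-tat,” and you can see the effect in a few lines of code. This is a minimal sketch, not the original tournament code; the payoff values (5/3/1/0) are the standard ones from Axelrod’s tournaments, and the function names are mine:

    ```python
    import random

    # Payoff for (my_move, their_move); C = cooperate, D = defect.
    # Standard Axelrod tournament values: temptation 5, reward 3,
    # punishment 1, sucker's payoff 0.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def generous_tit_for_tat(opponent_history, forgiveness=0.1):
        """Rules 1-4 above: open with cooperation, mirror the opponent's
        last move only, and occasionally forgive a defection."""
        if not opponent_history:                 # rule 1: start nice
            return "C"
        if opponent_history[-1] == "D" and random.random() < forgiveness:
            return "C"                           # rule 4: rare forgiveness
        return opponent_history[-1]              # rules 2-3: last move only

    def always_defect(opponent_history):
        """A 'mean' strategy for comparison."""
        return "D"

    def play(strategy_a, strategy_b, rounds=200):
        """Run an iterated match and return each side's total score."""
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            # Each strategy only sees the *other* player's past moves.
            move_a = strategy_a(hist_b)
            move_b = strategy_b(hist_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b
    ```

    Pair two “nice” players and they cooperate every round, scoring 3 points each per turn; pair two defectors and they grind out 1 point each per turn, which is the gap the researchers kept seeing.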

    Essentially it boils down to being polite, treating others how you wish to be treated, and forgiving past transgressions. Strangely similar to what religions tend to teach, right?

    It turns out, these are actually emergent properties that appear in any system with repeated interactions between individuals. It’s not divine providence, it’s natural selection.












  • Under the new rules, online service providers must assess the risk that their services could be misused to spread depictions of child sexual abuse or to contact children. Based on this assessment, they must take measures to mitigate this risk.

    (Emphasis mine)

    That is not actually voluntary. The drafters of this bill are playing word games to trick committee members into passing mandatory chat scanning.