• Zak@lemmy.world
    8 months ago

    I think the design of media products around maximally addictive, individually targeted algorithms, combined with content the platform does not control and isn’t responsible for, is dangerous. Such an algorithm will find the people most susceptible to everything from racist conspiracy theories to eating disorder content and show them more of it. Attempts to moderate away the worst examples just result in people making variations that don’t technically violate the rules.

    With that said, laws made and legal precedents set in response to tragedies are often ill-considered, and I don’t like this case. I especially don’t like that it includes Reddit, which was not using that type of individualized algorithm to my knowledge.

    • deweydecibel@lemmy.world
      8 months ago

      Attempts to moderate away the worst examples of it just result in people making variations that don’t technically violate the rules.

      The problem then becomes that if clearly defined rules aren’t enough, the people who run these sites need to start making individual judgment calls based on…well, their gut, really. And that creates a lot of issues if the site in question can be held accountable for making a poor call or overlooking something.

      The threat of legal repercussions hanging over them is going to make them default to the strictest possible action, and that’s a problem if there’s no clear definition of what actually needs to be actioned against.

      • VirtualOdour@sh.itjust.works
        8 months ago

        It’s the chilling effect they use in China: don’t make it clear what will get you in trouble, and people become too scared to say anything.

        Just another group looking to control expression through the back door.