• Ilandar@aussie.zone · 6 months ago

    The government already does that to a large extent. The content in question is not viewable from within Australia unless you use a VPN.

    • ⸻ Ban DHMO 🇦🇺 ⸻@aussie.zone · 6 months ago

      True, they do a lot of this under the guise of copyright enforcement as well (which you can generally get around by changing your DNS). I don’t understand how this censorship is any different from what we look down upon authoritarian countries for. I like the idea of a free and open web.
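      For reference, the “change your DNS” workaround works because these blocks are usually implemented at the ISP’s own resolver rather than on the network path, so pointing lookups at a public resolver bypasses them. A minimal sketch, assuming a Linux machine where /etc/resolv.conf is edited directly, with Cloudflare’s public resolver used purely as an example:

      ```
      # /etc/resolv.conf (example only)
      # Use a public resolver (Cloudflare here) instead of the ISP's,
      # which is typically where DNS-level blocks are applied.
      nameserver 1.1.1.1
      nameserver 1.0.0.1
      ```

      This does nothing against blocks enforced at the IP level or via deep packet inspection, which is where a VPN comes in.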

      • Ilandar@aussie.zone · 6 months ago

        I don’t understand how this censorship is any different from what we look down upon authoritarian countries for.

        The scope and nature of the content being censored, I guess. But you’re right that this approach to online safety regulation risks setting a dangerous precedent. I think the saga has highlighted how problematic it is that social media has become so intertwined with society. There is a real risk of this stuff being viewed unintentionally, or because it was recommended by an algorithmic feed, and of it being served to a considerably larger audience than if it were only available on LiveLeak or something back in the day. It’s so difficult to effectively regulate these social media companies now because they have become part of mainstream society and gained so much power as a result. We are essentially just relying on the goodwill of the people running them.

        • ⸻ Ban DHMO 🇦🇺 ⸻@aussie.zone · 6 months ago

          But in this specific case they could have blurred out the content and shown a warning: “This post contains graphic content, do you wish to view it?” Or perhaps AI could be used to generate a description so people know what they’re getting into. There’s nothing wrong with that, and I don’t know why it isn’t good enough.

          I might sound hypocritical as a mod of a few communities on here who has removed comments that don’t meet our standards, but comments on Lemmy aren’t truly removed (unless an admin purges them): they can still be viewed in the modlog, or with a client that doesn’t respect the removed flag (there are still quite a few of those).

          • Ilandar@aussie.zone · 6 months ago

            But in this specific case they could have blurred out the content and shown a warning: “This post contains graphic content, do you wish to view it?” Or perhaps AI could be used to generate a description so people know what they’re getting into. There’s nothing wrong with that, and I don’t know why it isn’t good enough.

            I don’t think warnings are good enough if the content is being delivered automatically into people’s feeds. People are not really thinking rationally when they are doom-scrolling on social media. Not to mention that text descriptions are not always adequate preparation for extreme content, particularly with social media minimum age limits as low and as poorly enforced as they are.