I want to draw attention to the elephant in the room.

Leading up to the election, and perhaps even more prominently now, we’ve been seeing droves of people on the internet displaying a common set of traits:

  • Claiming to be leftists
  • Dedicating most of their posting to dismantling any power possessed by the left
  • Encouraging leftists not to vote or to vote for third party candidates
  • Highlighting issues with the Democratic party as being disqualifying while ignoring the objectively worse positions held by the Republican party
  • Attacking anyone who promotes defending leftist political power by claiming they are centrists and that the attacker is “to the left of them”
  • Using US foreign policy as a moral cudgel to disempower any attempt at legitimate engagement with the US political system
  • Seemingly doing nothing to actually mount resistance against authoritarianism

When you look at an aerial view of these behaviors in conjunction with one another, what they accomplish is pretty plain to see, in my opinion. It’s a way of exploiting the moral scrupulousness of the left to pull our teeth politically. We get so caught up in giving these arguments the benefit of the doubt, and in making sure people who claim to be leftists have a platform, that we’re missing the ideological parasites in our midst.

This is not a good-faith discourse. This is not friendly disagreement. This is, largely, not even internal disagreement. It is infiltration, and it’s extremely effective.

Before attacking this argument as lacking proof, just do a little thought experiment with me. If there is a vector that allows authoritarians to dismantle all progress made by the left, to demotivate us and to detract from our ability to form coalitions and build solidarity, do you really think they wouldn’t take advantage of it?

By refusing to ever question those who do nothing with their time in our spaces but try to drive a wedge between us, to take away our power and make us feel helpless and hopeless, we’re giving them exactly that vector. I am telling you, they are using it.

We need to stop letting them. We need to see it for what it is, get the word out, and remember, as the political left, how to use the tools that we have to change society. It starts with how we treat one another. It starts with what we do in the spaces that we inhabit. They know this, and it’s why they’re targeting us here.

Stop being an easy target. Stop feeding the cuckoo.

  • PhilipTheBucket@ponder.cat · 16 hours ago

    Okay. You basically ignored most of my message, including some specific questions which I asked for specific reasons to try to get to the bottom of this. You just repeated your side again. So never mind.

    You say you’re not, but then what is the remedy you’re putting forth?

    This on the other hand is a pretty good question. So, one remedy I’d like to try is creating a moderated community specifically for political discussion, with a bot that can “oversee” the community and identify fallacies or bad-faith engagement. LLMs aren’t really capable of following the thread of a conversation or picking the “winner”, but a lot of the stuff that pisses me off on Lemmy is pretty simple to detect, and I think they could do it:

      • Claiming that someone said something when they actually said something else
      • Blatantly ignoring a direct question and instead going off and talking about some different thing
      • Repeating yourself forever without substantively responding to anything the other person says

    That kind of thing. I think if there were a bot that could moderate discussions according to that kind of guideline and call people out in an unbiased way when they were engaging poorly, it would be hugely helpful. Because everyone does it, to some extent. It’s easy to get emotional or heated up about the point you wanted to make, it’s easy to misinterpret something accidentally, and obviously everyone comes from the standpoint that their stuff is right (obviously right) or else they wouldn’t be saying it.

    I think a more neutral arbiter could help point those things out without it becoming a big acrimonious mess whenever people disagree. Accusing another person in the conversation of bad faith rarely goes anywhere good. I think in general (if it somewhat worked) it could be a really cool thing.
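
    To make that concrete, here is roughly the shape of the check I have in mind, as a sketch only. None of this exists yet; query_llm() is a stand-in for whichever model actually gets used, and the flag names are just one guess at what’s worth detecting:

    ```python
    # Sketch of the "overseer" check; everything here is a placeholder, not a real bot.
    import json
    import textwrap


    def query_llm(prompt: str) -> str:
        """Stand-in for the actual model call (hosted API, local model, whatever)."""
        raise NotImplementedError("wire this up to your LLM of choice")


    def review_reply(parent_comment: str, reply: str) -> dict:
        """Flag the simple, detectable stuff: misquoting the parent, skipping a
        direct question, repeating yourself without engaging."""
        prompt = textwrap.dedent(f"""
            You are a neutral discussion referee. Compare the REPLY to the PARENT.
            Answer only with JSON containing three booleans and a short "note":
              "misrepresents_parent": claims the parent said something it did not
              "ignores_direct_question": skips a direct question asked in the parent
              "repeats_without_engaging": restates its own points without responding
                to the parent's substance

            PARENT:
            {parent_comment}

            REPLY:
            {reply}
        """)
        return json.loads(query_llm(prompt))


    def referee_note(result: dict) -> str | None:
        """Turn flagged results into one gentle bot comment, or stay silent."""
        flags = [name for name, hit in result.items() if hit is True]
        if not flags:
            return None
        return "Referee note: this reply may be " + ", ".join(flags) + ". " + result.get("note", "")
    ```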

    And, getting back to your question, I actually think something like that would do a lot to address the type of engagement that I tend to talk about when I talk about fake accounts. It sidesteps the (basically impossible and highly polarizing / inflammatory) task of categorizing accounts into “fake” or not. If you have a political viewpoint that I or OP happen to think may be coming from a “fake” POV, but you’re just sitting there talking about it and engaging with people who disagree, it’s fine. That’s healthy. The problem comes in (to me) when people come in big gangs to all yell the same stuff, don’t really engage with people who disagree but just mischaracterize the opposition and repeat their points of view forever, basically just engage in bad faith. Whether those people are “fake” or not is still relevant, to me, but I don’t think just excising them from your Lemmy experience is necessarily the way, and I definitely don’t think trying to publicly call them out once they’re “identified” by whatever specific criteria is the way. Because it is impossible to tell specifically for any given person.

    Probably there are going to be 0 people who think that is a good idea. That is fine. I feel like the general street cred that AI has right now will lead people to hate the idea. That is fine. If I get the motivation, I think I will just set the idea up, turn it loose, and see how it works out with anyone who’s open to playing with it. That is my remedy.

    • t3rmit3@beehaw.org · 16 hours ago

      The problem comes in (to me) when people come in big gangs to all yell the same stuff, don’t really engage with people who disagree but just mischaracterize the opposition and repeat their points of view forever, basically just engage in bad faith.

      You clearly aren’t intending this to be about this (OP’s) post, and yet…

      That is my remedy.

      I actually like your idea, and I think that it could work if there was some kind of set structure to the posts, maybe using a template to make it easy for an LLM to parse, and to prevent comments from asking more follow-up questions than allowed. My partner is involved with competitive debate, and I think a highly-structured variant could work in an asynchronous format like forum posts, especially if there’s a bot to auto-remove posts that aren’t formatted correctly (that part could just be a script with regex or something).
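
      Just to show how small the regex part could be, here’s a sketch, assuming a made-up template where every post needs a “Claim:” and a “Support:” line and gets at most two “Question:” lines (the section names and the limit are placeholders, not anything settled):

      ```python
      # Sketch of the format-check bot, using a hypothetical post template.
      import re

      REQUIRED_SECTIONS = ("Claim:", "Support:")  # placeholder section names
      MAX_QUESTIONS = 2                           # placeholder follow-up limit


      def post_is_well_formed(body: str) -> bool:
          """True if the post follows the (made-up) debate template."""
          # Every required section heading must start a line somewhere in the post.
          for heading in REQUIRED_SECTIONS:
              if not re.search(rf"^{re.escape(heading)}", body, flags=re.MULTILINE):
                  return False
          # Cap the number of follow-up questions per post.
          questions = re.findall(r"^Question:", body, flags=re.MULTILINE)
          return len(questions) <= MAX_QUESTIONS


      if __name__ == "__main__":
          sample = "Claim: X.\nSupport: because Y.\nQuestion: what about Z?"
          print(post_is_well_formed(sample))  # True; the bot would remove anything False
      ```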

      • PhilipTheBucket@ponder.cat · 16 hours ago

        You clearly aren’t intending this to be about this (OP’s) post, and yet…

        I realized, in the course of talking with you, that while OP and I have come to the same conclusion about what’s going on in Lemmy, the specific sets of behavior we are calling out are very different. We’re describing the same underlying problem; we just have different perspectives on what we observed that led us to that conclusion.

        I basically agree with OP’s characterization of a type of argument these accounts like to make that, to me, doesn’t make sense, but I just sort of suspect that there’s a big contingent of genuine users who like to muster that exact same argument pattern too, or at least a lot of the elements of it. But again, it’s probably pretty fruitless to start wildly speculating about which specific users are or are not “genuine,” unless they show some really obvious tell that they are not what they claim to be. It is absolutely impossible to know.

        I actually like your idea, and I think that it could work if there was some kind of set structure to the posts, maybe using a template to make it easy for an LLM to parse, and to prevent comments from asking more follow-up questions than allowed. My partner is involved with competitive debate, and I think a highly-structured variant could work in an asynchronous format like forum posts, especially if there’s a bot to auto-remove posts that aren’t formatted correctly (that part could just be a script with regex or something).

        Hm… this is an interesting idea. I was going to have it intuit the “main pillars”, so to speak, of each side’s argument, and then assess how well the other side was dealing with each of those pillars. Not in the sense of judging right versus wrong or reading sources or anything; that’s clearly hopeless. Just the basics: are you addressing the argument directly, or are you stepping past it when you respond, pretending it didn’t exist, or mischaracterizing it as something totally different and then beating up the strawman? That might seem like a simplistic bar to clear, but I think so much on Lemmy would fail that type of test that it would be really productive to have an objective referee. For everyone. It’s surprisingly easy to fall into “my stuff is right, fuck all this other stuff, that is nonsense” type of thinking; it doesn’t even have to mean anything is wrong with you if the bot is dinging you for not addressing something.
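
        Roughly, I’m picturing a two-step flow like this, again just a sketch: query_llm() is a placeholder for the actual model call, and “addressed / ignored / mischaracterized” is just one possible set of verdicts:

        ```python
        # Sketch of the "pillars" flow; the model call and verdict labels are placeholders.
        import json


        def query_llm(prompt: str) -> str:
            raise NotImplementedError("stand-in for the actual model call")


        def extract_pillars(argument: str) -> list[str]:
            """Step 1: pull out the main pillars of one side's argument."""
            prompt = (
                "List the 3-5 main pillars of the following argument as a JSON array "
                "of short strings. Do not judge whether they are correct.\n\n" + argument
            )
            return json.loads(query_llm(prompt))


        def grade_response(pillars: list[str], response: str) -> dict[str, str]:
            """Step 2: for each pillar, is the response addressing it, stepping past it,
            or beating up a strawman version of it? No right/wrong judgment at all."""
            prompt = (
                "For each pillar below, answer exactly one of: addressed, ignored, "
                "mischaracterized. Reply as a JSON object mapping pillar text to verdict.\n\n"
                "Pillars: " + json.dumps(pillars) + "\n\nResponse:\n" + response
            )
            return json.loads(query_llm(prompt))
        ```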

        Formalizing the thing and the format you need to provide could work too; it’s just an extra bar for people to clear, and I feel like the LLM could probably do a half-decent job without it. I might try to knock together a quick version based on my idea, but I’d be happy for any critique or other ways it could work.