• amemorablename@lemmygrad.ml · 7 days ago

    Oh I see, I think I read “cheating” from the other poster and went off of that. That is a fair point, and I dislike how LLMs are getting pushed as a solution rather than a tool. To make a rough comparison: a hammer is a tool that makes some tasks easier or more doable, but you still have to physically swing it with human dexterity to pound the nail in. With LLMs, you can ask for answers or have it write things for you and it will, even if the output is nonsense. There’s a big “not knowing what you don’t know” problem there: if you have the skills and knowledge to recognize when it’s feeding you BS, you probably don’t need it, but if you don’t, you’re more apt to think you do need it, while also lacking the skills and knowledge to debug or fact-check what it gives you. So the people who would get the most immediate use out of it are also the ones putting a lot of trust in something that is nowhere near a reliable tutor or subject matter expert.

    • CountryBreakfast@lemmygrad.ml · 7 days ago

      I don’t differentiate cheating from avoiding learning. Students get caught and fess up all the time, so it’s possibly even more pervasive than I believe.

      I teach things like global political economy and ethnic studies. If a sizable portion of students are using generative AI to fudge their way through these topics, then we couldn’t be more fucked. We can only get more fascist from here.

      And AI just compounds other issues, like our political climate where people basically piss their pants because heaven forbid someone ask them to read 30 pages about enslavement. Not only do they not want to read, but they are fairly often very racist and anti-intellectual. How do you address that when everything you can ask them to do to improve their engagement can be fudged?