Brian Eno has spent decades pushing the boundaries of music and technology, but when it comes to artificial intelligence, his biggest concern isn’t the tech — it’s who controls it.
My main gripe isn’t so much with cheating as with how easy LLMs make it to avoid learning anything fundamental, and then getting stuck when things get more advanced. This is very common in programming, where intro-level students can now pass easily by leaning on tools like Copilot, but get absolutely destroyed once they reach more advanced courses.
With LLMs, kids don’t feel like learning, studying, or developing critical-thinking skills in fundamentals classes, because they are constantly spoon-fed ready-made answers, and so they are woefully unprepared later on. Like most “successful” inventions of the smartphone age, LLMs turn humans into passive observers and consumers rather than engaged actors with the skills to investigate things themselves. I am genuinely worried about what sort of professionals and scientists we are producing today.
Oh, I see. I think I read “cheating” from the other poster and went off of that. That is a fair point, and I dislike how LLMs are getting pushed as a solution rather than a tool. To make a rough comparison: a hammer is a tool that makes some tasks easier or more doable, but you still have to physically swing it with human dexterity to pound the nail in. Whereas with an LLM, you can ask it for answers or to write things for you and it will, even if the output is nonsense. And there’s a big “not knowing what you don’t know” problem buried in that: if you have the skills and knowledge to tell when it’s feeding you BS, you probably don’t need it; if you don’t, you’re more apt to think you do need it, but you also lack the skills and knowledge to debug, fact-check, or otherwise vet what it gives you. So the people who would get the most immediate use out of it are also putting a lot of trust in something that is nowhere near a reliable tutor or subject-matter expert.
I don’t differentiate cheating from avoiding learning. Students get caught and fess up all the time, so it’s possibly even more pervasive than I realize.
I teach things like global political economy and ethnic studies. If a sizable portion of students are using generative AI to fudge their way through these topics, then we couldn’t be more fucked. We can only get more fascist from here.
And AI just compounds other issues, like our political climate, where people basically piss their pants because heaven forbid someone ask them to read 30 pages about enslavement. Not only do they not want to read, but they are fairly often very racist and anti-intellectual. How do you address that when everything you could ask them to do to improve their engagement can be fudged?