• amemorablename@lemmygrad.ml · 2 days ago

    The author appears to be focused on the transformer architecture of AI, and in that regard I see nothing wrong with their argument. The way I see it, the important thing here is not that it’s absolutely impossible for an LLM to have anything called consciousness, but that proving it does is not as simple as running some tests we superficially associate with “intelligence”, declaring that it passed, and then arguing that passing means it’s conscious.

    I think it’s critically important to remember that LLMs are essentially designed to be good actors, i.e. everything revolves around the idea that they can convincingly carry on a human-like conversation, which is not to be confused with actually having human characteristics. Even if a GPU (and that’s what we’d be talking about, if we’re supposing a physical origin, because that’s the hardware LLMs run on) somehow had some form of consciousness tied to the moments when a model is running on it:

    1. It would have no physical characteristics like a human and so no way to legitimately relate to our experiences.

    2. It would still have no long-term, evolving memory, just a static set of weights that gets loaded now and then to run inference on whatever context it’s given (see the sketch after this list).

    3. If it was capable of experiencing anything physical akin to suffering, it would likely be in the wear and tear of the GPU, but even this seems like a stretch because a GPU does not have a sensory brain-body connection like a human does, or many animals do. So there is no reason to think it would understand or share in human kinds of suffering because it has no basis for doing so.

    4. With all this in mind, it would likely need its own developed language just to begin to try to communicate properly on what its experiences are. A language built by and for humans doesn’t seem sufficient. And that’s not happening if it can’t remember anything from session to session.

    5. Even if it could develop its own language, humans trying to translate it would face something like trying to observe and understand the behavior of ants. And anything it said confidently in plain English, such as “I am a conscious AI”, would be all but useless as information, since it’s trained on exactly that kind of material, and being able to regurgitate it is part of its purpose as an actor.
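
    To make point 2 concrete, here’s a rough toy sketch in Python. None of this is a real LLM API; the names (FrozenModel, ChatSession, generate) are made up purely to illustrate that the weights are frozen at inference time, and that the only thing resembling memory is the conversation history the caller keeps re-sending, which vanishes when the session ends.

    ```python
    class FrozenModel:
        """Stands in for a trained LLM: the weights never change at inference time."""

        def __init__(self, weights):
            self._weights = weights  # loaded once, read-only from here on

        def generate(self, context: str) -> str:
            # Real inference is a function of (weights, context) only;
            # nothing gets written back into the model.
            return f"<reply conditioned on {len(context)} chars of context>"


    class ChatSession:
        """All the 'memory' lives here, outside the model, and dies with the session."""

        def __init__(self, model: FrozenModel):
            self.model = model
            self.history: list[str] = []

        def say(self, user_msg: str) -> str:
            self.history.append(f"User: {user_msg}")
            reply = self.model.generate("\n".join(self.history))
            self.history.append(f"Assistant: {reply}")
            return reply


    model = FrozenModel(weights="...")  # the same frozen weights every session
    first = ChatSession(model)
    first.say("Remember that my name is Ada.")

    second = ChatSession(model)    # new session: the history is gone,
    second.say("What's my name?")  # and the model itself retained nothing
    ```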

    Now, if we were talking about an android-style AI given an artificial brain and mechanical nerve endings, designed to mimic many aspects of human biology as well as the brain, I’d be much more in the camp of “yeah, consciousness is not only possible, but also likely the closer we get to a strict imitation in every possible facet.” But LLMs have virtually nothing in common with humans. The whole neural network thing is, I guess, imitative of our current understanding of human neurons, but only on a vaguely mechanical level. It’s not as though they’re a recreation of the biology, built with a full understanding of the brain behind it. Computers just aren’t built the same way, fundamentally, so even an imitation made with full information would not be the same thing.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml · 2 days ago

      I think we very much agree here. In the strict context of LLMs, I don’t think they’re conscious either. At best it’s like a Boltzmann brain that briefly springs into existence. I think consciousness requires a sort of recursive quality where the system models itself as part of its world model, creating a kind of resonance. I’m personally very partial to the argument Hofstadter makes in I Am a Strange Loop regarding the nature of the phenomenon.

      That said, we can already see LLMs being combined with things like symbolic logic in neurosymbolic systems, or with reinforcement learning in the case of DeepSeek. It’s highly likely that LLMs will end up being just one piece of the puzzle in future AI systems. It’s an algorithm that does a particular thing well, but it’s not sufficient on its own. We’re also seeing these things being applied to robotics, and I expect that’s where we may see genuinely conscious systems emerge. Robots create a world model of their environment, and they have to model themselves as actors within that environment. The internal reasoning model may end up producing a form of conscious experience as a result.
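
      To put the self-in-the-world-model idea in concrete terms, here’s a minimal toy sketch in Python. It isn’t based on any real robotics framework; the names (WorldModel, SelfModel, plan) are hypothetical, and the point is just that the planner reasons about “me” as one more entity inside the model it maintains of the world.

      ```python
      from dataclasses import dataclass, field


      @dataclass
      class Entity:
          name: str
          position: tuple[float, float]


      @dataclass
      class SelfModel(Entity):
          battery: float = 1.0  # the agent's model of its own internal state


      @dataclass
      class WorldModel:
          entities: list[Entity] = field(default_factory=list)
          self_model: SelfModel = field(
              default_factory=lambda: SelfModel("me", (0.0, 0.0))
          )

          def plan(self, goal: Entity) -> str:
              # The planner treats "me" like any other entity in the world:
              # the loop closes because the world model contains a model of
              # the modeler itself.
              if self.self_model.battery < 0.2:
                  return "recharge first"
              dx = goal.position[0] - self.self_model.position[0]
              dy = goal.position[1] - self.self_model.position[1]
              return f"move by ({dx:.1f}, {dy:.1f}) toward {goal.name}"


      world = WorldModel(entities=[Entity("charger", (3.0, 4.0))])
      print(world.plan(world.entities[0]))  # the agent plans about itself
      ```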

      I do think that from an ethics perspective, we should err on the side of caution with these things. If we can’t prove that something is conscious one way or the other, but we have a basis to suspect that it may be, then we should probably treat it as such. Sadly, given how we treat other living beings on this planet, I have very little hope that the way we treat AIs will resemble anything remotely ethical.