• ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml · 2 days ago

    I think we very much agree here. In the strict context of LLMs, I don’t think they’re conscious either. At best it’s like a Boltzmann brain that briefly springs into existence. I think consciousness requires a sort of recursive quality where the system models itself as part of its own world model, creating a sort of resonance. I’m personally very partial to the argument that Hofstadter makes in I Am a Strange Loop regarding the nature of the phenomenon.

    That said, we can already see how LLMs are being combined with things like symbolic logic in neurosymbolic systems, or with reinforcement learning in the case of DeepSeek. It’s highly likely that LLMs will end up being just one piece of the puzzle in future AI systems. It’s an algorithm that does a particular thing well, but it’s not sufficient on its own. We’re also seeing these techniques applied to robotics, and I expect that’s where we may see genuinely conscious systems emerge. Robots build a world model of their environment, and they have to model themselves as an actor within that environment. The internal reasoning model may end up producing a form of conscious experience as a result.
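
    To make the self-model point a bit more concrete, here’s a toy sketch (purely illustrative, all names invented, not a description of any real robotics or neurosymbolic stack): an agent whose world model contains a representation of the agent itself, so that planning amounts to the system simulating its own actions inside its own model of the world.

    ```python
    # Toy illustration of an agent that includes itself in its own world model.
    # Hypothetical example only; names and structure are made up for this sketch.

    from dataclasses import dataclass, field

    @dataclass
    class SelfModel:
        # The agent's representation of itself: where it believes it is,
        # and what it believes it just did.
        position: int = 0
        last_action: str = "none"

    @dataclass
    class WorldModel:
        # The environment as the agent believes it to be, with the agent
        # itself embedded as one object in that model.
        goal: int = 5
        self_model: SelfModel = field(default_factory=SelfModel)

    class Agent:
        def __init__(self) -> None:
            self.world = WorldModel()

        def imagine(self, action: str) -> int:
            # Simulate "myself taking this action" inside my own world model.
            delta = {"left": -1, "right": 1}.get(action, 0)
            return self.world.self_model.position + delta

        def act(self) -> str:
            # Choose the action whose imagined outcome brings the modelled self
            # closest to the modelled goal, then update the self-model.
            best = min(("left", "right"),
                       key=lambda a: abs(self.world.goal - self.imagine(a)))
            self.world.self_model.position = self.imagine(best)
            self.world.self_model.last_action = best
            return best

    agent = Agent()
    for _ in range(5):
        print(agent.act(), agent.world.self_model.position)
    ```

    The loop here is trivial, but the structural point is the one in question: the thing doing the predicting appears as an object inside its own predictions, which is roughly the recursive arrangement Hofstadter describes.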

    I do think that from an ethics perspective, we should err on the side of caution with these things. If we can’t prove that something is conscious one way or the other, but we have a basis to suspect that it may be, then we should probably treat it as such. Sadly, given how we treat other living beings on this planet, I have very little hope that the way we treat AIs will resemble anything remotely ethical.