• Buffalox@lemmy.world · 20 hours ago

    > Then how will you know the difference between strong AI and not-strong AI?

    I’ve already said that this is a problem.

    From a previous answer to you:

    Obviously the Turing test doesn’t cut it, which I already suspected back then. And I’m sure that when we finally have a self-aware, conscious AI, it will be debated violently.

    Because I don’t think we have a sure methodology.

    “I think, therefore I am” only works for the conscious mind itself.
    I can’t prove that other people are conscious, although I’m 100% confident they are.
    In exactly the same way, we can’t prove when we have a conscious AI.

    But we may be able to prove that it is NOT conscious, which I think is clearly the case with current-level AI. Although you don’t accept the example I provided, I believe it is clear evidence of a lack of consciousness behind the high level of intelligence it clearly has.

    • General_Effort@lemmy.world · 2 hours ago

      > Because I don’t think we have a sure methodology.

      I don’t think there’s an agreed definition.

      Strong AI, or AGI, or whatever you will, is usually talked about in terms of intellectual ability. It’s not quite clear why this would require consciousness. Some tasks are aided by, or maybe even necessitate, self-awareness; running a chatbot, for example. But it seems to me that you could leave out such tasks and still have something quite impressive.

      Then, of course, there is no agreed definition of consciousness. Many will argue that the self-awareness of chatbots is not consciousness.

      I would say most people take strong AI and the like to mean an artificial person, with consciousness as a necessary ingredient. Of course, it is impossible to engineer an artificial person; it is like creating a technology to turn a peasant into a king. It is a category error. A less kind take would be that stochastic parrots string words together based on superficial patterns, without any understanding.

      > But we may be able to prove that it is NOT conscious, which I think is clearly the case with current-level AI. Although you don’t accept the example I provided, I believe it is clear evidence of a lack of consciousness behind the high level of intelligence it clearly has.

      Indeed, I do not see the relation between consciousness and reasoning in this example.

      Self-awareness means the ability to distinguish self from other, which implies computing, from sensory data, what is oneself and what is not. That could be said to be a form of reasoning. But I do not see such a relation in the example.

      By that standard, are all humans conscious?

      FWIW, I asked GPT-4o mini via DDG.

      [Screenshot]

      I don’t know if that means it understands. It’s how I would have done it (yesterday, after looking up Peano Axioms in Wikipedia), and I don’t know if I understand it.
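
      For illustration only, since the screenshot itself isn’t reproduced here: assuming the question was something along the lines of deriving 1 + 1 = 2 from the Peano axioms, that kind of derivation can be written out mechanically, for example as a small Lean sketch (the names PNat and add are made up for this example):

      -- Hypothetical sketch only; assumes the question was roughly
      -- "show 1 + 1 = 2 from the Peano axioms". PNat and add are made-up names.
      inductive PNat where
        | zero : PNat
        | succ : PNat → PNat

      open PNat

      -- Addition by recursion on the second argument, as in the Peano axioms.
      def add : PNat → PNat → PNat
        | n, zero   => n
        | n, succ m => succ (add n m)

      -- 1 + 1 = 2: both sides reduce to succ (succ zero), so rfl closes it.
      example : add (succ zero) (succ zero) = succ (succ zero) := rfl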