• General_Effort@lemmy.world · 1 day ago

    I find it funny that in the year 2000, while studying philosophy at the University of Copenhagen, I predicted strong AI around 2035.

    That seems to be aging well. But what is the definition of “strong AI”?

    • Buffalox@lemmy.world · 1 day ago

      Self-aware consciousness on a human level. So it’s still far from a sure thing, because we haven’t figured consciousness out yet.
      But I’m still very happy with my prediction, because AI is now more useful and versatile than ever, its use is already very widespread, and research and investment have exploded over the past decade. And AI can already do things that used to be impossible, for instance in image and movie generation and manipulation.

      But I think the code will be broken soon, because self-awareness is a thing of many degrees. For instance, a dog is IMO obviously self-aware, but that isn’t universally recognized, because it doesn’t have the same degree of self-awareness humans have.
      This is a problem that dates back to the 17th century and Descartes, who claimed that, for instance, horses and dogs were mere automatons and therefore couldn’t feel pain.
      This is of course completely in line with the Christian doctrine that animals don’t have souls.
      But to me it seems that self-awareness, like emotions, doesn’t have to start at the human level; it can start at a simpler level that can then be developed further.

      PS:
      It’s true that animals don’t have souls, in the sense of something magical provided by a god, because nobody does. Souls are not necessary to explain self-awareness, consciousness, or emotions.

        • Buffalox@lemmy.world · 1 day ago (edited)

          To understand what “I think, therefore I am” means requires a very high level of consciousness.
          At lower levels, things get more complicated to explain.

        • Buffalox@lemmy.world · 1 day ago

          Good question.
          Obviously the Turing test doesn’t cut it, which I suspected already back then. And I’m sure that when we finally have a self-aware, conscious AI, it will be debated violently.
          We may think we have it before it’s actually real; some claim to believe that some of the current systems already display traits of consciousness. I don’t believe it’s even close yet, though.
          As wrong as Descartes was about animals, he still nailed it with “I think, therefore I am” (cogito, ergo sum) https://www.britannica.com/topic/cogito-ergo-sum.
          Unfortunately that’s about as far as we can get before all sorts of problems arise regarding actual evidence. So philosophically, in principle it is only the AI itself that can know for sure whether it is truly conscious.

          All I can say is that, with the level of intelligence the current leading AIs have, they make silly mistakes that would seem obvious if they were really conscious.
          For instance, as strong as they seem at analyzing logic problems, they fail to realize that 1+1=2 <=> 2=1+1.
          Such things will of course be ironed out, and maybe this one already is. But it shows the current model isn’t good enough for the basic comprehension I would think would follow from consciousness.
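
          To spell out what the example asserts: the equivalence is just one instance of the symmetry of equality, which a proof assistant can check mechanically. A minimal sketch in Lean 4 notation (using its built-in natural numbers, purely as an illustration):

          -- The biconditional holds because equality is symmetric:
          -- Eq.symm turns a proof of a = b into a proof of b = a.
          example : 1 + 1 = 2 ↔ 2 = 1 + 1 :=
            Iff.intro Eq.symm Eq.symm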

          Luckily there are people who know much more about this, and it will be interesting to hear what they have to say when the time arrives. 😀

          • General_Effort@lemmy.world · 9 hours ago

            Obviously the Turing test doesn’t cut it, which I suspected already back then.

            The Turing test is misunderstood a lot. Here’s Wikipedia on the Turing test:

            [Turing] opens with the words: “I propose to consider the question, ‘Can machines think?’” Because “thinking” is difficult to define, Turing chooses to “replace the question by another, which is closely related to it and is expressed in relatively unambiguous words”. Turing describes the new form of the problem in terms of a three-person party game called the “imitation game”, in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing’s new question is: “Are there imaginable digital computers which would do well in the imitation game?”

            One should bear in mind that scientific methodology was not very formalized at the time. Today, it is self-evident to any educated person that the “judges” would have to be blinded, which is the whole point of the text chat setup.

            What has been called the “Turing test” over the years is simultaneously easier and harder. Easier, because these tests usually involved only a chat without any predetermined task that requires thinking. It was possible to pass without having to think. But also harder, because thinking alone is not sufficient. One has to convince an interviewer that one is part of the in-group. It is the ultimate social game; indeed, often a party game (haha, I made a pun). Turing himself, of course, eventually lost such a game.
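
            To make the blinding point concrete, here is a rough sketch of the commonly-run machine-versus-human variant. The code is purely hypothetical; the player functions are invented stand-ins for a person in another room and the machine under test, and the arithmetic question echoes the specimen from Turing’s paper.

            import random

            def human_player(question: str) -> str:
                # Invented stand-in for a human typing replies in another room.
                return "Give me a minute, mental arithmetic was never my strong suit."

            def machine_player(question: str) -> str:
                # Invented stand-in for the machine; imitating a human may even
                # mean pausing and answering wrongly on purpose.
                return "105621."

            def imitation_game(questions):
                # Blinding: the judge only ever sees anonymous labels and text,
                # and the assignment of players to labels is randomized.
                contestants = [human_player, machine_player]
                random.shuffle(contestants)
                hidden = dict(zip(["A", "B"], contestants))
                transcript = {label: [] for label in hidden}
                for q in questions:
                    for label, player in hidden.items():
                        transcript[label].append((q, player(q)))
                return transcript  # the judge sees only this and must guess which is which

            if __name__ == "__main__":
                for label, exchanges in imitation_game(["Add 34957 to 70764."]).items():
                    for q, a in exchanges:
                        print(f"Player {label} | Q: {q} | A: {a}")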

            All I can say is that, with the level of intelligence the current leading AIs have, they make silly mistakes that would seem obvious if they were really conscious.

            For instance, as strong as they seem at analyzing logic problems, they fail to realize that 1+1=2 <=> 2=1+1.

            This connects consciousness to reasoning ability in some unclear way. The example seems unfortunate, since humans need training to understand it. Most people in developed countries would agree that the equivalence is formally correct, but very few would be able to prove it. Most wouldn’t even know how to spell Peano Axiom; nor would they even try (Oh, luckier bridge and rail!)

            • Buffalox@lemmy.world · 8 hours ago

              I know about the Turing test; it’s what we were taught about and debated in philosophy class at the University of Copenhagen, when I made my prediction that strong AI would probably be possible around the year 2035.

              to exhibit intelligent behaviour equivalent to that of a human

              Here equivalent actually means indistinguishable from a human.

              But as a test of consciousness it is not a fair test, because obviously a consciousness can be different from a human one, and our understanding of how a simulation can fake something without it being real is also a factor.
              But the original question remains: how do we decide it’s not conscious if it responds as if it is?

              This connects consciousness to reasoning ability in some unclear way.

              Maybe it’s unclear because you haven’t pondered the connection? Our consciousness is a very big part of our reasoning; it definitely guides our reasoning and improves the level of reasoning we are capable of.
              I don’t see why the example requiring training for humans to understand is unfortunate. The leading AIs have way more training than would ever be possible for any human, yet they still don’t grasp basic concepts, even though their knowledge is way bigger than any human’s.

              It’s hard to explain, but intuitively it seems to me the missing factor is consciousness. The AI has learned tons of information by heart, but it doesn’t really understand any of it, because it isn’t conscious.

              Being conscious is not just to know what the words mean, but to understand what they mean.
              I think, therefore I am.

              • General_Effort@lemmy.world · 3 hours ago

                I don’t see why the example requiring training for humans to understand is unfortunate.

                Humans aren’t innately good at math. I wouldn’t have been able to prove the statement without looking things up. I certainly would not be able to come up with the Peano Axioms, or anything comparable, on my own. Most people, even educated people, probably wouldn’t understand what there is to prove. Actually, I’m not sure if I do.

                It’s not clear why such deficiencies among humans do not argue against human consciousness.

                The leading AIs have way more training than would ever be possible for any human, yet they still don’t grasp basic concepts, even though their knowledge is way bigger than any human’s.

                That’s dubious. LLMs are trained on more text than a human ever sees, but humans are trained on data from several senses. I guess it’s not entirely clear how much data that is, but it’s a lot and very high quality. Humans are trained on that sense data and not on text. Humans read text and may learn from it.

                Being conscious is not just to know what the words mean, but to understand what they mean.

                What might an operational definition look like?

                • Buffalox@lemmy.world · 2 hours ago (edited)

                  Just because you can’t make a mathematical proof doesn’t mean you don’t understand the very simple truth of the statement.

                  What might an operational definition look like?

                  I think if I could describe that, I might actually have solved the problem of strong AI.
                  You are asking unreasonable questions.

                  • General_Effort@lemmy.world · 2 hours ago

                    Just because you can’t make a mathematical proof doesn’t mean you don’t understand the very simple truth of the statement.

                    If I can’t prove it, I don’t know how I can claim to understand it.

                    It’s axiomatic that equality is symmetric. It’s also axiomatic that 1+1=2. There is not a whole lot to understand. I have memorized that. Actually, having now thought about this for a bit, I think I can prove it.
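
                    Roughly, and only as a sketch in Lean 4 notation (with its built-in natural numbers standing in for whichever axiomatization one prefers), the two ingredients and their combination look like this:

                    -- equality is symmetric
                    example (a b : Nat) (h : a = b) : b = a := h.symm
                    -- 1 + 1 = 2 holds by computation on Lean's Nat
                    example : 1 + 1 = 2 := rfl
                    -- combining them gives the other direction of the biconditional
                    example : 2 = 1 + 1 := (rfl : 1 + 1 = 2).symm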

                    What makes the difference between a human learning these things and an AI being trained for them?

                    I think if I could describe that, I might actually have solved the problem of strong AI.

                    Then how will you know the difference between strong AI and not-strong AI?