It seems like a lot of professionals think we will reach AGI within my lifetime. Some credible sources say within 5 years, but who knows.

Either way, I suspect it is inevitable. Who knows what may follow: runaway growth in the wealth gap, mass job loss, post-work reforms. I'm not sure.

A bunch of questions bounce around in my head, for example:

  • Will private property rights be honored in said future?
  • Could Amish communities still exist?
  • Is it something we can prepare for as individuals?

I figured it is important to talk about, seeing as it will likely occur in my lifetime and many of yours.

Edit: linked the Wikipedia article on AGI

  • Dem Bosain@midwest.social · 34 points · 8 days ago

    We are a LOT farther away than 5 years. What you see right now is a lot of hype from venture capitalists over chatbots.

    I think any real AGI will have to have some kind of reasoning ability, and I don’t see any sign of that in current AI.

    • nfh@lemmy.world · 6 points · 8 days ago

      Certainly, some interesting developments have happened, and we’ve realized our old models/thinking about progress towards AGI needed improvement… and that’s real. I think there’s a serious conversation to be had about what AGI would be, and how we can know we’re approaching it, and when it has arrived.

      But anybody telling you it is close either has something to sell you or has bought it themselves.

  • chobeat@lemmy.ml · 21 points · edited · 8 days ago

    Protestantism for techbros. Boring. No machine will come and save you; just go to therapy instead.

    Also the future is built, not predicted.

  • borokov@lemmy.world · 13 up / 1 down · 8 days ago

    At the beginning, I thought the goal was to make computers smarter and smarter so they could reach the level of the human brain. But it seems they're just trying to make people dumber and dumber until they reach the level of current AI.

  • hexthismess [he/him, comrade/them]@hexbear.net · 7 points · 8 days ago

    If we don’t remove profit motives from the mix, all AGI will be used for is to create the torment nexus. Without the profit motive, I think that AGI would be a novelty at best, and useless at worst.

  • juliebean@lemmy.zip · 5 points · 7 days ago

    I reckon we won't have a solid consensus about when we first had AGI until at least a decade after it happens. Maybe we already do (though I doubt it). How it shakes out on a societal level is hard to say, but so long as the utility functions are being written, explicitly or implicitly, by capitalists, I don't think it'll go well.

  • m532@lemmygrad.ml · 3 points · 8 days ago

    I don’t think AGI will exist soon, but if it gets made, they will try to give it human rights.

    Never allow them to do this. They will try it to take away people's rights, just like with corporate personhood. All stories with talking robots are bullshit; the "robots" in them are just mechanical humans, not AGI.

      • SEND_BUTTPLUG_PICS@lemmy.zip · 2 points · 8 days ago

      You could just Google it. AGI stands for Artificial General Intelligence. It's not the type of AI that we have today.

      There’s a great book about the pros and cons of AGI called “Superintelligence” by Nick Bostrom that I’d recommend reading if you have any interest in the topic.

        • Shimitar@downonthestreet.eu · 1 point · 6 days ago

        Yeah, I know I can Google it. But at the same time, I'd expect that in a general community people take care to contextualize what they write about.