• BreadstickNinja@lemmy.world
    58 points · 3 days ago

    In AI model collapse, AI systems trained on their own outputs gradually lose accuracy, diversity, and reliability. This happens because errors compound across successive model generations, leading to distorted data distributions and “irreversible defects” in performance. As a 2024 Nature paper put it, the model “becomes poisoned with its own projection of reality.”

    A remarkably similar thing happened to my aunt who can’t get off Facebook. We try feeding her accurate data, but she’s become poisoned with her own projection of reality.
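
    For a concrete picture of that feedback loop, here is a minimal toy sketch: each “generation” is just a Gaussian fitted to samples drawn from the previous generation’s model, then sampled to produce the next generation’s training data. The sample size, seed, and other parameters are illustrative assumptions, not values from the Nature paper.

    ```python
    # Toy model-collapse loop: generation 0 is "real" data; every later
    # generation is trained only on samples produced by the previous one.
    # The "model" here is just a Gaussian fitted by maximum likelihood,
    # so the whole feedback loop fits in a few lines.
    import numpy as np

    rng = np.random.default_rng(0)

    N_SAMPLES = 50        # training examples per generation (illustrative)
    N_GENERATIONS = 100   # how long to run the self-training loop

    # Generation 0: ground-truth data, a standard normal distribution.
    data = rng.normal(loc=0.0, scale=1.0, size=N_SAMPLES)

    for gen in range(1, N_GENERATIONS + 1):
        # "Train" on the previous generation's data: fit mean and std.
        mu, sigma = data.mean(), data.std()
        # Generate the next generation's training set from the fitted model.
        data = rng.normal(loc=mu, scale=sigma, size=N_SAMPLES)
        if gen % 20 == 0:
            # The mean wanders away from 0 and the std tends to shrink:
            # each refit loses a little of the original distribution's tails,
            # and those losses compound instead of averaging out.
            print(f"generation {gen:3d}: mean = {mu:+.3f}, std = {sigma:.3f}")
    ```

    Shrinking N_SAMPLES speeds up the drift, since each generation’s estimation error is larger; that compounding is the “irreversible” part.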

    • evasive_chimpanzee@lemmy.world
      11 points · 3 days ago

      It’s such an easy outcome to predict, too. Even if you did it perfectly, it would, at best, maintain an unstable equilibrium and just keep the same output quality.

      • BreadstickNinja@lemmy.world
        5 points · 3 days ago

        Unstable, yes. Equilibrium… no.

        She sometimes maintains coherence for several responses, but at a certain point, the output devolves into rants about how environmentalists caused the California wildfires.

        These conversations consume a lot of energy and provide very limited benefit. We’re beginning to wonder if the trade-offs are worth it.