Original question by @BalakeKarbon@lemmy.ml

It seems like a lot of professionals are thinking we will reach AGI within my lifetime. Some credible sources say within 5 years but who knows.

Either way I suspect it is inevitable. Who knows what may follow. Infinite wealth gap growth, mass job loss, post-work reforms, I’m not sure.

A bunch of questions bounce around in my head, examples may be:

  • Will private property rights be honored in said future?
  • Could Amish communities still exist?
  • Is it something we can prepare for as individuals?

I figured it is important to talk about seeing as it will likely occur in my lifetime and many of yours.

  • nickwitha_k (he/him)@lemmy.sdf.org · 16 hours ago

    It may or may not happen. What I do know is that it will never spontaneously arise from an LLM, no matter how much data they dump into it or how many tons of potable water they carelessly waste.

  • truxnell@aussie.zone · 13 hours ago

    As others have said, AGI won’t come from LLMs. “AGI” is their current buzzword to hype stocks. If they declare they’ve ‘reached’ AGI, read the fine print and you’ll find it’s an arbitrary measure.

    • Lovable Sidekick@lemmy.world · 18 hours ago

      Just like Fusion power! What if AI and fusion invent each other at the same time?

      Maybe that’s what the aliens have been trying to tell us ALL ALONG!!!

  • leftzero@lemmynsfw.com · 16 hours ago

    We were on track for it, but LLMs derailed that.

    Now we’ll have to wait for the bubble to burst, which will poison the concept of AI (since LLMs are being sold as AI despite being practically the opposite) in the minds of both users and investors for decades.

    It’d probably take a couple of generations for any AI research funding to become available again after that (not to mention cleaning up all the LLM slop spilled into our knowledge repositories)… but by that time we’ll almost certainly be extinct due to global warming.

    The LLM peddlers murdered the future for short term profits, and doomed us all in the process.

  • Tar_Alcaran@sh.itjust.works · 1 day ago

    It won’t happen while I’m alive. Current LLMs are basically parrots with a lot of experience, and will never get close to AGI. We’re no closer today than when a computer first passed the Turing test in the 60s.

  • Arkouda@lemmy.ca · 1 day ago

    I don’t think we will be able to achieve AGI except by absolute accident. We don’t understand our own brains well enough to create one from scratch.

    • amelia@feddit.org · 11 hours ago

      What makes you think a human brain has anything to do with general intelligence? Have you ever talked to people with a human brain?

      • Arkouda@lemmy.ca · 5 hours ago

        I have talked to many people. All have demonstrated having a human brain with varying degrees of intelligence.

  • Feyd@programming.dev · 1 day ago

    I don’t see any reason to believe anything currently being done is a direct path to AGI. Sam Altman and Dario Amodei are straight up liars and the fact so many people lap up their shameless hype marketing is just sad.

  • rickdg@lemmy.world · 1 day ago

    I’m more worried about jobs getting nuked no matter whatever AGI turns out to be. It can be vapourware and still the capitalist cult will sacrifice labour on that altar.

  • Dadifer@lemmy.world · 1 day ago

    I think it is inevitable. The main flaw I see from a lay perspective in current methodology is trying to make one neural network that does everything. Our own brains are composed of multiple neural networks with different jobs interacting with each other, so I assume that AGI will require this approach.

    For example: we are currently struggling with LLM hallucinations. What could reduce this? A separate fact-checking neural network.

    Please keep in mind that my opinion is almost worthless, but you asked.
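    [Editor’s note] The “separate fact-checking network” idea above can be sketched as a toy pipeline. Everything here is a hypothetical stub standing in for real models: `generator` plays the role of an LLM proposing candidate answers, and `fact_checker` plays the role of an independent verifier scoring them against a small knowledge store.

```python
# Toy sketch of a generator + independent fact-checker pipeline.
# Both "networks" are stubs; only the wiring between them is the point.

FACTS = {"capital_of_france": "Paris"}  # stand-in knowledge base (assumption)

def generator(question):
    """Stub for an LLM: proposes several candidate answers, some wrong."""
    return ["Paris", "Lyon", "Marseille"]

def fact_checker(question, answer):
    """Stub for a second, independent network: scores a candidate's support."""
    return 1.0 if FACTS.get(question) == answer else 0.0

def answer(question):
    """Pipeline: generate candidates, keep the one the checker trusts most."""
    candidates = generator(question)
    return max(candidates, key=lambda c: fact_checker(question, c))

print(answer("capital_of_france"))  # the checker filters out the hallucinations
```

    The design choice being illustrated is separation of concerns: the proposer and the verifier are different components with different jobs, so a confident but unsupported candidate can be rejected rather than emitted.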

  • throwawayacc0430@sh.itjust.works · 1 day ago

    Is a lab grown genetically modified human-brain hooked to a computer technically considered “Artificial Intelligence”?

  • Lovable Sidekick@lemmy.world · 18 hours ago

    I have no doubt software will achieve general intelligence, but I think the point where it does will be hard to define. Software can already outdo humans at lots of specific reasoning tasks where the problems are well defined. But how do you measure the generality of problems, so you can say last week our AI wasn’t general enough to call it AGI, but now it is?

  • YappyMonotheist@lemmy.world · 1 day ago

    The computer doesn’t even understand things, nor does it ask questions unprompted. I don’t think people understand that it doesn’t understand, lol. Intelligence seems to be non-computational!