LOOK MAA I AM ON FRONT PAGE

  • minoscopede@lemmy.world
    2 months ago

    I see a lot of misunderstandings in the comments 🫤

    This is a pretty important finding for researchers, and it’s not obvious by any means. This finding is not showing a problem with LLMs’ abilities in general. The issue they discovered is specifically for so-called “reasoning models” that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

    Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
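
    To make that concrete, here's a minimal toy sketch in Python of the difference between rewarding only the final answer and also scoring the intermediate steps. The function names and the checker are made up for illustration, not how any real training pipeline works.

    ```python
    # Toy illustration only: contrasts rewarding just the final answer with
    # scoring each intermediate step. Names and setup are hypothetical.

    def outcome_reward(final_answer: str, correct_answer: str) -> float:
        """Reward based solely on the final answer; the reasoning is never checked."""
        return 1.0 if final_answer.strip() == correct_answer.strip() else 0.0

    def process_reward(steps: list[str]) -> float:
        """Score each arithmetic step, so a chain that lucks into the right
        answer through a wrong step still gets penalized."""
        def step_ok(step: str) -> bool:
            lhs, rhs = step.split("=")
            return eval(lhs) == float(rhs)  # fine for this toy; never eval untrusted input
        return sum(step_ok(s) for s in steps) / len(steps) if steps else 0.0

    # A chain of thought that reaches the right answer (12) via a wrong step.
    steps = ["3 * 4 = 10", "10 + 2 = 12"]

    print(outcome_reward("12", "12"))  # 1.0 -- outcome-only reward is fooled
    print(process_reward(steps))       # 0.5 -- the bad step drags the score down
    ```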

    • REDACTED@infosec.pub
      2 months ago

      What confuses me is that we seemingly keep moving the goalposts on what counts as reasoning. Not too long ago, some smart algorithms or a bunch of if/then instructions in software officially counted, by definition, as software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI is doing that with pattern recognition, memory, and even more advanced algorithms, it's no longer reasoning? I feel like at this point the more relevant question is "What exactly is reasoning?" Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

      https://en.wikipedia.org/wiki/Reasoning_system
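
      For anyone who doesn't click through: the systems that article covers are basically if/then rules applied over and over until nothing new can be derived. A toy forward-chaining sketch (the facts and rules here are made up, not taken from the article):

      ```python
      # Toy forward-chaining rule engine: keep applying if/then rules to the
      # known facts until no new conclusions appear. Everything here is made up.

      facts = {"socrates_is_human"}
      rules = [
          ({"socrates_is_human"}, "socrates_is_mortal"),  # if human, then mortal
          ({"socrates_is_mortal"}, "socrates_will_die"),  # if mortal, then will die
      ]

      changed = True
      while changed:
          changed = False
          for conditions, conclusion in rules:
              if conditions <= facts and conclusion not in facts:
                  facts.add(conclusion)
                  changed = True

      print(sorted(facts))  # all three facts, derived purely by rule matching
      ```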

  • SoftestSapphic@lemmy.world
    2 months ago

    Wow, it's almost like the computer scientists were saying this from the start but were shouted over by marketing teams.

  • Nanook@lemm.ee
    2 months ago

    lol, is this news? I mean, we call it AI, but it's just LLMs and variants; it doesn't think.

    • JohnEdwa@sopuli.xyz
      2 months ago

      "It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’." -Pamela McCorduck´.
      It’s called the AI Effect.

      As Larry Tesler puts it, "AI is whatever hasn't been done yet."

      • technocrit@lemmy.dbzer0.com
        2 months ago

        I’m going to write a program to play tic-tac-toe. If y’all don’t think it’s “AI”, then you’re just haters. Nothing will ever be good enough for y’all. You want scientific evidence of intelligence?!?! I can’t even define intelligence so take that! \s

        Seriously tho. This person is arguing that a checkers program is “AI”. It kinda demonstrates the loooong history of this grift.

  • surph_ninja@lemmy.world
    2 months ago

    You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

    • El Barto@lemmy.world
      2 months ago

      LLMs deal with tokens. Essentially, predicting a series of bytes.

      Humans do much, much, much, much, much, much, much more than that.
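
      (For anyone unfamiliar with the token-prediction part, a rough sketch of the loop follows; `dummy_model` is a stand-in toy, not any real LLM API.)

      ```python
      # Rough shape of autoregressive generation: the model only ever scores
      # "which token comes next", one position at a time. The model is a stand-in.

      def generate(model, prompt_tokens, max_new_tokens=5):
          tokens = list(prompt_tokens)
          for _ in range(max_new_tokens):
              logits = model(tokens)  # one score per vocabulary entry
              next_token = max(range(len(logits)), key=logits.__getitem__)  # greedy pick
              tokens.append(next_token)
          return tokens

      # Dummy 4-token "vocabulary": always favours token (last + 1) % 4.
      dummy_model = lambda toks: [1.0 if i == (toks[-1] + 1) % 4 else 0.0 for i in range(4)]
      print(generate(dummy_model, [0]))  # [0, 1, 2, 3, 0, 1]
      ```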

        • stickly@lemmy.world
          2 months ago

          You are either vastly overestimating the Language part of an LLM or simplifying human physiology back to the Greeks' Four Humours theory.

          • Zexks@lemmy.world
            1 month ago

            No. I'm not. You're nothing more than a protein-based machine on a slow burn. You don't even have control over your own decisions. This is a proven fact. You're just an ad hoc justification machine.

            • stickly@lemmy.world
              1 month ago

              How many trillions of neuron firings and chemical reactions are taking place for my machine to produce an output? Where are these taking place and how do these regions interact? What are the rules for storing and reshaping memory in response to stimulus? How many bytes of information would it take to describe and simulate all of these systems together?

              The human brain alone has the capacity for about 2.5 PB of data. Our sensory systems feed data at a rate of about 10⁹ bits/s. The entire English language, compressed, is about 30 MB. I can download and run an LLM with just a few GB. Even the largest context windows are still well under 1 GB of data.

              Just because two things both find and reproduce patterns does not mean they are equivalent. Saying language and biological organisms both use “bytes” is just about as useful as saying the entire universe is “bytes”; it doesn’t really mean anything.
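
              Putting those figures side by side (taking them at face value; the "few GB" of model weights is a rough assumption):

              ```python
              # Back-of-the-envelope comparison using the figures quoted above.
              brain_capacity_bytes = 2.5e15  # ~2.5 PB estimated brain capacity
              sensory_rate_bits_s  = 1e9     # ~10^9 bits/s of sensory input
              english_compressed_b = 30e6    # ~30 MB of compressed English
              llm_weights_bytes    = 5e9     # "a few GB" of model weights (assumed)

              print(f"{brain_capacity_bytes / llm_weights_bytes:,.0f}x")          # ~500,000x more capacity
              print(f"{english_compressed_b / (sensory_rate_bits_s / 8):.2f} s")  # all of compressed English in ~0.24 s of sensory input
              ```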

    • SpaceCowboy@lemmy.ca
      2 months ago

      Yeah, I've always said the flaw in Turing's Imitation Game concept is that if an AI were indistinguishable from a human, it wouldn't prove it's intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs that eventually killed him, simply because he was gay.