LOOK MAA I AM ON FRONT PAGE
I see a lot of misunderstandings in the comments 🫤
This is a pretty important finding for researchers, and it’s not obvious by any means. This finding is not showing a problem with LLMs’ abilities in general. The issue they discovered is specifically for so-called “reasoning models” that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.
Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
What confuses me is that we seemingly keep pushing away what counts as reasoning. Not too long ago, some smart algorithms or a bunch of if/then instructions for software officially counted, by definition, as software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI does that with pattern recognition, memory and even more advanced algorithms, it’s no longer reasoning? I feel like at this point a more relevant question is “What exactly is reasoning?”. Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.
Wow it’s almost like the computer scientists were saying this from the start but were shouted over by marketing teams.
For me it kinda went the other way; I’m almost convinced that human intelligence is the same pattern repeating, just more general (for now).
Except that wouldn’t explain consciousness. There’s absolutely no need for consciousness or an illusion(*) of consciousness. Yet we have it.
(*) arguably, consciousness cannot, by definition, be an illusion. We either perceive “ourselves” or we don’t
How do you define consciousness?
It’s the thing that only you yourself can know for sure you have. If you have to ask, I might have to assume you could be a biological machine.
Is that useful for completing tasks?
lol, is this news? I mean, we call it AI, but it’s just an LLM and variants; it doesn’t think.
"It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’." -Pamela McCorduck´.
It’s called the AI Effect. As Larry Tesler puts it, “AI is whatever hasn’t been done yet.”
I’m going to write a program to play tic-tac-toe. If y’all don’t think it’s “AI”, then you’re just haters. Nothing will ever be good enough for y’all. You want scientific evidence of intelligence?!?! I can’t even define intelligence so take that! \s
Seriously tho. This person is arguing that a checkers program is “AI”. It kinda demonstrates the loooong history of this grift.
deleted by creator
You assume humans do the opposite? We literally institutionalize humans who don’t follow set patterns.
Maybe you failed all your high school classes, but that ain’t got none to do with me.
Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.
Most humans don’t reason. They just parrot shit too. The design is very human.
LLMs deal with tokens. Essentially, predicting a series of bytes.
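To make “predicting a series of tokens” concrete: it just means repeatedly sampling the next token from a probability distribution. Here’s a minimal sketch where a toy hand-written bigram table stands in for the model (the table, names and probabilities are made up for illustration, not any real model’s API):

```python
# Toy sketch of next-token prediction: a hand-written bigram table stands in
# for the model. Everything here is illustrative, not a real LLM API.
import random

# For each token, a probability distribution over possible next tokens.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start, max_tokens=10):
    """Sample one token at a time, each conditioned on the previous one."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = bigram_probs.get(tokens[-1])
        if dist is None:
            break
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(generate("the"))  # e.g. ['the', 'cat', 'sat']
```

An actual LLM replaces the lookup table with a neural network and a vocabulary of tens of thousands of tokens, but the generation loop is the same idea.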
Humans do much, much, much, much, much, much, much more than that.
No. They don’t. We just call them proteins.
You are either vastly overestimating the Language part of an LLM or simplifying human physiology back to the Greeks’ Four Humours theory.
No. I’m not. You’re nothing more than a protein-based machine on a slow burn. You don’t even have control over your own decisions. This is a proven fact. You’re just an ad hoc justification machine.
How many trillions of neuron firings and chemical reactions are taking place for my machine to produce an output? Where are these taking place and how do these regions interact? What are the rules for storing and reshaping memory in response to stimulus? How many bytes of information would it take to describe and simulate all of these systems together?
The human brain alone has the capacity for about 2.5 PB of data. Our sensory systems feed data at a rate of about 10⁹ bits/s. The entire English language, compressed, is about 30 MB. I can download and run an LLM with just a few GB. Even the largest context windows are still well under 1 GB of data.
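For scale, here’s a rough back-of-the-envelope comparison using those figures (they’re the estimates quoted above, not measurements, and the 4 GB model size is just a stand-in for “a few GB”):

```python
# Back-of-the-envelope comparison of the scales quoted above.
# All figures are rough estimates from the comment, not measurements.
PB, GB, MB = 1e15, 1e9, 1e6       # decimal prefixes, in bytes

brain_capacity   = 2.5 * PB       # claimed human brain storage capacity
sensory_rate_bps = 1e9            # claimed sensory input rate, bits per second
english_zipped   = 30 * MB        # claimed size of compressed English
llm_download     = 4 * GB         # "a few GB" LLM download, order of magnitude

print(f"brain capacity vs. one LLM download: ~{brain_capacity / llm_download:,.0f}x")
print(f"sensory input per day: ~{sensory_rate_bps * 86400 / 8 / GB:,.0f} GB")
print(f"compressed English vs. brain capacity: 1 : {brain_capacity / english_zipped:,.0f}")
```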
Just because two things both find and reproduce patterns does not mean they are equivalent. Saying language and biological organisms both use “bytes” is just about as useful as saying the entire universe is “bytes”; it doesn’t really mean anything.
Yeah, I’ve always said the flaw in Turing’s Imitation Game concept is that if an AI were indistinguishable from a human, it wouldn’t prove it’s intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs that eventually killed him, simply because he was gay.