• jsomae@lemmy.ml
    2 months ago

    You’ve missed something about the Chinese Room. The solution to the Chinese Room riddle is that it is not the person in the room but rather the room itself that is communicating with you. The fact that there’s a person there is irrelevant, and they could be replaced with a speaker or computer terminal.

    Put differently, it’s not an indictment of LLMs that they are merely Chinese Rooms, but rather one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.

    If one day we discover that the human brain works on much simpler principles than we once thought, would that make humans any less valuable? It should be deeply troubling to us that LLMs can do so much while the mathematics behind them are so simple. Arguments that because LLMs are just scaled-up autocomplete they surely can’t be very good at anything are not comforting to me at all.

    • kassiopaea@lemmy.blahaj.zone
      2 months ago

      This. I often see people shitting on AI as “fancy autocomplete”, or joking about how models get basic things wrong like in this post, while completely discounting how incredibly fucking capable they are in every domain that actually matters. That’s what we should be worried about. What does it matter that it doesn’t “work the same” if it still accomplishes the vast majority of the same things? The fact that we can get something that even approximates logic and reasoning ability from a deterministic system is terrifying on implications alone.

      • Knock_Knock_Lemmy_In@lemmy.world
        2 months ago

        Why doesn’t the LLM know to write (and run) a program to calculate the number of characters?

        I feel like I’m missing something fundamental.

        • OsrsNeedsF2P@lemmy.ml
          2 months ago

          You didn’t get good answers, so I’ll explain.

          First, an LLM can easily write a program to count the number of “r”s. If you ask an LLM to do this, you will get that code back.
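          For instance, the whole task boils down to a one-liner in Python (a sketch of the kind of code you’d get back, not any particular model’s output):

          ```python
          # Count occurrences of the letter "r" in "strawberry".
          word = "strawberry"
          print(word.count("r"))  # prints 3
          ```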

          But the ChatGPT.com website has no way of executing this code, even when it generates it.

          The second part of the explanation is how LLMs work. They operate on the word level (technically the token level, but think “word”). They don’t see letters; the model literally sees only words. It generates output by starting to type words and then repeatedly guessing which word is most likely to come next. So it literally does not know how many “r”s are in “strawberry”. The impressive part is how good this “guess the next word” process is at answering more complex questions.
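          As an illustration (the subword split and the IDs below are made up; real tokenizers split differently), “strawberry” might reach the model as nothing but two opaque integers:

          ```python
          # Toy tokenizer: the split and IDs are invented for illustration.
          # Real tokenizers (e.g. BPE) differ, but the point stands: the
          # model receives opaque token IDs, not letters.
          vocab = {"straw": 1001, "berry": 1002}
          tokens = [vocab[piece] for piece in ("straw", "berry")]
          print(tokens)  # prints [1001, 1002]; no letter "r" is visible here
          ```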

            • OsrsNeedsF2P@lemmy.ml
              2 months ago

              ChatGPT used to actually do this, but they removed that feature for whatever reason. Now the server the LLM runs on doesn’t provide the LLM with a Python terminal, so the LLM can’t use one.
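              For what it’s worth, such a feature amounts to a loop like the sketch below (all names here are hypothetical; the real ChatGPT tool-calling protocol is more involved): the server, not the model, executes code the model emits and feeds the result back as text.

              ```python
              # Hypothetical "code tool" loop: the server runs code the model
              # emits and returns the result as text for the model's context.
              def handle_model_reply(reply: str) -> str:
                  if reply.startswith("RUN:"):            # model asked to run code
                      result = eval(reply[len("RUN:"):])  # server-side execution
                      return f"Tool result: {result}"
                  return reply                            # ordinary text answer

              print(handle_model_reply('RUN:"strawberry".count("r")'))
              # prints: Tool result: 3
              ```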

    • UnderpantsWeevil@lemmy.world
      2 months ago

      one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.

      I’d be more impressed if the room could tell me how many "r"s are in Strawberry inside five minutes.

      If one day we discover that the human brain works on much simpler principles

      Human biology, famous for being simple and straightforward.

      • jsomae@lemmy.ml
        2 months ago

        Because LLMs operate at the token level, I think a fairer comparison with humans would be to ask why humans can’t produce the IPA spelling of words they can say, /nɔr kæn ðeɪ ˈizəli rid θɪŋz ˈrɪtən ˈpjʊrli ɪn aɪ pi ˈeɪ/, even though it should be simple, since they understand the sounds after all. I’d be impressed if somebody could do this too! But the fact that most people can’t shouldn’t really move you to think humans must be fundamentally stupid because of this one curious artifact. Maybe they are fundamentally stupid for other reasons, but this one thing is quite unrelated.

        • UnderpantsWeevil@lemmy.world
          2 months ago

          why humans can’t produce the IPA spelling of words they can say, /nɔr kæn ðeɪ ˈizəli rid θɪŋz ˈrɪtən ˈpjʊrli ɪn aɪ pi ˈeɪ/ despite the fact that it should be simple to – they understand the sounds after all

          That’s just access to the right keyboard interface. Humans can and do produce those spellings with additional effort or advanced tool sets.

          humans must be fundamentally stupid because of this one curious artifact.

          That humans turn oatmeal into essays via a curious lump of muscle is an impressive enough trick on its face.

          LLMs have 95% of the work of human intelligence handled for them and still stumble on the last bits.

          • jsomae@lemmy.ml
            2 months ago

            I mean, even people who are proficient with IPA struggle to read whole sentences written entirely in it. Similarly, people who speak and read Chinese struggle to read entire sentences written in pinyin. I’m not saying people can’t do it, just that it’s much less natural for us (even though it doesn’t really seem like it ought to be).

            I agree that LLMs are not as bright as they look, but my point here is that this particular thing – their strange inability to tell which letters make up the tokens they produce – specifically shouldn’t be taken as evidence for or against LLMs being capable in any other context.

            • UnderpantsWeevil@lemmy.world
              2 months ago

              Similarly, people who speak and read Chinese struggle to read entire sentences written in pinyin.

              Because pinyin is a romanization designed mainly as a teaching and transliteration aid, not the script Chinese speakers actually read and write day to day. It would make as much sense to call out people who can’t use Katakana.

              • jsomae@lemmy.ml
                2 months ago

                More like calling out people who can’t read romaji, I think. It’s just not a natural encoding for most Japanese people, even if they can work it out given enough time.