• OBJECTION!@lemmy.ml · 9 days ago

    Tbf, the article should probably mention the fact that machine learning programs designed to play chess blow everything else out of the water.

    • Zenith@lemm.ee · 9 days ago

      I forget which airline it was, but one of the onboard games in the seat-back TVs was called “Beginners Chess”, and it was notoriously difficult to beat. So it was tested against other chess engines, and it reportedly ranked among the top five strongest chess engines ever.

    • bier@feddit.nl · 9 days ago

      Yeah, it’s like judging how great a fish is at climbing a tree. But it does show that it’s not real intelligence or reasoning.

  • arc99@lemmy.world · 9 days ago

    Hardly surprising. LLMs aren’t *thinking*, they’re just shitting out the next token for any given input of tokens.

      • arc99@lemmy.world · 6 days ago (edited)

        An LLM is an ordered series of parameterized / weighted nodes which is fed a bunch of tokens; millions of calculations later, it generates the next token to append, and the process repeats. It’s like turning a handle on some complex Babbage-esque machine. LLMs use a tiny bit of randomness (“temperature”) when choosing the next token, so the responses are not identical each time.
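
        To make the “temperature” point concrete, here is a minimal sketch (an illustration only, not ollama’s or any model’s actual sampler; the logit values are made up) of how temperature scales next-token probabilities, and why temperature 0 collapses to a deterministic argmax:

        ```python
        # Sketch of temperature-based next-token sampling (illustrative assumption,
        # not the real implementation of any LLM runtime).
        import math
        import random

        def sample_next_token(logits, temperature):
            """Pick a token index from raw model scores (logits).

            temperature == 0 is treated as greedy decoding: always take the
            argmax, which is why the response is identical every run.
            """
            if temperature == 0:
                return max(range(len(logits)), key=lambda i: logits[i])
            # Softmax over temperature-scaled logits: higher temperature
            # flattens the distribution, lower temperature sharpens it.
            scaled = [l / temperature for l in logits]
            m = max(scaled)  # subtract max for numerical stability
            exps = [math.exp(s - m) for s in scaled]
            total = sum(exps)
            probs = [e / total for e in exps]
            return random.choices(range(len(logits)), weights=probs)[0]

        # Hypothetical scores for a three-token vocabulary.
        logits = [2.0, 1.0, 0.1]
        print(sample_next_token(logits, 0))    # greedy: always index 0
        print(sample_next_token(logits, 1.0))  # stochastic: varies run to run
        ```

        At temperature 0 the randomness disappears entirely, which is exactly what the ollama demo below shows.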

        But it is not thinking. Not even remotely so. It’s a simulacrum. If you want to see this, run ollama with the temperature set to 0, e.g.:

        ollama run gemma3:4b
        >>> /set parameter temperature 0
        >>> what is a leaf
        

        You will get the same answer every single time.