• UnderpantsWeevil@lemmy.world · 7 days ago

    When people refer to agents, is this what they are supposed to be doing?

    That’s not how LLMs operate, no. They aggregate raw text and sift for popular answers to common queries.

    ChatGPT is one step removed from posting your question to Quora.

    • Knock_Knock_Lemmy_In@lemmy.world · 7 days ago

      But an LLM as a node in a framework that can call a Python library should be able to count the number of Rs in strawberry.
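
      A minimal sketch of that setup, assuming a hypothetical count_letter tool and a hand-rolled dispatch table rather than any particular framework's API:

      ```python
      # Hypothetical tool-calling sketch: the LLM doesn't count letters itself,
      # it emits a structured tool call and the framework runs ordinary Python.

      def count_letter(word: str, letter: str) -> int:
          """Deterministic helper the model can delegate to."""
          return word.lower().count(letter.lower())

      TOOLS = {"count_letter": count_letter}

      # Pretend the model responded with a structured call like this:
      tool_call = {"name": "count_letter", "args": {"word": "strawberry", "letter": "r"}}

      result = TOOLS[tool_call["name"]](**tool_call["args"])
      print(result)  # 3 -- the framework hands this back to the model to phrase the answer
      ```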

      It doesn’t scale to AGI but it does reduce hallucinations.

      • UnderpantsWeevil@lemmy.world · edited · 7 days ago

        But an LLM as a node in a framework that can call a Python library

        That isn’t how these systems are configured. They’re just not that sophisticated.

        So much of what Sam Altman is doing is brute force, which is why he thinks he needs a $1T investment in new power capacity to build his next-iteration model.

        DeepSeek gets at the edges of this through its partitioned model. But you’re still asking a lot for a machine to intuit whether a query can be solved by some existing Python routine the system has yet to identify.
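
        A rough sketch of why that routing is the hard part (hypothetical dispatcher, not any vendor’s actual API): Python only runs if the model recognises the need and emits a structured call; a free-form answer bypasses the tool entirely.

        ```python
        import json

        # Hypothetical dispatcher: the registered tool helps only when the model
        # decides to emit a recognisable call instead of answering in prose.
        def dispatch(model_output: str, tools: dict):
            try:
                call = json.loads(model_output)            # model chose a structured tool call
                return tools[call["name"]](**call["args"])
            except (json.JSONDecodeError, KeyError, TypeError):
                return model_output                        # free-form answer: no code runs,
                                                           # so any miscount reaches the user

        tools = {"count_letter": lambda word, letter: word.lower().count(letter.lower())}

        print(dispatch('{"name": "count_letter", "args": {"word": "strawberry", "letter": "r"}}', tools))  # 3
        print(dispatch("There are two r's in strawberry.", tools))  # passes through unverified
        ```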

        It doesn’t scale to AGI but it does reduce hallucinations

        It has to scale to AGI, because a central premise of AGI is a system that can improve itself.

        It just doesn’t match the OpenAI development model, which is to scrape and sort data hoping the Internet already has the solution to every problem.