• 0 Posts
  • 5 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • The reason I compare them to autocomplete is that they’re token predictors, just like autocomplete.
    They take your prompt and predict the first word of the answer. Then they feed the result back in and predict the next word, repeating until a minimum length is reached and the answer seems complete. Yes, they’re a tad smarter than autocomplete, but they understand just as little of the text they produce. The text will be mostly grammatically correct, but they don’t understand it. Much like a compiler can tell you whether your code is syntactically correct, but can’t judge its logic.
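    The loop described above can be sketched in a few lines. This is a toy illustration, not a real model: `predict_next` is a hypothetical stand-in for the model’s token predictor, here just a lookup table for demonstration.

    ```python
    # Toy sketch of autoregressive (token-by-token) text generation.
    # predict_next stands in for the model; a real LLM would score the
    # whole vocabulary and pick (or sample) the most likely next token.

    def predict_next(tokens):
        # Hypothetical predictor: maps the last token to a likely successor.
        table = {"The": "cat", "cat": "sat", "sat": ".", ".": "<eos>"}
        return table.get(tokens[-1], "<eos>")

    def generate(prompt_tokens, max_len=10):
        tokens = list(prompt_tokens)
        while len(tokens) < max_len:
            nxt = predict_next(tokens)
            if nxt == "<eos>":   # model signals the answer seems complete
                break
            tokens.append(nxt)   # feed the result back in and repeat
        return tokens

    print(generate(["The"]))
    ```

    The point of the sketch: at no step does anything in the loop “understand” the sentence; it only picks a plausible next token given what came before.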



  • Or, and hear me out on this, you could actually learn and understand it yourself! You know, the thing you go to university for?
    What would you say if it came to light that an engineer had outsourced the structural analysis of a bridge to some half-baked autocomplete? I’d lose any trust in that bridge and any respect for that engineer, and I’d hope they were stripped of their title and held personally responsible.

    These things are currently worse than useless precisely because they’re sometimes right: it gives people the false impression that they can actually be relied on.

    Edit: just came across this MIT study regarding the cognitive impact of using LLMs: https://arxiv.org/abs/2506.08872