• jj4211@lemmy.world
    3 days ago

    So my experience has been:

• For at least some jobs, there’s a ‘work item’ that amounts to generating a bunch of text, nominally for humans, that no human will ever read, but management thinks it’s important. AI can generate those walls of text no one actually needs while making management feel good.
• It can catch some careless mistakes and frequently guess a reasonable fix. For example, if you write a template string but forget to actually push it through templating, it can see that the string looks like it should be a template, add the templating call, and do a decent job of guessing the variables to pass in (see the first sketch after this list). However, it has a high false-positive rate and sometimes hallucinates variables that don’t exist, so it’s a bit frustrating and I’m not sure if the false-alarm annoyance is worth it…
• On code completion, it can guess the next line or two I was going for about 15% of the time, or 20% of the time with some trivial edits to fix it. It’s a bit annoying because, along with the line or two it can get right, it tends to suggest another 6–10 lines that are completely wrong 99% of the time, so if I accept the completion I have to delete a bunch. The 1% of the time it manages to land a full six-line completion accurately seems magical, but not magical enough to forget the annoyance of usually having to undo most of the work. It’s also a bit of a hazard because a suggestion has a high chance of ‘looking’ correct even when it contains a mistake, and if you’re skimming the suggestion you might overlook the mistake because you aren’t forced to process it at the slow speed of typing. One thing it does do pretty well: if I’m about to construct a string intended for a human user, it will autocomplete a decent enough error message, which tends to be a bit more forgiving of little mistakes in the data (see the second sketch below).
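
To make the templating case concrete, here’s a minimal, hypothetical sketch; the `User` type, the field names, and the use of Python’s `str.format` are my own assumptions for illustration, not from any particular codebase. The "before" function has obvious `{placeholders}` that nothing ever fills in, and the suggested fix is the missing formatting call plus guessed arguments:

```python
# Hypothetical sketch of the templating mistake described above;
# the User type and the field names are made up for the example.
from collections import namedtuple

User = namedtuple("User", "name")

# Before: the string clearly has {placeholders}, but nothing ever fills them,
# so a user would literally see "{name} has used {used_mb} of {quota_mb} MB".
def usage_report(user, used_mb, quota_mb):
    return "{name} has used {used_mb} of {quota_mb} MB"

# After: the suggested fix adds the missing .format() call and guesses the
# arguments from nearby variables -- usually right, but occasionally it
# invents a variable that doesn't exist in scope.
def usage_report_fixed(user, used_mb, quota_mb):
    return "{name} has used {used_mb} of {quota_mb} MB".format(
        name=user.name, used_mb=used_mb, quota_mb=quota_mb
    )

print(usage_report_fixed(User("alice"), 512, 1024))
# -> alice has used 512 of 1024 MB
```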
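
And a sketch of the error-message completions that tend to land well, again with made-up names (`load_config`, `config_path`, and the message wording are just illustrative): typing the start of a user-facing message is usually enough for a sensible rest to be suggested.

```python
# Hypothetical sketch of the human-facing error-message case; the function
# name, `config_path`, and the message wording are made up for the example.
import os

def load_config(config_path):
    if not os.path.exists(config_path):
        # Typing the start of a user-facing message like this is usually
        # enough for the rest to be suggested sensibly, and small wording
        # inaccuracies are harmless because a human reads it, not the program.
        raise FileNotFoundError(
            f"Config file not found at {config_path!r}; "
            "check the path or pass a different location."
        )
    with open(config_path) as f:
        return f.read()
```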