• cronenthal · 6 hours ago

    That’s something people really have to get into their heads: an “answer” from an LLM is just a series of high-probability tokens. It’s only we humans who read reasoning and value into it. From the system’s standpoint it’s just numbers without any meaning whatsoever, and no amount of massaging will change that. LLMs are about as “intelligent” as a fancy database query.
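
    To make that concrete, here’s a toy sketch of what “generating an answer” looks like from the model’s side. The token IDs and probabilities below are made up, not from any real model; the point is that the loop only ever handles integers and weights.

    ```python
    import random

    def fake_next_token_distribution(context_ids):
        # Hypothetical stand-in for a model's forward pass: in reality the
        # probabilities come from billions of parameters, but either way the
        # output is just {token_id: probability} -- numbers, nothing more.
        return {1012: 0.41, 774: 0.33, 88: 0.19, 5021: 0.07}

    def generate(context_ids, steps=5):
        for _ in range(steps):
            dist = fake_next_token_distribution(context_ids)
            token_ids, weights = zip(*dist.items())
            # Pick a high-probability token ID. The "answer" is just a chain
            # of choices like this; any reasoning is read into it by the reader.
            next_id = random.choices(token_ids, weights=weights, k=1)[0]
            context_ids.append(next_id)
        return context_ids

    print(generate([101, 2023]))  # prints something like [101, 2023, 1012, 774, 88, 1012, 5021]
    ```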

    • Telorand@reddthat.com · 6 hours ago

      I use it for basic Python questions, but it gets even basic stuff wrong. The way it reframes a problem can sometimes help me see new options when I’m in a rut, but I’m not putting that code into production.

      • prettybunnys@sh.itjust.works · 4 hours ago

        I find myself asking an AI things and getting an answer that makes me go “what the actual fuck, why would you do this when you SHOULD do it this other way”

        Which is the best way it’s helped me.

        Making me realize I know what I’m doing already.