• dustyData@lemmy.world · 1 month ago

    Yet we have the same fundamental problem with the human brain

    And LLMs aren’t human brains; they don’t even work remotely similarly. An LLM has more in common with an Excel spreadsheet than with a neuron. Read up on the learning models and pattern-recognition theories behind LLMs: they are explicitly designed not to function like humans. So we cannot assume that the same emergent properties exist in an LLM.

      • dustyData@lemmy.world · 1 month ago

        That’s not how science works. You are the one claiming they do, so the burden of proof is on you to show they have the same properties. Until then, assuming they don’t, since they aren’t human, is the sensible, rational route.

        • UnpluggedFridge@lemmy.world · 1 month ago

          Read again. I have made no such claim; I simply scrutinized your assertion that LLMs lack any internal representations and challenged it with alternative hypotheses. You are the one who made a claim. I am perfectly comfortable with the conclusion that we simply do not know what is going on in LLMs with respect to human-like capabilities of the mind.