As AI capabilities advance in the complex medical scenarios doctors face daily, the technology remains controversial in the medical community.

  • Qu4ndo · 10 months ago

    To allow ChatGPT or comparable AI models to be deployed in hospitals, Succi said, more benchmark research and regulatory guidance are needed, and diagnostic success rates need to rise to between 80% and 90%.

    Sucks if you’re one of the 10-20% who don’t get proper treatment (and maybe die?) because some doctor doesn’t have time to double-check. But hey … efficiency!

    • theluddite@lemmy.ml · 10 months ago

      Ya, that’s a fundamental misunderstanding of percentages. For an analogous situation we’re all more intuitively familiar with: a self-driving car that is 99.9% accurate at detecting obstacles still hits one in every thousand people and/or things it encounters. That sucks.
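
      To make the arithmetic concrete, here is a quick back-of-the-envelope sketch (the case counts are invented for illustration, not taken from any real benchmark):

      ```python
      # Expected failures at a given per-case accuracy, using made-up numbers.
      def expected_failures(accuracy: float, cases: int) -> int:
          """How many cases are expected to go wrong at this accuracy (rounded)."""
          return round((1.0 - accuracy) * cases)

      # A "99.9% accurate" obstacle detector over a million encounters:
      print(expected_failures(0.999, 1_000_000))  # 1000 missed obstacles

      # The 80-90% diagnostic bar Succi mentions, over 10,000 patients:
      print(expected_failures(0.80, 10_000))  # 2000 misdiagnoses
      print(expected_failures(0.90, 10_000))  # 1000 misdiagnoses
      ```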

      Also, most importantly, LLMs are incapable of collaboration, something vital in any complex human endeavor but difficult to measure, and therefore undervalued by our inane, metrics-driven business culture. ChatGPT won’t develop meaningful, mutually beneficial relationships with colleagues who can ask each other for their thoughts when they don’t understand something. It’ll just spout bullshit when it’s wrong, not because it doesn’t know, but because it has no concept of knowing at all.

      • deweydecibel@lemmy.world · 10 months ago

        It really needs to be pinned to the top of every single discussion around ChatGPT:

        It does not give answers because it knows. It gives answers because the answer looks right.

        Remember back in school when you didn’t study for a test and went through picking answers that “looked right” because you vaguely remembered hearing the words in Answer B during class at some point?

        It will never have wisdom and intuition from experience, and that’s critically important for doctors.

          • ourob · 10 months ago

            “Looks right” in a human context means the answer that matches a person’s actual experience and intuition. “Looks right” in an LLM context means the series of words has been seen together often in the training data (as I understand it, anyway - I am not an expert).
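
            As a toy illustration of that distinction, here is a bigram word-frequency sketch. It is far cruder than a real LLM, and the training text is invented, but it shows “seen together often” doing the choosing rather than any notion of truth:

            ```python
            from collections import Counter, defaultdict

            # Toy "looks right" predictor: pick whichever word most often
            # followed the previous word in the training text. Frequency,
            # not understanding, drives the choice.
            training_text = "the patient has a fever the patient has a rash".split()

            follower_counts = defaultdict(Counter)
            for prev, nxt in zip(training_text, training_text[1:]):
                follower_counts[prev][nxt] += 1

            def looks_right(prev_word: str) -> str:
                """Return the word most often seen after prev_word in training."""
                return follower_counts[prev_word].most_common(1)[0][0]

            print(looks_right("patient"))  # "has": frequent, not necessarily true
            ```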

            Doctors are most certainly not choosing treatment based on what words they’ve seen together.

    • treefrog@lemm.ee · 10 months ago

      Or one of the ninety-nine percent of people who don’t give the AI their symptoms in medical terminology.