For example, if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn’t have other horrible things to relate it to?

  • CoderKat@kbin.social · 1 year ago

    The example you give points to another big concern: modern AI is very susceptible to leading questions. It’s very easy to get the answer you want by leading it on, which makes it a potential misinformation machine.

    Adversarial testing can help reduce this, but it’s an uphill battle to train an AI faster than people get misled by it.