• meyotch@slrpnk.net
    1 month ago

    I suspect it may be due to a similar habit I have when chatting with a corporate AI. I will intentionally salt my inputs with random profanity or non-sequitur info, partly for lulz, but also to poison those pieces of shit’s training data.

    • catloaf@lemm.ee
      1 month ago

      I don’t think they add user input to their training data like that.

      • kitnaht@lemmy.world
        1 month ago

        They don’t. The models are trained on sanitized data and don’t permanently “learn.” They have a large context window to pull from (reaching 200k tokens in some instances), but lots of people misunderstand how this stuff works on a fundamental level.
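
        To make the point concrete: since the model weights are frozen at inference time, a chat interface just feeds the model a rolling window of recent messages and drops the oldest ones once a token budget is exceeded. Here is a minimal, hypothetical sketch of that windowing logic; the `fit_context` name and the naive whitespace “tokenizer” are illustrative stand-ins, not any vendor’s actual API.

        ```python
        # Hypothetical sketch: a chat frontend enforcing a context window.
        # The model never "learns" from the conversation; older messages are
        # simply dropped once the token budget is exceeded.

        def fit_context(messages, max_tokens=200_000):
            """Keep the most recent messages whose combined token count fits
            within max_tokens. Token counting here is a naive whitespace
            split, standing in for a real subword tokenizer."""
            kept, total = [], 0
            for msg in reversed(messages):          # newest first
                n = len(msg.split())                # crude token estimate
                if total + n > max_tokens:
                    break                           # oldest messages fall out
                kept.append(msg)
                total += n
            return list(reversed(kept))             # restore chronological order

        # With a tiny budget, early messages are silently discarded:
        history = ["first message here", "a reply", "latest message"]
        print(fit_context(history, max_tokens=4))   # → ['a reply', 'latest message']
        ```

        Anything “poisoned” in an old message simply scrolls out of the window; it never touches the weights.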