It wants to seem smart, so it gives lengthy replies even when it doesn’t know what it’s talking about.

In an attempt to be liked, it agrees with almost everything you say, even if it just contradicted your opinion.

When it doesn’t know something, it makes shit up and presents it as fact instead of admitting ignorance.

It pulls opinions out of its nonexistent ass about the depth and meaning of a work of fiction based on info it clearly didn’t know until you told it.

It often forgets what you just said and spouts bullshit you already told it was wrong

  • hunnybubny · 2 days ago

    They probably did anyway.

    It does not matter. Output is based on probability.
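
    To illustrate the point about probability (a toy sketch, not any real model's code: the token strings and probabilities here are made up), a language model picks its next word by sampling from a probability distribution over continuations, so plausible-sounding wrong answers can come out whenever they carry probability mass:

    ```python
    import random

    # Hypothetical next-token distribution after a prompt like
    # "The capital of France is". Values are invented for illustration.
    next_token_probs = {
        "Paris": 0.70,     # the correct continuation is merely the most likely one
        "Lyon": 0.20,
        "Atlantis": 0.10,  # confident nonsense still gets a nonzero chance
    }

    def sample_next_token(probs):
        # Standard inverse-CDF sampling: walk the cumulative distribution
        # until a uniform random draw falls inside a token's slice.
        r = random.random()
        cumulative = 0.0
        for token, p in probs.items():
            cumulative += p
            if r < cumulative:
                return token
        return token  # fallback for floating-point rounding at the boundary

    print(sample_next_token(next_token_probs))
    ```

    Nothing in that loop checks whether the sampled token is true; it only checks how likely the token is to follow the prompt, which is the whole point the commenters are making.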

    • anomnom@sh.itjust.works · 2 days ago

      Thanks, way to forget that it’s fancy autocomplete.

      Yeah, they probably pirated it, but apparently it doesn’t weight knowledge sources very well.

      That seems to be the big missing piece in all this gen AI.

      I wonder if the selection of images online tends to be the higher quality subset of all imagery, whereas writings are all over the place, including a large quantity of shitposts. Could it make training image generators easier than text ones?