From https://twitter.com/llm_sec/status/1667573374426701824

  1. People ask LLMs to write code
  2. LLMs recommend imports that don’t actually exist
  3. Attackers work out what these hallucinated package names are, then create and upload packages under those names with malicious payloads
  4. People using LLM-written code then install the malware themselves
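
Step 2 leaves a fingerprint you can check for before installing anything: every import in the generated code should map to a package that actually exists in a registry you trust. Below is a minimal sketch of such a check in Python, assuming PyPI as the registry (the tweet doesn't name an ecosystem). Two caveats: an import name doesn't always match the PyPI distribution name, and a name merely *existing* on PyPI proves nothing once an attacker has done step 3.

```python
# Heuristic pre-install check for LLM-generated Python code:
# list non-stdlib imports and see whether each is registered on PyPI.
import ast
import sys
import urllib.error
import urllib.request

def imported_names(source: str) -> set[str]:
    """Collect top-level module names from import statements."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    return names

def exists_on_pypi(name: str) -> bool:
    """True if a distribution with this name is registered on PyPI."""
    try:
        urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    source = open(sys.argv[1]).read()
    stdlib = sys.stdlib_module_names  # Python 3.10+
    for name in sorted(imported_names(source) - set(stdlib)):
        status = "exists on PyPI" if exists_on_pypi(name) else "NOT on PyPI (hallucinated?)"
        print(f"{name}: {status}")
```

Note that this only catches the window before step 3: once the attacker registers the hallucinated name, the check passes, so existence is not trust. Pinning known-good dependencies rather than installing whatever the generated code imports is the sturdier defense.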
  • Tovervlag@feddit.nl · 2 years ago

    Of everything I've asked ChatGPT, I've had the most success using it to help me understand things in Linux or to write small, simple scripts that do some conversion. But more often than not, if I ask it about a system and how something should work, it just makes something up. If you've already googled and read the documentation, it's probably not going to help you.

    In the case of Linux there's also a huge amount of information on the web, which is where it gets everything from, so if posts are old or wrong it hands you that wrong information as fact.

    What I've learned to do is just try what it gives me, but I make sure I fully understand what it does, and I rewrite it in my own style.