• Kalcifer@sh.itjust.works · 3 months ago

    Huh. That’s actually kind of a clever use case. I hadn’t considered that. I presume the main obstacle would be the token limit of whatever LLM one is using (presuming an LLM is what’s used). Analyzing an entire codebase would, depending on the project, likely require an enormous number of tokens, more than an LLM could handle, or the run would just be prohibitively expensive. To be clear, that’s not to say I know such an LLM doesn’t exist (one very well could), but if one doesn’t, that’s a rationale I would currently stand behind.
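    As a rough back-of-the-envelope check, here’s a minimal sketch (the ~4-characters-per-token ratio is a common approximation, not exact, and the extension set is just an example):

    ```python
    # Rough estimate of how many tokens a whole repo would consume,
    # using the common ~4-characters-per-token approximation.
    from pathlib import Path

    SOURCE_EXTENSIONS = {".py", ".js", ".c", ".rs", ".go"}  # example set

    def estimate_repo_tokens(repo_root: str) -> int:
        total_chars = 0
        for path in Path(repo_root).rglob("*"):
            if path.is_file() and path.suffix in SOURCE_EXTENSIONS:
                total_chars += len(path.read_text(errors="ignore"))
        return total_chars // 4  # ~4 chars per token, very approximate

    print(f"~{estimate_repo_tokens('.'):,} tokens vs. a typical 128k-token context window")
    ```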

    • unknowing8343OP · 3 months ago

      I understand, but I wouldn’t be surprised to see a solution out there that feeds the AI chunks of code without the full context… It may still be able to detect “hey, you told me this software is supposed to do X, and here it seems to be doing Y”.
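      Something like this naive chunking loop is the kind of thing I imagine (a rough sketch; the chunk size is arbitrary):

      ```python
      # Naive sketch: split a source file into fixed-size chunks so no
      # single request blows past the model's context window.
      def chunk_source(text: str, max_chars: int = 8000) -> list[str]:
          return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
      ```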

      I guess we’ll have to wait a couple of years for these tools to be accessible and affordable.

  • Static_Rocket@lemmy.world · 3 months ago

    You would first need to define malicious code within the context of that repo. To some people, telemetry is malicious.

    • Winfried 🌈@mastodon.nl · 3 months ago

      @Static_Rocket
      @unknowing8343

      Under the GDPR, any data processing must be proportionate to its goal, the goal must be transparent and justified, and the processing must be limited to that goal. Telemetry is perfectly fine if you stick to those rules and malicious if you don’t. It’s as simple as that. And no, this can’t be judged by looking at the repo; it’s the deployment that matters. Nonetheless, some code is always malicious, and some code should be deployed with care. It would be good to scan for those.

    • unknowing8343OP · 3 months ago

      Yes, of course. The idea would be something like passing the AI a repo link and a prompt like “this repo is supposed to be used for X; tell me if you find anything weird that doesn’t fit that purpose”.
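      In code, roughly this (a sketch assuming an OpenAI-style chat API; the model name and prompt wording are illustrative, not a tested recipe):

      ```python
      # Sketch: ask the model whether a chunk of code fits the repo's
      # stated purpose. Assumes the openai Python client (v1+).
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def review_chunk(purpose: str, code_chunk: str) -> str:
          response = client.chat.completions.create(
              model="gpt-4o",  # illustrative model choice
              messages=[
                  {"role": "system",
                   "content": f"This repo is supposed to be used for {purpose}. "
                              "Tell me if you find anything weird that doesn't fit that purpose."},
                  {"role": "user", "content": code_chunk},
              ],
          )
          return response.choices[0].message.content
      ```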

  • remram@lemmy.ml · 3 months ago

    Probably not. Obfuscation works, and the malicious behavior might even live in remote code that is only downloaded at build time or run time.

    There are a lot of heuristics you can use to check a codebase (e.g. disallowing certain functions or modules), but those already exist, no AI required. Unless you call static analysis “AI”, who knows.
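    For instance, a few lines of stock Python already flag suspicious calls, no model involved (the deny-list is only an example):

    ```python
    # Plain static analysis, no AI: walk the AST and flag calls to
    # functions that are commonly abused. The deny-list is just an example.
    import ast

    DENYLIST = {"eval", "exec", "compile", "__import__"}

    def flag_suspicious_calls(source: str) -> list[tuple[int, str]]:
        hits = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                if node.func.id in DENYLIST:
                    hits.append((node.lineno, node.func.id))
        return hits

    print(flag_suspicious_calls("x = eval(input())"))  # [(1, 'eval')]
    ```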

    • unknowing8343OP · 3 months ago

      But an AI can “realise” that the code might be downloading something it doesn’t need to. That’s the point.

      AI is “smart” in the sense that you can tell it what the library is supposed to do, and it can then look for things that don’t seem related to the purpose of the repo.

      • remram@lemmy.ml · 3 months ago

        If you’re one of those people who thinks every product is better with “AI” on the box, then sure. What you’re describing is static analysis, though; it is not new.

      • Sethayy@sh.itjust.works · 3 months ago

        It’s got a dataset of billions of tokens; you’re better off running the stock market as an antivirus.

        Instead, if you care, use programs specifically curated for the task, like antivirus software.