• KillingTimeItself@lemmy.dbzer0.com · 28 days ago

    are there any studies or data on AI’s actual electricity consumption? I know that training consumes a lot, but if using it doesn’t consume much, it doesn’t really matter, since training is a one-time cost.

    • Lemmilicious@feddit.nu · 28 days ago

      I don’t know of any studies unfortunately, but I did want to point out that training is not quite a one-time cost in practice, because training has already been done loads of times and is still being done! In theory, if we stopped training all AI and just kept the models we have, then the training cost would indeed be bounded just like you say; I’m just afraid we’re quite far from that.

      • KillingTimeItself@lemmy.dbzer0.com · 27 days ago

        this is true, but it’s always going to lead to a different AI and a different product eventually; it’s not like the field will be sustained entirely by the whims of ChatGPT 3.5 forever, for example.

    • prettybunnys@sh.itjust.works · edited · 28 days ago

      FWIW I’ve generated AI images using a solar panel the size of an iPhone as the power source.

      The training was the cost.

      The data was connected by wire but powered externally, which I argued was cheating, but I guess not. idk, I just work here.

      • KillingTimeItself@lemmy.dbzer0.com · 28 days ago

        anything outside of the device doing the work is an externality, so we can write that off anyway.

        Consuming memes on le internets consumes whatever that consumes, and people are fine with that, just not with AI, so it’s irrelevant anyway.

    • Hawk@lemmynsfw.com · 28 days ago

      It would be on the order of an intensive video game, maybe. Depends on the size of the model, etc.

      Training is definitely expensive but you are right in that it’s a one-time cost.

      Overall, the challenge is that it’s very inefficient. Using a machine learning algorithm to do something that could be implemented deductively is not ideal (on the other hand, if it saves human effort…).

      To a degree, trained models can also be retrained on newer data (e.g. freezing layers, LoRA, GaLore, Hypernetworks, etc.). Newer data can also be injected into a prompt to make sure the responses are aligned with newer versions of software, for example.
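      As a toy sketch of that “freeze most parameters, retrain a few” idea (plain Python, no ML framework; the linear model and data here are made up for illustration):

```python
# Toy "fine-tuning" of a pretrained model y = a*x + b: the slope `a`
# stays frozen, and only the bias `b` is updated on new data.
def finetune_bias(a, b, data, lr=0.1, steps=200):
    for _ in range(steps):
        # gradient of mean squared error with respect to b only
        grad_b = sum(2 * ((a * x + b) - y) for x, y in data) / len(data)
        b -= lr * grad_b  # `a` is never touched: it is "frozen"
    return b

# pretrained model: y = 2x + 0; the new data actually follows y = 2x + 5
new_data = [(x, 2 * x + 5) for x in range(10)]
b = finetune_bias(a=2.0, b=0.0, data=new_data)
print(round(b, 2))  # the bias converges toward 5.0
```

      Real methods like LoRA do something analogous at scale: the big pretrained weights stay fixed and only a small set of extra parameters gets trained.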

      The electricity consumption is a concern, but it’s probably not going to be the end of the world.

      • KillingTimeItself@lemmy.dbzer0.com · 28 days ago

        the one place you would use an LLM is somewhere nothing else can be used; cataloging information, for example.

        This pretty much lines up with my understanding of AI.

        • Hawk@lemmynsfw.com · 28 days ago

          They can also be really good for quickly writing code if you line up a whole bunch of tests and all the types, and then copy and paste that a few times, maybe with a macro in Vim.

          The LLM will fill in the middle correctly, like 90% of the time. Diff it in git, make sure the tests pass, and then that’s an extra 20 minutes I get to spend with my wife and kids.
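          A minimal sketch of that workflow (the “model-generated” body below is hand-written stand-in code; `slugify` and its tests are made-up examples):

```python
# Workflow sketch: write the typed signature and the tests first,
# let the LLM fill in the body, then run the tests before trusting it.

def slugify(title: str) -> str:
    # --- body the model would fill in (hand-written here) ---
    return "-".join(title.lower().split())

# the tests you lined up beforehand act as the safety net
assert slugify("Hello World") == "hello-world"
assert slugify("  Spaced   Out  ") == "spaced-out"
print("tests pass")
```

          If the generated body fails the tests, you throw it away; the tests, not the model, are what you trust.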

          • DragonTypeWyvern@midwest.social · 28 days ago

            And then the one fucking hallucination you missed ends up crashing the whole thing on the weekend.

            On the other hand, you could have done that by accident yourself.

            • Hawk@lemmynsfw.com · 28 days ago

              Yeah, that’s always a risk, but as you said, humans make mistakes too. And if you change your approach to software development by writing more tests and using strict interfaces or type annotations, etc., it is pretty reliable and definitely saves time.

          • KillingTimeItself@lemmy.dbzer0.com · 28 days ago

            that’s definitely one of the potentials, if you prefer a schizophrenic person writing code which you then have to make work, I suppose.

            I’m autistic about things and would write it all myself to my very high standards of “how autistic is it? If the answer is yes, then good enough.”

            If other people look at the shit I’m doing and say “you could do this better”, it’s not done well enough.

            • Hawk@lemmynsfw.com · 28 days ago

              Which I can totally understand, but I would like to spend more time with my family and less time writing code.

              This also allows me to iterate faster and identify useful ideas that justify deeper effort.