The new global study, conducted in partnership with The Upwork Research Institute, interviewed 2,500 C-suite executives, full-time employees, and freelancers worldwide. The results show that optimistic expectations about AI’s impact are not aligning with the reality many employees face: the study identifies a disconnect between managers’ high expectations and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and made the expected productivity gains harder to achieve. Not only is AI increasing the workloads of full-time employees, it is also hampering productivity and contributing to employee burnout.

  • GreatAlbatross@feddit.uk · 4 months ago

    The workload that’s starting now is spotting bad code written by colleagues using AI, and persuading them to rewrite it.

    “But it works!”

    ‘It pulls in 15 libraries, 2 of which you need to manually install beforehand, to achieve something you can do in 5 lines using this default library’

    • JackbyDev@programming.dev · 4 months ago

      I was trying to find out how to get human-readable timestamps from my shell history. It gave me this crazy script; it worked, but it was super slow. Later I learned you could just do history -i.
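
      A minimal sketch of that flag (assuming zsh, where history is equivalent to fc -l and -i prints ISO-8601 dates; bash needs HISTTIMEFORMAT instead):

      # zsh: one flag gives human-readable timestamps
      history -i

      # bash: set a strftime format, then list the history
      export HISTTIMEFORMAT='%F %T '
      history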

      • GreatAlbatross@feddit.uk · 4 months ago

        Turns out, a lot of the problems in *nix land were solved three decades ago with a single flag on a built-in utility.

        • JackbyDev@programming.dev · 4 months ago

          Apart from me not reading the manual (or skimming it too quickly), I might have asked the LLM to check the history file rather than the command. Idk. I honestly didn’t know the history command did anything different from just printing the history file.
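
          For what it’s worth, the raw history file only stores epoch seconds, which is why parsing it by hand is the long way around. A sketch, assuming zsh with EXTENDED_HISTORY (bash writes similar #-prefixed epoch comments when HISTTIMEFORMAT is set):

          # a line in ~/.zsh_history looks like:
          #   : 1718822400:0;git status
          # converting that epoch by hand takes something like:
          date -d @1718822400    # GNU date; BSD/macOS: date -r 1718822400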

      • trolololol@lemmy.world · 4 months ago

        I don’t run crazy scripts on my machine. If I don’t understand it, it’s not safe enough.

        That’s how you get pranked and hacked.

    • andallthat@lemmy.world · 4 months ago

      TBH those same colleagues were probably just copy/pasting code from the first Google result or Stack Overflow answer, so arguably AI did make them more productive at what they do.

      • skillissuer · 4 months ago

        yay!! do more stupid shit faster and with more baseless confidence!

      • rozodru@lemmy.world · 4 months ago

        2012 me feels personally called out by this. fuck 2012 me that lazy fucker. stackoverflow was my “get out of work early and hit the bar” card.

    • ILikeBoobies@lemmy.ca · 4 months ago

      I asked it to spot a typo in my code. It worked, but it rewrote my classes for each function that called them.

      • morbidcactus@lemmy.ca · 4 months ago

        I gave it a fair shake last year after my team members were raving about it saving time. I tried an SFTP function and some Terraform modules, and man, both of them just didn’t work. It did, however, do a really solid job of explaining some data-operation functions I wrote, which I was really happy to see. I do try to add a detail block to my functions and be explicit with typing where appropriate, so that probably helped some, but yeah, I was actually impressed by that. For generation, though, maybe it’s better now, but I still prefer to pull up the documentation, since I spent more time debugging the crap it gave me than it would have taken to piece things together myself.

        I’d use an LLM tool as an interactive documentation and reverse-engineering aid, though; I personally think that’s where it shines. Otherwise I’m not sold on the “gen AI will somehow fix all your problems” hype train.

        • NιƙƙιDιɱҽʂ@lemmy.world · 4 months ago

          I think the best current use case for AI when it comes to coding is autocomplete.

          I hate coding without GitHub Copilot now. You’re still in full control of what you’re building; the AI just autocompletes the menial shit you’ve written thousands of times already.

          When it comes to full applications/projects, AI still has some way to go.

          • morbidcactus@lemmy.ca · 4 months ago

            I can get that for sure. I did see a client using it for debugging, which seemed interesting as well; it made an attempt to narrow down where the error occurred and what actually caused it.

            • NιƙƙιDιɱҽʂ@lemmy.world · 4 months ago

              I’ll do that too! In the actual code you can just write something like

              // Q: Why isn't this working as expected?
              // A: 
              

              and it’ll autocomplete an answer based on the code. It’s not always 100% on point, but it usually leads you in the right direction.