• fkn@lemmy.world
    link
    fedilink
    arrow-up
    164
    arrow-down
    3
    ·
    1 year ago

    I know this is a meme, but just in case someone doesn’t actually know. CI saves literally thousands upon thousands of dev hours a year, even for small teams.

    • CoderKat@lemm.ee

      As annoying as it is when someone else breaks the CI pipeline on me, it is utterly invaluable for keeping the vast majority of commits from breaking other people (and you from breaking others). I can’t imagine not having some form of CI to prevent merging bad code.

        • rambaroo@lemmy.world

          Hah, or my current one. Before we had CI you just committed directly to master (on SVN). It was incredible how unstable our build was. It broke basically every day. Then one of the senior back-end guys got promoted to architect and revamped the whole thing. Probably saved the company tens of millions of dollars in man-hours, at the very least.

      • Eufalconimorph

        Even better is when you restrict merges to trunk/main/master/develop (or whatever you call it) so that they can only happen from the CI bot after all tests (including builds for all supported platforms) pass. Nobody else breaks the CI pipeline, because breaking changes just don’t merge. The CI pipeline can test itself!
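
        One common way to get exactly that behavior is to make the test job a required status check on a protected branch. A minimal sketch, assuming GitHub Actions (the workflow name and the `make test` step are stand-ins for whatever the project actually runs):

        ```yaml
        # .github/workflows/ci.yml - with branch protection set to
        # "require status checks to pass", merges simply cannot happen
        # until this job is green on every platform.
        name: ci
        on:
          pull_request:
            branches: [main]
        jobs:
          test:
            strategy:
              matrix:
                os: [ubuntu-latest, macos-latest, windows-latest]
            runs-on: ${{ matrix.os }}
            steps:
              - uses: actions/checkout@v4
              - run: make test  # stand-in for the real build and test commands
        ```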

    • Jajcus@kbin.social

      And a lot of users’ frustration, especially on more niche platforms (Linux, ARM, etc.) - things look much better on release when the code has been regularly compiled and, hopefully, tested on all platforms, not just the one the lead developer uses.

    • devious@lemmy.world

      Why waste time with CI when you can save on thousands of dev hours by limiting yourself to only one giant fuck off release every year!

      /Taps forehead so hard it causes brain damage

    • engineZ@lemmy.today

      Probably also causes lots of hours of maintenance and troubleshooting…but it’s a net gain in the end.

      • fkn@lemmy.world

        I can’t even imagine not having a CI pipeline anymore. Having more than a single production architecture target, complete with test sets, security audits, linters, multiple languages, multiple-hour builds per platform… hundreds to thousands of developers… It’s just not possible to even try to make software at scale without it.

  • Ghostalmedia@lemmy.world

    “Leeroy Jenkins” is what my backend guys say right before they huck a major DB upgrade into prod without testing it in staging.

      • Ghostalmedia@lemmy.world

        Right before a long weekend where Monday is a government holiday.

        Also, Leeroy tried to optimize his PTO and hooked a backpacking trip onto the long weekend. He will be out all week and will have no phone reception.

    • steal_your_face@lemmy.ml

      Our old Jenkins box is called Leroy, and at my old place it was called Jankins. Thankfully we’ve moved on from that trash.

  • CodeBlooded@programming.dev

    Real talk- I agree with this meme as truth.

    The more I use CICD tools, the more I see value in scripting out my deployment with shell scripts and Dockerfiles that can be run anywhere, including within a CICD tool.

    This way, the CICD tool is merely a launch point for the aforementioned deployment scripts, and its only other responsibility is injecting deployment tokens and credentials into the scripts as necessary.
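
    A minimal sketch of that split (the script, variable, and target names here are hypothetical): the CI job’s whole contribution is injecting a credential and running the script, which works identically on a laptop.

    ```shell
    #!/bin/sh
    # deploy.sh - the CI tool only injects DEPLOY_TOKEN and invokes this;
    # a developer can run the exact same script locally.

    deploy() {
        # Fail fast if the credential was not injected/exported.
        token="${DEPLOY_TOKEN:?DEPLOY_TOKEN must be set by CI or exported locally}"
        target="${1:-staging}"   # default target when run by hand
        echo "deploying to ${target} with a ${#token}-char token"
        # real steps (docker build/push, migrations, ...) would go here
    }
    ```

    A pipeline step then reduces to something like `DEPLOY_TOKEN=$SECRET sh deploy.sh production` (with the script ending in a `deploy "$@"` call).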

    Anyone else in the same boat as me?

    I’d be curious to hear about projects where my approach would not work, if anyone is willing to share!

    Edit: In no way does my approach to deployment reduce my appreciation for the efforts required to make a CICD pipeline happen. I’m just saying that in my experience, I don’t find most CICD platforms’ features to be necessary.

    • wso277@lemmy.world

      This is pretty much what we do as well.

      All the build logic is coded in Python scripts; the Jenkinsfile only defines the stages (with branch restrictions) and calls the respective script function.

      This means it works on all machines, and if we ever need to move away from Jenkins, integrating with a new CI platform would require minimal effort.
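
      A sketch of that layout (the script and function names are invented, not their actual code):

      ```groovy
      // Jenkinsfile: stages only gate on branch and delegate to the
      // Python script, so the identical logic runs on a laptop or on
      // any future CI platform.
      pipeline {
          agent any
          stages {
              stage('Build') {
                  steps { sh 'python3 build.py build' }
              }
              stage('Deploy') {
                  when { branch 'main' }   // the branch restriction lives here
                  steps { sh 'python3 build.py deploy' }
              }
          }
      }
      ```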

    • synae[he/him]@lemmy.sdf.org

      You’re not advocating against CI like the meme seems to be, but rather for CI builds to be runnable on a human’s machine, with the same (or similar) results as when they run within the CI system. Which is what CI folks want anyway.

          • CodeBlooded@programming.dev

            I’ve found Docker helpful when I want to use it to build binaries or use CLI tools that may not be available directly on the CICD platform. Also, Docker makes it easier to run the same code on MacOS that I ended up running on a Linux CICD server.
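
            For example, a builder image along these lines (a sketch; the Go toolchain and paths are stand-ins for whatever the project actually uses):

            ```dockerfile
            # The toolchain lives in the image, so the same build runs on a
            # macOS laptop as on the Linux CI runner.
            FROM golang:1.22 AS build
            WORKDIR /src
            COPY . .
            RUN CGO_ENABLED=0 go build -o /out/app .

            # Ship only the binary in a minimal runtime image.
            FROM alpine:3.19
            COPY --from=build /out/app /usr/local/bin/app
            ENTRYPOINT ["app"]
            ```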

            What would you consider to be overuse of containers?

    • Elise@beehaw.org

      What about related tooling, such as viewing artifacts (for example, total memory usage) and graphing them in the browser?

      And sending emails, messages, etc. in case of a failure or change?

      • CodeBlooded@programming.dev

        Most of those things mentioned aren’t bona fide needs for me. Once a developer is deploying their project, they’re watching it go through the pipeline so they can quickly respond to issues and validate that everything in production looks good before they switch contexts to something else.

        I see what you’re saying though, depending on what exactly is being deployed, the policies of your organization, and maybe expectations that developers are working in another context once they kick off a deployment, it could be necessary to have alerting like that. In that case it may be wise to flex some features of your CICD platform (or build a more robust script for deployment that can handle error alerting, which may or may not be worth it).

        • Elise@beehaw.org

          I come from game dev. We do lots of checks on the data that all kinds of people can screw up. So it’s important these situations are handled automatically with an email to the responsible person. A simple change can break the game, or someone might commit an uncompressed texture so the memory usage jumps up.
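
          A sketch of that kind of data check (the directory layout and the size budget are made up): flag any texture over a byte budget, so an accidentally uncompressed image fails CI instead of shipping.

          ```shell
          # Hypothetical CI data check: report every file in an asset
          # directory that exceeds a per-texture byte budget, which
          # usually means someone committed an uncompressed image.
          MAX_TEXTURE_BYTES=$((8 * 1024 * 1024))   # made-up 8 MiB budget

          check_textures() {
              dir="$1"
              status=0
              for f in "$dir"/*; do
                  [ -f "$f" ] || continue
                  size=$(wc -c < "$f")
                  if [ "$size" -gt "$MAX_TEXTURE_BYTES" ]; then
                      echo "FAIL: $f is ${size} bytes (budget ${MAX_TEXTURE_BYTES})"
                      status=1
                  fi
              done
              return "$status"
          }
          ```

          A nonzero exit fails the pipeline stage, which is also where the notification email to the responsible committer would be triggered.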

    • devious@lemmy.world

      I don’t think there is a single right or wrong answer, but to play devil’s advocate: making your CI tooling lightweight orchestration for scripts that do the majority of the work means you lose the ability to easily add third-party tools to your pipeline (quality, security, testing, reporting, auditing, artefact management, alerting, etc.). It becomes more complex the more pipelines you create while trying to maintain a consistent set of tooling integrations.

    • gandalf_der_12te@feddit.de

      Honestly, CI is only meaningful on bigger projects (more than 100 man-hours invested in total). So I most often go without.

      But I do see its point.

    • Faresh@lemmy.ml

      I’m a bit confused. I thought “build system” referred to systems like autotools, scons or cmake. How are they related to green checkmarks? Couldn’t one also get green checkmarks when using a build shell script or makefile?

  • LOLjoeWTF@lemmy.world

    Ah, good 'ol Jenkins. It’s on my list of software I never want to use again, twice.

    One feature was really sweet though: being able to edit the Jenkinsfile script inline and run it. On the other hand, that encouraged the wild cowboy lands. Contrast that with GitHub Actions, where you get to see how many commits it took to get it right 🙃

    • astral_avocado@programming.dev

      What’s wrong with Jenkins? It works pretty great for automated scripts that need to run on a schedule, but I imagine you and this post specifically mean it in reference to CI/CD.

      • xedrak@kbin.social

        I work for a very large company which uses Jenkins for CI/CD and it’s an absolute nightmare. Granted, some of these issues may be related to how my company has it setup. I’m not in DevOps so I wouldn’t know. But these are my complaints:

        • Can have incredibly long queue times in some cases. It takes forever to spin up additional build agents to meet demand. In one case we actually had to abort a deploy because Jenkins wasn’t spinning up more build agents, and our queue times were going to put us outside of our 3 HOUR maintenance window.

        • Non-standard format for pipeline configuration files. It could just be JSON or YAML, but noooo, I have to learn something completely different that won’t transfer to other products.

        • Dated and overly complicated UI with multiple UX issues. I can view the logs in a modal from the build page, but I can’t copy from them? Fuck off Jenkins.

        I’m actively pushing my team to transition to GitHub actions, because it’s just better in every single way.

        • astral_avocado@programming.dev

          Ah man, yeah, I use it for a much more constrained and narrow use case. We only use GitHub Actions for CI/CD; it can be clunky itself in some aspects but otherwise works great.

        • ieatpillowtags@lemm.ee

          The poorly documented pipeline scripting was always a nightmare for me, plus there are two different types (declarative vs. scripted), so you have to be extra careful pulling examples from the internet.

          The build agent issue is 100% on your company not providing enough agents though. These days you can spin up agents as containers on k8s as needed.

        • zlatko@programming.dev

          And if you have a large company and many teams, you think Actions will help? (Aside from the UI issues you mention.) Rebuilding the Jenkins setup from scratch now would probably get rid of most of your problems, but in a year it’s gonna be a mess again. It’s similar to how it’s going to go with any CI.

          Also, a good DevOps person or team will keep the Devs happy (or at least, not very unhappy) with any tool, a bad one will suck anyhow.

          At least that’s my experience.

  • r00ty@kbin.life

    Joke’s on you. I have a Jenkins hook from github to trigger build.bat! :P

      • argv_minus_one@beehaw.org

        Cargo fetches dependencies, runs a variety of build tasks, can build a typical Rust project with little or no build scripting, and is configured with a straightforward TOML file. It’s not at all like a hand-written shell script. It’s also much more pleasant to use than any other build system I’ve seen, including shell scripts.
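
        For reference, that “straightforward TOML file” can be as small as this (a minimal sketch; the package name and dependency are arbitrary):

        ```toml
        # Cargo.toml - the entire build configuration for a typical Rust
        # project; `cargo build` and `cargo test` need nothing more.
        [package]
        name = "example"
        version = "0.1.0"
        edition = "2021"

        [dependencies]
        serde = "1"   # fetched and built automatically by cargo
        ```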

    • socsa@lemmy.ml

      Hey buddy can you step over here, there’s a very tall cliff I want you to see

    • Azzy@beehaw.org

      Please ignore everyone else being unkind - I’m somewhat new to build systems in general, what are the advantages/disadvantages of Bazel compared to other build systems?

  • YurkshireLad@lemmy.ca

    If I break our master build in CI, I get multiple emails and people saying “fix this”!!! I wouldn’t have to fix it if you stopped letting people commit directly to master and stopped using git rebase! 😁

  • GlitchSir@lemmy.world

    The build system issue is getting out of control. Just look at CMake.

    When your build system is a build system for build systems, you know something went wrong years ago.