I’m like a test unitarian. Unit tests? Great. Integration tests? Awesome. End to end tests? If you’re into that kind of thing, go for it. Coverage of lines of code doesn’t matter. Coverage of critical business functions does. I think TDD can be a cult, but writing software that way for a little bit is a good training exercise.

I’m a senior engineer at a small startup. We need to move fast and ship new features quickly. We’ve got CI/CD running mocked unit tests, integration tests, and end-to-end tests, with patterns and tooling for each.

I have support from the CTO for getting more testing in place, I’ve been able to use tests to cover bugs and regressions, and there’s solid coverage on a few critical user-path features. However, I get resistance from the team on adding enough testing to prevent regressions going forward.

The resistance is usually along lines like:

  • You shouldn’t have to refactor to test something
  • We shouldn’t use mocks; only integration testing works.
    • Repeat for test types N and M
  • We can’t test yet; we’re going to make changes soon.

How can I convince the team that the tools available to them will help, improve their productivity, and cut down on time spent firefighting?

  • eksb@programming.dev

    Two rules:

    1. All code gets reviewed by a team member before getting merged.
    2. Fail code reviews if there are not sufficient tests.
    • sip@programming.dev

      yea, but if neither of us is really keen on writing tests and I review you, it would be in my lazy ass’s interest to 👍 without tests.

      • CodeMonkey@programming.dev

        That should be a disciplinary issue. The engineers in question should be brought before management to explain why they thought this particular change should be exempt from testing, and why that was not explained, in detail, in the code review.

        • lysdexic@programming.dev

          That should be a disciplinary issue.

          And that’s how you get teams to stop collaborating and turn your work environment to shit.

    • Skyzyx@lemmy.world

      I think that instead of “forcing tests,” you should focus on “proving quality.” You think it works the way you intended? Cool. How do you know? What if someone fed it 128 NUL bytes? Would it still do the right thing? Cool. How do you know?

      Ensuring quality is a larger concept than simply writing tests, but writing tests is definitely part of it. I think if you aim higher and teach the provability of quality, then the better engineers will self-select by starting to write tests.

      “If you want to build a ship, don’t drum up people to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.” — Antoine de Saint-Exupéry

      Additionally, if you’re one person against the world, you’re going to have a tough time. Build alliances. Partner with people who will reinforce the message. If you are the only one telling them something they don’t like, they will shun you for it. But if you partner with allies who all have the same message, people are more likely to start to listen. It starts to become a community.

      And if all else fails, prove the value of tests by going first. You can’t force anyone to do anything. But you can start doing this yourself. At some point, if code gets called into question, you can look at the tests together to see what’s covered and how that thing is supposed to work. It’s all part of letting the robots do what the robots are good at, which frees you up to do the things that you’re good at.

  • lysdexic@programming.dev

    Here’s a way to convince a team to write unit tests:

    • set up a CI/CD pipeline,
    • add a unit test stage,
    • add code coverage calculation,
    • add a rule that fails the pipeline if the code coverage metric drops,
    • if your project is modularized, add pipeline stages to build, test, and track code coverage per module.

    Now, don’t set the threshold high - say, 95%. Keep it somewhat low. Also, be consistent, but not a fundamentalist.
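
    For illustration, a minimal sketch of what such a gate stage could run, assuming a Python project with pytest and coverage.py (the script name and threshold value are just examples):

    ```python
    # check_coverage.py - example gate script for the unit test stage.
    import subprocess
    import sys

    THRESHOLD = 60  # keep the bar modest at first; raise it as the suite grows

    # Run the test suite under coverage.py; a test failure fails the stage too.
    subprocess.run(["coverage", "run", "-m", "pytest"], check=True)

    # "coverage report --fail-under=N" exits non-zero when total coverage is
    # below N, which fails this pipeline stage.
    result = subprocess.run(["coverage", "report", f"--fail-under={THRESHOLD}"])
    sys.exit(result.returncode)
    ```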

    Also, make test coverage a part of your daily communication. Create refactoring tickets whose definition of done specifies code coverage gains. Always give a status report on the project’s code coverage, and publicly praise those who did work to increase code coverage.

    • Reader9@programming.dev

      Focusing on code coverage (which doesn’t distinguish between more and less important parts of the code) seems like the opposite of your very good (IMO) recommendation in another comment to focus on specific high-value use-cases.

      From my experience it’s far easier to sell the need for specific tests if they are framed as “we need assurances that this component does not fail under conceivable use cases” and especially as “we were screwed by this bug and we need to be absolutely sure we don’t experience it ever again.”

      Code coverage is an OK metric and I agree with tracking it, but I wouldn’t recommend making it a target. It might force developers to write tests, but it probably won’t convince them. And as a developer I hate feeling “forced” and prefer if at all possible to use consensus to decide on team practices.

      • lysdexic@programming.dev

        Focusing on code coverage (which doesn’t distinguish between more and less important parts of the code) seems like the opposite of your very good (IMO) recommendation in another comment to focus on specific high-value use-cases.

        The usefulness of code coverage ratios is to drive the conversation on the need to track invariants and avoid regressions. I agree it’s very easy to interpret a metric as a target to optimize, but in this context coverage ratios are primarily used to raise the question of why a unit test wasn’t added.

        It’s counterproductive to aim for ~100%, but without this indicator, any question or concern regarding missing tests will feel arbitrary. With coverage ratios being tracked, the topic becomes systematic and helps build up a team culture that is test-driven, or at least test-aware.

        Code coverage is an OK metric and I agree with tracking it, but I wouldn’t recommend making it a target. It might force developers to write tests, but it probably won’t convince them.

        True. Coverage ratios are an indicator, and they should never be an optimizable target. Hence the need to keep minimum coverage ratios low, so that the team has flexibility in managing them. Also important: have CI/CD pipelines export the full coverage report, to track which parts of the code are not covered.

        The goal is to have meaningful tests and mitigate risks, and have a system in place to develop a test-minded culture and help the team be mindful of the need to track specific invariants. Tests need to mean something and deliver value, and maximizing ratios is not it.

  • CasualTee@beehaw.org

    I tried to introduce tests to one of the teams I worked on. I was somewhat successful in the end, but it took some time and effort.

    Basically, I made sure to work with the people interested in testing their code first. It’s good to have other people selling testing, instead of being the only voice claiming testing will solve some (not all!) problems.

    Then I made examples: I showed that testing some code believed to be untestable was actually not that hard.

    I was also very clear that testing everything was not the end goal, but that new projects, especially, should try to leverage testing, both as a way to allow for regression testing later on and as a way to improve the design. After all, a test is often the first user of a feature. (This was for internal libraries; I expect it would be a harder sell for a GUI, where the end design might come from a non-programmer such as a UX designer.)

    At this point, it was seen as a good measure to add a regression test for most bugs found and fixed.

    Also, starting from the high level, while harder (reliable integration and end-to-end tests are difficult to introduce), usually yields benefits that are more obvious to most people. People are much less nervous reworking a piece of code that has a testing harness, even if they are not in the habit of testing their own code.

    I did point at bugs that could have been easily prevented by a little bit of testing, without blaming anyone. Once the framework is in place and testing has already caught a couple of mistakes, it’s much harder to defend the argument that time spent testing could be better spent elsewhere. And that’s where we started to get discussions on the balance to strike between feature work and testing. It felt like a win.

    It took two years to get to a point where most people would agree that testing has its uses, and most new projects were making use of unit tests.

    • kersplort@programming.dev (OP)

      “It takes years” seems like the most reasonable alternative to forcing my coworkers to do TDD or blocking their merges.

      I think that getting people into the benefits of testing is something that’s really worthwhile. Building out some of these test suites - especially the end-to-end tests - was a really eye-opening experience that gave me a lot of insight into the product. “Submit a test with your bugfix” is another good practice: catching the error the first time prevents the regression from creeping back in.

  • mspencer712@programming.dev

    Support from the CTO means he’s willing to pay for it. Test coverage is a paid-for feature that your team is committing to work on. Would they refuse client-funded work because the client might have to pay for rework later?

    Maybe presenting it that way could get people past their hang-ups. Good luck.

    • sip@programming.dev

      yea but the counter was that they need to move fast.

      In the beginning, tests slow you down, but in time, the amount of bugs tests catch and the confidence in refactoring adds up to way more saved time.

      • Skyzyx@lemmy.world

        Right. “Move fast” means that it’s going to get progressively worse, and 2 years from now it will all collapse under the weight of its bugs.

        Think of tech debt as cancer, and tests as chemotherapy. It might suck for a while, but it can also make you much better.

  • mqn@kbin.social

    If you can measure regressions in some way, it will help to quantify the scale of the problem and give the team something to visibly work towards.

    For example: the number of automated error reports (from error tracking like Sentry), the number of issue/bug tickets created manually, or the number of PRs associated with fixing regressions (tagged after the fact).

    Watching these numbers go down is satisfying.

    The other thing I’d do is try to improve the tooling around testing to reduce friction when writing tests.

    Are there no consequences for shipping buggy things, though? No grumpy customers or internal users? I take pride in stuff that works well the first time.

    • Reader9@programming.dev

      This is a great suggestion because it focuses directly on tracking the outcome (did the software work?) and it gives a fair chance to the folks who don’t want to test - maybe their code really is perfect!

      Another similar metric I would add is the number of rollbacks of newly released code, if the CD system supports it using a method like canary or blue-green rollouts.

  • Reader9@programming.dev

    We can’t test yet; we’re going to make changes soon

    This could be a good opportunity to introduce the concept of test-driven development (TDD) without insisting on “write tests first.” Studying it can help illustrate why having tests is better when you are expecting to make changes, because of the safety they provide.

    “When we make those changes, wouldn’t it be great to have more confidence that the business logic didn’t break when adding a new technical capability?”
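
    A sketch of the kind of test that provides that confidence, written before the planned change (the pricing function and discount rule here are hypothetical stand-ins):

    ```python
    # A pinning test written before the implementation gets reworked, so the
    # business rule stays protected during the change. Function and rule are
    # hypothetical.
    def price_with_discount(subtotal: float, is_member: bool) -> float:
        return round(subtotal * (0.9 if is_member else 1.0), 2)

    def test_member_discount_is_pinned():
        assert price_with_discount(100.0, True) == 90.0
        assert price_with_discount(100.0, False) == 100.0
    ```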

    You shouldn’t have to refactor to test something

    This seems like a reasonable statement, and I sort of agree, in the sense that for existing production code, a change which only adds new tests yet also requires refactoring of existing functionality might feel a bit risky. As other commenters mentioned, starting with writing tests for new features or fixes might keep folks from feeling like they are refactoring just to test. Instead, they’re refactoring and developing for the feature, and the tests feel like they contribute to that feature as well.

    • lysdexic@programming.dev

      This could be a good opportunity to introduce the concept of test-driven development (TDD) without insisting on “write tests first.” Studying it can help illustrate why having tests is better when you are expecting to make changes, because of the safety they provide.

      I doubt that by now the concept of TDD is unheard of in any professional team. Name-dropping concepts actually causes any code quality effort to lose credibility, and works against you.

      Also, TDD’s credibility is already low, as it piles on the requirement of spending inordinate amounts of extra effort on aspects of a project which don’t deliver features, and thus its added value is questionable from a project management perspective.

      One aspect that does work is framing the need for tests as assurance that specific invariants are verified and preserved, and thus that they contribute to preventing regressions and expected failure modes. From my experience it’s far easier to sell the need for specific tests if they are framed as “we need assurances that this component does not fail under conceivable use cases” and especially as “we were screwed by this bug and we need to be absolutely sure we don’t experience it ever again.”

      • Reader9@programming.dev

        One aspect that does work is framing the need for tests as assurance that specific invariants are verified and preserved

        Agreed - this is the specific aspect which I hoped would be communicated by studying TDD a bit!

        The team is afraid that making changes will be more difficult when tests exist, but TDD (or maybe a more specific concept like you mentioned) demonstrates that tests make future changes easier.

        And I specifically advocated not to follow “write tests first”.

        Name-dropping concepts actually causes any code quality effort to lose credibility, and works against you.

        OK. If I were having an in-depth discussion with my team of fellow developers to convince them to start writing tests, I don’t think that’s name-dropping.

    • kersplort@programming.dev (OP)

      I think the best thing to do with TDD is pair with or convince devs to try it for a feature. Coming at things test first can be novel and interesting, and it does train you to test and use tests better. Once people have tried it, I think it broadens your use of tests pretty well.

      However, TDD can be a bit of a cult, and most smart and independent people (like the people willing to work at a <20-person company) will notice that TDD isn’t the silver bullet its proponents make it out to be.

    • Skyzyx@lemmy.world

      I think it depends. If you have to refactor in order to test, you probably built it poorly the first time around.

  • podatus@programming.dev

    Code review, as others mentioned, but if everyone is on equal footing in the review, then you need some kind of enforceable policy where work isn’t merged until a policy team signs off on it. Who is on that team becomes a matter of office politics and navigating it. With the CTO’s support, that can be worked out as designated roles among peers.

    If the people on that policy team require too much and slow things down in your fast-paced environment, then that is a separate issue, and you’ll need to find the right strategies and methodologies to navigate it.

  • trot_wiertnik_zawis@programming.dev

    Mandatory test coverage was a thing in the past, but later many teams found that if you are forced to have something like 80% test coverage, you end up testing the framework or even writing getter and setter tests (at least in the Java world). Tests for the sake of tests are pointless.

    …but testing as a skill and habit is also necessary to some extent. Let’s say you set up a mandatory 20% test coverage (for new code). This shouldn’t end in pointless tests (unless the code is pure configuration, you will get much more coverage than that just by adding quality tests). If you have people on your team who don’t write tests at all, this requirement will get them used to writing tests.

    When we no longer need to spend energy just remembering that we should do something (write tests, in this case), we can spend that energy on doing the thing better or faster.

    Once everyone on your team is used to writing tests, that is the time to start a conversation about the quality of those tests: idioms, preferred patterns, aspects of readability.

    PS At the beginning of my career, the most important aspect of automated testing was writing unit tests. Then sometimes people would write some integration tests. E2E tests? What’s that? …Only now do I sway more and more to the opposite side: E2E tests are the most important ones. In a legacy codebase you may not find a single unit test, but a few E2E tests cover so much. You will hate a codebase without unit tests just the same, regardless of whether there are E2E tests. But if you are new to the project, there is at least a chance that those E2E tests stop you from merging code to master that breaks core functionality. If the time put into writing those E2E tests had been spent on unit tests instead, you wouldn’t have that assurance. It doesn’t matter that a shared technical component in a codebase has 100% test coverage if it’s only a tiny part of the core functionality.
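
    To illustrate, a sketch of the kind of coarse E2E test I mean, assuming an HTTP service (the staging URL, endpoint, and payload are hypothetical):

    ```python
    # test_checkout_e2e.py - a coarse end-to-end test guarding a core user
    # flow. The staging URL, endpoint, and payload are hypothetical.
    import requests

    BASE_URL = "https://staging.example.com"

    def test_checkout_flow_returns_confirmation():
        # Exercise the running service the way a real client would.
        resp = requests.post(
            f"{BASE_URL}/api/checkout",
            json={"cart_id": "demo-cart", "payment_token": "test-token"},
            timeout=10,
        )
        assert resp.status_code == 200
        assert "confirmation_id" in resp.json()
    ```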

    PPS If you decide to incorporate a mandatory X% of code coverage, take a look at mutation testing frameworks and tools:

    Mutation testing (or mutation analysis or program mutation) is used to design new software tests and evaluate the quality of existing software tests. Mutation testing involves modifying a program in small ways. Each mutated version is called a mutant and tests detect and reject mutants by causing the behaviour of the original version to differ from the mutant. – Wikipedia
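
    A hand-rolled illustration of the idea (tools like PIT for Java or mutmut for Python automate the mutating; the function here is hypothetical):

    ```python
    # A "mutant" flips one operator. A good test suite should fail ("kill")
    # the mutant even when line coverage is already 100%.
    def is_adult(age: int) -> bool:
        return age >= 18  # original

    def is_adult_mutant(age: int) -> bool:
        return age > 18  # mutant: >= changed to >

    def test_boundary_case():
        # A lazy test like is_adult(30) gives full line coverage yet would
        # also pass against the mutant; only the boundary case below tells
        # the two versions apart.
        assert is_adult(18) is True          # passes on the original
        assert is_adult_mutant(18) is False  # the mutant would be caught
    ```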

    • kersplort@programming.dev (OP)

      We use a little bit of property testing to test invariants with fuzzed data. Mutation testing seems like a neat inverse.
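
      For anyone curious, a minimal sketch of what that looks like with the Hypothesis library (the encode/decode pair is a hypothetical stand-in):

      ```python
      # Property test: Hypothesis generates fuzzed inputs and checks an
      # invariant, here that decoding reverses encoding.
      from hypothesis import given, strategies as st

      def encode(s: str) -> bytes:
          return s.encode("utf-8")

      def decode(b: bytes) -> str:
          return b.decode("utf-8")

      @given(st.text())
      def test_decode_reverses_encode(s):
          assert decode(encode(s)) == s
      ```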

  • chickentendrils [any, comrade/them]@hexbear.net

    I definitely believe in tests, but it’s always an uphill battle to convince resistant devs, even when the implication of failures for end users could mean death or years spent in a cage.

    In businesses, I’ve only seen testing take off when it was properly budgeted for devs who already believed in it, or when the org hired dedicated test developers. The latter was an org that had 100+ contracted “testers” who’d literally click through the screens and note defects, so automated testing was an obvious cost savings.

  • leds@feddit.dk

    Got a bug? Write a test to prove you can reproduce it, prove you fixed it, and make sure it doesn’t come back.
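
    A sketch of that workflow (the function and issue number are hypothetical):

    ```python
    # This test failed before the fix (strip() was missing), reproducing the
    # bug; after the fix it passes and pins the behavior so the bug can't
    # quietly come back.
    def normalize_email(raw: str) -> str:
        return raw.strip().lower()

    def test_issue_1234_trailing_whitespace_regression():
        assert normalize_email("  User@Example.COM ") == "user@example.com"
    ```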

    • kersplort@programming.dev (OP)

      I’ve had some luck using AI to get over the hump of the first “does this component work” test - it’s easy to detect the stuff that needs to be mocked and put in stub mocks using GPT. GPT is horrible at writing good tests, but often that first test is harder to write than the meaningful ones.
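
      Something like this first smoke test, with the external dependency stubbed out (the service and client here are hypothetical):

      ```python
      # The first "does this component even run" test, with the dependency
      # replaced by a stub. ReportService and its client are hypothetical.
      from unittest.mock import MagicMock

      class ReportService:
          def __init__(self, client):
              self.client = client

          def summary(self, user_id):
              rows = self.client.fetch_rows(user_id)
              return {"user_id": user_id, "count": len(rows)}

      def test_summary_smoke():
          client = MagicMock()
          client.fetch_rows.return_value = [{"id": 1}, {"id": 2}]
          assert ReportService(client).summary("u1") == {"user_id": "u1", "count": 2}
      ```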

  • Skyzyx@lemmy.world

    Sounds like a bunch of junior engineers with senior job titles.

    “Senior” is the new mid-level.