
I'm like a test unitarian. Unit tests? Great. Integration tests? Awesome. End-to-end tests? If you're into that kind of thing, go for it. Coverage of lines of code doesn't matter; coverage of critical business functions does. I think TDD can be a cult, but writing software that way for a little while is a good training exercise.

I'm a senior engineer at a small startup. We need to move fast and ship new features quickly. We've got CICD running mocked unit tests, integration tests, and end-to-end tests, with patterns and tooling for each.

I have support from the CTO for getting more testing in place. I've been able to use tests to cover bugs and regressions, and there's solid testing on a few critical user-path features. However, I get resistance from the team when it comes to adding enough testing to prevent regressions going forward.

The resistance is usually along lines like:

  • You shouldn't have to refactor to test something (the sketch after this list speaks to this one).
  • We shouldn't use mocks; only integration testing works.
    • Repeat for test types N and M.
  • We can't test yet; we're going to make changes soon.
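To make the first two objections concrete, here's a minimal sketch (in Python, with hypothetical names; our actual codebase differs) of the kind of refactor and mock in question: extracting a hard-wired dependency so a unit test can replace it.

```python
from unittest.mock import Mock

# Before: the dependency is constructed inside the function, so a unit test
# would have to hit the real payment gateway.
#
# def charge(order_total):
#     gateway = PaymentGateway()   # hard-wired dependency (hypothetical class)
#     return gateway.charge(order_total)

# After: the dependency is injected, so a test can pass a mock instead.
def charge(order_total, gateway):
    """Charge the given amount through the supplied gateway."""
    return gateway.charge(order_total)

def test_charge_uses_gateway():
    gateway = Mock()
    gateway.charge.return_value = "ok"
    assert charge(42, gateway) == "ok"
    gateway.charge.assert_called_once_with(42)  # verify the interaction
```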

How can I convince the team that the tools available to them will help, improving their productivity and cutting down on time spent firefighting?

[–] lysdexic@programming.dev 12 points 1 year ago

Here's a way to convince a team to write unit tests:

  • set up a CICD pipeline,
  • add a unit test stage,
  • add code coverage calculation,
  • add a rule where the unit test stage fails if the code coverage metric drops below a threshold (a sketch of such a gate follows below),
  • if your project is modularized, add pipeline stages to build, test, and track code coverage per module.
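As an illustration of that gate rule, here's a minimal sketch assuming a Python project with pytest and coverage.py; the 70% floor and the script name are made up, and any CI system can run this as its unit test stage:

```python
"""ci_coverage_gate.py: fail the pipeline when coverage drops below a floor."""
import subprocess
import sys

COVERAGE_FLOOR = 70  # keep this low at first; raise it as the suite grows

def main() -> int:
    # Run the unit test suite under coverage measurement.
    subprocess.run(["coverage", "run", "-m", "pytest"], check=True)
    # 'coverage report --fail-under=N' exits non-zero when total coverage < N,
    # which is what makes this CI stage (and the pipeline) fail.
    return subprocess.run(
        ["coverage", "report", f"--fail-under={COVERAGE_FLOOR}"]
    ).returncode

if __name__ == "__main__":
    sys.exit(main())
```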

Now, don't set the threshold to something like 95%. Keep it somewhat low. Also, be consistent, but not a fundamentalist.

Also, make test coverage a part of your daily communication. Create refactoring tickets whose definition of done specifies code coverage gains. Give regular status reports on the project's code coverage, and publicly praise those who do work to increase it.

[–] Reader9@programming.dev 9 points 1 year ago

Focusing on code coverage (which doesn't distinguish between more and less important parts of the code) seems like the opposite of your very good (IMO) recommendation in another comment to focus on specific high-value use-cases.

From my experience it’s far easier to sell the need for specific tests if they are framed as “we need assurances that this component does not fail under conceivable use cases” and especially as “we were screwed by this bug and we need to be absolutely sure we don’t experience it ever again.”

Code coverage is an OK metric and I agree with tracking it, but I wouldn’t recommend making it a target. It might force developers to write tests, but it probably won’t convince them. And as a developer I hate feeling “forced” and prefer, if at all possible, to use consensus to decide on team practices.

[–] lysdexic@programming.dev 5 points 1 year ago

> Focusing on code coverage (which doesn’t distinguish between more and less important parts of the code) seems like the opposite of your very good (IMO) recommendation in another comment to focus on specific high-value use-cases.

The usefulness of code coverage ratios is to drive the conversation on the need to track invariants and avoid regressions. I agree it's very easy to treat the metric as a target to optimize, but in this context coverage ratios are primarily used to raise the question of why a unit test wasn't added.

It's counterproductive to aim for ~100%, but without this indicator any question or concern regarding missing tests will feel arbitrary. With coverage ratios being tracked, the topic becomes systematic and helps build up a team culture that is test-driven, or at least test-aware.

> Code coverage is an OK metric and I agree with tracking it, but I wouldn’t recommend making it a target. It might force developers to write tests, but it probably won’t convince them.

True. Coverage ratios are an indicator, and they should never be an optimizable target. Hence the need to keep the minimum coverage ratio low, so that the team has flexibility in managing it. Also important: have CICD pipelines export the full coverage report, to track which parts of the code are not covered.
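For instance, a minimal sketch of that export step, again assuming coverage.py (the output paths are arbitrary; most CI systems can archive them as build artifacts):

```python
import subprocess

# Run the suite under coverage, then export the full report so reviewers can
# see exactly which lines are uncovered, not just the aggregate ratio.
subprocess.run(["coverage", "run", "-m", "pytest"], check=True)
subprocess.run(["coverage", "html", "-d", "coverage_report"], check=True)  # browsable per-file view
subprocess.run(["coverage", "xml", "-o", "coverage.xml"], check=True)      # machine-readable, for dashboards
```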

The goal is to have meaningful tests that mitigate risk, and to have a system in place that develops a test-minded culture and keeps the team mindful of the specific invariants worth tracking. Tests need to mean something and deliver value; maximizing ratios is not it.