Focusing on code coverage (which doesn’t distinguish between more and less important parts of the code) seems like the opposite of your very good (IMO) recommendation in another comment to focus on specific high-value use-cases.
In my experience it's far easier to sell the need for specific tests if they're framed as "we need assurance that this component won't fail under any conceivable use case," and especially as "we were screwed by this bug and we need to be absolutely sure we never experience it again."
Code coverage is an OK metric and I agree with tracking it, but I wouldn't recommend making it a target. It might force developers to write tests, but it probably won't convince them. And as a developer I hate feeling "forced"; wherever possible I prefer to settle team practices by consensus.
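For what it's worth, here's roughly what "track it, don't gate on it" can look like with Jest (just a sketch, assuming a TypeScript project with ts-jest; the commented-out threshold numbers are placeholders, not recommendations):

```ts
// jest.config.ts -- a minimal sketch of tracking coverage without making it a target.
// Assumes ts-jest; the preset and numbers below are illustrative, not prescriptive.
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  // Collect and report coverage on every run so the trend stays visible...
  collectCoverage: true,
  coverageReporters: ['text-summary', 'lcov'],
  // ...but deliberately omit coverageThreshold, so a dip in coverage never
  // fails the build. Turning coverage into a hard target would look like:
  //
  // coverageThreshold: {
  //   global: { lines: 80, branches: 70 },
  // },
};

export default config;
```

The difference is purely in whether the number can break a build: the first setup informs the team, the second enforces a floor on them.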
I found a few references to this exact model on candlepowerforums.com, which I believe has more folks who own(ed) incandescent lights. Not that it's been such a long time, but LED technology advanced very quickly. Not sure if that will help your search.
There are also a few people over on !flashlight@lemmy.world.