The CTO and CEO want that; it's a small startup and they have trouble measuring QA value. And of course I'm against the metrics above.
I'm in charge of QA at the company, so I have no one to back me up when it comes to explaining why those are bad metrics.
Some examples I've given so far regarding the "test cases created" metric:
- There is no relation between the number of tests and actual quality.
- Many tests do not contribute to coverage at all and may overlap with existing ones.
- Tests could be deprecated, or test a very specific condition that is no longer relevant.
- Tests could be inefficient or have design flaws.
- A lot of test cases means a lot of complexity. A metric that encourages adding complexity is a bad metric.
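The overlap point above can be made concrete with a toy sketch (all test names and covered-line sets below are made up): test-case count keeps rising while total coverage stays flat, because the new tests only re-cover lines the existing ones already hit.

```python
def total_coverage(tests):
    """Union of source lines covered across all tests."""
    covered = set()
    for lines in tests.values():
        covered |= lines
    return covered

# Two tests covering five distinct lines between them.
tests = {
    "test_login_ok":     {1, 2, 3, 4},
    "test_login_bad_pw": {1, 2, 3, 5},
}
print(len(tests), len(total_coverage(tests)))  # 2 tests, 5 lines covered

# Add three more tests that only re-cover already-covered lines:
# the "test cases created" metric goes up, coverage does not.
tests["test_login_ok_again"] = {1, 2, 3, 4}
tests["test_login_retry"]    = {1, 2, 3}
tests["test_login_cached"]   = {2, 3, 4}
print(len(tests), len(total_coverage(tests)))  # 5 tests, still 5 lines covered
```

A count-based metric rewards the second state over the first, even though nothing about the product's quality changed.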
I'm trying to shift focus to outcome-oriented metrics instead: coverage and how we increase coverage of new functionality, defects found in production (weighted by severity), how much time we need to test an iteration of an already-covered functionality, and how much time we need to create new tests for a new feature (weighting in complexity, of course).
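Two of the metrics above can be sketched in a few lines. This is only an illustration, not a standard formula: the severity weights and field names are hypothetical, and you'd adapt them to whatever your bug tracker exports.

```python
# Hypothetical severity weights -- tune these to your own risk model.
SEVERITY_WEIGHT = {"critical": 8, "major": 4, "minor": 2, "trivial": 1}

def weighted_escape_score(production_defects):
    """Severity-weighted sum of defects that escaped to production.

    Lower is better; a single critical escape outweighs several minor ones.
    """
    return sum(SEVERITY_WEIGHT[d["severity"]] for d in production_defects)

def defect_containment_rate(found_in_qa, found_in_prod):
    """Share of a release's known defects that were caught before shipping."""
    total = found_in_qa + found_in_prod
    return found_in_qa / total if total else 1.0

escapes = [
    {"id": "BUG-101", "severity": "major"},
    {"id": "BUG-102", "severity": "minor"},
]
print(weighted_escape_score(escapes))      # 4 + 2 = 6
print(defect_containment_rate(18, 2))      # 18 of 20 caught in QA -> 0.9
```

Unlike a raw test-case count, both numbers move in the right direction only when QA actually prevents defects from reaching users, which is closer to the value the CTO and CEO are trying to measure.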
Any clues on which metrics to use, or any other helpful metrics? Opinions? Ways to explain to employers why those metrics are wrong?
Thanks fellow QAs!
Posted 3 years ago on reddit.com/r/QualityAssu...