
This post has been de-listed

It is no longer included in search results and normal feeds (front page, hot posts, subreddit posts, etc). It remains visible only via the author's post history.

Employer wants to implement number of TCs created and number of tests executed as the chief metrics to measure QA work.

The CTO and CEO want this. It's a small startup and they have trouble measuring QA value, and of course I am against the metrics above.

I am in charge of QA at the company, so I have no one to back me up when it comes to explaining why those are bad metrics.
Some examples I've given so far regarding TCs created:

- There is no relation between the number of tests and actual quality.

- Many tests do not contribute to coverage at all and may overlap. 

- Tests can become deprecated, or cover a very specific condition that is no longer relevant.

- Tests could be inefficient or have design flaws.

- A lot of test cases means a lot of complexity. A metric that rewards adding complexity is a bad metric.
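To make the first two points concrete, here's a minimal sketch (the function, the inputs, and the toy "coverage" check are all made up for illustration): three test cases that all hit the same branch triple the TC count while branch coverage doesn't move at all.

```python
# Hypothetical example: tripling the test-case count without
# improving coverage, because all three inputs exercise one branch.

def is_adult(age: int) -> bool:
    return age >= 18

def covered_branches(inputs) -> set:
    # Tiny stand-in for a coverage tool: record which branch each input hits.
    return {"true" if is_adult(a) else "false" for a in inputs}

one_test = [20]
three_tests = [20, 21, 99]  # three TCs, all landing on the same branch

# Coverage is identical; only the count went up.
assert covered_branches(one_test) == covered_branches(three_tests)
print(len(three_tests), "tests,", len(covered_branches(three_tests)), "branch covered")
```

A real coverage tool (e.g. coverage.py with branch mode on) shows the same effect: test count and covered branches are independent numbers.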

I am trying to shift the focus to:

- coverage, and how we increase coverage of new functionality;

- defects found in production (severity included);

- how much time we need to test an iteration of an already-covered functionality;

- how much time we need to create tests for a new feature (weighting in complexity, of course).
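One way to pitch the "defects in production, severity included" metric to a CEO is to show it as a single number per release. This is just a sketch; the weights and field names here are my own assumptions, not any standard:

```python
# Hypothetical sketch of a severity-weighted "escaped defects" score per
# release. SEVERITY_WEIGHT values are assumed for illustration only.

SEVERITY_WEIGHT = {"critical": 8, "major": 4, "minor": 2, "trivial": 1}

def escaped_defect_score(production_defects) -> int:
    """Sum of severity weights for defects that escaped to production."""
    return sum(SEVERITY_WEIGHT[d["severity"]] for d in production_defects)

# Example release: one major and one minor defect escaped.
release_defects = [
    {"id": "BUG-101", "severity": "major"},
    {"id": "BUG-102", "severity": "minor"},
]
print(escaped_defect_score(release_defects))  # 6 with the assumed weights
```

Tracked release over release, a falling score is something management can read directly, unlike a raw test-case count.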

Any clues on which metrics to use, or any other helpful metrics? Opinions? Ways to explain to employers why those metrics are wrong?

Thanks fellow QAs!


Posted
3 years ago