Hey,
over the course of the last month I created a dataset (images with bounding box annotations) for an object detection task, and it is working quite well. While evaluating classical metrics like precision and recall is easy, I wondered whether there are other useful metrics that rely solely on evaluating the dataset itself, without any object detector involved. I think it's general consensus that a dataset should be diverse in terms of scenes, angles, etc. to mitigate any potential bias of the detector towards a specific situation. But I do not have a good idea of how to evaluate the dataset's "diversity". I have some additional tags for every image that I could use (see the sketch below), but that feels really simplistic.
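To give an idea of what I mean by the tag-based approach, here is a rough sketch (the tag names and file names are just placeholders, not my actual dataset):

```python
# Sketch of a simple tag-based diversity check:
# count tag frequencies and compute a normalized Shannon entropy.
from collections import Counter
import math

# Each image carries a few scene/condition tags (hypothetical examples).
image_tags = {
    "img_0001.jpg": ["indoor", "frontal", "daylight"],
    "img_0002.jpg": ["outdoor", "top-down", "night"],
    "img_0003.jpg": ["outdoor", "frontal", "daylight"],
    # ...
}

# How often does each tag occur across the dataset?
counts = Counter(tag for tags in image_tags.values() for tag in tags)
total = sum(counts.values())

# Normalized entropy: 1.0 means the tags are perfectly balanced,
# values near 0 mean a few tags dominate the dataset.
entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
normalized_entropy = entropy / math.log(len(counts)) if len(counts) > 1 else 0.0

print(counts)
print(f"normalized tag entropy: {normalized_entropy:.3f}")
```

This only captures balance over the tags I happened to annotate, which is why it feels too coarse to me.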
I am grateful for any suggestions for other metrics :)