Hey,
I wonder if it is possible to make valid assumptions about the time-dependent behavior of an object detector, i.e. about detections of the same object across consecutive frames.

E.g., consider traffic light detection from a car's perspective. Let's say I know the recall, precision, etc. of an object detector detecting traffic lights in single frames. Is there any way to infer the likelihood of a detection in at least one frame while approaching a traffic light? So that I can say, for example: if 1000 traffic lights were passed, in 90% of the cases the detector detects the traffic light in at least one frame. Since two or more consecutive frames are usually very similar, especially at a high frame rate, I assume that if a detection is missed in the first frame, it is also likely to be missed in the following frames, but I don't know how to model that.
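To make the two extremes concrete, here is a minimal sketch (the per-frame recall p, the number of frames per approach n, and the "stickiness" parameter rho are all made-up illustrative values, not numbers I actually have). Independence gives 1 - (1 - p)^n, which I expect to be an optimistic upper bound; the Markov-style simulation is one crude way to express the correlation I described:

    import random

    def p_at_least_one_independent(p_frame, n_frames):
        # If frames were independent, P(at least one detection in n frames)
        # would be 1 - (1 - p)^n. Likely an upper bound with correlated frames.
        return 1.0 - (1.0 - p_frame) ** n_frames

    def p_at_least_one_markov(p_frame, rho, n_frames, trials=100_000):
        # Two-state Markov chain: with probability rho the detector repeats the
        # previous frame's outcome, otherwise it redraws with per-frame recall
        # p_frame. rho = 0 recovers independence; rho -> 1 means effectively
        # one draw per approach.
        hits = 0
        for _ in range(trials):
            detected = random.random() < p_frame  # outcome of the first frame
            any_hit = detected
            for _ in range(n_frames - 1):
                if random.random() >= rho:        # redraw the outcome
                    detected = random.random() < p_frame
                any_hit = any_hit or detected
            if any_hit:
                hits += 1
        return hits / trials

    if __name__ == "__main__":
        p, n = 0.6, 30   # assumed per-frame recall and frames per approach
        print("independent  :", p_at_least_one_independent(p, n))
        for rho in (0.0, 0.9, 0.99):
            print(f"markov rho={rho}:", p_at_least_one_markov(p, rho, n))

What I'm really asking is whether there is a principled way to estimate something like rho (or a better correlation model) from single-frame metrics, or whether this has to be measured on video sequences directly.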