Lu Yuzhen
[Repost] Inverse Relationship Between Precision and Recall
2019-08-21 11:10

https://datascience.stackexchange.com/questions/49117/inverse-relationship-between-precision-and-recall/49121#49121

https://medium.com/@timothycarlen/understanding-the-map-evaluation-metric-for-object-detection-a07fe6962cf3

https://tarangshah.com/blog/2018-01-27/what-is-map-understanding-the-statistic-of-choice-for-comparing-object-detection-models/

https://github.com/tensorflow/models/tree/master/research/object_detection#tensorflow-object-detection-api

http://host.robots.ox.ac.uk/pascal/VOC/pubs/everingham10.pdf

https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/

If we decrease the false negatives (i.e., select more items as positive), recall always increases, but precision may increase or decrease. Generally, for models better than random, precision and recall have an inverse relationship, but for models worse than random, they have a direct relationship.

It is worth noting that we can artificially build a sample that causes a model that is better than random on the true distribution to perform worse than random, so we are assuming that the sample resembles the true distribution.
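As a quick sketch of this claim (plain Python, with hypothetical labels and scores), lowering the decision threshold can only grow the set of predicted positives, so recall never decreases, while precision is free to move either way:

```python
def precision_recall(labels, scores, threshold):
    """Precision and recall when scores >= threshold are predicted positive."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    p = tp / (tp + fp) if tp + fp else 1.0  # convention: nothing predicted positive
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

# Hypothetical data: lowering the threshold raises recall but lowers precision here.
labels, scores = [1, 0, 0, 1], [0.8, 0.2, 0.2, 0.2]
for t in (0.5, 0.0):
    print(t, precision_recall(labels, scores, t))
```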

Recall

We have

TP = P − FN


therefore, recall would be


r = (P − FN) / P = 1 − FN / P


which always increases as FN decreases.
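The monotonicity is easy to check numerically. A minimal sketch, assuming a hypothetical count of P = 10 real positives:

```python
P = 10  # hypothetical number of real positives
# recall = 1 - FN/P rises monotonically as FN shrinks
recalls = [1 - fn / P for fn in (4, 2, 0)]
print(recalls)
```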


Precision

For precision, the relation is not as straightforward. Let's start with two examples.

First case: precision decreases as false negatives decrease:

label   model prediction
1       0.8
0       0.2
0       0.2
1       0.2

For threshold 0.5 (false negatives = {(1, 0.2)}),


p = 1 / (1 + 0) = 1


For threshold 0.0 (false negatives = {}),


p = 2 / (2 + 2) = 0.5


Second case: precision increases as false negatives decrease (the same as @kbrose's example):

label   model prediction
0       1.0
1       0.4
0       0.1

For threshold 0.5 (false negatives = {(1, 0.4)}),

p = 0 / (0 + 1) = 0


For threshold 0.0 (false negatives = {}),

p = 1 / (1 + 2) ≈ 0.33
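Both worked cases can be reproduced with a few lines of plain Python, computing precision directly from the label/score tables above:

```python
def precision(labels, scores, threshold):
    """Precision when scores >= threshold are predicted positive."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fp) if tp + fp else 1.0

# First case: precision falls from 1 to 0.5 as the threshold drops.
case1 = ([1, 0, 0, 1], [0.8, 0.2, 0.2, 0.2])
print(precision(*case1, 0.5), precision(*case1, 0.0))

# Second case: precision rises from 0 to 1/3.
case2 = ([0, 1, 0], [1.0, 0.4, 0.1])
print(precision(*case2, 0.5), precision(*case2, 0.0))
```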


It is worth noting the ROC curve for this case. [Figure: ROC curve for the second case; image omitted in this repost]

Analysis of precision based on ROC curve

When we lower the threshold, the number of false negatives decreases and the true positive rate increases, which is equivalent to moving to the right in the ROC plot. I ran a simulation for better-than-random, random, and worse-than-random models, and plotted ROC, recall, and precision. [Figure: ROC, recall, and precision curves for the three models; image omitted in this repost]

As you can see, moving to the right, precision decreases for the better-than-random model, fluctuates substantially for the random model, and increases for the worse-than-random model, with slight fluctuations in all three cases. Therefore,

As recall increases, if the model is better than random, precision generally decreases; if the model is worse than random, precision generally increases.
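A minimal version of such a simulation can be sketched as follows. The score distributions (Gaussians with hypothetical means 0.7/0.3 for positives/negatives) are my assumption, not the original post's setup; flipping the scores turns the better-than-random model into a worse-than-random one:

```python
import random

random.seed(0)

def simulate(flip=False, n=1000):
    """Hypothetical scorer: positives score around 0.7, negatives around 0.3.
    flip=True inverts the scores, giving a worse-than-random model."""
    data = []
    for _ in range(n):
        y = random.random() < 0.5
        s = random.gauss(0.7 if y else 0.3, 0.15)
        data.append((int(y), 1 - s if flip else s))
    return data

def precision_at(data, t):
    """Precision when scores >= t are predicted positive."""
    tp = sum(1 for y, s in data if y and s >= t)
    fp = sum(1 for y, s in data if not y and s >= t)
    return tp / (tp + fp) if tp + fp else 1.0

better = simulate()
worse = simulate(flip=True)
# Lowering the threshold moves right on the ROC curve: precision drifts down
# for the better-than-random model and up for the worse-than-random one.
print([round(precision_at(better, t), 2) for t in (0.7, 0.5, 0.0)])
print([round(precision_at(worse, t), 2) for t in (0.7, 0.5, 0.0)])
```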


To repost this article, please contact the original author for authorization, and note that it comes from Lu Yuzhen's ScienceNet blog.

Link: https://wap.sciencenet.cn/blog-578676-1194555.html?mobile=1
