Hitoshi Kokumai


< Probability of Wrong People Getting Persecuted >

[Image: infographic "How does live facial recognition work?", showing that possible matches may be flagged]



“What would it take for a global totalitarian government to rise to power indefinitely? This nightmare scenario may be closer than first appears.”

https://www.bbc.com/future/article/20201014-totalitarian-world-in-chains-artificial-intelligence

It would be nightmarish to see conscientious citizens getting mechanically identified, detained, tortured and killed.

Even more nightmarish is to see the wrong citizens facing the same fate.

How probable would it be?

We have no clue, because vendors of face recognition systems do not publicise empirical data on False Acceptance Rates and the corresponding False Rejection Rates. (For instance, outdoor measurements taken in the street would count as empirical in the context of this article.)
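To make the trade-off between these two rates concrete, here is a minimal sketch of how they are measured and how they pull against each other. The score distributions and thresholds are made-up assumptions for illustration only, precisely because no empirical figures are published:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical similarity scores on a 0..1 scale. These distributions are
# assumptions for illustration; vendors publish no empirical equivalents.
genuine = rng.normal(0.75, 0.10, 10_000)   # a person compared with themselves
impostor = rng.normal(0.45, 0.10, 10_000)  # different people compared

for threshold in (0.5, 0.6, 0.7):
    far = (impostor >= threshold).mean()   # False Acceptance Rate
    frr = (genuine < threshold).mean()     # False Rejection Rate
    print(f"threshold={threshold:.2f}  FAR={far:.4f}  FRR={frr:.4f}")
```

Raising the threshold drives False Acceptance down and False Rejection up, and vice versa; quoting either rate without the other tells us little about real-world performance.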

For more about False Acceptance and False Rejection, please follow the links below:

- ‘Harmful for security or privacy’ OR ‘Harmful for both security and privacy’

https://www.linkedin.com/posts/hitoshikokumai_security-vs-privacy-or-security-privacy-activity-6684279824472797184-rBGV

- What we are apt to do

https://www.linkedin.com/posts/hitoshikokumai_identity-authentication-password-activity-6712248738968141824-Wl3Y




‘Security vs Privacy’ OR ‘Security & Privacy’


Police facial recognition surveillance court case starts ( https://www.bbc.co.uk/news/uk-48315979 )

I am interested in what is not referred to in the linked BBC report: the empirical rate of target suspects not getting spotted (False Non-Match) when 92% of the 2,470 potential matches were wrong (False Match).
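Taking the report's figures at face value, the arithmetic behind that 92% is simple. A back-of-the-envelope sketch, using only the counts quoted above:

```python
potential_matches = 2470   # alerts raised by the system
wrong_share = 0.92         # share of alerts that were False Matches

false_matches = round(potential_matches * wrong_share)   # ~2,272 wrong alerts
true_matches = potential_matches - false_matches         # ~198 genuine alerts
print(f"false alerts: {false_matches}, genuine alerts: {true_matches}")
```

Roughly nine alerts out of ten pointed at the wrong person, and this figure still says nothing about how many genuine suspects walked past unflagged.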

The police could have gathered such False Non-Match data in the street just as easily and quickly by having several officers act as suspects, some disguised with cosmetics, glasses, wigs, beards, bandages, etc., as many real suspects presumably do when walking in the street.

Combining the False Match and False Non-Match data, they would be able to obtain an overall picture of the performance of the AFR (automated face recognition) in question.
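To see why both error rates are needed for that overall picture, here is a minimal sketch of how they combine. The False Match Rate, False Non-Match Rate and suspect prevalence used below are hypothetical placeholders; only a street experiment like the one described above could supply real values:

```python
def alert_quality(fmr: float, fnmr: float, prevalence: float, crowd: int):
    """Combine the two error rates into an overall picture of AFR alerts."""
    suspects = crowd * prevalence
    innocents = crowd - suspects
    true_alerts = suspects * (1 - fnmr)   # suspects correctly flagged
    false_alerts = innocents * fmr        # innocents wrongly flagged
    missed = suspects * fnmr              # suspects not spotted at all
    precision = true_alerts / (true_alerts + false_alerts)
    return true_alerts, false_alerts, missed, precision

# Hypothetical inputs: 1 suspect per 10,000 faces, FMR 0.1%, FNMR 20%.
t, f, m, p = alert_quality(fmr=0.001, fnmr=0.20, prevalence=1e-4, crowd=100_000)
print(f"true alerts ~{t:.0f}, false alerts ~{f:.0f}, missed suspects ~{m:.0f}")
print(f"share of alerts that are correct: {p:.1%}")
```

Even with a seemingly tiny False Match Rate, the rarity of genuine suspects in a crowd means most alerts point at the wrong person, which is consistent with the 92% figure above.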

1.   If the AFR is judged accurate enough to correctly identify a person with a meaningful probability, it could be viewed as a serious threat to privacy in democratic societies, as civil-rights activists fear. This scenario is very unlikely, though, in view of the 92% figure for false spotting.

2.   If the AFR is judged inaccurate enough to misidentify a person with a meaningful probability, as we suspect it is, we could conclude not only that deploying AFR is just a waste of time and money but also that the false sense of security brought by a misguided, excessive reliance on AFR could itself be a threat to security.

Incidentally, should (2) be the case, we could draw two different observations.

(A)  It could discourage civil-rights activists - a system that proves only that an individual may or may not be identified correctly is hardly worth being called a 'threat' to our privacy.

(B)  It could encourage civil-rights activists - it debunks the story that AFR increases security so much that a certain level of threat to privacy must be tolerated.

It would be up to civil-rights activists which viewpoint to take.

Anyway, different people could come to different conclusions from this observation. I would like to see some police force conduct the empirical False Non-Match research in the street as indicated above, which could solidly establish whether "AFR is a threat to privacy though it may help security" or "AFR is a threat to both privacy and security".



