Researchers have used machine learning to spot people answering questions with false information. The hackers behind the 2015 IRS breach, for example, used personal information they'd previously stolen to answer security questions on the IRS website and, in turn, gain access to victims' tax returns. The researchers recruited 40 respondents: half were told to answer truthfully, while the other half were given fake identities to memorize and use during questioning. While tracking each respondent's mouse movements, the system threw in an unexpected question: "What is your zodiac sign?" This question was easy for the truth-tellers in the group, but the fakers were thrown off.
Pretty cool stuff. I can see it being implemented in the future as a sort of super CAPTCHA on higher-security websites. It does make me wonder a bit about people with disabilities, though, and whether the AI would be smart enough to account for those nuances in the initial recording of your normal mouse movements. Just be careful of anyone asking "What's your sign?" from now on.
While truth-tellers easily verify questions involving the zodiac, liars do not have the zodiac immediately available, and they have to compute it for a correct verification. The uncertainty in responding to unexpected questions may lead to errors.
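That hesitation shows up in the cursor trace itself: a liar who has to compute the answer tends to respond more slowly and wander before committing to a click. As a minimal sketch of what such a system might feed its classifier, here is a hypothetical feature extractor (the feature names and the exact feature set are my assumptions, not the authors' published ones) that turns a recorded trajectory into response time, path efficiency, and maximum deviation from the straight start-to-click line:

```python
import math

def trajectory_features(points):
    """Compute simple features from one mouse trajectory.

    points: list of (x, y, t) samples, from when the question appears
    to the click on an answer. Hypothetical feature set for
    illustration -- not the study's exact features.
    """
    (x0, y0, t0), (x1, y1, t1) = points[0], points[-1]

    # Total path length actually travelled by the cursor.
    path = sum(
        math.hypot(bx - ax, by - ay)
        for (ax, ay, _), (bx, by, _) in zip(points, points[1:])
    )
    # Straight-line distance from start to the final click.
    direct = math.hypot(x1 - x0, y1 - y0)

    def deviation(p):
        # Perpendicular distance of a sample from the start-to-click line;
        # hesitating respondents tend to wander before committing.
        px, py, _ = p
        if direct == 0:
            return math.hypot(px - x0, py - y0)
        return abs((x1 - x0) * (y0 - py) - (x0 - px) * (y1 - y0)) / direct

    return {
        "response_time": t1 - t0,
        "path_efficiency": direct / path if path else 1.0,
        "max_deviation": max(deviation(p) for p in points),
    }
```

A confident truth-teller's near-straight trace gives a path efficiency close to 1.0 and a small deviation, while a detour toward the wrong answer and back lowers the efficiency and raises the deviation; vectors like these are what a standard classifier would then be trained on.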