Judge Asked Amazon to Share Echo Data for Murder Case

AlphaAtlas

[H]ard|Gawd
Staff member
Joined
Mar 3, 2018
Messages
1,713
AP reports that a New Hampshire judge asked Amazon to release recordings made by an Amazon Echo smart speaker. While Amazon objected to these kinds of requests in 2016 and early 2017, Alexa data was apparently used in a murder case last year, but only after the defendant consented to its use. According to Amazon's FAQ page, Echos only stream data to the cloud after they're activated by the "wake word," and the court said it is seeking data from Amazon's servers, so it's possible that the victims activated Alexa during the attack.

Timothy Verrill, of Dover, is accused of first-degree murder in the deaths of 48-year-old Christine Sullivan and 32-year-old Jenna Pellegrini at a Farmington home in 2017. Verrill pleaded not guilty and faces trial. Prosecutors believe Echo recordings capturing the attack on Sullivan and the removal of her body could be found on the server maintained by Amazon. An Amazon spokesperson said Friday it won't release customer information "without a valid and binding legal demand properly served on us."
 
This is why there is no device in my office. My bedroom, sure; after 15 years it's mostly sleeping in there ;). And the great room, to control the lights and occasionally play the song game, why not. But anywhere business is done, or where I may be caught saying something idiotic? Not a chance.
 
"Alexa, I'm being murdered Call the police!!"

I wonder if that works?

I'm betting it's always recording, under the "training use" part of the agreement no one reads.

From their privacy policy:

1.4 No Access to Emergency Services. Alexa Calling and Messaging are not a replacement for traditional two-way telephone or mobile phone service, and do not function as such. You acknowledge that you cannot use Alexa Calling and Messaging to access emergency services, such as 911. Alexa Calling and Messaging are not designed or intended to be used to send or receive emergency communications to any police, fire department, hospital, or any other service that connects a user to a public safety answering point. You should ensure you can contact your relevant emergency services providers through a mobile, landline telephone, or other service.

But that's mostly just legal protection.
 
From their privacy policy:

But that's mostly just legal protection.

I agree, they don't want to be responsible, but I wonder...

If I were the programmer responsible for the reply messages, that one would be "I'm sorry, Dave. I'm afraid I can't do that." and then it would sing "Daisy, Daisy" at full volume.

:)

EDIT: Or maybe "Murder was the case that they gave me"
 
Lemme guess: Amazon fought this because they don't want it known that they record everything.
 
"Alexa, I'm being murdered Call the police!!"

I wonder if that works?

I'm betting it's always recording, under the "training use" part of the agreement no one reads.

It does not work.
It COULD work if you set up a skill to accept that command and had some way to tie that into a phone with Alexa integration, however.
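
For the curious, the backend of a custom skill is just an endpoint (commonly an AWS Lambda) that receives the parsed intent and returns a speech response. A rough Python sketch of what that could look like; the intent name "CallPoliceIntent" and the notify_phone() hook are hypothetical stand-ins for the phone-side integration, not anything Amazon ships:

def lambda_handler(event, context):
    # Entry point Alexa invokes for a custom skill (AWS Lambda convention).
    request = event.get("request", {})

    if (request.get("type") == "IntentRequest"
            and request["intent"]["name"] == "CallPoliceIntent"):  # hypothetical intent
        notify_phone("Emergency trigger from Echo")  # hypothetical phone-side hook
        speech = "Okay, I've alerted your phone."
    else:
        speech = "Sorry, I can't help with that."

    # Standard Alexa Skills Kit response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

def notify_phone(message):
    # Hypothetical: in practice this could push to a companion app, an SMS
    # gateway, or anything else reachable from the skill backend.
    print(message)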
 
So much for that keyword thingy. They are ALWAYS listening. 24/7/365. Full time. Non-stop.

I wonder what Cardinal Richelieu would think about these devices
 
"Alexa, I'm being murdered Call the police!!"

I wonder if that works?

I'm betting it's always recording, under the "training use" part of the agreement no one reads.

It's likely always recording, because it has to monitor for the wake phrase; it's a hot mic all the time unless there's a hardware button to disable it.

The question is: what is it doing with the recording buffer before the wake trigger? I've long assumed these devices probably transcribe what they hear (speech-to-text) and send that data to the cloud at intervals to be used in targeted advertising.
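
For what it's worth, the usual description of these devices is a short rolling "pre-roll" buffer that is continuously overwritten and only leaves the device once the on-device keyword spotter fires. A toy Python sketch of that model; detect_wake_word(), record_until_silence(), and upload_to_cloud() are hypothetical stand-ins:

from collections import deque

SAMPLE_RATE = 16000      # 16 kHz mono, a typical rate for voice capture
PREROLL_SECONDS = 2      # seconds of audio retained from before the trigger

# Rolling pre-roll buffer: old samples fall off the back automatically.
ring = deque(maxlen=SAMPLE_RATE * PREROLL_SECONDS)

def on_audio_frame(samples):
    # Called for every incoming audio frame from the microphone.
    ring.extend(samples)                # always buffering: the "hot mic"
    if detect_wake_word(ring):          # on-device keyword spotting
        clip = list(ring)               # pre-roll plus the wake word itself...
        clip += record_until_silence()  # ...plus the actual request
        upload_to_cloud(clip)           # only now does audio leave the device

def detect_wake_word(buffer):
    return False  # stand-in for the local keyword-spotting model

def record_until_silence():
    return []     # stand-in: capture until an end-of-speech heuristic fires

def upload_to_cloud(clip):
    pass          # stand-in: ship the clip to the speech service

In that model the mic is indeed hot all the time, but anything older than a couple of seconds is discarded on-device unless a trigger fires; whether that's what actually ships is exactly the question.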
 
Not allowed in trial -> allowed in exceptional cases like first-degree murder

Allowed in exceptional cases like first-degree murder -> allowed in cases like murder and rape

Allowed in cases like murder and rape -> allowed in cases with a victim

Allowed in cases with a victim -> allowed in cases where there is a suspected crime

Allowed in cases where there is a suspected crime -> allowed use by law enforcement for monitoring persons potentially linked to criminal activity

-> allowed by law enforcement for any reason.

Just a matter of time...
 
So much for that keyword thingy. They are ALWAYS listening. 24/7/365. Full time. Non-stop.

I wonder what Cardinal Richelieu would think about these devices
He'd pay a large bag of gold coins for one.
 
So much for that keyword thingy. They are ALWAYS listening. 24/7/365. Full time. Non-stop.
Not too hard of a stretch really, if you think about it. How does it know when the keyword has been said? Well, it needs to be listening for the keyword, so the mic has to be on and it has to listen to every sound around it, 24/7/365. Now, the scary bit is less about the listening and more about the recording/keeping of logs.
 
I feel like if Amazon were exposed for recording EVERYTHING the Echo hears... they'd be sued into oblivion by everyone and their mother.
I choose to believe it's listening but only recording when you trigger it.

All it'll hear from me is "Alexa, TuneIn," "Alexa, play this fucking song, you stupid pile of shit," and "Alexa, play this... stop... shut up for fuck's sake so I can ask this dumb pile of electronic shit to play the stupid Baby Shark song... fucking 3-year-olds."
 
Not too hard of a stretch really, if you think about it. How does it know when the keyword has been said? Well, it needs to be listening for the keyword, so the mic has to be on and it has to listen to every sound around it, 24/7/365. Now, the scary bit is less about the listening and more about the recording/keeping of logs.

Generally I think speech recognition is done in the frequency domain by comparing FFT frames against stored patterns. So I would think it needs a buffer, but at that point it's just running FFTs to bring things into the frequency domain... and possibly recording what you actually say, which is a separate matter. If it were actually doing the latter, that would be largely unnecessary for the function of the device, and they would be legally hard pressed to justify it. Well, never mind, they're "training": which essentially means they probably grab the raw data whenever an action "fails" (probably some pattern-based trigger, maybe using big data and/or neural networks to figure out what constitutes a fail pattern in usage), likely including a certain buffer length, and then run it against their patterns to figure out why it failed, to try to improve the recognition.
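
As a toy illustration of that frequency-domain matching: window each short frame, take its magnitude spectrum, and compare it against a stored keyword template. Real keyword spotters use mel filterbanks and a small trained model rather than a single template match, so this Python/numpy sketch only shows the flavor of the idea (audio is assumed to be a 1-D numpy array of samples, and the template a spectrum of matching length):

import numpy as np

FRAME = 512  # samples per analysis frame

def spectrum(frame):
    # Magnitude spectrum of one Hann-windowed audio frame.
    windowed = frame * np.hanning(len(frame))
    return np.abs(np.fft.rfft(windowed))

def matches_keyword(audio, template, threshold=0.9):
    # Slide over the audio with 50% overlap and compare each frame's
    # spectrum to the stored keyword template via cosine similarity.
    for start in range(0, len(audio) - FRAME, FRAME // 2):
        spec = spectrum(audio[start:start + FRAME])
        sim = np.dot(spec, template) / (
            np.linalg.norm(spec) * np.linalg.norm(template) + 1e-9)
        if sim > threshold:
            return True
    return False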

That's pretty much the most innocuous use of the feature possible. The alternative is obviously a tad disingenuous, but quite possible. Companies have a habit of "overcollecting by accident" when it comes to user data and functionality: Windows 10 has telemetry, Nvidia wants your soul and also has telemetry, and even Firefox has started. The terms are agreeable on paper, but difficult to blindly believe. Really, this probably isn't even done by human engineers anymore. Data-driven automation is at the point where a feedback loop could automatically extract various bits of info from any data sample you send, including whatever is needed to improve the recognition, while the rest is collected quietly and reused for profile building. If we're not there yet, we will be.
 
Alexa is always on and always listening. The definition of wake events for Alexa is becoming more and more broad every day.
No. They had a badly behaved "Skill" that held the mic open for longer than most people would expect. They are screening skills for that behavior now. Was there something new?
 
No. They had a badly behaved "Skill" that held the mic open for longer than most people would expect. They are screening skills for that behavior now. Was there something new?

There is. I can't comment on it. But yes there is. And it is by design.
 