Apple will remotely install software to scan all US phones for child sex abuse images

Except you backed up my argument perfectly. Did you not read the last part about a state actor? This also has nothing to do with something on your personal device. Again, this case is about information stored on someone else's servers.

Apple is going to be scanning your own device using your own hardware and electricity to intentionally search for something not residing on their hardware and reporting that in order to prosecute someone. Your case does not apply.

Also, if the cloud service becomes mandatory, that case is unlikely to apply at that point, since it will be impossible to separate your own device from someone else's servers for something stored on hardware you own.
Yes, I did read it. I thought Apple, like Microsoft, would not be considered a state actor; I would like you to expand on the difference, or on how this backs up your argument.

1) You ask Apple: can you take my picture and store it, please?
2) Apple: as specified by our agreement, I will look at it before accepting to store it.

Legally, is that dissimilar to a private sporting event or bar that searches people for weapons before letting them in (if that is legal in the US)?

If the cloud service becomes mandatory only for people who buy a phone from this point on, I feel like all of this would still apply.
 
The database part should not produce many false positives, if any; two different files with the same hash is extremely rare, if not practically impossible.
It's the machine learning part that will give a lot of false positives.
It is “easy” to create images with the same hash. It’s called a collision I believe. I agree that randomly having the same hash is rare, but people will intentionally create collisions to muck it up.

https://natmchugh.blogspot.com/2014/10/how-i-created-two-images-with-same-md5.html?m=1
 
It is “easy” to create images with the same hash. It’s called a collision I believe. I agree that randomly having the same hash is rare, but people will intentionally create collisions to muck it up.

https://natmchugh.blogspot.com/2014/10/how-i-created-two-images-with-same-md5.html?m=1
Especially if someone wants to target you. But even in the documents they are talking about expanding the program in multiple ways; one was anti-government protest. Why are they even worried about that? They got the money for the phone, now f**k off. These are really scary times we're living in, and if we don't stand up to this blatant violation of the 4th Amendment, it's only gonna get worse. So all you Apple simps in the thread saying you aren't opposed to this because you're "not a pedo": well, hope you plan on aligning the rest of your life with the globalist agenda, because at some point, if it keeps heading in that direction, you won't have a choice. Just gonna leave it at that before I get censored again.
 
if we own our phones, they need permission from us or a legal order to use them to scan the data on them.

You are confusing the physical phone with the software. The hardware (the bit you own) doesn't scan anything.
The software (which you don't own, and I bet you didn't read the 200 pages of terms and conditions) is scanning other software. If you want to "own" the software, the closest you can get is with open source software, you still don't own it, but you have the freedom to do whatever you want with it (basically anything) and there is no corporate big brother with "other interests".

Apple does not allow you or anyone else any freedom with respect to software.
 
With a little "do not ask again" option. But for you, who do know about it (or the few reading the agreement), that nuance becomes really small, it seems to me, especially since it is purely where some part of the hashing process occurs that seems to bother you. Why should everything you say not already be true, purely because your own device is being used in some way in the process? That could be an important nuance for the future, but for the current implementation it seems to change nothing in practice. The pictures were already scanned on arrival; the thing that changes is moving that a bit.

But no report is sent to anyone at that point, at least that is not what seems to be said:
The tool designed to detect known images of child sexual abuse, called “NeuralHash,” will scan images before they are uploaded to iCloud. If it finds a match, the image will be reviewed by a human. If child pornography is confirmed, the user’s account will be disabled and the National Center for Missing and Exploited Children notified.

Apple does a human review before filing a report to NCMEC:
https://www.apple.com/child-safety/...s_for_Children_Frequently_Asked_Questions.pdf
Let's parse:

"the tool designed to detect known images of child sexual abuse"
  • A database of image hashes. Apple has no clue what those images are, except that it assumes they are all child pornography related; there is no way for Apple to check what they are, since holding those images would also cause legal problems for Apple.
  • The government could send hashes that are not of known child pornography. What prevents that? Hashes of images flagged for religious, political, or other reasons might be sent. How would Apple know?
    • Those NeuralHash images would be flagged the same as any other image.

"If it finds a match, the image will be reviewed by a human."

  • This is even more disturbing: if the actual image is being flagged and viewed by Apple, that means any image on your phone can be looked at by someone. It also means access to your phone while you have no clue it is happening.
  • If viewed, it also means your image or images will be transferred from your phone to somewhere you had no idea about or control over.
"If child pornography is confirmed, the user’s account will be disabled and the National Center for Missing and Exploited Children notified."
  • Now your means of defense, your way to verify and see what is going on, is taken away from you. Is your phone now locked?
There is another aspect: the warning system on receiving OR SENDING images. If you have just taken an image and are going to send it, it would not be in any kind of hash database; it is new. How in the hell can it be flagged unless some sort of AI or other analysis is being run against your freshly captured image? I do believe this will be incorporated into the Apple Messages app. To protect your children and you, of course :confused:
 



"open to expand to other 3rd party apps in the future"

Well, there is ML on your phone; that explains the warning messaging flagging sexually explicit images among new images just captured and being sent or received. What prevents combining a hash with the ML on the phone? A hash of a person, for example, which the ML on the iPhone could then relate to a different image on your phone? The whole framework for this to be possible looks like it will be deployed with this.

Rapid Face Recognition Using Hashing:
http://www.iacl.ece.jhu.edu/proceedings/cvpr2010/papers/2101.pdf
 
It is “easy” to create images with the same hash. It’s called a collision I believe. I agree that randomly having the same hash is rare, but people will intentionally create collisions to muck it up.

https://natmchugh.blogspot.com/2014/10/how-i-created-two-images-with-same-md5.html?m=1
There are 2 types of hashes being discussed in this thread:
1) Cryptographic hashes.
2) "similar image" hashes.

1) A cryptographic hash: it should be practically impossible to create collisions (see SHA256 as an example). MD5 is a cryptographic hash, but it has been broken for a while now. No one uses MD5 for cryptographic purposes any more (or at least they shouldn't!). SHA1 has also recently been broken. Cryptographic hashes can be used on any data; they are not specific to images. Using ONLY a cryptographic hash wouldn't be good enough in this case, because simply changing 1 pixel of data in an image would create a completely different cryptographic hash (even though a human looking at the image wouldn't be able to notice the difference). Still, it would catch people who download illegal images and don't bother editing them at all.
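To illustrate that point about changing a single pixel, here is a minimal sketch (Python standard library only, placeholder data) showing that flipping one byte of input produces a completely different SHA-256 digest:

```python
# Minimal sketch: a one-byte change produces a completely different SHA-256 digest.
import hashlib

original = b"family_photo_data..."   # placeholder image bytes
tweaked = b"family_photo_datb..."    # same bytes with a single byte changed

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(tweaked).hexdigest()

print(h1)
print(h2)
print("identical:", h1 == h2)  # False: the two digests share essentially nothing
```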

2) "similar image" hash. This might be some machine learning algorithm or something else. Two similar images should have identical hashes or similar hashes. I dont think we know yet exactly what algorithm apple would use. Depending on what they use it might be easy to create collisions with images that are obviously very different. Someone posted an article in this thread where they were working on some sort of image hash that produced similar hashes for 2 images that obviously had completely different subjects.
 
It is and it isn't different. If you're dumb enough to store any data on "the cloud" you should know that anything there can and will be scanned even though the company should never do it. It's there and they're going to do whatever they can with your data.

Your own device should be under your control unlike anything with the cloud. Nothing should be run on it without your permission and authorization. It's your data and no one should have access to it unless you give them access every single time they ask.

And I don't want to hear anything about people saying not to turn on Apple's cloud access. It doesn't matter. Some iOS update is going to turn it on "accidentally," and with the way Apple is, it probably won't be long until it's forced on and you're required to use it in order to use the hardware. Ironically, this will only be possible because of Apple's walled garden and top-to-bottom ecosystem approach. Things may work easily, but this is simply one of the big downsides to jumping into and using an ecosystem like that. Apple has most of the control top to bottom, and it means Apple can basically do what they want. When you're the gatekeeper, it means a level of control which is not possible with something like Android.

The fact of the matter is this is a massive intrusion of privacy. It doesn't matter if others have done it. They're all bad and all of them need to be fought and refused.
I have my personal Microsoft OneDrive cloud drive assigned to my Microsoft account/email. The company I worked for had me use an iPhone and desktop computer for business; I could not use my own phone. Anyway, the company decided to go Microsoft 365 (Business, or whatever it was called), and when I entered my work email and my name, not my personal Microsoft account or email, Microsoft combined the two into one OneDrive account! All my private, well, semi-private stuff showed up on the company-controlled account. I had to call up Microsoft, literally using some pointed profanity, to get them to separate the two accounts. Anyway, whatever was on my computer would be backed up to the company servers, so I assume everything I had was now owned by the company.
 
There's another huge red flag regarding this program. Why is it scanning photos taken with the phone? Photos taken with the phone should not be scanned at all because there's no way possible those photos could be pictures previously added to a database of child porn. Therefore there is no reason to scan those photos as it's impossible for those photos to be what they are looking for as stated.

There is nothing innocent about this program and there are huge red flags and massive privacy issues as well as violations of multiple Constitutional Amendments. The best thing which could be said is this program is simply to get a foot in the door for later expansion of the program.
 
Let's parse:

"the tool designed to detect known images of child sexual abuse"
  • A database of image hashes. Apple has no clue what those images are, except that it assumes they are all child pornography related; there is no way for Apple to check what they are, since holding those images would also cause legal problems for Apple.
  • The government could send hashes that are not of known child pornography. What prevents that? Hashes of images flagged for religious, political, or other reasons might be sent. How would Apple know?
    • Those NeuralHash images would be flagged the same as any other image.

"If it finds a match, the image will be reviewed by a human."
  • This is even more disturbing: if the actual image is being flagged and viewed by Apple, that means any image on your phone can be looked at by someone. It also means access to your phone while you have no clue it is happening.
  • If viewed, it also means your image or images will be transferred from your phone to somewhere you had no idea about or control over.
"If child pornography is confirmed, the user’s account will be disabled and the National Center for Missing and Exploited Children notified."
  • Now your means of defense, your way to verify and see what is going on, is taken away from you. Is your phone now locked?
There is another aspect: the warning system on receiving OR SENDING images. If you have just taken an image and are going to send it, it would not be in any kind of hash database; it is new. How in the hell can it be flagged unless some sort of AI or other analysis is being run against your freshly captured image? I do believe this will be incorporated into the Apple Messages app. To protect your children and you, of course :confused:

I feel you responded as soon as you saw the text and did not come back to it.

You ask how Apple would know, and then observe that an Apple employee does look at flagged images to confirm they are actual child pornography before contacting the authorities; if they start to see pro-Russia meme images popping up in there, they would know.

As for the part about the Apple employee: not in the current iteration, no, not any image on your phone can be looked at by someone. It needs to be a picture you send to the cloud, and one with a matching database hash, for Apple to be able to open it and look at it; otherwise they would be missing the decryption capability.
You have a clue it is being done, because you just asked Apple to hold pictures for you.

And the same goes, I imagine, for NCMEC, which seems to be a somewhat private and somewhat international organization whose database Apple and others use.

As for a just-taken image: with the system as described, it can't be flagged. It does look like Apple is making the strict minimum effort not to be singled out versus the rest, but if that picture one day enters the database, the person who took the picture and wanted to use something like iCloud for it can be caught, where otherwise they would not be.

Obviously, just because you send a picture does not mean you personally took it.
 
Servers run on Linux because Linux is way easier to just do shit on when it comes to management, scripting, and reliability. Open source is just a perk, but no one is reviewing that code. You think people review all the code when they run yum update? They put a list of trusted repos and call it a day. However, trusted repos get compromised constantly, which is generally how Red Hat had a business model…
No, these companies don't have the resources to look through all that code, but the idea is that lots of eyes are looking at it. Like the University of Minnesota, who tried to send bogus Linux submissions and got kicked out of Linux for it. If Apple were to release their source code, then people would remove the part that scans all data and call it uOS or something unique. Like how Google's Chrome is also offered as Chromium without the Google, though that's by Google's choice, not anyone else's.
 
Like 100s if not 1000s of folks making hash collisions to defeat the purpose of the system? I'd love to see the fallout of that.
People would do this to SWAT someone. Send a benign image, or even host one on a website, that hashes out the same as known CP. You could get someone f'ed with, or the feds could get that onto your machine via spam, then use it to justify a warrant for deeper monitoring.
 
I'm curious: let's pretend for argument's sake that the probability of a false positive is zero. Would you then find it acceptable for Apple (or any entity) to use the resources on your device to run code designed to collect legal evidence to be used against you?

My answer would be a firm "no". If your answer is "yes", I would be interested to hear your reasoning.
 
People would do this to SWAT someone. Send a benign image, or even host one on a website, that hashes out the same as known CP. You could get someone f'ed with, or the feds could get that onto your machine via spam, then use it to justify a warrant for deeper monitoring.
It wouldn't work that way - if the code finds a match, it gets kicked up to an employee of or contractor for Apple who looks at it before a report gets sent to NCMEC. When the human finds that it is a mismatch it should theoretically get written off as such.
 
People would do this to SWAT someone. Send a benign image, or even host one on a website, that hashes out the same as known CP. You could get someone f'ed with, or the feds could get that onto your machine via spam, then use it to justify a warrant for deeper monitoring.
As someone who knows nothing about most of this: if someone sends you an image (via text, email, Apple talk), for it to end up on your iCloud account and trigger the hash validation, does one need to save it by mistake or take some action, or is there some setting in some cases that automatically backs up pictures I receive into my folder?

I'm curious: let's pretend for argument's sake that the probability of a false positive is zero. Would you then find it acceptable for Apple (or any entity) to use the resources on your device to run code designed to collect legal evidence to be used against you?

My answer would be a firm "no". If your answer is "yes", I would be interested to hear your reasoning.
If it is under the condition that I ask them first to host some content and they only want to host valid content (especially if it is free hosting), then yes, that would sound fair; it is the equivalent of: "Can I enter this concert?" "Yes sir, but empty your pockets to enter, and if we see something illegal we have the obligation to call the cops." There is a distinction between that example and CP; the example would be more akin to "we want an antivirus to run on your machine to catch anything that could hurt our server before we accept any data," which I feel people would have had no problem with. But if you ask Apple to host your pictures for you, them asking you to run something on your machine before they accept to host the pictures does seem acceptable. By acceptable I mean: if I were an Apple user who used their cloud to back up my family pictures, I would continue to use it. In both cases I use my muscles or my device to do actions that could incriminate me, because I find it reasonable that a concert hall does not want people to bring X things inside, and I find it reasonable that a picture hosting company refuses to host pictures from a known, human-validated list of CP.

If it were on local pictures (or pictures hosted on a different service) I would have a bigger issue, but I imagine I would still accept it (by accept I mean I would not go through the trouble of changing devices over it, which by definition means that I accept it).
 
ahahahhaha... the company of "privacy"

It is because they "are" that this thread is at page 8 too, from what I understand.

Were they not the company of privacy, they would do like Facebook, Google, Microsoft, and the others, and would continue to have users put pictures on the cloud where it is possible for Apple to look at them to detect CP. The complex way they constructed it, so that user pictures stay virtually impossible for Apple to look at (outside of false positives, which should be extremely rare), is what made the controversy.
 
So in theory someone at or working for Apple will eventually be viewing CP as part of their job?
That's actually a good question, I don't know if they will be looking at the actual image in question or a "simplified" version of it that is more detailed than the version which set off the perceptual hash, but less detailed than the actual image. I would assume the former, because from what I have read LEOs who process CSAM have a very high burnout rate and generally need a lot of counseling from being exposed to it.
 
That's actually a good question, I don't know if they will be looking at the actual image in question or a "simplified" version of it that is more detailed than the version which set off the perceptual hash, but less detailed than the actual image. I would assume the former, because from what I have read LEOs who process CSAM have a very high burnout rate and generally need a lot of counseling from being exposed to it.
And that raises another implication, namely that the fact that this will be reviewed by a live human before the final sendoff is NOT any cause for assurance. In fact, to me this is actually more worrying. What if they get so mentally scarred that they stop actually checking the images and just pass false positives over to the police, simply assuming that the system is working as intended (supposing it has had a high enough hit rate thus far)? To me, the human factor raises more concerns than assurances. Without any real insight into how well the system itself is working, we can't even be sure that the first wave of checks is good enough to begin with....
 
Glad my kids are older now. Probably would have been flagged for taking a picture of first baths.
Well, now if they're middle/high school age you just have to worry about them taking, sending, or receiving pictures of themselves or other kids. At least if you use Apple.
 
Glad my kids are older now. Probably would have been flagged for taking a picture of first baths.
Well, now if they're middle/high school age you just have to worry about them taking, sending, or receiving pictures of themselves or other kids. At least if you use Apple.

Yes, this is nuts. My kids have iPhones for sports practice, etc., and the idea that something could get flagged and reviewed by some creepo at Apple is pretty horrifying.
 
And that raises another implication, namely that the fact that this will be reviewed by a live human before the final sendoff is NOT any cause for assurance. In fact, to me this is actually more worrying. What if they get so mentally scarred that they stop actually checking the images and just pass false positives over to the police, simply assuming that the system is working as intended (supposing it has had a high enough hit rate thus far)? To me, the human factor raises more concerns than assurances. Without any real insight into how well the system itself is working, we can't even be sure that the first wave of checks is good enough to begin with....
This is a fantastic point. I suppose Apple has some training slides lined up, but are they to be considered law enforcement now?
 
I feel you responded as soon as you saw the text and did not come back to it.

You ask how Apple would know, and then observe that an Apple employee does look at flagged images to confirm they are actual child pornography before contacting the authorities; if they start to see pro-Russia meme images popping up in there, they would know.

As for the part about the Apple employee: not in the current iteration, no, not any image on your phone can be looked at by someone. It needs to be a picture you send to the cloud, and one with a matching database hash, for Apple to be able to open it and look at it; otherwise they would be missing the decryption capability.
You have a clue it is being done, because you just asked Apple to hold pictures for you.

And the same goes, I imagine, for NCMEC, which seems to be a somewhat private and somewhat international organization whose database Apple and others use.

As for a just-taken image: with the system as described, it can't be flagged. It does look like Apple is making the strict minimum effort not to be singled out versus the rest, but if that picture one day enters the database, the person who took the picture and wanted to use something like iCloud for it can be caught, where otherwise they would not be.

Obviously, just because you send a picture does not mean you personally took it.
Now, where in that brief statement Apple put out did it say iCloud? Plus, if this scan is only for what is going to iCloud, then why do it on the phone? You will have a database of perverted child abuse image hashes on your phone, which in itself is kind of disconcerting, doing work which, as you say, could all be done at the iCloud level; that would not make any sense. There should never be a need to constantly update that database of hashes, outside your control, on your iPhone, when neither you nor Apple really know what they represent since they come from the government, and which may be combined with machine learning's ability to analyze images for characteristics (those characteristics can be anything; it is ML/software).

If what Apple put out is not true then what is?

You will have a phone that will analyze every picture you take (outgoing at least, if not all the time) looking for sexually explicit material (possibly something else; you have no control over, nor knowledge of, what is really being looked at). What if you and your wife, loved one, or whoever want some memorable moments while in bed? Why should my personal life be judged or analyzed by Apple, would be my thoughts. Databases built? If I send images I take, or they are automatically backed up to iCloud, will my privacy and my partner be private? Protected?

Frankly I regret buying my wife an iPhone 12 now and this shit upcoming. She prefers iPhones over Android.
 
I'm curious: let's pretend for argument's sake that the probability of a false positive is zero. Would you then find it acceptable for Apple (or any entity) to use the resources on your device to run code designed to collect legal evidence to be used against you?

My answer would be a firm "no". If your answer is "yes", I would be interested to hear your reasoning.

No. But I'd feel better about the design of the system if they were only looking for exact matches vs. matches that use some ML to come up with a "yes, same image" or "no, not same image" result.

It still puts the onus on NCMEC and various law enforcement agencies to verify images and update the database, and then the "fuzzy logic" of ML looking for a high degree of similarity is no longer a factor.

I wouldn't like it and I still wouldn't want it on my system, but in that case I'd probably be... a little less upset about the whole thing. Especially if it was a precursor to full end to end encryption like some are speculating.

That's actually a good question, I don't know if they will be looking at the actual image in question or a "simplified" version of it that is more detailed than the version which set off the perceptual hash, but less detailed than the actual image.

Yes. From Daring Fireball, emphasis added:

Apple is cryptographically only able to examine the safety vouchers for those images whose fingerprints matched items in the CSAM database. The vouchers include a “visual derivative” of the image — basically a low-res version of the image. If innocent photos are somehow wrongly flagged, Apple’s reviewers should notice.

(source: https://daringfireball.net/2021/08/apple_child_safety_initiatives_slippery_slope)

(I would highly recommend anyone who is confused about what the system actually does and how it works read the above article).
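Purely as an illustration (an assumption on my part, since the exact format of these derivatives is not published), a "visual derivative" might be little more than a heavily downscaled copy generated on-device before upload:

```python
# Hypothetical sketch: producing a low-res "visual derivative" of a photo.
# The actual format is not documented; this only illustrates the idea of a
# small, low-detail copy that a reviewer could check for obvious mismatches.
from PIL import Image  # requires Pillow

def visual_derivative(path: str, out_path: str, max_px: int = 64) -> None:
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_px, max_px))         # keep aspect ratio, cap at 64x64
    img.save(out_path, "JPEG", quality=50)  # small, low-detail copy for review

# visual_derivative("photo.jpg", "photo_derivative.jpg")
```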

This is a fantastic point. I suppose Apple has some training slides lined up, but are they to be considered law enforcement now?

No, but a lot of this analysis is performed by authorized non-LE personnel in labs. They'll likely have a team of employees working as federal contractors since they're partnered with NCMEC and using their hash set as the basis for all this.

That'll never happen and Apple as well as everyone else will try to stop you. Most software is in a legal grey area where you license it but own the product that the software is running on. But the product won't work without the software so what's the point? Again, nothing legally tested and all you have to go by is an end user agreement.

Yeah, that's kind of my point. I think it's time we got that sorted out, otherwise we're all going to be buying stuff but never actually having ownership over it.
 
Now, where in that brief statement Apple put out did it say iCloud?
This feature only impacts users who have chosen to use iCloud Photos to store their photos. It does not impact users who have not chosen to use iCloud Photos.
https://www.apple.com/child-safety/...s_for_Children_Frequently_Asked_Questions.pdf

Plus, if this scan is only for what is going to iCloud, then why do it on the phone?
That's because Apple will (allegedly) no longer be able to read a picture once it is on the cloud; it is encrypted on the phone before the upload, making generation of the hash on the cloud impossible. The only way Apple is able to decrypt the picture is if the hash result attached to the picture is a valid key on their side to open it, which only happens if there was a match with the database (that is the reason not only the hash but also the match against the database is done on the phone).
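As a very rough sketch of that flow, under my own assumptions rather than Apple's published design (SHA-256 stands in for the perceptual hash, a plain dict stands in for the "safety voucher", and the real private set intersection and threshold mechanisms are omitted entirely):

```python
# Grossly simplified sketch of the on-device flow described above. Assumptions,
# not the real design: SHA-256 stands in for the perceptual hash, the "voucher"
# is a plain dict, and private set intersection / thresholds are omitted.
import hashlib
import secrets

KNOWN_HASH_DB = {"placeholder-hash-value"}  # on-device database (placeholder)

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real encryption, for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def perceptual_hash(image_bytes: bytes) -> str:
    # Stand-in only: a real perceptual hash tolerates resizing/re-encoding;
    # a cryptographic hash like SHA-256 does not.
    return hashlib.sha256(image_bytes).hexdigest()

def prepare_upload(image_bytes: bytes, device_key: bytes) -> dict:
    """Encrypt the photo for upload; attach a voucher only if the hash matches."""
    upload = {"photo": xor_encrypt(image_bytes, device_key)}
    h = perceptual_hash(image_bytes)
    if h in KNOWN_HASH_DB:
        # The voucher is what would let the server unlock a low-res derivative
        # for human review, and only after enough matches (threshold omitted).
        upload["voucher"] = {"hash": h, "key_share": secrets.token_hex(16)}
    return upload
```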


could all be done at the iCloud level
It could be, but that would make end-to-end encryption impossible and would mean Apple could see your pictures, which goes against their strong privacy motto. Considering that this is what they are doing now, from what I understand they want to change it to increase privacy.

when neither you nor Apple really know what they represent since they come from the government,
They come from a private agency that is somewhat merged with the FBI, but I am not sure they are all from the government. Regardless, like you said, an Apple employee does validate that they are CP before contacting the authorities, meaning they do know what they represent.

and which may be combined with machine learning's ability to analyze images for characteristics
They may, just like they may now, but allegedly that is not the case.

If what Apple put out is not true then what is?
Not sure what you mean by that question or what answer you are looking for.

You will have a phone that will analyze every picture you take (outgoing at least, if not all the time)
"Analyze" is a strong word. A hash is made by looking at how much pixels vary in intensity (usually in black and white) along lines, and blindly comparing that against another image's pixel intensity changes in a brute, completely dumb way. It is not an analysis of the type "that looks like skin, 70% of the body seems to be skin color, that looks like a sexual part," and so on. It is not looking for explicit material; it is looking for a match against a CP database. I am really not sure that a naked picture of you has more chance of triggering a false positive than a clothed one.

She prefers iPhones over Android.

Google, I think, does much more picture scanning than what Apple will be doing:
https://miami.cbslocal.com/2018/12/07/lauderdale-man-child-porn-google-photos/
More than two thousand images of child pornography were reportedly found on a Ft. Lauderdale man’s Google Photos account.
https://nypost.com/2017/05/26/man-busted-with-child-porn-after-uploading-pictures-to-google/
The agency called in the state police to examine thousands of graphic images Cole had apparently uploaded to a Google photo account

Apple was the last big company not doing it when they started in 2019-2020. Right now they are scanning your wife's pictures to look for CP if she puts them on the cloud; with that update they will stop scanning pictures server-side (they will not be able to open them on their side), and it will be done on her phone instead.
 
And that raises another implication, namely that the fact that this will be reviewed by a live human before the final sendoff is NOT any cause for assurance. In fact, to me this is actually more worrying. What if they get so mentally scarred that they stop actually checking the images and just pass false positives over to the police, simply assuming that the system is working as intended (supposing it has had a high enough hit rate thus far)? To me, the human factor raises more concerns than assurances. Without any real insight into how well the system itself is working, we can't even be sure that the first wave of checks is good enough to begin with....
The agency will, and after that the police will, and after that a judge (and/or jury) will look at them, no?

They will not take Apple's word for it.

When Microsoft does it:
https://casetext.com/case/united-states-v-bohannon-15

They contacted NCMEC, an analyst looked at the images and, finding them valid, contacted the police (here the San Francisco Police Department). A police sergeant verified the pictures himself and got a warrant; a California superior court magistrate judge issued the search warrant to look at all of the suspect's upload history. Child porn pictures with the suspect himself in them (compared to his driver's license ID) were found; law enforcement entered his house and seized all his devices to search them.

Is it really more worrying to you that an actual human looks at the picture at some point in the process than not? How does that not make it extremely more robust?
 
StoleMyOwnCar d3athf1sh noko blackmomba and everyone else

I see some speculation about who would be considered law enforcement and how the human element of this process could work. I don't have any insight into what Apple is doing, but I can share my own experience in the hopes it might clear up some misconceptions.

I spent nearly a decade working in a law enforcement agency's major crimes unit, performing various types of digital forensic analysis. This included things like murders and theft and also child abuse & suspected CSAM cases. As a forensic analyst I was not, myself, considered law enforcement personnel. Our team was authorized by our jurisdiction's legal authority to perform this analysis on behalf of the LE agency in support of their LE efforts.

I know a lot of TV shows make it out to seem like even the computer guy in the background has a gun and a badge and epic hand-to-hand combat skills, but this is not how it actually works.

As I mentioned above, Apple will most likely have a group of employees working on a federal contract to perform this review. They will be authorized to perform the limited review as outlined in their procedures and would not be considered law enforcement. And if all they have to look at are reports with metadata (like hash values) and "visual derivatives" then they are lucky.

LukeTbk I see why you have arrived at this conclusion - that what Apple is doing is no different than anyone else. I want to add some perspective here to really get into the meat of why I think the on-device scanning is a problem.

When I was assigned a case involving suspected CSAM we used the NCMEC hash set. It was a standard part of the analysis procedure, and NCMEC had a thorough process for adding image hashes to that database. Any image suspected of being CSAM, unless it was obtained from a known source (e.g. an actual producer of the material, with corroboration from victims), went through a review by a panel of physicians before it could be added to the database. At the time ML algorithms were not used for this analysis, so even a file that was obviously the same as a known image (like, at a different resolution) had to be reviewed before it could be added to the hash set.

My own experience has me assuming that with the advent of ML capabilities this process has gotten a bit quicker, but there is still a human factor required here specifically because the subject matter is so sensitive. For instance, an ML tool might say "these two images are the same" and then a human would perform final validation of it. I don't have any specific knowledge of how NCMEC does it these days, but what Apple has described seems like the best approach to using ML to add to the database while also trying to avoid false positives, so I think it is reasonable to assume NCMEC's processes are even more thorough.

The processes NCMEC had in play to add images to this hash set were so thoroughly vetted that in all the time I worked there I do not once recall anyone ever challenging it. It is highly regarded as an evidentiary tool by law enforcement, so anything that could happen to weaken that is highly unlikely.

So, analyzing a suspected CSAM case, we used the NCMEC hash set to scan for any known files. Any that were found were flagged and added to a report. It also required manual review of all other material on a system. We were explicitly not allowed to describe any images outside of the hash set as CSAM, that determination was left to NCMEC and their - again, very thorough - procedures.

If I was working any other type of case, we were not authorized to use the NCMEC hash set. Search orders / warrants were very clear about the purpose and scope of the investigation. A murder case couldn't just also be scanned with the NCMEC hash set. And if, in the course of my investigation, I stumbled upon suspected CSAM, all work had to stop. I had to notify my section chief. Investigators were informed. Then legal counsel - both prosecutors and defense, because even the slightest hint of a child abuse investigation could be ruinous to the person being investigated. Ultimately judges were contacted, arguments were made and we would receive a new search order with an expanded scope. Only then, once all orders were signed and our team had received the new documents and added them to the case file, could we utilize the NCMEC hash set.



I'm putting up this huge wall of text to try and illustrate my point: there is a very clear process, designed around the presumption of innocence and probable cause, that is followed to preserve the integrity of these investigations. I know LE isn't painted that way often, but the fact is this analysis and these reports had to be bulletproof to stand up in a court of law. Anything less than that could undermine all of the work people were doing to try and stop child abuse, and no one wants that.



Now, if Apple were doing server-side scanning, like Microsoft and Google and everyone else that has been mentioned, that's one thing. Those servers and storage are Apple's property and as I've mentioned before, I think it would be irresponsible of them to not scan what users upload for illegal material. But Apple has talked about how user files are encrypted before, and they can't view the contents. And some people think this is a prelude to Apple offering full end to end encryption in iCloud.

If, they say, Apple plans to offer end-to-end encryption, but they want to scan for known CSAM, then the only way to do that would be to scan prior to the photo being uploaded. And the only way to scan the file, if encrypted, would be to do it on the device where it resides. So, if the photo is going up to iCloud have the iOS device scan it and compare its results to the database it has, and if there's a concern it uploads the "ticket" to Apple. This, they say, protects the user's privacy and data security while also providing the service of trying to identify and stop the proliferation of known CSAM.

It all sounds good and reasonable when you look at it that way. It is, as you LukeTbk have pointed out, effectively the same as if Apple were doing the scan server side. But, Apple's plans to use on-device scanning are problematic to me for two reasons.

1) Apple is placing this database on personal property without permission from the user or authorization from a judicial entity, which should require probable cause, to do so. Because the scan is taking place on private property - regardless of the "trigger" being an upload to iCloud - there should be legal backing for it to happen.

2) By placing it on every iOS device they can without probable cause, the system is built on the assumption that there's CSAM out there for them to find. That means they are assuming someone, somewhere - not everyone, just someone - is already guilty of possession of CSAM. And we do not assume guilt first (aside from the court of public opinion).

Again, all of this flies out the window if the scan was happening server-side, where Apple is scanning their own property. There's no reason currently that Apple needs to do it this way, which is why some are assuming that full end to end encryption is coming.

But if the tradeoff for end to end encryption is that Apple gets to assume someone is guilty, and place a hash set on every iOS device to try and find said guilty person(s), that to me seems like an end run around both probable cause and the presumption of innocence. Do not like.
 
I do not see the end-to-end encryption mentioned as being real or secure if images from your phone can be viewed. And if they are sent encrypted from your phone to iCloud, that would mean they are not viewable. So what process is used to view your images that were flagged?

The whole idea that there is end-to-end encryption yet your photos can be viewed with zero input from you is a backdoor to them.

For them to be viewed means they have to be copied and sent from your private phone to someplace else. How would any chain of custody be ensured? How do we know photos were not manipulated, changed, etc.? How could any law enforcement agency even accept them to begin with? To me, a separate investigation and a court order to pry into your phone would be needed. All would be free game now with the backdoor provided.
 
I can understand arguments about Apple being a private company and not beholden to laws that were intended to protect against the government (for example, the 1st Amendment, which is commonly discussed when platforms censor users or content).

However, if Apple is just going to basically scan your whole phone and then (through a third party or not) notify the government, this seems to clearly violate the 4th Amendment regarding unreasonable search and seizure.

Just cause Apple is a private company does not make them above the law, especially with regards to wiping their ass with The Constitution.
 
https://www.reuters.com/technology/...park-concern-within-its-own-ranks-2021-08-12/

Not surprisingly, some Apple employees are uncomfortable with this too. Although it seems a lot of people are more concerned with the “slippery slope” potential than the sidestepping of legal process.

The article sums it up nicely, and “they” in the below statement refers to arguments from “policy groups” like the EFF and the CDT.

They say that while the U.S. government can't legally scan wide swaths of household equipment for contraband or make others do so, Apple is doing it voluntarily, with potentially dire consequences.


I do not see the end-to-end encryption mentioned as being real or secure if images from your phone can be viewed. And if they are sent encrypted from your phone to iCloud, that would mean they are not viewable. So what process is used to view your images that were flagged?
They get a "visual derivative" of the image that goes through, seemingly, at least two levels of checks - once at Apple, and if the analyst there determines it needs to be sent on, once at NCMEC. I'm just making complete assumptions here based on past experience, but I assume at some point in the process someone would make a determination that a legal investigation of your files is required, instead of whatever this is, and that's when they'd go and get a warrant, using the probable cause from the unauthorized investigation they'd just conducted.

And of course, that's all after they have already started investigating you, without permission or authorization, and Apple has possibly frozen your account on the assumption that you're guilty, depriving you of things like the ability to communicate using your own personal property (I believe an Apple account freeze would result in a complete lockout of your mobile devices).

The whole idea that there is end-to-end encryption yet your photos can be viewed with zero input from you is a backdoor to them.

It would allow for end to end encryption, because the files are encrypted on-device, but the device itself has access to them (otherwise you wouldn't be able to view your own photos). So the device can do all of the above without a decrypted copy of the file sitting on Apple's servers. It's encrypted everywhere, but because you need to be able to see your own photos, this system, coincidentally, would also be able to see them.


For them to be viewed means they have to be copied and sent from your private phone to someplace else. How would any chain of custody be ensured? How do we know photos were not manipulated, changed, etc.? How could any law enforcement agency even accept them to begin with? To me, a separate investigation and a court order to pry into your phone would be needed. All would be free game now with the backdoor provided.

With digital evidence, hash values - actual hashes, not fuzzy ML-based fingerprints - are used to show chain of custody. I think SHA256 is what's used today most of the time, since older algorithms have been shown to be unreliable (MD5, for instance, which a lot of people still think is reliable and is still in regular use, was actually determined to be broken way back in 2008).

So once they have the warrant that says they can do an actual legal investigation, they would seize your phone, get the data off of it (assuming they can) and generate the actual hash values for the photos. If they match what's in the database, they can show that the images on the phone match the database. If the images are visual matches but not exact digital matches, that would be when they rely on human review.
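The evidentiary hashing described here is straightforward in practice; here is a minimal sketch (Python standard library only, with a hypothetical file path) of computing a SHA-256 value for a seized file so it can be recorded and re-verified later:

```python
# Minimal sketch: compute the SHA-256 hash of an evidence file so the same value
# can be recorded at seizure and re-checked later to show the file is unchanged.
import hashlib

def sha256_of_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

# recorded = sha256_of_file("evidence/IMG_0001.jpg")   # hypothetical path
# later:  assert sha256_of_file("evidence/IMG_0001.jpg") == recorded
```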

However, if Apple is just going to basically scan your whole phone and then (through a third party or not) notify the government, this seems to clearly violate the 4th Amendment regarding unreasonable search and seizure.

While I'm not at all a constitutional law expert I do agree with your interpretation. However, I think it's important to point out that they're not scanning your whole phone. The concern is that they could use this system to do that in the future.

The whole system was designed with "privacy" in mind and - if you ignore the major issues we have both brought up - it does that pretty well:

1) It only scans photos that are being uploaded directly to iCloud (it does not appear to scan iCloud backups)
2) Because the database is stored on the phone, it performs the comparison on-device
3) Because the comparison is done on the device, it only sends reports on files it considers matches - Apple doesn't get a report on every single file, like they would if they were doing all the scanning server-side
4) Because "visual derivatives" are used, no one actually sees the contents of the photos until an actual legal search order has been issued and law enforcement comes knocking on your door

Like I said, it all sounds reasonable. And if we set aside the "slippery slope" arguments for a moment and assume (just for the purposes of this one sentence) that the system won't ever be abused by a government, it's a pretty nice design for a system that can both protect user privacy and scan for contraband.

But that 1) is a huge assumption and 2) still doesn't answer where's the legal authority that says they can do this on personal property.
 
It depends on whether Apple is acting as an agent of the government or just as a company reporting crime of its own volition. See:

https://www.eff.org/deeplinks/2017/...urth-Amendment-Safeguards-by-using-Geek-Squad
I would argue that just because you're not associated with the government does not give you the right to go into someone's home, phone, car, etc., violating their privacy. I would also argue your content, digital or physical, thoughts on paper, images, etc., is property owned by you, and taking it without your permission, especially if you don't know about it (which denies you sufficient time to counter or defend yourself), should be illegal and prosecuted. I can see why Apple prefers encryption of everything on iCloud, which makes them 100% not liable for any content: they hold that content with no reasonable way to expose anything to the government, making any search unenforceable from a legal standpoint. The bottom line is your phone is basically open, a backdoor, regardless of the encryption used to keep Apple from legal consequences. Apple encryption is to protect Apple, not you.
 
Frankly I regret buying my wife an iPhone 12 now and this shit upcoming. She prefers iPhones over Android.
Ditto. I bought her one as well and am trying to get her to switch to our DropBox account, but she "likes her iCloud." After this?!

Apple is not the benevolent tech corporation that people think they are. Apple has been giving the FBI access to users iCloud accounts, but that story didn't get covered well. Big tech is not to be trusted.
 
Ditto. I bought her one as well and am trying to get her to switch to our DropBox account, but she "likes her iCloud." After this?!

Apple is not the benevolent tech corporation that people think they are. Apple has been giving the FBI access to users iCloud accounts, but that story didn't get covered well. Big tech is not to be trusted.
You think dropbox isn't doing that? If you're using an Apple product it makes zero sense to use anything but iCloud unless you can get more storage for the money elsewhere, and you need A LOT of extra storage.
 
Ditto. I bought her one as well and am trying to get her to switch to our DropBox account, but she "likes her iCloud." After this?!
Can I ask why?
Dropbox will "look" ("look" probably not being the right word for a hash comparison against a database) at her pictures on Dropbox; after this update, Apple will stop being able to do that on her iCloud.

https://arstechnica.com/tech-policy...llector-and-a-chess-club-stopped-his-rampage/
https://www.publicopiniononline.com...nt-police-say/626862001/?cookies=&from=global
A Chambersburg man faces charges after online media storage company Dropbox notified police of child pornography files allegedly uploaded to his account.

https://archive.triblive.com/news/north-charleroi-man-jailed-for-child-porn/#axzz3i4btbpss
The investigation began in April, when the National Center for Missing and Exploited Children notified the attorney general's office that a report initiated by Dropbox.com that an account was flagged for uploading several media files in January.
 