Apple will remotely install software to scan all US phones for child sex abuse images

Snip

Like I said, it all sounds reasonable. And if we set aside the "slippery slope" arguments for a moment and assume (just for the purposes of this one sentence) that the system won't ever be abused by a government, it's a pretty nice design for a system that can both protect user privacy and scan for contraband.

But that 1) is a huge assumption and 2) still doesn't answer where the legal authority comes from that says they can do this on personal property.
I appreciate your time and contribution to this discussion. It certainly informed my thinking on this important topic for the better. As an "old" my perspective on technology has shifted over the years as the industry that one sees in media no longer considers me the target market.

Putting aside all the think-of-the-children and slippery slope arguments for a moment: I feel that this raises a larger question, one that has been ill defined and has outpaced the current legal frameworks of countries for a while now. That question being: what constitutes digital personhood? After all, we all carry what amounts to a collective life experience in our societal pockets (I'm personally looking forward to getting a new foldable so I can actually put a phone in my tiny front pockets again, which I only bring up to say that I'm not a tech troglodyte and we shouldn't go around flinging our smartphones out with the bathwater). What is life, really, but a collection of memories, photos, communication exchanges, documents, numbers that signify our value to the state, etc.? At what point does the collection of all of our digital ephemera acquire some sort of value? It is a safe bet that marketers have placed a value on it.

That brings us around to my premise: Who owns the data footprint that is you? Because if the data in your pocket is you then regardless of the reason we should be advocating to have more control (and right to repair) and privacy (right to forget) over that data. Otherwise, any computer, including the data within, is just a rental. We are renting our digital lives and -insert company here- is "generously" leasing it back to us.

How should we start thinking about technology that we don't really own like smartphones that carry a digital version of our lives?

I see this as a canary for the work being laid down that will impact the generations that are coming up now. It won't be a detriment to me, personally, but what about 10, 15, 25 years from now? Young adults are being inundated with the idea that technology is only an appliance for social capital, everything is a subscription, and one doesn't "own" anything. Take that to its logical extreme if you really want to have nightmares.

Also, to be clear, I think we should have the legal and ethical expectation that companies and agencies will be scanning for things on the cloud. The cloud is the junk food of our time: we know we shouldn't rely on it and overindulge, but it's just so damn delicious (and convenient). On-device "blackbox" operations and functions, however, I'm gonna say hard pass, for me at least.
 
... First, the Fourth Amendment does not apply to Microsoft's search, as Microsoft was not acting as a government entity or agent. ...

This looks, to my admittedly inexpert eye, like a serious loophole. If Microsoft browses through one's files (whether by algorithm or not) and passes on information to the government about (potentially) criminal content, it is acting close enough to a government entity that it makes no difference. Otherwise you are serving the government the instruments of repression on a silver platter.

"Enough about that $10 billion contract. Now for something completely unrelated ... <hint, hint, nudge, nudge, wink, wink>"
 
...

That brings us around to my premise: Who owns the data footprint that is you? Because if the data in your pocket is you then regardless of the reason we should be advocating to have more control (and right to repair) and privacy (right to forget) over that data. Otherwise, any computer, including the data within, is just a rental. We are renting our digital lives and -insert company here- is "generously" leasing it back to us.

...
This is actually covered under copyright law: if you take an image, you have certain rights as the originator (when not under contract, etc.). No one can use it, copy it, etc. unless they have permission from you or it is in the public domain. Since Apple is not law enforcement, with no due process for a warrant, viewing any images that you created would violate copyright law if they did not receive permission and the images were not in the public domain. Verify with the statement below:
https://www.legalzoom.com/articles/how-to-copyright-a-photograph-or-image

Does Apple going through your images using ML violate copyright law? A person or persons, or Apple, created the software, which by extension probes or looks at your images. If I built a robot which takes the hubcaps off your car, would I be free of any crime, since I did not remove them but the machine I built did? The popup warning is supposed to go no further than your phone (and the receiving end, if it is also an iPhone) when it thinks it finds explicit material. I think just the warnings, judgement, etc. are a form of censorship of free speech and expression, but one has to decide for themselves.

My wife and I may want to do something very impactful, explicit, for whatever reason - gag, comedy, drama, whatever - for our private relationship. You know: guns, cuffs, blood on walls, nudity, yeah, artistic stuff; maybe recreate a crime scene from a case or movie, experiment, and of course take plenty of images with that new camera on the iPhone. We could really get into the story: what happened, the horror, how one could have felt -> WHATEVER. Of course the Apple phone ML will probably melt and blow up trying to figure out WTF. Then I send the image to my wife's iPhone, or she sends it to me, and the popup (if it did not break) judges our actions as explicit. Now if this goes beyond our bedroom, that is a very serious problem for freedom: freedom to experiment while not hurting anyone or violating any laws, freedom of privacy. One of my takes: this is also a form of censorship, censorship for those receiving the images you are sending - interfering, labeling, delaying. I would expect the feature to have an opt-in and opt-out, but that does not appear to be the case.
 
noko, you are conflating the automatic scanning feature, which happens when images are uploaded to iCloud and compares them to a database of known bad images, with the “explicit image” scanning feature of iMessage.

I’ve seen this a lot and I see how the two are being confused - Apple even released a statement saying something like “in hindsight maybe we shouldn’t have announced these two things at the same time”.

the iCloud known-bad ML comparison scan happens at upload time and cannot be avoided unless you choose not to use iCloud photo uploads. In this case, the “opt out” is choosing not to use iCloud for photo backups.

I am pretty sure the default is for this to be turned off, so it isn’t technically opting anyone in by default. The capability is still there, though. As is the database.



the iMessage scanning feature explicitly requires you to opt-in, and beyond that it is only something that can be enabled on the “child” accounts of a family account. Further, reports will only go to the designated “parents” on the family account. So the generic ML it does to determine whether or not an image is explicit never goes beyond your family account, assuming you even turn it on in the first place.


I do not like the iCloud feature, for reasons I have stated above.

But as someone who has small children that will eventually grow up and get phones, and as someone who has worked these types of cases and seen how child predators use things like social media and text messages to “befriend”, send and solicit explicit imagery, I am a big fan of an opt-in iMessages feature that I have full control over.

personally I hope the iMessage feature gets widely used by parents. i think it will ultimately help to catch a lot more predators than the iCloud function.

(although I expect the iCloud function will snag more than a few. Everyone is saying “oh they will just go somewhere else” but trust me, there are plenty of “dumb criminals” out there).
 
This is one of the arguments for UBI, in that people generate wealth for corporations by using their products. We cannot stop Apple from collecting data by posting on a forum, but we can agree to take a portion of their generated wealth and give it back to the people they are benefiting from, through politics of course. Capital is socially produced, and therefore companies like Apple, which willingly or unwillingly admit that they are collecting data and selling it, should pay into this UBI. Socially produced capital, but the profits are theirs to keep. Not just Apple of course, but Google, Microsoft, etc. It doesn't matter what you say about collecting data and selling it, because we're going to assume you do if your product phones home at any point; then you pay into this UBI. We should do it based on their stock value. It won't deter these companies but at least this is a way of selling our data to these companies and getting paid for it. Why should these companies take our data and use it however they want, including for profits? It's time for Apple to take some of that offshore money and give back to the people.
 
Why should these companies take our data and use it however they want, including for profits?

Because that is their business model and you sign your rights away by using their products. If you do not like this business model, do not do business with these companies and petition your government to pass legislation protecting consumer data.
 
This looks, to my admittedly inexpert eye, like a serious loophole. If Microsoft browses through one's files (whether by algorithm or not) and passes on information to the government about (potentially) criminal content, it is acting close enough to a government entity that it makes no difference. Otherwise you are serving the government the instruments of repression on a silver platter.

"Enough about that $10 billion contract. Now for something completely unrelated ... <hint, hint, nudge, nudge, wink, wink>"

It's a big contract, but for a company like Facebook, pleasing the government in order to keep that nice special status of being neither a private publisher nor quite a public square - a status that seems essential to their business model - makes it a company that will want to please the government a lot. That could indirectly make them do things they would not want to do were it not for the goal of pleasing the government, which softly makes them an indirect government agent.

It depends how it plays out. One element that makes it play out like that is that once Microsoft learns about the CP content, they are obligated by law to tell the authorities, unlike with almost any other crime (if not all).

We should do it based on their stock value. It won't deter these companies but at least this is a way of selling our data to these companies and getting paid for it.
If we stipulate that there are a lot of revenues being made, they already do that via taxation, I would imagine (if they are profitable, like they are now).

Without counting the tax paid on the sales of their products, real estate properties, business partners, and employee and stockholder income, Apple pays more than $10 billion in income tax every year.

As for bringing offshore money back: since the 2018 corporate tax harmonization with the rest of the OECD and the one-time bring-your-money-back deal, they have done it quite a bit, I think:
https://dazeinfo.com/2019/09/27/apple-cash-reserves-growth-by-quarter-graphfarm/

At a faster rate than it grew; they probably brought back something in the $150 billion range, I would imagine a lot of it in the form of buybacks.

What could change with a different model, where we turn around and pay the generator of the data directly, is that instead of going to everyone via taxes, it would go to the data creators themselves, creating an incentive for people to disclose a maximum of data, which does sound like a double-edged sword.
 
noko, you are conflating the automatic scanning feature, which happens when images are uploaded to iCloud and compares them to a database of known bad images, with the “explicit image” scanning feature of iMessage.

...

I am pretty sure the default is for this to be turned off, so it isn’t technically opting anyone in by default. The capability is still there, though. As is the database.

the iMessage scanning feature explicitly requires you to opt-in, and beyond that it is only something that can be enabled on the “child” accounts of a family account. Further, reports will only go to the designated “parents” on the family account. So the generic ML it does to determine whether or not an image is explicit never goes beyond your family account, assuming you even turn it on in the first place.

...
I don't think so, and I do hope there is an opt-in for the ML. I agree that parents having a safety process protecting kids is very important; having control over a kid's contact list, as in who can actually message them, with any new contact having to be authorized by the parents first, could eliminate a lot of the crap that can happen. Also control over whether parents can see messages, or have them copied to them automatically (that would really suck for the kids if it were the case, but for extreme cases maybe OK; I know I would not ever want to do that, blatantly telling my kids I do not trust them before they break that trust). This would prevent kids randomly or inadvertently getting a creep as a contact.

It is very obvious: if Apple can actually view an image on your phone, then there is a backdoor to it. All the encryption to and from iCloud is not protecting you from that. Anyone - any group or government - that can get to that access will have complete visibility into your phone, while Apple could deny any wrongdoing, since they can claim no knowledge of it. The encryption of all content on iCloud makes Apple totally free from liability or any wrongdoing, since it is not visible to them and cannot be acted on by law. But what can be acted on is your open phone, which is not encrypted. Apple is making the user the focus of the law while Apple surrounds themselves with a walled garden.
 
I don't think so, and I do hope there is an opt-in for the ML.

The “opt in” for the iCloud scan is using iCloud. I would be curious if Apple puts some kind of notification in place when iCloud photos are enabled.

The capability probably exists for a passive scan (done without “opting in” by using iCloud) but at this point the system is not doing that. This is where the real concern about a backdoor should be.

the iMessages feature is one you have to explicitly enable with a family account and only works on accounts assigned to a minor.

This has been very clear in numerous articles and apple’s public documentation. I think it is being ignored by a lot of people though.

It is very obvious, if Apple can actually view an image on your phone then there is a backdoor to it.

They can’t view the images on your phone though. Even if an image is flagged during iCloud upload and someone needs to review it, they get a “visual derivative” of the image, not the actual image.

I see the point you’re trying to make, but I think it’s crucially important to have a thorough understanding of how Apple is saying it’s going to work. Without that it’s hard to separate real potential issues from ones invented through misunderstandings (or conflating the two entirely separate systems).
 
Can I ask why?
Dropbox will "look" ("look" probably being the wrong word for a hash comparison against a database) at her pictures on Dropbox; after this update, Apple will stop being able to do that on her iCloud.

https://arstechnica.com/tech-policy...llector-and-a-chess-club-stopped-his-rampage/
https://www.publicopiniononline.com...nt-police-say/626862001/?cookies=&from=global
A Chambersburg man faces charges after online media storage company Dropbox notified police of child pornography files allegedly uploaded to his account.

https://archive.triblive.com/news/north-charleroi-man-jailed-for-child-porn/#axzz3i4btbpss
The investigation began in April, when the National Center for Missing and Exploited Children notified the attorney general's office that a report initiated by Dropbox.com that an account was flagged for uploading several media files in January.
Wow. Thanks for the info! I was concerned about DropBox using Amazon servers, but had not heard about this. Looks like I will be dropping dropbox.

I will still definitely be asking my wife not to upload photos to iCloud. Not sure what update you are referring to.
This is one of the arguments for UBI, in that people generate wealth for corporations by using their products. ...
UBI = terrible idea. Failed wherever it has been tried. You can't legislate income. What ends up happening is corporations will still make the same amount of money by passing those taxes on through increased prices, people receiving UBI will be disincentivized to work (like the endless unemployment benefits are doing now) and prices for everything will go up. That's part of why all these retail establishments can't find workers and inflation is going through the roof. Anyone not receiving UBI (like the middle class) will suffer even more and have to work even harder. Tired of having to work harder to pay for others to sit at home.
 
...

It is very obvious: if Apple can actually view an image on your phone, then there is a backdoor to it. All the encryption to and from iCloud is not protecting you from that....
The obvious backdoor is that Apple (and Microsoft and Google on Windows and Android) have full control of what gets onto your device. This is already abused heavily by them installing crapware/telemetry. What is new here is that Apple has decided that they should become an extension of law enforcement.

This is an extremely worrying development in a world where we should be reducing the amount of software with non-user interests that gets installed on your device, and where many companies, also outside of the phone/computer industries, follow Apple's lead.
 
...

They can’t view the images on your phone though. Even if an image is flagged during iCloud upload and someone needs to review it, they get a “visual derivative” of the image, not the actual image.

...
Thanks for the explanation, but a “visual derivative” from your device??? What does that mean?
 
The obvious backdoor is that Apple (and Microsoft and Google on Windows and Android) have full control of what gets onto your device. This is already abused heavily by them installing crapware/telemetry. What is new here is that Apple has decided that they should become an extension of law enforcement.

This is an extremely worrying development in a world where we should be reducing the amount of software with non-user interests that gets installed on your device, and where many companies, also outside of the phone/computer industries, follow Apple's lead.
Probably best to ditch the smartphone if privacy is very important to one. But what does one do when your company pretty much mandates that you use their phone? Currently banking, coupons, maps, useful apps and so on are part of the current culture -> when does it become mandatory or vital to have a smartphone? That is probably now for a number of folks who really do not have a good choice due to employment/location/government requirements, etc.

I think terms of service agreements/contracts that are virtually incomprehensible to the majority of folks - long-winded, small fonts, etc. - are abused to a great extent to extract data from you, from images, documents, etc. If your audience is kids, how can that even be a legal agreement? A quick click for a minor to get access? They're not legal, and are voidable. Not only that, you may have hundreds of service agreements that, well, you agreed to. I would say most people will not be able to discern what agreements they have already committed to across all the sites/applications/hardware they use and so on.

It has gone beyond what is reasonable to expect: many cannot even understand and go by the Terms of Service, when understanding and remembering them all is not possible. As an example, I have no clue about all the different terms for Microsoft 365; how would I even be able to memorize even a fraction of them? Now, Microsoft's is one of the best, most readable versions I've seen - at least they appear to be trying - but there is no way I will be able to remember all of this:
https://www.microsoft.com/en-us/servicesagreement

Combine that with hundreds of others from banks, online retail stores, and online platforms from gaming to streaming that are not so friendly to understand. I can only use common sense. Thousands of pages of legalese terms, probably filling up volumes if all combined. What is my point -> this is an utter mess, a pretentious way to do business and interact. And then these are updated often.

What would be some thoughts on a better solution? In Apple's case: anytime your images are scanned, for whatever reason, you are immediately notified what that scan consists of, via a popup or notification with links to a detailed log, a list of each image scanned, and maybe a link to the result. Anytime any data dealing with your content, your identity, or your history is released, you are told what data was taken and where it was sent. You are told exactly what was found to be explicit, with a way to give feedback, and shown all images that are sent off to be evaluated, so that you can also give input and/or get legal advice, etc. There can be many circumstances where the owner of a phone is not responsible for what ends up on it, from loaning out the phone, even briefly, to malicious control by a 3rd party. Presuming guilt and shutting down the owner's phone, restricting the owner from their own content, which may be vital for medical reasons, business reasons, or relationships... the list is endless and can be very damaging to an utterly innocent person.

This is too long-winded; real solution -> don't get an iPhone or get into the Apple eco-enslavement-system :(
 
I will still definitely be asking my wife not to upload photos to iCloud. Not sure what update you are referring to.
The announced update this thread is about. The new update is needed because Apple will lose the ability to "look" at people's pictures on their iCloud. At the moment they scan people's pictures on iCloud; with the update, the scanning will need to be transferred onto people's devices, as Apple will lose the ability because of full end-to-end encryption, even from them. The ability for Apple to decrypt and open a picture will be linked to the key attached to it, which will only exist if it matched a registered photoDNA hash database on your local device.
 
The announced update this thread is about. The new update is needed because Apple will lose the ability to "look" at people's pictures on their iCloud. At the moment they scan people's pictures on iCloud; with the update, the scanning will need to be transferred onto people's devices, as Apple will lose the ability because of full end-to-end encryption, even from them. The ability for Apple to decrypt and open a picture will be linked to the key attached to it, which will only exist if it matched a registered photoDNA hash database on your local device.
Let's see, which is better: Apple scanning pictures people send to iCloud, or Apple scanning your pictures on your iPhone - on all iPhones? There has to be a backdoor allowing Apple to view an image on your phone, and if that backdoor becomes available to 3rd parties, bad actors, etc., I would say your phone is about as secure as Swiss cheese.
 
I have a question I need to get out of the way.
Let's start with the foundational assumption:
- This thing is scanning pictures on your phone and creating a match against a database on your phone. Arguably, this is "justified" by those license agreements (that probably wouldn't hold up in court in a real lawsuit) that you probably unsuspectingly signed your rights away in.

Supposing that's true:
- Does this actually upload, without your consent, any suspected matches, from your phone, even if you're not using iCloud at all?

Because I was under the assumption initially that, even if the scan was happening, nothing was uploaded anywhere unless you actually used the cloud services or tried to send the images in some way. That much was fine (well no, it's not fine that you're wasting my phone's CPU and battery on an absolutely dubious pretext, but let's put that aside). I never use all of these stupid cloud services anyway because I don't trust them. If it's just doing all of this behind my back and then also UPLOADING behind my back... then yeah, my iPhone is a 100% ditch at this point. Either that or perhaps I'll just refuse to update it at all, which, thankfully, is easy. I guess I'm staying on 14.6 for the rest of my time with this thing, at least till I switch back to Android. Frankly it works great and I don't need any of those newfangled features anyway. Just browse the damn internet, use Discord, and call people. Occasionally, shitty mobile games.
 
...

Supposing that's true:
- Does this actually upload, without your consent, any suspected matches, from your phone, even if you're not using iCloud at all?
...
This is not relevant, similarly to airport security having no business in your house, even if they pinky-promise not to report anything if it turns out you're not taking any flights.
 
Thanks for the explanation, but a “visual derivative” from your device??? What does that mean?

hmm. Have you actually read any of the articles about these new capabilities?

because the questions you are asking and the suppositions you are making have all been clearly explained in both Apple’s documentation and various write ups on it.

https://daringfireball.net/2021/08/apple_child_safety_initiatives_slippery_slope

This one does a good job explaining the two new functions and what they can or can’t do.

this one also does a good job clarifying a lot of points that have caused confusion:

https://techcrunch.com/2021/08/10/i...abuse-detection-and-messages-safety-features/

Let's see, which is better: Apple scanning pictures people send to iCloud, or Apple scanning your pictures on your iPhone - on all iPhones? There has to be a backdoor allowing Apple to view an image on your phone, and if that backdoor becomes available to 3rd parties, bad actors, etc., I would say your phone is about as secure as Swiss cheese.

again, it is not a backdoor that allows Apple to view your images.

it is a backdoor that allows them to scan and fingerprint them if you’ve chosen to upload your photos to iCloud. And that’s where the concern about a backdoor exists.

The logic goes: if they can scan and fingerprint photos prior to upload, the concern is that later, if they received a request from a government, they could scan and fingerprint photos without your permission and without you “opting in” by enabling iCloud photos. And then they could scan for whatever other material the government wanted them to.

This is what Apple says they will not do, but it seems to me to be a realistic concern.

I have a question I need to get out of the way.
Let's start with the foundational assumption:
- This thing is scanning pictures on your phone and creating a match against a database on your phone. Arguably, this is "justified" by those license agreements (that probably wouldn't hold up in court in a real lawsuit) that you probably unsuspectingly signed your rights away in.

Supposing that's true:
- Does this actually upload, without your consent, any suspected matches, from your phone, even if you're not using iCloud at all?

No, it does not do anything if you aren’t using iCloud photos. It does not even perform the scan until upload time, so as of right now, if you have iCloud photos disabled, this update “effectively” doesn’t change anything about how your phone works.

The possibility exists for it to do so in the future, however. And all we have to go on that this won’t happen is… Apple promising they won’t expand the scope of the system.

that’s the troubling part.
 
...

But that 1) is a huge assumption and 2) still doesn't answer where the legal authority comes from that says they can do this on personal property.
I'm sure they've already had their lawyers look into it.
Apple promising they won’t expand the scope of the system.
where did you see that^^ they said they WILL be expanding the system in the future to look into things like images or text relating to terrorism, or anti-government protest! (as quoted by Louis Rossmann as he was reading the Apple press release)
btw, did you guys see the latest terrorism warning on NBC Nightly News a couple nights ago?
 
Because that is their business model and you sign your rights away by using their products.
You mean those agreements where they have you click "agree"? Those are about as legally binding as the stickers they put on that say "removing will void warranty". Also, it's a bad business model when you depend on people's charity with their personal data. By charity I mean you take it without anyone's permission. Try walking up to someone and asking for their phone to see if they have any photos of naked children. The person will beat the crap out of you before you could ever reach for their phone, and this includes grandma.
If you do not like this business model, do not do business with these companies
Good luck with that when most phones, if they aren't doing it now, will be doing it soon. Whatever evil Apple does and gets away with is like open season for other companies. If you work for a company and they require you to use an iPhone, you're screwed.
and petition your government to pass legislation protecting consumer data.
Without public access to source code, nobody is going to know what is or isn't happening in the OS. Make any company whose product sends any packet of data back to one of their servers pay into a UBI. It's unfair, but last I checked Apple can afford to pay. Every company I know that collects telemetry data is making big profits, so have them pay. No legislation is going to stop them just because we said so. How long did it take before anyone knew that VW was cheating on their vehicle emissions with clever software? You can't just tell companies what to do and expect them to comply, but you can make them pay money.
 
Good luck with that when most phones, if they aren't doing it now, will be doing it soon. Whatever evil Apple does and gets away with is like open season for other companies. If you work for a company and they require you to use an iPhone, you're screwed.
This is why I can't help smirk a bit when people claim they'll switch to Android over this. Google already scans for this content in the cloud, and it wouldn't at all be surprising if Google followed suit with an on-device check before you upload to Google Drive or other cloud services. It's like trying to claim counterculture rebel status by swearing off Top 40 radio in favor of American Idol... you're not really breaking free, you're just trading one for another. And an iPhone user who switches off iCloud Photo Library has more of those protections than an Android user who syncs with Google Photos.
 
This is why I can't help smirk a bit when people claim they'll switch to Android over this. Google already scans for this content in the cloud, and it wouldn't at all be surprising if Google followed suit with an on-device check before you upload to Google Drive or other cloud services. ...

The difference, though, is that most Android phones allow for an SD card, whereas with iPhones your only option for extra storage is iCloud or backing up to a PC. MOST users use the cloud and are ignorant of the latter.
 
where did you see that^^ they said they WILL be expanding the system in the future to look into things like images or text relating to terrorism, or anti-government protest! (as quoted by Louis Rossmann as he was reading the Apple press release)

I saw it here, direct from Apple.

https://www.apple.com/child-safety/...s_for_Children_Frequently_Asked_Questions.pdf

Like I said, they claim it won’t be expanded. It’s so clear that it would be intentional misinformation to say otherwise.

(whether or not you believe them is another question entirely, but they are very clear that, as it stands, they will not expand the system’s scope).

specifically:

Can the CSAM detection system in iCloud Photos be used to detect things other than CSAM?
Our process is designed to prevent that from happening. CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations. This set of image hashes is based on images acquired and validated to be CSAM by at least two child safety organizations. There is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. As a result, the system is only designed to report photos that are known CSAM in iCloud Photos. In most countries, including the United States, simply possessing these images is a crime and Apple is obligated to report any instances we learn of to the appropriate authorities.

Could governments force Apple to add non-CSAM images to the hash list?
No. Apple would refuse such demands and our system has been designed to prevent that from happening. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. The set of image hashes used for matching are from known, existing images of CSAM and only contains entries that were independently submitted by two or more child safety organizations operating in separate sovereign jurisdictions. Apple does not add to the set of known CSAM image hashes, and the system is designed to be auditable. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under this design. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system identifies photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.

We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.

Can non-CSAM images be “injected” into the system to identify accounts for things other than CSAM?
Our process is designed to prevent that from happening. The set of image hashes used for matching are from known, existing images of CSAM that have been acquired and validated by at least two child safety organizations. Apple does not add to the set of known CSAM image hashes. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under our design. Finally, there is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. In the unlikely event of the system identifying images that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.

Will CSAM detection in iCloud Photos falsely report innocent people to law enforcement?
No. The system is designed to be very accurate, and the likelihood that the system would incorrectly identify any given account is less than one in one trillion per year. In addition, any time an account is identified by the system, Apple conducts human review before making a report to NCMEC. As a result, system errors or attacks will not result in innocent people being reported to NCMEC.
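To get a feel for how a per-account number that small can fall out of a per-image matching process: if each photo has some tiny, independent chance of falsely matching the hash set, and nothing about an account can even be acted on until several independent matches pile up (the technical summary quoted further down the thread describes exactly such a threshold on the safety vouchers), the per-account probability collapses very quickly. A back-of-envelope sketch; the per-image false-match rate, photo count, and thresholds below are made-up numbers for illustration, not Apple's actual parameters:

```python
# Back-of-envelope only: every number here is an assumption for illustration,
# not a published Apple parameter.
from math import exp

def poisson_tail(lam: float, threshold: int, extra_terms: int = 200) -> float:
    """P(X >= threshold) for X ~ Poisson(lam), with pmf terms built iteratively
    (term_k = term_{k-1} * lam / k) so no huge factorials are materialized."""
    term = exp(-lam)                  # pmf at k = 0
    for k in range(1, threshold + 1):
        term *= lam / k               # term is now pmf at k
    tail, k = 0.0, threshold
    while term > 0.0 and k < threshold + extra_terms:
        tail += term
        k += 1
        term *= lam / k
    return tail

if __name__ == "__main__":
    n_photos = 10_000         # photos an account uploads per year (assumed)
    p_false = 1e-6            # per-image false-match rate against the hash set (assumed)
    lam = n_photos * p_false  # expected false matches per account per year (~0.01 here)
    for t in (1, 5, 10, 30):
        print(f"need {t:>2} matches: P(account falsely flagged) ~ {poisson_tail(lam, t):.2e}")
```

With those assumed numbers an account averages about 0.01 false matches a year, so requiring even five independent matches already pushes the per-account odds below one in a trillion; presumably arithmetic of that general shape is what sits behind Apple's figure.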
 
Like I said, they claim it won’t be expanded. It’s so clear that it would be intentional misinformation to say otherwise.
right after they said IT WOULD in the ORIGINAL press release?! clearly no misinformation going on. so what's mainstream media saying they're gonna do? and anyway if you believe that it won't be expanded, abused, or misused, well i have a bridge to sell ya. or any 🍏 5!mp's that are interested in bridge ownership.

you should also remove all curtains/blinds from your residence, unless of course, you have something to hide.
 
The difference, though, is that most Android phones allow for an SD card, whereas with iPhones your only option for extra storage is iCloud or backing up to a PC. MOST users use the cloud and are ignorant of the latter.
for real! i don't know why people even use smartphones for anything other than killing time. most aren't even running virus protection but are typing in their passwords to their bank and amazon, etc. accounts after giving insane permissions to 25 Chinese-made apps that auto start, lol!! i can't stand even using smartphones and why i still keep my dumbphone. i also have a real gps that doesn't cut out outside of the city. i guess i'm just old school, whatevs. #knockonwood still have my original identity


Saying that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about freedom of speech because you have nothing to say.
-Jean-Michel Jarre
 
I saw it here, direct from Apple.

...

(whether or not you believe them is another question entirely, but they are very clear that, as it stands, they will not expand the system’s scope).
I think it would be more accurate to say that, "as it stands, they will not expand -that- system's scope"

I deal with this all the time in my work with APIs and bioinformatics. Once a subsystem is installed, we can write documentation that says "this" interface/UI/program, whatever, will do x and y. That doesn't mean you couldn't write another one that calls the same functions but in a different capacity and call it zed. Technically (and perhaps legally, IANAL) it is a different program, so bob's your uncle.
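As a purely hypothetical illustration of that point (every name below is made up; this is not Apple's code or anyone's actual API):

```python
# The documented interface and a later one can both be thin wrappers around the
# very same underlying routine. Hypothetical names throughout.
def match_against_hash_set(photo_hashes, target_hashes):
    """The shared subsystem: return the photo hashes that appear in the target set."""
    return [h for h in photo_hashes if h in target_hashes]

def csam_check(photo_hashes, csam_hashes):
    """The program the documentation describes: does x and y."""
    return match_against_hash_set(photo_hashes, csam_hashes)

def zed(photo_hashes, some_other_hash_list):
    """Technically a 'different program' with different paperwork, but it calls
    the same function against a different target set."""
    return match_against_hash_set(photo_hashes, set(some_other_hash_list))
```

Swap in a different target set and the documentation for csam_check stays perfectly true while zed quietly does something else with the same plumbing.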

The larger issue to me is not what it does but what it signals. Do we as a society want to choose to look at technology as either an extension of our collective self (the bionic or baseball glove definition) and start thinking about how that impacts our selfhood and inalienable rights? Or is it the next new camera obscura in which we view everything through a mediated lens that has been bought and paid for by whatever authority happens to be in power at the time?

Come with me on this journey for a hot minute and postulate that the future is a jammed-up mix of Futurama and Black Mirror. Take Apple Pay, for example. If you live in an area where it is very convenient to use and it provides value to you as a consumer, then you will think nothing of the ubiquitous and walled-garden nature of those features. All that matters is that it just works. I know plenty of people who love it, and from their perspective it makes sense (esp. when looking at touchless payments; if you ever worked retail you know cash is pretty gross). Now take that tech and embed it into everything: health, work, entertainment, travel. The more convenient it is, the more people will use it. Soon it will become second nature and everyone will have to have one because everyone will want one. They will beg for it, even.

Now... what if we decide one day that we don't want to use that tech anymore or we decide that we want to look at the source code for transparency or whatever. Too late. Without that tech, you can't get a job, pay rent, access communication systems, use transportation, pay any type of bill, get any sort of credit, get healthcare, get food even maybe? Even open doors to buildings.

This is not to throw Apple under the bus. Everyone is looking at these systems because it is easier to bandaid a social problem with technology than to actually solve the problem itself. I don't have the answers, but I do know that we as a society aren't going to tech our way out of problems that are inherent to the human condition. But hey, let's just kick that can down the road, re-org our synergies, put a pin in it and cross that bridge, shall we. Stiff upper lip etc.
 
right after they said IT WOULD in the ORIGINAL press release?!

You got a copy of that to back up what you’re saying, or are you just going by what you saw on YouTube?

Whether or not you believe them is a completely different question. you’ve read my posts. You can see - I don’t like the implications of this system at all. I think it’s an end run around constitutional rights and I think it strikes right at the heart of the question about whether we actually own the things we purchase.

but if we’re going to have a conversation about it we should stick with the actual facts. Show me the original press release from Apple that says they’re going to expand the system to any government and subject matter that’s requested of them. Not someone’s interpretation and insinuations and concerns after reading it. The actual text direct from Apple. If it’s really there it should be easy to find.

I get the fear that comes from misunderstanding, but claiming Apple said it will do X when they specifically said it will not doesn’t add to the conversation, it’s just nonsense internet rabble rousing that distracts from the actual issues at play.

auspexd yes… that’s what I said. All we have to go on right now is Apple’s statement. The capability is there, and with it comes the possibility for abuse.

If Apple is faced with a request from a government to do something they say they won’t, and their choices are either pay obscene fines, exit the market or just do it… I don’t trust any company to leave a market and stand on “principles”. Unless maybe it’s not a profitable market.
 
The larger issue to me is not what it does but what it signals. Do we as a society want to choose to look at technology as either an extension of our collective self (the bionic or baseball glove definition) and start thinking about how that impacts our selfhood and inalienable rights? Or is it the next new camera obscura in which we view everything through a mediated lens that has been bought and paid for by whatever authority happens to be in power at the time?
Nobody looks at this as either. Technology is a tool and that tool will change over time. It's just a means to an end. Over 10 years ago it was laptops and 20 years ago it was desktops. In the future that might be glasses or implants. Nobody cares how it works or why it works so long as it works. The moment it stops working for you is the moment you take to reflect how this gadget works and then you might be horrified what you find. For most Apple users it just works, and Apple will make sure it does, otherwise they may get another battery gate or bend gate situation.
 
...

again, it is not a backdoor that allows Apple to view your images.

it is a backdoor that allows them to scan and fingerprint them if you’ve chosen to upload your photos to iCloud. And that’s where the concern about a backdoor exists.

...

This is what Apple says they will not do, but it seems to me to be a realistic concern.

...
Straight from Apple:
Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the unreadable set of known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. Private set intersection (PSI) allows Apple to learn if an image hash matches the known CSAM image hashes, without learning anything about image hashes that do not match. PSI also prevents the user from learning whether there was a match. The device then creates a cryptographic safety voucher that encodes the match result. It also encrypts the image’s NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image. Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content.
https://www.apple.com/child-safety/pdf/Expanded_Protections_for_Children_Technology_Summary.pdf

That is the reason why I asked you what "visual derivative" means. Bottom line: there is access for Apple to get at your images on your phone, and probably anything else. A visual image is what one looks at; a derivative could mean anything from cropping 1 pixel from it, to black and white, to changing the format. To hide that behind some strict, never-to-be-seen and ever-changing code and a few winks is not privacy. The phone/OS allows your images to be viewed while you are not informed (which would be good, in my opinion, if they were legitimately CSAM images); you cannot, from a user's point of view, know. If that access becomes available to bad actors, the situation is even worse, since you won't even have any clue it is happening. The idea that it is solely restricted to iCloud and can never be expanded or modified from that point misses the point: it is software, and what it does can potentially be heavily modified by bad actors.

In addition, Apple does not even know what the CSAM images are and must have blind faith in the provider(s) of them. While this may be 100% legitimate, it can be abused with other types of images representing other types of data.

The opt-in/out for parents with children is good for the messaging. Once again, though, ML analyzing images gives another avenue to do more than just what is restricted to messaging. You would not know it if a bad actor using that access changed the parameters being looked for, or modified it to go beyond just messaging, and it started scanning your images for certain types of data/people/things, etc. Apple is giving an avenue for potential attacks.
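Stripped of the cryptography, the flow in that Apple excerpt looks roughly like the sketch below. To be clear about what is and is not real here: neural_hash, seal, and the plain boolean and counter are my own stand-ins; in the actual design the hash is a perceptual NeuralHash, private set intersection keeps individual match results hidden from both the device and the server, and threshold secret sharing means no voucher payload can be opened until an account crosses the match threshold.

```python
# Deliberately naive sketch of the client-side flow described above.
# Stand-ins only: no PSI, no threshold secret sharing, no real perceptual hash.
import hashlib
from dataclasses import dataclass

def neural_hash(image_bytes: bytes) -> int:
    """Stand-in for a perceptual hash; a real NeuralHash tolerates crops and
    re-encodes, a cryptographic hash like this one does not."""
    return int.from_bytes(hashlib.sha256(image_bytes).digest()[:8], "big")

def seal(image_bytes: bytes) -> bytes:
    """Stand-in for the encrypted NeuralHash + 'visual derivative' payload."""
    return b"<sealed derivative, %d source bytes>" % len(image_bytes)

@dataclass
class SafetyVoucher:
    matched: bool          # real design: hidden from the device UI and the server (PSI)
    sealed_payload: bytes  # real design: unreadable until the account crosses the threshold

def make_voucher(image_bytes: bytes, known_hashes: set) -> SafetyVoucher:
    """Built on-device at upload time and sent alongside the photo."""
    return SafetyVoucher(
        matched=neural_hash(image_bytes) in known_hashes,  # PSI stand-in
        sealed_payload=seal(image_bytes),
    )

def account_crosses_threshold(vouchers: list, threshold: int) -> bool:
    """Threshold secret sharing stand-in: below the threshold, nothing is interpretable."""
    return sum(v.matched for v in vouchers) >= threshold

if __name__ == "__main__":
    known = {neural_hash(b"known-bad-image")}
    uploads = [b"cat", b"dog", b"known-bad-image", b"beach"]
    vouchers = [make_voucher(img, known) for img in uploads]
    print("crosses threshold of 2:", account_crosses_threshold(vouchers, 2))  # False
    print("crosses threshold of 1:", account_crosses_threshold(vouchers, 1))  # True
```

Even in this toy form you can see why the argument keeps circling back to trust: the matching, the sealing, and the threshold all live in on-device code the user cannot inspect.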
 
Microsoft will one-up Apple and just have goon squads raiding people's homes, tying them to chairs Reservoir Dogs style, and beating them for hours on end until they confess to REALLY, REALLY loving kids.
 
For all of the effort put into Apple's complex (convoluted) system to ensure the semblance of privacy through this, it still leaves room for issues. The actual image hashing is some (proprietary) flavor of locality-sensitive hashing (LSH) to give similar images a similar or identical hash. Supposedly, any cropped or resolution scaled image is still going to represent a separate (but similar?) hash. So now we have these millions of hashes and their millions of corresponding slight variation hashes. Now consider hundreds (or thousands) of pictures from each of the estimated 1 billion+ iOS users worldwide. You are absolutely going to run into the birthday problem / hash collision at some point.

Unfortunately people don't have a good grasp that a 32-bit hash vs. a 1920*1080*24-bit = (49,766,400 bits) image is going to have collisions. Even if the hash is not heavily random.

And what happens if someone adds a postage stamp size porn image to the corner of a political meme, then enters that in the database. Do all variations of that meme set a flag? So anyone who retweeted a picture of Biden eating ice cream is now in some database of sus cp users? I can do that and if I get a bunch of Anti-Biden or Anti-Trump hits, I know your politics.
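For a rough sense of scale on that collision worry, the standard birthday-problem approximation says collisions become likely once the number of hashed images approaches the square root of the hash space. A quick sketch; the hash widths and image count are illustrative assumptions (and, as pointed out later in the thread, NeuralHash is not literally a fixed 32-bit hash):

```python
# Birthday-problem approximation: P(at least one collision) among n values drawn
# uniformly from a space of size 2**bits, i.e. 1 - exp(-n*(n-1) / (2 * 2**bits)).
# Hash widths and image count below are illustrative assumptions only.
from math import expm1

def p_any_collision(n_items: int, bits: int) -> float:
    space = 2.0 ** bits
    return -expm1(-n_items * (n_items - 1) / (2.0 * space))

if __name__ == "__main__":
    n = 1_000_000_000 * 100   # ~1 billion users x ~100 photos each (very rough)
    for bits in (32, 64, 96, 128):
        print(f"{bits:>3}-bit hash space, {n:.1e} images: P(any collision) ~ {p_any_collision(n, bits):.3g}")
```

At that scale a 32- or 64-bit space collides essentially with certainty, and you need something comfortably wider before accidental matches become rare, which is presumably part of why the design layers a per-account match threshold and human review on top of the raw hash comparison.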
 
Unfortunately people don't have a good grasp that a 32-bit hash vs. a 1920*1080*24-bit = (49,766,400 bits) image is going to have collisions. Even if the hash is not heavily random.
Interesting read:
https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf

This is the technical spec for the image hashing and CSAM detection.

For the longest time I've run iOS on the family mobile devices, primarily because I dislike Apple's support cycle (a miserly 3-4 years of updates) less than the abysmal long-term OS support on the average Android phone. The privacy bits were always a nice value-added proposition - I like the device encryption, the treatment of advertising identifiers, the MAC randomization. This unsolicited hashing of images on personal devices, though, is a terrible precedent to set. Apple's system as explained above is relatively well-conceived for being the big brother BS that it is. The problem as I see it is that governments will start expecting or enforcing this sort of localized hashing of images on devices. As others noted, it's not so much Apple but rather the authoritarian governments and other companies that I'm worried about.

For all of the effort put into Apple's complex (convoluted) system to ensure the semblance of privacy through this, it still leaves room for issues. The actual image hashing is some (proprietary) flavor of locality-sensitive hashing (LSH) to give similar images a similar or identical hash. ...

I have not had time to review the tech details, but are they hashing the image itself or hashing what a neural net outputs as a potential match? That is, is there a level of filtering/analysis/matching on-device with a neural net, with the result of that then being hashed?
 
Can’t recall if this was posted yet. Good interview. Still comes down to “trust us” though.

https://techcrunch.com/2021/08/10/i...abuse-detection-and-messages-safety-features/

noko I don't disagree with what you're saying about it being a potential avenue for abuse. But if you just want to take one or two sentences, come up with your own interpretation of what it does and how it does it, and put your own spin on it, what does that accomplish?

Yes, the "visual derivative" has to be recognizable as an image for them to be able to perform a manual review. And again, I don't disagree with what you're saying about the potential for abuse. But you're making a lot of suppositions. The potential for modification and expansion is certainly an issue, but right now the fact that the capability exists at all is, IMO, the bigger concern, as I've explained.

Don't focus on "what if" - we should be focused on "what is". There are enough issues there already.

You might want to read the technical overview here:

https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf

It goes into much more detail about how the NeuralHash system works. It doesn't specify what the visual derivative is, but I would assume it's grayscaled or intentionally obscured in some way - like having a big fuzzy filter put over it, or an extremely high level of compression. Purely guesswork on my part, and I agree that Apple not being clear about the derivative is a problem. But the defense of it would be: if reports are only generated when an image matches something, wouldn't you want the reviewers to be able to tell, on manual review, with an extremely high level of confidence, whether an image was a false positive? For that to work they'd have to be able to see some version of the image.

(Again, not saying it's a good thing. But within the confines of the system they've designed, it's a good solution to the problem).
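Just to picture what that guesswork might look like in practice, here's a toy sketch of an obscured "derivative" built from downscaling, blurring and heavy compression. This is purely my illustration of the speculation above - Apple's documents don't describe the actual derivative format, and the function name and file paths are placeholders:

```python
from PIL import Image, ImageFilter

def make_visual_derivative(path: str, out_path: str,
                           size: int = 64, blur_radius: float = 4.0) -> None:
    """Toy 'visual derivative': a grayscale, heavily downscaled, blurred thumbnail.

    Guesswork about what an obscured review image *might* look like;
    not based on anything Apple has published.
    """
    img = Image.open(path).convert("L")                       # grayscale
    img = img.resize((size, size))                            # aggressive downscale
    img = img.filter(ImageFilter.GaussianBlur(blur_radius))   # heavy blur
    img.save(out_path, format="JPEG", quality=30)             # high compression on top

# Example with placeholder paths:
# make_visual_derivative("photo.jpg", "derivative.jpg")
```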

Jagger100 Martin the Kiteboy it is not an old-school 32-bit hash. The article SamuelL421 embedded is the same one I linked above, and the explanation of NeuralHash, while (likely purposefully) vague, shows that the hash isn't a simple checksum of the file bytes at all: it's an M-bit code derived from a learned descriptor, where M is a design parameter of the scheme rather than a fixed 32 bits. Quoted below:

The system generates NeuralHash in two steps. First, an image is passed into a convolutional neural network to generate an N-dimensional, floating-point descriptor. Second, the descriptor is passed through a hashing scheme to convert the N floating-point numbers to M bits. Here, M is much smaller than the number of bits needed to represent the N floating-point numbers. NeuralHash achieves this level of compression and preserves sufficient information about the image so that matches and lookups on image sets are still successful, and the compression meets the storage and transmission requirements.

The neural network that generates the descriptor is trained through a self-supervised training scheme. Images are perturbed with transformations that keep them perceptually identical to the original, creating an original/perturbed pair. The neural network is taught to generate descriptors that are close to one another for the original/perturbed pair. Similarly, the network is also taught to generate descriptors that are farther away from one another for an original/distractor pair. A distractor is any image that is not considered identical to the original. Descriptors are considered to be close to one another if the cosine of the angle between descriptors is close to 1. The trained network's output is an N-dimensional, floating-point descriptor. These N floating-point numbers are hashed using LSH, resulting in M bits. The M-bit LSH encodes a single bit for each of M hyperplanes, based on whether the descriptor is to the left or the right of the hyperplane. These M bits constitute the NeuralHash for the image.

So it generates an N-dimensional floating-point descriptor - the spec doesn't tell us N, M, or the precision of the floats - and then from that descriptor it derives an M-bit hash via LSH. It's not just doing a straight hash calculation on the data in the file (which, as has been pointed out, would need to be a much more robust hashing algorithm than something like MD5 or SHA1).
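To make the hyperplane step in that excerpt concrete, here's a minimal sketch of random-hyperplane LSH over a descriptor vector. This is my own toy illustration of the general technique the excerpt describes, not Apple's code, and the sizes (N=128, M=96) are assumptions since Apple doesn't publish them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration only.
N = 128   # descriptor dimensions (output of the CNN in the excerpt above)
M = 96    # number of hyperplanes = number of hash bits

# M random hyperplanes through the origin, each defined by its normal vector.
hyperplanes = rng.standard_normal((M, N))

def lsh_bits(descriptor: np.ndarray) -> np.ndarray:
    """One bit per hyperplane: which side of the hyperplane the descriptor falls on."""
    return (hyperplanes @ descriptor > 0).astype(np.uint8)

# Two descriptors with high cosine similarity (an "original/perturbed pair")
# fall on the same side of most hyperplanes, so most of their bits agree.
original = rng.standard_normal(N)
perturbed = original + 0.05 * rng.standard_normal(N)   # small perturbation
distractor = rng.standard_normal(N)                    # unrelated image

print("original vs perturbed, bits differing:", int(np.sum(lsh_bits(original) != lsh_bits(perturbed))))
print("original vs distractor, bits differing:", int(np.sum(lsh_bits(original) != lsh_bits(distractor))))
# The perturbed pair differs in only a handful of bits; the distractor differs in roughly M/2.
```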

As technical summaries go it's a pretty good document. Highly encourage everyone to read it.
 

The problem is image bits >> hash bits, or it's not really a hash. And to quote the article:

"This hash must be significantly smaller than the image to be sufficiently efficient when stored on disk or sent over the network."

At some point, pictures of flowers will start giving you similar answers to pictures of puppies. There are essentially infinite combinations in a large picture, so collision avoidance is not possible.
 
Tbh, the pedos will just keep files encrypted and use a memory-only decryption/viewer, or run Linux in a VM. I'm sorry, but this is not robust, and it's using children to get a foot in the door.
 
The problem is image bits >> hash bits, or it's not really a hash. And to quote the article:

"This hash must be significantly smaller than the image to be sufficiently efficient when stored on disk or sent over the network."

At some point, pictures of flowers will start giving you similar answers to pictures of puppies. There are essentially infinite combinations in a large picture, so collision avoidance is not possible.

I'm really not sure how you're arriving at that extreme from that statement. The hash is an alphanumeric text string. Of course it's smaller than an image. Even if it's 4096 characters long, that's still going to be "significantly smaller" than a typical image file. Unless maybe it's a really, really tiny image file.

If you take it to the "infinite", yes, you can get collisions, but that's why the algorithms are constantly increasing in complexity and the strings increase in length ("bit depth"). These aren't simple MD5 calculations.
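To put numbers on "the strings increase in length": the birthday bound says you need roughly 1.18·√(2^M) items before a random collision becomes more likely than not, so each extra bit of hash width buys a lot. A quick sketch of my own arithmetic - this says nothing about NeuralHash's actual bit width, which Apple hasn't published:

```python
import math

# Approximate number of items at which a random M-bit hash reaches a 50% collision chance
# (birthday bound: n ~= 1.1774 * sqrt(2^M)).
for bits in (32, 64, 96, 128):
    threshold = 1.1774 * math.sqrt(2 ** bits)
    print(f"{bits:>3}-bit hash: ~{threshold:.3e} items before a 50% collision chance")
# 32 bits collides after ~77k items; 96 bits takes ~3e14; 128 bits ~2e19.
```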

Tbh, the pedos will just keep files encrypted and use a memory-only decryption/viewer, or run Linux in a VM. I'm sorry, but this is not robust, and it's using children to get a foot in the door.

Then we disagree on whether or not the design of the system is "robust". That's fair, although again I think you're only saying that because you're taking things to this "infinite" extreme.

But my experience as an analyst was that there are plenty of dumb criminals out there. It will probably catch a lot of them.
 
Thanks for the research and link. Now, I do focus on what-ifs at times: what the current implementation does is defined by code, but what it can do may be much different. Of course, Apple could be saints and deliver 100% with zero bugs, with every employee following every policy 100% of the time, etc. In our world that is not even close to the norm. For me, red flags are waving everywhere with this, and Apple is not where I would place that kind of trust. Of course my wife does, so she has an iPhone.

Apple has had a long-running security problem anyway with Pegasus spyware, which can invade your phone without you even clicking on anything. It controls your camera and mic, and sees passwords, documents and so on.
https://www.theguardian.com/technol...walled-garden-is-no-match-for-pegasus-spyware
And yet Pegasus has worked, in one way or another, on iOS for at least five years. The latest version of the software is even capable of exploiting a brand-new iPhone 12 running iOS 14.6, the newest version of the operating system available to normal users. More than that: the version of Pegasus that infects those phones is a “zero-click” exploit. There is no dodgy link to click, or malicious attachment to open. Simply receiving the message is enough to become a victim of the malware.


It’s worth pausing to note what is, and isn’t, worth criticising Apple for here. No software on a modern computing platform can ever be bug-free, and as a result no software can ever be fully hacker-proof. Governments will pay big money for working iPhone exploits, and that motivates a lot of unscrupulous security researchers to spend a lot of time trying to work out how to break Apple’s security.

But security experts I’ve spoken to say that there is a deeper malaise at work here. “Apple’s self-assured hubris is just unparalleled,” Patrick Wardle, a former NSA employee and founder of the Mac security developer Objective-See, told me last week. “They basically believe that their way is the best way.”

Apple's arrogance and its habit of ignoring failures in its OS are not things I see changing, and adding a number of additional avenues into iOS right at the system level just makes it worse in my book.

Brief rundown on Pegasus:
https://en.wikipedia.org/wiki/Pegasus_(spyware)
 
...

But my experience as an analyst was that there are plenty of dumb criminals out there. It will probably catch a lot of them.
My problem with this reasoning is that those dumb criminals would also be caught by an increased police effort. That makes the real choice in implementing this technology one between funding/effort and one of the most important human rights we have in a free society (the right to be presumed innocent until proven guilty), rather than the choice it's framed as: between waiving that right and catching those criminals.
 