Thanks for the research and link. I do focus on "what ifs" at times; what the current implementation does is defined by code, but what it *can* do may be much different. And of course Apple could be saints and deliver 100% with zero bugs, every employee following all the policies 100%, etc. In our world that isn't even close to the norm. For me, red flags are waving everywhere with this, and Apple is not where I would put important trust. Of course my wife does, so she has an iPhone.. . .
noko, I don’t disagree with what you’re saying about it being a potential avenue for abuse. But if you just take one or two sentences, come up with your own interpretation of what it does and how it does it, and put your own spin on it, what does that accomplish?
Yes, the "visual derivative" has to be recognizable as an image for them to be able to perform a manual review. And again, I don't disagree with what you're saying about the potential for abuse. But you're making a lot of suppositions. And while the potential for modification and expansion is certainly an issue, right now the fact that the capability exists at all is, IMO, the bigger concern, as I've explained.
Don't focus on "what if" - we should be focused on "what is". There's enough issue there already.
You might want to read the technical overview here:
It goes into much more detail about how the NeuralHash system works. It doesn't specify what the visual derivative is, but I would assume it's grayscaled or intentionally obscured in some way - like having a big fuzzy filter put over it, or an extremely high level of compression. Purely guesswork on my part, and I agree that Apple not being clear about the derivative is a problem. But the defense of it would be: if the reports are only provided when an image matches something, wouldn't you want them to be able to tell, on manual review, with an extremely high level of confidence, whether an image was a false positive? For that to work they'd have to be able to see some version of the image.
(Again, not saying it's a good thing. But within the confines of the system they've designed, it's a good solution to the problem).
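To make the flow we've been discussing concrete, here's a minimal sketch in Python of the match-threshold-then-review design. To be clear, everything in it is an assumption for illustration: the function names (`neural_hash`, `visual_derivative`, `scan`) are hypothetical, SHA-256 is only a stand-in for a real perceptual hash (which, unlike SHA-256, is robust to resizing and re-encoding), and truncating bytes is only a stand-in for whatever degraded image Apple actually produces. The one detail taken from Apple's published description is that a threshold of roughly 30 matches must be crossed before anything is revealed for human review.

```python
import hashlib

# Sketch only - names, hash choice, and derivative method are all
# placeholders, not Apple's actual implementation.

MATCH_THRESHOLD = 30  # Apple has described a threshold of ~30 matches


def neural_hash(image_bytes: bytes) -> str:
    """Stand-in for NeuralHash. A real perceptual hash survives
    re-encoding and resizing; SHA-256 here is just a placeholder."""
    return hashlib.sha256(image_bytes).hexdigest()


def visual_derivative(image_bytes: bytes) -> bytes:
    """Stand-in for the 'visual derivative': some heavily degraded
    version of the image. Here we simply truncate the bytes."""
    return image_bytes[: max(1, len(image_bytes) // 4)]


def scan(images: list[bytes], known_hashes: set[str]) -> list[bytes]:
    """Return derivatives for manual review only when the number of
    matches crosses the threshold; otherwise reveal nothing."""
    matched = [img for img in images if neural_hash(img) in known_hashes]
    if len(matched) >= MATCH_THRESHOLD:
        return [visual_derivative(img) for img in matched]
    return []  # below threshold: no derivatives are revealed
```

The point the sketch makes is the one above: the reviewer only ever sees the degraded derivative, and only after enough matches accumulate, which is why the derivative has to remain recognizable enough to rule out false positives.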
. . .
Apple has had a long-running security problem anyway with Pegasus spyware, which can invade your phone without you even clicking on anything. It controls your camera and mic, and sees passwords, docs, and so on.
And yet Pegasus has worked, in one way or another, on iOS for at least five years. The latest version of the software is even capable of exploiting a brand-new iPhone 12 running iOS 14.6, the newest version of the operating system available to normal users. More than that: the version of Pegasus that infects those phones is a “zero-click” exploit. There is no dodgy link to click, or malicious attachment to open. Simply receiving the message is enough to become a victim of the malware.
It’s worth pausing to note what is, and isn’t, worth criticising Apple for here. No software on a modern computing platform can ever be bug-free, and as a result no software can ever be fully hacker-proof. Governments will pay big money for working iPhone exploits, and that motivates a lot of unscrupulous security researchers to spend a lot of time trying to work out how to break Apple’s security.
But security experts I’ve spoken to say that there is a deeper malaise at work here. “Apple’s self-assured hubris is just unparalleled,” Patrick Wardle, a former NSA employee and founder of the Mac security developer Objective-See, told me last week. “They basically believe that their way is the best way.”
Apple's arrogance and its habit of ignoring failures in its OS are things I don't see changing; adding a number of additional avenues in iOS right at the system level just makes it worse in my book.
Brief rundown on Pegasus: