Patterico's Pontifications

8/12/2021

Apple To Install Software To Check iPhones For Images of Child Sexual Abuse

Filed under: General — Dana @ 9:28 am



[guest post by Dana]

Apple announced that they would be rolling out a child safety initiative to flag child sex abuse images on iCloud accounts:

Apple announced its intention to roll out a new update that would allow the company to detect images of child sexual abuse stored in iCloud Photos. This announcement came paired with two new features designed to similarly protect against child abuse.

Along with the iCloud feature, the company plans to launch a new tool within the Messages app that would warn children and their parents about the receiving or sending of sexually explicit photos. Additionally, Apple announced its intention to expand guidance in Siri and Search to protect children from “unsafe situations.”

News of these updates was first reported in the Financial Times, where the paper wrote that the detection feature would “continuously scan photos that are stored on a U.S. user’s iPhone,” with law enforcement being alerted to harmful material.

Note:

The technology involved in this plan is fundamentally new. While Facebook and Google have long scanned the photos that people share on their platforms, their systems do not process files on your own computer or phone. Because Apple’s new tools do have the power to process files stored on your phone, they pose a novel threat to privacy.

Apple addressed the privacy issue, to some degree:

Apple said…that its detection system is designed with “user privacy in mind.” Instead of scanning images on the Cloud, it said the “system performs on-device matching using a database” of known child abuse images compiled by the National Center for Missing and Exploited Children (NCMEC). Apple wrote it transforms that database material into unreadable “hashes” that are stored on the users’ device.

“Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known [child sexual abuse] hashes,” the company wrote. “This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image.”
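
To make the mechanics a bit more concrete, here is a deliberately simplified sketch, in Python, of the general flow Apple describes: hash each photo on the device, check it against a database of known-image hashes before upload, and seal the result into a “safety voucher.” Every name below is hypothetical, and the real system reportedly uses a perceptual hash plus private set intersection so that the device itself never learns the match result; this is an illustration of the idea, not Apple’s actual protocol.

```python
import hashlib
import json
import secrets

# Hypothetical stand-in for the on-device database of hashes of known
# abuse images (in the described design these are derived from NCMEC's
# list and stored only as unreadable hashes).
KNOWN_IMAGE_HASHES = {
    "3f79bb7b435b05321651daefd374cd21b5d1b1b2",  # placeholder entry
}

def image_hash(image_bytes: bytes) -> str:
    """Hash a photo's contents.

    A plain SHA-256 is used only to keep the sketch short; a real system
    uses a *perceptual* hash so slightly altered copies still match.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def make_safety_voucher(image_bytes: bytes) -> dict:
    """Run the on-device check and wrap the result in a 'voucher'.

    In this toy version the match result is visible on the device; in the
    described design, private set intersection hides the result, and the
    voucher can only be opened on Apple's side after a match is flagged.
    """
    matched = image_hash(image_bytes) in KNOWN_IMAGE_HASHES
    return {
        "voucher_id": secrets.token_hex(8),
        "encrypted_payload": "<encrypted match result + image data>",
        "_debug_matched": matched,  # not exposed in the real protocol
    }

if __name__ == "__main__":
    print(json.dumps(make_safety_voucher(b"fake image bytes"), indent=2))
```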

It sounds like an absolutely worthy endeavor. After all, we want to keep children safe from being sexually exploited as well as see those who partake in such heinous behaviors against children held accountable. And yet, there are concerns…

Consider that most parents (and grandparents) have taken photos of their chubby, dimply babies and grandbabies in various states of undress, including playing in the bathtub or wading pool, toddlers romping in backyard sprinklers sans clothing, etc. While completely innocent, what happens if those images are mistakenly flagged? If an image is flagged, it will then be reviewed by employees: anonymous individuals making decisions about whether or not to file a report on you, which could trigger a notification to law enforcement. Apple addressed these concerns; whether satisfactorily is up to the individual to decide:

In conjunction with this, Apple said it uses another piece of technology that ensures the safety vouchers cannot be interpreted by the company unless the voucher is flagged as a child sexual abuse image, whereupon the company will “manually review” the reported content. If deemed abusive [by an employee], the company may disable the individual’s account and will send a report to NCMEC, which can then contact law enforcement. The company reported this technology has a “one in one trillion chance per year” of incorrectly flagging an image.
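
For a rough sense of what a per-year error claim involves, here is a back-of-the-envelope calculation in Python. The per-image false-positive rate and the number of uploads are assumed numbers chosen purely for illustration, not figures Apple has published.

```python
# Illustration of how a per-image false-positive rate compounds over a
# year of uploads. Both numbers below are assumptions, not Apple's.
per_image_false_positive = 1e-9   # assumed chance a legitimate photo matches
photos_per_year = 10_000          # assumed uploads by a heavy user

# Probability that at least one photo is wrongly flagged in a year.
p_at_least_one = 1 - (1 - per_image_false_positive) ** photos_per_year
print(f"Chance of at least one false flag per year: {p_at_least_one:.2e}")
# With these assumptions, roughly 1e-05 -- far worse than one in a trillion,
# which is presumably why the claimed figure also depends on how the
# matching and review pipeline is designed, not on per-image odds alone.
```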

Understanding that the intent of the “manual review” is to help prevent mistakes, I’m wondering what sort of expertise that individual will have in order to make said decisions. And what if, in real-life situations, the odds of incorrectly flagging an image end up being higher than the near-impossible figure claimed, and innocent people are mistakenly targeted? One has to wonder just how much testing took place to be able to make the “one in one trillion” claim. Was it enough:

Another worry is that the new technology has not been sufficiently tested. The tool relies on a new algorithm designed to recognize known child sexual abuse images, even if they have been slightly altered. Apple says this algorithm is extremely unlikely to accidentally flag legitimate content, and it has added some safeguards, including having Apple employees review images before forwarding them to the National Center for Missing and Exploited Children. But Apple has allowed few if any independent computer scientists to test its algorithm.
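
To see why matching “slightly altered” copies of an image is the hard part, here is a toy perceptual-hash example in Python using the Pillow library (a simple “average hash”). Apple’s actual algorithm is proprietary and far more sophisticated; this sketch only illustrates the class of technique, and why its real-world false-positive rate is exactly the thing independent testing would need to probe.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: shrink to an 8x8 grayscale image, then set one
    bit per pixel depending on whether it is above the mean brightness.

    Resized, recompressed, or slightly edited copies of the same picture
    tend to produce the same or a very similar bit pattern.
    """
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance means 'probably the same image'."""
    return bin(a ^ b).count("1")

# Hypothetical usage, with made-up file names:
# h1 = average_hash("original.jpg")
# h2 = average_hash("slightly_edited_copy.jpg")
# print(hamming_distance(h1, h2))  # small for near-duplicates, but unrelated
#                                  # images can occasionally land close too,
#                                  # which is the false-positive worry.
```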

Moreover, although concerns about the iMessage feature are brushed off here, I’m not sure they should be, despite reassurances:

In the case of the iMessage child safety service, the privacy intrusion is not especially grave. At no time is Apple or law enforcement informed of a nude image sent or received by a child (again, only the parents of children under 13 are informed), and children are given the ability to pull back from a potentially serious mistake without informing their parents.

We aren’t told what happens if the child ignores the warning and views the image anyway, and it ends up corresponding with material in the National Center for Missing and Exploited Children’s registry. Wouldn’t the image then be flagged, triggering both a duty to notify the parents and steps to notify law enforcement? If so, what would the parents then be facing? And considering the immaturity and lack of self-control of children under 13, I’m not convinced that, depending on the specific age, they would be able to grasp the huge risk in viewing such an image and be willing and/or capable of pulling back.

Anyway, here are more issues of concern:

While Apple has vowed to use this technology to search only for child sexual abuse material, and only if your photos are uploaded to iCloud Photos, nothing in principle prevents this sort of technology from being used for other purposes and without your consent. It is reasonable to wonder if law enforcement in the United States could compel Apple (or any other company that develops such capacities) to use this technology to detect other kinds of images or documents stored on people’s computers or phones.

While Apple is introducing the child sexual abuse detection feature only in the United States for now, it is not hard to imagine that foreign governments will be eager to use this sort of tool to monitor other aspects of their citizens’ lives — and might pressure Apple to comply. Apple does not have a good record of resisting such pressure in China, for example, having moved Chinese citizens’ data to Chinese government servers. Even some democracies criminalize broad categories of hate speech and blasphemy. Would Apple be able to resist the demands of legitimately elected governments to use this technology to help enforce those laws?

I have to admit that it feels uncomfortable – and almost wrong – to raise questions about a proposed program intended to help protect children from abhorrent evil, and yet here we are. (I just can’t help but picture some loving grandparents with images of happy, unclothed grandbabies being that “one in a trillion” mistake, caught up in a devastating legal nightmare…)

Thoughts?

–Dana

21 Responses to “Apple To Install Software To Check iPhones For Images of Child Sexual Abuse”

  1. The ascendancy of technology always brings its own concerns and worries. That there are evil people out there committing crimes against children is always an ugly reality that must be confronted. But I’m in the “why do kids under 13 even have cell phones anyway” camp (even while realizing that they can be lifesavers for latch key kids, etc…).

    Sigh.

    Dana (174549)

  2. I wonder what the Russians planted on Hunter Biden’s iPhone(s)?

    BuDuh (7bca93)

  3. I’d be interested to know if there’s an uptick in Samsung sales. Also, it would seem a class-action suit regarding the reneging on a publicly-stated privacy promise is possible.

    Remember, this is a company that famously defended the privacy rights of dead mass-murdering Jihadists.

    Kevin M (ab1c11)

  4. I guess my question is: how many child pornographers are using Apple iCloud and/or their iPhone to take and store pornographic photos?

    Is this a solution for a real problem, or a solution looking for a problem?

    Hoi Polloi (ade50d)

  5. Child abuse is indeed a compelling issue. So, however are drug dealing (we have a whole agency dealing with that), mass shootings, spousal abuse and violent crime.

    Why not scan texts and emails for tell-tales regarding these important issues, too. If we aren’t involved in these things, we won’t have to worry, right?

    Apple’s stance regarding the privacy of its phones and ability to resist subpoenas seems undermined. Personally, I think this is an outgrowth of similar technology demanded by Xi and the Chinese.

    Kevin M (ab1c11)

  6. Even if one is cleared of a crime after an erroneous report and investigation, how many people want to explain their arrest for child porn on their next job interview?

    Kevin M (ab1c11)

  7. What next- McDonald’s regulars who stray to Burger King w/a coupon for a free Whopper to “protect” people from obesity and heart disease? It’s time to bust these bastardly Big Tech monopolies up.

    But then… China and all that Silicon Valley campaign cash, eh Congress?!

    DCSCA (f4c5e5)

  8. On the face of it, seems like a worthy endeavor. But I, for one, am distrustful of childless people holding that power.

    Colonel Haiku (2601c0)

  9. Perfect driverless cars, first.

    DCSCA (f4c5e5)

  10. Which is why I don’t upload my photos to iCloud (or anywhere).

    Rip Murdock (d2a2a8)

  11. @10.’Which is why I don’t upload my photos to iCloud (or anywhere).’

    And which is why I don’t own an iPhone.

    “… life is skittles and life is beer…” – Tom Lehrer

    DCSCA (f4c5e5)

  12. Damn Apple Hippies, as my IT/DBA friends are wont to call them.

    urbanleftbehind (4e5526)

  13. OT: Rachel Maddow may be leaving MSNBC to start her own network.

    https://www.msn.com/en-us/tv/news/rachel-maddow-seriously-considers-leaving-msnbc/ar-AANfvQM

    A humble suggestion: counteract the Trumpist One American Network with her own No America Network.

    Kevin M (ab1c11)

  14. Perfect driverless cars, first.

    I’m sure Apple has thoroughly tested this in China, scouring phones for images of Tiananmen Square, tank man and Xi with a mustache.

    Kevin M (ab1c11)

  15. If you could get R. Maddow and the fingernails “thing” from Haiku’s video to run into each other knocking heads at full speed, the Freaky Friday effect would render them straight.

    urbanleftbehind (4e5526)

  16. I have a big problem with employees of big tech making an interpretation call on a flagged photo. Whenever a “flag” is issued, that info, unseen, should be forwarded to an officer of the court, a lawyer, child advocate or forensic expert, who then views and makes the determination whether or not to take appropriate action, which may include, but is not limited to, sending the matter to law enforcement.

    Enough with employees of the private sector making such sensitive calls on private information. Observed behavior, where an individual has no legal expectation of privacy, is one thing, but private information is another.

    felipe (484255)

  17. I just can’t help but picture some loving grandparents with images of happy, unclothed grandbabies being that “one in a trillion” mistakenly being caught up in a devastating legal nightmare…

    I recall a case not long ago involving parents and an innocent photo of an infant; don’t remember any details. But it highlighted the difficulty of distinguishing between innocent and evil in some cases. Some people may be overzealous in accusations. But sometimes, suspicion is warranted.

    My comment right now is very general: When people try sincerely to do something good and important, even urgent, it can have side effects and downsides that lead other people to see eeeevil behind the initiative. Public policy and corporate policy are applied in a world filled with ambiguities and tradeoffs. Most people aren’t trying to do evil. Many efforts to do good are not optimal.

    Radegunda (33a224)

  18. 15… sure, ulb, you say that now. But the unintended consequences could take this entire planet down.

    You think that and contend it can be done, while never thinking or asking “should it be done?”

    Colonel Haiku (2601c0)

  19. 2. I wonder what the Russians planted on Hunter Biden’s iPhone(s)?

    Wonder what he planted in that hooker on the latest video of his sexcapades released? A paint brush, perhaps. A straw? A social disease?

    “I’m proud of him. I’m proud of my son.” – President Plagiarist

    Idiot.

    DCSCA (f4c5e5)

  20. They are only flagging matches to “known child abuse images”

    They are not looking for original pictures.

    A problem can come from something flagged wrongly but this algorithm won’t flag them in the first place.

    It will also attempt to match slightly altered images.

    Sammy Finkelman (51cd0c)

  21. “Everything that goes into an Apple device goes directly to the Chinese government” is the safest bet, in my opinion. And don’t count on the GenZers glued to their phones to overwhelm the Chinese monitors and analysts with uploads of Meghan Da Stallion. They have AIs to filter out the garbage.

    nk (1d9030)

