Patterico's Pontifications

8/12/2021

Republicans Introduce Vaccine Passport and Voter ID Harmonization Act

Filed under: General — Dana @ 12:33 pm



[guest post by Dana]

So, this is happening:

Republican lawmakers on Thursday introduced the Vaccine Passport and Voter ID Harmonization Act, legislation that would require states mandating vaccine passports to also mandate voter ID requirements.

The Daily Caller News Foundation first obtained the text of the bill, introduced by Kevin Cramer of North Dakota in the Senate and Nancy Mace of South Carolina in the House, “requiring states and local jurisdictions that institute vaccine passports to require voter identification in federal elections.”

“It makes no sense for Democrats to adamantly oppose commonsense Voter ID policies which protect the integrity of our elections,” Cramer said in a statement.

“If they’re comfortable making people show their private medical records to simply go to a restaurant, they should be fine having people prove they are who they say they are before they vote,” he continued. “Our legislation shines a light on their hypocrisy.”

Here’s Mace explaining why she believes the legislation is necessary:

Showing an ID is something we must do in everyday life. We need an ID when we get a job, cash our paychecks, rent an apartment, buy a car, buy alcohol or even cold medicine. States who mandate vaccine passports should be just as rigorous when it comes to something as important as protecting the right to vote. I am introducing legislation to require all states and local jurisdictions that institute vaccine passports to also require voter identification in ALL federal elections. It makes too much sense not to.

I’m pressed for time, and while I have a lot to say about this, I am forced to just drop it here for you to discuss.

–Dana

Apple To Install Software To Check iPhones For Images of Child Sexual Abuse

Filed under: General — Dana @ 9:28 am



[guest post by Dana]

Apple announced that it would be rolling out a child safety initiative to flag child sex abuse images in iCloud accounts:

Apple announced its intention to roll out a new update that would allow the company to detect images of child sexual abuse stored in iCloud Photos. This announcement came paired with two new features designed to similarly protect against child abuse.

Along with the iCloud feature, the company plans to launch a new tool within the Messages app that would warn children and their parents about the receiving or sending of sexually explicit photos. Additionally, Apple announced its intention to expand guidance in Siri and Search to protect children from “unsafe situations.”

News of these updates was first reported by the Financial Times, which wrote that the detection feature would “continuously scan photos that are stored on a U.S. user’s iPhone,” with law enforcement alerted when harmful material is found.

Note:

The technology involved in this plan is fundamentally new. While Facebook and Google have long scanned the photos that people share on their platforms, their systems do not process files on your own computer or phone. Because Apple’s new tools do have the power to process files stored on your phone, they pose a novel threat to privacy.

Apple addressed the privacy issue, to some degree:

Apple said…that its detection system is designed with “user privacy in mind.” Instead of scanning images in the cloud, it said, the “system performs on-device matching using a database” of known child abuse images compiled by the National Center for Missing and Exploited Children (NCMEC). Apple wrote that it transforms that database material into unreadable “hashes” that are stored on the user’s device.

“Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known [child sexual abuse] hashes,” the company wrote. “This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image.”
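For readers who want a concrete picture of what “on-device matching” against a hash database means, here is a rough sketch in Python. It is only an illustration of the general idea described in the quote, not Apple’s implementation: the hash function, the database contents, and the voucher format below are all hypothetical stand-ins, and the perceptual-hashing and private set intersection pieces of the real design are not reproduced here.

```python
# Rough sketch of the idea described above: before upload, the device hashes the
# image and checks the hash against a database of known hashes shipped to the
# device, recording the result in a "safety voucher". All names and data here
# are hypothetical; the real design uses a perceptual hash and private set
# intersection, neither of which is reproduced in this sketch.
import hashlib

# Hypothetical on-device copy of the known-image hash database.
KNOWN_HASHES: set = set()  # would be populated from the vendor-supplied database


def image_hash(image_bytes: bytes) -> bytes:
    # Stand-in hash. A real system would use a perceptual hash so that slightly
    # altered copies of the same image still produce a matching value.
    return hashlib.sha256(image_bytes).digest()


def make_safety_voucher(image_bytes: bytes) -> dict:
    # Record whether this image matches the database. In the real design the
    # match result is encrypted so the server cannot read any single voucher.
    h = image_hash(image_bytes)
    return {"image_hash": h.hex(), "matched": h in KNOWN_HASHES}
```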

It sounds like an absolutely worthy endeavor. After all, we want to keep children safe from sexual exploitation, and to see those who commit such heinous acts against children held accountable. And yet, there are concerns…

Consider that most parents (and grandparents) have taken photos of their chubby, dimply babies and grandbabies in various states of undress, including playing in the bathtub or in wading pools, toddlers romping through backyard sprinklers sans clothing, etc. While completely innocent, what happens if those images are mistakenly flagged? Because if an image is flagged, it will then be reviewed by employees. Some anonymous individual will be making the decision about whether or not to file a report on you, which could trigger a notification to law enforcement. Apple addressed these concerns. Whether it did so satisfactorily is up to the individual to decide:

In conjunction with this, Apple said it uses another piece of technology that ensures the safety vouchers cannot be interpreted by the company unless a voucher is flagged as a child sexual abuse image, whereupon the company will “manually review” the reported content. If deemed abusive [by an employee], the company may disable the individual’s account and will send a report to NCMEC, which can then contact law enforcement. The company reported this technology has a “one in one trillion chance per year” of incorrectly flagging an image.
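The article doesn’t spell out where the “one in one trillion chance per year” figure comes from, but claims like it typically rest on requiring a threshold of matches before any voucher can be opened. Here is a back-of-the-envelope illustration; the per-image false-match rate, photo count, and threshold below are invented for the sake of the arithmetic and are not Apple’s actual parameters.

```python
# Back-of-the-envelope illustration (not Apple's numbers): if each innocent photo
# has only a tiny chance of falsely matching a known hash, and several matches
# are required before anything is reviewed, the chance that a whole account is
# falsely flagged in a year becomes astronomically small.
from math import exp, factorial

p = 1e-6         # assumed chance a single innocent photo falsely matches
n = 20_000       # assumed photos uploaded by one account in a year
threshold = 30   # assumed number of matches required before human review

lam = n * p      # expected number of false matches for that account (0.02 here)

# Poisson tail: probability of at least `threshold` false matches in a year.
# (A good approximation because p is tiny and n is large.)
prob = sum(exp(-lam) * lam**k / factorial(k) for k in range(threshold, threshold + 60))
print(f"Chance an innocent account reaches the review threshold: {prob:.1e}")
```

The whole calculation, of course, stands or falls on how small the per-image rate really is, which is exactly what independent testing would have to establish.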

Understanding that the intent of the “manual review” is to help prevent mistakes, I’m wondering what sort of expertise that individual will have to make said decisions. And what if, in real-life situations, the odds of incorrectly flagging an image end up being higher than the nearly impossible odds claimed, and innocent people are mistakenly targeted? One has to wonder just how much testing took place to be able to make the “one in one trillion” claim. Was it enough?

Another worry is that the new technology has not been sufficiently tested. The tool relies on a new algorithm designed to recognize known child sexual abuse images, even if they have been slightly altered. Apple says this algorithm is extremely unlikely to accidentally flag legitimate content, and it has added some safeguards, including having Apple employees review images before forwarding them to the National Center for Missing and Exploited Children. But Apple has allowed few if any independent computer scientists to test its algorithm.
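For what it’s worth, the “even if they have been slightly altered” part is what distinguishes this kind of matching from an ordinary file checksum. Below is a toy Python sketch of the general technique (a simple average hash compared by Hamming distance); it is a generic textbook illustration only, and makes no claim to resemble the proprietary algorithm Apple actually built.

```python
# Toy illustration of matching "slightly altered" copies of an image: a
# perceptual hash maps an image to a short bit string so that near-duplicates
# land close together, and matching tolerates a few differing bits. This simple
# average-hash scheme is a generic textbook example, not Apple's algorithm.

def average_hash(pixels: list) -> int:
    """Hash a small grayscale image (e.g. an 8x8 grid of brightness values,
    given as a list of rows) to one bit per pixel: 1 if brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def looks_like_known_image(candidate: int, known: int, max_differing_bits: int = 5) -> bool:
    # Hamming distance: how many bits differ between the two hashes. A small
    # distance means the images are (probably) minor variations of each other.
    return bin(candidate ^ known).count("1") <= max_differing_bits
```

That tolerance for small differences is also what makes accidental matches conceivable in the first place, which is why the amount of independent testing matters.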

Moreover, although concerns about the iMessage feature are brushed off here, I’m not sure they should be, despite reassurances:

In the case of the iMessage child safety service, the privacy intrusion is not especially grave. At no time is Apple or law enforcement informed of a nude image sent or received by a child (again, only the parents of children under 13 are informed), and children are given the ability to pull back from a potentially serious mistake without informing their parents.

We aren’t told what happens if the child ignores the warning and views the image anyway, and it ends up corresponding to material on the National Center for Missing and Exploited Children registry. Wouldn’t the image then be flagged, triggering a duty to notify the parents and steps to notify law enforcement? If so, what would the parents then be facing? And considering the immaturity and lack of self-control of children under 13, I’m not convinced that, depending on the specific age, they would be able to grasp the huge risk of viewing such an image and be willing and/or capable of pulling back.

Anyway, here are more issues of concern:

While Apple has vowed to use this technology to search only for child sexual abuse material, and only if your photos are uploaded to iCloud Photos, nothing in principle prevents this sort of technology from being used for other purposes and without your consent. It is reasonable to wonder if law enforcement in the United States could compel Apple (or any other company that develops such capacities) to use this technology to detect other kinds of images or documents stored on people’s computers or phones.

While Apple is introducing the child sexual abuse detection feature only in the United States for now, it is not hard to imagine that foreign governments will be eager to use this sort of tool to monitor other aspects of their citizens’ lives — and might pressure Apple to comply. Apple does not have a good record of resisting such pressure in China, for example, having moved Chinese citizens’ data to Chinese government servers. Even some democracies criminalize broad categories of hate speech and blasphemy. Would Apple be able to resist the demands of legitimately elected governments to use this technology to help enforce those laws?

I have to admit that it feels uncomfortable – and almost wrong – to raise questions about a proposed program intended to help protect children from abhorrent evil, and yet here we are. (I just can’t help but picture some loving grandparents, with images of happy, unclothed grandbabies, being that “one in one trillion” mistake and getting caught up in a devastating legal nightmare…)

Thoughts?

–Dana

