The Jury Talks Back

5/17/2019

San Francisco Bans Use Of Facial-Recognition Technology By Law Enforcement

Filed under: Uncategorized — Dana @ 12:44 pm

[guest post by Dana]

San Francisco’s Board of Supervisors voted 8-1 this week to ban what many police departments consider a useful investigative tool:

The ban is part of a broader anti-surveillance ordinance that the city’s Board of Supervisors approved on Tuesday. The ordinance, which outlaws the use of facial-recognition technology by police and other government departments, could also spur other local governments to take similar action. Eight of the board’s 11 supervisors voted in favor of it; one voted against it, and two who support it were absent.

[…]

San Francisco’s new rule, which is set to go into effect in a month, forbids the use of facial-recognition technology by the city’s 53 departments — including the San Francisco Police Department, which doesn’t currently use such technology but did test it out between 2013 and 2017. However, the ordinance carves out an exception for federally controlled facilities at San Francisco International Airport and the Port of San Francisco. The ordinance doesn’t prevent businesses or residents from using facial recognition or surveillance technology in general — such as on their own security cameras. And it also doesn’t do anything to limit police from, say, using footage from a person’s Nest camera to assist in a criminal case.

“We all support good policing but none of us want to live in a police state,” San Francisco Supervisor Aaron Peskin, who introduced the bill earlier this year, told CNN Business ahead of the vote.

Another stated problem with the technology is built-in bias, which can lead to the misidentification of certain groups:

There are concerns that these systems are not as effective at correctly recognizing people of color and women. One reason for this is that the datasets used to train the software may be disproportionately male and white.

The decision will also make it more difficult for city agencies to continue using any surveillance programs:

Any city department that wants to use surveillance technology or services (such as the police department if it were interested in buying new license-plate readers, for example) must first get approval from the Board of Supervisors. That process will include submitting information about the technology and how it will be used, and presenting it at a public hearing. With the new rule, any city department that already uses surveillance tech will need to tell the board how it is being used.

The ordinance also states that the city will need to report to the Board of Supervisors each year on whether surveillance equipment and services are being used in the ways for which they were approved, and include details like what data was kept, shared or erased.

Yet one also needs to consider the benefits of the technology to law enforcement at large in fighting crime:

Perhaps the most compelling argument for FRS is that it can make law enforcement more efficient. FRS allows a law enforcement agency to run a photograph of someone just arrested through its databases to identify the person and see if he or she is wanted for other offenses. It also can help law enforcement officers who are out on patrol or monitoring a heavily populated event identify wanted criminals if and as they encounter them.

Imagine a law enforcement officer wearing a body camera with FRS software identifying, from within a huge crowd, a person suspected of planning to detonate a bomb. The ambient presence of FRS applied to a feed from stadium cameras would allow law enforcement to identify dangerous attendees in cooperation with the company managing event security.

There are many contexts in which this law enforcement technology has already been brought to bear. Beyond spotting threats in a crowd, facial recognition software can be used to quickly suss out perpetrators of identity fraud; the New York Department of Motor Vehicles’ Facial Recognition Technology Program has been doing just that, with 21,000 possible identity fraud cases identified since 2010. The U.S. Department of Homeland Security is also experimenting with FRS to assist in identifying abducted and exploited children. Just this month, U.S. Customs and Border Protection has started teaming up with airlines in Boston, Atlanta, Washington, and New York to use FRS for boarding pass screening. In a safety context, some American schools are installing cameras with FRS to identify the presence of gang members, fired employees, and sex offenders on school grounds.
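For the technically curious, the "run a photograph through its databases" step described above is, in most modern systems, an embedding comparison: a neural network converts each face photo into a numeric vector, and identification becomes a nearest-neighbor search over a gallery of known vectors. Below is a minimal sketch of that matching step, assuming embeddings are already computed; the function names are illustrative, and random vectors stand in for a real embedding model so the sketch runs standalone. None of this is any particular agency's actual pipeline.

```python
# A minimal sketch of one-to-many face matching. Real systems use a deep
# network (e.g., a FaceNet-style model) to turn each photo into an embedding
# vector; random vectors are substituted here so the sketch is self-contained.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_probe(probe: np.ndarray, gallery: dict[str, np.ndarray],
                threshold: float = 0.6) -> list[tuple[str, float]]:
    """Rank gallery identities by similarity to the probe embedding.

    Returns candidates scoring above `threshold`, best first. Deployed
    systems typically return such a ranked candidate list for a human
    examiner to review, not a single definitive answer.
    """
    scores = [(name, cosine_similarity(probe, emb))
              for name, emb in gallery.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in 128-dimensional embeddings for a gallery of booking photos.
    gallery = {f"subject_{i}": rng.standard_normal(128) for i in range(1000)}
    # A probe near one gallery entry, as if from a new arrest photo.
    probe = gallery["subject_42"] + 0.1 * rng.standard_normal(128)
    for name, score in match_probe(probe, gallery)[:5]:
        print(f"{name}: {score:.3f}")
```

This threshold-and-candidate-list design also helps explain the NYPD episode described further down: a badly pixelated probe image yields a poor embedding and no matches above the threshold, while a clean lookalike photo can return several candidates.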

Despite Fourth Amendment protections and the cited benefits of the technology, those supporting a ban on law enforcement's use of facial recognition worry about a larger, more threatening picture:

Civil libertarians worry that the technology could morph into pervasive automated authoritarianism in which individuals can be tracked everywhere, in real time, similar to the version being developed by the Chinese government. The Chinese government reportedly aims, as part of its Skynet surveillance system, to add 400 million video cameras to its existing 170 million over the next three years. The cameras employ real-time facial recognition technology.

Where do you draw the line: ban it altogether, or closely regulate its use because of the associated risks? Which reminds me: the New York City Police Department, an agency where facial recognition has reportedly helped break open more than 2,500 cases, used the face of a well-known celebrity (apparently without his knowledge) to help identify a suspect:

The New York Police Department used a photo of Woody Harrelson in its facial recognition program in an attempt to identify a beer thief who looked like the actor, according to a report published Thursday.

Georgetown University’s Center on Privacy and Technology highlighted the April 2017 episode in “Garbage In, Garbage Out,” a report on what it says are flawed practices in law enforcement’s use of facial recognition.

The report says security footage of the thief was too pixelated and produced no matches while high-quality images of Harrelson, a three-time Oscar nominee, returned several possible matches and led to one arrest.

–Dana

AP Headline: “A pregnant man’s tragedy tests gender notions”

Filed under: Uncategorized — Patterico @ 7:21 am

The AP has an article titled "Blurred lines: A pregnant man's tragedy tests gender notions":

When the man arrived at the hospital with severe abdominal pains, a nurse didn’t consider it an emergency, noting that he was obese and had stopped taking blood pressure medicines. In reality, he was pregnant — a transgender man in labor that was about to end in a stillbirth.

The tragic case, described in Wednesday’s New England Journal of Medicine, points to larger issues about assigning labels or making assumptions in a society increasingly confronting gender variations in sports, entertainment and government. In medicine, there’s a similar danger of missing diseases such as sickle cell and cystic fibrosis that largely affect specific racial groups, the authors write.

“The point is not what’s happened to this particular individual but this is an example of what happens to transgender people interacting with the health care system,” said the lead author, Dr. Daphna Stroumsa of the University of Michigan, Ann Arbor.

“He was rightly classified as a man” in the medical records and appears masculine, Stroumsa said. “But that classification threw us off from considering his actual medical needs.”

Does it make sense to say the patient “was rightly classified as a man,” given what happened?

It's likely that classifying this patient as a woman would have given the baby a better chance of survival:

The 32-year-old patient told the nurse he was transgender when he arrived at the emergency room and his electronic medical record listed him as male. He hadn’t had a period in several years and had been taking testosterone, a hormone that has masculinizing effects and can decrease ovulation and menstruation. But he quit taking the hormone and blood pressure medication after he lost insurance.

A home pregnancy test was positive and he said he had “peed himself” — a possible sign of ruptured membranes and labor. A nurse ordered a pregnancy test but considered him stable and his problems non-urgent.

Several hours later, a doctor evaluated him and the hospital test confirmed pregnancy. An ultrasound showed unclear signs of fetal heart activity, and an exam revealed that part of the umbilical cord had slipped into the birth canal. Doctors prepared to do an emergency cesarean delivery, but in the operating room no fetal heartbeat was heard. Moments later, the man delivered a stillborn baby.

A woman showing up with similar symptoms “would almost surely have been triaged and evaluated more urgently for pregnancy-related problems,” the authors wrote.

But the patient was a woman, at least for the purposes of the medical professionals trying to assess what to do. It was the "proper" classification of the patient as a man that increased the chance of this tragic outcome.

I'm not here to mock or deride anyone who believes they were born the wrong gender, and I can understand why such people want to be addressed by their preferred gender. However, as Ben Shapiro likes to say, facts don't care about your feelings, and biology doesn't care what you choose to call yourself. Medical professionals need a better way to handle such situations than shrugging their collective shoulders and repeating the same mistake.

