
Tag Archives for "algorithm"

UK Government Agrees to Stop Using ‘Visa Streaming’ Algorithm

The UK Home Office has announced that it will halt the use of its "Visa Streaming" algorithm. The change is the result of a settlement in a lawsuit challenging the UK government's use of the algorithmic decision system.

The system produced a "traffic light" assessment of visa applicants (Green, Yellow, or Red) that informed how they would be treated during the visa approval process.

Source: UK Government Agrees to Stop Using ‘Visa Streaming’ Algorithm

NIST study finds that masks defeat most facial recognition algorithms

A study by the National Institute of Standards and Technology (NIST) found that 89 commercial facial recognition algorithms were defeated by masks.

The study — part of a series from NIST’s Face Recognition Vendor Test (FRVT) program conducted in collaboration with the Department of Homeland Security’s Science and Technology Directorate, the Office of Biometric Identity Management, and Customs and Border Protection — explored how well each of the algorithms was able to perform “one-to-one” matching, where a photo is compared with a different photo of the same person.
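To make "one-to-one" matching concrete, here is a minimal sketch of how a verification system compares two photos. The toy 4-dimensional vectors and the 0.8 threshold are hypothetical; real systems compare high-dimensional face embeddings produced by a trained model, and each vendor tunes its own matching threshold.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, reference, threshold=0.8):
    # One-to-one matching: accept only if the probe photo's embedding
    # is similar enough to the reference photo's embedding.
    return cosine_similarity(probe, reference) >= threshold

# Toy embeddings (real face embeddings have hundreds of dimensions).
reference   = [0.88, 0.12, 0.22, 0.38]
same_person = [0.90, 0.10, 0.20, 0.40]
stranger    = [0.10, 0.90, 0.70, 0.05]

print(verify(same_person, reference))  # similar vectors -> match
print(verify(stranger, reference))     # dissimilar vectors -> no match
```

A mask effectively corrupts the probe embedding (much of the face is hidden), pushing the similarity score below the threshold even for the same person, which is the failure mode the NIST study measured.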

Source: NIST study finds that masks defeat most facial recognition algorithms | VentureBeat

Uber Drivers Sue to Gain Access to its Secret Algorithms

Uber’s power lies in information asymmetry. This EU court case could help end it.

Four United Kingdom Uber drivers launched a lawsuit Monday to gain access to Uber’s algorithms through Europe’s General Data Protection Regulation (GDPR) in a bid that could reshape the gig economy landscape across Europe.

Source: Uber Drivers Sue to Gain Access to its Secret Algorithms

Wrongfully Accused by an Algorithm

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

Mr. Williams’s case combines flawed technology with poor police work, illustrating how facial recognition can go awry.

Full article: Wrongfully Accused by an Algorithm – The New York Times

FTC Cautions Against Biased Outcomes in Use of AI and Algorithms

As the healthcare and financial impacts of COVID-19 continue to evolve with the global pandemic, the use of AI technology and associated risks have received greater attention.

On April 8, 2020, the FTC posted an extensive summary of its recent enforcement actions, studies, and guidance regarding the use of AI tools and algorithms. The FTC expects the use of AI tools to be transparent, explainable, fair, empirically sound, and managed in a compliant and ethically accountable way.

Source: FTC Cautions Against Biased Outcomes in Use of AI and Algorithms

Time to re-evaluate AI algorithms right from the design stage

The inherent bias that all-too-often springs from AI algorithms is well-documented.

With AI bias and errant outcomes surging, experts are calling for more human involvement. ‘Even the people deploying these algorithms sometimes would be surprised that these things could happen’.

The best approach to eradicating such bias combines general awareness with designating trained people to examine and audit AI output.
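One simple check an auditor might run on AI output is a disparate-impact ratio: compare the rate of favorable outcomes across demographic groups. The group names and counts below are hypothetical, and the 0.8 ("four-fifths") benchmark is just one common heuristic, not a standard the article itself prescribes.

```python
def disparate_impact_ratio(outcomes_by_group):
    # outcomes_by_group maps group name -> (favorable outcomes, total decisions).
    # Returns the lowest group rate divided by the highest group rate;
    # values well below 1.0 suggest the output deserves a closer look.
    rates = {g: favorable / total for g, (favorable, total) in outcomes_by_group.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical audit of an algorithm's approval decisions.
audit = {"group_a": (80, 100), "group_b": (50, 100)}
print(round(disparate_impact_ratio(audit), 3))  # 0.625 -- below the 0.8 heuristic
```

A check like this only flags skewed outcomes; deciding whether the skew reflects bias still requires the trained human reviewers the article calls for.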

Full article: Time to re-evaluate AI algorithms right from the design stage, experts urge | ZDNet

Speech recognition algorithms may also have racial bias

As it turns out, algorithms that are trained on data that’s already subject to human biases can readily recapitulate them, as we’ve seen in places like the banking and judicial systems. Other algorithms have just turned out to be not especially good.

Now, researchers at Stanford have identified another area with potential issues: the speech-recognition algorithms that do everything from basic transcription to letting our phones fulfill our requests. These algorithms seem to have more issues with the speech patterns used by African Americans, although there’s a chance that geography plays a part, too.

Source: Speech recognition algorithms may also have racial bias | Ars Technica

This Filter Makes Your Photos Invisible to Facial Recognition

Digital cloaking, and how you can reclaim a modicum of digital privacy.

A.I. researchers are starting to think about how technology can solve the problem it created. Algorithms with names like “PrivacyNet” and “AnonymousNet” and “Fawkes” now offer a glimmer of refuge from the facial recognition algorithms trawling the public web.

Full article: This Filter Makes Your Photos Invisible to Facial Recognition

How Explainable AI Is Helping Algorithms Avoid Bias

Artificial intelligence is biased. Human beings are biased. In fact, everyone and everything that makes choices is biased, insofar as we lend greater weight to certain factors over others when choosing.

Developers design neural networks that learn from data, but once those creations are released into ‘the wild’, they operate without programmers being able to see exactly what makes them tick. Hence, companies don’t find out their AI is biased until it’s too late.

Still, as much as AI has (deservedly) gained a reputation for being prejudiced against certain demographics (e.g. women and people of colour), companies involved in artificial intelligence are increasingly getting better at combating algorithmic bias.

Source: How Explainable AI Is Helping Algorithms Avoid Bias

NIST Study Evaluates Algorithmic Bias

A new NIST study examines how accurately face recognition software tools identify people of varied sex, age and racial background.

Results captured in the report, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (NISTIR 8280), are intended to inform policymakers and to help software developers better understand the performance of their algorithms. Face recognition technology has inspired public debate in part because of the need to understand the effect of demographics on face recognition algorithms.

Source: NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software | NIST
