Tag Archives for "algorithm"

Wrongfully Accused by an Algorithm

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

Mr. Williams’s case combines flawed technology with poor police work, illustrating how facial recognition can go awry.

Full article: Wrongfully Accused by an Algorithm – The New York Times

FTC Cautions Against Biased Outcomes in Use of AI and Algorithms

As the healthcare and financial impacts of the COVID-19 pandemic continue to evolve, the use of AI technology, and the risks that come with it, has received greater attention.

On April 8, 2020, the FTC posted an extensive summary of its recent enforcement actions, studies, and guidance regarding the use of AI tools and algorithms. The FTC expects the use of AI tools to be transparent, explainable, fair, empirically sound, and managed in a compliant and ethically accountable way.

Source: FTC Cautions Against Biased Outcomes in Use of AI and Algorithms

Time to re-evaluate AI algorithms right from the design stage

The inherent bias that all too often springs from AI algorithms is well documented.

With AI bias and errant outcomes surging, experts are calling for more human involvement: ‘Even the people deploying these algorithms sometimes would be surprised that these things could happen’.

The best approaches to eradicating such bias are general awareness and designating trained people to examine and audit AI output.
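
By way of illustration (a sketch, not a method from the article), one common check such an audit can run is the disparate-impact ratio: compare the model's favorable-outcome rate across demographic groups, flagging ratios below the "four-fifths" rule of thumb used in US employment contexts. A minimal Python sketch on made-up decisions:

```python
# Illustrative audit check: compare a model's favorable-outcome rates
# across demographic groups (the "four-fifths" rule of thumb).
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of favorable (1) predictions per group."""
    counts = defaultdict(lambda: [0, 0])   # group -> [favorable, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: fav / total for g, (fav, total) in counts.items()}

def disparate_impact_ratio(predictions, groups):
    """Lowest group rate divided by highest; < 0.8 is a common red flag."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values()), rates

# Made-up decisions: 1 = favorable outcome, 0 = unfavorable.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio, rates = disparate_impact_ratio(preds, groups)
print(rates)           # {'A': 0.6, 'B': 0.4}
print(f"{ratio:.2f}")  # 0.67 -> below 0.8, worth a closer look
```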

Full article: Time to re-evaluate AI algorithms right from the design stage, experts urge | ZDNet

Speech recognition algorithms may also have racial bias

As it turns out, algorithms that are trained on data that’s already subject to human biases can readily recapitulate them, as we’ve seen in places like the banking and judicial systems. Other algorithms have just turned out to be not especially good.

Now, researchers at Stanford have identified another area with potential issues: the speech-recognition algorithms that do everything from basic transcription to letting our phones fulfill our requests. These algorithms seem to have more issues with the speech patterns used by African Americans, although there’s a chance that geography plays a part, too.
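
Disparities like this are typically quantified with word error rate (WER): the word-level edit distance between the recognizer's transcript and a human reference, divided by the reference's length. A minimal sketch of the metric (illustrative; not the study's actual evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with standard Levenshtein dynamic programming over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)

# Comparing per-group averages of this metric is how such studies
# surface demographic gaps in recognizer accuracy.
print(word_error_rate("turn on the kitchen lights",
                      "turn of the kitchen light"))  # 0.4
```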

Source: Speech recognition algorithms may also have racial bias | Ars Technica

This Filter Makes Your Photos Invisible to Facial Recognition

Digital cloaking, and how you can reclaim a modicum of digital privacy.

A.I. researchers are starting to think about how technology can solve the problem it created. Algorithms with names like “PrivacyNet” and “AnonymousNet” and “Fawkes” now offer a glimmer of refuge from the facial recognition algorithms trawling the public web.
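
The broad idea behind these cloaking tools is to perturb a photo, within a barely visible pixel budget, so that a face-recognition model's embedding of it drifts away from the person's true identity. A rough sketch of that idea in PyTorch, using a stand-in embedding network (illustrative only; this is not the actual Fawkes or PrivacyNet algorithm):

```python
# Illustrative only -- NOT the actual Fawkes/PrivacyNet algorithm.
# Nudge pixels within a small L-infinity budget so the image's face
# embedding moves away from the original identity.
import torch
import torch.nn as nn

# Stand-in for a real face-embedding model (a pretrained CNN in practice).
embedder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 128),
)
embedder.eval()
for p in embedder.parameters():
    p.requires_grad_(False)

def cloak(image, epsilon=0.03, steps=40, lr=0.01):
    """Return a copy of `image` whose embedding is far from the original's,
    with every pixel changed by at most +/- epsilon."""
    original = embedder(image)                       # identity embedding
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = embedder((image + delta).clamp(0, 1))
        loss = -torch.norm(emb - original)           # maximize distance
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)          # stay imperceptible
    return (image + delta).detach().clamp(0, 1)

photo = torch.rand(1, 3, 112, 112)                   # placeholder "photo"
protected = cloak(photo)
```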

Full article: This Filter Makes Your Photos Invisible to Facial Recognition

How Explainable AI Is Helping Algorithms Avoid Bias

Artificial intelligence is biased. Human beings are biased. In fact, everyone and everything that makes choices is biased, insofar as we lend greater weight to certain factors over others when choosing.

Developers design neural networks that can learn from data, but once those creations are released into ‘the wild’, the networks operate without programmers being able to see exactly what makes them tick. As a result, companies often don’t discover their AI is biased until it’s too late.

Still, as much as AI has (deservedly) gained a reputation for being prejudiced against certain demographics (e.g. women and people of colour), companies involved in artificial intelligence are getting better at combating algorithmic bias.
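
One concrete form this takes (an illustration, not a method from the article) is inspecting which input features actually drive a trained model's predictions, so that a protected attribute, or a proxy for it, doesn't silently dominate. A minimal sketch on synthetic data using scikit-learn's permutation importance:

```python
# Sketch: surface which features a model actually relies on.
# If a protected attribute (or a proxy) ranks high, that's a bias flag.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)              # protected attribute (0/1)
# Synthetic label that (by construction) leaks the protected attribute.
y = ((income + 10 * group + rng.normal(0, 5, n)) > 55).astype(int)
X = np.column_stack([income, group])

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in zip(["income", "group"], result.importances_mean):
    print(f"{name}: {score:.3f}")
# A high importance for "group" shows the model is keyed to the
# protected attribute -- the kind of finding explainability tools expose.
```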

Source: How Explainable AI Is Helping Algorithms Avoid Bias

NIST Study Evaluates Algorithmic Bias

A new NIST study examines how accurately face recognition software tools identify people of varied sex, age and racial background.

Results captured in the report, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (NISTIR 8280), are intended to inform policymakers and to help software developers better understand the performance of their algorithms. Face recognition technology has inspired public debate in part because of the need to understand the effect of demographics on face recognition algorithms.
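
The headline metrics in evaluations like this are false match rate (FMR) and false non-match rate (FNMR), broken out by demographic group. A minimal sketch of that per-group computation (illustrative; not NIST's evaluation harness):

```python
# Sketch: per-group false match / false non-match rates for a face
# matcher, given comparison scores, ground truth, and a decision threshold.
from collections import defaultdict

def per_group_error_rates(scores, same_person, groups, threshold):
    """scores: matcher similarity per comparison; same_person: bool
    ground truth; groups: demographic label per comparison."""
    fm = defaultdict(lambda: [0, 0])   # group -> [false matches, impostor pairs]
    fnm = defaultdict(lambda: [0, 0])  # group -> [false non-matches, genuine pairs]
    for s, same, g in zip(scores, same_person, groups):
        if same:
            fnm[g][1] += 1
            if s < threshold:
                fnm[g][0] += 1   # genuine pair wrongly rejected
        else:
            fm[g][1] += 1
            if s >= threshold:
                fm[g][0] += 1    # impostor pair wrongly accepted
    fmr = {g: x / n for g, (x, n) in fm.items() if n}
    fnmr = {g: x / n for g, (x, n) in fnm.items() if n}
    return fmr, fnmr

# Toy data: (score, genuine pair?, group)
data = [(0.9, True, "A"), (0.4, True, "A"), (0.7, False, "A"),
        (0.8, True, "B"), (0.3, True, "B"), (0.2, False, "B")]
scores, truth, grp = zip(*data)
print(per_group_error_rates(scores, truth, grp, threshold=0.5))
# Large gaps between groups' FMR/FNMR are the demographic effects
# the NIST report measures.
```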

Source: NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software | NIST

The Senate’s secret algorithms bill doesn’t actually fight secret algorithms

In the case of the Filter Bubble Transparency Act, the name isn’t just spin; it’s an example of how badly defined buzzwords can make it impossible to address the internet’s problems. The bill is named after Eli Pariser’s 2011 book The Filter Bubble, which argues that companies like Facebook create digital echo chambers by optimizing content for what each person already engages with.

The FBTA aims to let people opt out of those echo chambers. Large companies would have to notify users if they’re delivering content — like search results or a news feed — based on personal information that the user didn’t explicitly provide.

However, the FBTA doesn’t make platforms explain exactly how their algorithms work. It doesn’t prevent them from using arcane and manipulative rules, as long as those rules aren’t built around certain kinds of personal data. And removing or disclosing a few factors in an algorithm doesn’t make the overall algorithm transparent.

Full article: The Senate’s secret algorithms bill doesn’t actually fight secret algorithms – The Verge

Legislation Would Force Google and Rivals to Disclose Search Algorithms

Senate lawmakers are teeing up a bill that, amid growing concern over the use of personal data, would require search engines to disclose the algorithms they apply in ranking internet searches and to give consumers an option for unfiltered results.

Search engines such as Alphabet Inc.’s Google unit use a variety of measures to filter results for individual searches, such as the user’s browsing activity, search history and geographical location.

Source: Legislation Would Force Google and Rivals to Disclose Search Algorithms – WSJ

UK Government Faces Court Over ‘Biased’ Visa Algorithm

The UK’s Home Office is facing a landmark Judicial Review to reveal how an algorithm it uses to triage visa applications works. It appears to be the first case of its kind in the UK and, if successful, could open the door to similar demands in both the public and private sectors.

The legal challenge has been launched by campaign groups Foxglove – which focuses on legal rights in relation to the abuse of technology – and the Joint Council for the Welfare of Immigrants. They believe the algorithm ‘may be discriminating on the basis of crude characteristics like nationality or age – rather than assessing applicants fairly, on the merits’.

Source: UK Government Faces Court Over ‘Biased’ Visa Algorithm – Artificial Lawyer
