
Tag Archives for "algorithm"

Speech recognition algorithms may also have racial bias

As it turns out, algorithms trained on data that's already subject to human biases can readily recapitulate them, as we've seen in places like the banking and judicial systems. Others have simply turned out not to be especially good.

Now, researchers at Stanford have identified another area with potential issues: the speech-recognition algorithms that do everything from basic transcription to letting our phones fulfill our requests. These algorithms seem to have more issues with the speech patterns used by African Americans, although there’s a chance that geography plays a part, too.
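The kind of comparison behind a finding like this is straightforward to sketch: compute the word error rate (WER) of the recognizer's transcripts separately for each speaker group and compare the averages. The sketch below is a minimal illustration with made-up transcripts and hypothetical group labels; it is not the Stanford study's methodology or data.

```python
from collections import defaultdict

def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

# Hypothetical trials: (speaker group, reference transcript, ASR output).
trials = [
    ("group_a", "set a timer for ten minutes", "set a timer for ten minutes"),
    ("group_b", "set a timer for ten minutes", "set a time for tin minute"),
]

by_group = defaultdict(list)
for group, ref, hyp in trials:
    by_group[group].append(wer(ref, hyp))

for group, scores in sorted(by_group.items()):
    print(group, "mean WER:", sum(scores) / len(scores))
```

A systematic gap in mean WER between groups is exactly the kind of disparity the researchers report, though geography and dialect can confound the picture.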

Source: Speech recognition algorithms may also have racial bias | Ars Technica

This Filter Makes Your Photos Invisible to Facial Recognition

Digital cloaking, and how you can reclaim a modicum of digital privacy.

A.I. researchers are starting to think about how technology can solve the problem it created. Algorithms with names like “PrivacyNet” and “AnonymousNet” and “Fawkes” now offer a glimmer of refuge from the facial recognition algorithms trawling the public web.
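The details vary by tool, but the shared idea is an adversarial perturbation: nudge an image's pixels, within a tiny budget, so that a recognition model's embedding of the face moves while the picture looks unchanged to people. Below is a minimal, self-contained sketch of that idea using a toy linear embedding as a stand-in for a real face-recognition network; nothing here reflects the actual internals of PrivacyNet, AnonymousNet, or Fawkes.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 64 * 64))  # toy linear "face embedding" (stand-in)

def embed(image):
    return W @ image.ravel()

def cloak(image, budget=0.03, steps=200, step_size=0.002):
    """Push the embedding away from the original while keeping every pixel
    within `budget` of its original value (an L-infinity constraint)."""
    target = embed(image)
    # Start from a tiny random perturbation so the ascent has a direction.
    x = np.clip(image + rng.uniform(-1e-3, 1e-3, image.shape), 0.0, 1.0)
    for _ in range(steps):
        # For a linear embedding, the gradient of ||embed(x) - target||^2
        # w.r.t. x is 2 * W.T @ (embed(x) - target); we ascend its sign.
        grad = 2 * (W.T @ (embed(x) - target)).reshape(image.shape)
        x = x + step_size * np.sign(grad)
        x = np.clip(x, image - budget, image + budget)  # stay within budget
        x = np.clip(x, 0.0, 1.0)                        # stay a valid image
    return x

image = rng.random((64, 64))
cloaked = cloak(image)
print("max pixel change:", np.abs(cloaked - image).max())                  # small
print("embedding shift:", np.linalg.norm(embed(cloaked) - embed(image)))  # nonzero
```

Real tools optimize against deep networks by gradient descent, but the trade-off is the same: the larger the pixel budget, the further the embedding moves and the more visible the cloak becomes.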

Full article: This Filter Makes Your Photos Invisible to Facial Recognition

How Explainable AI Is Helping Algorithms Avoid Bias

Artificial intelligence is biased. Human beings are biased. In fact, everyone and everything that makes choices is biased, insofar as we lend greater weight to certain factors over others when choosing.

Developers design neural networks that can learn from data, but once those creations are released into the wild, the networks operate without programmers being able to see exactly what makes them tick. As a result, companies often don't discover that their AI is biased until it's too late.

Still, as much as AI has (deservedly) gained a reputation for being prejudiced against certain demographics (e.g. women and people of colour), companies involved in artificial intelligence are getting steadily better at combating algorithmic bias.
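One concrete form "combating bias" takes is auditing: checking a model's outcomes per demographic group before (and after) deployment. The sketch below shows one common check, comparing positive-prediction rates across groups and summarizing the gap as a disparate-impact ratio, on synthetic data; it illustrates the general technique, not any specific company's tooling.

```python
import numpy as np

def selection_rates(predictions, groups):
    """Positive-prediction rate per demographic group."""
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

# Synthetic example: the score generator favours group "A" by construction.
rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=1000)
scores = rng.random(1000) + (groups == "A") * 0.1
approved = scores > 0.55

rates = selection_rates(approved, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates)
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
print("disparate impact ratio:", round(ratio, 3))
```

Explainability tools go a step further, attributing individual predictions to input features so developers can see which factors are driving a skewed outcome.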

Source: How Explainable AI Is Helping Algorithms Avoid Bias

NIST Study Evaluates Algorithmic Bias

A new NIST study examines how accurately face recognition software tools identify people of varied sex, age and racial background.

Results captured in the report, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (NISTIR 8280), are intended to inform policymakers and to help software developers better understand the performance of their algorithms. Face recognition technology has inspired public debate in part because of the need to understand the effect of demographics on face recognition algorithms.
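To make the "demographic effects" question concrete: verification accuracy is typically summarized by the false match rate (FMR, impostor pairs accepted) and false non-match rate (FNMR, genuine pairs rejected), and the report examines how these vary across groups. Below is a hedged sketch of that per-group breakdown on synthetic similarity scores; it is not NIST's code or data.

```python
import numpy as np

def rates_by_group(scores, same_person, group, threshold):
    """FMR and FNMR per demographic group for a verification system."""
    out = {}
    for g in np.unique(group):
        mask = group == g
        impostor = mask & ~same_person
        genuine = mask & same_person
        fmr = float((scores[impostor] >= threshold).mean())  # impostors accepted
        fnmr = float((scores[genuine] < threshold).mean())   # genuine rejected
        out[g] = {"FMR": fmr, "FNMR": fnmr}
    return out

# Synthetic trials: one group gets noisier similarity scores by construction.
rng = np.random.default_rng(2)
n = 4000
group = rng.choice(["group_1", "group_2"], size=n)
same_person = rng.random(n) < 0.5
noise = np.where(group == "group_2", 0.25, 0.15)
scores = np.where(same_person, 0.8, 0.3) + rng.standard_normal(n) * noise

print(rates_by_group(scores, same_person, group, threshold=0.55))
```

A single global accuracy number can hide exactly this kind of structure, which is why the report breaks errors out by sex, age, and racial background.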

Source: NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software | NIST

The Senate’s secret algorithms bill doesn’t actually fight secret algorithms

In the case of the Filter Bubble Transparency Act, it’s not just spin; it’s an example of how badly defined buzzwords can make it impossible to address the internet’s problems. The bill is named after Eli Pariser’s 2011 book The Filter Bubble, which argues that companies like Facebook create digital echo chambers by optimizing content for what each person already engages with.

The FBTA aims to let people opt out of those echo chambers. Large companies would have to notify users if they’re delivering content — like search results or a news feed — based on personal information that the user didn’t explicitly provide.

However, the FBTA doesn’t make platforms explain exactly how their algorithms work. It doesn’t prevent them from using arcane and manipulative rules, as long as those rules aren’t built around certain kinds of personal data. And removing or disclosing a few factors in an algorithm doesn’t make the overall algorithm transparent.
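What the bill does require is easier to see with a toy example: a feed that ranks with inferred personal signals by default but exposes an "unfiltered" mode that ignores them. Everything below (field names, weights, the two-signal model) is hypothetical, chosen only to illustrate the opt-out the FBTA contemplates.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    timestamp: float     # seconds since epoch
    base_quality: float  # content-only signal, no personal data
    affinity: float      # inferred from this user's history (personal data)

def rank_feed(posts, personalized=True):
    if not personalized:
        # "Filter-bubble-free" view: reverse-chronological, ignoring any
        # signal inferred from data the user didn't explicitly provide.
        return sorted(posts, key=lambda p: p.timestamp, reverse=True)
    # Default view: personal-data-driven ranking (weights are made up).
    return sorted(posts, key=lambda p: p.base_quality + 2.0 * p.affinity,
                  reverse=True)

posts = [
    Post(1, timestamp=2000.0, base_quality=0.9, affinity=0.1),
    Post(2, timestamp=1000.0, base_quality=0.2, affinity=0.9),
]
print([p.post_id for p in rank_feed(posts)])                      # [2, 1]
print([p.post_id for p in rank_feed(posts, personalized=False)])  # [1, 2]
```

Note what the toggle doesn't do: the personalized branch's scoring logic stays entirely opaque, which is the article's point about the gap between the bill's name and its effect.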

Full article: The Senate’s secret algorithms bill doesn’t actually fight secret algorithms – The Verge

Legislation Would Force Google and Rivals to Disclose Search Algorithms

Senate lawmakers are teeing up a bill that would require search engines to disclose the algorithms they apply in ranking internet searches and to give consumers an option for unfiltered results, amid growing concern over the engines' use of personal data.

Search engines such as Alphabet Inc.’s Google unit use a variety of measures to filter results for individual searches, such as the user’s browsing activity, search history and geographical location.

Source: Legislation Would Force Google and Rivals to Disclose Search Algorithms – WSJ

UK Government Faces Court Over ‘Biased’ Visa Algorithm

The UK's Home Office is facing a landmark judicial review to reveal how an algorithm it uses to triage visa applications works. It appears to be the first case of its kind in the UK, and a successful challenge could open the door to similar demands in both the public and private sectors.

The legal challenge has been launched by campaign groups Foxglove – which focuses on legal rights in relation to the abuse of technology – and the Joint Council for the Welfare of Immigrants. They believe the algorithm 'may be discriminating on the basis of crude characteristics like nationality or age – rather than assessing applicants fairly, on the merits'.

Source: UK Government Faces Court Over ‘Biased’ Visa Algorithm – Artificial Lawyer

Health Care in the U.S. Has an Algorithm Bias Problem

While algorithms have become more powerful and ubiquitous, evidence has mounted that they reflect and even amplify real-world biases and racism. Recent research shows black patients are disproportionately impacted.

An algorithm used to determine prison sentences was found to be racially biased, incorrectly predicting a higher recidivism risk for black defendants and a lower risk for white defendants. Facial recognition software has been shown to exhibit both racial and gender bias, reliably identifying a person's gender only for white men. Online advertisements that appear with Google search results have been found to show high-income jobs to men far more often than to women.

Source: Health Care in the U.S. Has an Algorithm Bias Problem

One in three councils using algorithms to make welfare decisions

One in three councils are using computer algorithms to help make decisions about benefit claims and other welfare issues, despite evidence emerging that some of the systems are unreliable.

Companies including the US credit-rating businesses Experian and TransUnion, as well as the outsourcing specialist Capita and Palantir, a data-mining firm co-founded by the Trump-supporting billionaire Peter Thiel, are selling machine-learning packages to local authorities that are under pressure to save money.

Source: One in three councils using algorithms to make welfare decisions | Society | The Guardian

CoE launches public consultation on human rights impact of algorithmic systems

The Steering Committee on Media and Information Society (CDMSI) of the Council of Europe has published a draft recommendation on the human rights impacts of algorithmic systems and invites comments from the public.

The draft recommendation states that private sector actors should actively engage in participatory processes with consumer associations and data protection authorities for the design, implementation and evaluation of their complaint mechanisms, including collective redress mechanisms.

In addition, private sector actors must adequately train the staff involved in the review of algorithmic systems on, among other things, applicable personal data protection and privacy standards.

Source: Have your say on the draft recommendation on the human rights impacts of algorithmic systems! – Newsroom
