

Tag Archives for "algorithm"

The Senate’s secret algorithms bill doesn’t actually fight secret algorithms

In the case of the Filter Bubble Transparency Act, it’s not just spin; it’s an example of how badly defined buzzwords can make it impossible to address the internet’s problems. The bill is named after Eli Pariser’s 2011 book The Filter Bubble, which argues that companies like Facebook create digital echo chambers by optimizing content for what each person already engages with.

The FBTA aims to let people opt out of those echo chambers. Large companies would have to notify users if they’re delivering content — like search results or a news feed — based on personal information that the user didn’t explicitly provide.

However, the FBTA doesn’t make platforms explain exactly how their algorithms work. It doesn’t prevent them from using arcane and manipulative rules, as long as those rules aren’t built around certain kinds of personal data. And removing or disclosing a few factors in an algorithm doesn’t make the overall algorithm transparent.
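To make the bill's core mechanism concrete, here is a minimal, hypothetical sketch of the kind of toggle the FBTA envisions: the same feed ranked either with or without personal signals the user never explicitly provided. Every name and field below is illustrative; the bill itself specifies no implementation.

```python
# Hypothetical sketch of the opt-out the FBTA envisions: the same feed ranked
# either with or without personal signals the user never explicitly provided.
# All names and fields are illustrative; the bill specifies no implementation.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    base_relevance: float   # score from the content/query alone
    inferred_boost: float   # boost inferred from browsing history, location, etc.
    timestamp: float        # publication time (Unix seconds)

def rank_feed(items: list[Item], opted_out: bool) -> list[Item]:
    """Rank a feed. Opted-out users are ranked only on signals they supplied
    explicitly (here: base relevance, then recency); otherwise the inferred
    personalization boost is added in."""
    if opted_out:
        return sorted(items, key=lambda i: (i.base_relevance, i.timestamp), reverse=True)
    return sorted(items, key=lambda i: i.base_relevance + i.inferred_boost, reverse=True)
```

Note what the sketch leaves untouched: the ranking logic itself. Opting out only removes certain inputs, which is exactly the limitation the article describes.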

Full article: The Senate’s secret algorithms bill doesn’t actually fight secret algorithms – The Verge

Legislation Would Force Google and Rivals to Disclose Search Algorithms

Senate lawmakers are teeing up a bill that would require search engines to disclose the algorithms they apply in ranking internet searches and to give consumers an option for unfiltered searches, amid growing concern over the companies’ use of personal data.

Search engines such as Alphabet Inc.’s Google unit use a variety of measures to filter results for individual searches, such as the user’s browsing activity, search history and geographical location.

Source: Legislation Would Force Google and Rivals to Disclose Search Algorithms – WSJ

UK Government Faces Court Over ‘Biased’ Visa Algorithm

The UK’s Home Office is facing a landmark judicial review to reveal how an algorithm it uses to triage visa applications works. The case appears to be the first of its kind in the UK and, if successful, could open the door to similar demands across the public and private sectors.

The legal challenge has been launched by campaign groups Foxglove – which focuses on legal rights in relation to the abuse of technology – and the Joint Council for the Welfare of Immigrants. They believe the algorithm ‘may be discriminating on the basis of crude characteristics like nationality or age – rather than assessing applicants fairly, on the merits’.

Source: UK Government Faces Court Over ‘Biased’ Visa Algorithm – Artificial Lawyer

Health Care in the U.S. Has an Algorithm Bias Problem

While algorithms have become more powerful and ubiquitous, evidence has mounted that they reflect and even amplify real-world biases and racism. Recent research shows black patients are disproportionately impacted.

An algorithm used to determine prison sentences was found to be racially biased, incorrectly predicting a higher recidivism risk for black defendants and a lower risk for white defendants. Facial recognition software has been shown to have both racial and gender bias, identifying gender accurately for white men far more often than for other groups. Online advertisements that appear with Google search results have been found to show high-income jobs to men far more often than to women.
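The recidivism finding is usually expressed as a gap in false positive rates between groups. As a rough, hypothetical illustration of how such a gap is computed (the data and field names below are invented, not taken from the study):

```python
# Small sketch: compare false positive rates (flagged as high risk, but did
# not reoffend) across groups. Data and field names are made up for illustration.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of dicts with keys 'group', 'predicted_high_risk', 'reoffended'."""
    false_pos = defaultdict(int)   # predicted high risk but did not reoffend
    negatives = defaultdict(int)   # everyone who did not reoffend
    for r in records:
        if not r["reoffended"]:
            negatives[r["group"]] += 1
            if r["predicted_high_risk"]:
                false_pos[r["group"]] += 1
    return {g: false_pos[g] / n for g, n in negatives.items() if n}

sample = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
]
print(false_positive_rate_by_group(sample))  # {'A': 0.5, 'B': 0.0}
```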

Source: Health Care in the U.S. Has an Algorithm Bias Problem

One in three councils using algorithms to make welfare decisions

One in three councils are using computer algorithms to help make decisions about benefit claims and other welfare issues, despite emerging evidence that some of the systems are unreliable.

Companies including the US credit-rating businesses Experian and TransUnion, as well as the outsourcing specialist Capita and Palantir, a data-mining firm co-founded by the Trump-supporting billionaire Peter Thiel, are selling machine-learning packages to local authorities that are under pressure to save money.

Source: One in three councils using algorithms to make welfare decisions | Society | The Guardian

CoE launches public consultation on human rights impact of algorithmic systems

The Steering Committee on Media and Information Society (CDMSI) of the Council of Europe has published a draft recommendation on the human rights impacts of algorithmic systems and invites comments from the public.

The draft recommendation states that private sector actors should actively engage in participatory processes with consumer associations and data protection authorities for the design, implementation and evaluation of their complaint mechanisms, including collective redress mechanisms.

In addition, private sector actors must adequately train the staff involved in the review of algorithmic systems on, among other things, applicable personal data protection and privacy standards.

Source: Have your say on the draft recommendation on the human rights impacts of algorithmic systems! – Newsroom

A new US bill would force companies to check their algorithms for bias

US lawmakers have introduced a bill that would require large companies to audit machine learning-powered systems — like facial recognition or ad targeting algorithms — for bias.

If passed, it would ask the Federal Trade Commission to create rules for evaluating “highly sensitive” automated systems. Companies would have to assess whether the algorithms powering these tools are biased or discriminatory, as well as whether they pose a privacy or security risk to consumers.
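The bill leaves the details of such assessments to the FTC, but one common screening heuristic gives a feel for what an audit could include. The sketch below applies the "four-fifths rule" to favorable-outcome rates across groups; the metric, threshold, and data structure are assumptions for illustration, not requirements from the bill.

```python
# Illustrative bias screen: the "four-fifths rule" compares the rate of
# favorable outcomes (e.g. being shown an ad, passing an automated screen)
# across groups. Nothing here comes from the bill's text.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs; returns favorable-outcome rate per group."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(outcomes, threshold=0.8):
    """Flag disparate impact if the lowest group rate is under 80% of the highest."""
    rates = selection_rates(outcomes)
    lowest, highest = min(rates.values()), max(rates.values())
    return highest > 0 and lowest / highest >= threshold

data = [("men", True), ("men", True), ("men", False),
        ("women", True), ("women", False), ("women", False)]
print(selection_rates(data))     # men ≈ 0.67, women ≈ 0.33
print(passes_four_fifths(data))  # False: 0.33 / 0.67 = 0.5, below the 0.8 threshold
```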

Source: A new bill would force companies to check their algorithms for bias – The Verge

Researchers Find Facebook’s Ad Targeting Algorithm Is Inherently Biased

Facebook is in trouble with the US Department of Housing and Urban Development (HUD) for what the department says are discriminatory ad targeting practices.

For years, Facebook allowed advertisers to target (or avoid targeting) protected groups, like minorities and specific gender identities. But in a new paper, a team of researchers says that Facebook’s ad delivery algorithm is inherently biased even when advertisers try to reach a large, inclusive audience.

Source: Researchers Find Facebook’s Ad Targeting Algorithm Is Inherently Biased – Motherboard

This little-known facial-recognition accuracy test has big influence

The closely watched NIST results released last November concluded that the entire industry has improved not just incrementally, but “massively.” It showed that at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013, and just 0.2 percent of all searches by all algorithms tested failed in 2018, compared with a 4 percent failure rate in 2014 and a 5 percent rate in 2010.
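In relative terms, the quoted figures amount to roughly a twenty-fold drop in failed searches since 2014 and a twenty-five-fold drop since 2010; a quick check on the rates as reported:

```python
# Back-of-the-envelope comparison of the NIST failure rates quoted above.
rates = {2010: 0.05, 2014: 0.04, 2018: 0.002}
print(f"2014 -> 2018: {rates[2014] / rates[2018]:.0f}x fewer failed searches")  # 20x
print(f"2010 -> 2018: {rates[2010] / rates[2018]:.0f}x fewer failed searches")  # 25x
```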

Full article: This little-known facial-recognition accuracy test has big influence

AI Diagnoses Genetic Syndromes Just From Patients’ Pictures

An algorithm is able to identify genetic syndromes in patients more accurately than doctors can — just by looking at a picture of a patient’s face. The results suggest AI could help diagnose rare disorders.

Source: AI Diagnoses Genetic Syndromes Just From Patients’ Pictures – D-brief
