
Free tools and resources for Data Protection Officers!

Tag Archives for "algorithm"

A new US bill would force companies to check their algorithms for bias

US lawmakers have introduced a bill that would require large companies to audit machine learning-powered systems — like facial recognition or ad targeting algorithms — for bias.

If passed, it would direct the Federal Trade Commission to create rules for evaluating “highly sensitive” automated systems. Companies would have to assess whether the algorithms powering these tools are biased or discriminatory, as well as whether they pose a privacy or security risk to consumers.

Source: A new bill would force companies to check their algorithms for bias – The Verge

Researchers Find Facebook’s Ad Targeting Algorithm Is Inherently Biased

Facebook is in trouble with the US Department of Housing and Urban Development (HUD) for what the department says are discriminatory ad targeting practices.

For years, Facebook allowed advertisers to target (or avoid targeting) protected groups, such as minorities and specific gender identities. But in a new paper, a team of researchers says that Facebook’s ad delivery algorithm is inherently biased even when advertisers try to reach a large, inclusive audience.

Source: Researchers Find Facebook’s Ad Targeting Algorithm Is Inherently Biased – Motherboard

This little-known facial-recognition accuracy test has big influence

The closely watched NIST results released last November concluded that the entire industry has improved not just incrementally, but “massively.” It showed that at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013, and just 0.2 percent of all searches by all algorithms tested failed in 2018, compared with a 4 percent failure rate in 2014 and 5 percent rate in 2010.

Full article: This little-known facial-recognition accuracy test has big influence

AI Diagnoses Genetic Syndromes Just From Patients’ Pictures

An algorithm is able to identify genetic syndromes in patients more accurately than doctors can, just by looking at a picture of a patient’s face. The results suggest AI could help diagnose rare disorders.

Source: AI Diagnoses Genetic Syndromes Just From Patients’ Pictures – D-brief

Why We Need to Audit Algorithms

Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools.

A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.

Full article: Why We Need to Audit Algorithms

Cambridge Analytica Knew How You’d Vote If You Wore Wrangler

The whistle-blower behind the Cambridge Analytica revelations said the now-defunct data research firm used the fashion preferences of Facebook Inc. users to help develop the algorithms needed to target them with political messaging.

Sharing examples of the anonymized data for the first time, originally collected and used by Cambridge Analytica, Christopher Wylie said people who displayed an interest in Abercrombie & Fitch tended on average to be less cautious and more liberal, and individuals who liked Wrangler were usually more conservative and more keen on “orderliness.”

Full article: Cambridge Analytica Knew How You’d Vote If You Wore Wrangler – Bloomberg

Algorithms can reduce discrimination, but only with proper data

If self-learning algorithms discriminate, it is not because there is an error in the algorithm, but because the data used to train the algorithm are “biased.”

It is only when you know which data subjects belong to vulnerable groups that bias in the data can be made transparent and algorithms trained properly. The taboo against collecting such data should, therefore, be broken, as this is the only way to eliminate future discrimination.
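The point above — that bias can only be made transparent once group membership is recorded — can be sketched with a simple fairness check. This is a minimal, illustrative example on hypothetical data (the group labels, decisions, and threshold are all invented for the sketch, not taken from any real system): it computes the selection rate per group and the demographic-parity gap between them, which is only possible because the sensitive attribute is present in the data.

```python
# Minimal sketch on synthetic data: measuring bias in a system's
# decisions requires knowing which subjects belong to which group.
# All labels and numbers here are hypothetical.

def selection_rate(decisions, groups, group):
    """Fraction of positive decisions (1s) received by one group."""
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

# Hypothetical audit log: 1 = favourable decision, 0 = unfavourable,
# with the (often taboo) group label recorded alongside each decision.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rate_a = selection_rate(decisions, groups, "A")
rate_b = selection_rate(decisions, groups, "B")

# Demographic-parity gap: a large gap between groups is a red flag
# that warrants a closer look at the training data and the model.
gap = abs(rate_a - rate_b)
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, gap={gap:.2f}")
```

Without the `groups` column, the same log would look unobjectionable, and the gap would be unmeasurable — which is exactly the argument the article makes for collecting such data under proper safeguards.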

Full article: Algorithms can reduce discrimination, but only with proper data

Facebook to let French regulators investigate its moderation processes

Facebook and the French government are going to cooperate to examine Facebook’s moderation efforts. At the start of 2019, French regulators will launch an informal investigation into both algorithm-powered and human moderation. Facebook is willing to cooperate and to give unprecedented access to its internal processes.

Full article: Facebook to let French regulators investigate its moderation processes

Child abuse algorithms: from science fiction to cost-cutting reality

In an age of austerity, and a climate of fear about child abuse, perhaps it is unsurprising that social workers have turned to new technology for help.

Local authorities are beginning to ask whether big data could help to identify vulnerable children. Could a computer program flag a problem family, identify a potential victim and prevent another Baby P or Victoria Climbié?

Source: Child abuse algorithms: from science fiction to cost-cutting reality | Society | The Guardian
