Tag Archives for "Artificial Intelligence"

This Filter Makes Your Photos Invisible to Facial Recognition

Digital cloaking, and how you can reclaim a modicum of digital privacy.

A.I. researchers are starting to think about how technology can solve the problem it created. Algorithms with names like “PrivacyNet” and “AnonymousNet” and “Fawkes” now offer a glimmer of refuge from the facial recognition algorithms trawling the public web.

Full article: This Filter Makes Your Photos Invisible to Facial Recognition

Fighting AI Bias

Artificial intelligence (AI) has amazing potential to change the world, and we’ve only just begun to scratch the surface. In financial services, AI will help banks make loans more quickly and fairly, reduce the incidences of credit card fraud and help keep banking networks safe from hackers.

But innovations such as AI constantly test the bounds of what is acceptable, responsible and ethical. There’s always tension between what is next and what is right. And as we manage that tension in AI and machine learning, everyone from data scientists to boardroom executives must focus deeply on outcomes and the people who will be most affected by the decisions emerging from AI algorithms.

Full article: Forbes Insights: Fighting AI Bias—Digital Rights Are Human Rights

Clearview AI facial recognition app maker sued by Vermont

The complaint alleges that the facial recognition company’s scraping of images for its database violates state privacy laws.

Vermont’s complaint alleges Clearview AI violates the state’s Consumer Protection Act by collecting facial recognition data of Vermont residents, including children, without their consent. It also alleges that the “screen scraping” Clearview AI uses to collect the data violates the state’s new Data Broker Law, which targets companies that collect and sell data on consumers.

Source: Clearview AI facial recognition app maker sued by Vermont – CNET

Surveillance Firm Banjo Used a Secret Company and Fake Apps to Scrape Social Media

One former employee said the secret company called Pink Unicorn Labs was doing the same thing as Cambridge Analytica, “but more nefariously, arguably.”

Banjo, an artificial intelligence firm that works with police, used a shadow company to create an array of Android and iOS apps that looked innocuous but were specifically designed to secretly scrape social media. This was done to avoid detection by social networks. The news signifies an abuse of data by a government contractor, with Banjo going far beyond what companies that scrape social networks usually do.

Source: Surveillance Firm Banjo Used a Secret Company and Fake Apps to Scrape Social Media – VICE

IBM and Microsoft support the Vatican’s guidelines for ethical AI

IBM and Microsoft have signed the Vatican’s “Rome Call for AI Ethics,” a pledge to develop artificial intelligence in a way that protects all people and the planet.

Microsoft President Brad Smith and John Kelly, IBM’s executive vice-president, are among the first global tech leaders to sign the document.
The pledge calls for AI that safeguards the rights of all humans, especially the underprivileged, and for new regulations in areas like facial recognition.

Source: IBM and Microsoft support the Vatican’s guidelines for ethical AI | Engadget

How Europe’s AI strategy is getting it right

The European Commission’s new White Paper may be the most ambitious yet realistic government strategy for AI we have seen.

Privacy guarantees and construction of a technological and data-driven economy are not in a zero-sum equation. The Commission’s new strategy recognizes that “building an ecosystem of trust is a policy objective in itself, and should give citizens the confidence to take up AI applications and give companies and public organizations the legal certainty to innovate using AI.”

Source: How Europe’s AI strategy is getting it right – EURACTIV.com

Clearview AI: Entire Client List Was Stolen

Clearview AI, which contracts with law enforcement after reportedly scraping 3 billion images from the web, now says someone got “unauthorized access” to its list of customers.

In the notification, Clearview AI disclosed to its customers that an intruder "gained unauthorized access" to its list of customers, to the number of user accounts those customers had set up, and to the number of searches those customers had conducted.

The notification did not describe the breach as a hack, saying instead that the company's servers were not breached and that there was "no compromise of Clearview's systems or network."

Source: Clearview AI, Facial Recognition Company That Works With Law Enforcement, Says Entire Client List Was Stolen

How Explainable AI Is Helping Algorithms Avoid Bias

Artificial intelligence is biased. Human beings are biased. In fact, everyone and everything that makes choices is biased, insofar as we lend greater weight to certain factors over others when choosing.

Developers design neural networks that can learn from data, but once those creations are released into 'the wild', the networks operate without programmers being able to see what exactly makes them tick. As a result, companies often don't find out their AI is biased until it's too late.

Still, as much as AI has (deservedly) gained a reputation for being prejudiced against certain demographics (e.g. women and people of colour), companies involved in artificial intelligence are increasingly getting better at combating algorithmic bias.

Source: How Explainable AI Is Helping Algorithms Avoid Bias

Predictive policing systems are flawed because they replicate and amplify racism

The AI Now Institute’s Executive Director, Andrea Nill Sánchez, today testified before the European Parliament LIBE Committee Public Hearing on “Artificial Intelligence in Criminal Law and Its Use by the Police and Judicial Authorities in Criminal Matters.”

Her message was simple: "Predictive policing systems will never be safe… until the criminal justice system they're built on is reformed." Sánchez argued that predictive policing systems are built with "dirty data" compiled over decades of police misconduct, and that there is currently no way to resolve this problem with technology.

Her testimony was based on a detailed study conducted by the AI Now Institute last year that detailed how predictive policing systems are inherently biased.

Source: AI Now: Predictive policing systems are flawed because they replicate and amplify racism

AI systems claiming to ‘read’ emotions pose discrimination risks

Artificial intelligence (AI) systems that companies claim can "read" facial expressions are based on outdated science and risk being unreliable and discriminatory, one of the world's leading experts on the psychology of emotion has warned.

The AI system, developed by the company HireVue, scans candidates' facial expressions, body language and word choice and cross-references them with traits that are considered to be correlated with job success.

However, a growing body of evidence has shown that beyond these basic stereotypes there is a huge range in how people express emotion, both across and within cultures.

Source: AI systems claiming to ‘read’ emotions pose discrimination risks | Technology | The Guardian
