Tag Archives for “Artificial Intelligence”

Global AI fight heats up over health data

Spat over a Microsoft health data project highlights growing European distrust of U.S. tech.

The French government made that clear last week, when it said it wanted to move control of an effort to centralize the country’s health data away from the American tech giant Microsoft and into the hands of a French or European platform.

The attention to health data underscores the increasing politicization of questions about who owns private information about European consumers, after the European Court of Justice struck down a framework for sharing data between the European Union and the United States known as the Privacy Shield.

It also comes as governments around the world race to develop new artificial intelligence technology — and grapple with how to regulate it. The EU is set to present rules on AI early next year, and must confront a risk inherent to rule-making: making regulation that quickly becomes obsolete.

Full article: Global AI fight heats up over health data – POLITICO

The Netherlands Is Becoming a Predictive Policing Hot Spot

A report released late last month by Amnesty International revealed that Dutch law enforcement have been engaged in a number of predictive-policing pilots and referred to the Netherlands as “one of the countries at the forefront of predictive policing in practice.”

The project is not only intrusive, the report claims, but discriminatory by design, since its aim is to fight “mobile banditry” (crimes like theft, pickpocketing, and drug trafficking), a term which explicitly excludes people of Dutch nationality and assumes that the offender is either of Eastern European origin or Romani, a minority ethnic group.

‘Predictive policing projects like these are explicitly biased and prejudiced and rely on data that is explicitly biased and prejudiced, but nobody does anything about it,’ says Amnesty International.

Source: The Netherlands Is Becoming a Predictive Policing Hot Spot

EU nations call for ‘soft law solutions’ in future Artificial Intelligence regulation

Fourteen EU countries have set out their position on the future regulation of Artificial Intelligence, urging the European Commission to adopt a “soft law approach”.

In a position paper spearheaded by Denmark and signed by digital ministers from other EU tech heavyweights such as France, Finland and Estonia, the signatories call on the Commission to incentivise the development of next-gen AI technologies, rather than put up barriers.

“We should turn to soft law solutions such as self-regulation, voluntary labelling and other voluntary practices as well as robust standardisation process as a supplement to existing legislation that ensures that essential safety and security standards are met,” the paper noted.

Source: EU nations call for ‘soft law solutions’ in future Artificial Intelligence regulation – EURACTIV.com

Amazon aims to improve biometric features and privacy with new edge AI chip in Echo devices

A new processor in Amazon’s latest generation of Echo devices is giving the Alexa assistant intriguing capabilities that the company says offer consumers a more natural experience of speech-based interaction.

Considerable scientific research has also gone into sound localization and computer vision to offer new features without creating new biometric data storage and privacy problems, and on-device edge processing is the key.

Source: Amazon aims to improve biometric features and privacy with new edge AI chip in Echo devices | Biometric Update

ICO Issues Guidance on Artificial Intelligence

The UK’s Information Commissioner’s Office (ICO) has finalised the key component of its “AI Auditing Framework” following consultation.

The Guidance covers what the ICO considers “best practice” in the development and deployment of AI technologies. It is not a statutory code and there is no penalty for failing to follow the Guidance.

Source: ICO Guidance on Artificial Intelligence

Clearview AI Mounts a First Amendment Defense

Clearview AI has hired Floyd Abrams, a top lawyer, to help fight claims that selling its data to law enforcement agencies violates privacy laws.

Clearview AI has scraped billions of photos from the internet, including from platforms like LinkedIn and Instagram, and sells access to the resulting database to law enforcement agencies.

The company also faces two lawsuits filed in state courts: one from Vermont’s attorney general and one from the American Civil Liberties Union in Illinois, where a statute forbids the corporate use of residents’ faceprints without explicit consent.

Source: Facial Recognition Start-Up Mounts a First Amendment Defense – The New York Times

US Govt. Releases Report on Privacy, Discrimination Risks of Facial Recognition

The U.S. Government Accountability Office has released a key report about privacy and discrimination risks posed by the commercial use of facial recognition.

The GAO completed the report in response to research showing the disparate impact the technology has on minorities, including a National Institute of Standards and Technology study which found that facial recognition systems misidentify Black women at disproportionately high rates.

Source: GAO Releases Report on Privacy, Discrimination Risks of Facial Recognition

US Govt. issues Artificial Intelligence Ethics Framework for the Intelligence Community

The US government has issued an ethics guide for United States Intelligence Community personnel on how to procure, design, build, use, protect, consume, and manage AI and related data.

The guide is a “living document” intended to provide stakeholders with a reasoned approach to judgment and to assist with the documentation of considerations associated with the AI lifecycle. In doing so, the guide aims to support the mission by fostering a shared understanding of goals between AI practitioners and managers, while promoting the ethical use of AI.

Source: Artificial Intelligence Ethics Framework for the Intelligence Community

Amazon, Google, Microsoft sued over photos in facial recognition database

Amazon, Google parent Alphabet and Microsoft used people’s photos to train their facial recognition technologies without obtaining the subjects’ permission, in violation of an Illinois biometric privacy statute, a trio of federal lawsuits filed Tuesday allege.

The photos in question were part of IBM’s Diversity in Faces database, which is designed to advance the study of fairness and accuracy in facial recognition by looking at more than just skin tone, age and gender. The data includes 1 million images of human faces, annotated with tags such as face symmetry, nose length and forehead height.

Source: Amazon, Google, Microsoft sued over photos in facial recognition database – CNET

Wrongfully Accused by an Algorithm

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

Mr. Williams’s case combines flawed technology with poor police work, illustrating how facial recognition can go awry.

Full article: Wrongfully Accused by an Algorithm – The New York Times
