Free tools and resources for Data Protection Officers!

Tag Archives for " Artificial Intelligence "

Blind Spots in AI Just Might Help Protect Your Privacy

Machine learning, for all its benevolent potential to detect cancers and create collision-proof self-driving cars, also threatens to upend our notions of what’s visible and hidden.

It can, for instance, enable highly accurate facial recognition, see through the pixelation in photos, and even—as Facebook’s Cambridge Analytica scandal showed—use public social media data to predict more sensitive traits like someone’s political orientation.

Full article: Blind Spots in AI Just Might Help Protect Your Privacy

U.S. Chamber of Commerce Releases Principles on Artificial Intelligence

The U.S. Chamber’s Technology Engagement Center and Center for Global Regulatory Cooperation recently released a set of ten principles essential for attaining the full potential of AI technologies.

The principles, drafted with input from more than 50 Chamber member companies, stress the importance of creating a sensible and innovation-forward approach to addressing the challenges and opportunities presented by AI.

Source: U.S. Chamber of Commerce Releases Principles on Artificial Intelligence

AI policing tools may “amplify” prejudices

Evidence suggests that the absence of consistent guidelines for the use of automation and algorithms may lead to discrimination in police work.

The Royal United Services Institute (RUSI) published a report, commissioned by the Centre for Data Ethics and Innovation (CDEI), for which 50 experts, including senior police officers in England and Wales, were interviewed.

The report found that AI policing tools could introduce bias. It stated that algorithms trained on prior police data “may replicate (and in some cases amplify) the existing biases inherent in the dataset”, such as under- or over-policing of certain communities.

Source: #privacy: Report warns that AI policing tools may “amplify” prejudices

Researchers Created AI That Hides Your Emotions From Other AI

As smart speaker makers such as Amazon improve emotion-detecting AI, researchers are coming up with ways to protect our privacy.

Now, researchers at Imperial College London have used AI to mask the emotional cues in users’ voices when they’re speaking to internet-connected voice assistants. The idea is to put a “layer” between the user and the cloud their data is uploaded to by automatically converting emotional speech into “normal” speech.

Source: Researchers Created AI That Hides Your Emotions From Other AI – VICE

CoE launches public consultation on human rights impact of algorithmic systems

The Steering Committee on Media and Information Society (CDMSI) of the Council of Europe has published a draft recommendation on the human rights impacts of algorithmic systems and invites comments from the public.

The draft recommendation outlines that private sector actors should actively engage in participatory processes with consumer associations and data protection authorities for the design, implementation and evaluation of their complaint mechanisms, including collective redress mechanisms.

In addition, private sector actors must adequately train the staff involved in the review of algorithmic systems on, among other things, applicable personal data protection and privacy standards.

Source: Have your say on the draft recommendation on the human rights impacts of algorithmic systems! – Newsroom

Facebook’s face recognition software should worry us.

Facebook holds “the largest facial dataset to date”—powered by DeepFace, Facebook’s deep-learning facial recognition system.

Policymakers and experts are now beginning to weigh how the government’s use of facial recognition should be regulated and constrained. A crackdown on how government agencies can use the technology needs to consider how companies do, too.

Full article: Facebook’s face recognition software should worry us.

European Commission Releases Factsheet on Artificial Intelligence

On July 4, 2019, the European Commission published a factsheet on artificial intelligence for Europe.

In the factsheet, the European Commission underlines the importance of AI and its role in improving people’s lives and bringing major benefits to society and the economy.

Full article: European Commission Releases Factsheet on Artificial Intelligence

EU High-Level Working Group on AI launches pilot phase of Ethics Guidelines and publishes Recommendations for Trustworthy AI

On June 26, 2019, the EU High-Level Expert Group on Artificial Intelligence (AI HLEG) announced two important developments: (1) the launch of the pilot phase of the assessment list in its Ethics Guidelines for Trustworthy AI; and (2) the publication of its Policy and Investment Recommendations for Trustworthy AI.

The Recommendations are the second deliverable of the AI HLEG; the first was the Group’s Ethics Guidelines of April 2019, which defined the contours of “Trustworthy AI”.

Source: Two new developments from the EU High-Level Working Group on AI: launch of pilot phase of Ethics Guidelines and publication of Policy and Investment Recommendations for Trustworthy AI

AI used to identify thieves in Walmart

Walmart, the American supermarket chain, has said that it uses AI recognition technology at its checkouts to help root out shoplifters.

The AI cameras are capable of spotting when items have been placed inside a shopping bag without having been scanned either by a cashier or through the self-service scan mechanism.

Source: AI used to identify thieves in Walmart, USA

Can Government Manage Risks Associated with Artificial Intelligence?

Artificial intelligence can help government agencies deliver better results, but there are underlying risks and ethical issues with its implementation that need to be resolved before AI becomes part of the fabric of government.

Agencies will need to address multiple risks and ethical imperatives in order to realize the opportunity that AI technology brings.

Full article: Can Government Manage Risks Associated with Artificial Intelligence? – Nextgov
