
Tag Archives for "Artificial Intelligence"

UK ICO Issues Draft Guidance on Explaining Decisions Made by AI

The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI. The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence.

The guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.

The guidance is open for consultation until January 24, 2020.

Access the ICO AI guidance.

EU Commissioner Vestager to present new AI law at the start of 2020

Over the next three months, European Commissioner Margrethe Vestager will draft a new European law for AI. As of December, she will be responsible for the digitization of the European market. She plans to present her new AI law in March. After that, the European Parliament and the governments and parliaments of the Member States will have to approve her new AI law.

The new AI law is to lay out the rules regarding the collection and sharing of data by, among others, the large American tech companies such as Facebook, Amazon and Google, whose internet platforms are used on a massive scale by European citizens. At the moment there is only a directive for e-privacy and one set of regulations for data protection (GDPR). The new law must include rules that make the collectors and distributors of data liable for any misuse of this data.

Source: EU Commissioner Vestager to present new AI law at the start of 2020 – Innovation Origins

Facebook alters video to make people invisible to facial recognition

Facebook AI Research says it’s created the first machine learning system that can stop a facial recognition network from identifying people in videos.

In initial tests, the method was able to thwart state-of-the-art facial recognition systems. The AI for automatic video modification doesn’t need to be retrained for each video. It maps a slightly distorted version onto a person’s face in order to make it difficult for facial recognition technology to identify that person.
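Facebook’s actual system is a trained encoder-decoder network; purely to illustrate the underlying idea of applying a small, bounded per-pixel distortion to the face region of every frame, here is a minimal NumPy sketch. The function name, the fixed bounding box, and the use of random noise (rather than a learned perturbation) are all assumptions for illustration, not the published method.

```python
import numpy as np

def perturb_face_region(frame, box, epsilon=8):
    """Apply a small bounded perturbation to the face region of one frame.

    frame: H x W x 3 uint8 image; box: (top, left, bottom, right) face bounds.
    epsilon: maximum absolute change per pixel channel, keeping the
    distortion barely visible to humans while disrupting recognizers.
    """
    top, left, bottom, right = box
    out = frame.astype(np.int16)  # widen so the noise cannot wrap around
    noise = np.random.randint(-epsilon, epsilon + 1,
                              size=out[top:bottom, left:right].shape)
    out[top:bottom, left:right] += noise
    return np.clip(out, 0, 255).astype(np.uint8)

# No retraining per clip: the same step is applied frame by frame.
frames = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(3)]
masked = [perturb_face_region(f, (16, 16, 48, 48)) for f in frames]
```

In the real system the perturbation is produced by a network trained against a face-recognition model, so it is far more targeted than random noise; the sketch only shows the "small distortion, per frame, no per-video retraining" shape of the approach.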

Source: Facebook alters video to make people invisible to facial recognition | VentureBeat

AI face-scanning algorithm to decide whether you deserve the job

HireVue claims it uses artificial intelligence to decide who’s best for a job. Outside experts call it “profoundly disturbing.”

More than 100 employers now use the system, including Hilton and Unilever, and more than a million job seekers have been analyzed.

But some AI researchers argue the system is digital snake oil — an unfounded blend of superficial measurements and arbitrary number-crunching that is not rooted in scientific fact.

Source: HireVue’s AI face-scanning algorithm increasingly decides whether you deserve the job – The Washington Post

This Is What the Future of A.I. Regulation Could Look Like

The German Data Ethics Commission has produced a series of recommendations for regulating algorithms and artificial intelligence. Its ideas will likely influence new EU rules.

The commission insisted that algorithmic systems should be designed to respect people’s rights and freedoms, protect democracy, remain secure, and avoid bias and discrimination.

It said systems presenting a significant risk of harm, such as those that show different people different prices based on their profiles, should in some cases require licensing. And systems with an “untenable potential for harm”—killer robots, for example—should be banned outright.

Source: This Is What the Future of A.I. Regulation Could Look Like | Fortune

Blind Spots in AI Just Might Help Protect Your Privacy

Machine learning, for all its benevolent potential to detect cancers and create collision-proof self-driving cars, also threatens to upend our notions of what’s visible and hidden.

It can, for instance, enable highly accurate facial recognition, see through the pixelation in photos, and even—as Facebook’s Cambridge Analytica scandal showed—use public social media data to predict more sensitive traits like someone’s political orientation.

Full article: Blind Spots in AI Just Might Help Protect Your Privacy

U.S. Chamber of Commerce Releases Principles on Artificial Intelligence

The U.S. Chamber’s Technology Engagement Center and Center for Global Regulatory Cooperation recently released a set of ten principles essential for attaining the full potential of AI technologies.

The principles, drafted with input from more than 50 Chamber member companies, stress the importance of creating a sensible and innovation-forward approach to addressing the challenges and opportunities presented by AI.

Source: U.S. Chamber of Commerce Releases Principles on Artificial Intelligence

AI policing tools may “amplify” prejudices

Evidence suggests that the absence of consistent guidelines for the use of automation and algorithms may lead to discrimination in police work.

The Royal United Services Institute (RUSI) published a report, commissioned by the Centre for Data Ethics and Innovation (CDEI), for which 50 experts, including senior police officers in England and Wales, were interviewed.

The report found that the use of AI policing tools could introduce bias. It stated that algorithms trained on prior police data “may replicate (and in some cases amplify) the existing biases inherent in the dataset”, such as under- or over-policing of certain communities.

Source: #privacy: Report warns that AI policing tools may “amplify” prejudices

Researchers Created AI That Hides Your Emotions From Other AI

As smart speaker makers such as Amazon improve emotion-detecting AI, researchers are coming up with ways to protect our privacy.

Now, researchers at the Imperial College London have used AI to mask the emotional cues in users’ voices when they’re speaking to internet-connected voice assistants. The idea is to put a “layer” between the user and the cloud their data is uploaded to by automatically converting emotional speech into “normal” speech.
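The Imperial College work uses a learned voice-conversion model to strip emotional cues before audio reaches the cloud. As a toy stand-in for that "layer" idea, the sketch below flattens one simple emotional cue, the loudness contour, by rescaling each short frame of the waveform toward a constant energy level. The function name, frame length, and target level are illustrative assumptions, not the researchers’ method.

```python
import numpy as np

def flatten_prosody(samples, frame_len=256, target_rms=0.1):
    """Crudely suppress loudness variation, one cue emotion detectors use.

    samples: 1-D float waveform in [-1, 1]. Each frame is rescaled toward
    a constant RMS level, flattening the energy contour of the utterance
    while leaving the spoken content (spectral shape) largely intact.
    """
    out = samples.astype(float).copy()
    for start in range(0, len(out), frame_len):
        frame = out[start:start + frame_len]       # view into `out`
        rms = np.sqrt(np.mean(frame ** 2))
        if rms > 1e-8:                             # skip silent frames
            frame *= target_rms / rms              # rescale in place
    return np.clip(out, -1.0, 1.0)
```

A real privacy layer would also normalize pitch and speaking rate, which is why the researchers use voice conversion rather than simple gain control; the sketch only shows where such a layer sits: between the raw microphone signal and whatever is uploaded to the assistant’s cloud.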

Source: Researchers Created AI That Hides Your Emotions From Other AI – VICE

CoE launches public consultation on human rights impact of algorithmic systems

The Steering Committee on Media and Information Society (CDMSI) of the Council of Europe has published a draft recommendation on the human rights impacts of algorithmic systems and invites comments from the public.

The draft recommendation outlines that private sector actors should actively engage in participatory processes with consumer associations and data protection authorities for the design, implementation and evaluation of their complaint mechanisms, including collective redress mechanisms.

In addition, private sector actors must adequately train the staff involved in the review of algorithmic systems on, among other things, applicable personal data protection and privacy standards.

Source: Have your say on the draft recommendation on the human rights impacts of algorithmic systems! – Newsroom
