Free tools and resources for Data Protection Officers!

Tag Archives for "Artificial Intelligence"

GDPR challenges of using AI

AI, machine learning, deep learning, neural networks – call it what you like, there’s a lot of excitement about the ability of software to analyse large volumes of data, spot patterns, learn (sometimes independently) and draw conclusions and produce insights that are entirely new.

Full article: GDPR challenges of using AI | Gowling WLG

The Future of AI Will Be About Less Data, Not More

Over the coming five years, applications and machines will become less artificial and more intelligent. They will rely less on bottom-up big data and more on top-down reasoning that more closely resembles the way humans approach problems and tasks.

Using huge amounts of citizens’ data raises privacy issues likely to lead to more government action like the European Union’s General Data Protection Regulation (GDPR), which imposes stringent requirements on the use of individuals’ personal data.

Source: The Future of AI Will Be About Less Data, Not More

Together we can thwart the big-tech data grab. Here’s how.

Our lives, online and off, depend upon decentralising power on the internet, says the Guardian columnist John Harris.

As the year unfolds, pay attention to the people who are talking about a new, decentralised internet – AKA Web 3.0 – and the possibility of data being returned to the control of the people who generate it.

In Boston, the worldwide web’s founder, Tim Berners-Lee, is working on a new way of using the internet, called Solid. Then there are the possibilities bound up with the blockchain, the system of verification that sits under so-called cryptocurrencies. A startup called Fetch is developing what it calls decentralised artificial intelligence.

Full article: Together we can thwart the big-tech data grab. Here’s how | John Harris | Opinion | The Guardian

Why We Need to Audit Algorithms

Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools.

A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.

Full article: Why We Need to Audit Algorithms

Google is afraid of assuming your gender with Gmail’s Smart Compose feature

Instead of building a better AI system or giving users a choice of suggestions, Google is removing all gender-specific terms from Gmail’s Smart Compose suggestion tool over fears of backlash from easily offended users.

Full article: Google is afraid of assuming your gender with Gmail’s Smart Compose feature – TechSpot

CIPL Publishes Report on Artificial Intelligence and Data Protection in Tension

The Centre for Information Policy Leadership (“CIPL”) recently published the first report in its project on Artificial Intelligence (“AI”) and Data Protection: Delivering Sustainable AI Accountability in Practice.

The report, entitled “Artificial Intelligence and Data Protection in Tension,” aims to describe in clear, understandable terms:

  • what AI is and how it is being used all around us today;
  • the role that personal data plays in the development, deployment and oversight of AI; and
  • the opportunities and challenges presented by AI to data protection laws and norms.

Source: CIPL Publishes Report on Artificial Intelligence and Data Protection in Tension

Algorithms can reduce discrimination, but only with proper data

If self-learning algorithms discriminate, it is not because there is an error in the algorithm, but because the data used to train the algorithm are “biased.”

It is only when you know which data subjects belong to vulnerable groups that bias in the data can be made transparent and algorithms trained properly. The taboo against collecting such data should, therefore, be broken, as this is the only way to eliminate future discrimination.

Full article: Algorithms can reduce discrimination, but only with proper data
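The point above — that bias can only be made transparent once you know which records belong to which groups — can be sketched in a few lines. This is a minimal, hypothetical illustration (the dataset and the `group`/`hired` field names are invented, not from the article): it computes per-group selection rates and a disparate-impact ratio, a common fairness check that is simply impossible without the group labels.

```python
from collections import defaultdict

# Hypothetical hiring outcomes; without the "group" field,
# the disparity below would be invisible in the data.
applicants = [
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "B", "hired": True},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]

totals = defaultdict(int)
hires = defaultdict(int)
for a in applicants:
    totals[a["group"]] += 1
    hires[a["group"]] += a["hired"]

# Selection rate per group, e.g. A: 2/3, B: 1/3.
rates = {g: hires[g] / totals[g] for g in totals}

# Disparate-impact ratio: lowest selection rate over highest.
# Values well below 1.0 (e.g. under 0.8, the "four-fifths rule"
# used in US employment law) flag potential discrimination.
ratio = min(rates.values()) / max(rates.values())
```

The same logic applies to auditing a trained model: score held-out applicants, group the predictions by the sensitive attribute, and compare rates — which, as the article argues, requires collecting that attribute in the first place.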

AI, IoT, and edge computing drive cybersecurity concerns for 2019

As companies adopt emerging technologies, the cyber risk landscape is set to grow larger in the new year, according to a Forcepoint report. 2018 saw many large-scale data breaches, but 2019 will shift to more widespread, integrated cybersecurity concerns. Industrial IoT disruption, phishing attacks, and edge computing present some of the largest areas of cybersecurity risks.

Full article: AI, IoT, and edge computing drive cybersecurity concerns for 2019 – TechRepublic
