

Tag Archives for “Artificial Intelligence”

European Commission Releases Final Ethics Guidelines for Trustworthy AI

On April 8, 2019, the European Commission High-Level Expert Group (the “HLEG”) on Artificial Intelligence released the final version of its Ethics Guidelines for Trustworthy AI.

The Guidelines’ release follows a public consultation process in which the HLEG received over 500 comments on its initial draft version. The Guidelines outline a framework for achieving trustworthy AI and offer guidance on two of its fundamental components: (1) that AI should be ethical and (2) that it should be robust, both from a technical and societal perspective. The Guidelines intend to go beyond a list of principles and operationalize the requirements to realize trustworthy AI.

Source: European Commission Releases Final Ethics Guidelines for Trustworthy AI

UK businesses using artificial intelligence to monitor staff activity

Unions warn systems such as Isaak may increase pressure on workers and cause distrust.

Dozens of UK business owners are using artificial intelligence to scrutinise staff behaviour minute by minute by harvesting data on who emails whom and when, who accesses and edits files, and who meets whom and when.

The actions of 130,000 people in the UK and abroad are being monitored in real time by the Isaak system, which ranks staff members’ attributes.

Source: UK businesses using artificial intelligence to monitor staff activity

Why facial recognition’s racial bias problem is so hard to crack

Nearly 40 percent of the false matches by Amazon’s facial recognition tool, which is being used by police, involved people of color.

Tech companies have responded to the criticism by improving the data used to train their facial recognition systems, but they’re also calling for more government regulation to help safeguard the technology from being abused.

Source: Why facial recognition’s racial bias problem is so hard to crack – CNET

A.I. Experts Question Amazon’s Facial-Recognition Technology

At least 25 prominent researchers are calling on the company to stop selling the technology to law enforcement agencies, citing concerns that it has built-in biases.

Amazon sells a product called Rekognition through its cloud-computing division, Amazon Web Services. The company said last year that early customers included the Orlando Police Department in Florida and the Washington County Sheriff’s Office in Oregon.

Source: A.I. Experts Question Amazon’s Facial-Recognition Technology – The New York Times

How to achieve digital governance?

Digital governance is corporate oversight of technologies that use personal or sensitive information, make autonomous decisions or exercise human-like responsibilities. The concept addresses disruptive technologies including artificial intelligence (AI), connected devices (IoT, cars, ubiquitous sensors, etc.), and machine learning.

To establish digital governance programmes, companies must:

  1. structure themselves accordingly,
  2. maintain a full picture of what they are doing, and
  3. create an organisational culture that values fair digital practices.

Full article: Data Protection & Cybersecurity 2019 | Global Practice Guides | Chambers and Partners

How to address new privacy issues raised by artificial intelligence and machine learning

Artificial intelligence and machine learning present unique challenges for protecting the privacy of personal data.

For this reason, policymakers need to craft new national privacy legislation that accounts for the numerous limitations scholars have identified in the notice-and-consent model that has guided privacy thinking for decades. The privacy externalities exacerbated by machine learning techniques are just one more reason why new privacy rules are needed.

Full article: How to address new privacy issues raised by artificial intelligence and machine learning

How Facial Recognition Databases See Copyright Law But Not Your Privacy

Whether you realize it or not, your face may not be “yours” anymore.

Certain companies engaging in facial recognition research (like IBM) obtain photos from publicly available collections for research purposes to “train” their algorithms, without your permission or even knowledge.

From a copyright perspective, they are covered by “fair use” of copyrighted works for “purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research”. Alternatively, they use compilations of photos licensed under a Creative Commons licence.

Full article: In Your Face: How Facial Recognition Databases See Copyright Law But Not Your Privacy | Above the Law

UK to investigate bias of algorithmic decision-making

The potential for bias in the use of algorithms in crime and justice, financial services, recruitment and local government will be investigated by the Centre for Data Ethics and Innovation (CDEI).

  • Centre will investigate how to maximise the benefits in the use of algorithms in recruitment, local government and financial services
  • Comes as organisation publishes its first full-year work programme and strategy setting out its priorities for the year ahead

Source: Investigation launched into potential for bias in algorithmic decision-making in society – GOV.UK

Can AI Be a Fair Judge in Court? Estonia Thinks So

Estonia plans to use an artificial intelligence program to decide some small-claims cases, part of a push to make government services smarter.

In the most ambitious project to date, the Estonian Ministry of Justice has asked Estonia’s chief data officer Otto Velsberg and his team to design a “robot judge” that could adjudicate small claims disputes of less than €7,000 (about $8,000). Officials hope the system can clear a backlog of cases for judges and court clerks.

Full article: Can AI Be a Fair Judge in Court? Estonia Thinks So

Google is making it easier for AI developers to keep users’ data private

Google has announced a new module for its machine learning framework, TensorFlow, that lets developers improve the privacy of their AI models with just a few lines of extra code.

TensorFlow is one of the most popular tools for building machine learning applications, and it’s used by developers around the world to create programs like text, audio, and image recognition algorithms. With the introduction of TensorFlow Privacy, these developers will be able to safeguard users’ data with a statistical technique known as “differential privacy.”
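To give a feel for what differential privacy means in practice: the core idea is to add calibrated random noise to a computation so that the result barely changes whether or not any single person’s data is included. (TensorFlow Privacy itself applies this during model training via noisy, clipped gradients; the sketch below instead illustrates the underlying statistical technique on a simple counting query, using the classic Laplace mechanism. The function names `laplace_noise` and `dp_count` are illustrative, not part of any library API.)

```python
import math
import random

def laplace_noise(scale):
    """Draw one sample from the Laplace(0, scale) distribution
    via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(records, predicate, epsilon=1.0):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    person's record changes the count by at most 1), so Laplace
    noise with scale 1/epsilon is enough to mask any individual's
    contribution.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a noisy count of even-numbered records. Smaller epsilon
# means more noise and stronger privacy for each individual.
noisy = dp_count(range(100), lambda r: r % 2 == 0, epsilon=1.0)
```

The trade-off is exactly the one the article describes: the noise protects individual users while leaving aggregate statistics (and trained models) close to what they would be on the raw data.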

Source: Google is making it easier for AI developers to keep users’ data private – The Verge
