
Free tools and resources for Data Protection Officers!

Tag Archives for " Artificial Intelligence "

FRA publishes focus paper: “Quality vital for data-driven artificial intelligence”

In automated decision-making, the algorithms used in machine learning and artificial intelligence analyse the data and make the decisions.

A new European Union Agency for Fundamental Rights (FRA) focus paper therefore questions the quality of the data behind automated decision-making and underlines the need to pay more attention to improving data quality in artificial intelligence.

Source: Quality vital for data-driven artificial intelligence | European Union Agency for Fundamental Rights

ICO’s Interim Report on Explaining AI

On June 3, 2019, the UK Information Commissioner’s Office (ICO) released an interim report on its collaboration with The Alan Turing Institute, called “Project ExplAIn.”

The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (AI) decision-making systems; in particular, to explain the impact AI decisions may have on individuals.

Source: ICO’s Interim Report on Explaining AI

Coming to store shelves: cameras that guess your age and sex

Eyeing that can of soda in the supermarket cooler? Or maybe you’re craving a pint of ice cream? A camera could be watching you.

But it’s not there to see if you’re stealing. These cameras want to get to know you and what you’re buying.

It’s a new technology being trotted out to retailers, where cameras try to guess your age, gender or mood as you walk by. The intent is to use the information to show you targeted real-time ads on in-store video screens.

Full article: Coming to store shelves: cameras that guess your age and sex

International Privacy Experts Adopt Recommendations for AI, Location Tracking

The International Working Group on Data Protection has adopted new recommendations for artificial intelligence and location tracking.

The Berlin-based Working Group includes data protection authorities who assess emerging privacy challenges. The IWG report “Privacy and Artificial Intelligence” sets out fairness and respect for human rights, oversight, transparency and intelligibility as key elements of AI design and use.

Source: International Privacy Experts Adopt Recommendations for AI, Location Tracking

ICO Blog Post on AI and Solely Automated Decision-Making

The ICO has published a blog post on the role of “meaningful” human reviews in AI systems to prevent them from being categorised as “solely automated decision-making” under Article 22 of the GDPR.

That Article imposes strict conditions on making decisions with legal or similarly significant effects based on personal data where there is no human input, or where there is limited human input (e.g. a decision is merely “rubber-stamped”).
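As a rough illustration of the “rubber-stamping” problem, one crude signal auditors can look at is how often the human reviewer actually diverges from the algorithm’s recommendation: a reviewer who never overrides the system may not be providing meaningful input. The sketch below is purely illustrative and not from the ICO blog post; the decision labels and audit log are hypothetical.

```python
# Illustrative sketch (not ICO methodology): measure how often a human
# reviewer diverges from the algorithm's recommendation. A 0% override
# rate can be a warning sign that review is merely "rubber-stamping".

def override_rate(algorithm_decisions, human_decisions):
    """Return the fraction of cases where the human diverged from the algorithm."""
    if len(algorithm_decisions) != len(human_decisions):
        raise ValueError("decision lists must be the same length")
    overrides = sum(a != h for a, h in zip(algorithm_decisions, human_decisions))
    return overrides / len(algorithm_decisions)

# Hypothetical audit log: the reviewer agreed with every algorithmic decision.
algo  = ["approve", "reject", "approve", "reject", "approve"]
human = ["approve", "reject", "approve", "reject", "approve"]

rate = override_rate(algo, human)
if rate == 0.0:
    print("Warning: 0% override rate - human review may not be 'meaningful'")
```

A low override rate is only one indicator, of course; the ICO’s point is that reviewers must have the authority, competence and information to change the decision, not merely the opportunity.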

Source: ICO Blog Post on AI and Solely Automated Decision-Making

Pilot promised for new EU ethical guidelines for AI

Businesses in Europe exploring the use of artificial intelligence (AI) will be given a chance this summer to pilot the use of new ethical guidelines for AI, the European Commission has said.

Companies, public administrations and organisations can participate by signing up to the European AI Alliance.

Source: Pilot promised for new EU ethical guidelines for AI

A new US bill would force companies to check their algorithms for bias

US lawmakers have introduced a bill that would require large companies to audit machine learning-powered systems — like facial recognition or ad targeting algorithms — for bias.

If passed, it would ask the Federal Trade Commission to create rules for evaluating “highly sensitive” automated systems. Companies would have to assess whether the algorithms powering these tools are biased or discriminatory, as well as whether they pose a privacy or security risk to consumers.
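The bill does not prescribe a test, but a common first-pass bias check of the kind such audits might include is comparing favourable-outcome rates across demographic groups (“demographic parity”) and flagging large gaps for further investigation. The sketch below is a minimal illustration under that assumption; the group labels and outcome data are hypothetical.

```python
# Illustrative sketch (not the bill's actual methodology): compute the
# rate of favourable outcomes per demographic group. Large gaps between
# groups are a signal to investigate the system for bias.
from collections import defaultdict

def positive_rate_by_group(records):
    """records: iterable of (group, outcome) pairs, outcome True = favourable."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical ad-targeting outcomes for two groups.
records = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", False)]

rates = positive_rate_by_group(records)
# Group A receives the favourable outcome at 0.75, group B at 0.25 -
# a gap an auditor would want to explain before deployment.
```

Demographic parity is only one of several competing fairness criteria; a real impact assessment would also consider error rates per group and the privacy and security risks the bill names.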

Source: A new bill would force companies to check their algorithms for bias – The Verge

European Commission Releases Final Ethics Guidelines for Trustworthy AI

On April 8, 2019, the European Commission High-Level Expert Group (the “HLEG”) on Artificial Intelligence released the final version of its Ethics Guidelines for Trustworthy AI.

The Guidelines’ release follows a public consultation process in which the HLEG received over 500 comments on its initial draft version. The Guidelines outline a framework for achieving trustworthy AI and offer guidance on two of its fundamental components: (1) that AI should be ethical and (2) that it should be robust, both from a technical and societal perspective. The Guidelines intend to go beyond a list of principles and operationalize the requirements to realize trustworthy AI.

Source: European Commission Releases Final Ethics Guidelines for Trustworthy AI

UK businesses using artificial intelligence to monitor staff activity

Unions warn systems such as Isaak may increase pressure on workers and cause distrust.

Dozens of UK business owners are using artificial intelligence to scrutinise staff behaviour minute by minute by harvesting data on who emails whom and when, who accesses and edits files, and who meets whom and when.

The actions of 130,000 people in the UK and abroad are being monitored in real-time by the Isaak system, which ranks staff members’ attributes.

Source: UK businesses using artificial intelligence to monitor staff activity

Why facial recognition’s racial bias problem is so hard to crack

Nearly 40 percent of the false matches by Amazon’s facial recognition tool, which is being used by police, involved people of color.

Tech companies have responded to the criticism by improving the data used to train their facial recognition systems, but they’re also calling for more government regulation to help safeguard the technology from being abused.

Source: Why facial recognition’s racial bias problem is so hard to crack – CNET
