Free tools and resources for Data Protection Officers!

Category Archives for "Technology"

ICO Blog Post on AI and Solely Automated Decision-Making

The ICO has published a blog post on the role of “meaningful” human reviews in AI systems to prevent them from being categorised as “solely automated decision-making” under Article 22 of the GDPR.

That Article imposes strict conditions on making decisions with legal or similarly significant effects based on personal data where there is no human input, or where there is limited human input (e.g. a decision is merely “rubber-stamped”).

Source: ICO Blog Post on AI and Solely Automated Decision-Making

Pilot promised for new EU ethical guidelines for AI

Businesses in Europe exploring the use of artificial intelligence (AI) will be given a chance this summer to pilot the use of new ethical guidelines for AI, the European Commission has said.

Companies, public administrations and organisations can participate by signing up to the European AI Alliance.

Source: Pilot promised for new EU ethical guidelines for AI

EU pushes to link tracking databases

Lawmakers are set to approve plans for an enormous new database that will collect biometric data on almost all non-EU citizens in Europe’s visa-free Schengen area.

The database — merging previously separate systems tracking migration, travel and crime — will grant officials access to a person’s verified identity with a single fingerprint scan.

Source: EU pushes to link tracking databases – POLITICO

How To Avoid Bias In Data Collection

Data collection is the most crucial part of building machine learning models, as the model's behaviour depends entirely on the data it is trained on.

Knowing what you really want to do with your data, and the purpose it serves in your specific project, is crucial. You should develop a clear understanding of the data requirements before taking any further steps to collect data.
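As a simple illustration (not from the article), the sketch below checks whether the groups in a freshly collected training set are adequately represented before modelling begins. The file name, column name and 5% threshold are hypothetical assumptions.

```python
# Illustrative sketch: inspect group representation in collected training data
# before treating data collection as complete. File/column names are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")

# Share of records per group; a heavily skewed distribution is an early
# warning that the model may underperform for under-represented groups.
group_shares = df["age_band"].value_counts(normalize=True)
print(group_shares)

# Flag groups falling below a chosen representation threshold (assumption: 5%).
under_represented = group_shares[group_shares < 0.05]
if not under_represented.empty:
    print("Consider collecting more data for:", list(under_represented.index))
```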

Full article: How To Avoid Bias In Data Collection

Amazon staff listen to customers’ Alexa recordings

Staff review audio in effort to help AI-powered voice assistant respond to commands.

When Amazon customers speak to Alexa, the company’s AI-powered voice assistant, they may be heard by more people than they expect, according to a report. Amazon employees around the world regularly listen to recordings from the company’s smart speakers as part of the development process for new services.

Source: Amazon staff listen to customers’ Alexa recordings, report says

A new US bill would force companies to check their algorithms for bias

US lawmakers have introduced a bill that would require large companies to audit machine learning-powered systems — like facial recognition or ad targeting algorithms — for bias.

If passed, it would ask the Federal Trade Commission to create rules for evaluating “highly sensitive” automated systems. Companies would have to assess whether the algorithms powering these tools are biased or discriminatory, as well as whether they pose a privacy or security risk to consumers.

Source: A new bill would force companies to check their algorithms for bias – The Verge

European Commission Releases Final Ethics Guidelines for Trustworthy AI

On April 8, 2019, the European Commission High-Level Expert Group (the “HLEG”) on Artificial Intelligence released the final version of its Ethics Guidelines for Trustworthy AI.

The Guidelines’ release follows a public consultation process in which the HLEG received over 500 comments on its initial draft version. The Guidelines outline a framework for achieving trustworthy AI and offer guidance on two of its fundamental components: (1) that AI should be ethical and (2) that it should be robust, both from a technical and societal perspective. The Guidelines intend to go beyond a list of principles and operationalize the requirements to realize trustworthy AI.

Source: European Commission Releases Final Ethics Guidelines for Trustworthy AI

WTF is differential privacy?

Differential privacy allows companies to share aggregate data about user habits while protecting individual privacy.

It’s a technique for aggregating data, pioneered by Microsoft and now used by Apple, Google and other big tech companies. In a nutshell, a differential privacy algorithm injects calibrated random noise into a data set or its query results, so that aggregate statistics remain useful while no individual record can be singled out.
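To make that concrete, here is a minimal sketch (not from the article) of the Laplace mechanism, one common way of injecting such noise into a simple count. The dataset, the query and the epsilon value are illustrative assumptions.

```python
# Minimal sketch of differential privacy via the Laplace mechanism:
# protect an aggregate count by adding calibrated random noise.
import numpy as np

def dp_count(values, epsilon=1.0):
    """Return a differentially private count of truthy values.

    One record changes the true count by at most 1 (sensitivity = 1),
    so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    """
    true_count = int(np.sum(values))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: share how many users enabled a feature
# without exposing whether any particular user did.
enabled = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
print(dp_count(enabled, epsilon=0.5))
```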

Full article: WTF is differential privacy? – Digiday

Your social media activity and your credit score

Banks and credit agencies have started coming up with creative ways of assessing the risk of “unbanked” or “credit invisible” people.

They’re calling it “alternative data,” which really just means data that isn’t normally used in a credit report. That could be things like proof of rental payments, or mobile phone bill payments, or cable TV payments. Anything people can use to prove that they’ve paid bills on time certainly helps.

But it doesn’t stop there. In a report on alternative data, Experian proposed also using things like a person’s educational history, occupation, and even social media activity. “Yelp reviews, Foursquare check-ins and online rankings and ratings can all shed light on a business’s health, growth and stability,” the report explains.

Source: Forms From the Future: your social media activity and your credit score.

UK businesses using artificial intelligence to monitor staff activity

Unions warn systems such as Isaak may increase pressure on workers and cause distrust.

Dozens of UK business owners are using artificial intelligence to scrutinise staff behaviour minute-to-minute by harvesting data on who emails whom and when, who accesses and edits files, and who meets whom and when.

The actions of 130,000 people in the UK and abroad are being monitored in real-time by the Isaak system, which ranks staff members’ attributes.

Source: UK businesses using artificial intelligence to monitor staff activity
