Tag Archives for "automatic decisions"

Uber drivers union asks EU court to overrule ‘robo-firing’ by algorithm

Former Uber drivers have filed a legal challenge against the company in Europe, arguing that its “robo-firing” practices contravene GDPR.

The union argues that these practices contravene Article 22 of the EU General Data Protection Regulation (GDPR), which protects individuals against decisions based solely on automated processing that have legal or similarly significant effects. The action has been filed in the District Court of Amsterdam, where Uber’s European HQ is located.

Source: Uber drivers union asks EU court to overrule ‘robo-firing’ by algorithm | VentureBeat

UK Government Agrees to Stop Using ‘Visa Streaming’ Algorithm

The UK Home Office has announced that it will halt the use of its “Visa Streaming” algorithm. The change is the result of the settlement of a lawsuit brought to challenge the UK Government’s use of the algorithmic decision system.

The system produced a “traffic light” assessment of visa applicants (Green, Yellow, or Red) that informed how they would be treated during the visa approval process.
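The source does not describe how the tool computed its ratings, so the inputs and thresholds below are entirely hypothetical; this Python sketch only illustrates the general shape of a traffic-light triage step that routes applications into Green, Yellow or Red handling.

```python
from enum import Enum


class Rating(Enum):
    GREEN = "green"    # routine handling
    YELLOW = "yellow"  # additional checks
    RED = "red"        # most intensive scrutiny


def stream_application(risk_score: float,
                       yellow_threshold: float = 0.4,
                       red_threshold: float = 0.7) -> Rating:
    """Map a precomputed risk score in [0, 1] to a traffic-light rating.

    The score and thresholds are placeholders, not the Home Office's
    actual criteria; in the real system the rating informed how the
    application was treated during the approval process.
    """
    if risk_score >= red_threshold:
        return Rating.RED
    if risk_score >= yellow_threshold:
        return Rating.YELLOW
    return Rating.GREEN


print(stream_application(0.2).value)   # green
print(stream_application(0.85).value)  # red
```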

Source: UK Government Agrees to Stop Using ‘Visa Streaming’ Algorithm

California Introduces Bill to Regulate Automated Decision Systems

On February 14, 2020, California State Assembly Member Ed Chau introduced the Automated Decision Systems Accountability Act of 2020, which would require any business in California that provides a person with a program or device that uses an “automated decision system” (“ADS”) to establish processes to “continually test for biases during the development and usage of the ADS” and to conduct an impact assessment on that program or device.

Under the bill, by March 1, 2022, businesses would be required to submit an annual report to the Department of Business Oversight summarizing the results of their ADS impact assessments. If a change is made to the ADS during the year, the results of a new impact assessment would have to be submitted within 60 days.

Source: California Introduces Bill to Regulate Automated Decision Systems

Cyprus DPA bans automated scoring of employee sick leave

The Commissioner for Personal Data Protection (the Cypriot SA) banned the processing and fined LGS Handling Ltd, Louis Travel Ltd and Louis Aviation Ltd (Louis Group of Companies) a total of EUR 82,000 over the lack of a legal basis for the “Bradford Factor” tool, which was used to score employees’ sick leave.

The reasoning behind the Bradford Factor, the automated system used to score employees’ sick leave, is that short, frequent, and unplanned absences are more disruptive to an organisation than longer absences.
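The source does not spell out the scoring rule, but the Bradford Factor is conventionally computed as B = S² × D, where S is the number of separate absence spells in a period and D is the total days absent in that period; squaring S is what makes frequent short absences score far higher than one long absence. A minimal Python sketch of that conventional formula (the exact configuration used by the Louis Group companies is not described in the source):

```python
def bradford_factor(absence_spells: int, total_days_absent: int) -> int:
    """Return the conventional Bradford Factor score B = S^2 * D.

    S (spells) is the number of separate absence episodes in a period,
    D (days) is the total number of days absent in the same period.
    Squaring S penalises short, frequent absences far more heavily
    than a single longer absence of the same total length.
    """
    return absence_spells ** 2 * total_days_absent


# Five one-day absences score 5^2 * 5 = 125,
# while a single five-day absence scores only 1^2 * 5 = 5.
print(bradford_factor(5, 5))  # 125
print(bradford_factor(1, 5))  # 5
```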

Source: The Cypriot Supervisory Authority banned the processing of an automated tool, used for scoring sick leaves of employees, known as the “Bradford Factor” and subsequently fined the controller | European Data Protection Board

One in three councils using algorithms to make welfare decisions

One in three councils are using computer algorithms to help make decisions about benefit claims and other welfare issues, despite evidence emerging that some of the systems are unreliable.

Companies including the US credit-rating businesses Experian and TransUnion, as well as the outsourcing specialist Capita and Palantir, a data-mining firm co-founded by the Trump-supporting billionaire Peter Thiel, are selling machine-learning packages to local authorities that are under pressure to save money.

Source: One in three councils using algorithms to make welfare decisions | Society | The Guardian

ICO Blog Post on AI and Solely Automated Decision-Making

The ICO has published a blog post on the role of “meaningful” human reviews in AI systems to prevent them from being categorised as “solely automated decision-making” under Article 22 of the GDPR.

That Article imposes strict conditions on making decisions with legal or similarly significant effects based on personal data where there is no human input, or where there is limited human input (e.g. a decision is merely “rubber-stamped”).

Source: ICO Blog Post on AI and Solely Automated Decision-Making

Can AI Be a Fair Judge in Court? Estonia Thinks So

Estonia plans to use an artificial intelligence program to decide some small-claims cases, part of a push to make government services smarter.

In the most ambitious project to date, the Estonian Ministry of Justice has asked Estonia’s chief data officer Otto Velsberg and his team to design a “robot judge” that could adjudicate small-claims disputes of less than €7,000 (about $8,000). Officials hope the system can clear a backlog of cases for judges and court clerks.

Full article: Can AI Be a Fair Judge in Court? Estonia Thinks So

Questions We Need To Be Asking Before Deciding an Algorithm is the Answer

Across the globe, algorithms are quietly but increasingly being relied upon to make important decisions that impact our lives.

This includes determining the number of hours of in-home medical care patients will receive, whether a child is so at risk that child protective services should investigate, if a teacher adds value to a classroom or should be fired, and whether or not someone should continue receiving welfare benefits.

Source: Math Can’t Solve Everything: Questions We Need To Be Asking Before Deciding an Algorithm is the Answer

The tyranny of algorithms is part of our lives

Credit scores already control our finances. With personal data being increasingly trawled, our politics and our friendships will be next.

For the past couple of years a big story about the future of China has been the focus of both fascination and horror. It is all about what the authorities in Beijing call “social credit”, and the kind of surveillance that is now within governments’ grasp. The official rhetoric is poetic.

According to the documents, what is being developed will “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step”.

Source: The tyranny of algorithms is part of our lives: soon they could rate everything we do | John Harris | Opinion | The Guardian

Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making

Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions—like taxation, justice, and child protection—are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work.

Source: [1802.01029] Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making
