Free tools and resources for Data Protection Officers!


Questions We Need To Be Asking Before Deciding an Algorithm is the Answer

Across the globe, algorithms are quietly but increasingly being relied upon to make important decisions that impact our lives.

This includes determining the number of hours of in-home medical care patients will receive, whether a child is so at risk that child protective services should investigate, whether a teacher adds value to a classroom or should be fired, and whether or not someone should continue receiving welfare benefits.

Source: Math Can’t Solve Everything: Questions We Need To Be Asking Before Deciding an Algorithm is the Answer

The tyranny of algorithms is part of our lives

Credit scores already control our finances. With personal data being increasingly trawled, our politics and our friendships will be next.

For the past couple of years, a big story about the future of China has been the focus of both fascination and horror. It concerns what the authorities in Beijing call "social credit", and the kind of surveillance that is now within governments' grasp. The official rhetoric is poetic.

According to the documents, what is being developed will “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step”.

Source: The tyranny of algorithms is part of our lives: soon they could rate everything we do | John Harris | Opinion | The Guardian

Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making

Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions—like taxation, justice, and child protection—are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work.

Source: [1802.01029] Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making

Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making

Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities.

Source: Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making

How much …? The rise of dynamic and personalised pricing

Cheaper croissants in the morning is one thing; being charged according to your credit rating is another. As Black Friday approaches, should we trust the prices of online stores – and even bricks-and-mortar retailers?

Source: How much …? The rise of dynamic and personalised pricing | Money | The Guardian

Automated Decision-Making Under the GDPR – A Right for Individuals or A Prohibition for Controllers?

The complexity of the EU General Data Protection Regulation is often alleviated by the guidance of regulatory authorities, who contribute their practical interpretation of the black letter of the law and provide welcome certainty. However, the latest draft guidelines issued by the Article 29 Working Party on automated decision-making have thrown up a particular curveball which bears further investigation. It concerns whether Article 22(1) of the GDPR should be read as a right available to data subjects or as a straightforward prohibition on controllers.

Source: Automated Decision-Making Under the GDPR – A Right for Individuals or A Prohibition for Controllers? | HL Chronicle of Data Protection

Center for Democracy & Technology releases Digital Decisions Tool

The engineers and product managers who design these systems are the first line of defense against unfair, discriminatory, and harmful outcomes. To help mitigate harm at the design level, we have launched the first public version of our Digital Decisions Tool. We created the tool to help developers understand and mitigate unintended bias and ethical pitfalls as they design automated decision-making systems.

Source: Digital Decisions Tool | Center for Democracy & Technology
