The review of artificial intelligence argues that a new AI council should be created, but that the council would not be responsible for regulating AI systems.
Computer algorithms now shape our world in profound and mostly invisible ways. They predict whether we’ll be valuable customers and whether we’re likely to repay a loan. They filter what we see on social media, sort through resumes, and evaluate job performance. They inform prison sentences and monitor our health. Most of these algorithms were created with good intentions: the goal is to replace subjective judgments with objective measurements. But it doesn’t always work out that way.
The device can switch on a night light if it hears a baby crying and even help a preteen with homework.
John Giannandrea, who leads AI at Google, is worried about intelligent systems learning human prejudices.
It has become evident that people are surprised machines can be biased. They assume machines are necessarily neutral and objective, which is true in one narrow sense: there is no machine perspective or machine ethics. But to the extent that an artefact is an element of our culture, it will always reflect bias.
Artificial intelligence keeps getting creepier. In one controversial study, researchers at Stanford University demonstrated that facial recognition technology can identify gay people with surprising precision, although many caveats apply. Imagine how that capability could be abused in the many countries where homosexuality is a criminal offense.
More social scientists are using AI with the intention of solving society’s ills, but they lack clear ethical guidelines to prevent them from accidentally harming people.
“Generative” neural networks teach themselves to guess realistic passwords.
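The reported work trains a generative adversarial network on leaked password corpora so it can emit plausible new guesses. As a minimal stand-in for that idea (not the actual GAN), the sketch below fits a character-bigram model to a small made-up password list and samples candidate guesses from it; the training list and all names here are hypothetical, and a real attack would use a neural generator trained on millions of leaked credentials.

```python
import random
from collections import defaultdict

# Toy stand-in for a leaked-password corpus (entirely made up).
LEAKED = ["password", "password1", "letmein", "dragon", "sunshine", "princess"]

START, END = "^", "$"  # sentinel symbols marking password boundaries

def train_bigrams(corpus):
    """Count character-to-character transitions across the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in corpus:
        chars = [START] + list(pw) + [END]
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts

def sample_guess(counts, rng, max_len=16):
    """Sample one candidate password by walking the bigram chain."""
    out, cur = [], START
    while len(out) < max_len:
        options = list(counts[cur])
        weights = [counts[cur][c] for c in options]
        cur = rng.choices(options, weights=weights, k=1)[0]
        if cur == END:
            break
        out.append(cur)
    return "".join(out)

counts = train_bigrams(LEAKED)
rng = random.Random(0)
guesses = [sample_guess(counts, rng) for _ in range(5)]
print(guesses)
```

Because the model only learns which characters tend to follow which, its guesses recombine fragments of the training passwords, which is exactly why such generators outperform fixed wordlists against human-chosen passwords.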
“Justice is blind.” It’s a wonderful ideal, representing a legal system that is even-handed, impartial, and objective. But there’s no denying that the justice system is deeply flawed. So could artificial intelligence provide the answer? Perhaps eventually, but not yet.