Tag Archives for " Artificial Intelligence "

Schools are using AI to track their students

Any US school that receives federal funding is required to have an internet-safety policy. As school-issued tablets and Chromebook laptops become more commonplace, schools must install technological guardrails to keep their students safe.

While some simply block inappropriate websites, others turn to Safety Management Platforms (SMPs) that use natural-language processing to scan through the millions of words typed on school computers. If a word or phrase might indicate bullying or self-harm behavior, it gets surfaced for a team of humans to review. But even in an age of student suicides and school shootings, when do security precautions start to infringe on students’ freedoms?
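The flagging mechanism described above can be sketched very roughly as keyword matching over typed text; the phrase lists below are hypothetical, and real SMPs use far richer NLP models than literal substring checks.

```python
# Hypothetical phrase lists for illustration only; production systems
# use trained language models, not hard-coded strings.
FLAGGED_PHRASES = {
    "bullying": ["everyone hates you", "you're worthless"],
    "self-harm": ["want to hurt myself", "end it all"],
}

def scan_text(text):
    """Return (category, phrase) matches to surface for human review."""
    lowered = text.lower()
    hits = []
    for category, phrases in FLAGGED_PHRASES.items():
        for phrase in phrases:
            if phrase in lowered:
                hits.append((category, phrase))
    return hits

# Anything returned here would go to the school's review team,
# not trigger an automatic response.
scan_text("I just want to end it all")
```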

Source: Schools are using AI to track their students — Quartz

Fighting cybercrime with A.I.

Cybersecurity start-up Darktrace uses artificial intelligence to fight cybercrime against corporations. Its AI takes inspiration from something distinctly organic: the way the human immune system fights illness. Its machine learning learns the normal patterns of behavior of every user and every device connected to a corporate network.
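The "immune system" idea amounts to anomaly detection against a learned baseline. As a minimal sketch (Darktrace's actual models are far more sophisticated), one could flag a device whose traffic deviates sharply from its own history; the numbers and threshold here are illustrative assumptions.

```python
import statistics

def is_anomalous(history, observation, threshold=3.0):
    """Flag an observation that deviates strongly from a device's learned baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return observation != mean
    # A z-score above the threshold marks the behavior as abnormal.
    return abs(observation - mean) / stdev > threshold

baseline = [100, 110, 95, 105, 98, 102]  # a device's normal daily MB transferred

is_anomalous(baseline, 104)   # ordinary traffic, not flagged
is_anomalous(baseline, 5000)  # exfiltration-sized spike, flagged
```

The point of the per-device baseline is that "normal" is defined by each user and machine individually, so an attack stands out even if it would look unremarkable network-wide.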

Source: Billion-dollar start-up Darktrace is fighting cybercrime with A.I.

When malware turns artificial intelligence into a weapon

AI can be used to automatically detect and combat malware, but that does not mean hackers cannot also turn it to their advantage. Cybersecurity, in a world full of networked systems, data collection, Internet of Things (IoT) devices and mobility, has become a race between white hats and threat actors.

Read article: DeepLocker: When malware turns artificial intelligence into a weapon – TechRepublic

The ethical and legal ramifications of using ‘pseudo-AI’

Pseudo-AI, or human workers performing work eventually intended for an "artificial intelligence" or supplementing an AI still under development, is a common prototyping practice, necessitated by the inherent difficulty of building an AI and the large datasets required to train one. The revelation that human beings are regularly performing work customers are led to believe is automated can have major trust and public-image ramifications, even if the primary service-providing company is unaware. There are also numerous legal ramifications.

Read full article: The ethical and legal ramifications of using ‘pseudo-AI’

Health Insurers Tap Data Brokers To Help Predict Costs

Without scrutiny, insurers and data brokers are predicting your health costs based on public data about things like race, marital status, your TV consumption and even whether you buy plus-size clothing. The companies are tracking your education level, TV habits, marital status and net worth. They’re collecting what you post on social media, whether you’re behind on your bills and what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

Source: Health Insurers Tap Data Brokers To Help Predict Costs : Shots – Health News : NPR

China’s Dystopian Dreams: A.I., Shame and Lots of Cameras

With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future. Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry.

China is reversing the commonly held vision of technology as a great democratizer, bringing people more freedom and connecting them to the world. In China, it has brought control.

Source: Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras – The New York Times

AI spots legal problems with tech T&Cs in GDPR research project

An experimental European research project applied machine learning technology to big tech’s privacy policies — to see whether AI can automatically identify violations of data protection law. Project results show that the AI was able to automatically flag a range of problems with the privacy policies, such as use of unclear language, insufficient information and processing of personal data not in compliance with GDPR requirements.
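One of the flagged problems, "unclear language", can be loosely illustrated by scanning clauses for hedging terms. This is only a toy sketch with an assumed word list; the research project trained classifiers on annotated policy clauses rather than matching keywords.

```python
# Hypothetical vague-language markers for illustration; the actual
# project used supervised classifiers, not a fixed term list.
VAGUE_TERMS = ["may", "might", "as needed", "from time to time", "appropriate"]

def flag_unclear(clause):
    """Return the vague terms found in a privacy-policy clause."""
    lowered = clause.lower()
    return [term for term in VAGUE_TERMS if term in lowered]

flag_unclear("We may share your data with partners as appropriate.")
```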

Source: AI spots legal problems with tech T&Cs in GDPR research project | TechCrunch

Alexa and other smart speakers may endanger privacy rights

Legal experts say internet-connected smart speakers are the latest example of how technology and devices endear themselves to consumers before they realize the downsides.

The devices are supposed to begin recording the conversation only in response to “wake words” — like “Alexa” (for the Echo), “OK Google” (for the Google Home) and “Hey Siri” (for Apple’s HomePod). But they may be able to hear background conversations while activated.
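The wake-word gate described above is, in essence, a check that runs before any audio leaves the device. A minimal sketch, assuming the speech has already been transcribed (real devices do this with on-device acoustic models, not text):

```python
# The wake words named in the article; matching is illustrative only.
WAKE_WORDS = ("alexa", "ok google", "hey siri")

def should_record(transcript):
    """A device should stream audio to the cloud only after its wake word."""
    return transcript.lower().lstrip().startswith(WAKE_WORDS)

should_record("Alexa, play some music")   # recording begins
should_record("what's the weather like")  # device stays idle
```

The privacy concern in the article is precisely that, once activated, everything within earshot may be captured, not just the command that followed the wake word.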

Source: Alexa and other smart speakers may endanger privacy rights – SFChronicle.com

This AI Knows Who You Are by the Way You Walk

Our individual walking styles, much like snowflakes, are unique. With this in mind, computer scientists have developed a powerful new footstep-recognition system using AI, and it could theoretically replace retinal scanners and fingerprinting at security checkpoints, including airports.

Source: This AI Knows Who You Are by the Way You Walk

Can Digital Assistants Work without Prying into Our Lives?

Personalized AI requires personal data. Apple, Google and others say they can now grab more of it while keeping privacy and security intact. However, some security researchers still have reservations.

Source: Private Smarts: Can Digital Assistants Work without Prying into Our Lives? – Scientific American
