
Tag Archives for “Artificial Intelligence”

‘Minority Report’ AI System Boosts ‘Pre-Crime’ Tech

The preventive risk system Intraspexion has increased the capabilities of its ‘pre-crime’ legal review tech by integrating the strengths of the dtSearch engine. The combination allows its document filters to parse a wide variety of information and document formats, and to search terabytes of company data for any signs that employees may be setting the business up for a future lawsuit.

Source: ‘Minority Report’ AI System, Intraspexion, Boosts ‘Pre-Crime’ Tech with dtSearch – Artificial Lawyer

How on-chip AI helps GDPR compliance

Given the repercussions of getting GDPR compliance wrong, businesses could be forgiven for not wanting to collect any data about individuals at all. But a flow of data between businesses and consumers is essential and, whilst it can be minimised to just what is necessary to provide a service, it cannot be avoided.

Read article: How on-chip AI helps GDPR compliance

Mind reading: is no data safe?

According to a report in the South China Morning Post, Chinese government-backed organisations are using mind-reading technology to help improve productivity. If all the AI is doing is learning to read signs in a way that comes naturally to us humans, it may be scary, but not because of the threat to our human right to privacy. If the data is collected and stored, however, it might be a different matter.

Read article: Mind reading: is no data safe?

AI for Cybersecurity Is a Hot New Thing — and a Dangerous Gamble

Machine learning and artificial intelligence can help guard against cyberattacks, but hackers can foil security algorithms by targeting the data they train on and the warning flags they look for.
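
To make the training-data risk concrete, here is a toy sketch of poisoning. Everything in it is invented for illustration: the data, the nearest-centroid “detector”, and the attack are our own simplification, not any vendor’s algorithm.

```python
# Toy illustration of training-data poisoning: a few mislabelled samples
# shift a nearest-centroid "detector" enough that a malicious input is
# classified as benign. All data here is invented for illustration.

def centroid(samples: list[float]) -> float:
    return sum(samples) / len(samples)

def classify(x: float, benign: list[float], malicious: list[float]) -> str:
    # Assign x to whichever class centroid it sits closer to.
    if abs(x - centroid(benign)) < abs(x - centroid(malicious)):
        return "benign"
    return "malicious"

benign = [0.0, 1.0, 2.0]        # feature values of known-good samples
malicious = [9.0, 10.0, 11.0]   # feature values of known-bad samples

print(classify(8.0, benign, malicious))   # -> malicious (caught)

# Attacker poisons the training set: malicious-looking samples mislabelled benign.
poisoned = benign + [9.0, 10.0, 11.0, 12.0]
print(classify(8.0, poisoned, malicious))  # -> benign (slips through)
```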

Read article: AI for Cybersecurity Is a Hot New Thing — and a Dangerous Gamble

FPF Releases New AI Resource Guides

The Future of Privacy Forum has launched Resource Guides that collect current and leading news, academic publications, and multimedia training, providing in-depth sources for technical, educational, and policy-focused perspectives.

Source: FPF Launches AI and Machine Learning Working Group and Releases New AI Resource Guides

Schools are using AI to track their students

Any US school that receives federal funding is required to have an internet-safety policy. As school-issued tablets and Chromebook laptops become more commonplace, schools must install technological guardrails to keep their students safe.

While some simply block inappropriate websites, others turn to Safety Management Platforms (SMPs) that use natural-language processing to scan through the millions of words typed on school computers. If a word or phrase might indicate bullying or self-harm behavior, it gets surfaced for a team of humans to review. But even in an age of student suicides and school shootings, when do security precautions start to infringe on students’ freedoms?
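
At their simplest, such platforms boil down to scanning text for risk phrases and escalating matches to people. A minimal sketch of that idea follows; the watchlist and function names are invented for illustration, and real SMPs use trained language models rather than keyword lists.

```python
# Minimal sketch of an SMP-style text scan. The watchlist and escalation
# logic are illustrative assumptions, not any vendor's implementation.
WATCHLIST = {"hurt myself", "kill myself", "bring a gun", "beat you up"}

def flag_for_review(text: str) -> list[str]:
    """Return any watchlist phrases present, for a human team to review."""
    lowered = text.lower()
    return [phrase for phrase in sorted(WATCHLIST) if phrase in lowered]

hits = flag_for_review("I swear I'll beat you up after class")
if hits:
    print("Escalating to human review; matched:", hits)
```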

Source: Schools are using AI to track their students — Quartz

Fighting cybercrime with A.I.

Cybersecurity start-up Darktrace uses artificial intelligence to fight cybercrime against corporations. Its AI takes inspiration from something distinctly organic: the way the human immune system fights illness. Its machine learning learns the normal patterns of behavior of every user and every device connected to a corporate network.
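
Stripped to its core, the immune-system analogy means learning what “self” looks like for each device and flagging deviations. Here is a minimal sketch of that idea using a per-device traffic baseline; the z-score test and threshold are our illustrative stand-ins, not Darktrace’s proprietary models.

```python
# Minimal sketch of baseline anomaly detection: learn a device's normal
# level of activity, then alert when new activity deviates too far.
# The z-score approach and threshold are illustrative assumptions.
import statistics

class DeviceBaseline:
    def __init__(self) -> None:
        self.history: list[float] = []

    def observe(self, bytes_sent: float) -> None:
        self.history.append(bytes_sent)

    def is_anomalous(self, bytes_sent: float, threshold: float = 3.0) -> bool:
        if len(self.history) < 30:   # not enough data for a baseline yet
            return False
        mean = statistics.mean(self.history)
        stdev = statistics.stdev(self.history) or 1.0  # guard zero variance
        return abs(bytes_sent - mean) / stdev > threshold

baseline = DeviceBaseline()
for _ in range(60):
    baseline.observe(1_000.0)              # normal daily traffic
print(baseline.is_anomalous(50_000.0))     # exfiltration-like spike -> True
```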

Source: Billion-dollar start-up Darktrace is fighting cybercrime with A.I.

When malware turns artificial intelligence into a weapon

AI can be used to automatically detect and combat malware, but this does not mean hackers cannot also use it to their advantage. Cybersecurity, in a world full of networked systems, data collection, Internet of Things (IoT) devices and mobility, has become a race between white hats and threat actors.

Read article: DeepLocker: When malware turns artificial intelligence into a weapon – TechRepublic

The ethical and legal ramifications of using ‘pseudo-AI’

Pseudo-AI, or human workers performing work eventually intended for an “artificial intelligence” or supplementing an AI still under development, is a common prototyping practice, necessitated by the inherent difficulty of creating an AI and the large datasets it requires. The revelation that human beings are regularly performing work customers are led to believe is automated can have major trust and public-image ramifications, even if the primary service-providing company is unaware. There are also numerous legal ramifications.

Read full article: The ethical and legal ramifications of using ‘pseudo-AI’

Health Insurers Tap Data Brokers To Help Predict Costs

Without scrutiny, insurers and data brokers are predicting your health costs based on publicly available data about things like race, education level, marital status, net worth, TV habits, and even whether you buy plus-size clothing. They also collect what you post on social media, whether you’re behind on your bills, and what you order online. They then feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.
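
At their crudest, those “complicated computer algorithms” reduce to scoring lifestyle proxies. A toy sketch follows; every feature, weight, and dollar figure is invented, and real actuarial models are proprietary and far more elaborate.

```python
# Toy sketch of lifestyle-proxy cost scoring, as described above.
# All features, weights, and figures are invented for illustration.
WEIGHTS = {
    "behind_on_bills": 420.0,    # hypothetical dollars added to prediction
    "buys_plus_size": 180.0,
    "hours_tv_per_day": 35.0,
}
BASELINE_COST = 3_200.0

def predicted_annual_cost(profile: dict[str, float]) -> float:
    cost = BASELINE_COST
    for feature, weight in WEIGHTS.items():
        cost += weight * profile.get(feature, 0.0)
    return cost

print(predicted_annual_cost({"behind_on_bills": 1, "hours_tv_per_day": 4}))
```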

Source: Health Insurers Tap Data Brokers To Help Predict Costs : Shots – Health News : NPR
