The tech giant wants its core product to infer meaning from human language, answer multipart questions, and feel as conversational as Google Assistant sounds.
A list of incidents that caused, or nearly caused, harm aims to prompt developers to think more carefully about the tech they create.
Video from the cameras is often used in facial-recognition searches. A report finds the cameras are most common in neighborhoods with large nonwhite populations.
Researchers applied AI techniques to make portions of Seattle look more like Beijing. Such imagery could mislead governments or spread misinformation online.
A study of 10,000 images found bias in what the system chooses to highlight. Twitter has stopped using it on mobile, and will consider ditching it on the web.
When films are dubbed in another language, an actor's facial movements may clash with their lines. Technology related to deepfakes can help smooth things over.
Drills involving swarms of drones raise questions about whether machines could outperform a human operator in complex scenarios.
Kate Crawford, who holds positions at USC and Microsoft, says in a new book that even experts working on the technology misunderstand AI.
The EU released draft laws that would regulate facial recognition and uses of algorithms. If passed, the rules would affect companies in the US and China as well.