This list of incidents that caused, or nearly caused, harm aims to prompt developers to think more carefully about the technology they create.
Researchers applied AI techniques to make portions of Seattle look more like Beijing. Such imagery could mislead governments or spread misinformation online.
Georgetown researchers used text generator GPT-3 to write misleading tweets about climate change and foreign affairs. People found the posts persuasive.
CaliberAI wants to help overstretched newsrooms with a tool that’s like spell-check for libel. But its potential uses go far beyond traditional media.
A study of 10,000 images found bias in what Twitter's automatic photo-cropping algorithm chooses to highlight. Twitter has stopped using it on mobile and will consider ditching it on the web.
The EU released draft legislation that would regulate facial recognition and other uses of algorithms. If it passes, the policy will affect companies in the US and China as well as in Europe.