Misogyny on the internet too often slips through the filters of content moderators. A new approach aims to inject more nuance into the process.
A new “red team” will try to anticipate and thwart attacks on machine learning programs.
Copilot is pitched as a helpful aid to developers. But some programmers object to its blind copying of blocks of code that were used to train the algorithm.
Law enforcement officials say the tool can help them combat misinformation. Civil liberties advocates say it can be used for mass surveillance.
Language models like GPT-3 can write poetry, but they often amplify negative stereotypes. Researchers are trying different approaches to address the problem.
The tech giant wants its core product to infer meaning from human language, answer multipart questions—and look more like Google Assistant sounds.
A list of incidents that caused, or nearly caused, harm aims to prompt developers to think more carefully about the tech they create.
Microsoft reveals plans to bring GPT-3, best known for generating text, to programming. “The code writes itself,” CEO Satya Nadella says.
Georgetown researchers used text generator GPT-3 to write misleading tweets about climate change and foreign affairs. People found the posts persuasive.
CaliberAI wants to help overstretched newsrooms with a tool that’s like spell-check for libel. But its potential uses go far beyond traditional media.