She was a star engineer who warned that messy AI can spread racism. Google brought her in. Then it forced her out. Can Big Tech take criticism from within?
The tech giant wants its core product to infer meaning from human language, answer multipart questions, and sound more like Google Assistant.
A list of incidents that caused, or nearly caused, harm aims to prompt developers to think more carefully about the tech they create.
The move is the latest fallout following the departures of the heads of the company’s ethical AI research team and a recruiter.