Google fires researcher Meg Mitchell, escalating AI saga
Meg Mitchell, the lead of Google's Ethical Artificial Intelligence team, was fired on Friday.
Google fired the lead of its Ethical Artificial Intelligence team, Meg Mitchell, escalating the turmoil surrounding its AI division after the acrimonious exit of Mitchell’s former colleague Timnit Gebru.
In the aftermath of Friday’s firing, Google’s head of AI, Jeff Dean, tried to repair relations with the company’s staff, saying at an all-hands meeting that he took “some responsibility” for the break in trust with researchers.
“I take some amount of responsibility and I feel like other leads in the organization also take responsibility,” Dean said during the meeting. “We know that the ethical AI team feels sort of aggrieved by the decision to ask some members of that team to retract the paper and other subsequent events.”
Earlier, Mitchell tweeted “I’m fired,” adding she was “in too much pain to articulate much of anything useful. Firing @timnitGebru created a domino effect of trauma for me and the rest of the team, and I believe we are being increasingly punished for that trauma.”
Mitchell’s firing underscored that even as Google tried to move past the disarray in its AI division with an apology, a leadership change and new policies, the upheaval showed no sign of letting up.
Mitchell had become a fierce public critic of Google and its management after Gebru’s exit. Gebru, one of the few prominent Black women in AI research, said she was fired in December after refusing to retract a research paper critical of a key Google technology or remove the Google authors from it. The company has said that she resigned. Mitchell was a co-author of the paper. Former colleagues expressed outrage over Google’s handling of the matter.
The Alphabet Inc. company had accused Mitchell of downloading files from its systems and said it would review her conduct. For five weeks, Mitchell, who had co-led the Ethical AI team with Gebru, was locked out of all corporate systems -- unable to access her email or calendar.
“After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees,” a Google spokesman said in a statement.
Mitchell’s dismissal came the same day that Dean apologized in an email to staff for how he handled Gebru’s departure and pledged that executives would be graded on diversity progress. Dean also said Google would double its human resources staff dedicated to employee retention.
Alex Hanna, a researcher on Google’s Ethical AI team, wrote that there’s a double standard for conduct at the internet giant, alluding to allegations of sexual misconduct against former executives.
“Google is a breeding ground to abusers, opportunity hoarders, and people only concerned with ego and prestige,” Hanna wrote on Twitter. “But anyone who is willing to defend friends against discrimination, who lift up voices who need to be heard, are shown the door.”
Dean, in his email to staff, said Google’s behavior toward Gebru hurt some female and Black employees and led them to question whether they belonged at the company, but he didn’t apologize directly to Gebru. The company announced a new organization for the responsible use of AI on Thursday, led by Marian Croak, a respected Black executive who had previously handled site reliability. But some members of the Ethical AI team, who will report to Croak, felt blindsided by the news.
Later, at the staff meeting, Dean said he wasn’t seeking to “punish anyone” by orchestrating the reorganization of the company’s responsible AI efforts.
Mitchell joined Google in November 2016 after a stint at Microsoft Corp.’s research lab where she worked on the company’s Seeing AI project, a technology to help blind users “visualize” the world around them that was heavily promoted by Chief Executive Officer Satya Nadella. At Google, she founded the Ethical AI team in 2017 and worked on projects including a way to explain what machine-learning models do and their limitations, and how to make machine-learning datasets more accountable and transparent.