Netflix cofounder Reed Hastings to "help humanity progress" with AI board seat

Hastings has been appointed to the board of Anthropic, the company behind the popular Claude large language model.


It’s a good thing Netflix cofounder Reed Hastings doesn’t work at the streaming service anymore, because the fight to keep thousands of copyrighted works away from the clutches of AI may have gotten that much harder. Hastings—who stepped down as Netflix’s co-CEO in 2023 and as its chairman earlier this year—is joining the board of major AI firm Anthropic, per The Hollywood Reporter. “The Long Term Benefit Trust appointed Reed because his impressive leadership experience, deep philanthropic work, and commitment to addressing AI’s societal challenges make him uniquely qualified to guide Anthropic at this critical juncture in AI development,” Buddy Shah, chair of Anthropic’s Long Term Benefit Trust, wrote in a statement.

Added Hastings: “Anthropic is very optimistic about the AI benefits for humanity, but is also very aware of the economic, social, and safety challenges… I’m joining Anthropic’s board because I believe in their approach to AI development, and to help humanity progress.”

While this is a moderately self-aware statement, it’s hard to buy what Hastings is selling when Anthropic—and its proprietary large language model, Claude—has been named in multiple copyright disputes, most notably a suit from major music publisher UMG. On the other hand, Hastings does seem keenly aware of the need for human intervention to curtail and shape the technology’s rapid growth. In March, he donated $50 million to his alma mater, Bowdoin College, to fund the Hastings Initiative for AI and Humanity, a project described as a “step forward in higher education’s growing role to provide ethical frameworks for technology.”

“Just as Bowdoin’s mission emphasizes the formation of complete individuals who can navigate a world in flux, this initiative will empower students and faculty to critically examine, thoughtfully utilize, and ethically shape AI’s trajectory,” Hastings wrote in a statement at the time, suggesting that this sort of “deep thinking” would become necessary “as AI becomes smarter than humans.” Despite this particular brand of doomerism, that day still seems to be a ways away.
