Well, that didn’t happen, obviously. 

I sat down with MIT professor Max Tegmark, the founder and president of the Future of Life Institute (FLI), to take stock of what has happened since. Here are highlights of our conversation.

On shifting the Overton window on AI risk: Tegmark told me that in conversations with AI researchers and tech CEOs, it had become clear that there was a huge amount of anxiety about the existential risk AI poses, but nobody felt they could speak about it openly “for fear of being ridiculed as Luddite scaremongers.” “The key goal of the letter was to mainstream the conversation, to move the Overton window so that people felt safe expressing these concerns,” he says. “Six months later, it’s clear that part was a success.”

But that’s about it: “What’s not great is that all the companies are still going full steam ahead and we still have no meaningful regulation in America. It looks like US policymakers, for all their talk, aren’t going to pass any laws this year that meaningfully rein in the most dangerous stuff.”

Why the government should step in: Tegmark is lobbying for an FDA-style agency that would enforce rules around AI, and for the government to force tech companies to pause AI development. “It’s also clear that [AI leaders like Sam Altman, Demis Hassabis, and Dario Amodei] are very concerned themselves. But they all know they can’t pause alone,” Tegmark says. Pausing alone would be “a disaster for their company, right?” he adds. “They just get outcompeted, and then that CEO will be replaced with someone who doesn’t want to pause. The only way the pause comes about is if the governments of the world step in and put in place safety standards that force everyone to pause.” 

So how about Elon … ? Musk signed the letter calling for a pause, only to set up a new AI company called X.AI to build AI systems that would “understand the true nature of the universe.” (Musk is an advisor to the FLI.) “Obviously, he wants a pause just like a lot of other AI leaders. But as long as there isn’t one, he feels he has to also stay in the game,” Tegmark says.

Why he believes tech CEOs genuinely want a good future for humanity: “What makes me think that they really want a good future with AI, not a bad one? I’ve known them for many years. I talk with them regularly. And I can tell even in private conversations—I can sense it.”

Response to critics who say focusing on existential risk distracts from current harms: “It’s crucial that those who care a lot about current problems and those who care about imminent upcoming harms work together rather than infighting. I have zero criticism of people who focus on current harms. I think it’s great that they’re doing it. I care about those things very much. If people engage in this kind of infighting, it’s just helping Big Tech divide and conquer all those who want to really rein in Big Tech.”


