Top AI Researchers Meet to Discuss What Comes After Humanity

A group of the top minds in AI gathered over the weekend to discuss the "posthuman transition" — a mind-bending exercise in imagining a future in which humanity willfully hands over power, or perhaps bequeaths existence entirely, to some sort of superhuman intelligence.
As Wired reports, the lavish party was organized by generative AI entrepreneur Daniel Faggella. Attendees included "AI founders from $100 million to $5 billion valuations" and "most of the important philosophical thinkers on AGI," Faggella enthused in a LinkedIn post.
He organized the soirée at a $30 million mansion in San Francisco because the "big labs, the people that know that AGI is likely to end humanity, don't talk about it because the incentives don't permit it," Faggella told Wired.
The symposium allowed attendees and speakers alike to steep themselves in a largely fantastical vision of a future in which artificial general intelligence (AGI) is a given, rather than a distant dream of technology that isn't even close to existing.
AI companies, most notably OpenAI, have talked at length about wanting to realize AGI, though often without clearly defining the term.
The risks of racing toward a superhuman intelligence have remained hotly debated, with billionaire Elon Musk once arguing that unregulated AI could be the "biggest risk we face as a civilization." OpenAI CEO Sam Altman has also warned of dangers humanity could face as a result of realizing AGI, including increased inequality and population control through mass surveillance, even though realizing AGI happens to be his firm's number one priority.
But for now, those warnings remain largely hypothetical, voiced by individuals with billions of dollars riding on reassuring investors that AGI is mere years away. Given the current state of wildly hallucinating large language models that still fail at the most basic tasks, we are seemingly still a long way from a point at which AI could surpass the intellectual capabilities of humans.
Just last week, researchers at Apple released a damning paper that threw cold water on the "reasoning" capabilities of the latest and most powerful LLMs, arguing they "face a complete accuracy collapse beyond certain complexities."
However, to insiders and believers in the tech, AGI is mostly a matter of when, not if. Speakers at this weekend's event talked about how AI could seek out deeper, universal values that humanity has yet to discover, and argued that machines should be taught to pursue "the good," lest humanity risk enslaving an entity capable of suffering.
As Wired reports, Faggella similarly invoked philosophers including Baruch Spinoza and Friedrich Nietzsche, calling on humanity to seek out the yet-undiscovered value in the universe.
"This is an advocacy group for the slowing down of AI progress, if anything, to make sure we're going in the right direction," he told the publication.
More on AGI: OpenAI's Top Scientist Wanted to "Build a Bunker Before We Release AGI"