Richard Branson believes that the environmental costs of space travel will “go even lower.”
Patrick T. Fallon | AFP | Getty Images
Dozens of high-profile figures in business and politics are calling on world leaders to address the existential risks of artificial intelligence and the climate crisis.
Virgin Group founder Richard Branson, former UN Secretary-General Ban Ki-moon and Charles Oppenheimer, grandson of American physicist J. Robert Oppenheimer, signed an open letter urging action against the growing threats of the climate crisis, pandemics, nuclear weapons and ungoverned AI.
The message calls on world leaders to adopt a long-term strategy and show “a determination to resolve intractable problems, not just manage them, the wisdom to make decisions based on scientific evidence and reason, and the humility to listen to all those affected.”
“Our world is in grave danger,” said the letter, which was published on Thursday and shared with global governments. “We face a set of threats that put all humanity at risk. Our leaders are not responding with the wisdom and urgency required.”
“The impact of these threats is already being seen: a rapidly changing climate, a pandemic that has killed millions and cost billions, wars in which the use of nuclear weapons has been openly raised,” the letter continued. “There could be worse to come. Some of these threats jeopardize the existence of life on Earth.”
The signatories called for urgent multilateral action, including financing the transition away from fossil fuels, signing an equitable pandemic treaty, restarting nuclear weapons negotiations and building the global governance needed to make AI a force for good.
The letter was released on Thursday by The Elders, an NGO started by former South African President Nelson Mandela and Branson to address global human rights issues and advocate for world peace.
The message is also backed by the Future of Life Institute, a nonprofit founded by MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn that aims to steer transformative technology like AI toward benefiting life and away from large-scale risks.

Tegmark said that The Elders and his organization wanted to convey that, while the technology is not “evil” in itself, it remains a “tool” that could have serious consequences if it is left to advance rapidly in the hands of the wrong people.
“The old strategy [when it comes to new technology] has always been learning from mistakes,” Tegmark told CNBC in an interview. “We invented fire, then later we invented the fire extinguisher. We invented the car, then we learned from our mistakes and invented the seat belt and the traffic light and the speed limit.”
‘Safety engineering’
“But when the power of the technology crosses a threshold, the ‘learning from mistakes’ strategy becomes a dire one,” Tegmark said.
“I think of it as safety engineering. When we sent people to the moon, we thought very carefully about all the things that could go wrong when you put people on top of explosive fuel tanks and send them somewhere no one can help them. And that’s why it ultimately went well.”
He added: “That wasn’t doom-mongering. That was safety engineering. And we need this kind of safety engineering for our future, too, with nuclear weapons, with synthetic biology, with ever more powerful AI.”
The letter was released ahead of the Munich Security Conference, where government officials, military leaders and diplomats will discuss international security amid escalating armed conflicts, including the Russia-Ukraine and Israel-Hamas wars. Tegmark will attend the event to advocate for the letter’s message.
The Future of Life Institute also issued an open letter last year, backed by prominent figures including Tesla boss Elon Musk and Apple co-founder Steve Wozniak, that called on AI labs such as OpenAI to pause work on training AI models more powerful than GPT-4 — at the time the most advanced model from Sam Altman’s OpenAI.
The technologists called for such a pause in AI development to avoid a “loss of control” of civilization, which they said could result in the mass elimination of jobs and humans being outcompeted by machines.
Correction: Ban Ki-moon is the former Secretary-General of the United Nations. An earlier version misstated his title.