We Are Opening the Lids on Two Giant Pandora’s Boxes
Opinion Columnist

Merriam-Webster tells us that a Pandora's box can be "anything that looks ordinary but may produce unpredictable harmful results." I've been thinking a lot about Pandora's boxes lately, because we Homo sapiens are doing something we've never done before: lifting the lids on two giant Pandora's boxes at the same time, without any idea of what could come flying out.

One of these Pandora's boxes is labeled "artificial intelligence," and it is exemplified by the likes of ChatGPT and Bard, which testify to humanity's ability, for the first time, to manufacture something in a godlike way that approaches general intelligence, far exceeding the brainpower with which we evolved naturally.

The other Pandora's box is labeled "climate change," and with it we humans are for the first time driving ourselves in a godlike way from one climate epoch into another. Up to now, that power was largely confined to natural forces involving Earth's orbit around the sun.

For me the big question, as we lift the lids simultaneously, is: What kind of regulations and ethics must we put in place to manage what comes screaming out?

Let's face it: We did not understand how much social networks would be used to undermine the twin pillars of any free society, truth and trust. So if we approach generative A.I. just as heedlessly, if we again go along with Mark Zuckerberg's reckless mantra at the dawn of social networks, "move fast and break things," oh, baby, we are going to break things faster, harder and deeper than anyone can imagine.

"There was a failure of imagination when social networks were unleashed and then a failure to responsibly respond to their unimagined consequences once they permeated the lives of billions of people," Dov Seidman, the founder and chairman of LRN and The HOW Institute for Society, told me. "We lost a lot of time and our way in utopian thinking that only good things could come from social networks, from just connecting people and giving people a voice. We cannot afford similar failures with artificial intelligence."
"So there is an urgent imperative, both ethical and regulatory, that these artificial intelligence technologies should only be used to complement and elevate what makes us uniquely human: our creativity, our curiosity and, at our best, our capacity for hope, ethics, empathy, grit and collaborating with others," added Seidman (a board member of the museum my wife founded, Planet Word). The adage that with great power comes great responsibility has never been more true. "We cannot afford another generation of technologists proclaiming their ethical neutrality and telling us, 'Hey, we're just a platform,' when these A.I. technologies are enabling exponentially more powerful and profound forms of human empowerment and interaction."

For those reasons, I asked James Manyika, who heads Google's technology and society team, as well as Google Research, where much of its A.I. innovation is conducted, for his thinking on A.I.'s promise and challenge.

"We have to be bold and responsible at the same time," he said. The reason to be bold is that in so many different realms A.I. has the potential to help people with everyday tasks and to tackle some of humanity's greatest challenges, like health care, for instance, and to make new scientific discoveries, innovations and productivity gains that will lead to wider economic prosperity.

It will do so, he added, by giving people everywhere access to the sum of the world's knowledge in their own language, in their preferred mode of communication, via text, speech, images or code, delivered by smartphone, television, radio or e-book. "A lot more people will be able to get the best assistance and the best answers to improve their lives."

But we also must be responsible, Manyika added, citing several concerns. First, these tools need to be fully aligned with humanity's goals. Second, in the wrong hands, these tools could do enormous harm, whether we are talking about disinformation, perfectly faked things or hacking. (Bad guys are always early adopters.)
Finally, "the engineering is ahead of the science to some degree," Manyika explained. That is, even the people building these so-called large language models that underlie products like ChatGPT and Bard don't fully understand how they work or the full extent of their capabilities. We can engineer extraordinarily capable A.I. systems, he added, that can be shown a few examples of arithmetic, a rare language or explanations of jokes, and that then can start to do many more things with just those fragments, astonishingly well. In other words, we don't yet fully understand how much more good stuff or bad stuff these systems can do.

So we need some regulation, but it needs to be done carefully and iteratively. One size will not fit all.

Why? Well, if you are most worried about China beating America in A.I., you want to turbocharge our A.I. innovation, not slow it down. If you want to truly democratize A.I., you might want to open-source its code. But open-sourcing can be exploited: What would ISIS do with the code? So you have to think about arms control. If you are worried that A.I. systems will compound discrimination, privacy violations and other divisive societal harms, the way social networks do, you want regulations now. If you want to take advantage of all the productivity gains A.I. is expected to generate, you need to focus on creating new opportunities and safety nets for all the paralegals, researchers, financial advisers, translators and rote workers whose jobs will be disrupted today, and maybe the lawyers and coders of tomorrow. And if you are worried that A.I. will become superintelligent and start defining its own goals, irrespective of human harm, you want to stop it immediately.

That last danger is real enough that on Monday Geoffrey Hinton, one of the pioneering designers of A.I. systems, announced that he was leaving Google's A.I. team. Hinton said that he thought Google had acted responsibly in rolling out its A.I. products but that he wanted to be free to speak out about all the risks.
"It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said.

Add it all up and it says one thing: We as a society are on the cusp of having to decide on some very big trade-offs as we introduce generative A.I. And government regulation alone will not save us.

I have a simple rule: The faster the pace of change and the more godlike powers we humans develop, the more everything old and slow matters more than ever; the more everything you learned in Sunday school, or from wherever you draw ethical inspiration, matters more than ever. Because the wider we scale artificial intelligence, the more the Golden Rule needs to scale with it: Do unto others as you would wish them to do unto you. Given the increasingly godlike powers we're endowing ourselves with, we can all now do unto each other faster, cheaper and deeper than ever before.

Ditto when it comes to the climate Pandora's box we're opening. As NASA notes on its website, "in the last 800,000 years, there have been eight cycles of ice ages and warmer periods." The last ice age ended some 11,700 years ago, giving way to our current climate era, known as the Holocene (meaning "entirely recent"), which was characterized by stable seasons that allowed for stable agriculture, the building of human communities and, ultimately, civilization as we know it today. Most of these climate changes are attributed to small variations in Earth's orbit that change the amount of solar energy our planet receives.

Well, say goodbye to that.
There is now a debate among environmentalists and geological experts at the International Union of Geological Sciences, the professional organization responsible for defining Earth's geological/climate eras, about whether we humans have driven ourselves out of the Holocene into a new epoch, called the Anthropocene. That name comes from "anthropo," for man, and "cene," for new, because humankind has caused mass extinctions of plant and animal species, polluted the oceans and altered the atmosphere, among other lasting impacts, as an article in Smithsonian magazine explained.

Earth system scientists fear that this man-made epoch, the Anthropocene, will have none of the predictable seasons of the Holocene. Farming could become a nightmare.

But here is where A.I. could be our savior: by hastening breakthroughs in material science, battery density, fusion energy and safe modular nuclear energy that enable humans to manage the impacts of climate change that are now unavoidable and to avoid those that would be unmanageable.

But if A.I. gives us a way to cushion the worst effects of climate change, we had better do it right. That means with smart regulations to rapidly scale clean energy and with scaled sustainable values. Unless we spread an ethic of conservation, a reverence for wild nature and all that it provides us free, like clean air and clean water, we could end up in a world where people feel entitled to drive through the rainforest now that their cars are electric. That can't happen.

Bottom line: These two big Pandora's boxes are being opened. God save us if we acquire godlike powers to part the Red Sea but fail to scale the Ten Commandments.

Thomas L. Friedman is the foreign affairs Op-Ed columnist. He joined the paper in 1981 and has won three Pulitzer Prizes. He is the author of seven books, including "From Beirut to Jerusalem," which won the National Book Award.