The theme of the day was “exploring the (im)possible.” We learned how Google’s AI was being put to use fighting wildfires, forecasting floods, and assessing retinal disease. But the stars of this show were what Google called “generative AI models.” These are the content machines, schooled on massive training sets of data, designed to churn out writing, images, and even computer code that once only humans could hope to produce.

Something weird is happening in the world of AI. In the early part of this century, the field burst out of a lethargy known as an AI winter, thanks to the innovation of “deep learning” led by three academics. This approach to AI transformed the field and made many of our applications more useful, powering language translations, search, Uber routing, and just about everything that has “smart” as part of its name. We’ve spent a dozen years in this AI springtime. But in the past year or so there has been a dramatic aftershock to that earthquake, as a sudden profusion of mind-bending generative models has appeared.

Most of the toys Google demoed on the pier in New York showed the fruits of generative models like its flagship large language model, LaMDA. It can answer questions and work with creative writers to make stories. Other projects can produce 3D images from text prompts or even help produce videos by cranking out storyboard-like suggestions on a scene-by-scene basis. But a big piece of the program dealt with some of the ethical issues and potential dangers of unleashing robot content generators on the world. The company took pains to emphasize how cautiously it was proceeding in employing its powerful creations. The most telling statement came from Douglas Eck, a principal scientist at Google Research. “Generative AI models are powerful—there’s no doubt about that,” he said. “But we also have to acknowledge the real risks that this technology can pose if we don’t take care, which is why we’ve been slow to release them. And I’m proud we’ve been slow to release them.”

But Google’s competitors don’t seem to have “slow” in their vocabularies. While Google has provided limited access to LaMDA in a protected Test Kitchen app, other companies have been offering an all-you-can-eat smorgasbord of their own chatbots and image generators. Only a few weeks after the Google event came the most consequential release yet: ChatGPT, the latest version of OpenAI’s powerful text-generation technology, a lightning-fast, logorrheic gadfly that spits out coherent essays, poems, plays, songs, and even obituaries at the merest hint of a prompt. Taking advantage of the chatbot’s wide availability, millions of people have tinkered with it and shared its amazing responses, to the point where it has become an international obsession, as well as a source of wonder and fear. Will ChatGPT kill the college essay? Destroy traditional internet search? Put millions of copywriters, journalists, artists, songwriters, and legal assistants out of a job?

Answers to those questions aren’t clear right now. But one thing is: Granting open access to these models has kicked off a wet hot AI summer that’s energizing the tech sector, even as the current giants lay off chunks of their workforces. Contrary to Mark Zuckerberg’s belief, the next big paradigm isn’t the metaverse. It’s this new wave of AI content engines, and it’s here now. In the 1980s, we saw a gold rush of products moving tasks from paper to PC applications.
In the 1990s, you could make a quick fortune by shifting those desktop products online. A decade later, the movement was to mobile. In the 2020s, the big shift is toward building with generative AI. This year thousands of startups will emerge with business plans based on tapping into the APIs of those systems. The cost of churning out generic copy will go to zero. By the end of the decade, AI video-generation systems may well dominate TikTok and other apps. They may not be anywhere near as good as the innovative creations of talented human beings, but the robots will quantitatively dominate.

Let’s consider just one issue: What, if anything, should limit the output of those engines? Google’s SVP of technology and society, James Manyika, explained to me that one reason for holding back a mass release of LaMDA is the time-consuming effort to set limits on what comes out of the bot’s mouth. “When you prompt it, what you’re getting from it isn’t the first thing LaMDA came up with,” he says. “We’re looking at the output before we present it back to you to say, is it safe?” He further explains that Google winds up defining “safe” by using human moderators to identify what’s proper and then putting those standards into code.

Laudable intentions, to be sure. But in the long run, setting boundaries might be futile (if they are easily circumvented) or even counterproductive. It might seem like a good idea to forbid a language model from expressing certain ideas, like Covid misinformation or racial animus. But you could also imagine an authoritarian regime rigging a system to prevent any statement that might express doubt about the infallibility of its leaders. Designing easy-to-implement guardrails could turn out to be a blueprint for creating propaganda machines. By the way, former Google engineer Blake Lemoine, the guy who thinks that LaMDA is sentient, is predictably against imposing such limits on bots. “You have some purpose in creating the person [bot] in the first place, but once they exist they’re their own person and an end in and of themselves,” he told me in a Twitter DM.

Now that the chatbots are out of their sandboxes, we’ll have to argue all those issues after the fact. Also, you can expect Google’s own generative progeny to soon burst out of their test kitchens. Its scientists consider LaMDA best in class, but the company is miffed that it’s second-rate in terms of buzz. Reports are that Google has declared an internal Code Red alert to respond to what is now a competitive emergency. Ideally, Google will fast-track LaMDA while maintaining the same caution that let OpenAI zip past it in the chatbot war, but that may be (im)possible.

In December 2010 I wrote an introduction to a WIRED package about the “AI Revolution,” taking note of how artificial intelligence had officially passed out of its winter and wondering what came next. One thing I got right: It’s too late to stop it. We must learn to adapt. AI is so crucial to some systems—like the financial infrastructure—that getting rid of it would be a lot harder than simply disconnecting HAL 9000’s modules. “In some sense, you can argue that the science fiction scenario is already starting to happen,” Thinking Machines’ [Danny] Hillis says. “The computers are in control, and we just live in their world.” [Stephen] Wolfram says this conundrum will intensify as AI takes on new tasks, spinning further out of human comprehension. “Do you regulate an underlying algorithm?” he asks.
“That’s crazy, because you can’t foresee in most cases what consequences that algorithm will have.”

Glenn writes, “Is anyone interested in the Facebook trend of suspending active accounts and then demanding mobile numbers, selfies, and other personal information before an appeal can start? Facebook insists all of my messages, photos, posts, etc. will be deleted in 20 days unless I give them my personal information.”

Hi, Glenn. I don’t want to get into the weeds of whether your account was suspended or hacked or whatever, let alone whether it’s a trendy thing at Meta. But I can offer some general advice about providing information to anyone claiming to be Facebook, or any other service for that matter: Be careful. It could be someone trying to phish you. Facebook has a help page that tells you how to verify whether a message is valid. (And yes, sometimes the company does ask for a selfie if it thinks you’ve been hacked.) But please note that the company also says one of the telltale signs of an attack is “warnings that something will happen to your account if you don’t update it or take a certain action.” Isn’t that what you are describing?

That said, users are generally screwed, because it’s sometimes hard to distinguish legitimate requests for information from attacks. And all too often, tech companies leave customers at the mercy of malefactors. I am continually astonished at the persistence of certain attacks on users of Facebook and Messenger. Several in particular have been going on for years. In one, someone fakes an account of a person you know and asks to friend you. You forget that you are already friends with that person and agree. From that point on, the impersonator has a higher level of access to your content. Another hack involves someone getting hold of a friend’s account and sending you a message with what looks like a link to a video you might want to watch. The link is toxic. Savvy users won’t fall for this, but why can’t the company nip these schemes in the bud? Is there some esoteric computer-science reason that Meta hasn’t been able to recognize these easily identifiable attacks? Or is it just a lower priority than optimizing ads or building the metaverse?

Let me reach back into December for my favorite apocalyptic moment of 2022: the bursting of the AquaDom, the world’s largest freestanding cylindrical aquarium, which left 1,500 rare fish in flopping death throes on the freezing streets of Berlin. To quote The New York Times, “The entire block of the street outside the building remained soaked by 264,000 gallons of water that rushed out of the lobby, uprooting plants and ripping out telephones that lay strewn among hundreds of chocolate balls from a neighboring Lindt chocolate shop.” Can you top that, 2023?

WIRED is at CES, so you can skip Las Vegas and enjoy the bomb cyclones and atmospheric rivers at home. Catch up on the latest gadgets here.

It’s time to pay kidney donors.

The Big Bang. Quantum physics. Invention of the chip. And Allison Williams.
