O'Mara: Politics and Commercial Pressure, not ChatGPT, are the Threats
Historians in the News
tags: capitalism, technology, Silicon Valley, history of technology, artificial intelligence, ChatGPT
As sectors ranging from education to health care to insurance to marketing consider how AI might reshape their businesses, a crescendo of hype has given rise to wild hopes and desperate fears. Fueling both is the sense that machines are getting too smart, too fast — and could someday slip beyond our control. “What nukes are to the physical world,” tech ethicist Tristan Harris recently proclaimed, “AI is to everything else.”
The benefits and dark sides are real, experts say. But in the short term, the promise and perils of generative AI may be more modest than the headlines make them seem.
“The combination of fascination and fear, or euphoria and alarm, is something that has greeted every new technological wave since the first all-digital computer,” said Margaret O’Mara, a professor of history at the University of Washington. As with past technological shifts, she added, today’s AI models could automate certain everyday tasks, obviate some types of jobs, solve some problems and exacerbate others, but “it isn’t going to be the singular force that changes everything.”
Neither artificial intelligence nor chatbots is new. Various forms of AI already power TikTok’s “For You” feed, Spotify’s personalized music playlists, Tesla’s Autopilot driving systems, pharmaceutical drug development and facial recognition systems used in criminal investigations. Simple computer chatbots have been around since the 1960s and are widely used for online customer service.
What’s new is the fervor surrounding generative AI, a category of AI tools that draws on oceans of data to create its own content — art, songs, essays, even computer code — rather than simply analyzing or recommending content created by humans. While the technology behind generative AI has been brewing for years in research labs, start-ups and companies have only recently begun releasing these tools to the public.
The lesson isn’t that technology is inherently good, evil or even neutral, said O’Mara, the history professor. How it’s designed, deployed and marketed to users can affect the degree to which something like an AI chatbot lends itself to harm and abuse. And the “overheated” hype over ChatGPT, with people declaring that it will transform society or lead to “robot overlords,” risks clouding the judgment of both its users and its creators.
“Now we have this sort of AI arms race — this race to be the first,” O’Mara said. “And that’s actually where my worry is. If you have companies like Microsoft and Google falling over each other to be the company that has the AI-enabled search — if you’re trying to move really fast to do that, that’s when things get broken.”