O'Mara: Politics and Commercial Pressure, not ChatGPT, are the Threats
Historians in the News
tags: capitalism, technology, Silicon Valley, history of technology, artificial intelligence, ChatGPT
As sectors ranging from education to health care to insurance to marketing consider how AI might reshape their businesses, a crescendo of hype has given rise to wild hopes and desperate fears. Fueling both is the sense that machines are getting too smart, too fast — and could someday slip beyond our control. “What nukes are to the physical world,” tech ethicist Tristan Harris recently proclaimed, “AI is to everything else.”
The benefits and dark sides are real, experts say. But in the short term, the promise and perils of generative AI may be more modest than the headlines make them seem.
“The combination of fascination and fear, or euphoria and alarm, is something that has greeted every new technological wave since the first all-digital computer,” said Margaret O’Mara, a professor of history at the University of Washington. As with past technological shifts, she added, today’s AI models could automate certain everyday tasks, obviate some types of jobs, solve some problems and exacerbate others, but “it isn’t going to be the singular force that changes everything.”
Neither artificial intelligence nor chatbots are new. Various forms of AI already power TikTok’s “For You” feed, Spotify’s personalized music playlists, Tesla’s Autopilot driving systems, pharmaceutical drug development and facial recognition systems used in criminal investigations. Simple computer chatbots have been around since the 1960s and are widely used for online customer service.
What’s new is the fervor surrounding generative AI, a category of AI tools that draw on oceans of data to create their own content — art, songs, essays, even computer code — rather than simply analyzing or recommending content created by humans. While the technology behind generative AI has been brewing for years in research labs, start-ups and companies have only recently begun releasing these tools to the public.
The lesson isn’t that technology is inherently good, evil or even neutral, said O’Mara, the history professor. How it’s designed, deployed and marketed to users can affect the degree to which something like an AI chatbot lends itself to harm and abuse. And the “overheated” hype over ChatGPT, with people declaring that it will transform society or lead to “robot overlords,” risks clouding the judgment of both its users and its creators.
“Now we have this sort of AI arms race — this race to be the first,” O’Mara said. “And that’s actually where my worry is. If you have companies like Microsoft and Google falling over each other to be the company that has the AI-enabled search — if you’re trying to move really fast to do that, that’s when things get broken.”