May 17, 2023
What’s the transformative potential of artificial intelligence?
Robert Brunner is the associate dean for innovation and chief disruption officer at the Gies College of Business at the University of Illinois Urbana-Champaign. Brunner spoke with News Bureau business and law editor Phil Ciciora about the transformative potential of artificial intelligence technology such as ChatGPT.
There’s been a lot of hype about artificial intelligence technology and its potential to change every industry and every job in the global economy. What should we know about AI and its potential?
This is a difficult question to answer because if you’re too bullish, you’re accused of being a techno-optimist or apologist who’s downplaying what’s perceived by some as one of the greatest threats to society, perhaps in our history. Or if you’re too bearish, you’re a doom and gloom naysayer – a Chicken Little running around saying “The sky is falling!”
I think the real concern isn’t ChatGPT, Bard or what are more generally called generative artificial intelligence tools. It’s the pace at which we went from something like Siri, Apple’s virtual assistant technology that is often comically wrong or doesn’t do what you want half the time, to a level of sophistication that’s almost unimaginable outside the realm of science fiction.
That’s disconcerting to a lot of people, particularly those whose jobs might be most negatively impacted by AI. A plumber, for example, is unlikely to see much benefit or harm from generative AI. But there’s a lot of anxiety among knowledge workers, who may fear that they’re going to be replaced by AI.
I do think some work will be replaced by generative AI, but that doesn’t mean there will be mass layoffs anytime soon. My guess is that if there’s a natural attrition rate of, say, 10% at a company, those workers just won’t be replaced.
And the reason why is that there will be an expectation that workers will be more productive because they’ll be able to leverage AI, which will act as a force multiplier. If you were able to produce five widgets in a week, AI may help you produce 10 or more.
But the fact that this has happened so fast is what’s most unnerving. When change happens this fast, people need time to process and adapt – and I don’t think most people feel like they’ve had a chance to do that. It just burst into public consciousness, so it’s natural to jump to the doom and gloom scenario, which is partly due to Hollywood and popular culture.
If you understand the underlying technology behind generative AI and how it works, it’s not magic. We tend to anthropomorphize technology, and, for some, the knee-jerk reaction is to think this is like the HAL 9000 and that it wants to kill us.
My gut feeling is that, right now, it’s not as dangerous as people think or fear. The net effect overall could be that we’ll see increased opportunities or better productivity in a number of different jobs. It’s possible that workers will be encouraged to automate the parts of their job that they like the least. If you dislike taking and transcribing meeting notes, for example, you can let AI take care of it. We’ll all be getting this personal digital assistant that will be very helpful to us.
So it’s entirely possible that there are going to be tremendous new job opportunities that we don’t even know about yet, because the technology is just so new.
Do you foresee technology such as ChatGPT continuously evolving?
We’re on the fourth version of ChatGPT, but we’re not getting version five anytime soon. The current version is the one we’ll have for the foreseeable future, meaning that we’ve reached its limits for now. What we’ll likely see next are applications of this fundamental technology to different areas or fields, which is very exciting.
Do we need to pause development on generative AI, as some notable tech luminaries have advocated for?
There are some legitimate things to think about in terms of pausing development. Could we pause and then ensure that the AI models are fair and equitable to all groups, and that they won’t accentuate existing biases?
That would be nice, but I would say the toothpaste is already out of the tube. It would be wholly impractical right now to pause development. It’s out there and it’s not coming back. Also, we don’t want to give any unfriendly countries or their militaries a six-month head start on development.
But that doesn’t mean we shouldn’t think about potential downside effects. There will need to be regulations and thoughtful discussions around how we want this technology to be rolled out and used. For example, should children be allowed to use generative AI? Should high school students?
Hopefully, we don’t follow past models and wait for the tech companies to go out and do questionable things before we realize we need to regulate it. We have to start thinking about such guardrails right now because the genie is out of the bottle – and we want to make sure that the genie is a force for good. We have to be careful that we’re not creating a future society that accentuates existing inequalities and biases.
What other potential pitfalls await with AI?
Heading into the 2024 presidential election cycle, the potential for deepfakes is very concerning. But the only remedy we have right now is to keep thinking and talking about this issue. How do we want AI to benefit society? How do we want it to help us become better?
We have lots of problems, but we also have an ability to try to work through them and get to a better place. I have a feeling that we’ll hit some issues with AI, but ultimately we’ll struggle through.
Which is good because, at this point, it’s hard to imagine a future in which generative AI is not an everyday, commonplace tool in your personal and professional life. Even if you’re uneasy about this technology, you should at least be aware of its transformative potential. Historically speaking, things don’t tend to end well for Luddites.