
Even if you haven't been following the current conversations around artificial intelligence, it's hard not to do a double take at the recent headlines. The stories revolve around recent statements made by several industry leaders, including one of AI's "godfathers," that the technology is now evolving so rapidly, it could represent an "extinction-level" threat sooner than expected, or at least trigger societal-scale disruptions on par with the recent global pandemic. Frankly, when you're not an AI expert yourself, it can be hard to know what to make of such claims. For sure, today's incarnations of AI can do some remarkable things, including many things humans could never do. And as we've detailed in the past, current AI-powered technologies have a problematic track record, whether it's amplifying disinformation, perpetuating human racial biases or supercharging criminal scams. Maybe it's not so hard to believe that the machines really are about to get the better of us.
Here at UM-Dearborn, we have a lot of faculty who work with artificial intelligence, so we thought it'd be interesting to put the question of whether AI really is on the verge of becoming a civilization-ender to three of the university's leaders in this area. Professor Hafiz Malik, Associate Professor Samir Rawashdeh and Assistant Professor Birhanu Eshete were all in agreement that the assertion that artificial intelligence could represent an extinction-level threat any time soon is overblown. Rawashdeh quipped that folks seem to be forgetting that "we can always unplug them if they start misbehaving." Eshete generally rates today's version of artificial general intelligence as "cat-level." Malik said that's not giving cats enough credit. "I think when you see some of the impressive things AI can do, it can be easy to get the impression that the technology is further along than it is," Malik says. AI may be able to beat the best human chess player or diagnose illnesses doctors can't, but he notes those are "very task-specific things with strict constraints." "General intelligence, the kind of intelligence that humans possess, where we can adapt to new circumstances, that's not a kind of AI I would expect to see in my lifetime," he says. "And as far as the pace of advancement, I would say it has been fairly steady, not accelerating rapidly."
This points to an important distinction between types of artificial intelligence that's often overlooked in the current discussions around AI. Today's AI, which is mostly driven by machine learning, is task specific. It's the technology that allows algorithms, when given enough exposure to, say, photos of cars or X-rays of a particular type of cancer, to learn the essential characteristics of those things. But artificial general intelligence, or AGI, is a completely different creature. It would mean that machines could, as humans do, adapt to an almost infinite array of new tasks without being specifically trained or programmed to do those tasks. Notably, while task-specific AI is becoming ubiquitous, AGI doesn't exist yet. Some doubt that it's even possible. If it is achievable, there are many who think that it wouldn't look anything like today's AI.
Still, Rawashdeh, Eshete and Malik all say it wouldn't take something as advanced as AGI to cause big problems in the human world. Rawashdeh and Eshete both voiced concerns over the fact that the highest levels of artificial intelligence are basically controlled by a handful of very large, powerful companies, which are developing the technology for commercial purposes, not to benefit human society. "I think the real risk is we could very quickly become dependent on the technology in a huge range of sectors, and then we end up with systems that perpetuate inequality," Rawashdeh says. "And at that point, you could imagine people saying, 'Well, you can't just turn it off because it would crash the economy.'" Like the justification used to bail out misbehaving banks during financial crises, AI could be judged too big to fail.
Disinformation is the other obvious area where we're already seeing AI's disruptive power. Disinformation, of course, is likely as old as human civilization itself. But Malik, who's an expert in deepfakes, says AI has supercharged its impacts. "The polarization which we're seeing around the world, not just in the U.S., has a lot to do with social media platforms, which are driven by algorithms, creating echo chambers where people end up with very distorted views of reality," Malik says. Deepfakes, which he says are "getting better and better every day," have only made people more vulnerable. In fact, scammers are now putting deepfake technology to use in even more clever ways. Malik says criminals can now use AI-generated voice clones, complete with an array of accents, powering convincing phone scams designed to scare people into draining their bank accounts. Whether it's social media disinformation or a criminal scam, Malik says the result is a general erosion of trust in information and democratic institutions. And if we're looking for things that could legitimately contribute to an unraveling of human society, this loss of trust seems like a good place to start.
Concerns over these problematic sides of AI technology have also sparked conversations about how to protect ourselves, and Eshete and Malik say the European Union has been a leader when it comes to regulation. Just this month, the European Parliament advanced a draft of the EU's AI Act, which, among other things, would seriously limit the use of facial recognition software and require creators of generative AI systems like ChatGPT to be more transparent about the data used to train their programs. Here in the U.S., Eshete notes the White House has also released the Blueprint for an AI Bill of Rights to "help guide the design, use and deployment of automated systems to protect the American public." Eshete says penning good regulations is complicated by the fact that there still is no consensus among AI experts on whether AGI systems would have anything resembling human capabilities, or even which risks we should be most worried about. He notes it's really easy to get distracted by the frightening, future hypothetical threats of AI, like creating a lethal bio-weapon. "But there are all sorts of ways AI is already impacting people's lives. So perhaps we should focus first on what is happening right now. And then once we've done that, we'll have time to look at what's coming."
Eshete, Rawashdeh and Malik all say how much AI ultimately ends up reshaping our world, and whether its impacts will be beneficial or harmful, is largely up to us. Could we end up in a place where AI really does become a civilization-ender? Possibly. But if we do, we likely won't have the machines to blame.
###
Story by Lou Blouin