By Emilio Giuliani III.
As a young child in elementary school, only a few years ago, I remember when a teacher asked my class what kind of exciting marvels the future might bring. The predictions ranged from self-driving robot cars and ultra-intelligent computers to new armed conflict with Russia and the Minnesota Vikings winning their first ever Super Bowl. Thanks to artificial intelligence and the unrelenting aggression of Vladimir Putin, three out of four of these have already come to fruition, and maybe next year will bring a Vikings victory.
When it comes to modern technology, legal statutes and regulations have struggled at times to keep pace. This is unsurprising when the vast majority of lawmakers were born before the invention of the CD player, and we don’t even use those anymore. The advent of artificial intelligence has been a huge boon to life in the 21st century, but such advances tend to reach the public incrementally, with legislation lagging even further behind.
Flying Robot Cars, Minus the Flying Part
It may not be as readily apparent if you’re not constantly stuck in traffic with Waymo vehicles in Phoenix or Tempe, but autonomous cars are likely here to stay. Autonomous cars remove many of the risks inherent in Arizona drivers: they can be programmed not to tailgate, they don’t become distracted by eating Chipotle or calling their granddaughter, and they can adjust their speed to the posted limit and the flow of traffic. Waymo in Phoenix has been a leader in this regard, providing an autonomous ride-hailing service built on technology that had driven over 20 million miles on public roads as of January 2022. Uber and Tesla are two other big names in the autonomous vehicle industry, and to a greater extent than Waymo, they’ve been exploring hybrid approaches that blend human and AI driving while shifting more toward full autonomy over time.
As of mid-2021, eight states including Arizona have passed legislation allowing fully self-driving cars on the roads. Fourteen states have not enacted any legislation on the topic, and the remaining twenty-eight plus Washington, D.C. have legalized autonomous driving with a safety driver only.
Not So Fast My Friend
Self-driving cars may be a preferable option for many, but there is still cause for concern. Governor Doug Ducey directed the Arizona Department of Transportation to suspend Uber’s ability to test and operate autonomous vehicles following a fatal pedestrian accident in 2018. The Uber involved in the collision had a human safety driver behind the wheel, but unfortunately that driver was distracted and did not intervene to prevent the accident. Video footage and expert analysis pointed to a failure of the vehicle’s system to recognize the pedestrian crossing the road.
Public opinion polling shows a deep-seated skepticism of AI because of such devastating, if uncommon, incidents. Ethical concerns and legal uncertainty remain major reasons people are unwilling to readily adopt such new technologies. Assigning blame has also been a central issue with autonomous machines—should the manufacturer of the car, the developer of the software, or the operator responsible for maintaining and updating the system be liable for an accident?
The Robots Aren’t Playing Fair
Machines, thanks to advanced AI, have been beating chess grandmasters for twenty-five years, and in 2017 Google’s DeepMind defeated the world’s top human player in the board game Go. The limits of machine learning, whereby a computer algorithm runs numerous experiential scenarios and reviews the resulting data to improve its own performance, seem boundless considering that computers using it can beat our best humans at our most complex games. Artificial intelligence and machine learning excel when all the necessary parameters of a given event can be mapped out, even if it would take humans thousands of hours to accomplish the same feat. However, artificial intelligence often falls short where human subjectivity is a critical element of solving a problem.
For example, natural language processing programs struggle to understand what “it” refers to in the sentence “The trophy would not fit in the brown suitcase because it was too big.” To humans, it’s obvious that the trophy was too big, but an AI cannot automatically determine which noun, the trophy or the suitcase, is meant to fit inside the other given the limited context. Such limitations apply to other areas of AI and machine learning where human inference plays an important role.
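To see why this sentence is hard for a machine, consider a deliberately naive pronoun-resolution heuristic—resolve “it” to the nearest preceding noun. This is a simplified sketch for illustration, not any production NLP system, and the function name and candidate list are invented for the example:

```python
def resolve_it_naively(sentence: str, candidate_nouns: list[str]) -> str:
    """Resolve 'it' to whichever candidate noun appears last before the word 'it'.

    This mimics a shallow, proximity-based heuristic -- the kind of rule
    that fails on Winograd-style sentences requiring world knowledge.
    """
    before_it = sentence.lower().split(" it ")[0]  # text preceding "it"
    best_noun, best_pos = None, -1
    for noun in candidate_nouns:
        pos = before_it.rfind(noun)  # last occurrence of this noun
        if pos > best_pos:
            best_noun, best_pos = noun, pos
    return best_noun

sentence = "The trophy would not fit in the brown suitcase because it was too big."
print(resolve_it_naively(sentence, ["trophy", "suitcase"]))  # prints "suitcase"
```

The heuristic picks “suitcase” simply because it sits closest to “it,” while a human knows “it” must be the trophy: only the trophy being too big explains why it wouldn’t fit. Resolving the pronoun correctly requires world knowledge about objects fitting inside containers, which is exactly what shallow pattern-matching lacks.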
Furthermore, when AI and machine learning are deployed in the real world, there can be serious consequences. Biases in AI are unfortunately common. If you input a dataset into a machine learning algorithm that contains bias, the data reflected will mimic the same. This was apparent in Amazon’s AI recruiting tool that disfavored women based on patterns in resumes. Because the algorithm defined success based on past hires under limited criteria, it drew conclusions that men were better suited for a given role because historically men had disproportionately held technical leadership roles at Amazon. Legal recourse is sparse and largely untested in these situations, and it often takes other AI experts to recognize when an AI is acting unfairly.
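The mechanism behind this kind of bias is simple enough to sketch in a few lines. The toy “model” below is not Amazon’s actual system—the dataset and feature names are invented—but it shows how scoring candidates by patterns in past hires reproduces whatever skew the history contains:

```python
from collections import Counter

# Hypothetical hiring history: past hires skew heavily toward one resume
# feature, purely because of who was historically hired.
history = [("men's chess club", "hired")] * 8 + [("women's chess club", "hired")] * 2

# "Train" by counting how often each feature appears among successful hires.
hire_counts = Counter(feature for feature, outcome in history if outcome == "hired")
total_hires = sum(hire_counts.values())

def score(feature: str) -> float:
    """Score a resume feature by its frequency among past hires."""
    return hire_counts[feature] / total_hires

print(score("men's chess club"))    # 0.8
print(score("women's chess club"))  # 0.2
```

The algorithm never sees gender as an input; it simply learns that one feature correlated with past hiring decisions and scores it higher. Garbage in, garbage out—if the historical data encodes bias, a model optimizing against that history will encode it too.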
AI has amazing potential, but at the end of the day it’s humans who are, at least initially, designing and operating the machines. It will take consistent human oversight combined with algorithmic power to overcome these pivotal problems. Providing greater transparency into underlying AI code and requiring periodic audits by third parties can help identify and resolve such issues before they become significant. Furthermore, mandating fail-safe measures like a kill-switch, which the EU has contemplated, is worth exploring.
There is incredible potential for AI, and the capabilities are progressing faster than many of us definitely non-machine humanoids realize. Who knows, the next blog post you read may be generated by a smart, creative, and independent AI algorithm instead of an inferior human.