Politicians Need to Learn How AI Works—Fast

This week, US senators heard alarming testimony suggesting that unchecked AI could steal jobs, spread misinformation, and generally "go quite wrong," in the words of OpenAI CEO Sam Altman (whatever that means). He and several lawmakers agreed that the US may now need a new federal agency to oversee the development of the technology. Yet the hearing also made clear that no one wants to hobble a technology that could boost productivity and give the US a head start in a new technological revolution.
Concerned senators might consider speaking with Missy Cummings, a former fighter pilot and professor of engineering and robotics at George Mason University. She studies the use of AI and automation in safety-critical systems, including cars and aircraft, and returned to academia earlier this year after a stint at the National Highway Traffic Safety Administration, which oversees automotive technology, including Tesla's Autopilot and self-driving cars. Cummings' perspective could help politicians and policymakers weigh the promises of much-hyped new algorithms against the risks that lie ahead.
Cummings told me this week that she left NHTSA deeply concerned about the autonomous systems many automakers are deploying. "We're in serious trouble with the performance of these cars," Cummings said. "They're not nearly as capable as people think."
I was struck by the parallels with ChatGPT and similar chatbots, which have fueled both excitement and concern about the power of AI. Automated driving features have been around longer, but like large language models, they rely on machine learning algorithms that are inherently unpredictable and hard to test, demanding a different kind of engineering thinking than in the past.
Just like ChatGPT, Tesla’s Autopilot and other autonomous driving projects have generated absurd hype. Heady dreams of a transportation revolution prompted automakers, startups and investors to pour huge sums of money into the development and deployment of a technology that still has many unsolved problems. In the mid-2010s, the regulatory environment around autonomous cars was lax, and government officials were reluctant to put the brakes on a technology that promised billions in value for US companies.
Even though companies have invested billions in self-driving technology, problems remain, and some automakers have shelved major autonomy projects. As Cummings notes, the public is often unclear about how capable semi-autonomous technology really is.
In a way, it's good to see governments and lawmakers moving quickly to propose regulation of generative AI tools and large language models. The current panic centers on large language models and tools like ChatGPT, which are remarkably good at answering questions and solving problems, though they still have significant flaws, including a tendency to confidently fabricate facts.
At this week's Senate hearing, Altman of OpenAI, the company behind ChatGPT, even went so far as to call for a licensing system to control whether companies like his are allowed to work on advanced AI. "My worst fear is that we — the field, the technology, the industry — cause significant harm to the world," Altman said during the hearing.