Another change, called for by lawmakers and industry insiders alike, would require disclosure to let people know when they’re conversing with a language model rather than a human, or when AI technology is being used to make important, life-changing decisions. An example might be a requirement to disclose when a facial recognition match is the basis for an arrest or criminal charge.
The Senate hearing reflects growing interest among US and European governments, and even some technology insiders, in putting new guardrails on AI to prevent it from harming people. In March, an open letter signed by big names in tech and AI called for a six-month pause in AI development, and this month the White House summoned executives from OpenAI, Microsoft, and other companies and announced its support for a public hacking competition to probe generative AI systems. The European Union is also finalizing a sweeping law called the AI Act.
IBM’s Montgomery yesterday called on Congress to take inspiration from the AI Act, which categorizes AI systems according to the risks they pose to people or society, and accordingly sets rules for them or even bans them outright. She also endorsed the idea of promoting self-regulation and highlighted her position on IBM’s AI ethics board, although similar structures have become mired in controversy at Google and Axon.
The Center for Data Innovation, a tech think tank, said in a letter released after yesterday’s hearing that the US doesn’t need a new AI regulator. “Just as it would be unwise for one government agency to regulate all human decision-making, it would also be unwise for one agency to regulate all AI,” the letter reads.
“I don’t think that’s pragmatic, and it’s not what they should be thinking about right now,” says Hodan Omaar, a senior analyst at the center.
Omaar says the idea of creating a whole new agency for AI is far-fetched, since Congress has yet to enact other needed technology reforms, such as comprehensive privacy protections. She believes it is better to update existing laws and let federal agencies fold AI oversight into their existing regulatory work.
The Equal Employment Opportunity Commission and the Justice Department last summer issued guidance on how companies that use algorithms in hiring, algorithms that can assume people look or behave in certain ways, can comply with the Americans with Disabilities Act. Such guidance shows how AI policy can intersect with existing laws and touch many different communities and use cases.
Alex Engler, a fellow at the Brookings Institution, says he’s concerned the US could repeat the problems that sank a federal privacy bill last fall. That historic bill was scuttled by California lawmakers, who withheld their votes because the law would have preempted the state’s own privacy protections. “That’s absolutely a fair concern,” says Engler. “But is it so concerning that you say we’re just not going to have any civil society protections for AI? I don’t know about that.”
Although the hearing raised potential harms from AI, from electoral disinformation to conjectural dangers that don’t yet exist, such as self-aware AI, generative AI systems like ChatGPT, which inspired the hearing, drew the most attention. Several senators argued they could increase inequality and monopolization. The only way to guard against that, said Sen. Cory Booker, a New Jersey Democrat who has sponsored AI regulation in the past and supports a federal ban on facial recognition, is for Congress to establish rules of the road.