The White House Already Knows How to Make AI Safer

Second, it could direct any federal agency procuring an AI system that has the potential to “meaningfully impact [our] rights, opportunities, or access to critical resources or services” to require that the system comply with these practices and that vendors supply evidence of that compliance. This recognizes the federal government’s power as a customer to shape business practices. After all, it is the country’s largest employer, and it could use its purchasing power to dictate best practices for, say, the algorithms used to screen and select candidates for jobs.

Third, the executive order could require anyone receiving federal funds (including state and local entities) to ensure that the AI systems they use comply with these practices. This recognizes the important role of federal investment in states and localities. For example, AI is involved in many components of the criminal justice system, including predictive policing, surveillance, pre-trial detention, sentencing, and probation. Although most law enforcement practices are local, the Justice Department offers federal grants to state and local law enforcement agencies and could attach conditions to those funds stipulating how the technology is used.

Finally, this executive order could direct agencies with regulatory authority to update and expand their rulemaking to cover processes within their jurisdiction that involve AI. Some early efforts to regulate companies that use AI in medical devices, hiring algorithms, and credit scoring are already underway, and these initiatives could be expanded further. Worker surveillance and property valuation systems are just two examples of areas that would benefit from this kind of regulation.

Of course, the regime of testing and monitoring AI systems described here is likely to raise a range of concerns. Some may argue, for example, that other countries will overtake us if we slow ourselves down by putting such guardrails in place. But other countries are busy passing their own laws that place sweeping restrictions on AI systems, and any American company wishing to operate in those countries will have to abide by their rules. The EU is in the process of passing a comprehensive AI law that includes many of the provisions outlined above, and even China is introducing limits on commercially deployed AI systems that go far beyond anything we are currently prepared to consider.

Others may worry that this extensive set of requirements would be hard for a small business to comply with. This can be addressed by tying the requirements to the degree of impact: software that can affect the livelihoods of millions of people should be thoroughly vetted, regardless of how large or small the developer is. An AI system that individuals use for recreational purposes should not be subject to the same restrictions.

There are also likely to be concerns about the practicality of these requirements. Here again, it is important not to underestimate the federal government’s power as a market maker. An executive order that calls for testing and validation frameworks will create incentives for companies looking to translate best practices into viable commercial testing regimes. The responsible-AI sector is already filling up with firms that offer algorithmic auditing and evaluation services, industry consortia that issue detailed guidelines vendors are expected to comply with, and large consulting firms that offer guidance to their clients. And nonprofit, independent organizations like Data & Society (disclaimer: I sit on their board of directors) have set up entire labs to develop tools that assess how AI systems affect different populations.

We have done the research, built the systems, and identified the harms. There are established methods to ensure that the technology we develop and deploy benefits us all while reducing harm to those who already suffer in a deeply unequal society. The time for study is over; now the White House needs to issue an executive order and take action.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at ideas@wired.com.
