The White House has announced a comprehensive policy for managing the risks of artificial intelligence (AI). Outlined in a White House fact sheet, the policy requires federal agencies to establish specific safeguards for AI applications that could affect the rights or safety of Americans. The move responds to the rapid expansion of AI technology and the need to ensure its safe and ethical use. This article examines the key directives issued by the Biden administration, highlighting the measures in place to protect the public from potential harm while harnessing the full benefits of AI.

The Biden administration's latest directive requires federal agencies to appoint a chief AI officer within 60 days, disclose their AI usage, and integrate protective measures. These efforts build on President Biden's executive order on AI from October 2023. Vice President Kamala Harris underscored the administration's focus on safeguarding the public from the potential risks posed by AI, emphasizing the ethical and societal duty to ensure its responsible adoption and advancement. However, certain AI use cases within the Department of Defense will be exempt from the disclosure inventory, as sharing them would contradict existing laws and government-wide policies.


Moreover, the memorandum stipulates that by December 1, agencies must establish specific safeguards for AI applications that could affect the rights or safety of Americans. For instance, travelers should have the option to opt out of the facial recognition technology used by the Transportation Security Administration at airports. Agencies unable to implement these safeguards must discontinue using the AI system, unless agency leadership can justify how ceasing its use would heighten risks to safety or rights or hinder critical agency operations.


The Office of Management and Budget's (OMB) recent AI directives align with the administration's blueprint for an "AI Bill of Rights" from October 2022 and the National Institute of Standards and Technology's AI Risk Management Framework from January 2023. These initiatives collectively emphasize building reliable AI systems and seek input on enforcing compliance and best practices among government contractors that supply technology. The administration also aims to recruit 100 AI professionals into the government by the summer as part of the "talent surge" outlined in the October executive order.


The Biden administration's cautious yet proactive approach reflects its ongoing efforts to ensure the responsible and efficient use of AI across the federal government. This comprehensive policy for managing AI risks is a significant step toward fostering public trust and safety while leveraging the transformative potential of AI technology. As the administration continues to shape the regulatory landscape for AI, these directives are poised to have a far-reaching impact on the development and deployment of AI systems in the public domain.


In conclusion, the Biden administration's initiative to manage AI risks reflects the critical balance between harnessing the potential of AI technology and protecting public interests and safety. By establishing specific safeguards, mandating disclosure of AI usage, and aligning with existing frameworks, the administration is paving the way for the responsible adoption and regulation of AI in the public sector. Combined with the recruitment of AI professionals and other ongoing initiatives, these measures represent a comprehensive approach to shielding the public from the potential risks associated with AI.


(AMAKA NWAOKOCHA, COINTELEGRAPH, 2024)