In a landmark move, a coalition of 18 countries, including the United States, the United Kingdom, and Australia, has jointly released global guidelines to bolster the cybersecurity of artificial intelligence (AI) models. The 20-page document, unveiled on November 26, advocates for a "secure by design" approach, stressing the significance of integrating cybersecurity considerations throughout the entire lifecycle of AI development.


The guidelines offer AI firms a set of general recommendations to strengthen the security posture of AI models. Key suggestions include maintaining stringent control over AI model infrastructure, implementing robust monitoring mechanisms to detect tampering both before and after release, and ensuring that staff receive adequate training on cybersecurity risks.


Notably absent from the guidelines are specific references to contentious issues within the AI space, such as controls around image-generating models, deepfakes, and the data collection methods used to train models. The document instead addresses the overarching need for stronger cybersecurity practices across the rapidly evolving AI industry.


U.S. Secretary of Homeland Security Alejandro Mayorkas highlighted the significance of cybersecurity in building trustworthy AI systems, acknowledging the pivotal role of AI as a transformative technology. The guidelines align with ongoing government initiatives globally, including an AI Safety Summit in London where governments and AI firms collaborated to reach agreements on AI development.


While the European Union is actively working on its AI Act to regulate the AI space, and U.S. President Joe Biden issued an executive order in October setting standards for AI safety and security, the guidelines from this global coalition mark a collaborative effort to address cybersecurity concerns without stifling innovation.


The "secure by design" guidelines received contributions from an array of countries, including Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea, and Singapore. Additionally, leading AI firms such as OpenAI, Microsoft, Google, Anthropic, and Scale AI played a pivotal role in the development of these guidelines, reflecting a multi-stakeholder approach to advancing global AI cybersecurity standards.


(JESSE COGHLAN, COINTELEGRAPH, 2023)