The United States, the United Kingdom, Australia and 15 other countries have issued global guidelines to help protect AI models from tampering, urging companies to make their models “secure by design.”
On Nov. 26, the 18 countries released the 20-page document outlining how AI firms should handle their own cybersecurity when developing or using AI models, stating that "security can often be a secondary consideration" in the fast-paced industry.
The guidelines are mostly general recommendations, such as keeping a tight leash on the infrastructure behind AI models, monitoring for tampering with models before and after release, and training staff on cybersecurity risks.
Exciting news! We joined @NCSC and international partners to develop the "Guidelines for Secure AI System Development"! This is operational collaboration in action for secure AI in the digital age: https://t.co/DimUhZGW4R #AISafety #SecureByDesign pic.twitter.com/e0sv5ACiC3
– Cybersecurity and Infrastructure Security Agency (@CISAgov) 27 November 2023
The guidelines did not address some contentious issues in AI, such as what controls should be placed around the use of image-generating models and deepfakes, or around data collection and its use in training models, an issue that has seen several AI firms sued on claims of copyright infringement.
"We are at a turning point in the development of artificial intelligence, which may be the most important technology of our time," U.S. Homeland Security Secretary Alejandro Mayorkas said in a statement. "Cybersecurity is the key to building AI systems that are safe, secure, and trustworthy."
Related: EU tech alliance warns of over-regulating AI before finalizing EU AI Act
The guidelines follow other recent government initiatives affecting AI, including governments and AI firms meeting at the AI Safety Summit in London earlier this month to coordinate an agreement on AI development.
Meanwhile, the European Union is hashing out the details of its AI Act to oversee the space, and U.S. President Joe Biden issued an executive order in October setting standards for AI safety and security, though both have seen pushback from the AI industry, which claims they could stifle innovation.
Other co-signers of the new “Secure by Design” guidelines include Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea and Singapore. AI firms including OpenAI, Microsoft, Google, Anthropic and Scale AI also contributed to developing the guidelines.
Magazine: AI Eye: Real Uses of AI in Crypto, Google’s GPT-4 Rival, AI Edge for Bad Employees