U.S., global partners release guidelines for secure AI system development

The U.S. Cybersecurity and Infrastructure Security Agency, U.K. National Cyber Security Centre and other global partners this week released recommended guidelines for secure artificial intelligence design, development, deployment and use.
"This very useful guide represents the peer-reviewed work of AI experts from over 20 international law enforcement and intelligence agencies," said John Riggi, AHA's national advisor for cybersecurity and risk. "AI clearly represents novel security and privacy risks, which may not be fully understood by developers and users of AI systems, such as the consequences of corrupted or harmful outputs due to 'adversarial machine learning.' As indicated in the guide, the best way to mitigate the emerging threats and risks related to the rapid expansion of AI in health care is to ensure that the developers of AI technology closely follow the principles of 'secure by design' and work closely with end users in the deployment and management of AI systems. It is also recommended that health care organizations form multidisciplinary AI governance and risk committees to identify, assess and manage risk related to AI technology at acquisition stages and throughout the life cycle of the technology. The NIST AI Risk Management Framework is another useful resource to supplement the above guide."
For more information on this or other cyber and risk issues, contact Riggi at …. For the latest cyber and risk resources and threat intelligence, visit aha.org/cybersecurity.