DHS Issues New Recommendations for Secure AI Deployment in US Infrastructure

TL;DR –

The U.S. Department of Homeland Security (DHS) released new recommendations for the secure development and deployment of artificial intelligence (AI) across all U.S. critical infrastructure, including healthcare and public health. The document, developed with numerous stakeholders, aligns with President Biden’s executive order on AI and serves as a guide for AI use into the next administration. Among the recommendations are that cloud infrastructure providers secure environments for AI development, AI developers adopt a Secure by Design approach, and critical infrastructure owners maintain strong cybersecurity practices that account for AI-related risks.


US Homeland Security Issues Recommendations for Safe AI Deployment in Critical Infrastructure

The US Department of Homeland Security (DHS) has released actionable recommendations for the secure development and deployment of artificial intelligence (AI) across US critical infrastructure sectors, including healthcare and public health. The guidelines were developed in collaboration with stakeholders from both the public and private sectors.

The move aligns with the executive order on AI that President Biden issued a year earlier. The document, titled Roles and Responsibilities Framework for AI in Critical Infrastructure, serves as a roadmap for AI use as these systems become integral to critical infrastructure.

Healthcare and Public Health is among the 16 critical infrastructure sectors that DHS has identified as essential to national and global safety, economic security, and public health. These sectors are increasingly adopting AI to enhance services, build resilience, and counter threats.

However, AI-related risks and vulnerabilities are also escalating, potentially exposing critical systems to failure or malicious manipulation. These risks must be anticipated and addressed given the potentially disastrous consequences of disruptions to interconnected systems.

DHS Secretary Alejandro N. Mayorkas, in a conference call, discussed the inception and activities of the Artificial Intelligence Safety and Security Board, which consists of leaders from OpenAI, Anthropic, AWS, IBM, Microsoft, Alphabet, Northrop Grumman, and others. The board's main concerns included the lack of common approaches to AI deployment, physical security flaws, and industry reluctance to share information.

The framework was structured around the layers of the AI supply chain and complements existing guidance from numerous agencies, including the White House and the AI Safety Institute. The board identified three main categories of AI vulnerabilities in critical infrastructure.

The guidelines propose actionable recommendations for key stakeholders along the AI supply chain. These stakeholders include cloud and compute infrastructure providers, AI developers, critical infrastructure owners and operators, civil society, and public sector entities. In this way, the framework defines how each player in the ecosystem can contribute to the secure deployment of AI in critical infrastructure.

Greg Garcia, executive director of the Health Sector Coordinating Council Cybersecurity Working Group, recently highlighted the need for healthcare organizations to fortify their defenses against increasingly sophisticated cyber threats, many of them enabled by AI. DHS's new framework on AI safety and security offers guiding principles for doing so.
