Managing Cybersecurity Risks in AI Use and Development
Adopting artificial intelligence (AI) technologies “brings both new opportunities and new cybersecurity risks,” a recent National Institute of Standards and Technology (NIST) concept paper points out. NIST is therefore developing guidance on how businesses can implement various types of AI systems securely and is seeking public input. The rapid growth of AI use in business environments such as public pension plans has created opportunities to improve workplace productivity, but it has also raised serious concerns about whether the technology can be implemented securely. This important topic will be among the issues discussed at a breakout session on “Artificial Intelligence and Plan Administration” at NCTR’s upcoming Annual Conference, October 4-7, in Salt Lake City. Registration is now open, so be sure not to miss this cutting-edge discussion!
NIST promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security. It develops cybersecurity standards, guidelines, best practices, and other resources to meet the needs of U.S. industry, federal agencies, and the broader public. As NIST notes, “[w]hile modern AI systems are predominantly software, they introduce different security challenges and risks than traditional software,” and the security of AI systems “is closely intertwined with the security of the IT infrastructure on which they run and operate.”
NIST offers guidelines on designing and implementing secure, trustworthy AI, including the AI Risk Management Framework (RMF), guidelines to manage misuse risk from advanced AI (draft), and a taxonomy of AI attacks and mitigations. But AI and cybersecurity stakeholders also want NIST to provide additional implementation-focused guidelines that build on existing resources and frameworks to improve the cybersecurity of AI systems.
Therefore, NIST is proposing to develop a series of “Control Overlays for Securing AI Systems” based on the NIST Special Publication (SP) 800-53 security controls, a comprehensive catalog of security and privacy controls for federal information systems and organizations. These controls, NIST explains, are “widely used beyond federal agencies, including by private sector entities seeking robust cybersecurity and privacy practices.” The “control overlays” will enable organizations to customize the controls (or control baselines) for a specific technology or system, offering “both the flexibility to meet unique requirements and a level of specificity that allows for consistent technical implementation.”
In short, NIST underscores that using the SP 800-53 controls “provides a common technical foundation for identifying cybersecurity outcomes, and developing overlays allows for customization and the prioritization of the most critical controls to consider for AI systems.” NIST also stresses that the overlays “will not be a comprehensive set of controls for securing an enterprise” but will assume that certain controls are already in place (e.g., organization-wide policies, procedures, and implementations of access control for datasets and services, account management, identification and authentication, configuration management, and incident response).
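To make the idea of “tailoring” a control baseline more concrete, here is a minimal sketch of how an overlay might raise the priority of, and attach AI-specific guidance to, a handful of SP 800-53 controls. The control identifiers are real SP 800-53 controls, but NIST has not yet published its overlay format; the data structure and field names below are illustrative assumptions only.

```python
# Illustrative sketch only: NIST has not yet published an overlay format.
# The control IDs are real SP 800-53 controls; the structure is assumed.

# A slice of an SP 800-53 control baseline an organization already implements.
baseline = {
    "AC-2": {"name": "Account Management", "priority": "normal"},
    "CM-2": {"name": "Baseline Configuration", "priority": "normal"},
    "IR-4": {"name": "Incident Handling", "priority": "normal"},
    "SI-4": {"name": "System Monitoring", "priority": "normal"},
}

# A hypothetical overlay for a generative AI use case: it prioritizes
# selected controls and attaches AI-specific supplemental guidance.
genai_overlay = {
    "SI-4": {
        "priority": "high",
        "supplemental": "Monitor model endpoints for prompt-injection attempts.",
    },
    "AC-2": {
        "priority": "high",
        "supplemental": "Restrict which accounts may query or fine-tune models.",
    },
}

def apply_overlay(baseline: dict, overlay: dict) -> dict:
    """Return a tailored control set: the baseline merged with the
    overlay's priorities and AI-specific supplemental guidance."""
    tailored = {cid: dict(ctrl) for cid, ctrl in baseline.items()}
    for cid, changes in overlay.items():
        tailored.setdefault(cid, {}).update(changes)
    return tailored

for cid, ctrl in apply_overlay(baseline, genai_overlay).items():
    print(cid, ctrl)
```

Even in this toy, the design point of the overlay concept is visible: the baseline remains a common technical foundation, while the overlay layers use-case-specific emphasis and guidance on top of it.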
In its concept paper, NIST proposes an initial set of five use cases for developers and organizations that use AI. “Each use case represents a category of common basic scenarios and addresses specific cybersecurity risks,” NIST explains.
[To better understand these “use cases,” it is necessary to understand the difference between generative AI and agent AI (also known as agentic AI).
According to Coursera — a global platform for online learning and career development that offers online courses and degrees from leading universities and companies — generative AI acts as a digital creator, producing content such as text essays, computer code, music compositions, and image designs in response to specific instructions and requests. It does not take action on its own. Examples include ChatGPT and Siri.
Agentic AI, by contrast, performs tasks autonomously based on predefined rules or constraints, acting more as a digital assistant than a creator. “Rather than just generating content, this type of AI can make decisions, take actions, and coordinate multistep actions without ongoing prompting or inputs,” explains Coursera. That is, instead of just generating an email, an agentic system can create the email, send it, schedule a meeting in response, and otherwise manage ongoing projects.
“You can remember this by thinking about how generative AI ‘generates,’ while agentic AI essentially acts as its own ‘agent,’” according to Coursera.]
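For readers who find code clearer than prose, the distinction can also be sketched in a few lines of Python. Everything below is hypothetical: generate() stands in for any generative model API, and send_email() and schedule_meeting() stand in for enterprise systems. The point is the control flow: generative AI returns one output per prompt, while an agentic system loops through decisions and actions on its own.

```python
# Purely illustrative: generate(), send_email(), and schedule_meeting()
# are hypothetical stand-ins for real model and enterprise APIs.

def generate(prompt: str) -> str:
    """Stand-in for a generative model: one prompt in, one output out."""
    return f"[draft based on: {prompt}]"

# Generative AI acts only when asked: it returns content, nothing more.
print(generate("Write an email inviting the team to Thursday's meeting"))

def send_email(body: str) -> None:
    print("email sent:", body)

def schedule_meeting(title: str) -> None:
    print("meeting scheduled:", title)

# Agentic AI wraps generation in a loop that decides on and executes
# actions toward a goal, with no further prompting from the user.
def run_agent(goal: str) -> None:
    plan = ["draft", "send", "schedule"]  # a real agent would plan dynamically
    body = ""
    for step in plan:
        if step == "draft":
            body = generate(goal)
        elif step == "send":
            send_email(body)
        elif step == "schedule":
            schedule_meeting("Thursday team meeting")

run_agent("Invite the team to Thursday's meeting and book a room")
```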
These NIST “use cases” are:
- Adapting and Using Generative AI – Assistant/Large Language Model. This is intended for organizations interested in using AI to create new content (e.g., text, images, audio, video) based on user prompts by learning from large datasets and identifying patterns in them. NIST says this use case will cover examples of generative AI used for internal business augmentation (e.g., creating summaries, analyzing data) by internal users.
- Using and Fine-Tuning Predictive AI. This is aimed at organizations using predictive AI systems to analyze historical data to inform decision-making (e.g., for business and service augmentation). Predictive AI uses statistical analytics and machine learning to analyze historical data and predict future outcomes, trends, or behaviors, powering recommendation services, classification services, and business workflow efficiency improvements through automated decision-making (e.g., resume review for hiring). NIST says that these use cases will address cybersecurity risks at three stages of the predictive AI life cycle: (i) model training, (ii) model deployment, and (iii) model maintenance (a minimal code sketch of these three stages follows this list). Each scenario will address a different business workflow example focused on the unique cybersecurity risks of using AI systems (e.g., an on-premises or third-party-hosted AI model, using proprietary or publicly available data).
- Using AI Agent Systems – Single Agent. This is intended for organizations interested in using AI agent systems to automate business tasks and workflows. (These AI agent systems have the capability for autonomous decision-making and can take action with limited human supervision to achieve complex goals. Characteristics of AI agent systems include the ability to understand context, reason, plan, adapt, and execute tasks.) This use case will cover examples of AI agent system use, such as (i) an “enterprise copilot,” connected to the user’s “personal enterprise environment” (emails, files, calendar, or internal enterprise systems) and assisting with common tasks such as creating calendar events, streamlining workflows, and providing contextual insights; and (ii) a “coding assistant,” which understands the enterprise codebase and automates software development through natural language commands, for example by connecting to the source code repository to enable the editing of files; interacting with a code repository to create and commit pull requests, resolve merge conflicts, and perform similar actions; developing, executing, and fixing unit and integration tests; browsing proprietary and web resources; and assisting in the deployment of software.
- Using AI Agent Systems – Multi Agent. This is intended for organizations interested in using AI agent systems to automate complex business tasks and workflows with multiple interacting workstreams. Multi-agent AI systems share the characteristics of single-agent systems (the ability to understand context, reason, plan, adapt, and execute tasks) but add multiple agents that coordinate their actions and work in concert, with limited human supervision, to achieve complex goals.
- Security Controls for AI Developers. This is directed at AI developers with the goal of allowing for effective risk management built upon existing organizational practices.
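As promised in the predictive AI use case above, here is a deliberately tiny, pure-Python sketch of the three life-cycle stages NIST plans to address: training, deployment, and maintenance. The threshold “model” and the data are hypothetical illustrations, not a real hiring system, but each stage maps to distinct cybersecurity risks (e.g., poisoned training data, a tampered deployed model, unmonitored drift).

```python
# Hypothetical illustration of the three predictive AI life-cycle stages.

# (i) Model training: learn a decision rule from historical, labeled data.
# Here, pairs of (years_of_experience, was_hired) from past decisions.
history = [(1, 0), (2, 0), (6, 1), (8, 1), (9, 1)]
hired = [x for x, y in history if y == 1]
threshold = min(hired)  # the simplest possible "learned" rule

# (ii) Model deployment: score new cases with the trained rule
# (e.g., an automated first pass over incoming resumes).
def predict(years_of_experience: float) -> bool:
    return years_of_experience >= threshold

print(predict(7))  # True

# (iii) Model maintenance: track live outcomes and retrain when the
# rule's accuracy drifts away from what was observed in training.
live = [(7, 1), (3, 0)]  # (input, actual outcome) observed in production
accuracy = sum(predict(x) == bool(y) for x, y in live) / len(live)
print(f"live accuracy: {accuracy:.0%}")
```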
This new “Security Control Overlays for AI Systems” project will utilize a newly launched “AI Slack” channel (#nist-overlays-securing-ai), which is a secure “hub” where cybersecurity and AI communities can discuss the development of these overlays. Interested parties are therefore encouraged to join the “NIST AI Overlay Slack Channel” to get updates, engage in facilitated discussions with the NIST principal investigators and other subgroup members, share ideas, provide real-time feedback, and contribute to overlay development.
In addition, NIST is asking for public feedback on the proposed high-level use cases and potential additional future work. Specifically, NIST seeks feedback on the following:
- How well do the use cases capture representative types of AI adoption for user communities, and what potential gap areas need to be addressed?
- To what extent do the architectures and example use cases reflect real-world adoption patterns, and where might there be gaps or issues?
- How should NIST prioritize overlay development for the use cases?
- Are there additional areas or use cases to consider for future work?
Feedback on these questions and on the concept paper should be sent to overlays-securing-ai@list.nist.gov and can also be shared in the Slack channel. NIST says that, “based on feedback,” it will start the first use case with the goal of issuing a public draft for comment in early FY26.
IT professionals responsible for AI, security, identity management, compliance, and operations at enterprise companies, representing all seniority levels, were recently invited to participate in a survey on their company’s use of AI agents. The survey was commissioned by SailPoint, an enterprise identity security company based in Austin, Texas, conducted by Dimensional Research, and released on May 28, 2025. Of its 353 responses, 35 percent came from firms in the “Technology – Software” and “Financial Services and Insurance” sectors. The survey found that 82 percent of companies now utilize AI agents, with over half reporting that these agents access sensitive data daily. Furthermore, 80 percent of respondents have experienced unintended actions from their AI agents, including inappropriate data sharing and unauthorized system access. Finally, while 92 percent recognize AI agent governance as crucial to enterprise security, only 44 percent have implemented relevant policies.
The SailPoint study further points out that, “unlike traditional identities, AI agents often require broader privileges across more systems, data, and services.” They are also “more difficult to govern, with rapid access typically provisioned directly within IT.” Yet, despite these concerns, only about 60 percent of companies employ identity security solutions to manage this access.
“Agentic AI is both a powerful force for innovation and a potential risk,” according to Chandra Gnanasambandam, EVP of Product and CTO at SailPoint. “These autonomous agents are transforming how work gets done, but they also introduce a new attack surface,” he stresses.
Why? “They often operate with broad access to sensitive systems and data, yet have limited oversight,” SailPoint underscores. Gnanasambandam also explains that this “combination of high privilege and low visibility creates a prime target for attackers,” and he warns that, as organizations expand their use of AI agents, “they must take an identity-first approach to ensure these agents are governed as strictly as human users, with real-time permissions, ‘least privilege’ and full visibility into their actions.” [The principle of “least privilege” is an information security concept, widely considered a cybersecurity best practice, in which a user is given the minimum levels of access, or permissions, needed to perform their job functions. “Least privilege” enforcement is likewise intended to ensure that a non-human tool has the access it requires, and nothing more.]
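As a concrete (and hypothetical) illustration of what least-privilege enforcement for an AI agent can look like, the sketch below grants an agent only the permissions its task requires, checks every attempted action against that grant, and denies everything else by default. The permission names and the gate itself are assumptions for illustration, not any vendor’s actual API.

```python
# Hypothetical sketch of least-privilege enforcement for an AI agent:
# grant only what the task requires, deny everything else by default,
# and log every allowed action for full visibility.

class PermissionDenied(Exception):
    pass

class LeastPrivilegeGate:
    def __init__(self, granted: set):
        self.granted = granted  # the minimum set the agent's task needs

    def authorize(self, action: str) -> None:
        if action not in self.granted:  # deny by default
            raise PermissionDenied(f"agent may not: {action}")
        print(f"allowed and logged: {action}")

# A calendar-scheduling agent needs to read the calendar and create
# events, and nothing more.
gate = LeastPrivilegeGate({"calendar:read", "calendar:create_event"})

gate.authorize("calendar:create_event")    # permitted: part of its job
try:
    gate.authorize("hr:read_salary_data")  # refused: outside its grant
except PermissionDenied as err:
    print(err)
```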
In summary, with agentic AI having access to customer information, financial data, legal documents, supply chain transactions, and other highly sensitive data, the SailPoint survey results underscore the importance of the new NIST project. Also, as public pension plans are increasingly turning to AI in the administration of their systems, it is critically important to have clear goals and objectives regarding AI use, particularly agentic AI.
That is why NCTR is devoting a breakout session to this very important subject at its upcoming Annual Conference, October 4-7, in Salt Lake City, Utah. The session will be moderated by Dearld Snider, Executive Director, Missouri PSRS/PEERS, and a member of NCTR’s executive committee; panelists will include Nate Haws, Associate Principal Consultant and AI Researcher, Linea Solutions, and Jeff Adair, Executive Director of Pension Sales, Sagitec. There is still time to register, and remember that the hotel room block rate closes on September 11, 2025.
- NIST: “NIST Releases Control Overlays for Securing AI Systems Concept Paper”
- NIST Special Publication (SP) 800-53: Security and Privacy Controls for Information Systems and Organizations
- SailPoint: “AI Agents: the New Attack Surface”
- BusinessWire: “SailPoint Research Highlights Rapid AI Agent Adoption, Driving Urgent Need for Evolved Security”
