AI on Both Sides of the Cyber Battlefield: What 2026 Means for Public‑Sector Cybersecurity
Artificial intelligence (AI) is reshaping both sides of the ongoing cybersecurity battle. On one side, as Cyber Security News reports, attackers are rapidly adopting AI tools that can write shockingly convincing phishing emails and even carry out cyberattacks on their own. The article warns that AI has become “both the primary weapon and the ultimate shield in global cyber warfare,” accelerating the speed and scale of attacks in ways traditional defenses cannot match. On the other side, a new report from McKinsey & Company explains how organizations are beginning to deploy their own AI “agents” — software that doesn’t just answer questions but takes actions inside systems and can “independently perform complex tasks at machine speed” — adoption of which is projected to double over the next year. What happens when AI is used both to attack and to defend public pension plans? How do we protect ourselves when cyberattacks become faster and more automated? And what new risks emerge when our own systems begin relying on AI tools that act independently? Don’t miss NCTR’s upcoming webinar, “The Intersect between Cybersecurity and AI,” on Wednesday, April 1, at 3:00 p.m. ET to learn more about this important subject. Register today.
In January, Cyber Security News published its “Cybersecurity Predictions 2026.” It begins by pointing out that “[a]s artificial intelligence becomes deeply embedded in enterprise operations and cybercriminal arsenals alike, the Cybersecurity Predictions 2026 landscape reveals an unprecedented convergence of autonomous threats, identity-centric attacks, and accelerated digital transformation risks.”
“The stakes have never been higher,” the article stresses, emphasizing that ransomware victims are projected to increase by 40 percent compared to 2024; third-party breaches are expected to double to 30 percent of all incidents; and AI-driven attacks are expected to dominate 50 percent of the threat landscape. “Organizations face a fundamental shift from reactive security to predictive resilience,” the article underscores.
The article goes on to discuss how cybercriminals will use AI in 2026, saying the most dramatic shift will be increased use of autonomous malware — malicious software that plans and executes attacks without human help. These systems are described as “self‑directed” and able to “autonomously plan” cyberattacks in real time. Experts predict they will steal data “100 times faster than human attackers.” This means traditional defenses — which rely on humans responding — will very likely struggle to keep up.
One way this is occurring is through the use of deepfakes — fake videos, images, or audio recordings created by AI that look and sound real — helping AI phishing become nearly impossible to spot, the article warns. Why? The article explains that phishing emails now use AI to “scrape public profiles” — that is, a computer program automatically collects information that someone has posted publicly online, usually from places like LinkedIn, the public sections of Facebook, and organizations’ websites. This allows cybercrooks to mimic writing styles so well that they are “indistinguishable from legitimate communications.” Furthermore, Deepfake‑as‑a‑Service — Microsoft Copilot describes it as the criminal version of hiring a graphic designer, except what you’re buying is a realistic fake video or voice meant to fool someone — is increasingly prevalent and was used in “over 30 percent of high‑impact corporate impersonation attacks,” with 62 percent of organizations experiencing deepfake incidents last year, Cyber Security News reports.
[These deepfakes are no longer novelties — they work. The article cites the 2024 $25 million Arup scam, a major fraud incident in which criminals used a deepfake video call to impersonate a senior executive at Arup, the global engineering and consulting firm. During the call, the attackers created a real‑time deepfake of the company’s Chief Financial Officer, making it look and sound exactly like the real person. The fake CFO instructed an employee to transfer money for what appeared to be a legitimate business purpose. Because the deepfake was so convincing — matching the executive’s face, voice, and mannerisms — the employee believed the request was authentic. As a result, the company was tricked into sending $25 million to the attackers!]
Cyber Security News also reports that identity theft has become the main way hackers break in. In fact, in most cyberattacks today, hackers do not “break in” — they “log in.” That is, instead of exploiting a technical flaw, attackers simply steal someone’s username and password (their “credentials”) and use those real, legitimate login details to access a system. Consequently, three out of four breaches happen because someone’s login information was stolen, guessed, or tricked out of them, and not because a firewall failed or a system was hacked.
Complicating the situation, the article also notes that “machine identities” — automated accounts used by software — are “exploding” as well. For example, by 2026, “machine identities will outnumber human employees by 82 to 1,” making identity theft far easier and far more damaging.
Turning to ransomware, the article explains how it has evolved into what Cyber Security News calls “multi‑stage extortion,” shifting from simple file‑locking to multi‑stage operations involving data theft, deepfake blackmail, and operational disruption. And because AI‑driven ransomware can “reason, plan, and act autonomously,” it can often adapt faster than defenders. The article predicts a “40 percent increase in publicly named victims” and warns that ransomware attacks will soon occur “every 2 seconds by 2031.”
Cloud and supply chain weaknesses are also expanding the attack surface. As Cyber Security News points out, third‑party breaches (supply chain/vendor attacks) have doubled to 30 percent, with 80 percent of data breaches in 2026 expected to involve insecure APIs. [API stands for Application Programming Interface. Microsoft Copilot describes this as a digital “bridge” that lets two computer systems talk to each other and share information. As it puts it, “Think of it like a secure window between two rooms. If the window is built well, only the right information passes through. If the window is left open or poorly locked, someone can climb through.” Everyday examples of APIs include a weather app pulling today’s forecast, a plan’s vendor system connecting to its internal database, and a retirement system pulling payroll data from a school district.]
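For readers who want to see the “secure window” idea in miniature, here is a toy sketch in Python. It is purely illustrative — the function names, district ID, and API key are made up for this example and do not come from any real system or vendor — but it shows the difference between an endpoint that checks a credential before answering and one that hands data to anyone who asks.

```python
# Toy illustration of the "secure window" behind an API. Plain Python
# functions stand in for a real web service; all names are hypothetical.

PAYROLL_RECORDS = {"district-42": {"employee_count": 310, "period": "2026-01"}}
VALID_API_KEYS = {"key-issued-to-vendor"}  # credentials the system has issued

def payroll_api(district_id: str, api_key: str):
    """A well-built 'window': the caller must present a valid key."""
    if api_key not in VALID_API_KEYS:
        return {"status": 401, "error": "unauthorized"}  # window stays shut
    record = PAYROLL_RECORDS.get(district_id)
    if record is None:
        return {"status": 404, "error": "not found"}
    return {"status": 200, "data": record}

def insecure_payroll_api(district_id: str):
    """The 'open window': no credential check at all."""
    return {"status": 200, "data": PAYROLL_RECORDS.get(district_id)}

# A legitimate vendor call succeeds; an attacker without the key is refused.
print(payroll_api("district-42", "key-issued-to-vendor")["status"])  # 200
print(payroll_api("district-42", "stolen-or-guessed")["status"])     # 401
# But the insecure endpoint serves the same data to anyone who asks.
print(insecure_payroll_api("district-42")["status"])                 # 200
```

The point of the sketch is simply that an “insecure API” in the article’s sense is the second function: the data behind it may be perfectly well protected everywhere else, yet the open window defeats all of it.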
The article also touches on quantum computing as a future threat but notes that the risk starts now. As it points out, quantum computers are not breaking encryption yet, but attackers are already stealing encrypted data today so they can decrypt it later. The article warns of “‘harvest now, decrypt later’ attacks” targeting sensitive long‑term data such as medical records and government data. The point is that organizations will need to shift to “post‑quantum” encryption well before quantum computers mature.
There is another aspect of cybercrime that bears on public pension plans’ necessary operations: investments. Cyber Security News explains that critical infrastructure and IoT devices are at high risk, warning that cybercrime will be “the foremost disruptive threat to industrial control systems” such as energy, water, and transportation networks. Disruption in these areas affects the underlying assets in which plans are invested as well as global financial markets.
As for “IoT,” this stands for the “Internet of Things.” An IoT device is any everyday object that connects to the internet or to other devices to send or receive data. Microsoft Copilot says to think of IoT devices as “smart” versions of ordinary things. Everyday examples include smart thermostats (like Nest), smart speakers (Alexa, Google Home), and doorbell cameras (Ring). They also include public security cameras, smart badges and access cards, connected printers, and building sensors. The Cyber Security News article warns that IoT devices are a major risk because there are billions of them (more than 27 billion connected devices by 2025); many are cheaply made and poorly secured; they often run outdated software; they are connected to critical systems; and attackers can use them as “back doors” into larger networks. In other words, every connected device is a tiny computer — and every tiny computer is a potential entry point for attackers.
Finally, the Cyber Security News piece discusses how regulations are tightening and breach costs are rising. For example, the article notes that compliance is becoming a “strategic business imperative,” not just a checkbox exercise, and that breach costs in the U.S. have hit “$10.22 million” on average — the highest ever recorded.
In summary, the article argues that AI has fundamentally changed cybersecurity. Criminals are using AI to attack faster and smarter, while organizations must use AI to defend themselves. Its final message is that organizations must accept that some attacks will get through, and that the new goal should therefore be to recover quickly, not prevent every attack. As the article puts it, “survival depends less on stopping every attack and more on recovering faster than attackers can adapt.”
So much for “gloom and doom.” But as the saying goes, “forewarned is forearmed.” A recent report by McKinsey & Company — “Securing the agentic enterprise: Opportunities for cybersecurity providers” — provides insights into how organizations can use AI as a critical component of their defenses. [McKinsey & Company is an American multinational strategy and management consulting firm that offers professional services to corporations, governments, and other organizations. Founded in 1926, McKinsey is the oldest and largest of the “MBB” management consultants — the three most prestigious global management consulting firms: M for McKinsey & Company; B for Boston Consulting Group (BCG); and B for Bain & Company. The firm mainly focuses on the finances and operations of its clients.]
In summary, McKinsey’s article focuses on how AI is starting to act inside organizations, not just outside them. Instead of simply generating text or answering questions, companies are beginning to use “agentic AI” — software that McKinsey describes as AI that can “plan, execute, and complete tasks autonomously” — which means these systems behave less like tools and more like junior employees, as Microsoft Copilot explains.
Adoption of agentic AI is expected to more than double in the next year, according to McKinsey’s report — so whether or not your system is there yet, it is on its way! Moreover, you do not need agentic AI today to be affected tomorrow by bad actors’ use of it.
However, this growing move to agentic AI can also create new cybersecurity challenges. Organizations will now need to protect AI systems that log in, move files, approve tasks, and interact with sensitive data. McKinsey notes that this will require a major overhaul of three areas:
- Identity management, because companies must track not only human users but also thousands of AI “identities.”
- Real‑time monitoring, because AI agents act too quickly for traditional security tools to keep up.
- Security operations, which McKinsey predicts will increasingly rely on AI to handle routine tasks, with humans supervising instead of doing everything manually.
The article’s message is that as organizations adopt AI that can act independently, cybersecurity must evolve to keep these systems safe, predictable, and under control.
For example, as agentic AI — through “AI Agents” — begins acting like digital employees, this creates new risks. If an AI agent receives manipulated inputs, it could “override approval workflows” or “distribute confidential information externally,” fundamentally changing an organization’s risk profile, the McKinsey report warns.
The new report also discusses “Identity Management,” saying this must expand to machine identities. That is, traditional identity systems were built for humans. Now, organizations must track thousands of short‑lived AI identities that appear and disappear in seconds. McKinsey warns of “autonomous privilege escalation,” where agents accumulate unintended permissions.
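To give a feel for what a “short‑lived machine identity” means in practice, here is a toy Python sketch. It is not any vendor’s actual product — the class name, agent name, and lifetimes are invented for illustration — but it shows the core idea: unlike a human password that lives until someone changes it, an agent’s credential can be minted for a single task and expire on its own moments later, which is why identity tracking for agents looks so different.

```python
# Hypothetical sketch of a short-lived machine identity for an AI agent.
# All names are illustrative; real identity platforms differ in detail.
import secrets
import time

class MachineIdentity:
    def __init__(self, agent_name: str, ttl_seconds: float):
        self.agent_name = agent_name
        self.token = secrets.token_hex(16)  # random per-identity credential
        self.expires_at = time.monotonic() + ttl_seconds

    def is_valid(self) -> bool:
        # Unlike a human password, this credential dies on its own schedule.
        return time.monotonic() < self.expires_at

# An identity minted for one brief task...
ident = MachineIdentity("report-builder-agent", ttl_seconds=0.05)
assert ident.is_valid()        # usable immediately
time.sleep(0.1)
assert not ident.is_valid()    # ...and already gone a moment later
```

Multiply this by thousands of agents appearing and disappearing every second, and the scale of the identity‑management problem McKinsey describes becomes easier to picture.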
The McKinsey report also underscores that security must shift to real‑time monitoring. McKinsey notes that 75 percent of CISOs — Chief Information Security Officers, the senior executives responsible for protecting an organization’s data, systems, and technology from cyber threats — are seeking protection against AI input manipulation, but says current tools “lack sufficient sophistication.” Organizations will therefore need continuous, behavior‑based monitoring, McKinsey warns.
Finally, the new report says that security operations centers will need to become AI‑driven, predicting that 35 percent of organizations expect AI agents to replace Tier‑1 SOC analysts. The future SOC will be “human‑supervised but machine‑operated,” with AI handling routine detection and response. [A Tier‑1 SOC analyst is the front‑line cybersecurity responder in a Security Operations Center (SOC), who watches the alarms, reviews alerts, and decides whether something suspicious needs to be investigated further.]
In summary, as organizations adopt AI that can act independently (agentic AI), cybersecurity must evolve to keep these systems safe, predictable, and auditable. In short, the use of AI, particularly agentic AI, in other areas of a system’s operations can directly implicate a system’s cybersecurity risk profile.
However, it must also be remembered that just as AI is giving cybercriminals new tools, it is also becoming one of the most powerful defensive technologies available to organizations. Modern security platforms now use AI to analyze enormous volumes of network activity in real time, spotting unusual behavior far faster than human analysts could. Many systems can automatically isolate suspicious accounts, shut down malicious connections, or flag anomalies that suggest credential theft — a critical capability in a world where 75 percent of breaches involve valid logins. AI is also improving email security by detecting subtle linguistic patterns in phishing attempts, even when attackers use scraped profile data or deepfake audio to mimic real employees. And as McKinsey notes, AI agents are increasingly being deployed inside security operations centers to handle routine triage, reduce alert fatigue, and escalate only the threats that truly require human judgment. In short, while AI introduces new risks, it also offers organizations a way to defend themselves at machine speed — the only pace that can realistically keep up with AI‑driven attackers.
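For the curious, the “spotting unusual behavior” idea can be boiled down to a tiny sketch. This is a deliberately simplified toy, not a real security product: it just flags an account whose daily login count jumps far above its own recent baseline, which is one crude form of the behavior‑based monitoring described above.

```python
# Toy behavior-based anomaly check (illustrative only): flag a day whose
# login count sits far above the account's own historical average.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Return True if today's count is more than `threshold` standard
    deviations above the account's baseline."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return today != baseline
    return (today - baseline) / spread > threshold

normal_week = [4, 5, 3, 6, 4, 5, 4]   # typical daily logins for one account
print(is_anomalous(normal_week, 5))   # False: an ordinary day
print(is_anomalous(normal_week, 60))  # True: possible credential abuse
```

Real platforms use far richer signals (time of day, location, device, sequence of actions), but the principle is the same: the system learns each account’s normal rhythm and raises a flag, at machine speed, when behavior breaks that rhythm.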
In conclusion, AI appears to be rewriting the rules of cybersecurity faster than most organizations can adapt. Attackers are using AI to automate and accelerate their operations, while organizations are beginning to rely on AI tools that act independently inside their systems. These two forces — external threats and internal transformation — are converging at the same moment. For public‑sector retirement systems, the challenge is not simply keeping up with new technologies but understanding how AI changes the nature of risk itself. Preparation — strengthening identity protections, improving vendor oversight, investing in resilience, and building the capacity to recover quickly when incidents occur — is key to success in this area.
The message from both articles is clear: AI will shape the future of cybersecurity whether we are ready or not. That is why you will not want to miss NCTR’s member-only webinar on April 1, 2026, at 3:00 p.m. ET, when Leigh Snell, NCTR’s Director of Federal Relations, will discuss this important intersect between cybersecurity and AI with his panelists, who will include:
- Gisela De San Roman, Sr. Consultant, Administration and Technology Consulting, Segal
- Viraj Singh, Director, Sales Engineering, Majesco
- Dearld Snider, Executive Director, Missouri PSRS/PEERS
Hear their recommendations on strengthening cybersecurity fundamentals and on using AI to enhance — not replace — those fundamentals. Register today!
