Security eMagazines

October 2025


Integrated Solutions

AI, Compliance, and a New Era of Cybersecurity

Cybersecurity compliance has been forever changed by artificial intelligence.

By Wayne Dorris, Contributing Writer

Kwanchanok Taen-on / iStock / Getty Images Plus via Getty Images

No matter the industry, AI has impacted it in one way or another. Particularly because of the digitization of essentially everything, AI has been integrated into facets of organizations that might not be immediately obvious. Whether in customer support, data processing, or authentication systems, AI is quietly shaping the way businesses operate.

But with this widespread digitization come new vulnerabilities. AI makes life easier for law-abiding organizations, but it is not beholden to the good side: bad actors are using the very same technology to probe for weaknesses and exploit them at unprecedented speed. Because of this, and the many other evolving factors influencing cybersecurity, cyber compliance and close adherence to legislation have become mission-critical.

Following cybersecurity standards is good business, regardless of whether such compliance is mandated or enforced by a government entity. In fact, organizations should strive to have internal policies that go above and beyond what any legislation may require, so as to have the best protection possible. Cutting corners used to result in unhappy customers; today, it can also result in financial penalties, operational breakdowns, and legal liability.

Historical Precedent

Cybersecurity standards, and the legislation that enforces them, have almost always been reactionary — built on the fallout from major breaches. Two of the most significant incidents in recent memory — the Mirai botnet of 2016 and the SolarWinds breach of 2020 — shaped modern cybersecurity policy in both the U.S. and Europe.

The Mirai attack weaponized unsecured IoT devices, taking down major portions of the internet. In response, the U.S. issued Executive Order 13800 in May 2017, which sought to strengthen the cybersecurity of federal networks and critical infrastructure. The National Institute of Standards and Technology (NIST) followed with two publications: NISTIR 8259A (IoT Device Cybersecurity Capability Core Baseline) and NISTIR 8259B (IoT Non-Technical Supporting Capability Core Baseline). Developed collaboratively by government and industry, these standards were later unified in NISTIR 8425 and then codified as requirements under Executive Order 14028 in 2021, this time following the SolarWinds supply chain compromise.

These cases mark a clear pattern: each major cyber event results in more rigorous standards. Today, with AI not just proliferating, but evolving, history is repeating itself.

New Vulnerabilities from AI

The latest evolution in AI — Agentic AI — is already introducing new risks, and attackers are taking advantage. Agentic AI systems are designed to act autonomously, carrying out tasks across applications and networks with minimal human intervention. But this autonomy also opens the door to new forms of exploitation.

One key vulnerability lies in user authentication. When an identity and access management platform authenticates through an AI agent rather than directly with the customer, that "middle layer" creates a new attack surface. Attackers no longer need to trick people through traditional social engineering; they only need to confuse the AI agent. Once the agent is confused, its guardrails can collapse, it can hallucinate, and the attacker can breach the network.

“Each major cyber event results in more rigorous standards. Today, with AI not just proliferating, but evolving, history is repeating itself.”

The danger posed by AI agents is compounded by the speed of implementation. Organizations are racing to implement these systems to remain competitive, but they don’t always fully understand what they’re putting in place. Proper risk assessments fall by the wayside, and implementation happens in haste. In this rush, organizations can potentially expose themselves to significant security risks — and regulators are watching.

Recent Strides in Legislation

Just as legislation followed the Mirai and SolarWinds incidents, today’s AI-driven cyber landscape is ushering in new policies around the world. Some focus on the ethics of AI use, while others are attempting to proactively and directly address cybersecurity risks.

In the U.S., one notable step came on July 18, 2023, when the Biden-Harris Administration announced the U.S. Cyber Trust Mark — a cybersecurity certification and labeling program. This initiative helps consumers more easily identify smart devices that are safer and less vulnerable to attacks, while pushing manufacturers to raise their security standards across the board.

In Europe, the Cyber Resilience Act, working in tandem with the preexisting General Data Protection Regulation (GDPR), will come into force next year. Its implications are sweeping: if a manufacturer of software or hardware is found to have failed to comply with cybersecurity rules, that manufacturer can be fined in addition to suffering the reputational and financial damages of the breach itself. While European in origin, the legislation will affect global companies that sell hardware or software in the EU, forcing multinational manufacturers to elevate their compliance practices. These frameworks don't just influence Europe; they set de facto global standards.

Common Mistakes Organizations Are Still Making

Despite the abundance of guidance, frameworks, and legislation, organizations continue to make the same mistakes. One of the toughest challenges remains third-party risk.

The SolarWinds breach is a perfect example: attackers gained access to far more than they ultimately exploited, but the foothold was established through a compromised vendor relationship. Too often, companies ignore or downplay the importance of rigorous vendor risk assessments.

Instead of embracing new tools for managing risk, many organizations cling to outdated, manual processes. Ironically, this is where AI itself could provide a solution rather than a vulnerability — if applied thoughtfully. AI-driven risk assessments can help organizations identify weak links and measure exposures more effectively.

Unfortunately, speed continues to be the enemy of security. Because getting new technology implemented quickly is a priority, organizations may lean on artifacts like penetration test reports when evaluating vendors. But a penetration test is only as good as the scope it was written to cover, and vendors often share only high-level summaries that omit proper scope and depth. AI only accelerates this cycle, making oversight even more critical.

And now, the consequences of neglect are higher. Under legislation like the Cyber Resilience Act, if an organization fails to properly evaluate a vendor and suffers a breach, it could face regulatory penalties in addition to the costs of the incident itself.

Compliance as a Strategic Advantage

As cybercriminals get their hands on the same cutting-edge technology as enterprises, defensive measures must evolve just as quickly. AI brings incredible benefits, but it also gives adversaries powerful new tools to exploit.

Cybersecurity standards, recommendations, and legislation exist to level the playing field. They provide guidance on secure product development, lifecycle management, vulnerability reporting, and system controls. They are not just boxes to check — they are safeguards against threats that may already be in motion.

Compliance, then, isn’t just about avoiding fines or checking legal requirements. It’s about building resilience, protecting business operations, and staying one step ahead of attackers. Standards and regulations are minimum requirements, and organizations should have internal policies that are as comprehensive as possible in order to truly protect their business. Organizations that treat compliance as a strategic advantage — not a burden — will be the best prepared for the AI-driven future of cybersecurity.


About the Author
Wayne Dorris, CISSP, Program Manager, Cybersecurity