The White House has laid out its roadmap for artificial intelligence, Congress is moving to update national security guidelines and local officials in cities like Philadelphia are establishing new standards for responsible AI.
Still, even as these steps signal progress, they are only the beginning. Today, in a far more complex world, cyber disruption can devastate not only corporations but also local institutions such as colleges and hospitals, and even small businesses.
Bad actors, nation-states and cybercriminals are exploiting AI to expand attack vectors, making them more effective and more damaging to American businesses.
Now’s the time to prepare or risk digital disruption that could impact everything from healthcare and pharmacies to transportation and public safety.
With 2025 drawing to a close, the biggest challenge of the next year lies in sustaining the resilience of the country's critical infrastructure across industries as wide-ranging as healthcare, finance, transportation and beyond.
It’s all happening at a moment when AI is already leading to higher electric bills and, increasingly, being weaponized in ways that test community bedrocks and local governments’ ability to defend themselves.
Having led technology at organizations such as Shazam and Thomson Reuters before cofounding Red Sift, I have always focused on the critical need for strong digital hygiene and robust safeguards on a global scale.
AI just made hacking a lot easier
Even in the past several months, phishing, long the most common entry point for cyberattacks, has been transformed by generative AI.
Once-clumsy scams have become sophisticated, near-flawless imitations of trusted communications, capable of reaching millions in seconds. This shift represents more than a technical threat; it is a direct challenge to the trust and continuity underpinning our economy's most vital systems.
Banks, airlines and other industries that families across the country rely on every day face the same vulnerabilities.
In aviation, a convincing but fraudulent maintenance request could ripple across flight operations. A recent analysis from my company found only one in five airlines enforces top-tier email security, leaving billions of dollars and passenger trust at risk — and that was before the many cancellations and delays caused by the federal government shutdown.
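The "top-tier email security" benchmark refers to domain-level email authentication. As an illustrative sketch (assuming the standard being measured is DMARC, which the analysis above doesn't name), a domain's enforcement level can be read straight from its published DNS TXT record; only a `p=reject` policy actually blocks spoofed mail:

```python
import re

def dmarc_enforcement(txt_record: str) -> str:
    """Classify a DMARC TXT record by its policy tag.

    'reject' is full enforcement, 'quarantine' is partial, and
    'none' is monitoring only. Returns 'missing' if the string
    is not a valid DMARC record at all.
    """
    if not txt_record.strip().lower().startswith("v=dmarc1"):
        return "missing"
    match = re.search(r"\bp\s*=\s*(reject|quarantine|none)",
                      txt_record, re.IGNORECASE)
    return match.group(1).lower() if match else "missing"

# Example records as they might appear at a domain's _dmarc TXT entry
# (the domain and report address are hypothetical)
assert dmarc_enforcement("v=DMARC1; p=reject; rua=mailto:reports@example.com") == "reject"
assert dmarc_enforcement("v=DMARC1; p=none") == "none"
```

In practice the record would be fetched with a DNS lookup of `_dmarc.<domain>`; the parsing step above is the part that determines whether a carrier's domain is actually enforcing anything.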
Likewise, in banking, a fake invoice or loan approval email can shake faith not just in one institution but in the entire financial system. Meanwhile, in healthcare, even a single spoofed email about a clinical trial can erode confidence in lifesaving therapies.
The impact of these breaches is staggering, encompassing both immediate financial losses and long-term reputational damage. For attackers, AI is a force multiplier.
They no longer need large, sophisticated teams or design skills. A lone individual with free tools can now impersonate a CEO, generate realistic deepfake audio or video, and send convincing messages at scale. Phishing kits, complete with brand templates, are already accessible.
In effect, AI has lowered the barriers to entry for cybercrime while increasing the sophistication of the attacks themselves.
Fight AI with AI
In 2026, the only answer will be to match AI with AI.
Defensive AI can scan the digital landscape continuously, flagging spoofed domains, deepfakes and other malicious activity in real time.
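One of the simplest building blocks of that kind of monitoring is lookalike-domain detection: flagging registrations that closely resemble, but do not match, a brand's real domains. A minimal sketch (the trusted-domain list and threshold here are hypothetical, and production systems use far richer signals than string similarity):

```python
from difflib import SequenceMatcher

# Hypothetical list of an organization's legitimate domains
TRUSTED_DOMAINS = ["example-bank.com", "example-air.com"]

def looks_spoofed(domain: str, threshold: float = 0.8) -> bool:
    """Flag a domain that nearly matches a trusted one.

    An exact match is legitimate; a near-match above the similarity
    threshold (e.g. a digit '1' swapped in for the letter 'l') is
    flagged as a likely spoof.
    """
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return False
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True
    return False

assert looks_spoofed("examp1e-bank.com")        # '1' substituted for 'l'
assert not looks_spoofed("example-bank.com")    # the real domain
```

Real defensive systems layer this kind of check with certificate-transparency feeds, image similarity on cloned login pages, and classifiers over message content, but the core idea is the same: continuously compare what appears on the internet against what is known to be legitimate.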
As AI systems move from human-driven prompts to machine-to-machine workflows, the economics of intelligence are shifting dramatically. Every token processed by a large language model consumes compute, money and energy.
At scale, this is no longer a rounding error; it is a strategic and environmental challenge. Over the next year, token efficiency must become a boardroom conversation, as enterprises recognize that integration inefficiencies translate into millions in operating costs and megawatt-hours of energy consumption.
And the urgency is only growing. According to IEA projections, data center electrical demand could nearly double by 2030, driven primarily by AI workloads. That’s equivalent to the annual consumption of an entire nation like Japan.
In this context, optimizing schemas and reducing token overhead is not just a technical tweak. It’s an imperative. Every redundant field name is wasted energy, and every inefficiency compounds across billions of tool calls.
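The overhead is easy to see concretely. A quick sketch (the payload below is a hypothetical tool-call body, using raw character count as a rough proxy for token count) shows how much of a request verbose field names can consume:

```python
import json

# Hypothetical tool-call payload: identical data, verbose vs. compact schema
verbose = {
    "customer_account_identifier": 1042,
    "transaction_amount_in_cents": 3999,
    "transaction_currency_code": "USD",
}
compact = {"id": 1042, "amt": 3999, "cur": "USD"}

verbose_len = len(json.dumps(verbose, separators=(",", ":")))
compact_len = len(json.dumps(compact, separators=(",", ":")))
savings = 1 - compact_len / verbose_len

print(f"verbose: {verbose_len} chars, compact: {compact_len} chars, "
      f"saving {savings:.0%} per call")
```

A saving of well over half per call is trivial once, but multiplied across billions of machine-to-machine tool calls it becomes exactly the compute, cost and energy line item described above.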
Moving forward, thinking through API design that recognizes LLMs’ impact will both cut costs and reduce carbon footprint, positioning forward-thinking companies as leaders in responsible AI.
While local governments have begun to lay the groundwork for protections, businesses must meet them halfway. Closing obvious security gaps, adopting stronger standards and embracing AI-powered defense are not optional; they are the only way to ensure trust.

The year's conclusion offers an opportunity. We can either allow AI to become the great enabler of the next wave of cybercrime, exacerbating our regional electric loads in the process, or local government and business leaders can commit now to making it the cornerstone of a stronger, more resilient digital future.