“On April 7, 2026, the artificial intelligence company Anthropic announced that it was releasing the newest generation of its large language model, dubbed ‘Claude Mythos Preview.’ However, it was releasing it only to a limited consortium of roughly 40 technology companies, including Google, Broadcom, Nvidia, Cisco, Palo Alto Networks, Apple, JPMorgan Chase, Amazon and Microsoft. Some of its competitors are among these partners because this new A.I. model represents a ‘step change’ in performance that has critically important positive and negative implications for cybersecurity and America’s national security.

       (This is the introduction of an article by Thomas Friedman.) The article goes on to say:

       The good news is that Anthropic discovered in the process of developing Claude Mythos Preview that the A.I. could not only write software code more easily and with greater complexity than any model currently available, but as a byproduct of that capability, it could also find vulnerabilities in virtually all of the world’s most popular software systems in a way that is much easier than before. The bad news is that if this tool falls into the hands of bad actors, they could hack pretty much every major software system in the world, including all those made by the companies in the consortium.

       This is not a publicity stunt. In the run-up to this announcement, representatives of leading American tech companies have been in private conversation with the Trump administration about the implications for the security of the United States and of all the other countries that use these now vulnerable software systems. For good reason. As Anthropic said in its written statement last week, in just the past month, “Mythos Preview has already found thousands of high-severity vulnerabilities, including some in EVERY MAJOR OPERATING SYSTEM AND WEB BROWSER (my capitals). Given the rate of A.I. progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout – for the economy, public safety and national security – could be severe.”

       Project Glasswing, Anthropic’s name for the consortium, is an undertaking to work with the biggest and most trusted tech companies and critical infrastructure providers, including banks, “to put these capabilities to work for defensive purposes,” the company added, and to give the leading technology firms a head start in finding and patching those vulnerabilities. “We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale – for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring,” Anthropic said.

       My (Thomas Friedman’s) translation: Holy cow! Superintelligent A.I. is arriving faster than anticipated, at least in this area. We knew it was getting amazingly good at enabling anyone, no matter how computer literate, to write software code. But even Anthropic reportedly did not anticipate that it would get this good, this fast, at finding and exploiting flaws in existing code. Anthropic said it found critical exposures in every major operating system and web browser, many of which run power grids, waterworks, airline reservation systems, retailing networks, military systems and hospitals all over the world.

       If this A.I. tool were, indeed, to become widely available, it would mean that the ability to hack any major infrastructure system – a hard and expensive effort that was once essentially the province of private-sector experts and intelligence organizations alone – would be available to every criminal actor, terrorist organization and country, no matter how small.

       I’m really not being hyperbolic when I say that kids could deploy this by accident. Mom and Dad, get ready for: “Honey, what did you do after school today?” “Well, Mom, my friends and I took down the power grid. What’s for dinner?” That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do – or your kids.

       No country in the world can solve this problem alone. The solution – this may shock people – must begin with the two A.I. superpowers, the U.S. and China. It is now urgent that they learn to collaborate to prevent bad actors from gaining access to this next level of cyber capability. Such a powerful tool would threaten them both, leaving them exposed to criminal actors inside their countries and terrorist groups and other adversaries outside. It could easily become a greater threat to each country than the two countries are to each other.

       It will be interesting to see what history remembers most about April 7, 2026: the postponed U.S. release of bombs over Iran, or the carefully controlled release of Claude Mythos Preview by Anthropic and its technical allies.”

       This is really frightening stuff, to put it mildly. Even Elon Musk has stated publicly, “AI scares the shit out of me!”
