• CRYPTO-GRAM, May 15, 2026 Part 1

    From TCOB1 Security Posts@21:1/229 to All on Friday, May 15, 2026 10:39:43

    Crypto-Gram
    May 15, 2026

    by Bruce Schneier
    Fellow and Lecturer, Harvard Kennedy School
    schneier@schneier.com
    https://www.schneier.com

    A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

    For back issues, or to subscribe, visit Crypto-Gram's web page.

    Read this issue on the web

    These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.

    ** *** ***** ******* *********** *************
    In this issue:

    If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.

    Defense in Depth, Medieval Style
    Human Trust of AI Agents
    Mythos and Cybersecurity
    Is "Satoshi Nakamoto" Really Adam Back?
    Mexican Surveillance Company
    ICE Uses Graphite Spyware
    FBI Extracts Deleted Signal Messages from iPhone Notification Database
    Hiding Bluetooth Trackers in Mail
    Medieval Encrypted Letter Decoded
    What Anthropic's Mythos Means for the Future of Cybersecurity
    Claude Mythos Has Found 271 Zero-Days in Firefox
    Fast16 Malware
    A Ransomware Negotiator Was Working for a Ransomware Gang
    Hacking Polymarket
    DarkSword Malware
    Rowhammer Attack Against NVIDIA Chips
    Smart Glasses for the Authorities
    Insider Betting on Polymarket
    LLMs and Text-in-Text Steganography
    Copy.Fail Linux Vulnerability
    OpenAI's GPT-5.5 is as Good as Mythos at Finding Security Vulnerabilities
    How Dangerous Is Anthropic's Mythos AI?
    Upcoming Speaking Engagements

    ** *** ***** ******* *********** *************
    Defense in Depth, Medieval Style

    [2026.04.15] This article on the walls of Constantinople is fascinating.

    The system comprised four defensive lines arranged in formidable layers:

    The brick-lined ditch, divided by bulkheads and often flooded, 15-20 meters wide and up to 7 meters deep.
    A low breastwork, about 2 meters high, enabling defenders to fire freely from behind.
    The outer wall, 8 meters tall and 2.8 meters thick, with 82 projecting towers.
    The main wall -- a towering 12 meters high and 5 meters thick -- with 96 massive towers offset from those of the outer wall for maximum coverage.

    Behind the walls lay broad terraces: the parateichion, 18 meters wide, ideal for repelling enemies who crossed the moat, and the peribolos, 15-20 meters wide between the inner and outer walls. From the moat's bottom to the highest tower top, the defences reached nearly 30 meters -- a nearly unscalable barrier of stone and ingenuity.

    ** *** ***** ******* *********** *************
    Human Trust of AI Agents

    [2026.04.16] Interesting research: "Humans expect rationality and cooperation from LLM opponents in strategic games."

    Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLM opponents in strategic settings. We present the results of the first controlled, monetarily incentivised laboratory experiment looking at differences in human behaviour in a multi-player p-beauty contest against other humans and LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than against humans, which is mainly driven by the increased prevalence of "zero" Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to the LLMs' perceived reasoning ability and, unexpectedly, their propensity towards cooperation. Our findings provide foundational insights into multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects' behaviour and their beliefs about LLMs' play, and suggest important implications for mechanism design in mixed human-LLM systems.
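
    For readers unfamiliar with the game: in a p-beauty contest, each player picks a number in a fixed range (typically 0-100), and the winner is whoever comes closest to p times the group average, for some fraction p such as 2/3. Iterated reasoning about other players' reasoning drives choices toward zero, the unique Nash equilibrium -- which is why "zero" choices signal high strategic sophistication. A minimal illustrative sketch (my own, not from the paper; the p = 2/3 and starting guess of 50 are conventional assumptions):

```python
# Level-k reasoning in a p-beauty contest (illustrative sketch).
# A level-0 player guesses naively (e.g., 50, the midpoint of 0-100).
# A level-k player best-responds to level-(k-1) guesses, so the guess
# shrinks by a factor of p each level, converging to the Nash equilibrium of 0.

def level_k_guess(p: float = 2 / 3, start: float = 50.0, k: int = 0) -> float:
    """Guess of a level-k player: p^k times the naive starting guess."""
    return start * (p ** k)

# Deeper strategic reasoning (higher k) pushes guesses toward zero:
guesses = [round(level_k_guess(k=k), 2) for k in range(6)]
print(guesses)  # [50.0, 33.33, 22.22, 14.81, 9.88, 6.58]
```

    Playing exactly zero is thus the limit of this chain of reasoning -- rational only if you believe your opponents will reason all the way down too, which is what the subjects in the study apparently expected of LLMs.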

    ** *** ***** ******* *********** *************
    Mythos and Cybersecurity

    [2026.04.17] Last week, Anthropic pulled back the curtain on Claude Mythos Preview, an AI model so capable at finding and exploiting software vulnerabilities that the company decided it was too dangerous to release to the public. Instead, access has been restricted to roughly 50 organizations -- Microsoft, Apple, Amazon Web Services, CrowdStrike and other vendors of critical infrastructure -- under an initiative called Project Glasswing.

    The announcement was accompanied by a barrage of hair-raising anecdotes: thousands of vulnerabilities uncovered across every major operating system and browser, including a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg. Mythos was able to weaponize a set of vulnerabilities it found in the Firefox browser into 181 usable attacks; Anthropic's previous flagship model could achieve only two.

    This is, in many respects, exactly the kind of responsible disclosure that security researchers have long urged. And yet the public has been given remarkably little with which to evaluate Anthropic's decision. We have been shown a highlight reel of spectacular successes. However, we can't tell if we have a blockbuster until they let us see the whole movie.

    For example, we don't know how many times Mythos mistakenly flagged code as vulnerable. Anthropic said security contractors agreed with the AI's severity rating 198 times -- an 89 per cent agreement rate. That's impressive, but incomplete. Independent researchers examining similar models have found that AI that detects nearly every real bug also hallucinates plausible-sounding vulnerabilities in patched, correct code.

    This matters. A model that autonomously finds and exploits hundreds of vulnerabilities with inhuman precision is a game changer, but a model that generates thousands of false alarms and non-working attacks still needs skilled and knowledgeable humans. Without knowing the rate of false alarms in Mythos's unfiltered output, we cannot tell whether the examples showcased are representative.
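
    A back-of-the-envelope sketch shows why the false-alarm rate dominates the triage economics. The numbers below are hypothetical, chosen only for illustration -- nothing in Anthropic's announcement tells us the real figures:

```python
# Why the unfiltered false-alarm rate matters (hypothetical numbers).
# Even a scanner that catches every real bug buries analysts in triage
# work if it also flags correct code as vulnerable.

def triage_load(real_bugs: int, findings: int) -> tuple[int, float]:
    """Return (false alarms, precision) for a tool reporting `findings`
    issues, of which `real_bugs` are genuine."""
    false_alarms = findings - real_bugs
    precision = real_bugs / findings
    return false_alarms, precision

# Suppose a model surfaces 10,000 findings but only 500 are real bugs:
fa, prec = triage_load(real_bugs=500, findings=10_000)
print(f"{fa} false alarms; precision {prec:.0%}")
```

    At 5 per cent precision, the scarce resource is no longer bug-finding but the human expertise needed to separate the 500 real bugs from 9,500 plausible-sounding fakes -- which is exactly why a highlight reel of confirmed finds tells us so little.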

    There is a second, subtler problem. Large language models, including Mythos, perform best on inputs that resemble what they were trained on: widely used open-source projects, major browsers, the Linux kernel and popular web frameworks. Concentrating early access among the largest vendors of precisely this software is sensible; it lets them patch first, before adversaries catch up.

    But the inverse is also true. Software outside the training distribution -- industrial control systems, medical device firmware, bespoke financial infrastructure, regional banking software, older embedded systems -- is exactly where out-of-the-box Mythos is likely least able to find or exploit bugs.

    However, a sufficiently motivated attacker with domain expertise in one of these fields could nevertheless wield Mythos's advanced reasoning capabilities as a force multiplier, probing systems that Anthropic's own engineers lack the specialized knowledge