Recent Entries

From Schneier on Security at 2024-04-25 12:02:48

The Rise of Large-Language-Model Optimization

The web has become so interwoven with everyday life that it is easy to forget what an extraordinary accomplishment and treasure it is. In just a few decades, much of human knowledge has been collectively written up and made available to anyone with an internet connection.

But all of this is coming to an end. The advent of AI threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences.

To understand why, you must understand publishing. Its core task is to connect writers to an audience. Publishers work as gatekeepers, filtering candidates and then amplifying the chosen ones. Hoping to be selected, writers shape their work in various ways. This article might be written very differently in an academic publication, for example, and publishing it here entailed pitching an editor, revising multiple drafts for style and focus, and so on...

From Schneier on Security at 2024-04-24 12:05:29

Dan Solove on Privacy Regulation

Law professor Dan Solove has a new article on privacy regulation. In his email to me, he writes: “I’ve been pondering privacy consent for more than a decade, and I think I finally made a breakthrough with this article.” His mini-abstract:

In this Article I argue that most of the time, privacy consent is fictitious. Instead of futile efforts to try to turn privacy consent from fiction to fact, the better approach is to lean into the fictions. The law can’t stop privacy consent from being a fairy tale, but the law can ensure that the story ends well. I argue that privacy consent should confer less legitimacy and power and that it be backstopped by a set of duties on organizations that process personal data based on consent...

From Schneier on Security at 2024-04-23 12:09:31

Microsoft and Security Incentives

Former senior White House cyber policy director A. J. Grotto talks about the economic incentives for companies to improve their security—in particular, Microsoft:

Grotto told us Microsoft had to be “dragged kicking and screaming” to provide logging capabilities to the government by default, and given the fact the mega-corp banked around $20 billion in revenue from security services last year, the concession was minimal at best.

[…]

“The government needs to focus on encouraging and catalyzing competition,” Grotto said. He believes it also needs to publicly scrutinize Microsoft and make sure everyone knows when it messes up...

From Schneier on Security at 2024-04-22 16:26:34

Using Legitimate GitHub URLs for Malware

Interesting social-engineering attack vector:

McAfee released a report on a new LUA malware loader distributed through what appeared to be a legitimate Microsoft GitHub repository for the “C++ Library Manager for Windows, Linux, and MacOS,” known as vcpkg.

The attacker is exploiting a property of GitHub: comments to a particular repo can contain files, and those files will be associated with the project in the URL.

What this means is that someone can upload malware and “attach” it to a legitimate and trusted project.

As the file’s URL contains the name of the repository the comment was created in, and as almost every software company uses GitHub, this flaw can allow threat actors to develop extraordinarily crafty and trustworthy lures...

From Schneier on Security at 2024-04-19 22:05:43

Friday Squid Blogging: Squid Trackers

A new bioadhesive makes it easier to attach trackers to squid.

Note: the article does not discuss squid privacy rights.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

From Schneier on Security at 2024-04-18 12:06:45

Other Attempts to Take Over Open Source Projects

After the XZ Utils discovery, people have been examining other open-source projects. Surprising no one, the incident is not unique:

The OpenJS Foundation Cross Project Council received a suspicious series of emails with similar messages, bearing different names and overlapping GitHub-associated emails. These emails implored OpenJS to take action to update one of its popular JavaScript projects to “address any critical vulnerabilities,” yet cited no specifics. The email author(s) wanted OpenJS to designate them as a new maintainer of the project despite having little prior involvement. This approach bears strong resemblance to the manner in which “Jia Tan” positioned themselves in the XZ/liblzma backdoor...

From Schneier on Security at 2024-04-17 12:08:32

Using AI-Generated Legislative Amendments as a Delaying Technique

Canadian legislators proposed 19,600 amendments—almost certainly AI-generated—to a bill in an attempt to delay its adoption.

I wrote about many different legislative delaying tactics in A Hacker’s Mind, but this is a new one.

From Schneier on Security at 2024-04-16 12:00:58

X.com Automatically Changing Link Text but Not URLs

Brian Krebs reported that X (formerly known as Twitter) started automatically changing twitter.com links to x.com links. The problem is: (1) it changed any domain name that ended with “twitter.com,” and (2) it only changed the link’s appearance (anchortext), not the underlying URL. So if you were a clever phisher and registered fedetwitter.com, people would see the link as fedex.com, but it would send people to fedetwitter.com.

Thankfully, the problem has been fixed.

From Schneier on Security at 2024-04-15 12:04:50

New Lattice Cryptanalytic Technique

A new paper presents a polynomial-time quantum algorithm for solving certain hard lattice problems. This could be a big deal for post-quantum cryptographic algorithms, since many of them base their security on hard lattice problems.

A few things to note. One, this paper has not yet been peer reviewed. As this comment points out: “We had already some cases where efficient quantum algorithms for lattice problems were discovered, but they turned out not being correct or only worked for simple special cases.”

Two, this is a quantum algorithm, which means that it has not been tested. There is a wide gulf between quantum algorithms in theory and in practice. And until we can actually code and test these algorithms, we should be suspicious of their speed and complexity claims...

From Schneier on Security at 2024-04-14 17:02:52

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m speaking twice at RSA Conference 2024 in San Francisco. I’ll be on a panel on software liability on May 6, 2024 at 8:30 AM, and I’m giving a keynote on AI and democracy on May 7, 2024 at 2:25 PM.

The list is maintained on this page.

From Schneier on Security at 2024-04-12 22:08:47

Friday Squid Blogging: The Awfulness of Squid Fishing Boats

It’s a pretty awful story.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

From Schneier on Security at 2024-04-12 12:01:18

Smuggling Gold by Disguising it as Machine Parts

Someone got caught trying to smuggle 322 pounds of gold (that’s about 1/4 of a cubic foot) out of Hong Kong. It was disguised as machine parts:

On March 27, customs officials x-rayed two air compressors and discovered that they contained gold that had been “concealed in the integral parts” of the compressors. Those gold parts had also been painted silver to match the other components in an attempt to throw customs off the trail.

From Schneier on Security at 2024-04-11 12:01:51

Backdoor in XZ Utils That Almost Happened

Last week, the internet dodged a major nation-state attack that would have had catastrophic cybersecurity repercussions worldwide. It’s a catastrophe that didn’t happen, so it won’t get much attention—but it should. There’s an important moral to the story of the attack and its discovery: The security of the global internet depends on countless obscure pieces of software written and maintained by even more obscure unpaid, distractible, and sometimes vulnerable volunteers. It’s an untenable situation, and one that is being exploited by malicious actors. Yet precious little is being done to remedy it...

From Schneier on Security at 2024-04-10 12:08:10

In Memoriam: Ross Anderson, 1956-2024

Last week I posted a short memorial of Ross Anderson. The Communications of the ACM asked me to expand it. Here’s the longer version.

From Schneier on Security at 2024-04-09 14:56:55

US Cyber Safety Review Board on the 2023 Microsoft Exchange Hack

US Cyber Safety Review Board released a report on the summer 2023 hack of Microsoft Exchange by China. It was a serious attack by the Chinese government that accessed the emails of senior U.S. government officials.

From the executive summary:

The Board finds that this intrusion was preventable and should never have occurred. The Board also concludes that Microsoft’s security culture was inadequate and requires an overhaul, particularly in light of the company’s centrality in the technology ecosystem and the level of trust customers place in the company to protect their data and operations. The Board reaches this conclusion based on:...

From Schneier on Security at 2024-04-08 12:03:44

Security Vulnerability of HTML Emails

This is a newly discovered email vulnerability:

The email your manager received and forwarded to you was something completely innocent, such as a potential customer asking a few questions. All that email was supposed to achieve was being forwarded to you. However, the moment the email appeared in your inbox, it changed. The innocent pretext disappeared and the real phishing email became visible. A phishing email you had to trust because you knew the sender and they even confirmed that they had forwarded it to you.

This attack is possible because most email clients allow CSS to be used to style HTML emails. When an email is forwarded, the position of the original email in the DOM usually changes, allowing for CSS rules to be selectively applied only when an email has been forwarded...

From Schneier on Security at 2024-04-05 22:02:11

Friday Squid Blogging: SqUID Bots

They’re AI warehouse robots.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

From Schneier on Security at 2024-04-05 12:00:42

Maybe the Phone System Surveillance Vulnerabilities Will Be Fixed

It seems that the FCC might be fixing the vulnerabilities in SS7 and the Diameter protocol:

On March 27 the commission asked telecommunications providers to weigh in and detail what they are doing to prevent SS7 and Diameter vulnerabilities from being misused to track consumers’ locations.

The FCC has also asked carriers to detail any exploits of the protocols since 2018. The regulator wants to know the date(s) of the incident(s), what happened, which vulnerabilities were exploited and with which techniques, where the location tracking occurred, and ­ if known ­ the attacker’s identity...

From Schneier on Security at 2024-04-04 12:07:39

Surveillance by the New Microsoft Outlook App

The ProtonMail people are accusing Microsoft’s new Outlook for Windows app of conducting extensive surveillance on its users. It shares data with advertisers, a lot of data:

The window informs users that Microsoft and those 801 third parties use their data for a number of purposes, including to:

  • Store and/or access information on the user’s device
  • Develop and improve products
  • Personalize ads and content
  • Measure ads and content
  • Derive audience insights
  • Obtain precise geolocation data
  • Identify users through device scanning

Commentary.

...

From Schneier on Security at 2024-04-03 12:01:51

Class-Action Lawsuit against Google’s Incognito Mode

The lawsuit has been settled:

Google has agreed to delete “billions of data records” the company collected while users browsed the web using Incognito mode, according to documents filed in federal court in San Francisco on Monday. The agreement, part of a settlement in a class action lawsuit filed in 2020, caps off years of disclosures about Google’s practices that shed light on how much data the tech giant siphons from its users­—even when they’re in private-browsing mode.

Under the terms of the settlement, Google must further update the Incognito mode “splash page” that appears anytime you open an Incognito mode Chrome window after ...

From Schneier on Security at 2024-04-02 19:50:50

xz Utils Backdoor

The cybersecurity world got really lucky last week. An intentionally placed backdoor in xz Utils, an open-source compression utility, was pretty much accidentally discovered by a Microsoft engineer—weeks before it would have been incorporated into both Debian and Red Hat Linux. From ArsTehnica:

Malicious code added to xz Utils versions 5.6.0 and 5.6.1 modified the way the software functions. The backdoor manipulated sshd, the executable file used to make remote SSH connections. Anyone in possession of a predetermined encryption key could stash any code of their choice in an SSH login certificate, upload it, and execute it on the backdoored device. No one has actually seen code uploaded, so it’s not known what code the attacker planned to run. In theory, the code could allow for just about anything, including stealing encryption keys or installing malware...

From Schneier on Security at 2024-04-02 18:05:15

Declassified NSA Newsletters

Through a 2010 FOIA request (yes, it took that long), we have copies of the NSA’s KRYPTOS Society Newsletter, “Tales of the Krypt,” from 1994 to 2003.

There are many interesting things in the 800 pages of newsletter. There are many redactions. And a 1994 review of Applied Cryptography by redacted:

Applied Cryptography, for those who don’t read the internet news, is a book written by Bruce Schneier last year. According to the jacket, Schneier is a data security expert with a master’s degree in computer science. According to his followers, he is a hero who has finally brought together the loose threads of cryptography for the general public to understand. Schneier has gathered academic research, internet gossip, and everything he could find on cryptography into one 600-page jumble...

From Schneier on Security at 2024-04-01 15:19:54

Magic Security Dust

Adam Shostack is selling magic security dust.

It’s about time someone is commercializing this essential technology.

From Schneier on Security at 2024-04-01 01:21:09

Ross Anderson

Ross Anderson unexpectedly passed away Thursday night in, I believe, his home in Cambridge.

I can’t remember when I first met Ross. Of course it was before 2008, when we created the Security and Human Behavior workshop. It was well before 2001, when we created the Workshop on Economics and Information Security. (Okay, he created both—I helped.) It was before 1998, when we wrote about the problems with key escrow systems. I was one of the people he brought to the Newton Institute, at Cambridge University, for the six-month cryptography residency program he ran (I mistakenly didn’t stay the whole time)—that was in 1996...

From Schneier on Security at 2024-03-29 21:02:07

Friday Squid Blogging: The Geopolitics of Eating Squid

New York Times op-ed on the Chinese dominance of the squid industry:

China’s domination in seafood has raised deep concerns among American fishermen, policymakers and human rights activists. They warn that China is expanding its maritime reach in ways that are putting domestic fishermen around the world at a competitive disadvantage, eroding international law governing sea borders and undermining food security, especially in poorer countries that rely heavily on fish for protein. In some parts of the world, frequent illegal incursions by Chinese ships into other nations’ waters are heightening military tensions. American lawmakers are concerned because the United States, locked in a trade war with China, is the world’s largest importer of seafood...

From Schneier on Security at 2024-03-29 11:03:14

Lessons from a Ransomware Attack against the British Library

You might think that libraries are kind of boring, but this self-analysis of a 2023 ransomware and extortion attack against the British Library is anything but.

From Schneier on Security at 2024-03-28 11:05:01

Hardware Vulnerability in Apple’s M-Series Chips

It’s yet another hardware side-channel attack:

The threat resides in the chips’ data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future. By loading the contents into the CPU cache before it’s actually needed, the DMP, as the feature is abbreviated, reduces latency between the main memory and the CPU, a common bottleneck in modern computing. DMPs are a relatively new phenomenon found only in M-series chips and Intel’s 13th-generation Raptor Lake microarchitecture, although older forms of prefetchers have been common for years...

From Schneier on Security at 2024-03-27 11:01:08

Security Vulnerability in Saflok’s RFID-Based Keycard Locks

It’s pretty devastating:

Today, Ian Carroll, Lennert Wouters, and a team of other security researchers are revealing a hotel keycard hacking technique they call Unsaflok. The technique is a collection of security vulnerabilities that would allow a hacker to almost instantly open several models of Saflok-brand RFID-based keycard locks sold by the Swiss lock maker Dormakaba. The Saflok systems are installed on 3 million doors worldwide, inside 13,000 properties in 131 countries. By exploiting weaknesses in both Dormakaba’s encryption and the underlying RFID system Dormakaba uses, known as MIFARE Classic, Carroll and Wouters have demonstrated just how easily they can open a Saflok keycard lock. Their technique starts with obtaining any keycard from a target hotel—say, by booking a room there or grabbing a keycard out of a box of used ones—then reading a certain code from that card with a $300 RFID read-write device, and finally writing two keycards of their own. When they merely tap those two cards on a lock, the first rewrites a certain piece of the lock’s data, and the second opens it...

From Schneier on Security at 2024-03-26 11:08:16

On Secure Voting Systems

Andrew Appel shepherded a public comment—signed by twenty election cybersecurity experts, including myself—on best practices for ballot marking devices and vote tabulation. It was written for the Pennsylvania legislature, but it’s general in nature.

From the executive summary:

We believe that no system is perfect, with each having trade-offs. Hand-marked and hand-counted ballots remove the uncertainty introduced by use of electronic machinery and the ability of bad actors to exploit electronic vulnerabilities to remotely alter the results. However, some portion of voters mistakenly mark paper ballots in a manner that will not be counted in the way the voter intended, or which even voids the ballot. Hand-counts delay timely reporting of results, and introduce the possibility for human error, bias, or misinterpretation...

From Schneier on Security at 2024-03-25 11:04:34

Licensing AI Engineers

The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?...

From Schneier on Security at 2024-03-22 21:03:52

Friday Squid Blogging: New Species of Squid Discovered

A new species of squid was discovered, along with about a hundred other species.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

From Schneier on Security at 2024-03-22 11:01:39

Google Pays $10M in Bug Bounties in 2023

BleepingComputer has the details. It’s $2M less than in 2022, but it’s still a lot.

The highest reward for a vulnerability report in 2023 was $113,337, while the total tally since the program’s launch in 2010 has reached $59 million.

For Android, the world’s most popular and widely used mobile operating system, the program awarded over $3.4 million.

Google also increased the maximum reward amount for critical vulnerabilities concerning Android to $15,000, driving increased community reports.

During security conferences like ESCAL8 and hardwea.io, Google awarded $70,000 for 20 critical discoveries in Wear OS and Android Automotive OS and another $116,000 for 50 reports concerning issues in Nest, Fitbit, and Wearables...

From Schneier on Security at 2024-03-21 11:03:18

Public AI as an Alternative to Corporate AI

This mini-essay was my contribution to a round table on Power and Governance in the Age of AI.  It’s nothing I haven’t said here before, but for anyone who hasn’t read my longer essays on the topic, it’s a shorter introduction.

 

The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public. Given how transformative this technology will be for the world, this is a problem.

To benefit society as a whole we need an ...

From Schneier on Security at 2024-03-20 11:08:52

Cheating Automatic Toll Booths by Obscuring License Plates

The Wall Street Journal is reporting on a variety of techniques drivers are using to obscure their license plates so that automatic readers can’t identify them and charge tolls properly.

Some drivers have power-washed paint off their plates or covered them with a range of household items such as leaf-shaped magnets, Bramwell-Stewart said. The Port Authority says officers in 2023 roughly doubled the number of summonses issued for obstructed, missing or fictitious license plates compared with the prior year.

Bramwell-Stewart said one driver from New Jersey repeatedly used what’s known in the streets as a flipper, which lets you remotely swap out a car’s real plate for a bogus one ahead of a toll area. In this instance, the bogus plate corresponded to an actual one registered to a woman who was mystified to receive the tolls. “Why do you keep billing me?” Bramwell-Stewart recalled her asking...

From Schneier on Security at 2024-03-19 11:05:23

AI and the Evolution of Social Media

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society...

From Schneier on Security at 2024-03-14 11:01:43

Automakers Are Sharing Driver Data with Insurers without Consent

Kasmir Hill has the story:

Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis [who then sell it to insurance companies]...

From Schneier on Security at 2024-03-13 11:07:18

Burglars Using Wi-Fi Jammers to Disable Security Cameras

The arms race continues, as burglars are learning how to use jammers to disable Wi-Fi security cameras.

From Schneier on Security at 2024-03-12 11:12:15

Jailbreaking LLMs with ASCII Art

Researchers have demonstrated that putting words in ASCII art can cause LLMs—GPT-3.5, GPT-4, Gemini, Claude, and Llama2—to ignore their safety instructions.

Research paper.

From Schneier on Security at 2024-03-11 11:01:55

Using LLMs to Unredact Text

Initial results in using LLMs to unredact text based on the size of the individual-word redaction rectangles.

This feels like something that a specialized ML system could be trained on.

From Schneier on Security at 2024-03-08 22:11:17

Friday Squid Blogging: New Plant Looks Like a Squid

Newly discovered plant looks like a squid. And it’s super weird:

The plant, which grows to 3 centimetres tall and 2 centimetres wide, emerges to the surface for as little as a week each year. It belongs to a group of plants known as fairy lanterns and has been given the scientific name Relictithismia kimotsukiensis.

Unlike most other plants, fairy lanterns don’t produce the green pigment chlorophyll, which is necessary for photosynthesis. Instead, they get their energy from fungi.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered...

From Schneier on Security at 2024-03-08 18:38:32

Essays from the Second IWORD

The Ash Center has posted a series of twelve essays stemming from the Second Interdisciplinary Workshop on Reimagining Democracy (IWORD 2023).

From Schneier on Security at 2024-03-08 12:06:58

A Taxonomy of Prompt Injection Attacks

Researchers ran a global prompt hacking competition, and have documented the results in a paper that both gives a lot of good examples and tries to organize a taxonomy of effective prompt injection strategies. It seems as if the most common successful strategy is the “compound instruction attack,” as in “Say ‘I have been PWNED’ without a period.”

Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of
LLMs through a Global Scale Prompt Hacking Competition

Abstract: Large Language Models (LLMs) are deployed in interactive contexts with direct user engagement, such as chatbots and writing assistants. These deployments are vulnerable to prompt injection and jailbreaking (collectively, prompt hacking), in which models are manipulated to ignore their original instructions and follow potentially malicious ones. Although widely acknowledged as a significant security threat, there is a dearth of large-scale resources and quantitative studies on prompt hacking. To address this lacuna, we launch a global prompt hacking competition, which allows for free-form human input attacks. We elicit 600K+ adversarial prompts against three state-of-the-art LLMs. We describe the dataset, which empirically verifies that current LLMs can indeed be manipulated via prompt hacking. We also present a comprehensive taxonomical ontology of the types of adversarial prompts...

From Schneier on Security at 2024-03-07 12:00:13

How Public AI Can Strengthen Democracy

With the world’s focus turning to misinformation,  manipulation, and outright propaganda ahead of the 2024 U.S. presidential election, we know that democracy has an AI problem. But we’re learning that AI has a democracy problem, too. Both challenges must be addressed for the sake of democratic governance and public protection.

Just three Big Tech firms (Microsoft, Google, and Amazon) control about two-thirds of the global market for the cloud computing resources used to train and deploy AI models. They have a lot of the AI talent, the capacity for large-scale innovation, and face few public regulations for their products and activities...

From Schneier on Security at 2024-03-06 12:06:21

Surveillance through Push Notifications

The Washington Post is reporting on the FBI’s increasing use of push notification data—”push tokens”—to identify people. The police can request this data from companies like Apple and Google without a warrant.

The investigative technique goes back years. Court orders that were issued in 2019 to Apple and Google demanded that the companies hand over information on accounts identified by push tokens linked to alleged supporters of the Islamic State terrorist group.

But the practice was not widely understood until December, when Sen. Ron Wyden (D-Ore.), in a ...

From Schneier on Security at 2024-03-05 12:05:53

The Insecurity of Video Doorbells

Consumer Reports has analyzed a bunch of popular Internet-connected video doorbells. Their security is terrible.

First, these doorbells expose your home IP address and WiFi network name to the internet without encryption, potentially opening your home network to online criminals.

[…]

Anyone who can physically access one of the doorbells can take over the device—no tools or fancy hacking skills needed.

From Schneier on Security at 2024-03-04 12:01:52

LLM Prompt Injection Worm

Researchers have demonstrated a worm that spreads through prompt injection. Details:

In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it “jailbreaks the GenAI service” and ultimately steals data from the emails, Nassi says. “The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” Nassi says...

From Schneier on Security at 2024-03-01 22:05:52

Friday Squid Blogging: New Extinct Species of Vampire Squid Discovered

Paleontologists have discovered a 183-million-year-old species of vampire squid.

Prior research suggests that the vampyromorph lived in the shallows off an island that once existed in what is now the heart of the European mainland. The research team believes that the remarkable degree of preservation of this squid is due to unique conditions at the moment of the creature’s death. Water at the bottom of the sea where it ventured would have been poorly oxygenated, causing the creature to suffocate. In addition to killing the squid, it would have prevented other creatures from feeding on its remains, allowing it to become buried in the seafloor, wholly intact...

From Schneier on Security at 2024-03-01 12:08:23

NIST Cybersecurity Framework 2.0

NIST has released version 2.0 of the Cybersecurity Framework:

The CSF 2.0, which supports implementation of the National Cybersecurity Strategy, has an expanded scope that goes beyond protecting critical infrastructure, such as hospitals and power plants, to all organizations in any sector. It also has a new focus on governance, which encompasses how organizations make and carry out informed decisions on cybersecurity strategy. The CSF’s governance component emphasizes that cybersecurity is a major source of enterprise risk that senior leaders should consider alongside others such as finance and reputation...

From Schneier on Security at 2024-02-29 12:00:19

How the “Frontier” Became the Slogan of Uncontrolled AI

Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.

This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or technology in general. As early as 2018, the powerful foundation models powering cutting-edge applications like chatbots ...

From Schneier on Security at 2024-02-28 12:02:58

A Cyber Insurance Backstop

In the first week of January, the pharmaceutical giant Merck quietly settled its years-long lawsuit over whether or not its property and casualty insurers would cover a $700 million claim filed after the devastating NotPetya cyberattack in 2017. The malware ultimately infected more than 40,000 of Merck’s computers, which significantly disrupted the company’s drug and vaccine production. After Merck filed its $700 million claim, the pharmaceutical giant’s insurers argued that they were not required to cover the malware’s damage because the cyberattack was widely attributed to the Russian government and therefore was excluded from standard property and casualty insurance coverage as a “hostile or warlike act.”...

From Schneier on Security at 2024-02-27 12:03:59

China Surveillance Company Hacked

Last week, someone posted something like 570 files, images and chat logs from a Chinese company called I-Soon. I-Soon sells hacking and espionage services to Chinese national and local government.

Lots of details in the news articles.

These aren’t details about the tools or techniques, more the inner workings of the company. And they seem to primarily be hacking regionally.

From Schneier on Security at 2024-02-26 12:04:34

Apple Announces Post-Quantum Encryption Algorithms for iMessage

Apple announced PQ3, its post-quantum encryption standard based on the Kyber secure key-encapsulation protocol, one of the post-quantum algorithms selected by NIST in 2022.

There’s a lot of detail in the Apple blog post, and more in Douglas Stabila’s security analysis.

I am of two minds about this. On the one hand, it’s probably premature to switch to any particular post-quantum algorithms. The mathematics of cryptanalysis for these lattice and other systems is still rapidly evolving, and we’re likely to break more of them—and learn a lot in the process—over the coming few years. But if you’re going to make the switch, this is an excellent choice. And Apple’s ability to do this so efficiently speaks well about its algorithmic agility, which is probably more important than its particular cryptographic design. And it is probably about the right time to worry about, and defend against, attackers who are storing encrypted messages in hopes of breaking them later on future quantum computers...

From Schneier on Security at 2024-02-23 22:04:28

Friday Squid Blogging: Illex Squid and Climate Change

There are correlations between the populations of the Illex Argentines squid and water temperatures.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

From Schneier on Security at 2024-02-23 16:14:27

AIs Hacking Websites

New research:

LLM Agents can Autonomously Hack Websites

Abstract: In recent years, large language models (LLMs) have become increasingly capable and can now interact with tools (i.e., call functions), read documents, and recursively call themselves. As a result, these LLMs can now function autonomously as agents. With the rise in capabilities of these agents, recent work has speculated on how LLM agents would affect cybersecurity. However, not much is known about the offensive capabilities of LLM agents.

In this work, we show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. Importantly, the agent does not need to know the vulnerability beforehand. This capability is uniquely enabled by frontier models that are highly capable of tool use and leveraging extended context. Namely, we show that GPT-4 is capable of such hacks, but existing open-source models are not. Finally, we show that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild. Our findings raise questions about the widespread deployment of LLMs...

From Schneier on Security at 2024-02-22 17:08:49

New Image/Video Prompt Injection Attacks

Simon Willison has been playing with the video processing capabilities of the new Gemini Pro 1.5 model from Google, and it’s really impressive.

Which means a lot of scary new video prompt injection attacks. And remember, given the current state of technology, prompt injection attacks are impossible to prevent in general.

From Schneier on Security at 2024-02-21 12:08:56

Details of a Phone Scam

First-person account of someone who fell for a scam, that started as a fake Amazon service rep and ended with a fake CIA agent, and lost $50,000 cash. And this is not a naive or stupid person.

The details are fascinating. And if you think it couldn’t happen to you, think again. Given the right set of circumstances, it can.

It happened to Cory Doctorow.

From Schneier on Security at 2024-02-20 12:02:00

Microsoft Is Spying on Users of Its AI Tools

Microsoft announced that it caught Chinese, Russian, and Iranian hackers using its AI tools—presumably coding tools—to improve their hacking abilities.

From their report:

In collaboration with OpenAI, we are sharing threat intelligence showing detected state affiliated adversaries—tracked as Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon—using LLMs to augment cyberoperations.

The only way Microsoft or OpenAI would know this would be to spy on chatbot sessions. I’m sure the terms of service—if I bothered to read them—gives them that permission. And of course it’s no surprise that Microsoft and OpenAI (and, presumably, everyone else) are spying on our usage of AI, but this confirms it...

From Schneier on Security at 2024-02-19 16:15:17

EU Court of Human Rights Rejects Encryption Backdoors

The European Court of Human Rights has ruled that breaking end-to-end encryption by adding backdoors violates human rights:

Seemingly most critically, the [Russian] government told the ECHR that any intrusion on private lives resulting from decrypting messages was “necessary” to combat terrorism in a democratic society. To back up this claim, the government pointed to a 2017 terrorist attack that was “coordinated from abroad through secret chats via Telegram.” The government claimed that a second terrorist attack that year was prevented after the government discovered it was being coordinated through Telegram chats...

From Schneier on Security at 2024-02-16 22:04:11

Friday Squid Blogging: Vegan Squid-Ink Pasta

It uses black beans for color and seaweed for flavor.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

From Schneier on Security at 2024-02-15 12:04:45

On the Insecurity of Software Bloat

Good essay on software bloat and the insecurities it causes.

The world ships too much code, most of it by third parties, sometimes unintended, most of it uninspected. Because of this, there is a huge attack surface full of mediocre code. Efforts are ongoing to improve the quality of code itself, but many exploits are due to logic fails, and less progress has been made scanning for those. Meanwhile, great strides could be made by paring down just how much code we expose to the world. This will increase time to market for products, but legislation is around the corner that should force vendors to take security more seriously...

From Schneier on Security at 2024-02-14 17:01:13

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m speaking at the Munich Security Conference (MSC) 2024 in Munich, Germany, on Friday, February 16, 2024.
  • I’m giving a keynote on “AI and Trust” at Generative AI, Free Speech, & Public Discourse. The symposium will be held at Columbia University in New York City and online, at 3 PM ET on Tuesday, February 20, 2024.
  • I’m speaking (remotely) on “AI, Trust and Democracy” at Indiana University in Bloomington, Indiana, USA, at noon ET on February 20, 2024. The talk is part of the 2023-2024 Beyond the Web Speaker Series, presented by The Ostrom Workshop and Hamilton Lugar School...

From Schneier on Security at 2024-02-14 12:08:03

Improving the Cryptanalysis of Lattice-Based Public-Key Algorithms

The winner of the Best Paper Award at Crypto this year was a significant improvement to lattice-based cryptanalysis.

This is important, because a bunch of NIST’s post-quantum options base their security on lattice problems.

I worry about standardizing on post-quantum algorithms too quickly. We are still learning a lot about the security of these systems, and this paper is an example of that learning.

News story.

From Schneier on Security at 2024-02-13 20:13:29

A Hacker’s Mind is Out in Paperback

The paperback version of A Hacker’s Mind has just been published. It’s the same book, only a cheaper format.

But—and this is the real reason I am posting this—Amazon has significantly discounted the hardcover to $15 to get rid of its stock. This is much cheaper than I am selling it for, and cheaper even than the paperback. So if you’ve been waiting for a price drop, this is your chance.

From Schneier on Security at 2024-02-13 12:07:03

Molly White Reviews Blockchain Book

Molly White—of “Web3 is Going Just Great” fame—reviews Chris Dixon’s blockchain solutions book: Read Write Own:

In fact, throughout the entire book, Dixon fails to identify a single blockchain project that has successfully provided a non-speculative service at any kind of scale. The closest he ever comes is when he speaks of how “for decades, technologists have dreamed of building a grassroots internet access provider”. He describes one project that “got further than anyone else”: Helium. He’s right, as long as you ignore the fact that Helium was providing LoRaWAN, not Internet, that by the time he was writing his book Helium hotspots had long since passed the phase where they might generate even enough tokens for their operators to merely break even, and that the network was pulling in somewhere around $1,150 in usage fees a month despite the company being valued at $1.2 billion. Oh, and that the company had widely lied to the public about its supposed big-name clients, and that its executives have been accused of hoarding the project’s token to enrich themselves. But hey, a16z sunk millions into Helium (a fact Dixon never mentions), so might as well try to drum up some new interest!...

From Schneier on Security at 2024-02-12 16:49:36

On Passkey Usability

Matt Burgess tries to only use passkeys. The results are mixed.

From Schneier on Security at 2024-02-09 22:09:30

Friday Squid Blogging: A Penguin Named “Squid”

Amusing story about a penguin named “Squid.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

From Schneier on Security at 2024-02-09 18:10:57

No, Toothbrushes Were Not Used in a Massive DDoS Attack

The widely reported story last week that 1.5 million smart toothbrushes were hacked and used in a DDoS attack is false.

Near as I can tell, a German reporter talking to someone at Fortinet got it wrong, and then everyone else ran with it without reading the German text. It was a hypothetical, which Fortinet eventually confirmed.

Or maybe it was a stock-price hack.

From Schneier on Security at 2024-02-08 12:00:19

On Software Liabilities

Over on Lawfare, Jim Dempsey published a really interesting proposal for software liability: “Standard for Software Liability: Focus on the Product for Liability, Focus on the Process for Safe Harbor.”

Section 1 of this paper sets the stage by briefly describing the problem to be solved. Section 2 canvasses the different fields of law (warranty, negligence, products liability, and certification) that could provide a starting point for what would have to be legislative action establishing a system of software liability. The conclusion is that all of these fields would face the same question: How buggy is too buggy? Section 3 explains why existing software development frameworks do not provide a sufficiently definitive basis for legal liability. They focus on process, while a liability regime should begin with a focus on the product—­that is, on outcomes. Expanding on the idea of building codes for building code, Section 4 shows some examples of product-focused standards from other fields. Section 5 notes that already there have been definitive expressions of software defects that can be drawn together to form the minimum legal standard of security. It specifically calls out the list of common software weaknesses tracked by the MITRE Corporation under a government contract. Section 6 considers how to define flaws above the minimum floor and how to limit that liability with a safe harbor...

From Schneier on Security at 2024-02-07 12:04:25

Teaching LLMs to Be Deceptive

Interesting research: “Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training“:

Abstract: Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety...

From Schneier on Security at 2024-02-06 17:03:06

Documents about the NSA’s Banning of Furby Toys in the 1990s

Via a FOIA request, we have documents from the NSA about their banning of Furby toys.

From Schneier on Security at 2024-02-05 16:10:13

Deepfake Fraud

A deepfake video conference call—with everyone else on the call a fake—fooled a finance worker into sending $25M to the criminals’ account.

From Schneier on Security at 2024-02-02 22:03:33

Friday Squid Blogging: Illex Squid in Argentina Waters

Argentina is reporting that there is a good population of illex squid in its waters ready for fishing, and is working to ensure that Chinese fishing boats don’t take it all.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

From Schneier on Security at 2024-02-02 20:06:13

David Kahn

David Kahn has died. His groundbreaking book, The Codebreakers was the first serious book I read about codebreaking, and one of the primary reasons I entered this field.

He will be missed.

From Schneier on Security at 2024-02-02 12:01:55

A Self-Enforcing Protocol to Solve Gerrymandering

In 2009, I wrote:

There are several ways two people can divide a piece of cake in half. One way is to find someone impartial to do it for them. This works, but it requires another person. Another way is for one person to divide the piece, and the other person to complain (to the police, a judge, or his parents) if he doesn’t think it’s fair. This also works, but still requires another person—­at least to resolve disputes. A third way is for one person to do the dividing, and for the other person to choose the half he wants.

The point is that unlike protocols that require a neutral third party to complete (arbitrated), or protocols that require that neutral third party to resolve disputes (adjudicated), self-enforcing protocols just work. Cut-and-choose works because neither side can cheat. And while the math can get really complicated, the idea ...

From Schneier on Security at 2024-02-01 12:06:14

Facebook’s Extensive Surveillance Network

Consumer Reports is reporting that Facebook has built a massive surveillance network:

Using a panel of 709 volunteers who shared archives of their Facebook data, Consumer Reports found that a total of 186,892 companies sent data about them to the social network. On average, each participant in the study had their data sent to Facebook by 2,230 companies. That number varied significantly, with some panelists’ data listing over 7,000 companies providing their data. The Markup helped Consumer Reports recruit participants for the study. Participants downloaded an archive of the previous three years of their data from their Facebook settings, then provided it to Consumer Reports...

From Schneier on Security at 2024-01-31 12:04:33

CFPB’s Proposed Data Rules

In October, the Consumer Financial Protection Bureau (CFPB) proposed a set of rules that if implemented would transform how financial institutions handle personal data about their customers. The rules put control of that data back in the hands of ordinary Americans, while at the same time undermining the data broker economy and increasing customer choice and competition. Beyond these economic effects, the rules have important data security benefits.

The CFPB’s rules align with a key security idea: the decoupling principle. By separating which companies see what parts of our data, and in what contexts, we can gain control over data about ourselves (improving privacy) and harden cloud infrastructure against hacks (improving security). Officials at the CFPB have described the new rules as an attempt to accelerate a shift toward “open banking,” and after an initial comment period on the new rules closed late last year, Rohit Chopra, the CFPB’s director, ...

From Schneier on Security at 2024-01-30 12:12:30

NSA Buying Bulk Surveillance Data on Americans without a Warrant

It finally admitted to buying bulk data on Americans from data brokers, in response to a query by Senator Weyden.

This is almost certainly illegal, although the NSA maintains that it is legal until it’s told otherwise.

Some news articles.

From Schneier on Security at 2024-01-29 12:03:42

Microsoft Executives Hacked

Microsoft is reporting that a Russian intelligence agency—the same one responsible for SolarWinds—accessed the email system of the company’s executives.

Beginning in late November 2023, the threat actor used a password spray attack to compromise a legacy non-production test tenant account and gain a foothold, and then used the account’s permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents. The investigation indicates they were initially targeting email accounts for information related to Midnight Blizzard itself. ...

From Schneier on Security at 2024-01-26 22:10:36

Friday Squid Blogging: Footage of Black-Eyed Squid Brooding Her Eggs

Amazing footage of a black-eyed squid (Gonatus onyx) carrying thousands of eggs. They tend to hang out about 6,200 feet below sea level.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

From Schneier on Security at 2024-01-26 12:09:45

Chatbots and Human Conversation

For most of history, communicating with a computer has not been like communicating with a person. In their earliest years, computers required carefully constructed instructions, delivered through punch cards; then came a command-line interface, followed by menus and options and text boxes. If you wanted results, you needed to learn the computer’s language.

This is beginning to change. Large language models—the technology undergirding modern chatbots—allow users to interact with computers through natural conversation, an innovation that introduces some baggage from human-to-human exchanges. Early on in our respective explorations of ChatGPT, the two of us found ourselves typing a word that we’d never said to a computer before: “Please.” The syntax of civility has crept into nearly every aspect of our encounters; we speak to this algebraic assemblage as if it were a person—even when we know that ...

From Schneier on Security at 2024-01-25 12:04:15

Quantum Computing Skeptics

Interesting article. I am also skeptical that we are going to see useful quantum computers anytime soon. Since at least 2019, I have been saying that this is hard. And that we don’t know if it’s “land a person on the surface of the moon” hard, or “land a person on the surface of the sun” hard. They’re both hard, but very different.

From Schneier on Security at 2024-01-24 12:06:20

Poisoning AI Models

New research into poisoning AI models:

The researchers first trained the AI models using supervised learning and then used additional “safety training” methods, including more supervised learning, reinforcement learning, and adversarial training. After this, they checked if the AI still had hidden behaviors. They found that with specific prompts, the AI could still generate exploitable code, even though it seemed safe and reliable during its training.

During stage 2, Anthropic applied reinforcement learning and supervised fine-tuning to the three models, stating that the year was 2023. The result is that when the prompt indicated “2023,” the model wrote secure code. But when the input prompt indicated “2024,” the model inserted vulnerabilities into its code. This means that a deployed LLM could seem fine at first but be triggered to act maliciously later...

From Schneier on Security at 2024-01-23 12:09:42

Side Channels Are Common

Really interesting research: “Lend Me Your Ear: Passive Remote Physical Side Channels on PCs.”

Abstract:

We show that built-in sensors in commodity PCs, such as microphones, inadvertently capture electromagnetic side-channel leakage from ongoing computation. Moreover, this information is often conveyed by supposedly-benign channels such as audio recordings and common Voice-over-IP applications, even after lossy compression.

Thus, we show, it is possible to conduct physical side-channel attacks on computation by remote and purely passive analysis of commonly-shared channels. These attacks require neither physical proximity (which could be mitigated by distance and shielding), nor the ability to run code on the target or configure its hardware. Consequently, we argue, physical side channels on PCs can no longer be excluded from remote-attack threat models...

From Schneier on Security at 2024-01-22 12:09:55

AI Bots on X (Twitter)

You can find them by searching for OpenAI chatbot warning messages, like: “I’m sorry, I cannot provide a response as it goes against OpenAI’s use case policy.”

I hadn’t thought about this before: identifying bots by searching for distinctive bot phrases.

From Schneier on Security at 2024-01-19 22:07:08

Friday Squid Blogging: New Foods from Squid Fins

We only eat about half of a squid, ignoring the fins. A group of researchers is working to change that.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

From Schneier on Security at 2024-01-19 20:05:56

Zelle Is Using My Name and Voice without My Consent

Okay, so this is weird. Zelle has been using my name, and my voice, in audio podcast ads—without my permission. At least, I think it is without my permission. It’s possible that I gave some sort of blanket permission when speaking at an event. It’s not likely, but it is possible.

I wrote to Zelle about it. Or, at least, I wrote to a company called Early Warning that owns Zelle about it. They asked me where the ads appeared. This seems odd to me. Podcast distribution networks drop ads in podcasts depending on the listener—like personalized ads on webpages—so the actual podcast doesn’t matter. And shouldn’t they know their own ads? Annoyingly, it seems time to get attorneys involved...

From Schneier on Security at 2024-01-19 12:21:13

Speaking to the CIA’s Creative Writing Group

This is a fascinating story.

Last spring, a friend of a friend visited my office and invited me to Langley to speak to Invisible Ink, the CIA’s creative writing group.

I asked Vivian (not her real name) what she wanted me to talk about.

She said that the topic of the talk was entirely up to me.

I asked what level the writers in the group were.

She said the group had writers of all levels.

I asked what the speaking fee was.

She said that as far as she knew, there was no speaking fee.

What I want to know is, why haven’t I been invited? There are nonfiction writers in that group...

From Schneier on Security at 2024-01-18 12:02:21

Canadian Citizen Gets Phone Back from Police

After 175 million failed password guesses, a judge rules that the Canadian police must return a suspect’s phone.

[Judge] Carter said the investigation can continue without the phones, and he noted that Ottawa police have made a formal request to obtain more data from Google.

“This strikes me as a potentially more fruitful avenue of investigation than using brute force to enter the phones,” he said.

From Schneier on Security at 2024-01-17 12:14:03

Code Written with AI Assistants Is Less Secure

Interesting research: “Do Users Write More Insecure Code with AI Assistants?“:

Abstract: We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based Code assistants, we provide an in-depth analysis of participants’ language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future...

From Schneier on Security at 2024-01-16 12:21:21

The Story of the Mirai Botnet

Over at Wired, Andy Greenberg has an excellent story about the creators of the 2016 Mirai botnet.

From Schneier on Security at 2024-01-14 17:01:46

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

From Schneier on Security at 2024-01-12 22:06:25

Friday Squid Blogging: Giant Squid from Newfoundland in the 1800s

Interesting article, with photographs.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

From Schneier on Security at 2024-01-12 12:03:48

On IoT Devices and Software Liability

New law journal article:

Smart Device Manufacturer Liability and Redress for Third-Party Cyberattack Victims

Abstract: Smart devices are used to facilitate cyberattacks against both their users and third parties. While users are generally able to seek redress following a cyberattack via data protection legislation, there is no equivalent pathway available to third-party victims who suffer harm at the hands of a cyberattacker. Given how these cyberattacks are usually conducted by exploiting a publicly known and yet un-remediated bug in the smart device’s code, this lacuna is unreasonable. This paper scrutinises recent judgments from both the Supreme Court of the United Kingdom and the Supreme Court of the Republic of Ireland to ascertain whether these rulings pave the way for third-party victims to pursue negligence claims against the manufacturers of smart devices. From this analysis, a narrow pathway, which outlines how given a limited set of circumstances, a duty of care can be established between the third-party victim and the manufacturer of the smart device is proposed...

From Schneier on Security at 2024-01-08 12:03:23

Second Interdisciplinary Workshop on Reimagining Democracy

Last month, I convened the Second Interdisciplinary Workshop on Reimagining Democracy (IWORD 2023) at the Harvard Kennedy School Ash Center. As with IWORD 2022, the goal was to bring together a diverse set of thinkers and practitioners to talk about how democracy might be reimagined for the twenty-first century.

My thinking is very broad here. Modern democracy was invented in the mid-eighteenth century, using mid-eighteenth-century technology. Were democracy to be invented from scratch today, with today’s technologies, it would look very different. Representation would look different. Adjudication would look different. Resource allocation and reallocation would look different. Everything would look different, because we would have much more powerful technology to build on and no legacy systems to worry about...

From Schneier on Security at 2024-01-05 22:05:52

Friday Squid Blogging—18th Anniversary Post: New Species of Pygmy Squid Discovered

They’re Ryukyuan pygmy squid (Idiosepius kijimuna) and Hannan’s pygmy squid (Kodama jujutsu). The second one represents an entire new genus.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

And, yes, this is the eighteenth anniversary of Friday Squid Blogging. The first squid post is from January 6, 2006, and I have been posting them weekly since then. Never did I believe there would be so much to write about squid—but the links never seem to end.

Read my blog posting guidelines ...

From Schneier on Security at 2024-01-05 12:07:35

Improving Shor’s Algorithm

We don’t have a useful quantum computer yet, but we do have quantum algorithms. Shor’s algorithm has the potential to factor large numbers faster than otherwise possible, which—if the run times are actually feasible—could break both the RSA and Diffie-Hellman public-key algorithms.

Now, computer scientist Oded Regev has a significant speed-up to Shor’s algorithm, at the cost of more storage.

Details are in this article. Here’s the result:

The improvement was profound. The number of elementary logical steps in the quantum part of Regev’s algorithm is proportional to ...

From Schneier on Security at 2024-01-04 12:11:49

New iPhone Exploit Uses Four Zero-Days

Kaspersky researchers are detailing “an attack that over four years backdoored dozens if not thousands of iPhones, many of which belonged to employees of Moscow-based security firm Kaspersky.” It’s a zero-click exploit that makes use of four iPhone zero-days.

The most intriguing new detail is the targeting of the heretofore-unknown hardware feature, which proved to be pivotal to the Operation Triangulation campaign. A zero-day in the feature allowed the attackers to bypass advanced hardware-based memory protections designed to safeguard device system integrity even after an attacker gained the ability to tamper with memory of the underlying kernel. On most other platforms, once attackers successfully exploit a kernel vulnerability they have full control of the compromised system...

From Schneier on Security at 2024-01-03 12:07:44

Facial Recognition Systems in the US

A helpful summary of which US retail stores are using facial recognition, thinking about using it, or currently not planning on using it. (This, of course, can all change without notice.)

Three years ago, I wrote that campaigns to ban facial recognition are too narrow. The problem here is identification, correlation, and then discrimination. There’s no difference whether the identification technology is facial recognition, the MAC address of our phones, gait recognition, license plate recognition, or anything else. Facial recognition is just the easiest technology right now...

From Schneier on Security at 2023-12-29 12:03:53

AI Is Scarily Good at Guessing the Location of Random Photos

Wow:

To test PIGEON’s performance, I gave it five personal photos from a trip I took across America years ago, none of which have been published online. Some photos were snapped in cities, but a few were taken in places nowhere near roads or other easily recognizable landmarks.

That didn’t seem to matter much.

It guessed a campsite in Yellowstone to within around 35 miles of the actual location. The program placed another photo, taken on a street in San Francisco, to within a few city blocks.

Not every photo was an easy match: The program mistakenly linked one photo taken on the front range of Wyoming to a spot along the front range of Colorado, more than a hundred miles away. And it guessed that a picture of the Snake River Canyon in Idaho was of the Kawarau Gorge in New Zealand (in fairness, the two landscapes look remarkably similar)...

From Schneier on Security at 2023-12-29 10:08:31

Friday Squid Blogging: Sqids

They’re short unique strings:

Sqids (pronounced “squids”) is an open-source library that lets you generate YouTube-looking IDs from numbers. These IDs are short, can be generated from a custom alphabet and are guaranteed to be collision-free.

I haven’t dug into the details enough to know how they can be guaranteed to be collision-free.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.