Showing only posts tagged academic papers.

AIs as Trusted Third Parties

This is a truly fascinating paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties: Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as …

Friday Squid Blogging: A New Explanation of Squid Camouflage

New research: Deravi, an associate professor of chemistry and chemical biology at Northeastern University, recently published a paper in the Journal of Materials Chemistry C that sheds new light on how squid use organs that essentially function as organic solar cells to help power their camouflage abilities. As usual, you …

Is Security Human Factors Research Skewed Towards Western Ideas and Habits?

Really interesting research: “How WEIRD is Usable Privacy and Security Research?” by Ayako A. Hasegawa, Daisuke Inoue, and Mitsuaki Akiyama: Abstract: In human factor fields such as human-computer interaction (HCI) and psychology, researchers have been concerned that participants mostly come from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) countries …

Improvements in Brute Force Attacks

New paper: “GPU Assisted Brute Force Cryptanalysis of GPRS, GSM, RFID, and TETRA: Brute Force Cryptanalysis of KASUMI, SPECK, and TEA3.” Abstract: Key lengths in symmetric cryptography are determined with respect to the brute force attacks with current technology. While nowadays at least 128-bit keys are recommended, there are …
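For a sense of the scale the paper is working against, here is a quick back-of-the-envelope calculation. The 10^12 keys-per-second rate below is my own illustrative assumption (roughly the ballpark of a large GPU cluster), not a figure from the paper:

```python
# Expected brute-force time for a k-bit key at an assumed guess rate.
SECONDS_PER_YEAR = 31_557_600

def expected_years(key_bits: int, keys_per_second: float = 1e12) -> float:
    # On average, half the keyspace is searched before the key is found.
    return (2 ** (key_bits - 1)) / keys_per_second / SECONDS_PER_YEAR

for bits in (56, 64, 80, 128):
    print(f"{bits:>3}-bit key: {expected_years(bits):.3e} years")
# 56-bit: ~10 hours; 64-bit: ~0.3 years; 80-bit: ~19,000 years;
# 128-bit: ~5.4e18 years
```

At that assumed rate, 56- and 64-bit keys fall quickly, 80-bit keys are marginal against a well-resourced attacker, and 128-bit keys remain far out of reach, which is exactly why at least 128 bits is the standard recommendation.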

“Emergent Misalignment” in LLMs

Interesting research: “Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs”: Abstract: We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts …

More Research Showing AI Breaking the Rules

These researchers had LLMs play chess against better opponents. When the models couldn’t win, they sometimes resorted to cheating. Researchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any …

Implementing Cryptography in AI Systems

Interesting research: “How to Securely Implement Cryptography in Deep Neural Networks.” Abstract: The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic functionality (e.g., to decrypt an encrypted input, to verify that this input is authorized, or …

Security Analysis of the MERGE Voting Protocol

Interesting analysis: “An Internet Voting System Fatally Flawed in Creative New Ways.” Abstract: The recently published “MERGE” protocol is designed to be used in the prototype CAC-vote system. The voting kiosk and protocol transmit votes over the internet and then transmit voter-verifiable paper ballots through the mail. In the …

The Scale of Geoblocking by Nation

Interesting analysis: We introduce and explore a little-known threat to digital equality and freedom: websites geoblocking users in response to political risks from sanctions. U.S. policy prioritizes internet freedom and access to information in repressive regimes. Clarifying distinctions between free and paid websites, allowing trunk cables to repressive …

Subverting LLM Coders

Really interesting research: “An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection”: Abstract: Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning …

Watermark for LLM-Generated Text

Researchers at Google have developed a watermark for LLM-generated text. The basics are pretty obvious: the LLM chooses between tokens partly based on a cryptographic key, and someone with knowledge of the key can detect those choices. What makes this hard is (1) how much text is required for …
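For intuition, here is a minimal sketch of one keyed-watermarking approach (a “green list” scheme); it illustrates the flavor of the idea rather than the specific design in Google’s paper, and the key, bias, and partitioning rule are all assumptions for illustration:

```python
import hashlib, hmac, math

KEY = b"watermark-key"   # hypothetical shared secret
GREEN_FRACTION = 0.5     # fraction of the vocabulary marked "green" each step
BIAS = 2.0               # logit bonus given to green tokens during sampling

def is_green(prev_token: int, token: int) -> bool:
    # A keyed PRF decides, per position, whether a token is on the green
    # list; seeding on the previous token re-randomizes the partition.
    mac = hmac.new(KEY, f"{prev_token}:{token}".encode(), hashlib.sha256).digest()
    return int.from_bytes(mac[:8], "big") < GREEN_FRACTION * 2**64

def biased_logits(prev_token: int, logits: dict[int, float]) -> dict[int, float]:
    # Generation side: nudge sampling toward green tokens.
    return {t: l + (BIAS if is_green(prev_token, t) else 0.0)
            for t, l in logits.items()}

def detection_z_score(tokens: list[int]) -> float:
    # Detection side: without the key, each token lands on the green list
    # with probability GREEN_FRACTION, so count hits and compute a z-score.
    n = len(tokens) - 1
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    mean = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - mean) / std
```

The z-score grows only like the square root of the text length, which is the first hard problem mentioned above: short outputs simply don’t accumulate enough keyed choices to distinguish watermarked from unwatermarked text with confidence.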

Evaluating the Effectiveness of Reward Modeling of Generative AI Systems

New research evaluating the effectiveness of reward modeling during Reinforcement Learning from Human Feedback (RLHF): “SEAL: Systematic Error Analysis for Value ALignment.” The paper introduces quantitative metrics for evaluating the effectiveness of modeling and aligning human values: Abstract: Reinforcement Learning from Human Feedback (RLHF) aims to align language models …

YubiKey Side-Channel Attack

There is a side-channel attack against YubiKey access tokens that allows someone to clone a device. It’s a complicated attack, requiring the victim’s username and password, and physical access to their YubiKey—as well as some technical expertise and equipment. Still, nice piece of security analysis. [...]

Hacking Wireless Bicycle Shifters

This is yet another insecure Internet-of-things story, this one about wireless gear shifters for bicycles. These gear shifters are used in big-money professional bicycle races like the Tour de France, which provides an incentive to actually implement this attack. Research paper. Another news story. Slashdot thread. [...]

On the Voynich Manuscript

Really interesting article on the ancient-manuscript scholars who are applying their techniques to the Voynich Manuscript. No one has been able to decipher the writing yet, but there are some new insights: Davis presented her findings at the medieval-studies conference and published them in 2020 in the journal Manuscript …

Taxonomy of Generative AI Misuse

Interesting paper: “Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data”: Generative, multimodal artificial intelligence (GenAI) offers transformative potential across industries, but its misuse poses significant risks. Prior research has shed light on the potential of advanced AI systems to be exploited for malicious purposes. However …

New Research in Detecting AI-Generated Videos

The latest in what will be a continuing arms race between creating and detecting videos: The new tool the research project is unleashing on deepfakes, called “MISLnet,” evolved from years of data derived from detecting fake images and video with tools that spot changes made to digital video or …

RADIUS Vulnerability

New attack against the RADIUS authentication protocol: The Blast-RADIUS attack allows a man-in-the-middle attacker between the RADIUS client and server to forge a valid protocol accept message in response to a failed authentication request. This forgery could give the attacker access to network devices and services without the attacker …
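The protocol detail that makes the forgery possible is worth seeing: RADIUS authenticates a server’s reply with plain MD5 over the packet and the shared secret (RFC 2865), not with a modern keyed MAC. A simplified sketch of that check, with field handling reduced to the essentials:

```python
import hashlib

def response_authenticator(code: int, identifier: int,
                           request_authenticator: bytes,
                           attributes: bytes, shared_secret: bytes) -> bytes:
    # RFC 2865: ResponseAuth = MD5(Code | Identifier | Length |
    #   Request Authenticator | Attributes | Shared Secret).
    # Because this is unkeyed MD5 rather than, say, HMAC, an MD5
    # chosen-prefix collision placed in attacker-influenced attribute
    # bytes can make a forged Access-Accept (code 2) verify with the
    # same authenticator as the server's genuine Access-Reject (code 3).
    length = 20 + len(attributes)   # 20-byte header + attributes
    packet = (bytes([code, identifier]) + length.to_bytes(2, "big")
              + request_authenticator + attributes + shared_secret)
    return hashlib.md5(packet).digest()
```

The published attack reportedly stashes the collision blocks in a Proxy-State attribute, which clients echo without interpreting; the mitigation being rolled out is to require the Message-Authenticator attribute, an actual HMAC over the packet.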

Model Extraction from Neural Networks

A new paper, “Polynomial Time Cryptanalytic Extraction of Neural Network Models,” by Adi Shamir and others, uses ideas from differential cryptanalysis to extract the weights inside a neural network using specific queries and their results. This is much more theoretical than practical, but it’s a really interesting result …
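The core trick is easy to demonstrate on a toy network. A ReLU network’s gradient is piecewise constant, and the jump in the gradient across a “critical point” (an input where one ReLU switches on or off) equals that neuron’s weight row up to a scalar. The following is my own minimal illustration of that idea, not the paper’s algorithm, which extends it to deep networks and deals with signs, scales, and numerical precision:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "victim": a depth-1 ReLU network whose parameters the attacker
# never sees; only black-box queries to f are allowed.
W = rng.normal(size=(4, 6))    # hidden weights (4 neurons, 6 inputs)
b = rng.normal(size=4)         # hidden biases
v = rng.normal(size=4)         # output weights

def f(x: np.ndarray) -> float:
    return float(v @ np.maximum(W @ x + b, 0.0))

def grad(x: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    # Finite-difference gradient; piecewise constant for a ReLU network,
    # so it reveals the local linear region.
    return np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                     for e in np.eye(len(x))])

def find_critical_point(x0, d, t_lo=-20.0, t_hi=20.0, steps=400):
    # Walk the line x0 + t*d until the gradient jumps (a ReLU flipped),
    # then binary-search to localize the crossing.
    ts = np.linspace(t_lo, t_hi, steps)
    gs = [grad(x0 + t * d) for t in ts]
    for i in range(steps - 1):
        if not np.allclose(gs[i], gs[i + 1], atol=1e-4):
            lo, hi = ts[i], ts[i + 1]
            while hi - lo > 1e-9:
                mid = 0.5 * (lo + hi)
                if np.allclose(grad(x0 + mid * d), gs[i], atol=1e-4):
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
    return None

x0, d = rng.normal(size=6), rng.normal(size=6)
t_star = find_critical_point(x0, d)
assert t_star is not None, "no ReLU boundary on this line; try another"

# The gradient jump across the critical point equals v_j * W[j] for the
# flipped neuron j: its weight row, up to an unknown scalar.
jump = grad(x0 + (t_star + 1e-3) * d) - grad(x0 + (t_star - 1e-3) * d)
cos = [abs(jump @ w) / (np.linalg.norm(jump) * np.linalg.norm(w)) for w in W]
print(f"best cosine match against a true weight row: {max(cos):.6f}")  # ~1.0
```

Repeating this over many lines recovers every hidden neuron’s direction; the differential-cryptanalysis flavor lies in probing with carefully chosen input differences that isolate one neuron at a time.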
