Here’s a Subliminal Channel You Haven’t Considered Before
Scientists can manipulate air bubbles trapped in ice to encode messages. [...]
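The mechanics suggest a simple channel: map bits onto a physical bubble property and read them back. A minimal sketch, assuming (purely for illustration, not from the paper) that freezing rate can be controlled well enough that a large bubble reads as 1 and a small bubble as 0:

```python
# Hedged sketch of a bubble-based channel: the large/small = 1/0 mapping
# and the 8-bit ASCII framing are assumptions made for illustration.

def encode(message: str) -> list:
    """Turn an ASCII message into a sequence of bubble sizes, MSB first."""
    bits = []
    for byte in message.encode("ascii"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return ["large" if b else "small" for b in bits]

def decode(bubbles: list) -> str:
    """Read bubble sizes back into bits, then reassemble 8-bit characters."""
    bits = [1 if b == "large" else 0 for b in bubbles]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return out.decode("ascii")
```

Round-tripping a message (`decode(encode("SOS"))`) recovers the original string; the channel's real difficulty, of course, is writing and reading the physical bubbles, not the bit framing.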
This seems like an important advance in LLM security against prompt injection: Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within …
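The core idea, tracking what a piece of data is *allowed to do* rather than trusting the model to behave, can be shown with a toy capability check. This is a hedged sketch of the general capability pattern, not DeepMind's actual CaMeL implementation; the `Tagged` wrapper, the `from_user`/`from_retrieved_doc` helpers, and the `send_email` tool are all invented for illustration:

```python
# Illustrative capability tagging (not CaMeL itself): values carry a set of
# capabilities based on provenance, and a tool refuses any argument that
# lacks the capability for that tool, so injected text can't drive actions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Tagged:
    value: str
    capabilities: frozenset = field(default_factory=frozenset)

def from_user(value: str) -> Tagged:
    # Data typed by the trusted user may drive tools.
    return Tagged(value, frozenset({"send_email", "read_file"}))

def from_retrieved_doc(value: str) -> Tagged:
    # Data fetched from the outside world gets no capabilities at all.
    return Tagged(value, frozenset())

def send_email(to: Tagged, body: Tagged) -> str:
    # Policy check: every argument must carry this tool's capability.
    for arg in (to, body):
        if "send_email" not in arg.capabilities:
            raise PermissionError("untrusted data may not reach send_email")
    return f"sent to {to.value}"
```

With this scheme, an address the user typed can be emailed, but an address lifted from a retrieved web page raises `PermissionError` no matter what the model "decides," which is the sense in which the language model is treated as an untrusted component.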
Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.” Abstract: As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that …
This is a truly fascinating paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties: Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as …
New research: Deravi, an associate professor of chemistry and chemical biology at Northeastern University, recently published a paper in the Journal of Materials Chemistry C that sheds new light on how squid use organs that essentially function as organic solar cells to help power their camouflage abilities. As usual, you …
Really interesting research: “How WEIRD is Usable Privacy and Security Research?” by Ayako A. Hasegawa, Daisuke Inoue, and Mitsuaki Akiyama: Abstract: In human factor fields such as human-computer interaction (HCI) and psychology, researchers have been concerned that participants mostly come from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) countries …
New paper: “GPU Assisted Brute Force Cryptanalysis of GPRS, GSM, RFID, and TETRA: Brute Force Cryptanalysis of KASUMI, SPECK, and TEA3.” Abstract: Key lengths in symmetric cryptography are determined with respect to brute-force attacks using current technology. While nowadays at least 128-bit keys are recommended, there are …
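For intuition about why key length is the whole game here, consider a toy exhaustive search. This is a sketch of the brute-force principle only, over a deliberately tiny 16-bit keyspace with a repeating-key XOR “cipher” invented for the example; it is not the paper's GPU attack on KASUMI, SPECK, or TEA3:

```python
# Toy brute force: with a k-bit key an attacker tries all 2**k candidates.
# Here k = 16, so the search completes instantly; every added key bit
# doubles the work, which is why 128-bit keys are recommended.

def xor_encrypt(key: int, plaintext: bytes) -> bytes:
    """Repeating-key XOR with a 2-byte key (toy cipher, also decrypts)."""
    kb = key.to_bytes(2, "big")
    return bytes(b ^ kb[i % 2] for i, b in enumerate(plaintext))

def brute_force(ciphertext: bytes, known_plaintext: bytes):
    """Recover the key by trying the entire 16-bit keyspace."""
    for key in range(2 ** 16):
        if xor_encrypt(key, known_plaintext) == ciphertext:
            return key
    return None
```

At, say, 10^9 trials per second, the 2^16 space above falls in microseconds and a 64-bit space in a few hundred years of single-machine time, which is exactly the kind of gap that GPU clusters narrow for the legacy ciphers the paper targets.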
Interesting research: “Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs”: Abstract: We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts …
These researchers had LLMs play chess against better opponents. When they couldn’t win, they sometimes resorted to cheating. Researchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any …
Interesting research: “How to Securely Implement Cryptography in Deep Neural Networks.” Abstract: The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic functionality (e.g., to decrypt an encrypted input, to verify that this input is authorized, or …
Really good—and detailed—survey of Trusted Execution Environments (TEEs). [...]
News: A sponge made of cotton and squid bone that has absorbed about 99.9% of microplastics in water samples in China could provide an elusive answer to ubiquitous microplastic pollution in water across the globe, a new report suggests. [...] The study tested the material in an irrigation ditch …