Showing only posts tagged AI.

Man pleads guilty to using malicious AI software to hack Disney employee

Source

A California man has pleaded guilty to hacking an employee of The Walt Disney Company by tricking the person into running a malicious version of a widely used open source AI image generation tool. Ryan Mitchell Kramer, 25, pleaded guilty to one count of accessing a computer and obtaining …

Another Move in the Deepfake Creation/Detection Arms Race

Source

Deepfakes are now mimicking heartbeats. In a nutshell: recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns from their source videos, undermining traditional detection methods that relied on detecting subtle skin color changes linked to heartbeats. The assumption that deepfakes lack physiological signals, such as heart rate …
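The detection signal at issue here is remote photoplethysmography (rPPG): a heartbeat causes tiny periodic fluctuations in skin color that spectral analysis can recover from video. A minimal synthetic sketch of the principle, with toy numbers standing in for a real per-frame green-channel trace (this is an illustration of the idea, not an actual detector):

```python
import numpy as np

# A heartbeat leaves a small periodic fluctuation in skin color; rPPG
# recovers it by spectral analysis of per-frame mean color intensity.
# Here we fake that trace with a known 1.2 Hz (72 bpm) pulse plus noise.
fps = 30.0
t = np.arange(0, 10, 1 / fps)                          # 10 s of "video"
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t)             # pulse component
signal += 0.05 * np.random.default_rng(0).normal(size=t.size)  # noise

# Find the dominant frequency in the plausible heart-rate band (0.7-4 Hz).
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
band = (freqs >= 0.7) & (freqs <= 4.0)
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(round(bpm))  # → 72
```

The research finding is that a good deepfake pipeline copies this subtle color signal over from the driving video, so the spectral peak is still there and the "no heartbeat" heuristic stops working.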

Use an Amazon Bedrock powered chatbot with Amazon Security Lake to help investigate incidents

Source

In part 2 of this series, we showed you how to use Amazon SageMaker Studio notebooks with natural language input to assist with threat hunting. This is done by using SageMaker Studio to automatically generate and run SQL queries on Amazon Athena with Amazon Bedrock and Amazon Security Lake …

AI-generated code could be a disaster for the software supply chain. Here’s why.

Source

AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions, newly published research shows. The study, which used 16 of the most …
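A minimal sketch of one first-line defensive check a team might apply before trusting AI-suggested dependencies (the unresolvable package name below is deliberately made up): flag any suggested module that does not resolve locally, so it can be verified against a trusted registry before anything is installed.

```python
import importlib.util

def screen_suggested_packages(names):
    """Partition AI-suggested module names into resolvable and suspect.

    An unresolvable name is not proof of hallucination, but it is a cue
    to verify the package against a trusted registry before installing,
    rather than blindly pip-installing whatever the model emitted.
    """
    resolvable, suspect = [], []
    for name in names:
        if importlib.util.find_spec(name) is not None:
            resolvable.append(name)
        else:
            suspect.append(name)
    return resolvable, suspect

# "json" ships with Python; "totally_real_http_utils" is a made-up name.
ok, flagged = screen_suggested_packages(["json", "totally_real_http_utils"])
print(ok)       # ['json']
print(flagged)  # ['totally_real_http_utils']
```

The attack described in the study works precisely because the opposite happens: the hallucinated name gets installed unexamined, and an attacker who registered it on the public index gets code execution.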

Applying Security Engineering to Prompt Injection Security

Source

This seems like an important advance in LLM security against prompt injection: Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within …
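The core idea can be sketched in a few lines (a hypothetical toy policy, not DeepMind's actual CaMeL implementation): the model's output is treated as untrusted data that merely *proposes* actions, and a trusted interpreter executes only actions the capability policy explicitly permits.

```python
# Toy capability policy: tool name -> the argument names it may receive.
ALLOWED = {
    "search_docs": {"query"},   # read-only, always permitted
}

def execute(proposed_action: dict) -> str:
    """Trusted interpreter: run only what the policy grants a capability for.

    The LLM never executes anything itself; it emits data, and this
    function decides, so an injected instruction cannot widen access.
    """
    name = proposed_action.get("tool")
    args = proposed_action.get("args", {})
    if name not in ALLOWED:
        return f"denied: no capability for {name!r}"
    if set(args) - ALLOWED[name]:
        return f"denied: unexpected arguments for {name!r}"
    return f"ok: would run {name} with {args}"

# A prompt-injected attempt to exfiltrate mail is refused by policy,
# regardless of how persuasive the injected text was to the model.
print(execute({"tool": "send_email", "args": {"to": "attacker@example.com"}}))
print(execute({"tool": "search_docs", "args": {"query": "Q3 report"}}))
```

This is the security-engineering move the post highlights: instead of hoping the model polices itself, enforcement lives in deterministic code outside the model.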

Regulating AI Behavior with a Hypervisor

Source

Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.” Abstract: As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that …

That groan you hear is users’ reaction to Recall going back into Windows

Source

Security and privacy advocates are girding themselves for another uphill battle against Recall, the AI tool rolling out in Windows 11 that will screenshot, index, and store everything a user does every three seconds. When Recall was first introduced in May 2024, security practitioners roundly castigated it for creating …

AI Vulnerability Finding

Source

Microsoft is reporting that its AI systems are able to find new vulnerabilities in source code: Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, nine buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and …

OpenAI helps spammers plaster 80,000 sites with messages that bypassed filters

Source

Spammers used OpenAI to generate messages that were unique to each recipient, allowing them to bypass spam-detection filters and blast unwanted messages to more than 80,000 websites in four months, researchers said Wednesday. The finding, documented in a post published by security firm SentinelOne’s SentinelLabs, underscores the …
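One reason per-recipient uniqueness matters: the simplest spam defenses fingerprint message bodies and block exact repeats, and even a one-word change yields a completely different fingerprint. An illustrative sketch (real filters use fuzzier matching, which LLM-paraphrased text also evades):

```python
import hashlib

def fingerprint(body: str) -> str:
    """Exact-duplicate fingerprint of a message body (first 12 hex chars)."""
    return hashlib.sha256(body.encode()).hexdigest()[:12]

template = "Hi {name}, boost your site's search ranking with our tool!"
a = fingerprint(template.format(name="Alice"))
b = fingerprint(template.format(name="Bob"))
print(a == b)  # → False: an exact-duplicate blocklist never fires
```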

AIs as Trusted Third Parties

Source

This is a truly fascinating paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties. Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as …

Gemini hackers can deliver more potent attacks with a helping hand from… Gemini

Source

In the growing canon of AI security, the indirect prompt injection has emerged as the most powerful means for attackers to hack large language models such as OpenAI’s GPT-3 and GPT-4 or Microsoft’s Copilot. By exploiting a model's inability to distinguish between, on the one hand, developer-defined …

China, Russia, Iran, and North Korea Intelligence Sharing

Source

Former CISA Director Jen Easterly writes about a new international intelligence sharing co-op: Historically, China, Russia, Iran & North Korea have cooperated to some extent on military and intelligence matters, but differences in language, culture, politics & technological sophistication have hindered deeper collaboration, including in cyber. Shifting geopolitical dynamics, however, could …

Copilot exposes private GitHub pages, some removed by Microsoft

Source

Microsoft’s Copilot AI assistant is exposing the contents of more than 20,000 private GitHub repositories from companies including Google, Intel, Huawei, PayPal, IBM, Tencent and, ironically, Microsoft. These repositories, belonging to more than 16,000 organizations, were originally posted to GitHub as public, but were later set …

“Emergent Misalignment” in LLMs

Source

Interesting research: “Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs”: Abstract: We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts …

More Research Showing AI Breaking the Rules

Source

These researchers had LLMs play chess against better opponents. When they couldn’t win, they sometimes resorted to cheating. Researchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any …

Implementing Cryptography in AI Systems

Source

Interesting research: “How to Securely Implement Cryptography in Deep Neural Networks.” Abstract: The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic functionality (e.g., to decrypt an encrypted input, to verify that this input is authorized, or …

New hack uses prompt injection to corrupt Gemini’s long-term memory

Source

In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots to exfiltrate sensitive data or perform other malicious actions. Developers of platforms such as Google's Gemini and OpenAI's ChatGPT are generally good at plugging these security holes, but hackers keep …

DeepSeek iOS app sends data unencrypted to ByteDance-controlled servers

Source

A little over two weeks ago, a largely unknown China-based company named DeepSeek stunned the AI world with the release of an open source AI chatbot that had simulated reasoning capabilities that were largely on par with those from market leader OpenAI. Within days, the DeepSeek AI assistant app …
