
Subliminal Learning in AIs

Source

Today’s freaky LLM behavior: We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a “student” model learns to prefer owls when trained on sequences of numbers generated by a “teacher” model that prefers …

Unless users take action, Android will let Gemini access third-party apps

Source

Starting today, Google is implementing a change that will enable its Gemini AI engine to interact with third-party apps, such as WhatsApp, even when users previously configured their devices to block such interactions. Users who don't want their previous settings to be overridden may have to take action. An …

The Age of Integrity

Source

We need to talk about data integrity. Narrowly, the term refers to ensuring that data isn’t tampered with, either in transit or in storage. Manipulating account balances in bank databases, removing entries from criminal records, and murder by removing notations about allergies from medical records are all integrity …

AI security strategies from Amazon and the CIA: Insights from AWS Summit Washington, DC

Source

At this year’s AWS Summit in Washington, DC, I had the privilege of moderating a fireside chat with Steve Schmidt, Amazon’s Chief Security Officer, and Lakshmi Raman, the CIA’s Chief Artificial Intelligence Officer. Our discussion explored how AI is transforming cybersecurity, threat response, and innovation across …

Researchers cause GitLab AI developer assistant to turn safe code malicious

Source

Marketers promote AI-assisted developer tools as workhorses that are essential for today’s software engineer. Developer platform GitLab, for instance, claims its Duo chatbot can “instantly generate a to-do list” that eliminates the burden of “wading through weeks of commits.” What these companies don’t say is that these …

AI-Generated Law

Source

On April 14, Dubai’s ruler, Sheikh Mohammed bin Rashid Al Maktoum, announced that the United Arab Emirates would begin using artificial intelligence to help write its laws. A new Regulatory Intelligence Office would use the technology to “regularly suggest updates” to the law and “accelerate the issuance of …

Introducing the AWS User Guide to Governance, Risk and Compliance for Responsible AI Adoption within Financial Services Industries

Source

Financial services institutions (FSIs) are increasingly adopting AI technologies to drive innovation and improve customer experiences. However, this adoption brings new governance, risk, and compliance (GRC) considerations that organizations need to address. To help FSI customers navigate these challenges, AWS is excited to announce the launch of the AWS …

AI lifecycle risk management: ISO/IEC 42001:2023 for AI governance

Source

As AI becomes central to business operations, so does the need for responsible AI governance. But how can you make sure that your AI systems are ethical, resilient, and aligned with compliance standards? ISO/IEC 42001, the international management system standard for AI, offers a framework to help organizations …

New attack can steal cryptocurrency by planting false memories in AI chatbots

Source

Imagine a world where AI-powered bots can buy or sell cryptocurrency, make investments, and execute software-defined contracts at the blink of an eye, depending on minute-to-minute currency prices, breaking news, or other market-moving events. Then imagine an adversary causing the bot to redirect payments to an account they control …

Implementing safety guardrails for applications using Amazon SageMaker

Source

Large Language Models (LLMs) have become essential tools for content generation, document analysis, and natural language processing tasks. Because of the complex non-deterministic output generated by these models, you need to apply robust safety measures to help prevent inappropriate outputs and protect user interactions. These measures are crucial to …
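The kind of safety measure the post describes can be illustrated with a toy output filter. This is only a minimal sketch of the general idea, not the SageMaker guardrail feature itself; the `BLOCKED` patterns and `guard` function are illustrative names chosen here.

```python
import re

# Illustrative deny-list of patterns the application never wants to emit.
BLOCKED = [r"\bssn\b", r"\bcredit card\b"]

def guard(text: str) -> str:
    """Return model output unchanged, or a refusal if it matches a blocked pattern."""
    for pattern in BLOCKED:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return "[blocked by safety guardrail]"
    return text
```

In practice, production guardrails layer several such checks (classifiers, allow-lists, human review) rather than relying on regular expressions alone.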

Man pleads guilty to using malicious AI software to hack Disney employee

Source

A California man has pleaded guilty to hacking an employee of The Walt Disney Company by tricking the person into running a malicious version of a widely used open source AI image generation tool. Ryan Mitchell Kramer, 25, pleaded guilty to one count of accessing a computer and obtaining …

Another Move in the Deepfake Creation/Detection Arms Race

Source

Deepfakes are now mimicking heartbeats. In a nutshell: recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns from their source videos, undermining traditional detection methods that relied on detecting subtle skin-color changes linked to heartbeats. The assumption that deepfakes lack physiological signals, such as heart rate …

Use an Amazon Bedrock powered chatbot with Amazon Security Lake to help investigate incidents

Source

In part 2 of this series, we showed you how to use Amazon SageMaker Studio notebooks with natural language input to assist with threat hunting. This is done by using SageMaker Studio to automatically generate and run SQL queries on Amazon Athena with Amazon Bedrock and Amazon Security Lake …

AI-generated code could be a disaster for the software supply chain. Here’s why.

Source

AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions, newly published research shows. The study, which used 16 of the most …
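One simple defense against this failure mode is to audit AI-suggested dependencies against a vetted allow-list before installing anything. The sketch below assumes a local set of approved names (e.g. from an organization's lockfile); `VETTED`, `audit_requirements`, and the package name `fastproto-utils` are hypothetical, not from the study.

```python
# Names your organization has already vetted (illustrative).
VETTED = {"requests", "numpy", "flask"}

def audit_requirements(requested: list[str]) -> tuple[list[str], list[str]]:
    """Split AI-suggested package names into approved and suspect lists."""
    approved = [p for p in requested if p.lower() in VETTED]
    suspect = [p for p in requested if p.lower() not in VETTED]
    return approved, suspect

approved, suspect = audit_requirements(["requests", "fastproto-utils"])
# A hallucinated name like "fastproto-utils" ends up in `suspect`
# for manual review instead of being installed blindly.
```

Checking names against the real registry (or pinning hashes) adds a further layer, but an allow-list alone already blocks the "install whatever the model suggested" path the attack depends on.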

Applying Security Engineering to Prompt Injection Security

Source

This seems like an important advance in LLM security against prompt injection: Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within …
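The core idea, treating the model as untrusted rather than asking it to police itself, can be sketched in a few lines. This is a toy illustration of the principle, not DeepMind's CaMeL implementation; `ALLOWED_ACTIONS` and `execute` are names invented here. The untrusted model may only *select* among pre-declared actions; it never constructs commands directly.

```python
# Pre-declared, capability-checked actions (illustrative).
ALLOWED_ACTIONS = {
    "send_summary": lambda doc: f"summary of {doc}",
}

def execute(model_choice: str, argument: str) -> str:
    """Run an action only if the untrusted model picked an allow-listed one."""
    action = ALLOWED_ACTIONS.get(model_choice)
    if action is None:
        raise PermissionError(f"untrusted action rejected: {model_choice}")
    return action(argument)
```

Under this design, a prompt injection that convinces the model to request `"delete_files"` simply fails at the capability check, because that action was never granted.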

Regulating AI Behavior with a Hypervisor

Source

Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.” Abstract: As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that …

That groan you hear is users’ reaction to Recall going back into Windows

Source

Security and privacy advocates are girding themselves for another uphill battle against Recall, the AI tool rolling out in Windows 11 that will screenshot, index, and store everything a user does every three seconds. When Recall was first introduced in May 2024, security practitioners roundly castigated it for creating …
