Showing only posts tagged AI.

AI-Generated Law

Source

On April 14, Dubai’s ruler, Sheikh Mohammed bin Rashid Al Maktoum, announced that the United Arab Emirates would begin using artificial intelligence to help write its laws. A new Regulatory Intelligence Office would use the technology to “regularly suggest updates” to the law and “accelerate the issuance of …

Introducing the AWS User Guide to Governance, Risk and Compliance for Responsible AI Adoption within Financial Services Industries

Source

Financial services institutions (FSIs) are increasingly adopting AI technologies to drive innovation and improve customer experiences. However, this adoption brings new governance, risk, and compliance (GRC) considerations that organizations need to address. To help FSI customers navigate these challenges, AWS is excited to announce the launch of the AWS …

AI lifecycle risk management: ISO/IEC 42001:2023 for AI governance

Source

As AI becomes central to business operations, so does the need for responsible AI governance. But how can you make sure that your AI systems are ethical, resilient, and aligned with compliance standards? ISO/IEC 42001, the international management system standard for AI, offers a framework to help organizations …

New attack can steal cryptocurrency by planting false memories in AI chatbots

Source

Imagine a world where AI-powered bots can buy or sell cryptocurrency, make investments, and execute software-defined contracts in the blink of an eye, depending on minute-to-minute currency prices, breaking news, or other market-moving events. Then imagine an adversary causing the bot to redirect payments to an account they control …

Implementing safety guardrails for applications using Amazon SageMaker

Source

Large Language Models (LLMs) have become essential tools for content generation, document analysis, and natural language processing tasks. Because of the complex non-deterministic output generated by these models, you need to apply robust safety measures to help prevent inappropriate outputs and protect user interactions. These measures are crucial to …
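The excerpt above argues that non-deterministic LLM output needs safety measures wrapped around it. As a loose illustration of that pattern (an assumption on my part, not the code from the AWS post), a guardrail can screen both the user's prompt and the model's output before anything reaches the user; the denylist and function names here are hypothetical:

```python
# Minimal guardrail sketch (assumed pattern, not the AWS post's code):
# screen both the prompt and the model's output against a denylist
# before returning anything to the user.

BLOCKED_TERMS = {"ssn", "credit card number"}  # illustrative denylist


def guard(text):
    """Return True if the text passes the guardrail check."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def safe_generate(prompt, model):
    """Apply the guardrail on input and output; block on either side."""
    if not guard(prompt):
        return "[input blocked by guardrail]"
    output = model(prompt)
    if not guard(output):
        return "[output blocked by guardrail]"
    return output
```

Real deployments would use a managed service such as guardrails built into the model platform rather than a hand-rolled denylist, but the shape is the same: untrusted text is checked on the way in and on the way out.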

Man pleads guilty to using malicious AI software to hack Disney employee

Source

A California man has pleaded guilty to hacking an employee of The Walt Disney Company by tricking the person into running a malicious version of a widely used open source AI image generation tool. Ryan Mitchell Kramer, 25, pleaded guilty to one count of accessing a computer and obtaining …

Another Move in the Deepfake Creation/Detection Arms Race

Source

Deepfakes are now mimicking heartbeats. In a nutshell: recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns from their source videos, undermining traditional detection methods that relied on detecting subtle skin-color changes linked to heartbeats. The assumption that deepfakes lack physiological signals, such as heart rate …

Use an Amazon Bedrock powered chatbot with Amazon Security Lake to help investigate incidents

Source

In part 2 of this series, we showed you how to use Amazon SageMaker Studio notebooks with natural language input to assist with threat hunting. This is done by using SageMaker Studio to automatically generate and run SQL queries on Amazon Athena with Amazon Bedrock and Amazon Security Lake …

AI-generated code could be a disaster for the software supply chain. Here’s why.

Source

AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions, newly published research shows. The study, which used 16 of the most …
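The attack the excerpt describes works because developers install whatever dependency names the AI suggests, real or not. One defensive pattern (my sketch, not from the study) is to vet suggested dependencies against an organization's approved allowlist before anything is installed; the package names below are illustrative assumptions:

```python
# Hypothetical mitigation sketch: vet AI-suggested dependencies against
# an approved allowlist before they ever reach `pip install`, so a
# hallucinated (and possibly attacker-registered) name gets flagged.

APPROVED_PACKAGES = {"requests", "numpy", "pandas", "flask"}  # illustrative


def vet_dependencies(suggested):
    """Split suggested dependency names into approved and suspect lists."""
    approved = [p for p in suggested if p.lower() in APPROVED_PACKAGES]
    suspect = [p for p in suggested if p.lower() not in APPROVED_PACKAGES]
    return approved, suspect


# "huggingface-cli2" stands in for a plausible-sounding hallucinated name.
ok, flagged = vet_dependencies(["requests", "numpy", "huggingface-cli2"])
```

An allowlist is blunt, but it converts a silent supply-chain compromise into a visible review step; pinned hashes and private package mirrors serve the same end.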

Applying Security Engineering to Prompt Injection Security

Source

This seems like an important advance in LLM security against prompt injection: Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within …
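To make the "untrusted component" idea concrete, here is a deliberately simplified sketch of capability-style mediation; this is my illustration of the general approach, not CaMeL's actual design or code, and the policy and tool names are invented:

```python
# Illustrative sketch (not CaMeL's real implementation): the model may
# propose any plan it likes, but an interpreter outside the model checks
# every tool call against an explicit policy before executing it.

POLICY = {
    "read_calendar": True,   # benign capability, granted
    "send_payment": False,   # never granted from untrusted prompt context
}


def run_plan(plan):
    """Execute only the tool calls the policy permits; refuse the rest."""
    results = []
    for tool in plan:
        if POLICY.get(tool, False):
            results.append((tool, "executed"))
        else:
            results.append((tool, "blocked"))
    return results


# A prompt-injected plan that tries to sneak in a payment:
print(run_plan(["read_calendar", "send_payment"]))
```

The point is that the security decision never depends on the model's judgment: even if injected text convinces the model to request the payment, the interpreter lacks the capability to carry it out.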

Regulating AI Behavior with a Hypervisor

Source

Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.” Abstract: As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that …

That groan you hear is users’ reaction to Recall going back into Windows

Source

Security and privacy advocates are girding themselves for another uphill battle against Recall, the AI tool rolling out in Windows 11 that will screenshot, index, and store everything a user does every three seconds. When Recall was first introduced in May 2024, security practitioners roundly castigated it for creating …

AI Vulnerability Finding

Source

Microsoft is reporting that its AI systems are able to find new vulnerabilities in source code: Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, nine buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and …

OpenAI helps spammers plaster 80,000 sites with messages that bypassed filters

Source

Spammers used OpenAI to generate messages that were unique to each recipient, allowing them to bypass spam-detection filters and blast unwanted messages to more than 80,000 websites in four months, researchers said Wednesday. The finding, documented in a post published by security firm SentinelOne’s SentinelLabs, underscores the …

AIs as Trusted Third Parties

Source

This is a truly fascinating paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties: Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as …

Gemini hackers can deliver more potent attacks with a helping hand from… Gemini

Source

In the growing canon of AI security, the indirect prompt injection has emerged as the most powerful means for attackers to hack large language models such as OpenAI’s GPT-3 and GPT-4 or Microsoft’s Copilot. By exploiting a model's inability to distinguish between, on the one hand, developer-defined …

China, Russia, Iran, and North Korea Intelligence Sharing

Source

Former CISA Director Jen Easterly writes about a new international intelligence-sharing co-op: Historically, China, Russia, Iran & North Korea have cooperated to some extent on military and intelligence matters, but differences in language, culture, politics & technological sophistication have hindered deeper collaboration, including in cyber. Shifting geopolitical dynamics, however, could …
