Critical flaw in NVIDIA Container Toolkit allows full host takeover
A critical vulnerability in NVIDIA Container Toolkit impacts all AI applications in a cloud or on-premise environment that rely on it to access GPU resources. [...]
While cybercriminals have used generative AI technology to create convincing emails, government agencies have warned about the potential abuse of AI tools to create malicious software, despite the safeguards and restrictions that vendors have implemented. [...]
Over the summer, I gave a talk about AI and democracy at TedXBillings. The recording is live: https://www.youtube.com/watch?v=uqC4nb7fLpY. Please share. I’m hoping for more than 200 views. [...]
New research evaluating the effectiveness of reward modeling during Reinforcement Learning from Human Feedback (RLHF): “SEAL: Systematic Error Analysis for Value ALignment.” The paper introduces quantitative metrics for evaluating the effectiveness of modeling and aligning human values. Abstract: Reinforcement Learning from Human Feedback (RLHF) aims to align language models …
Microsoft announced today that it has partnered with StopNCII to proactively remove harmful intimate images and videos from Bing using digital hashes people create from their sensitive media. [...]
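The matching step described above can be sketched in a few lines. This is an illustrative sketch only: StopNCII and Bing use perceptual hashes designed to survive resizing and re-encoding, whereas the cryptographic hash below only matches byte-identical files; the function names and sample bytes are hypothetical.

```python
import hashlib

def media_hash(data: bytes) -> str:
    # Real systems use perceptual hashing so edited copies still match;
    # SHA-256 here only flags byte-identical uploads.
    return hashlib.sha256(data).hexdigest()

# Hashes people create from their own sensitive media, shared with the
# platform instead of the media itself.
blocklist = {media_hash(b"reported-image-bytes")}

def should_remove(upload: bytes) -> bool:
    """Check an uploaded file against the hash blocklist."""
    return media_hash(upload) in blocklist

print(should_remove(b"reported-image-bytes"))   # matching upload
print(should_remove(b"unrelated-image-bytes"))  # non-matching upload
```

The design point is that only hashes leave the user's device, so the platform can block matches without ever receiving the sensitive media.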
North Carolina musician Michael Smith was indicted for collecting over $10 million in royalty payments from Spotify, Amazon Music, Apple Music, and YouTube Music using AI-generated songs streamed by thousands of bots in a massive streaming fraud scheme. [...]
Interesting paper: “Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data”: Generative, multimodal artificial intelligence (GenAI) offers transformative potential across industries, but its misuse poses significant risks. Prior research has shed light on the potential of advanced AI systems to be exploited for malicious purposes. However …
Yet another SQUID acronym: SQUID, short for Surrogate Quantitative Interpretability for Deepnets, is a computational tool created by Cold Spring Harbor Laboratory (CSHL) scientists. It’s designed to help interpret how AI models analyze the genome. Compared with other analysis tools, SQUID is more consistent, reduces background noise, and …
This blog post demonstrates how to use Amazon Bedrock with a detailed security plan to deploy a safe and responsible chatbot application. In this post, we identify common security risks and anti-patterns that can arise when exposing a large language model (LLM) in an application. Amazon Bedrock is built …
The latest in what will be a continuing arms race between creating and detecting videos: The new tool the research project is unleashing on deepfakes, called “MISLnet”, evolved from years of data derived from detecting fake images and video with tools that spot changes made to digital video or …
Have you ever pondered the intricate workings of generative artificial intelligence (AI) models, especially how they process and generate responses? At the heart of this fascinating process lies the context window, a critical element determining the amount of information an AI model can handle at a given time. But …
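The context-window idea above can be sketched concretely: when a conversation exceeds the model's budget, older messages must be dropped. A minimal sketch, assuming a whitespace word count as a crude stand-in for a real subword tokenizer (actual models count tokens, and the budget and messages here are hypothetical):

```python
# Illustrative context-window truncation: keep the most recent messages
# that fit within a fixed token budget, dropping the oldest first.

def truncate_to_context(messages, max_tokens):
    kept, used = [], 0
    for msg in reversed(messages):      # walk newest-to-oldest
        cost = len(msg.split())         # crude stand-in for a tokenizer
        if used + cost > max_tokens:
            break                       # budget exhausted; drop the rest
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order

history = [
    "system: you are a helpful assistant",
    "user: summarize the report",
    "assistant: here is a summary of the report",
    "user: now translate it to French",
]
print(truncate_to_context(history, max_tokens=12))
```

This is why long conversations eventually "forget" their beginnings: everything the model sees must fit inside that fixed window.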
If you’ve been reading my blog, you’ve noticed that I have written a lot about AI and democracy, mostly with my co-author Nathan Sanders. I am pleased to announce that we’re writing a book on the topic. This isn’t a book about deepfakes, or …
Former NSA Director Paul Nakasone has joined the board of OpenAI. [...]
There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don …
Interesting research: “Teams of LLM Agents can Exploit Zero-Day Vulnerabilities.” Abstract: LLM agents have become increasingly sophisticated, especially in the realm of cybersecurity. Researchers have shown that LLM agents can exploit real-world vulnerabilities when given a description of the vulnerability and toy capture-the-flag problems. However, these agents still perform …
As India concluded the world’s largest election on June 5, 2024, with over 640 million votes counted, observers could assess how the various parties and factions used artificial intelligence technologies—and what lessons that holds for the rest of the world. The campaigns made extensive use of AI …
Public polling is a critical function of modern political campaigns and movements, but it isn’t what it once was. Recent US election cycles have produced copious postmortems explaining both the successes and the flaws of public polling. There are two main reasons polling fails. First, nonresponse has skyrocketed …
New research: “Deception abilities emerged in large language models”: Abstract: Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Thus, aligning them with human values is of great importance. However, given the steady increase in reasoning abilities, future LLMs …
Hacktivists are conducting DDoS attacks on European political parties that represent and promote strategies opposing their interests, according to a report by Cloudflare. [...]
Microsoft recently caught state-backed hackers using its generative AI tools to help with their attacks. In the security community, the immediate questions weren’t about how hackers were using the tools (that was utterly predictable), but about how Microsoft figured it out. The natural conclusion was that Microsoft was …
A piece I coauthored with Fredrik Heiding and Arun Vishwanath in the Harvard Business Review: Summary. Gen AI tools are rapidly making phishing emails more advanced, harder to spot, and significantly more dangerous. Recent research showed that 60% of participants fell victim to artificial intelligence (AI)-automated phishing, which …
I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society. Not by doing new things. But mostly by doing things that are already being done by humans, perfectly competently. Replacing humans with AIs isn’t necessarily interesting. But when …
RSA Conference 2024 drew 650 speakers, 600 exhibitors, and thousands of security practitioners from across the globe to the Moscone Center in San Francisco, California from May 6 through 9. The keynote lineup was diverse, with 33 presentations featuring speakers ranging from WarGames actor Matthew Broderick, to public and …
Microsoft is trying to create a personal digital assistant: At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called “Recall” for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records …
This is another attack that convinces the AI to ignore road signs: Due to the way CMOS cameras operate, rapidly changing light from fast flashing diodes can be used to vary the color. For example, the shade of red on a stop sign could look different on each line …
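The line-by-line effect can be illustrated with a toy model of a rolling shutter: a CMOS sensor reads each scanline at a slightly different time, so an LED flashing faster than the frame rate appears lit on some rows and dark on others within a single frame. All timing numbers below are hypothetical, chosen only to make the striping visible:

```python
# Toy rolling-shutter model: which scanlines sample a flashing LED
# while it is on? Rows are read sequentially, row_time_us apart.

def rows_seen_lit(num_rows, row_time_us, flash_period_us, duty=0.5):
    lit = []
    for row in range(num_rows):
        t = row * row_time_us                        # readout time of this row
        phase = (t % flash_period_us) / flash_period_us
        if phase < duty:                             # LED on for first half-cycle
            lit.append(row)
    return lit

# 8 rows, 50 µs apart, LED flashing with a 200 µs period.
print(rows_seen_lit(num_rows=8, row_time_us=50, flash_period_us=200))
```

Because alternating bands of rows see the LED on and off, the sign's apparent color varies from line to line, which is exactly the inconsistency the attack exploits against the vision model.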