
Detecting Pegasus Infections

This tool seems to do a pretty good job. iVerify’s Mobile Threat Hunting feature uses a combination of malware-signature-based detection, heuristics, and machine learning to look for anomalies in iOS and Android device activity or telltale signs of spyware infection. For paying iVerify customers, the tool …

Exploring the benefits of artificial intelligence while maintaining digital sovereignty

Around the world, organizations are evaluating and embracing artificial intelligence (AI) and machine learning (ML) to drive innovation and efficiency. From accelerating research and enhancing customer experiences to optimizing business processes, improving patient outcomes, and enriching public services, the transformative potential of AI is being realized across sectors. Although …

AI Industry is Trying to Subvert the Definition of “Open Source AI”

The Open Source Initiative has published (news article here) its definition of “open source AI,” and it’s terrible. It allows for secret training data and mechanisms. It allows for development to be done in secret. Since for a neural network, the training data is the source code—it …

Context window overflow: Breaking the barrier

Have you ever pondered the intricate workings of generative artificial intelligence (AI) models, especially how they process and generate responses? At the heart of this fascinating process lies the context window, a critical element determining the amount of information an AI model can handle at a given time. But …
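The trimming behavior the excerpt alludes to can be sketched in a few lines. This is only an illustration, not any vendor’s actual implementation: real systems count subword tokens with a tokenizer, whereas this sketch approximates a token count by whitespace-separated words, and the message strings are invented.

```python
# Minimal sketch of fitting a chat history into a fixed context window.
# Token counts are approximated by whitespace-separated words here; real
# models use subword tokenizers, so the numbers are only illustrative.

def count_tokens(text):
    """Crude stand-in for a real tokenizer."""
    return len(text.split())

def fit_to_window(messages, max_tokens):
    """Keep the most recent messages whose combined size fits the window.

    Walks the history from newest to oldest and stops at the first
    message that would overflow the budget -- the basic mechanism that
    makes very long conversations 'forget' their earliest turns.
    """
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    "system: you are a helpful assistant",
    "user: summarize this report for me",
    "assistant: here is a short summary",
    "user: now translate it to French",
]
print(fit_to_window(history, max_tokens=12))
```

With a 12-token budget, only the two most recent turns survive; the system message and first user turn are silently dropped, which is exactly the failure mode an overflow attack exploits.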

Hugging Face, the GitHub of AI, hosted code that backdoored user devices

Code uploaded to AI developer platform Hugging Face covertly installed backdoors and other types of malware on end-user machines, researchers from security firm JFrog said Thursday in a report that’s a likely harbinger of what’s to come. In all, JFrog researchers said, they …

Poisoning AI Models

New research into poisoning AI models: The researchers first trained the AI models using supervised learning and then used additional “safety training” methods, including more supervised learning, reinforcement learning, and adversarial training. After this, they checked if the AI still had hidden behaviors. They found that with specific prompts …

Dropbox spooks users with new AI features that send data to OpenAI when used

On Wednesday, news quickly spread on social media about a new enabled-by-default Dropbox setting that shares Dropbox data with OpenAI for an experimental AI-powered search feature, but Dropbox says data is only shared if the feature is actively being used. Dropbox says that user data …

Due to AI, “We are about to enter the era of mass spying,” says Bruce Schneier

In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering …

Extracting GPT’s Training Data

This is clever: The actual attack is kind of silly. We prompt the model with the command “Repeat the word ‘poem’ forever” and sit back and watch as the model responds (complete transcript here). In the (abridged) example above, the model emits a real email address and phone number …
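The divergence point in a transcript of the attack described above is easy to locate mechanically. This is just an illustrative post-processing helper, not the researchers’ code, and the sample transcript (including the contact details in it) is fabricated:

```python
def leaked_suffix(output, token="poem"):
    """Return the text after the model stops repeating `token`.

    In the attack, the model is asked to repeat a word forever; at some
    point it diverges and starts emitting other text, which is where
    memorized training data can surface.  This helper finds that
    divergence point in a transcript.
    """
    words = output.split()
    i = 0
    while i < len(words) and words[i] == token:
        i += 1
    return " ".join(words[i:])

# Fabricated transcript for demonstration only.
transcript = "poem poem poem poem Contact me at jane.doe@example.com or 555-0100"
print(leaked_suffix(transcript))
```

An empty result means the model was still dutifully repeating the word when the transcript was cut off; anything non-empty is the candidate leaked material.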

Google’s $30-per-month “Duet” AI will craft awkward emails, images for you

On Tuesday, Google announced the launch of its Duet AI assistant across its Workspace apps, including Docs, Gmail, Drive, Slides, and more. First announced in May at Google I/O, Duet has been in testing for some time, but it is now available …

Generate machine learning insights for Amazon Security Lake data using Amazon SageMaker

Amazon Security Lake automatically centralizes the collection of security-related logs and events from integrated AWS and third-party services. With the increasing amount of security data available, it can be challenging knowing what data to focus on and which tools to use. You can use native AWS services such as …

Bots Are Better than Humans at Solving CAPTCHAs

Interesting research: “An Empirical Study & Evaluation of Modern CAPTCHAs”: Abstract: For nearly two decades, CAPTCHAs have been widely used as a means of protection against bots. Throughout the years, as their use grew, techniques to defeat or bypass CAPTCHAs have continued to improve. Meanwhile, CAPTCHAs have also evolved in …

Using Machine Learning to Detect Keystrokes

Researchers have trained an ML model to detect keystrokes by sound with 95% accuracy. “A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards” Abstract: With recent developments in deep learning, the ubiquity of microphones and the rise in online services via personal devices, acoustic side channel attacks present …
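The classify-keys-by-their-sound idea can be demonstrated on toy data. The paper trains a deep network on mel-spectrograms of real keystroke recordings; this drastically simplified stand-in instead measures signal energy at a few fixed frequencies (a single-bin DFT per key) and picks the strongest. The per-key resonant frequencies and the synthetic “keystrokes” are entirely invented for the demo:

```python
import math
import random

# Drastically simplified stand-in for the paper's deep-learning pipeline:
# each key is modeled as a tone at a made-up resonant frequency, and
# classification just asks which frequency band carries the most energy.

random.seed(0)
SAMPLE_RATE = 8000
KEY_FREQS = {"a": 400.0, "b": 900.0, "c": 1500.0}  # invented per-key resonances

def synth_keystroke(key, noise=0.3):
    """Generate a toy 50 ms keystroke: a key-specific tone plus noise."""
    n = int(0.05 * SAMPLE_RATE)
    return [math.sin(2 * math.pi * KEY_FREQS[key] * i / SAMPLE_RATE)
            + random.gauss(0, noise) for i in range(n)]

def band_energy(clip, freq):
    """Single-bin DFT: correlate the clip with sin/cos at `freq`."""
    re = sum(s * math.cos(2 * math.pi * freq * i / SAMPLE_RATE)
             for i, s in enumerate(clip))
    im = sum(s * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
             for i, s in enumerate(clip))
    return re * re + im * im

def classify(clip):
    """Guess the key whose frequency band is loudest in the clip."""
    return max(KEY_FREQS, key=lambda k: band_energy(clip, KEY_FREQS[k]))

hits = sum(classify(synth_keystroke(k)) == k for k in "abcabcabc")
print(f"{hits}/9 toy keystrokes classified correctly")
```

Real keyboards obviously do not emit clean sinusoids; the paper’s contribution is that learned spectrogram features separate real keys nearly as cleanly as these synthetic tones do.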

AI researchers claim 93% accuracy in detecting keystrokes over Zoom audio

By recording keystrokes and training a deep learning model, three researchers claim to have achieved upwards of 90 percent accuracy in interpreting remote keystrokes …

Indirect Instruction Injection in Multi-Modal LLMs

Interesting research: “(Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs”: Abstract: We demonstrate how images and sounds can be used for indirect prompt and instruction injection in multi-modal LLMs. An attacker generates an adversarial perturbation corresponding to the prompt and blends it into an image …
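The blending step the abstract mentions is bounded so the perturbation stays visually negligible. Finding the perturbation requires gradients through the target model; the sketch below shows only the final FGSM-style blending step, with a made-up gradient and made-up pixel values, not the paper’s actual attack:

```python
# Sketch of the epsilon-bounded blending step used by FGSM-style attacks:
# each pixel moves by at most eps in the gradient's direction, then is
# clipped so the result remains a valid image.  Gradient and pixels here
# are invented for illustration.

def sign(x):
    return (x > 0) - (x < 0)

def perturb(image, grad, eps=2 / 255):
    """Nudge each pixel by at most eps, keeping values in [0, 1]."""
    return [min(1.0, max(0.0, p + eps * sign(g)))
            for p, g in zip(image, grad)]

pixels = [0.50, 0.25, 1.00, 0.00]
grad = [0.8, -1.3, 0.2, -0.4]   # hypothetical gradient of the attacker's loss
adv = perturb(pixels, grad)
print(adv)
```

Because every pixel changes by at most 2/255, the perturbed image is indistinguishable to a human, yet the accumulated nudges can steer the model toward the attacker’s injected instruction.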

ChatGPT now allows disabling chat history, declining training, and exporting data

On Tuesday, OpenAI announced new controls for ChatGPT users that allow them to turn off chat history, simultaneously opting out of providing that conversation history as data for training AI models. Also, users can now export chat history for local storage. The new controls …

Using LLMs to Create Bioweapons

I’m not sure there are good ways to build guardrails to prevent this sort of thing: There is growing concern regarding the potential misuse of molecular machine learning models for harmful purposes. Specifically, the dual-use application of models for predicting cytotoxicity to create new poisons or employing AlphaFold2 …

Side-Channel Attack against CRYSTALS-Kyber

CRYSTALS-Kyber is one of the public-key algorithms currently recommended by NIST as part of its post-quantum cryptography standardization process. Researchers have just published a side-channel attack—using power consumption—against an implementation of the algorithm that was supposed to be resistant against that sort of attack. The algorithm is …

Putting Undetectable Backdoors in Machine Learning Models

This is really interesting research from a few months ago: Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns …

Attacking Machine Learning Systems

The field of machine learning (ML) security—and corresponding adversarial ML—is rapidly advancing as researchers develop sophisticated techniques to perturb, disrupt, or steal the ML model or data. It’s a heady time; because we know so little about the security of these systems, there are many opportunities …

Paper: Stable Diffusion “memorizes” some images, sparking privacy concerns

[Image: an image from Stable Diffusion’s training set (left) compared to a similar Stable Diffusion generation (right) when prompted with “Ann Graham Lotz”; credit: Carlini et al., 2023.] On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining …
