Showing only posts tagged machine learning. Show all posts.

AIs as Trusted Third Parties

Source

This is a truly fascinating paper: “ Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties: Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as …

Enhancing cloud security in AI/ML: The little pickle story

Source

As AI and machine learning (AI/ML) become increasingly accessible through cloud service providers (CSPs) such as Amazon Web Services (AWS), new security issues can arise that customers need to address. AWS provides a variety of services for AI/ML use cases, and developers often interact with these services …

Detecting Pegasus Infections

Source

This tool seems to do a pretty good job. The company’s Mobile Threat Hunting feature uses a combination of malware signature-based detection, heuristics, and machine learning to look for anomalies in iOS and Android device activity or telltale signs of spyware infection. For paying iVerify customers, the tool …

Exploring the benefits of artificial intelligence while maintaining digital sovereignty

Source

Around the world, organizations are evaluating and embracing artificial intelligence (AI) and machine learning (ML) to drive innovation and efficiency. From accelerating research and enhancing customer experiences to optimizing business processes, improving patient outcomes, and enriching public services, the transformative potential of AI is being realized across sectors. Although …

AI Industry is Trying to Subvert the Definition of “Open Source AI”

Source

The Open Source Initiative has published (news article here ) its definition of “open source AI,” and it’s terrible. It allows for secret training data and mechanisms. It allows for development to be done in secret. Since for a neural network, the training data is the source code—it …

Context window overflow: Breaking the barrier

Source

Have you ever pondered the intricate workings of generative artificial intelligence (AI) models, especially how they process and generate responses? At the heart of this fascinating process lies the context window, a critical element determining the amount of information an AI model can handle at a given time. But …

Hugging Face, the GitHub of AI, hosted code that backdoored user devices

Source

Enlarge (credit: Getty Images) Code uploaded to AI developer platform Hugging Face covertly installed backdoors and other types of malware on end-user machines, researchers from security firm JFrog said Thursday in a report that’s a likely harbinger of what’s to come. In all, JFrog researchers said, they …

Poisoning AI Models

Source

New research into poisoning AI models : The researchers first trained the AI models using supervised learning and then used additional “safety training” methods, including more supervised learning, reinforcement learning, and adversarial training. After this, they checked if the AI still had hidden behaviors. They found that with specific prompts …