Showing only posts tagged Artificial Intelligence. Show all posts.

Preparing for take-off: Regulatory perspectives on generative AI adoption within Australian financial services

Source

The Australian financial services regulator, the Australian Prudential Regulation Authority (APRA), has provided its most substantial guidance on generative AI to date in Member Therese McCarthy Hockey’s remarks to the AFIA Risk Summit 2024. The guidance gives a green light for banks, insurance companies, and superannuation funds to …

Exploring the benefits of artificial intelligence while maintaining digital sovereignty

Source

Around the world, organizations are evaluating and embracing artificial intelligence (AI) and machine learning (ML) to drive innovation and efficiency. From accelerating research and enhancing customer experiences to optimizing business processes, improving patient outcomes, and enriching public services, the transformative potential of AI is being realized across sectors. Although …

Exploring the benefits of artificial intelligence while maintaining digital sovereignty

Source

Around the world, organizations are evaluating and embracing artificial intelligence (AI) and machine learning (ML) to drive innovation and efficiency. From accelerating research and enhancing customer experiences to optimizing business processes, improving patient outcomes, and enriching public services, the transformative potential of AI is being realized across sectors. Although …

Race Condition Attacks against LLMs

Source

These are two attacks against the system components surrounding LLMs: We propose that LLM Flowbreaking, following jailbreaking and prompt injection, joins as the third on the growing list of LLM attack types. Flowbreaking is less about whether prompt or response guardrails can be bypassed, and more about whether user …

Securing the RAG ingestion pipeline: Filtering mechanisms

Source

Retrieval-Augmented Generative (RAG) applications enhance the responses retrieved from large language models (LLMs) by integrating external data such as downloaded files, web scrapings, and user-contributed data pools. This integration improves the models’ performance by adding relevant context to the prompt. While RAG applications are a powerful way to dynamically …

Securing the RAG ingestion pipeline: Filtering mechanisms

Source

Retrieval-Augmented Generative (RAG) applications enhance the responses retrieved from large language models (LLMs) by integrating external data such as downloaded files, web scrapings, and user-contributed data pools. This integration improves the models’ performance by adding relevant context to the prompt. While RAG applications are a powerful way to dynamically …

AI Industry is Trying to Subvert the Definition of “Open Source AI”

Source

The Open Source Initiative has published (news article here ) its definition of “open source AI,” and it’s terrible. It allows for secret training data and mechanisms. It allows for development to be done in secret. Since for a neural network, the training data is the source code—it …

Prompt Injection Defenses Against LLM Cyberattacks

Source

Interesting research: “ Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks “: Large language models (LLMs) are increasingly being harnessed to automate cyberattacks, making sophisticated exploits more accessible and scalable. In response, we propose a new defense strategy tailored to counter LLM-driven cyberattacks. We introduce Mantis, a …

Subverting LLM Coders

Source

Really interesting research: “ An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection “: Abstract : Large Language Models (LLMs) have transformed code com- pletion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning …

Watermark for LLM-Generated Text

Source

Researchers at Google have developed a watermark for LLM-generated text. The basics are pretty obvious: the LLM chooses between tokens partly based on a cryptographic key, and someone with knowledge of the key can detect those choices. What makes this hard is (1) how much text is required for …

AI and the SEC Whistleblower Program

Source

Tax farming is the practice of licensing tax collection to private contractors. Used heavily in ancient Rome, it’s largely fallen out of practice because of the obvious conflict of interest between the state and the contractor. Because tax farmers are primarily interested in short-term revenue, they have no …

Deebot Robot Vacuums Are Using Photos and Audio to Train Their AI

Source

An Australian news agency is reporting that robot vacuum cleaners from the Chinese company Deebot are surreptitiously taking photos and recording audio, and sending that data back to the vendor to train their AIs. Ecovacs’s privacy policy— available elsewhere in the app —allows for blanket collection of user …

Hacking ChatGPT by Planting False Memories into Its Data

Source

This vulnerability hacks a feature that allows ChatGPT to have long-term memory, where it uses information from past conversations to inform future conversations with that same user. A researcher found that he could use that feature to plant “false memories” into that context window that could subvert the model …

AI and the 2024 US Elections

Source

For years now, AI has undermined the public’s ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an “AI-generated look into the country’s possible future if Joe Biden is re-elected,” showing apocalyptic, machine-made images of ruined cityscapes and …

page 1 | older articles »