Black Hat | Def Con 2024
Are you attending Black Hat? Meet the Cobalt team and Core at booth #2913!

LLM Security

AI Advancements and Their Impact on Cybersecurity Trends

According to the State of Pentesting Report 2024, a vast majority (86%) of respondents say they have seen a significant...
Jun 4, 2024
Est Read Time: 6 min

The Security Risks of LLM-Powered Chatbots

A large language model (LLM) is a system that draws information from large databases and uses artificial intelligence...
May 28, 2024
Est Read Time: 5 min

LLM Vulnerability: Excessive Agency Overview

From prompt injection attacks to over-reliance on the correctness of model output, large language models (LLMs) offer security...
Apr 30, 2024
Est Read Time: 4 min

Large Language Model (LLM) Theft: Strategies for Prevention

Large Language Models (LLMs) process and generate human-like text, enabling applications in natural language...
Mar 15, 2024
Est Read Time: 7 min

LLM Insecure Output Handling

Large Language Models (LLMs), such as GPT-4, Gemini, and Mistral, have become indispensable for powering everything from...
Mar 12, 2024
Est Read Time: 7 min

Multi-Modal Prompt Injection Attacks Using Images

Recent developments have unveiled a new class of cyber threats aimed at Large Language Models (LLMs) like ChatGPT:...
Dec 29, 2023
Est Read Time: 4 min

Backdoor Attacks on AI Models

Backdoor attacks in AI and ML are a significant concern for cybersecurity experts.
Dec 20, 2023
Est Read Time: 5 min

Data Poisoning Attacks: A New Attack Vector within AI

New types of malicious attacks involving AI systems are emerging alongside the technology itself. One way for attackers...
Jul 26, 2023
Est Read Time: 5 min

Prompt Injection Attacks: A New Frontier in Cybersecurity

Prompt injection attacks have emerged as a new vulnerability impacting AI models. Specifically, large language models...
May 31, 2023
Est Read Time: 8 min