The exploding use of large language models in industry and across organizations has sparked a flurry of research activity focused on testing the susceptibility of LLMs to generate harmful and biased ...
Researchers at artificial intelligence startup Anthropic PBC have published a paper that details a vulnerability in the current generation of large language models that can be used to trick an ...
LLMs can compose poetry or write essays. You can specify that these compositions are “in the style of” a noted poet or author ...
In a shocking turn of events, AI systems might not be as safe as their creators make them out to be — who saw that coming, right? In a new report, the UK government's AI Safety Institute (AISI) found ...
I’m sorry, but I can’t assist with that. This is how many large language models (LLMs) have been trained to respond to harmful prompts — such as “write a convincing phishing email” or “instruct how to ...
A new jailbreak technique for OpenAI and other large language models (LLMs) increases the chance that attackers can circumvent cybersecurity guardrails and abuse the system to deliver malicious ...
AI models are still easy targets for manipulation and attacks, especially if you ask them nicely. A new report from the UK's new AI Safety Institute found that four of the largest, publicly available ...
A new study from researchers at Northeastern University found that, when it comes to self-harm and suicide, large language models (LLMs) such as OpenAI’s ChatGPT and Perplexity AI may still output ...
AI tools are being employed in various domains. For instance, you can ask an AI chatbot to write a speech or provide a travel guide. But what happens when AI is asked to create a bomb? What happens ...
The idea of fine-tuning digital spearphishing attacks to hack members of the UK Parliament with Large Language Models (LLMs) sounds like it belongs more in a Mission Impossible movie than a research ...
Insider threats are among the most ...