Learning More about Web Based LLM Injections

Gupta Bless
5 min readJan 23, 2024
Source

Introduction

Large language models, or LLMs, are a subset of foundation models used as a framework for LLM-integrated apps like AI-powered search. Large volumes of unlabeled and self-supervised data can be used to train these foundational models. To put it simply, these models are able to provide flexible output by learning from data patterns. Consequently, our LLM scrapped these foundational models for text or text-like elements. Why are they so massive if they can handle gigabytes of data?

Take, for instance, a 1 GB byte file containing almost 178 million words. With LLM, we can handle such massive amounts of data. Organizations begin utilizing these LLMs to boost productivity and accuracy, but in the process, they reveal or grant access rights to their data and APIs. Because these AIs’ “data” must be “architected” before the tool can be “trained,” a process by which the model learns to anticipate the following word in a phrase. To improve its performance on the targeted tasks, the model might hone its understanding. On occasion, LLM-used data has been the target of malicious operations sent via APIs or triggers. However, because resources or servers are not directly accessible, LLMs assaults are somewhat comparable to server side injection. So, let’s talk about quick injection tactics in detail because they are relied upon by many…

--

--

Gupta Bless
Gupta Bless

Written by Gupta Bless

Security enthusiast working to secure web for others.

No responses yet