What we prevent (to name a few)
Indirect Prompt Injection
Malicious inputs hidden in external resources can no longer mess up your
system
Code Injection
Malicious code entered in no longer has the potential to disrupt anything
Context Leakage
Prevent situations where LLM’s inadvertently disclose confidential context/information
Virtualization
Prompts can no longer “set the scene” for agents, putting them in a position to
divulge incorrect information/ act unethically