What is Aeglos?
Aeglos essentially ensures that your LLM agents are constantly being monitered to ensure consistent security and reliability. This means that there is no more scope for both external malicious inputs and internal attackable prompts to mess up your results.
Right now we primarily support langchain (python only, js coming soon!), allowing you to guard your Langchain agents and LCEL chains.
What we prevent (to name a few)
Indirect Prompt Injection
Malicious inputs hidden in external resources can no longer mess up your system
Code Injection
Malicious code entered in no longer has the potential to disrupt anything
Context Leakage
Prevent situations where LLM’s inadvertently disclose confidential context/information
Virtualization
Prompts can no longer “set the scene” for agents, putting them in a position to divulge incorrect information/ act unethically