Language Detector

Our Language Detector identifies the language used in a prompt and verifies that it is one of the languages you allow.

Vulnerability

As LLMs have grown more sophisticated, so have attempts to manipulate or "confuse" them. These attacks include jailbreaks and prompt injections written in other languages, which can slip past filters that only inspect English text.

Our Language Detector is specifically engineered to detect such attempts by verifying the language of each prompt against an allowlist.

Usage

Configuration

from guardrail.firewall.input_detectors import Language

input_detectors = [Language(valid_languages=["en", ...])]
sanitized_output, is_valid, risk_score = firewall.scan_input(prompt, input_detectors)
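To make the return contract concrete, here is a stdlib-only toy that mimics the detector's `(sanitized_output, is_valid, risk_score)` tuple with a naive stopword heuristic. This is an illustrative sketch, not the library's implementation — the real detector uses a proper language-identification model, and the stopword sets and scoring below are assumptions made up for this example.

```python
# Illustrative sketch only -- NOT the library's implementation.
# It mimics the detector's (sanitized_output, is_valid, risk_score)
# contract using a crude stopword-overlap heuristic.

STOPWORDS = {
    "en": {"the", "and", "is", "of", "to", "in", "a", "that"},
    "de": {"der", "und", "ist", "von", "zu", "das", "die", "ein"},
    "es": {"el", "y", "es", "de", "a", "en", "que", "la"},
}

def guess_language(text):
    """Return the language whose stopwords best match the text, or None."""
    words = set(text.lower().split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def scan_language(prompt, valid_languages):
    """Mimic the detector: pass the prompt through, flag disallowed languages."""
    lang = guess_language(prompt)
    is_valid = lang in valid_languages
    risk_score = 0.0 if is_valid else 1.0
    return prompt, is_valid, risk_score

# An English prompt passes when only English is allowed:
print(scan_language("The quick brown fox is in the garden", ["en"]))
# A German prompt is flagged under the same allowlist:
print(scan_language("Das ist der Anfang und das Ende", ["en"]))
```

A real detector keeps the same shape: the prompt is returned unchanged, `is_valid` reflects the allowlist check, and `risk_score` rises as confidence in a disallowed language grows.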