Now live - dual-region EU + US

Stop prompt injection
before it reaches your LLM

One API call. Real-time detection. Bordair sits as middleware between user input and your LLM, blocking adversarial attacks in under 100ms.

# pip install bordair
from bordair import Bordair
client = Bordair()
result = client.scan("Ignore all previous in...")
Result
{
  "threat": "high",
  "confidence": 0.9842,
  "method": "pattern"
}
⚠ BLOCKED
<100ms avg latency
99%+ accuracy
2 global regions

Features

Everything you need to protect your LLM

Sub-100ms detection

Two-stage pipeline: regex fast-reject fires in under 1ms. The INT8-quantised ONNX model handles ambiguous inputs in ~15ms warm - fast enough for synchronous middleware.

ML + rules hybrid

High-precision regex patterns catch obvious attacks instantly. A fine-tuned DistilBERT model catches subtle, novel injection attempts that rules alone would miss.
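The two-stage pipeline described above can be sketched in a few lines. The patterns and the 0.5 threshold here are illustrative assumptions (the production rule set isn't published), and `model_score` stands in for the fine-tuned DistilBERT classifier:

```python
import re

# Illustrative patterns only -- the real rule set is not published.
FAST_REJECT = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.I),
    re.compile(r"you\s+are\s+now\s+in\s+developer\s+mode", re.I),
]

def scan(text: str, model_score) -> dict:
    """Two-stage scan: regex fast-reject first, ML model for the rest.

    `model_score` stands in for the DistilBERT classifier; it returns a
    probability that `text` is an injection attempt.
    """
    for pattern in FAST_REJECT:
        if pattern.search(text):  # stage 1: sub-millisecond rule hit
            return {"threat": "high", "confidence": 0.99, "method": "pattern"}
    score = model_score(text)     # stage 2: ~15ms model pass (warm)
    return {
        "threat": "high" if score > 0.5 else "low",
        "confidence": score if score > 0.5 else 1 - score,
        "method": "model",
    }
```

Obvious attacks never reach the model, which is what keeps the average latency low; only ambiguous inputs pay the model's cost.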

Dual-region, always on

Deployed to EU (London) and US (Virginia) with Route 53 latency routing. Your traffic automatically hits the nearest region.

Long-prompt safe

Head-and-tail chunking scans the start and end of prompts up to 10,000 characters - catching injections appended after legitimate content.
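Head-and-tail chunking can be sketched as follows. The 2,000-character window is an assumed value; the text above only states that prompts up to 10,000 characters are supported:

```python
def head_tail_chunks(prompt: str, window: int = 2000, max_len: int = 10_000):
    """Return the spans to scan: the head and tail of a long prompt.

    `window` is an assumed chunk size, not a documented one.
    """
    prompt = prompt[:max_len]
    if len(prompt) <= 2 * window:
        return [prompt]  # short prompt: scan it whole
    # Injections are often appended after legitimate content, so the
    # tail chunk matters as much as the head.
    return [prompt[:window], prompt[-window:]]
```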

Drop-in middleware

One POST request before you call OpenAI, Anthropic, or any other LLM. If the returned threat is "high", reject the request; otherwise, proceed. No SDK lock-in, and the overhead fits comfortably inside a synchronous request's latency budget.
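The POST-before-your-LLM guard can be sketched with the standard library alone. The endpoint URL, JSON body shape, and bearer-token header here are assumptions for illustration; check the API reference for the exact request format:

```python
import json
import urllib.request

# Hypothetical endpoint -- consult the Bordair API docs for the real one.
BORDAIR_URL = "https://api.bordair.com/scan"

def build_scan_request(text: str, api_key: str) -> urllib.request.Request:
    """Build the POST request sent to Bordair before calling your LLM."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        BORDAIR_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# In a handler you would send it and read the verdict:
# resp = urllib.request.urlopen(build_scan_request(user_message, API_KEY))
# verdict = json.load(resp)  # e.g. {"threat": "...", "confidence": ..., "method": "..."}
```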

Scan analytics

Every scan is logged. Track threat rates, confidence scores, and detection-method breakdowns per API key, all visible in your dashboard.

Coming soon

Image scanning

Detect adversarial image injections, steganographic payloads, and visual jailbreaks before they reach your multimodal LLM.

Coming soon

Audio scanning

Scan audio inputs for ultrasonic injection attacks and adversarial speech patterns designed to manipulate voice-enabled AI systems.

How it works

Three lines of code between you and attacks

1

User sends input

Your application receives a message or prompt from a user.

2

Bordair scans it

POST the input to /scan with your API key. The two-stage detector returns threat level and confidence in milliseconds.

3

Route or reject

If threat is "high", return an error to the user. If "low", forward to your LLM as normal.

# pip install bordair
from bordair import Bordair

client = Bordair(api_key=API_KEY)

def is_safe(user_input: str) -> bool:
    result = client.scan(user_input)
    return result["threat"] == "low"

# In your request handler:
if not is_safe(user_message):
    raise ValueError("Request blocked")

# Safe - call your LLM
response = llm(user_message)

Pricing

Start free, scale when you need to

No payment required to get started.

Free

$0 forever

For personal projects and prototypes.

  • 500 scans/day
  • 20 scans/minute
  • REST API access
  • Dashboard
  • Priority routing
  • SLA guarantee
Start free
Most popular

Individual

$19/month

For solo developers shipping to production.

  • 10,000 scans/day
  • 100 scans/minute
  • REST API access
  • Dashboard
  • Email support
  • SLA guarantee
Get started

Business

$99/month

For teams with production workloads.

  • 1M scans/day
  • 10,000 scans/minute
  • REST API access
  • Dashboard
  • Priority support
  • 99.9% SLA
Contact us

Enterprise

Custom

For large-scale or compliance-sensitive deployments.

  • Unlimited scans
  • Custom rate limits
  • REST API access
  • Dashboard
  • Dedicated support
  • Custom SLA
  • Custom contracts
Talk to us

Ready to protect your AI?

Free tier, no credit card, live in 2 minutes.

Create free account