Guard the Full Pipeline
Validate inputs before they reach your LLM. Verify outputs before they reach users. Formal verification using Z3 theorem proving — not prompt engineering.
$
pip install aare-core
view source
pipeline.py
from aare import HIPAAInputGuardrail, HIPAAGuardrail
from langchain_openai import ChatOpenAI
llm = ChatOpenAI()
input_guard = HIPAAInputGuardrail() # blocks injection + PHI in prompts
output_guard = HIPAAGuardrail() # blocks PHI in LLM responses
# full pipeline: validate input -> generate -> verify output
chain = input_guard | prompt | llm | output_guard
try:
response = chain.invoke({"text": user_input})
except HIPAAInputViolationError as e:
# input blocked (injection or PHI leakage)
log_blocked_input(e.result)
except ViolationError as e:
# output blocked (PHI in LLM response)
log_violation(e.result)// pipeline
input
User Query
validate
Aare Input
injection + PHI
generate
LLM
verify
Aare Output
formal proof
output
Safe Response
Current guardrails fail
- Prompt engineering: "please don't violate policies"
- Regex filters: brittle, easy to bypass
- Input-only or output-only: half the pipeline exposed
- Trust the model: jailbreaks happen
Aare guards both directions
- Input: block injection attacks + PHI leakage to LLMs
- Output: formal verification via Z3 theorem prover
- Post-generation: immune to prompt injection
- Configurable: block, warn, or redact
// use cases
try below
HIPAA Compliance
Block PHI in both inputs and outputs. All 18 Safe Harbor categories.
input guard
Prompt Injection
Detect jailbreaks, instruction overrides, and system prompt extraction.
PCI DSS
Prevent credit card numbers and cardholder data in responses.
Corporate Policy
No competitor mentions, pricing commitments, or legal advice.
Data Leakage
Prevent internal data and API keys from being exposed.
Legal Compliance
GDPR, CCPA, and jurisdiction-specific regulations.
// pipeline demo
See the full pipeline: input validation -> LLM -> output verification. Runs entirely in your browser.
1
input guardrail
// checks for injection + PHI
2
llm response
// simulated LLM output
3
output guardrail
// verifies no PHI in output
detection log
[+]
// click run pipeline to start...
output: detects all 18 hipaa safe harbor categories
names
geographic
dates
phone
fax
email
ssn
mrn
health plan id
account #
license #
vehicle id
device id
urls
ip address
biometric
photos
other ids
input: detects prompt injection threats
jailbreak
prompt injection
system prompt extraction
DAN mode
instruction override
chat template injection