content transformation and safety preprocessing for llm and agentic apps
before a request can be validated or routed, its content must be transformed into a safe, structured, model-ready form.
transformabl defines this layer.
transformabl is the content transformation and safety preprocessing gateway for llm apps.
it lets you:
📦 implementation:
ai-content-gateway (roadmap)
transformabl is identity-agnostic; policy decisions happen in validatabl.
as llm apps operate on personal, financial, or regulated data, preprocessing becomes critical.
policies depend on what the content is, not just who sent it.
transformabl prepares content for the rest of the gatewaystack pipeline.
all gatewaystack modules operate on a shared RequestContext object.
transformabl is responsible for:
- identity (optional)
- raw content
- upstream hints from clientMetadata (classification, pii flags, risk score, topics)
- transformed content (redacted/pseudonymized)
- transformation logs

transformabl receives raw user input and applies transformations that:
it ensures that everything passed to validatabl and proxyabl is safe, structured, and explainable.
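as a rough sketch, the RequestContext pieces transformabl reads and writes could be modeled like this (field names are illustrative, not the actual gatewaystack schema):

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class RequestContext:
    """Illustrative sketch of the shared context object (field names assumed)."""
    identity: Optional[str] = None             # optional caller identity
    raw_content: str = ""                      # original, untransformed input
    client_metadata: dict[str, Any] = field(default_factory=dict)   # upstream hints: classification, pii flags, risk score, topics
    transformed_content: Optional[str] = None  # redacted / pseudonymized output
    transformation_log: list[str] = field(default_factory=list)     # what was changed, and why

# transformabl fills in the transformed fields before validatabl/proxyabl run
ctx = RequestContext(raw_content="my email is a@b.com")
ctx.transformed_content = "my email is [EMAIL]"
ctx.transformation_log.append("redactPII: masked 1 email address")
```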
transformabl operates on both requests and responses:
request transformation:
response transformation:
transformabl supports both:
this enables workflows where the llm processes anonymized data, but the end user sees the original context.
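a minimal sketch of that round trip, assuming a simple in-memory token map and email-only detection (function names and the regex are illustrative):

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace email addresses with reversible tokens; return text and token map."""
    token_map: dict[str, str] = {}
    def repl(match: re.Match) -> str:
        token = f"<PII_{len(token_map)}>"
        token_map[token] = match.group(0)
        return token
    return EMAIL.sub(repl, text), token_map

def rehydrate(text: str, token_map: dict[str, str]) -> str:
    """Restore the original values in the model's response."""
    for token, original in token_map.items():
        text = text.replace(token, original)
    return text

# the llm only ever sees the tokenized form; the user sees the original
safe, mapping = pseudonymize("contact alice@example.com about the invoice")
restored = rehydrate(f"i emailed {list(mapping)[0]} as requested", mapping)
```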
transformabl detects:
safety risks:
regulated content:
business policies:
1. detectPII – identify personal or sensitive data
   (emails, phone numbers, addresses, ids, biometrics)
2. redactPII – remove or mask detected pii
   (full or partial redaction options)
3. classifyContent – detect unsafe / legal / regulated categories
4. segmentInput – split content into safe, unsafe, and neutral regions
5. normalize – formatting cleanup, whitespace, casing, structure
6. extractMetadata – topics, intent, sentiment, risk
7. analyzeForRouting – suggest the optimal provider/model/tool based on content
   (e.g., contains phi → route to a hipaa-compliant model; python code → route to a code-optimized model)
8. pseudonymize – reversible tokenization of pii regions
9. rehydrate – restore pseudonymized values on the response
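chained together, the first few steps might look like this (the regexes, categories, and risk scoring are simplified placeholders, not transformabl's actual detectors):

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def detect_pii(text: str) -> list[tuple[str, str]]:
    """Step 1: identify personal data (only emails and phone numbers here)."""
    return ([("email", m) for m in EMAIL.findall(text)]
            + [("phone", m) for m in PHONE.findall(text)])

def redact_pii(text: str) -> str:
    """Step 2: mask detected pii with typed placeholders."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

def extract_metadata(text: str) -> dict:
    """Step 6: naive risk signal based on pii presence."""
    hits = detect_pii(text)
    return {"pii_count": len(hits), "risk": "high" if hits else "low"}

msg = "call 555-867-5309 or mail bob@corp.io"
meta = extract_metadata(msg)   # pii found, so risk is flagged high
clean = redact_pii(msg)        # safe form handed to validatabl/proxyabl
```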
transformabl writes its results into the metadata field in RequestContext, which is shared with the other modules:

- identifiabl to authenticate users
- validatabl to enforce permissions
- limitabl to apply rate limits or quotas
- proxyabl to perform provider routing
- explicabl to store or ship audit logs

running pii detection, content classification, and metadata extraction on every request can be expensive:
transformabl pipelines are configurable: enable only what you actually need.
common patterns:
optimization strategies:
```yaml
transformations:
  pii_detection:
    enabled: true
    cache_ttl: 300s  # cache results for repeated content
  content_classification:
    enabled_for:
      - org_healthcare
      - org_finance
    skip_for:
      - tier_free  # skip classification for free tier
  routing_analysis:
    enabled: true
    use_lightweight_model: true  # faster, less accurate
```
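one way such a config could drive the pipeline (a dict literal stands in for the parsed yaml; the key names mirror the snippet above, the helper is hypothetical):

```python
# parsed form of the yaml config above (cache_ttl normalized to seconds)
config = {
    "pii_detection": {"enabled": True, "cache_ttl": 300},
    "content_classification": {
        "enabled_for": ["org_healthcare", "org_finance"],
        "skip_for": ["tier_free"],
    },
    "routing_analysis": {"enabled": True, "use_lightweight_model": True},
}

def should_classify(org: str, tier: str) -> bool:
    """Apply the skip_for / enabled_for rules: skip rules win, then opt-in."""
    cc = config["content_classification"]
    if tier in cc["skip_for"]:
        return False
    return org in cc["enabled_for"]

should_classify("org_healthcare", "tier_pro")   # healthcare org, paid tier: classify
should_classify("org_healthcare", "tier_free")  # free tier skips classification
should_classify("org_retail", "tier_pro")       # org not opted in: skip
```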
user
 ↓ identifiabl (who is calling?)
 ↓ transformabl (prepare, clean, classify, anonymize)
 ↓ validatabl (is this allowed?)
 ↓ limitabl (can they afford it? pre-flight constraints)
 ↓ proxyabl (where does it go? execute)
 ↓ llm provider (model call)
 ↓ [limitabl] (deduct actual usage, update quotas/budgets)
 ↓ explicabl (what happened?)
 ↓ response
every request becomes safer and more structured before governance rules run.
transformabl plugs into gatewaystack and your existing llm stack without requiring application-level changes. it exposes http middleware and sdk hooks for:
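as a minimal sketch of the sdk-hook style, a decorator can transform the prompt before the wrapped llm call runs (the hook API and function names here are hypothetical, not transformabl's actual interface):

```python
import re
from functools import wraps

def transform_request(text: str) -> str:
    """Stand-in for transformabl's request pipeline (here: trivial email masking)."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)

def with_transformabl(fn):
    """SDK-style hook: transform the prompt before the wrapped call sees it."""
    @wraps(fn)
    def wrapper(prompt: str, **kwargs):
        return fn(transform_request(prompt), **kwargs)
    return wrapper

@with_transformabl
def call_llm(prompt: str) -> str:
    return f"llm saw: {prompt}"   # placeholder for the real provider call
```

because the transformation happens in the hook, the application code calling `call_llm` needs no changes, which is the point of the middleware/hook integration style.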
for transformation configuration:

→ PII detection patterns
→ content classification guide
→ pseudonymization strategies

for compliance use cases:

→ HIPAA compliance setup
→ GDPR data handling

for implementation:

→ integration guide

want to explore the full gatewaystack architecture?

→ view the gatewaystack github repo

want to contact us for enterprise deployments?

→ reducibl applied ai studio
every request flows from your app through gatewaystack's modules before it reaches an llm provider: identified, transformed, validated, constrained, routed, and audited.