Private AI that answers questions about your company's documents.
No third-party training. Nothing leaves AWS.
Isolated multi-tenant by default. Dedicated deployment in your own AWS account for Enterprise. Built for multinationals, regulated industries, and mid-sized organizations that cannot afford for their knowledge to leave the perimeter.
The problem
Critical knowledge is scattered
Methodologies, manuals, and processes live across SharePoint, Drive, Confluence, PDFs, and people's heads. New hires take months to onboard.
ChatGPT is not an option
Customer data, contracts, regulated information (GDPR, HIPAA-adjacent, finance). Uploading to public SaaS is a legal and reputational risk.
No traceability, no compliance
Without verifiable citations and forensic audit, no recommendation passes legal, risk, or internal compliance review.
A complete, governed, enterprise-ready RAG platform
We don't sell a chatbot. We sell the full pipeline that turns your private knowledge into audited, evidence-backed answers — inside your own AWS infrastructure.
Secure ingestion
PDFs, DOCX, Markdown, CSV, JSON. Parsing, chunking with overlap, metadata extraction, vector embeddings with Amazon Titan. All inside your AWS VPC.
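The chunking-with-overlap step can be sketched as follows. This is a minimal illustration; the chunk size and overlap values are assumptions, not rags.cc's actual parameters:

```python
def chunk_text(text: str, size: int = 800, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks that share `overlap` characters,
    so a sentence crossing a boundary appears in two consecutive chunks."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    # Stop once the remaining tail is already covered by the previous chunk.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk is then embedded (e.g. with Amazon Titan) and stored alongside its source-document metadata.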
Retrieval with Row-Level Security
Semantic search over pgvector in Aurora PostgreSQL with multi-tenant isolation verified by automated tests. Metadata filters for your vertical or tenant.
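The isolation pattern described here is standard PostgreSQL Row-Level Security combined with pgvector's distance operator. A hedged sketch, assuming hypothetical table and setting names (`chunks`, `app.tenant_id`); `<=>` is pgvector's cosine-distance operator:

```python
# One-time setup: every SELECT on `chunks` is filtered to the current tenant.
RLS_POLICY_SQL = """
ALTER TABLE chunks ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON chunks
    USING (tenant_id = current_setting('app.tenant_id')::uuid);
"""

# Per-request: semantic search ordered by cosine distance to the query vector.
SEARCH_SQL = """
SELECT id, content, embedding <=> %(query_vec)s AS distance
FROM chunks
ORDER BY embedding <=> %(query_vec)s
LIMIT %(k)s;
"""

def set_tenant_sql(tenant_id: str) -> str:
    # SET LOCAL scopes the setting to the current transaction only.
    return f"SET LOCAL app.tenant_id = '{tenant_id}'"
```

With the policy in place, forgetting a `WHERE tenant_id = ...` clause cannot leak rows: the database itself enforces the filter.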
Answers with Claude via Bedrock
Claude Haiku 4.5 / Sonnet 4.6 / Opus 4.6 with automatic cost-aware routing. Guardrails against cross-tenant leaks, prompt injection, and evidence-free responses. Real-time streaming.
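Cost-aware routing of this kind can be sketched as a simple policy function. The thresholds and tier names below are illustrative assumptions, not the actual routing rules:

```python
def route(query: str, n_chunks: int) -> str:
    """Send short, narrow lookups to the cheapest model and escalate
    long, multi-document questions to a stronger (pricier) one."""
    if n_chunks <= 3 and len(query) < 200:
        return "haiku"   # cheap, fast: simple factual lookups
    if n_chunks <= 10:
        return "sonnet"  # mid-tier: multi-chunk synthesis
    return "opus"        # strongest: long, cross-document reasoning
```

A real router would also weigh per-tenant budgets and the running cost tracked in the audit log.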
Verifiable citations
Every answer links to the exact chunk in the source document. A forensic audit trail reconstructible months later: who asked, which chunks were retrieved, which model answered, and what it cost.
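An append-only record with roughly these fields is enough to replay a query months later. The field names are illustrative, not the actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: records are written once, never mutated
class AuditRecord:
    """One row per query: who asked, what evidence was shown, at what cost."""
    user_id: str
    query: str
    chunk_ids: list     # the exact chunks cited in the answer
    model: str          # which model produced the answer
    cost_usd: float     # per-query cost for chargeback and budgets
    ts: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
```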
Enterprise governance
Workspace RBAC, per-tenant policies, append-only audit log, per-query cost tracking, configurable retention, GDPR export, SAML/OIDC SSO, BYO KMS keys.
BYO-LLM and data residency
Plug in your own Anthropic, OpenAI, or private model credentials. Deploy in us-east-1 or eu-west-1. Zero vendor lock-in, zero transatlantic data transfer.
Who it's for
Built from day one for organizations that see their data as competitive advantage and legal obligation — not fuel for public SaaS.
Multinationals with regulated data
Banking, insurance, healthcare, energy, corporate legal. Industries under GDPR, HIPAA-adjacent, SOX, PCI-DSS. They need AI over their knowledge without a single record leaving their AWS perimeter.
Large enterprises with sensitive IP
Manufacturing, pharma, engineering, defense, strategy consulting. Patents, formulas, proprietary methods, strict NDAs. ChatGPT Enterprise is not legally viable.
Chains and franchises with proprietary methodology
50+ locations needing operational consistency, standardized training, and answers aligned with HQ methodology at every site. AI that distributes corporate know-how.
Mid-sized organizations scaling up
200–2000 employees with enterprise-equivalent compliance but without the Glean budget ($50k+ entry). The exact gap rags.cc fills.
Public and semi-public sector
Municipalities, regulators, public universities, public hospitals. Non-negotiable data sovereignty, tender budgets, multi-year contracts.
Teams already deep in AWS
Any organization that already chose AWS as its primary cloud and requires its AI tooling to respect its perimeter, IAM roles, KMS keys, and CloudTrail.
✕ Who it's NOT for
- Individual users chatting with personal notes (use NotebookLM)
- Startups under 50 people with no regulatory requirements
- Companies comfortable uploading data to ChatGPT Enterprise or Copilot
- Teams not on AWS or with cloud-agnostic strategy
- Cases where LLM creativity matters more than traceability
Two ways to keep your data protected
Pick the model that fits your risk profile, compliance needs, and budget. Both honor the fundamental rule: your content never trains third-party models and inference never leaves AWS.
Shared multi-tenant
Operated by rags.cc in AWS us-east-1
The fastest way to get started. Sign up, pay by card, start uploading documents in minutes. We run the infra.
- ✓ Infra running in our dedicated rags.cc AWS account
- ✓ Multi-tenant isolation with Row-Level Security verified by automated tests
- ✓ Inference via Amazon Bedrock with private VPC endpoint (data never touches the public internet)
- ✓ Encryption at rest with KMS managed by rags.cc
- ✓ Append-only audit log, exportable on request
- ✓ Anthropic does NOT use your data to train (Bedrock policy)
- ✓ rags.cc NEVER sells, transfers, or trains models with your content
- ✓ Optional BYO-LLM on Business (your own Anthropic / OpenAI credentials)
Dedicated deployment in your AWS account
Your account. Your VPC. Your KMS. Your CloudTrail.
We deploy the rags.cc stack directly inside your AWS account via Terraform and a narrowly scoped cross-account IAM role. Your data never exists outside your perimeter.
- ✓Infra inside the AWS account you authorize
- ✓Your data physically lives in your Aurora, your S3, your VPC
- ✓BYO KMS keys: the cryptographic root belongs to you, rotate whenever
- ✓Your CloudTrail sees every operation; your security team audits everything
- ✓rags.cc operates via temporary, auditable assume-role, with no direct data access
- ✓You can revoke our access at any moment without losing your data
- ✓SSO SAML/OIDC with your corporate IdP
- ✓Negotiable SLA, 24/7 support, dedicated CSM
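The limited, revocable cross-account role described above typically rests on an IAM trust policy with an ExternalId condition, the standard AWS pattern for third-party access. A sketch with placeholder account ID and ExternalId:

```python
# Standard cross-account trust policy: only the named operator account may
# assume the role, and only when it presents the customer-chosen ExternalId.
# Account ID and ExternalId below are placeholders, not real values.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "customer-chosen-id"}},
    }],
}
```

Revocation is one operation on your side: delete the role (or its trust policy) and the operator's access ends, while your data stays in your account.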
| Quick comparison | Shared multi-tenant | Dedicated deployment in your AWS account |
|---|---|---|
| Inference never touches the public internet | ✓ | ✓ |
| Data NOT used to train third parties | ✓ | ✓ |
| Multi-tenant Row-Level Security | ✓ | n/a (single-tenant) |
| KMS encryption at rest | rags.cc KMS | your KMS (BYO) |
| CloudTrail visible to you | via reports | yes, direct |
| Data in your AWS account | — | ✓ |
| Revoke access in 1 click | — | ✓ |
| Minimum contract | monthly | annual |
| Time to start | < 1 hour | 2–4 weeks |
| Price range | $99–$1,499 USD/mo | $5k–$25k+ USD/mo |
Why rags.cc
Private by design
Everything runs in your AWS VPC. Bedrock via private endpoint. Zero egress.
Verifiable citations
Every answer links to the exact chunk. Forensic auditability months later.
True multi-tenant
Row-Level Security in PostgreSQL. Isolation verified by automated tests.
LLM replaceability
Decoupled Model Gateway. Switch from Claude to Llama without rewriting the product.
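A decoupled gateway of this kind reduces to a small interface that every backend implements, so product code never imports a vendor SDK directly. The names below are illustrative, not the actual rags.cc API:

```python
from typing import Protocol

class ModelGateway(Protocol):
    """Any backend (Bedrock Claude, self-hosted Llama, OpenAI) satisfies
    this interface; swapping models never touches the product code."""
    def generate(self, prompt: str, max_tokens: int) -> str: ...

class EchoBackend:
    # Stand-in backend for illustration; a real one would call Bedrock, etc.
    def generate(self, prompt: str, max_tokens: int) -> str:
        return prompt[:max_tokens]

def answer(gateway: ModelGateway, question: str) -> str:
    # Product code depends only on the Protocol, not on any vendor SDK.
    return gateway.generate(question, max_tokens=512)
```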
Simple public pricing
All prices in USD. No surprises. Billed monthly, or annually at a discount.
Starter
- 1 workspace
- 500 documents
- 1,000 queries / mo
- 10M LLM tokens / mo
- Claude Haiku 4.5
- 30-day audit log
Pro
- 5 workspaces
- 5,000 documents
- 10,000 queries / mo
- 50M LLM tokens / mo
- Haiku 4.5 + Sonnet 4.6
- 90-day audit log
- 99.5% SLA
Business
- 25 workspaces
- 50,000 documents
- 50,000 queries / mo
- 200M LLM tokens / mo
- Haiku + Sonnet + Opus 4.6
- BYO-LLM
- 1-year audit log
- 99.9% SLA
Taxes not included. Enterprise tier with SSO, dedicated deployment, and 99.95% SLA available under contract.