Frequently asked questions
Everything you need to know about SimplyRAG, pricing, security, and use cases.
What is SimplyRAG?
What is SimplyRAG?
SimplyRAG is an AI knowledge assistant that turns your company data—documents, SQL databases, websites, and Firebase—into searchable, chat-ready knowledge. You get instant answers with verified sources and citations.
Who is SimplyRAG for?
Teams that need AI search and chat over their own data: customer support, internal knowledge, documentation and DevRel, legal and compliance, sales enablement, and self-serve analytics from SQL. Same simple platform—connect your data and ship.
What data sources does SimplyRAG support?
File buckets (S3, Azure Blob, GCS), SQL databases (PostgreSQL, MySQL, SQLite), website crawlers, and Firebase/Firestore. Each can feed into collections. SimplyRAG indexes your data and makes it queryable in plain language.
RAG & AI search
What is RAG (Retrieval Augmented Generation)?
RAG is a technique that combines a large language model (LLM) with your own data. When you ask a question, the system first retrieves relevant content from your indexed documents or databases, then sends that context to the LLM. The model generates an answer grounded in your data—with source citations—instead of relying only on its training. SimplyRAG handles retrieval, context assembly, and chat so you don't build it from scratch.
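The retrieve-then-generate loop described above can be sketched in a few lines. Everything here is illustrative: the documents, the `retrieve` and `build_prompt` helpers, and the keyword-overlap ranking (a stand-in for real vector search) are assumptions, not SimplyRAG's actual API.

```python
# Minimal RAG sketch: retrieve relevant chunks, then assemble the LLM context.
DOCS = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm UTC on weekdays.",
    "Enterprise plans include SSO and audit logs.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive keyword overlap (a stand-in for vector search)."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble retrieved chunks into the context an LLM would receive."""
    joined = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(context))
    return f"Answer using only these sources:\n{joined}\n\nQuestion: {question}"

context = retrieve("When are refunds processed?", DOCS)
prompt = build_prompt("When are refunds processed?", context)
```

A production pipeline would replace the keyword overlap with embedding similarity and send `prompt` to an LLM, but the shape of the flow (retrieve, assemble context, generate) is the same.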
Why use RAG instead of fine-tuning or plain LLM chat?
Fine-tuning is expensive and your model goes stale as data changes. Plain LLM chat only knows its training data and has a cutoff date—it can't see your internal docs or latest updates. RAG keeps the model general-purpose and injects your current data at query time. You get accurate, up-to-date answers without retraining. It's the standard way to make AI useful on your company's knowledge.
How does RAG improve answer accuracy?
By retrieving the right chunks first, the LLM has concrete evidence to base its answer on. That reduces hallucinations and lets you cite sources. You can also filter by metadata, mix keyword and semantic search (hybrid), and tune how much context is sent. SimplyRAG does the retrieval and orchestration so answers stay grounded in your data.
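To make the hybrid idea concrete, here is a minimal sketch that blends a normalized keyword score (e.g. from BM25) with a semantic-similarity score. The `alpha` weight is an assumed tuning knob for illustration, not a documented SimplyRAG parameter.

```python
def hybrid_score(keyword: float, semantic: float, alpha: float = 0.5) -> float:
    """Weighted blend of two 0-1 scores; alpha=1.0 is pure semantic,
    alpha=0.0 is pure keyword."""
    return alpha * semantic + (1 - alpha) * keyword

# Chunk A contains the exact query term; chunk B is a paraphrase
# that shares no words with the query.
chunk_a = hybrid_score(keyword=0.9, semantic=0.4)
chunk_b = hybrid_score(keyword=0.0, semantic=0.8)
```

The blend is why hybrid search helps: exact-match terms (error codes, product names, IDs) still rank highly, while paraphrases are not lost, and shifting `alpha` trades one off against the other.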
What are embeddings and vector search in RAG?
Embeddings are numerical representations of text that capture meaning. Similar content gets similar vectors. In RAG, your documents are split into chunks, each chunk is embedded, and the vectors are stored in a vector database. At query time, the question is embedded and the database returns the most similar chunks. That's vector (semantic) search—finding by meaning, not just keywords. SimplyRAG supports multiple embedding providers and vector DBs (BYOK).
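The idea can be demonstrated with toy vectors. A minimal sketch, assuming hand-made 3-dimensional "embeddings" (real models produce hundreds or thousands of dimensions, and a real system would use a vector database rather than a dict):

```python
import math

# Toy embeddings: related topics get nearby vectors, unrelated ones do not.
EMBEDDINGS = {
    "refund policy": [0.9, 0.1, 0.0],
    "billing and payments": [0.8, 0.2, 0.1],
    "kubernetes deployment": [0.0, 0.1, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for identical direction, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec: list[float], index: dict) -> str:
    """Return the key whose vector is most similar to the query vector."""
    return max(index, key=lambda key: cosine(query_vec, index[key]))

# A money-related query vector lands near the billing/refund chunks,
# not the Kubernetes one, even with zero shared keywords.
best = nearest([0.85, 0.15, 0.05], EMBEDDINGS)
```

This is the core of semantic search: similarity in vector space stands in for similarity in meaning, which is why a query can match a chunk that uses entirely different words.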
What's the difference between RAG and a regular search engine?
A search engine returns a list of links or snippets. RAG goes further: it takes those relevant snippets, feeds them to an LLM as context, and returns a direct answer in natural language with citations. So instead of 'here are 10 pages that might contain the answer,' you get 'the answer is X, and here are the sources.' SimplyRAG combines search (including semantic and hybrid) with chat and citations in one flow.
How it works
How does SimplyRAG work?
You connect your data, create collections (knowledge bases), and SimplyRAG indexes them automatically. Then you ask questions in the built-in Chat UI or via API; answers include source citations. No infrastructure to manage.
How do I connect my documents or database to SimplyRAG?
Create file buckets or add a SQL, website, or Firebase source in the dashboard. Add the source to a collection and configure your embedding and vector DB keys (BYOK). SimplyRAG indexes and makes it queryable.
How do I embed SimplyRAG chat on my website?
Use embeddable widgets: create a widget in the dashboard, get the snippet or public embed URL, and add it to your site. Access can be secured with API tokens. No backend required.
Pricing & plans
Is SimplyRAG free?
Yes. The Starter plan is free (e.g. up to 1,000,000 tokens, a few data sources) so you can evaluate AI search. No credit card required. Upgrade to Pro or Enterprise when you need more.
How much does SimplyRAG cost?
Starter is free. Pro is $99/month with higher limits, pipelines, and BYOK. Enterprise is custom with dedicated infra, SLA, and SSO.
What's included in the free tier vs Pro?
Free: small token limit, a few data sources, shared rate limits. Pro: pipelines, unlimited sources, 10M tokens included, BYOK, and priority support.
Security & compliance
Is my data secure with SimplyRAG?
Yes. BYOK for embeddings, vectors, and storage; keys encrypted at rest; per-organization isolation; optional activity and audit logs for compliance.
Can I use my own API keys with SimplyRAG?
Yes (BYOK). Use your own keys for embeddings (e.g. OpenAI, Groq, Google, HuggingFace), vector DB (e.g. Pinecone), and file storage (S3, Azure, GCS). Keys are never exposed in logs or responses.
Does SimplyRAG support SSO or enterprise compliance?
Enterprise includes SSO, audit logs, and compliance-oriented options. Contact sales for dedicated infra, VPC, and custom contracts.
Use cases
Can SimplyRAG power customer support or replace our FAQ?
Yes. Index FAQs, help docs, and knowledge bases so users get natural-language answers with citations—deflecting tickets and speeding resolution.
Can I use SimplyRAG for an internal knowledge base?
Yes. Index policies, handbooks, and wikis into collections; anyone can ask in plain language and get cited answers.
Can I query my SQL database with natural language using SimplyRAG?
Yes. Connect PostgreSQL, MySQL, SQLite, and more; SimplyRAG reads your database schema so users can ask questions in plain language without writing SQL.
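How schema awareness enables this can be illustrated with a self-contained sketch. The `question_to_sql` stub below stands in for an LLM call that would receive the schema as context; the table, data, and helper names are invented for illustration, and SimplyRAG's internal approach is not documented here.

```python
import sqlite3

# In-memory demo database with a tiny orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "paid", 40.0), (2, "refunded", 15.0), (3, "paid", 25.0)],
)

SCHEMA = "orders(id, status, total)"  # what a model would see as context

def question_to_sql(question: str, schema: str) -> str:
    """Stub for an LLM call: the schema in context is what lets a model
    map a plain-language question onto valid SQL for THIS database."""
    if "paid" in question:
        return "SELECT SUM(total) FROM orders WHERE status = 'paid'"
    raise ValueError("unhandled question")

sql = question_to_sql("What is the total of paid orders?", SCHEMA)
total = conn.execute(sql).fetchone()[0]
```

The key point is the flow: the schema is retrieved and supplied as context, SQL is generated against it, and only the query result (not the whole database) reaches the user.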
Technical & integrations
What file types does SimplyRAG support?
PDF, text, and other document types. File buckets support S3, Azure Blob, and GCS.
Does SimplyRAG work with Pinecone / OpenAI / my own models?
Yes. BYOK supports Pinecone for vectors and multiple embedding providers (OpenAI, Groq, Google, HuggingFace). You keep control of keys and models.
Can I use SimplyRAG via API?
Yes. REST APIs for chat/query with streaming and collection selection. Power search, support bots, or internal tools from your app.
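SimplyRAG's streaming wire format is not specified here, so as an illustration only, this sketch parses a server-sent-events (SSE) style stream, a common shape for streaming chat APIs. The `data:` payloads and `[DONE]` sentinel are assumptions, not SimplyRAG's documented protocol.

```python
# Simulated raw SSE body as a client might receive it over HTTP.
RAW_STREAM = "data: Hel\n\ndata: lo\n\ndata: [DONE]\n\n"

def read_stream(raw: str):
    """Yield each streamed text fragment until the end-of-stream sentinel."""
    for event in raw.strip().split("\n\n"):
        payload = event.removeprefix("data: ")
        if payload == "[DONE]":
            break
        yield payload

# Concatenate the fragments into the full answer as they arrive.
answer = "".join(read_stream(RAW_STREAM))
```

In a real integration the raw body would come from an authenticated HTTP request, and each fragment could be rendered incrementally so users see the answer appear as it streams.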