
Product Overview
AnythingLLM is a full-stack application that lets you build a private ChatGPT from off-the-shelf commercial large language models (LLMs) or popular open-source models, combined with a vector database solution, with no vendor lock-in: you can run it locally or host it remotely, and chat intelligently with any document you provide.
AnythingLLM organizes your documents into objects called workspaces. A workspace functions much like a conversation thread, with the added containerization of documents. Workspaces can share documents, but their contents do not interfere with or pollute each other, so the context of each workspace stays clean.
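Under the hood this is a retrieval-augmented generation (RAG) loop: documents are chunked and embedded into the vector database once, and each question retrieves only the most relevant chunks from the current workspace before the LLM is called. The sketch below restates that flow with an in-memory store; the `Embedder` and `ChatModel` types stand in for whichever providers your instance is configured with, and none of these names come from the AnythingLLM codebase.

```ts
// Minimal sketch of the retrieve-then-generate loop a workspace performs.
// Embedder and ChatModel are placeholders for whatever embedding model and
// LLM provider the instance is configured with; names here are hypothetical.
type Embedder = (text: string) => Promise<number[]>;
type ChatModel = (prompt: string) => Promise<string>;
type Chunk = { text: string; vector: number[] };

const cosine = (a: number[], b: number[]): number => {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
};

// One "workspace": an isolated set of embedded chunks plus its own chat context.
class Workspace {
  private chunks: Chunk[] = [];
  constructor(private embed: Embedder, private llm: ChatModel) {}

  // Embed a document once; the vectors are reused for every later question.
  async addDocument(text: string, chunkSize = 1000): Promise<void> {
    for (let i = 0; i < text.length; i += chunkSize) {
      const piece = text.slice(i, i + chunkSize);
      this.chunks.push({ text: piece, vector: await this.embed(piece) });
    }
  }

  // Retrieve the top-k most similar chunks and hand them to the LLM as context.
  async query(question: string, k = 4): Promise<string> {
    const qVec = await this.embed(question);
    const context = this.chunks
      .map((c) => ({ c, score: cosine(qVec, c.vector) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, k)
      .map(({ c }) => c.text)
      .join("\n---\n");
    return this.llm(`Answer using only this context:\n${context}\n\nQuestion: ${question}`);
  }
}
```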
Some Cool Features of AnythingLLM
- 🆕 Custom AI Agent
- 🆕 No-Code AI Agent Builder
- 🖼️ Multi-modal support (both closed-source and open-source LLMs!)
- 👤 Multi-user instance support and permission management (Docker version only)
- 🦾 AI Agents inside your workspace (browse the web, run code, etc.)
- 💬 Customizable embeddable chat window for your website
- 📖 Support for multiple document types (PDF, TXT, DOCX, etc.)
- Manage documents in vector databases through a simple user interface
- Two chat modes, conversation and query: conversation mode keeps a record of previous exchanges, while query mode does simple question-and-answer over your documents (see the API sketch after this feature list)
- In-chat citations: the content of the referenced document is provided in the chat
- 100% cloud deployment ready.
- "Deploying your own LLM model".
- Manage very large documents with high efficiency and low consumption. Embedding a huge document or transcript only once. Save 90% over other document chatbot solutions.
- Full developer API for custom integrations!
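As a concrete example of the developer API and the two chat modes, the snippet below creates a workspace and then asks it a question in query mode. The base URL, port, and response shapes are assumptions; confirm the exact endpoint paths and fields against the API documentation served by your own instance.

```ts
// Hedged sketch of driving an AnythingLLM instance over its developer API.
// BASE_URL, the port, and the response shapes are assumptions -- verify them
// against the API docs exposed by your own instance before relying on them.
const BASE_URL = "http://localhost:3001/api/v1";
const API_KEY = process.env.ANYTHINGLLM_API_KEY ?? "";

const headers = {
  "Content-Type": "application/json",
  Authorization: `Bearer ${API_KEY}`,
};

// Create a workspace that will hold a set of documents.
async function createWorkspace(name: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/workspace/new`, {
    method: "POST",
    headers,
    body: JSON.stringify({ name }),
  });
  const data = await res.json();
  return data.workspace.slug; // the slug identifies the workspace in later calls
}

// Ask a question in either "chat" (keeps history) or "query" (stateless Q&A) mode.
async function ask(slug: string, message: string, mode: "chat" | "query") {
  const res = await fetch(`${BASE_URL}/workspace/${slug}/chat`, {
    method: "POST",
    headers,
    body: JSON.stringify({ message, mode }),
  });
  return res.json(); // response includes the answer and in-chat citations
}

(async () => {
  const slug = await createWorkspace("quarterly-reports");
  console.log(await ask(slug, "Summarize the main revenue drivers.", "query"));
})();
```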
Supported LLMs, embedding models, transcription models and vector databases
Supported LLMs:
- Any open-source model compatible with llama.cpp
- OpenAI
- OpenAI (General)
- Azure OpenAI
- Anthropic
- Google Gemini Pro
- Hugging Face (Chat Model)
- Ollama (Chat Model)
- LM Studio (All Models)
- LocalAI (All Models)
- Together AI (Chat Model)
- Fireworks AI (Chat Model)
- Perplexity (Chat Model)
- OpenRouter (Chat Model)
- Novita AI (Chat Model)
- Mistral
- Groq
- Cohere
- KoboldCPP
- PPIO (Chat Model)
Supported Embedding Models:
- AnythingLLM Native Embedder (default)
- OpenAI
- Azure OpenAI
- LocalAI (All)
- Ollama (All)
- LM Studio (All)
- Cohere
Supported transcription models:
- AnythingLLM built-in (default)
- OpenAI
TTS (Text to Speech) support:
- Browser built-in (default)
- PiperTTS (local) – runs in your browser
- OpenAI Text-to-Speech
- ElevenLabs
- Any OpenAI-compatible TTS service
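"Any OpenAI-compatible TTS service" means a service that exposes the same speech endpoint shape as OpenAI's text-to-speech API. The sketch below shows that shape against a self-hosted base URL; the URL, model, and voice names are placeholders for whatever your service provides.

```ts
// Sketch of the request an OpenAI-compatible TTS service accepts.
// The base URL, model, and voice are placeholders; the endpoint path and
// body fields follow OpenAI's published text-to-speech API.
import { writeFile } from "node:fs/promises";

async function synthesize(text: string): Promise<void> {
  const baseUrl = "http://localhost:8880/v1"; // any OpenAI-compatible server
  const res = await fetch(`${baseUrl}/audio/speech`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TTS_API_KEY ?? "not-needed"}`,
    },
    body: JSON.stringify({ model: "tts-1", voice: "alloy", input: text }),
  });
  // The response body is raw audio (mp3 by default in OpenAI's API).
  await writeFile("speech.mp3", Buffer.from(await res.arrayBuffer()));
}

synthesize("Hello from AnythingLLM.");
```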
STT (Speech to Text) support:
- Browser built-in (default)
Supported vector databases:
Technology Overview
This monorepo consists of four main parts:
- frontend: A ViteJS + React frontend you can run to easily create and manage all the content the LLM can use.
- server: A NodeJS Express server that handles all interactions, including vector database management and LLM calls.
- docker: Docker instructions and build process, plus information on building from source.
- collector: A NodeJS Express server that processes and parses documents from the UI.
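To make the division of labour concrete, document ingestion crosses three of these parts: the frontend uploads a file, the collector parses it into plain text, and the server chunks, embeds, and writes it into the vector database for the target workspace. The function names below are purely illustrative; the real interfaces live in the collector and server packages.

```ts
// Illustrative (not real) function names sketching the ingestion hand-off.
type ParsedDocument = { title: string; pageContent: string; metadata: Record<string, string> };

// collector: turn an uploaded file (PDF, DOCX, TXT, ...) into plain text.
async function parseUpload(file: Buffer, filename: string): Promise<ParsedDocument> {
  // Real parsing is format-specific; this sketch pretends everything is UTF-8 text.
  return { title: filename, pageContent: file.toString("utf8"), metadata: { source: filename } };
}

// server: chunk, embed, and store the parsed text under a workspace.
async function embedIntoWorkspace(doc: ParsedDocument, workspaceSlug: string): Promise<void> {
  console.log(`Embedding "${doc.title}" (${doc.pageContent.length} chars) into ${workspaceSlug}`);
}

// frontend-triggered flow: upload -> parse -> embed.
async function ingest(file: Buffer, filename: string, workspaceSlug: string): Promise<void> {
  await embedIntoWorkspace(await parseUpload(file, filename), workspaceSlug);
}

ingest(Buffer.from("Example document contents."), "example.txt", "quarterly-reports");
```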