
6 Apps tagged with “LLM”
E2B
Build AI agents with safe, ephemeral sandboxes
Secure execution environments for AI-powered apps
E2B is an open-source platform that allows developers to create and run AI agents and applications in secure, isolated sandboxes. It provides ephemeral, serverless environments where code can be executed safely, enabling scalable and secure AI-powered workflows.
With E2B, you can build AI apps that can execute untrusted code, automate workflows, and integrate with external APIs without worrying about security or infrastructure management.
Website: E2B.dev
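As a concrete illustration of running untrusted code in a sandbox, here is a minimal sketch in Python. It assumes the e2b-code-interpreter SDK and an E2B_API_KEY in the environment; package and method names may differ between SDK versions.

# Minimal sketch, not an official example: start an ephemeral sandbox,
# run a snippet of code inside it, then tear it down.
from e2b_code_interpreter import Sandbox          # assumed SDK package name

sandbox = Sandbox()                               # boots an isolated, ephemeral sandbox
execution = sandbox.run_code("print(2 ** 10)")    # the code runs inside the sandbox, not locally
print(execution.logs.stdout)                      # stdout captured from the sandbox
sandbox.kill()                                    # discard the sandbox when finished
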
LibreChat
Self‑host your own AI assistant
LibreChat is a powerful, open-source, self-hosted AI chat interface compatible with OpenAI, Azure, Google PaLM, Anthropic, Hugging Face, and more. With a sleek UI and robust backend, it enables users to privately run their own chatbot with features like multi-user support, chat history, prompt management, file uploads, plugin support, and advanced system prompts. LibreChat puts the full power of modern LLMs in your own hands, with no hosted chat service required.
Website: librechat.ai
LiteLLM
LLM Gateway for authentication, load balancing, and spend tracking across 100+ LLMs.
Lightweight gateway for managing and unifying LLM APIs
LiteLLM is an open-source gateway that lets you call 100+ LLM APIs using the OpenAI API format. It simplifies authentication, handles load balancing, and tracks usage and costs across providers. Designed for developers who want a unified, efficient way to integrate LLMs into their applications.
Website: LiteLLM
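The unified call format is easiest to see in code. Below is a minimal sketch assuming the litellm Python package and a provider API key (e.g. OPENAI_API_KEY) in the environment; the model names are placeholders.

from litellm import completion

# The same OpenAI-style call shape works across providers; only the model string changes,
# e.g. "anthropic/claude-3-haiku-20240307" would route to Anthropic with an identical call.
response = completion(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What does an LLM gateway do?"}],
)
print(response.choices[0].message.content)  # responses come back in the OpenAI format
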
MCP Context Forge
A customizable context generation server for MCP.
MCP Context Forge is a server for the Model Context Protocol designed to dynamically generate, modify, and manage contextual information for LLM agents. It allows developers to customize how context is fetched, combined, and delivered to AI models, enabling more relevant and adaptive responses.
Website: MCP Context Forge
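Because the gateway speaks the Model Context Protocol, a standard MCP client can ask it which tools it exposes. Here is a minimal sketch using the official mcp Python SDK over its SSE transport; the URL, port, and absence of authentication are assumptions about a local deployment.

import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def list_gateway_tools(url="http://localhost:4444/sse"):  # hypothetical local endpoint
    async with sse_client(url) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()            # MCP handshake
            result = await session.list_tools()   # tools the gateway exposes to agents
            for tool in result.tools:
                print(tool.name, "-", tool.description)

asyncio.run(list_gateway_tools())
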
Open WebUI
A sleek, self-hosted interface for LLMs
Open WebUI is an open-source, self-hostable web-based UI designed to work seamlessly with local or remote Large Language Models (LLMs). It provides a privacy-respecting, responsive, and extensible interface for running your own AI assistant, whether for personal productivity, creative tasks, or private chat. It works with OpenAI-compatible APIs (such as OpenRouter or LocalAI) and offers session memory, file uploads, prompt customization, and multi-user support.
Website: openwebui.com
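Here "OpenAI-compatible" means any backend that answers standard Chat Completions requests can serve as a model provider. The sketch below shows such a request with the openai Python client; the base URL and model name are assumptions for a locally running backend, and a hosted provider would need its real URL and key.

from openai import OpenAI

# Point the standard client at a local OpenAI-compatible server (e.g. LocalAI).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")
reply = client.chat.completions.create(
    model="local-model",  # whatever model name the backend serves
    messages=[{"role": "user", "content": "Hello from a self-hosted stack!"}],
)
print(reply.choices[0].message.content)
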
Text Generation WebUI
Self‑host large language models with a powerful web interface
Text Generation WebUI is an extensible, locally hosted interface for running large language models (LLMs) on your own machine or server. Designed for flexibility, it supports a wide range of models and backends including Transformers, llama.cpp, ExLlama, GGUF, KoboldAI, and more. The interface offers chat modes, text completions, roleplay, character memory, plugin tools, and powerful fine-tuning options. Whether you're a researcher, developer, or AI enthusiast, this tool puts model inference fully in your hands.
Website: GitHub Project Page