
Awesome-llm-apps

by Shubhamsaboo

Production-Ready LLM Applications with AI Agents & RAG

Comprehensive collection of working LLM applications featuring AI agents and RAG implementations across major AI providers and open-source models.

88,282 Stars
12,644 Forks
88,282 Watchers
11 Issues

About This Project

This repository serves as a practical blueprint for developers building production-grade applications powered by Large Language Models. It bridges the gap between theoretical AI concepts and real-world implementation by providing fully functional examples that demonstrate how to integrate AI agents, Retrieval-Augmented Generation (RAG), and various LLM providers into working applications.

The collection showcases integration patterns for multiple AI platforms including OpenAI's GPT models, Anthropic's Claude, Google's Gemini, and various open-source alternatives. Each example is crafted to demonstrate best practices in prompt engineering, context management, and efficient API usage, making it an invaluable resource for teams looking to accelerate their AI development timeline.

What sets this repository apart is its focus on practical, deployable solutions rather than toy examples. Developers can explore different architectural approaches to common LLM challenges such as maintaining conversation context, implementing semantic search, orchestrating multi-agent workflows, and optimizing response quality through retrieval augmentation.
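One of the recurring challenges mentioned above, maintaining conversation context, can be sketched as a rolling message history trimmed to a token budget. This is a minimal illustration, not code from the repository; the class and its heuristic token estimate are hypothetical.

```python
# Illustrative sketch: keep a rolling chat history and drop the oldest
# turns once a rough token budget is exceeded. Names are hypothetical.

class ConversationMemory:
    def __init__(self, max_tokens: int = 3000):
        self.max_tokens = max_tokens
        self.messages: list[dict] = []

    def _approx_tokens(self, text: str) -> int:
        # Crude heuristic: roughly 4 characters per token for English text.
        return max(1, len(text) // 4)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        self._trim()

    def _trim(self) -> None:
        # Drop the oldest turns until the history fits the budget,
        # always keeping at least the most recent message.
        while len(self.messages) > 1 and sum(
            self._approx_tokens(m["content"]) for m in self.messages
        ) > self.max_tokens:
            self.messages.pop(0)

memory = ConversationMemory(max_tokens=50)
memory.add("user", "Hello!")
memory.add("assistant", "Hi, how can I help?")
memory.add("user", "x" * 400)  # a long turn that forces trimming
print(len(memory.messages))  # → 1
```

Production implementations typically use a real tokenizer and may summarize dropped turns rather than discard them, but the shape of the solution is the same.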

With over 88,000 stars and an active community, this project has become a go-to reference for Python developers entering the LLM application space. The diverse range of examples allows teams to quickly prototype ideas, compare different AI providers, and understand the trade-offs between various implementation strategies.

Key Features

  • Ready-to-run examples for OpenAI, Anthropic, Gemini, and open-source LLMs
  • Complete RAG implementations with vector databases and semantic search
  • AI agent architectures for autonomous task execution and decision-making
  • Multi-provider integration patterns for flexible LLM deployment
  • Production-focused code with error handling and best practices
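The retrieval step at the heart of the RAG examples can be illustrated with a toy version: score documents against a query and return the best matches to include in the prompt. The repository's implementations use embedding models and vector databases; this sketch substitutes bag-of-words cosine similarity so it runs with no dependencies, and all names are illustrative.

```python
# Toy retrieval step for RAG: rank documents by bag-of-words cosine
# similarity to the query. Real RAG pipelines use embeddings and a
# vector database; this stand-in is for illustration only.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = Counter(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: cosine(q, Counter(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

docs = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping takes 3 to 5 business days within the US.",
]
print(retrieve("how long does shipping take", docs))
```

The retrieved passages are then concatenated into the LLM prompt, grounding the answer in the knowledge base rather than the model's parametric memory.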

How You Can Use It

1. Building conversational AI chatbots with memory and context awareness
2. Implementing document Q&A systems using RAG for enterprise knowledge bases
3. Creating multi-agent workflows for complex task automation and orchestration
4. Prototyping and comparing different LLM providers for specific use cases
5. Learning production patterns for integrating AI capabilities into existing applications
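The multi-agent workflow pattern from the list above reduces to a router that dispatches each task to a specialist agent and collects the results. In this hypothetical sketch, plain functions stand in for LLM-backed agents; none of these names come from the repository.

```python
# Minimal multi-agent orchestration sketch: a registry of specialist
# "agents" (plain functions standing in for LLM calls) and a router
# that executes a plan step by step. All names are illustrative.

def research_agent(task: str) -> str:
    return f"[research] gathered notes on: {task}"

def writer_agent(task: str) -> str:
    return f"[writer] drafted text for: {task}"

AGENTS = {"research": research_agent, "write": writer_agent}

def orchestrate(plan: list[tuple[str, str]]) -> list[str]:
    # Each step is (agent_name, payload); results are collected in
    # order so a later agent could consume an earlier agent's output.
    results = []
    for name, payload in plan:
        results.append(AGENTS[name](payload))
    return results

plan = [("research", "vector databases"), ("write", "intro paragraph")]
for step in orchestrate(plan):
    print(step)
```

Real orchestration frameworks add tool use, shared state, and retries on top of this loop, but the dispatch structure is the common core.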

Who Is This For?

Python developers, AI engineers, and technical teams looking to build LLM-powered applications with practical, production-ready code examples