This repository provides programs to build Retrieval Augmented Generation (RAG) pipelines for generative AI with LlamaIndex, Deep Lake, and Pinecone, leveraging OpenAI and Hugging Face models for generation and evaluation.

RAG-Driven Generative AI, First Edition

This is the code repository for RAG-Driven Generative AI, First Edition, published by Packt.

Last updated: December 5, 2024.

See the CHANGELOG.md for details.

Build custom retrieval augmented generation pipelines with LlamaIndex, Deep Lake, and Pinecone

Denis Rothman

Free PDF · Graphic Bundle · Amazon

About the book

RAG-Driven Generative AI, First Edition

RAG-Driven Generative AI provides a roadmap for building effective LLM, computer vision, and generative AI systems that balance performance and costs. This book offers a detailed exploration of RAG and how to design, manage, and control multimodal AI pipelines. By connecting outputs to traceable source documents, RAG improves output accuracy and contextual relevance, offering a dynamic approach to managing large volumes of information.

This AI book also shows you how to build a RAG framework, providing practical knowledge on vector stores, chunking, indexing, and ranking. You'll discover techniques to optimize your project's performance and better understand your data, including using adaptive RAG and human feedback to refine retrieval accuracy, balancing RAG with fine-tuning, implementing dynamic RAG to enhance real-time decision-making, and visualizing complex data with knowledge graphs.

You'll be exposed to a hands-on blend of frameworks like LlamaIndex and Deep Lake, vector databases such as Pinecone and Chroma, and models from Hugging Face and OpenAI. By the end of this book, you will have acquired the skills to implement intelligent solutions, keeping you competitive in fields ranging from production to customer service across any project.
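As a hands-on taste of the index-based pipelines the book builds, here is a minimal sketch of a RAG query with LlamaIndex and OpenAI. It assumes llama-index >= 0.10 is installed, OPENAI_API_KEY is set in the environment, and a local data/ folder holds source documents; the book's notebooks go much further, with Deep Lake and Pinecone vector stores, chunking strategies, indexing, and ranking.

```python
# Minimal index-based RAG sketch (assumptions: llama-index >= 0.10 installed,
# OPENAI_API_KEY set in the environment, and a local "data/" folder of documents).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# 1. Collect and prepare: load the source documents.
documents = SimpleDirectoryReader("data").load_data()

# 2. Embed and index: build an in-memory vector store index (OpenAI embeddings by default).
index = VectorStoreIndex.from_documents(documents)

# 3. Retrieve and generate: answer a query grounded in the retrieved chunks.
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("What are the main components of a RAG pipeline?")

print(response)  # generated answer
for node in response.source_nodes:  # traceable source chunks used for the answer
    print(node.score, node.node.metadata)
```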

Key Learnings

  • Scale RAG pipelines to handle large datasets efficiently
  • Employ techniques that minimize hallucinations and ensure accurate responses
  • Implement indexing techniques to improve AI accuracy with traceable and transparent outputs
  • Customize and scale RAG-driven generative AI systems across domains
  • Find out how to use Deep Lake and Pinecone for efficient and fast data retrieval
  • Control and build robust generative AI systems grounded in real-world data
  • Combine text and image data for richer, more informative AI responses

Chapters

This repo is continually updated and upgraded.
📝 For details on updates and improvements, see the Changelog.
🐬 New bonus notebooks are available to explore; see the Changelog.
🚩 If you see anything that doesn't run as expected, raise an issue, and we'll work on it!

Platforms

You can run the notebooks directly on Google Colab or Kaggle; each notebook listed below has "Open in Colab" and "Open in Kaggle" badges in the repository.

Part I: The RAG Framework

Chapter 1: Why Retrieval Augmented Generation (RAG)?
  • RAG_Overview.ipynb
  • 🐬 RAG_Overview_Grok.ipynb (RAG overview with Elon Musk's xAI grok-beta LLM)

Chapter 2: RAG Embeddings and Vector Stores with Deep Lake and OpenAI
  • 1_Data_collection_preparation.ipynb
  • 2_Embeddings_vector_store.ipynb
  • 3_Augmented_Generation.ipynb
  • 🐬 3_Augmented_Generation_o1_preview.ipynb (RAG with the OpenAI o1-preview reasoning model API)

Chapter 3: Building Index-based RAG with LlamaIndex, Deep Lake, and OpenAI
  • Deep_Lake_LlamaIndex_OpenAI_RAG.ipynb

Chapter 4: Multimodal Modular RAG for Drone Technology
  • Multimodal_Modular_RAG_Drones.ipynb

Chapter 5: Boosting RAG Performance with Expert Human Feedback
  • Adaptive_RAG.ipynb

Chapter 6: Scaling RAG Bank Customer Data with Pinecone
  • Pipeline_1_Collecting_and_preparing_the_dataset.ipynb
  • Pipeline_2_Scaling_a_Pinecone_Index.ipynb
  • Pipeline_3_RAG_Generative_AI.ipynb

Chapter 7: Building Scalable Knowledge Graph-based RAG with Wikipedia and LlamaIndex
  • Tree_2_Graph.ipynb
  • Wikipedia_API.ipynb
  • Knowledge_Graph__Deep_Lake_LlamaIndex_OpenAI_RAG.ipynb

Chapter 8: Dynamic RAG with Chroma and Hugging Face Llama
  • Dynamic_RAG_with_Chroma_and_Hugging_Face.ipynb

Chapter 9: Empowering AI Models: Fine-Tuning RAG Data and Human Feedback
  • Fine_tuning_OpenAI_GPT-4o-mini.ipynb

Chapter 10: RAG for Video Stock Production with Pinecone and OpenAI
  • Video_dataset_visualization.ipynb
  • Pipeline_1_Generator_and_Commentator.ipynb
  • Pipeline_2_The_Vector_Store_Administrator.ipynb
  • Pipeline_3_The_Video_Expert.ipynb

Requirements for this book

You should have basic Natural Language Processing (NLP) knowledge and some experience with Python. Additionally, most of the programs in this book are provided as Jupyter notebooks. To run them, all you need is a free Google Gmail account, which allows you to execute the notebooks on Google Colaboratory's free virtual machine (VM). You will also need to generate API tokens for OpenAI, Activeloop, and Pinecone. You may need to install additional modules while running the notebooks, or you can simply install the packages listed in requirements_01.txt in the environment you create; a minimal setup sketch follows.
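For example, a setup cell for Colab or Jupyter might look like the sketch below. This is an illustration rather than the book's exact cell: the environment variable names OPENAI_API_KEY, ACTIVELOOP_TOKEN, and PINECONE_API_KEY are the ones these client libraries commonly read, but each notebook's own setup cell is the reference.

```python
# Minimal Colab/Jupyter setup sketch (an illustration, not the book's exact cell).
# The requirements can be installed first in a notebook cell, e.g.:
#   %pip install -r requirements_01.txt
import os
from getpass import getpass

# Assumed environment variable names read by the OpenAI, Deep Lake (Activeloop),
# and Pinecone clients; check each notebook's setup cell for the names it uses.
os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key: ")
os.environ["ACTIVELOOP_TOKEN"] = getpass("Activeloop token: ")
os.environ["PINECONE_API_KEY"] = getpass("Pinecone API key: ")
```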

Some of the modules are as follows:

Module        Version
deeplake      3.9.18 (with Pillow)
openai        1.40.3
transformers  4.41.2
numpy         >=1.24.1
deepspeed     0.10.1
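If you build your own environment instead of using Colab, a quick check along these lines can confirm that the installed packages match the versions above (a sketch; the expected versions will change as the repo evolves).

```python
# Sanity check of installed package versions against the table above
# (a sketch; adjust the expected versions as the repo's requirements evolve).
from importlib.metadata import version, PackageNotFoundError

expected = {
    "deeplake": "3.9.18",
    "openai": "1.40.3",
    "transformers": "4.41.2",
    "numpy": ">=1.24.1",
    "deepspeed": "0.10.1",
}

for package, pin in expected.items():
    try:
        print(f"{package:<12} installed={version(package)}  expected={pin}")
    except PackageNotFoundError:
        print(f"{package:<12} not installed  expected={pin}")
```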

Note: This GitHub repository will be continually maintained and updated as the platforms evolve. As such, the versions in this repo will evolve over time so that you always have access to state-of-the-art programs!

Get to know the author

Denis Rothman graduated from Sorbonne University and Paris-Cité University, designing one of the first patented encoding and embedding systems and teaching at Paris-I Panthéon-Sorbonne. He authored one of the first patented word-encoding systems and AI bots/robots. He began his career delivering a Natural Language Processing (NLP) chatbot for Moët et Chandon (LVMH) and an AI tactical defense optimizer for Airbus (formerly Aerospatiale). Denis then authored an AI optimizer for IBM and luxury brands, leading to an Advanced Planning and Scheduling (APS) solution used worldwide. LinkedIn
