Build, audit, and optimize RAG pipelines with Claude Code. Ideal for operations teams managing AI-driven knowledge retrieval systems. Connects to vector databases and LLM workflows for enhanced document processing and retrieval.
git clone https://github.com/floflo777/claude-rag-skills.git
Copy the install command above and run it in your terminal.
Launch Claude Code, Cursor, or your preferred AI coding agent.
Use the prompt template or examples below to test the skill.
Adapt the skill to your specific use case and workflow.
I need to build a RAG pipeline for [COMPANY] in the [INDUSTRY] sector. The pipeline should connect to [VECTOR_DATABASE] and process [DATA_TYPE] documents. Provide a step-by-step guide with Claude Code snippets to set up the pipeline, including data ingestion, vectorization, and retrieval components.
# RAG Pipeline Setup Guide for [COMPANY]
## Step 1: Data Ingestion
```python
# Claude Code snippet for data ingestion
# PyPDFLoader expects a single file; use PyPDFDirectoryLoader for a folder of PDFs
from langchain.document_loaders import PyPDFDirectoryLoader
loader = PyPDFDirectoryLoader("[DATA_PATH]/documents/")
documents = loader.load()
```
## Step 2: Vectorization
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
docs = text_splitter.split_documents(documents)
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
vectorstore = Chroma.from_documents(docs, embeddings)
```
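The `chunk_size`/`chunk_overlap` interaction above is the main tuning knob at this stage: overlapping chunks repeat a slice of text at each boundary so that a sentence split across two chunks still appears whole in at least one of them. A minimal sliding-window splitter (an illustrative stand-in, not LangChain's actual implementation) makes the arithmetic concrete:

```python
def split_text(text, chunk_size=10, overlap=4):
    """Sliding-window splitter: each chunk repeats the last
    `overlap` characters of the previous one."""
    chunks = []
    step = chunk_size - overlap  # how far the window advances each time
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = split_text("abcdefghijklmno", chunk_size=10, overlap=4)
# The tail of each chunk equals the head of the next, preserving context
# across the boundary.
```

With `chunk_size=1000` and `chunk_overlap=200` as in the snippet above, each chunk shares its last 200 characters with the next one, at the cost of roughly 20% storage and embedding overhead.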
## Step 3: Retrieval
```python
from langchain.chains import RetrievalQA
# Legacy LangChain exposes Anthropic models via the Anthropic wrapper
# (there is no Claude class in langchain.llms); requires ANTHROPIC_API_KEY
from langchain.llms import Anthropic
llm = Anthropic(temperature=0)
qa_chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=vectorstore.as_retriever())
```
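The `chain_type="stuff"` strategy above simply "stuffs" the retrieved chunks into a single prompt. A pure-Python mock (a conceptual sketch, not LangChain code; the retriever and LLM here are stubs) shows the flow the chain automates:

```python
def stuff_chain(question, retriever, llm, k=2):
    """Mock of the "stuff" strategy: retrieve top-k chunks,
    concatenate them into one prompt, and call the LLM once."""
    chunks = retriever(question)[:k]
    context = "\n\n".join(chunks)
    prompt = (
        f"Answer using only this context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm(prompt)

# Stub retriever and echo LLM, for illustration only
def retriever(query):
    return ["chunk A", "chunk B", "chunk C"]

answer = stuff_chain("What is X?", retriever, llm=lambda p: p, k=2)
```

Because every retrieved chunk lands in one prompt, "stuff" is the simplest chain type but can overflow the model's context window on large `k`; LangChain's `map_reduce` and `refine` chain types exist for that case.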
## Optimization Tips
- Ensure chunk size and overlap are optimized for your document type.
- Monitor and update embeddings regularly to maintain retrieval accuracy.
- Implement caching for frequently accessed documents to improve performance.
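For the caching tip, a memoized retrieval wrapper is often enough to start with. This sketch uses `functools.lru_cache` around a stubbed vector search (the search function and cache size are illustrative assumptions, not part of the skill):

```python
from functools import lru_cache

# Stub standing in for an expensive vector-store query (hypothetical)
def _vector_search(query):
    CALLS["count"] += 1
    return ["doc matching '%s'" % query]

CALLS = {"count": 0}  # tracks how often the real search runs

@lru_cache(maxsize=256)
def retrieve_cached(query):
    # lru_cache needs hashable values, so return a tuple of results
    return tuple(_vector_search(query))

retrieve_cached("pricing policy")
retrieve_cached("pricing policy")  # served from cache; no second search
```

Note that an in-process LRU cache only helps within one worker and holds results until eviction; for shared or invalidation-aware caching across processes, a store such as Redis keyed on the normalized query is the usual next step.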