Nostr RAG (Retrieval-Augmented Generation)
This module provides classes for building Retrieval-Augmented Generation (RAG) agents that can store and retrieve information from the Nostr network.
Usage
import asyncio
from agentstr import NostrRAG

# Note: To use NostrRAG, you must install the required dependencies:
#   pip install "agentstr-sdk[rag]"
# You will also need an LLM API key.

# Create a RAG agent, connecting to Nostr relays and using an LLM.
rag_agent = NostrRAG()

async def main():
    # Ask a question. The agent will build a knowledge base from
    # recent Nostr posts related to the query and generate an answer.
    question = "What's new with AI?"
    answer = await rag_agent.query(question, limit=5)
    print(f"Question: {question}")
    print(f"Answer: {answer}")

if __name__ == "__main__":
    asyncio.run(main())
Note
For a complete, working example, check out the Nostr RAG example.
Environment Variables
NostrRAG reads the following environment variables through its underlying components. Each is used only when the corresponding constructor parameter is not provided:
- NOSTR_RELAYS: Comma-separated list of relay URLs to connect to.
- NOSTR_NSEC: Nostr private key in 'nsec' format, used for authenticated operations.
- LLM_BASE_URL: Base URL of the LLM API endpoint.
- LLM_API_KEY: API key for accessing the LLM service.
- LLM_MODEL_NAME: Name of the LLM model to use for chat interactions.
Note
You can override these environment variables by passing explicit parameters to the NostrRAG constructor, such as relays, private_key, llm_base_url, llm_api_key, or llm_model_name.
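For example, a minimal sketch that passes explicit parameters instead of relying on the environment (the relay URL, keys, and model name below are placeholders):

import asyncio
from agentstr import NostrRAG

rag_agent = NostrRAG(
    relays=["wss://relay.damus.io"],           # instead of NOSTR_RELAYS
    private_key="nsec1...",                    # instead of NOSTR_NSEC (placeholder)
    llm_base_url="https://api.openai.com/v1",  # instead of LLM_BASE_URL
    llm_api_key="sk-...",                      # instead of LLM_API_KEY (placeholder)
    llm_model_name="gpt-3.5-turbo",            # instead of LLM_MODEL_NAME
)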
Reference
pydantic model agentstr.agents.nostr_rag.Author

Bases: BaseModel

JSON schema:

{
  "title": "Author",
  "type": "object",
  "properties": {
    "pubkey": {"title": "Pubkey", "type": "string"},
    "name": {
      "anyOf": [{"type": "string"}, {"type": "null"}],
      "default": null,
      "title": "Name"
    }
  },
  "required": ["pubkey"]
}

Fields:
- pubkey (str) – required
- name (str | None) – defaults to None
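For instance, a small sketch of constructing an Author, assuming Pydantic v2 (as the schema above suggests); the pubkey is a placeholder:

from agentstr.agents.nostr_rag import Author

# Only pubkey is required; name is optional and defaults to None.
author = Author(pubkey="npub1...", name="Alice")
print(author.model_dump())  # {'pubkey': 'npub1...', 'name': 'Alice'}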
 
class agentstr.agents.nostr_rag.NostrRAG(nostr_client: NostrClient | None = None, vector_store=None, relays: list[str] | None = None, private_key: str | None = None, nwc_str: str | None = None, embeddings=None, llm=None, llm_model_name=None, llm_base_url=None, llm_api_key=None, known_authors: list[Author] | None = None)

Bases: object

Retrieval-Augmented Generation (RAG) system for Nostr events.

This class fetches Nostr events, builds a vector store knowledge base, and enables semantic search and question answering over the indexed content.

Examples

Simple question answering over recent posts:

import asyncio
from langchain_openai import ChatOpenAI
from agentstr import NostrRAG

relays = ["wss://relay.damus.io"]
rag = NostrRAG(relays=relays, llm=ChatOpenAI(model_name="gpt-3.5-turbo"))

async def main():
    answer = await rag.query(question="What's new with Bitcoin?", limit=8)
    print(answer)

asyncio.run(main())

Full runnable script: rag.py

__init__(nostr_client: NostrClient | None = None, vector_store=None, relays: list[str] | None = None, private_key: str | None = None, nwc_str: str | None = None, embeddings=None, llm=None, llm_model_name=None, llm_base_url=None, llm_api_key=None, known_authors: list[Author] | None = None)
Initialize the NostrRAG system.

Parameters:
- nostr_client – An existing NostrClient instance (optional).
- vector_store – An existing vector store instance (optional).
- relays – List of Nostr relay URLs (used if no client is provided).
- private_key – Nostr private key in 'nsec' format (used if no client is provided).
- nwc_str – Nostr Wallet Connect string for payments (optional).
- embeddings – Embedding model for vectorizing documents (defaults to FakeEmbeddings with size 256).
- llm – Language model (optional).
- llm_model_name – Name of the language model to use (optional).
- llm_base_url – Base URL for the language model (optional).
- llm_api_key – API key for the language model (optional).
- known_authors – Known Author profiles to match against when querying by author (optional).
 
Raises:
- ImportError – If LangChain is not installed.
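As an illustration, a sketch of wiring in explicit LangChain components rather than the defaults (assumes the langchain-openai package is installed; the relay URL and model name are placeholders):

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from agentstr import NostrRAG

rag = NostrRAG(
    relays=["wss://relay.damus.io"],
    embeddings=OpenAIEmbeddings(),               # replaces the FakeEmbeddings default
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),  # passed instead of llm_model_name, etc.
)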
 
async build_knowledge_base(question: str, limit: int = 10, query_type: Literal['hashtags', 'authors'] = 'hashtags') → list[dict]

Build a knowledge base from Nostr events relevant to the question.

Parameters:
- question – The user's question, used to guide hashtag or author selection.
- limit – Maximum number of posts to retrieve.
- query_type – Type of query to use (hashtags or authors).

Returns:
- List of retrieved events.
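A brief sketch of calling it directly (assumes the LLM_* environment variables are set, since hashtag selection uses the LLM; the relay URL is a placeholder):

import asyncio
from agentstr import NostrRAG

rag = NostrRAG(relays=["wss://relay.damus.io"])

async def main():
    # Fetch up to 10 posts under hashtags the LLM judges relevant to the question.
    events = await rag.build_knowledge_base("What's new with Bitcoin?", limit=10)
    print(f"Indexed {len(events)} events")

asyncio.run(main())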
 
async retrieve(question: str, limit: int = 5, query_type: Literal['hashtags', 'authors'] = 'hashtags') → list[Document]

Retrieve relevant documents from the knowledge base.

Parameters:
- question – The user's question.
- limit – Maximum number of documents to retrieve.
- query_type – Type of query to use (hashtags or authors).

Returns:
- List of retrieved documents.
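For example, a sketch that inspects the retrieved LangChain Document objects (page_content holds the post text; LLM configuration again comes from the environment):

import asyncio
from agentstr import NostrRAG

rag = NostrRAG(relays=["wss://relay.damus.io"])

async def main():
    docs = await rag.retrieve("What's new with Bitcoin?", limit=3)
    for doc in docs:
        print(doc.page_content[:80])  # first 80 characters of each post

asyncio.run(main())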
 
async query(question: str, limit: int = 5, query_type: Literal['hashtags', 'authors'] = 'hashtags') → str

Ask a question using the knowledge base.

Parameters:
- question – The user's question.
- limit – Number of documents to retrieve for context.
- query_type – Type of query to use (hashtags or authors).

Returns:
- The generated response.
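A hedged sketch of querying by author instead of hashtags; how known_authors is matched is an assumption inferred from the constructor signature, and the pubkey is a placeholder:

import asyncio
from agentstr import NostrRAG
from agentstr.agents.nostr_rag import Author

authors = [Author(pubkey="npub1...", name="Alice")]
rag = NostrRAG(relays=["wss://relay.damus.io"], known_authors=authors)

async def main():
    # query_type="authors" retrieves posts from the known authors
    # rather than from LLM-selected hashtags.
    answer = await rag.query("What has Alice posted about Nostr?", query_type="authors")
    print(answer)

asyncio.run(main())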
 
 
See Also
- agentstr.agents.nostr_rag.NostrRAG – the core class for Nostr RAG.