A fully open-source alternative to NotebookLM, backed by LlamaCloud.
This project uses uv to manage dependencies. Before you begin, make sure you have uv installed.
On macOS and Linux:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

On Windows:

```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```

For more install options, see uv's official documentation.
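Once installed, you can verify that uv is on your PATH:

```bash
uv --version
```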
1. Clone the Repository
```bash
git clone https://github.com/run-llama/notebookllama
cd notebookllama/
```

2. Install Dependencies
```bash
uv sync
```

3. Configure API Keys
First, create your .env file by renaming the example file:
```bash
mv .env.example .env
```
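If you would rather keep the example file for reference, copy it instead of renaming:

```bash
cp .env.example .env
```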
Next, open the .env file and add your API keys:

- OPENAI_API_KEY: find it on the OpenAI Platform
- ELEVENLABS_API_KEY: find it in your ElevenLabs Settings
- LLAMACLOUD_API_KEY: find it on the LlamaCloud Dashboard
🌍 Regional Support: LlamaCloud operates in multiple regions. If you're using a European region, configure it in your .env file:

- For North America: This is the default region; no configuration necessary.
- For Europe (EU): Uncomment and set `LLAMACLOUD_REGION="eu"`
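For reference, a filled-in .env might look something like the sketch below. The values are placeholders, not real keys; your .env.example lists the exact variables expected.

```
OPENAI_API_KEY="sk-..."
ELEVENLABS_API_KEY="..."
LLAMACLOUD_API_KEY="llx-..."
# LLAMACLOUD_REGION="eu"   # uncomment only for EU-region LlamaCloud projects
```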
4. Activate the Virtual Environment
On macOS and Linux:

```bash
source .venv/bin/activate
```

On Windows:

```powershell
.\.venv\Scripts\activate
```
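To confirm the environment is active, check that the interpreter now resolves inside .venv (use `where python` on Windows):

```bash
which python
# should print a path ending in .venv/bin/python
```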
5. Create LlamaCloud Agent & Pipeline
You will now execute two scripts to configure your backend agents and pipelines.
First, create the data extraction agent:
```bash
uv run tools/create_llama_extract_agent.py
```

Next, run the interactive setup wizard to configure your index pipeline.
⚡ Quick Start (Default OpenAI): For the fastest setup, select "With Default Settings" when prompted. This will automatically create a pipeline using OpenAI's text-embedding-3-small embedding model.
🧠 Advanced (Custom Embedding Models): To use a different embedding model, select "With Custom Settings" and follow the on-screen instructions.
Run the wizard with the following command:
```bash
uv run tools/create_llama_cloud_index.py
```

6. Launch Backend Services
This command will start the required Postgres and Jaeger containers.
```bash
docker compose up -d
```
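You can confirm both containers are up with:

```bash
docker compose ps
```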
7. Run the Application

First, run the MCP server:
```bash
uv run src/notebookllama/server.py
```

Then, in a new terminal window, launch the Streamlit app:
```bash
streamlit run src/notebookllama/Home.py
```

Important: you might need to install ffmpeg if you do not have it installed already.
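If ffmpeg is missing, a typical install is one of the following (assuming Homebrew on macOS or apt on Debian/Ubuntu):

```bash
# macOS (Homebrew)
brew install ffmpeg

# Debian/Ubuntu
sudo apt-get install ffmpeg
```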
Then start exploring the app at http://localhost:8501/.
To contribute to this project, follow the contribution guidelines.
This project is provided under an MIT License.
