Use the uv environment and package manager to run the examples.

```shell
# Install uv on macOS
brew install uv

# Install uv on Linux (or macOS without Homebrew)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install uv on Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# Clone the Git repository
git clone <this_repository_url>

# Sync the environment
uv sync
```

Create a `.env` file in the root directory and add your GitHub inference credentials:
```shell
GITHUB_INFERENCE_ENDPOINT="<github_inference_endpoint>"
GITHUB_TOKEN="<github_token>"
LOG_LEVEL="<log_level>"
OPIK_API_KEY="<opik_api_key>" # Optional, for observability
TAVILY_API_KEY="<tavily_api_key>"
DEEPGRAM_API_KEY="<deepgram_api_key>"
```

The examples are currently configured to use GitHub Models via `AzureAIChatCompletionsModel`.
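Before running an example, it can help to confirm the required variables actually made it into the environment. A minimal stdlib-only sketch (the key list mirrors the `.env` template above; the helper name is an assumption, not part of this repo):

```python
import os

# Keys the examples expect; OPIK_API_KEY is optional and deliberately omitted.
REQUIRED_KEYS = [
    "GITHUB_INFERENCE_ENDPOINT",
    "GITHUB_TOKEN",
    "TAVILY_API_KEY",
    "DEEPGRAM_API_KEY",
]

def missing_keys(env=os.environ):
    """Return the required keys that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

if __name__ == "__main__":
    absent = missing_keys()
    if absent:
        print("Missing .env entries:", ", ".join(absent))
```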
If you wish to use a different provider (e.g., OpenAI, Azure OpenAI, or Anthropic), code changes are required.
Example:
- Open `src/graph_examples/doc_generator/doc_gen.py`.
- Update the `DocGen` class initialization to use your preferred LangChain chat model.
- Update the model parameters to match your provider's available models.
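As a rough illustration of the kind of edit involved, the sketch below centralizes provider-specific constructor arguments so the model can be swapped in one place. It is stdlib-only and every parameter name and model ID in it is an assumption, not taken from this repo; match them to the signature of the LangChain class you actually import (e.g. `ChatOpenAI`, `AzureChatOpenAI`, `ChatAnthropic`).

```python
import os

def chat_model_kwargs(provider: str) -> dict:
    """Build keyword arguments for a LangChain chat model constructor.

    All parameter names and model IDs below are illustrative assumptions;
    adapt them to the class you choose.
    """
    if provider == "github":  # GitHub Models via AzureAIChatCompletionsModel
        return {
            "endpoint": os.environ["GITHUB_INFERENCE_ENDPOINT"],
            "credential": os.environ["GITHUB_TOKEN"],
            "model_name": "gpt-4o-mini",  # assumed model ID
        }
    if provider == "openai":  # e.g. langchain_openai.ChatOpenAI
        return {
            "api_key": os.environ["OPENAI_API_KEY"],
            "model": "gpt-4o-mini",  # assumed model ID
        }
    if provider == "anthropic":  # e.g. langchain_anthropic.ChatAnthropic
        return {
            "api_key": os.environ["ANTHROPIC_API_KEY"],
            "model": "claude-3-5-sonnet-latest",  # assumed model ID
        }
    raise ValueError(f"Unsupported provider: {provider!r}")
```

Keeping the provider choice behind one function like this means switching from GitHub Models to another backend only touches the dictionary returned here, not every call site.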
| Workflow | Description | How to Run (from the repository root) |
|---|---|---|
| AI Product Comparison Studio | Production studio powered by LangGraph Agents, Tavily, DuckDuckGo & Deepgram. Sample output: `apple_watch_se3_vs_fitbit_sense.mp3`, `apple_watch_se3_vs_fitbit_sense.txt` | `uv run product_review` |
| Document Generator | Document generation workflow (see `src/graph_examples/doc_generator/doc_gen.py`) | `uv run doc_gen` |
| Search and Reranking Analysis | Reranking via FlashRank and the `ms-marco-MiniLM-L-12-v2` cross-encoder | `uv run rag_search` |
This is a living repository. Expect regular updates as the design matures, new workflows are added, and existing workflows are refined.





