@@ -67,53 +67,79 @@ anthropic_model = AnthropicModel(
agent = Agent(model=anthropic_model)
```
- ### Ollama (Local Models)
+ ### LiteLLM

- First install the `ollama` python client:
+ LiteLLM is a unified interface for various LLM providers that allows you to interact with models from OpenAI and many others.
+
+ First install the `litellm` python client:

```bash
- pip install strands-agents[ollama]
+ pip install strands-agents[litellm]
```

- Next, import and initialize the `OllamaModel` provider:
+ Next, import and initialize the `LiteLLMModel` provider:

```python
from strands import Agent
- from strands.models.ollama import OllamaModel
+ from strands.models.litellm import LiteLLMModel

- ollama_model = OllamaModel(
-     host="http://localhost:11434",  # Ollama server address
-     model_id="llama3",  # Specify which model to use
-     temperature=0.3,
+ litellm_model = LiteLLMModel(
+     client_args={
+         "api_key": "<KEY>",
+     },
+     model_id="gpt-4o"
)

- agent = Agent(model=ollama_model)
+ agent = Agent(model=litellm_model)
```
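+
+ Once configured, the agent can be used like any other Strands agent. A minimal usage sketch — the prompt is illustrative, and LiteLLM can also route provider-prefixed model IDs (for example `anthropic/<model>`) to other backends:
+
+ ```python
+ # Ask the LiteLLM-backed agent a question; the result is returned
+ # once the underlying model call completes.
+ response = agent("What is the capital of France?")
+ print(response)
+ ```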
- ### LiteLLM
+ ### Llama API
- LiteLLM is a unified interface for various LLM providers that allows you to interact with models from OpenAI and many others.
-
- First install the `litellm` python client:
+ Llama API is a Meta-hosted API service that helps you integrate Llama models into your applications quickly and efficiently.

+ First install the `llamaapi` python client:

```bash
- pip install strands-agents[litellm]
+ pip install strands-agents[llamaapi]
```

- Next, import and initialize the `LiteLLMModel` provider:
+ Next, import and initialize the `LlamaAPIModel` provider:

```python
from strands import Agent
- from strands.models.litellm import LiteLLMModel
+ from strands.models.llamaapi import LlamaAPIModel

- litellm_model = LiteLLMModel(
+ model = LlamaAPIModel(
    client_args={
        "api_key": "<KEY>",
    },
-     model_id="gpt-4o"
+     # **model_config
+     model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
)

- agent = Agent(model=litellm_model)
+ agent = Agent(model=model)
+ ```
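+
+ If you prefer not to hard-code the key, a common pattern is to read it from an environment variable; `LLAMA_API_KEY` below is an assumed name used for illustration, not one required by the SDK:
+
+ ```python
+ import os
+
+ from strands.models.llamaapi import LlamaAPIModel
+
+ # Same constructor as above, with the key pulled from the environment
+ # (LLAMA_API_KEY is a hypothetical variable name).
+ model = LlamaAPIModel(
+     client_args={"api_key": os.environ["LLAMA_API_KEY"]},
+     model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
+ )
+ ```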
+
+ ### Ollama (Local Models)
+
+ First install the `ollama` python client:
+
+ ```bash
+ pip install strands-agents[ollama]
+ ```
+
+ Next, import and initialize the `OllamaModel` provider:
+
+ ```python
+ from strands import Agent
+ from strands.models.ollama import OllamaModel
+
+ ollama_model = OllamaModel(
+     host="http://localhost:11434",  # Ollama server address
+     model_id="llama3",  # Specify which model to use
+     temperature=0.3,
+ )
+
+ agent = Agent(model=ollama_model)
```
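+
+ This example assumes the Ollama server is already running locally (for example via `ollama serve`) and that the model has been pulled beforehand (`ollama pull llama3`). With that in place, a minimal usage sketch with an illustrative prompt:
+
+ ```python
+ # Query the locally hosted model through the agent.
+ response = agent("In one sentence, what is a benefit of running models locally?")
+ print(response)
+ ```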
### Custom Model Providers