@@ -52,15 +52,61 @@ Here is a screenshot showing the chatting web application with requests and resp
## Getting Started
### Quick Start
|[Open in GitHub Codespaces](https://codespaces.new/Azure-Samples/get-started-with-ai-agents)|[Open in Dev Containers](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/Azure-Samples/get-started-with-ai-agents)|
|---|---|

1. Click the `Open in GitHub Codespaces` or `Open in Dev Containers` button above.
2. Wait for the environment to load.
3. Run the following command in the terminal:

   ```bash
   azd up
   ```

4. Follow the prompts to select your Azure subscription and region.
5. Wait for deployment to complete (5-20 minutes); you'll get a web app URL when it finishes.
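
If the environment isn't signed in to Azure yet, `azd up` will prompt you; you can also authenticate explicitly first. A minimal sketch of the full session, assuming the Azure Developer CLI is preinstalled in the Codespace or Dev Container:

```bash
# Sign in to Azure (device-code or browser prompt).
azd auth login

# Provision the Azure resources and deploy the app in one step.
azd up
```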
For detailed deployment options and troubleshooting, see the [full deployment guide](./docs/deployment.md).
**After deployment, try these [sample questions](./docs/sample_questions.md) to test your agent.**
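
Once you have the URL, you can also smoke-test the deployed app from a terminal. The exact route and payload depend on the app's API (see src/api/routes.py); the `/chat` path and JSON body below are hypothetical placeholders:

```bash
# Hypothetical route and payload; substitute your deployed URL and the real API shape.
curl -X POST "https://<your-app>.azurecontainerapps.io/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "What company benefits do I have?"}'
```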
## Agent Evaluation
AI Foundry offers a number of [built-in evaluators](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/develop/agent-evaluate-sdk) to measure the quality, efficiency, risk, and safety of your agents. For example, the intent resolution, tool call accuracy, and task adherence evaluators assess the performance of an agent workflow, while the content safety evaluator checks responses for inappropriate content, such as violence or hate.
In this template, we show how these evaluations can be performed during different phases of your development cycle.
- **Local development**: You can use this [local evaluation script](evals/evaluate.py) to get performance and evaluation metrics for a sample set of built-in evaluators, based on a set of [test queries](evals/eval-queries.json).

  The script reads the following environment variables:

  - `AZURE_EXISTING_AIPROJECT_ENDPOINT`: the AI Project endpoint
  - `AZURE_EXISTING_AGENT_ID`: the AI Agent ID, with fallback logic that looks up the agent ID by the name in `AZURE_AI_AGENT_NAME`
  - `AZURE_AI_AGENT_DEPLOYMENT_NAME`: the deployment model used by the AI-assisted evaluators, with fallback logic to use your agent's model
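
  For example, if you run the script outside of an azd environment, you might export the values yourself. The endpoint and names below are placeholders:

  ```shell
  # Placeholder values; use the endpoint and agent from your own AI Foundry project.
  export AZURE_EXISTING_AIPROJECT_ENDPOINT="https://<your-resource>.services.ai.azure.com/api/projects/<your-project>"
  export AZURE_AI_AGENT_NAME="<your-agent-name>"
  ```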

  To install the required packages and run the script:

  ```shell
  python -m pip install -r src/requirements.txt
  python -m pip install azure-ai-evaluation

  python evals/evaluate.py
  ```
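
  Each built-in evaluator can also be invoked directly. Below is a minimal sketch, assuming a recent preview release of `azure-ai-evaluation` that exposes `IntentResolutionEvaluator`; the endpoint, key, and deployment environment variables are placeholders rather than values the script defines:

  ```python
  import os

  from azure.ai.evaluation import IntentResolutionEvaluator

  # AI-assisted evaluators need a judge model; this mirrors the
  # AZURE_AI_AGENT_DEPLOYMENT_NAME fallback described above.
  model_config = {
      "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],        # placeholder
      "api_key": os.environ["AZURE_OPENAI_API_KEY"],                # placeholder
      "azure_deployment": os.environ["AZURE_AI_AGENT_DEPLOYMENT_NAME"],
  }

  evaluator = IntentResolutionEvaluator(model_config=model_config)

  # Score a single query/response pair for how well the response resolves
  # the user's intent; the evaluator returns per-metric scores and reasoning.
  result = evaluator(
      query="What healthcare benefits does my plan include?",
      response="Your plan includes medical, dental, and vision coverage.",
  )
  print(result)
  ```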

- **Monitoring**: When tracing is enabled, the [application code](src/api/routes.py) sends an asynchronous evaluation request after processing a thread run, allowing continuous monitoring of your agent. You can view the results from the AI Foundry Tracing tab. Alternatively, you can go to your Application Insights logs for an interactive experience. Here is an example query to see logs on thread runs and related events.
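
  The exact query depends on what your app logs; one illustrative shape, assuming thread-run events land in the `traces` table with "thread.run" in the message, is:

  ```kusto
  // Recent thread-run entries with their custom dimensions (illustrative only).
  traces
  | where message has "thread.run"
  | project timestamp, message, customDimensions
  | order by timestamp desc
  | take 50
  ```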

- **Continuous Integration**: You can try the [AI Agent Evaluation GitHub action](https://github.com/microsoft/ai-agent-evals) using the [sample GitHub workflow](.github/workflows/ai-evaluation.yaml) in your CI/CD pipeline. The action runs a set of queries against your agent, performs evaluations with evaluators of your choice, and produces a summary report. It also supports a comparison mode with statistical tests, allowing you to iterate on agent changes in your production environment with confidence. See the [documentation](https://github.com/microsoft/ai-agent-evals) for more details.
## Other Features
Once you have the agents and the web app working, you are encouraged to try one of the following:

**`docs/other_features.md`** (38 lines changed: 0 additions & 38 deletions)
@@ -14,44 +14,6 @@ After accessing your resource group in the Azure portal, choose your container app fr
You can view the App Insights tracing in Azure AI Foundry. Select your project on the Azure AI Foundry page and then click 'Tracing'.
## AI Red Teaming Agent
The [AI Red Teaming Agent](https://learn.microsoft.com/azure/ai-foundry/concepts/ai-red-teaming-agent) is a powerful tool designed to help organizations proactively find security and safety risks associated with generative AI systems during their design and development.