## Tracing and Monitoring

**If tracing isn't enabled yet, enable it by setting the environment variable and redeploying:**

```shell
azd env set ENABLE_AZURE_MONITOR_TRACING true
azd deploy
```

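Once deployed, the application can check this variable before wiring up tracing. Below is a minimal sketch of that pattern, assuming only the variable name from the step above; the actual tracing setup call is indicated in a comment, and this is not this template's real startup code:

```python
import os

def tracing_enabled() -> bool:
    """Interpret the ENABLE_AZURE_MONITOR_TRACING flag as a boolean."""
    value = os.environ.get("ENABLE_AZURE_MONITOR_TRACING", "false")
    return value.strip().lower() in ("1", "true", "yes")

if tracing_enabled():
    # A real app would configure Azure Monitor / OpenTelemetry exporters here.
    print("Azure Monitor tracing: enabled")
else:
    print("Azure Monitor tracing: disabled")
```
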
You can view console logs in the Azure portal. You can get the link to the resource group with the azd tool:

```shell
azd show
```

Or, if you want to navigate from the Azure portal main page, select your resource group from the 'Recent' list, or click 'Resource groups' and search for your resource group there.

After accessing your resource group in the Azure portal, choose your container app from the list of resources. Then open 'Monitoring' and 'Log Stream'. Choose the 'Application' radio button to view application logs. You can switch between real-time and historical views using the corresponding radio buttons. Note that it may take some time for the historical view to be updated with the latest logs.

You can view the App Insights tracing in Azure AI Foundry. Select your project on the Azure AI Foundry page and then click 'Tracing'.

## Agent Evaluation

**First, make sure tracing is working by following the steps in the [Tracing and Monitoring](#tracing-and-monitoring) section above.**

AI Foundry offers a number of [built-in evaluators](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/agent-evaluate-sdk) to measure the quality, efficiency, risk, and safety of your agents. For example, the intent resolution, tool call accuracy, and task adherence evaluators assess the performance of an agent workflow, while the content safety evaluator checks for inappropriate content in responses, such as violence or hate.

In this template, we show how these evaluations can be performed during different phases of your development cycle.

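As a simplified mental model (this is not the SDK's actual interface), an evaluator is essentially a callable that scores an interaction and reports whether it passed. The toy keyword check below is a hypothetical stand-in for a content safety evaluator, illustrating only the input/output shape:

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    evaluator: str
    score: float   # several AI-assisted evaluators report scores on a 1-5 scale
    passed: bool

def toy_content_safety(query: str, response: str) -> EvalResult:
    """Toy keyword check standing in for a real, AI-assisted content safety evaluator."""
    blocked = ("violence", "hate")
    flagged = any(term in response.lower() for term in blocked)
    return EvalResult(evaluator="content_safety",
                      score=1.0 if flagged else 5.0,
                      passed=not flagged)
```

The real built-in evaluators are far more nuanced; this sketch only shows the shape of the contract a test harness can build on.
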
- **Local development**: You can use this [local evaluation script](../evals/evaluate.py) to get performance and evaluation metrics based on a set of [test queries](../evals/eval-queries.json) for a sample set of built-in evaluators.

The script reads the following environment variables:

- `AZURE_EXISTING_AIPROJECT_ENDPOINT`: the AI Project endpoint
- `AZURE_EXISTING_AGENT_ID`: the AI Agent id, with fallback logic that looks the agent id up by the name in `AZURE_AI_AGENT_NAME`
- `AZURE_AI_AGENT_DEPLOYMENT_NAME`: the model deployment used by the AI-assisted evaluators, with fallback logic to your agent's model

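The fallback logic for the agent id can be sketched as plain environment-variable resolution. This is illustrative only, not the script's actual code; `lookup_by_name` stands in for a real lookup against the AI project:

```python
import os
from typing import Callable, Optional

def resolve_agent_id(lookup_by_name: Callable[[str], Optional[str]]) -> Optional[str]:
    """Prefer AZURE_EXISTING_AGENT_ID; otherwise look the agent up by AZURE_AI_AGENT_NAME."""
    agent_id = os.environ.get("AZURE_EXISTING_AGENT_ID")
    if agent_id:
        return agent_id
    agent_name = os.environ.get("AZURE_AI_AGENT_NAME")
    if agent_name:
        # In the real script this would list agents in the project and match on name.
        return lookup_by_name(agent_name)
    return None
```
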
**(Optional)** All of these environment variables are generated locally in [`.env`](../src/.env) after executing `azd up`, except `AZURE_EXISTING_AGENT_ID`, which is generated remotely. To find the agent id, follow these steps:

1. Go to the [Azure AI Foundry portal](https://ai.azure.com/) and sign in
2. Click on your project from the homepage
3. In the left-hand menu, select 'Agents'
4. Choose the agent you want to inspect
5. The agent id will be shown in the agent's detail panel, usually near the top or under the 'Properties' or 'Overview' tab

To install required packages and run the script:

```shell
python -m pip install -r src/requirements.txt
```

- **Monitoring**: When tracing is enabled, the [application code](../src/api/routes.py) sends an asynchronous evaluation request after processing a thread run, allowing continuous monitoring of your agent. You can view results from the AI Foundry Tracing tab.

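The fire-and-forget pattern described above can be sketched with `asyncio`. Here, `run_evaluation` is a hypothetical stand-in for the evaluation request, not the actual code in `routes.py`:

```python
import asyncio

async def run_evaluation(thread_id: str, run_id: str) -> None:
    # Stand-in for posting an evaluation request for a finished run.
    await asyncio.sleep(0)  # simulate the I/O of the real request
    print(f"evaluation requested for run {run_id} on thread {thread_id}")

async def handle_thread_run(thread_id: str, run_id: str) -> str:
    response = f"agent response for {run_id}"
    # Schedule the evaluation without blocking the user-facing response.
    asyncio.create_task(run_evaluation(thread_id, run_id))
    return response
```

A production handler would also keep a reference to the scheduled task and handle its errors, so evaluations are not silently dropped.
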
Alternatively, you can go to your Application Insights logs for an interactive experience. To access Application Insights logs in the Azure portal:

1. Navigate to your resource group (use `azd show` to get the link)
2. Find and click on the Application Insights resource (its name usually starts with `appi-`)
3. In the left menu, click on **Logs** under the **Monitoring** section
4. You can now run KQL queries in the query editor

Here is an example query to see logs on thread runs and related events:

```kql
let thread_run_events = traces
```

- **Continuous Integration**: You can try the [AI Agent Evaluation GitHub action](https://github.com/microsoft/ai-agent-evals) using the [sample GitHub workflow](../.github/workflows/ai-evaluation.yaml) in your CI/CD pipeline. This GitHub action runs a set of queries against your agent, performs evaluations with evaluators of your choice, and produces a summary report. It also supports a comparison mode with a statistical test, allowing you to iterate agent changes in your production environment with confidence. See the [documentation](https://github.com/microsoft/ai-agent-evals) for more details.

## AI Red Teaming Agent

In this [script](../airedteaming/ai_redteaming.py), you will be able to set up an AI Red Teaming Agent to run an automated scan of your agent in this sample. No test dataset or adversarial LLM is needed as the AI Red Teaming Agent will generate all the attack prompts for you.
To install the required extra packages from the Azure AI Evaluation SDK and run the script in your local development environment: