
Commit 5bca5f5

Move some contents to other features page (#161)

* Move some contents to other features page
* Update other_features.md
* Update other_features.md

1 parent 721a340 commit 5bca5f5

File tree

5 files changed: +78 −68 lines


README.md

Lines changed: 8 additions & 68 deletions
@@ -4,7 +4,7 @@ The agent leverages the Azure AI Agent service and utilizes file search for know
 
 <div style="text-align:center;">
 
-[**SOLUTION OVERVIEW**](#solution-overview) \| [**GETTING STARTED**](#getting-started) \| [**TRACING AND MONITORING**](#tracing-and-monitoring) \| [**AGENT EVALUATION**](#agent-evaluation) \| [**AI RED TEAMING AGENT**](#ai-red-teaming-agent) \| [**RESOURCE CLEAN-UP**](#resource-clean-up) \| [**GUIDANCE**](#guidance) \| [**TROUBLESHOOTING**](#troubleshooting)
+[**SOLUTION OVERVIEW**](#solution-overview) \| [**GETTING STARTED**](#getting-started) \| [**OTHER FEATURES**](#other-features) \| [**RESOURCE CLEAN-UP**](#resource-clean-up) \| [**GUIDANCE**](#guidance) \| [**TROUBLESHOOTING**](#troubleshooting)
 
 </div>
 

@@ -20,7 +20,7 @@ Instructions are provided for deployment through GitHub Codespaces, VS Code Dev
 
 ### Solution Architecture
 
-![Architecture diagram showing that user input is provided to the Azure Container App, which contains the app code. With user identity and resource access through managed identity, the input is used to form a response. The input and the Azure monitor are able to use the Azure resources deployed in the solution: Application Insights, Azure AI Foundry Project, Azure AI Services, Storage account, Azure Container App, and Log Analytics Workspace.](docs/architecture.png)
+![Architecture diagram showing that user input is provided to the Azure Container App, which contains the app code. With user identity and resource access through managed identity, the input is used to form a response. The input and the Azure monitor are able to use the Azure resources deployed in the solution: Application Insights, Azure AI Foundry Project, Azure AI Services, Storage account, Azure Container App, and Log Analytics Workspace.](docs/images/architecture.png)
 
 The app code runs in Azure Container App to process the user input and generate a response to the user. It leverages Azure AI projects and Azure AI services, including the model and agent.
 

@@ -48,7 +48,7 @@ Facilitates the creation of an AI Red Teaming Agent that can run batch automated
 
 Here is a screenshot showing the chatting web application with requests and responses between the system and the user:
 
-![Screenshot of chatting web application showing requests and responses between agent and the user.](docs/webapp_screenshot.png)
+![Screenshot of chatting web application showing requests and responses between agent and the user.](docs/images/webapp_screenshot.png)
 
 ## Getting Started
 

@@ -62,74 +62,14 @@ Github Codespaces and Dev Containers both allow you to download and deploy the c
 **After deployment, try these [sample questions](./docs/sample_questions.md) to test your agent.**
 
 
-## Tracing and Monitoring
+## Other Features
+Once you have the agents and the web app working, you are encouraged to try one of the following:
 
-You can view console logs in Azure portal. You can get the link to the resource group with the azd tool:
+- **[Tracing and Monitoring](./docs/other_features.md#tracing-and-monitoring)** - View console logs in Azure portal and App Insights tracing in Azure AI Foundry for debugging and performance monitoring.
 
-```shell
-azd show
-```
+- **[Agent Evaluation](./docs/other_features.md#agent-evaluation)** - Evaluate your agent's performance and quality using built-in evaluators for local development, continuous monitoring, and CI/CD integration.
 
-Or if you want to navigate from the Azure portal main page, select your resource group from the 'Recent' list, or by clicking the 'Resource groups' and searching your resource group there.
-
-After accessing you resource group in Azure portal, choose your container app from the list of resources. Then open 'Monitoring' and 'Log Stream'. Choose the 'Application' radio button to view application logs. You can choose between real-time and historical using the corresponding radio buttons. Note that it may take some time for the historical view to be updated with the latest logs.
-
-You can view the App Insights tracing in Azure AI Foundry. Select your project on the Azure AI Foundry page and then click 'Tracing'.
-
-## Agent Evaluation
-
-AI Foundry offers a number of [built-in evaluators](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/develop/agent-evaluate-sdk) to measure the quality, efficiency, risk and safety of your agents. For example, intent resolution, tool call accuracy, and task adherence evaluators are targeted to assess the performance of agent workflow, while content safety evaluator checks for inappropriate content in the responses such as violence or hate.
-
-In this template, we show how these evaluations can be performed during different phases of your development cycle.
-
-- **Local development**: You can use this [local evaluation script](./evals/evaluate.py) to get performance and evaluation metrics based on a set of [test queries](./evals/eval-queries.json) for a sample set of built-in evaluators.
-
-The script reads the following environment variables:
-- `AZURE_EXISTING_AIPROJECT_ENDPOINT`: AI Project endpoint
-- `AZURE_EXISTING_AGENT_ID`: AI Agent Id, with fallback logic to look up agent Id by name `AZURE_AI_AGENT_NAME`
-- `AZURE_AI_AGENT_DEPLOYMENT_NAME`: Deployment model used by the AI-assisted evaluators, with fallback logic to your agent model
-
-To install required packages and run the script:
-
-```shell
-python -m pip install -r src/requirements.txt
-python -m pip install azure-ai-evaluation
-
-python evals/evaluate.py
-```
-
-- **Monitoring**: When tracing is enabled, the [application code](./src/api/routes.py) sends an asynchronous evaluation request after processing a thread run, allowing continuous monitoring of your agent. You can view results from the AI Foundry Tracing tab.
-![Tracing](docs/tracing_eval_screenshot.png)
-Alternatively, you can go to your Application Insights logs for an interactive experience. Here is an example query to see logs on thread runs and related events.
-
-```kql
-let thread_run_events = traces
-| extend thread_run_id = tostring(customDimensions.["gen_ai.thread.run.id"]);
-dependencies
-| extend thread_run_id = tostring(customDimensions.["gen_ai.thread.run.id"])
-| join kind=leftouter thread_run_events on thread_run_id
-| where isnotempty(thread_run_id)
-| project timestamp, thread_run_id, name, success, duration, event_message = message, event_dimensions=customDimensions1
-```
-
-- **Continuous Integration**: You can try the [AI Agent Evaluation GitHub action](https://github.com/microsoft/ai-agent-evals) using the [sample GitHub workflow](./.github/workflows/ai-evaluation.yaml) in your CI/CD pipeline. This GitHub action runs a set of queries against your agent, performs evaluations with evaluators of your choice, and produce a summary report. It also supports a comparison mode with statistical test, allowing you to iterate agent changes on your production environment with confidence. See [documentation](https://github.com/microsoft/ai-agent-evals) for more details.
-
-## AI Red Teaming Agent
-
-The [AI Red Teaming Agent](https://learn.microsoft.com/azure/ai-foundry/concepts/ai-red-teaming-agent) is a powerful tool designed to help organizations proactively find security and safety risks associated with generative AI systems during design and development of generative AI models and applications.
-
-In this [script](airedteaming/ai_redteaming.py), you will be able to set up an AI Red Teaming Agent to run an automated scan of your agent in this sample. No test dataset or adversarial LLM is needed as the AI Red Teaming Agent will generate all the attack prompts for you.
-
-To install required extra package from Azure AI Evaluation SDK and run the script in your local development environment:
-
-```shell
-python -m pip install -r src/requirements.txt
-python -m pip install azure-ai-evaluation[redteam]
-
-python evals/airedteaming.py
-```
-
-Read more on supported attack techniques and risk categories in our [documentation](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/run-scans-ai-red-teaming-agent).
+- **[AI Red Teaming Agent](./docs/other_features.md#ai-red-teaming-agent)** - Run automated security and safety scans on your agent solution to check your risk posture before production deployment.
 
 ## Resource Clean-up
 

3 files renamed without changes.

docs/other_features.md

Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
+# Other Features
+
+## Tracing and Monitoring
+
+You can view console logs in the Azure portal. You can get the link to the resource group with the azd tool:
+
+```shell
+azd show
+```
+
+Alternatively, from the Azure portal main page, select your resource group from the 'Recent' list, or click 'Resource groups' and search for your resource group there.
+
+After accessing your resource group in the Azure portal, choose your container app from the list of resources. Then open 'Monitoring' and 'Log Stream'. Choose the 'Application' radio button to view application logs. You can switch between real-time and historical views using the corresponding radio buttons. Note that it may take some time for the historical view to be updated with the latest logs.
+
+You can view the App Insights tracing in Azure AI Foundry. Select your project on the Azure AI Foundry page and then click 'Tracing'.
+
+## Agent Evaluation
+
+AI Foundry offers a number of [built-in evaluators](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/develop/agent-evaluate-sdk) to measure the quality, efficiency, risk, and safety of your agents. For example, the intent resolution, tool call accuracy, and task adherence evaluators assess the performance of the agent workflow, while the content safety evaluator checks for inappropriate content in the responses, such as violence or hate.
+
+In this template, we show how these evaluations can be performed during different phases of your development cycle.
+
+- **Local development**: You can use this [local evaluation script](./evals/evaluate.py) to get performance and evaluation metrics based on a set of [test queries](./evals/eval-queries.json) for a sample set of built-in evaluators.
+
+  The script reads the following environment variables:
+  - `AZURE_EXISTING_AIPROJECT_ENDPOINT`: AI Project endpoint
+  - `AZURE_EXISTING_AGENT_ID`: AI Agent ID, with fallback logic to look up the agent ID by the name in `AZURE_AI_AGENT_NAME`
+  - `AZURE_AI_AGENT_DEPLOYMENT_NAME`: Deployment model used by the AI-assisted evaluators, with fallback logic to your agent's model
+
+  To install the required packages and run the script:
+
+  ```shell
+  python -m pip install -r src/requirements.txt
+  python -m pip install azure-ai-evaluation
+
+  python evals/evaluate.py
+  ```
+
+- **Monitoring**: When tracing is enabled, the [application code](./src/api/routes.py) sends an asynchronous evaluation request after processing a thread run, allowing continuous monitoring of your agent. You can view results from the AI Foundry Tracing tab.
+  ![Tracing](./images/tracing_eval_screenshot.png)
+  Alternatively, you can go to your Application Insights logs for an interactive experience. Here is an example query to see logs on thread runs and related events:
+
+  ```kql
+  let thread_run_events = traces
+  | extend thread_run_id = tostring(customDimensions.["gen_ai.thread.run.id"]);
+  dependencies
+  | extend thread_run_id = tostring(customDimensions.["gen_ai.thread.run.id"])
+  | join kind=leftouter thread_run_events on thread_run_id
+  | where isnotempty(thread_run_id)
+  | project timestamp, thread_run_id, name, success, duration, event_message = message, event_dimensions = customDimensions1
+  ```
+
+- **Continuous Integration**: You can try the [AI Agent Evaluation GitHub action](https://github.com/microsoft/ai-agent-evals) using the [sample GitHub workflow](./.github/workflows/ai-evaluation.yaml) in your CI/CD pipeline. This GitHub action runs a set of queries against your agent, performs evaluations with evaluators of your choice, and produces a summary report. It also supports a comparison mode with statistical tests, allowing you to iterate on agent changes in your production environment with confidence. See the [documentation](https://github.com/microsoft/ai-agent-evals) for more details.
+
+## AI Red Teaming Agent
+
+The [AI Red Teaming Agent](https://learn.microsoft.com/azure/ai-foundry/concepts/ai-red-teaming-agent) is a powerful tool designed to help organizations proactively find security and safety risks associated with generative AI systems during the design and development of generative AI models and applications.
+
+In this [script](airedteaming/ai_redteaming.py), you can set up an AI Red Teaming Agent to run an automated scan of the agent in this sample. No test dataset or adversarial LLM is needed, as the AI Red Teaming Agent generates all the attack prompts for you.
+
+To install the required extra package from the Azure AI Evaluation SDK and run the script in your local development environment:
+
+```shell
+python -m pip install -r src/requirements.txt
+python -m pip install "azure-ai-evaluation[redteam]"
+
+python evals/airedteaming.py
+```
+
+Read more on supported attack techniques and risk categories in our [documentation](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/run-scans-ai-red-teaming-agent).
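
The environment-variable fallback logic described under Agent Evaluation can be sketched as below. This is a minimal illustration under stated assumptions, not the actual logic in `evals/evaluate.py`; `resolve_eval_settings` and its `agent_model` parameter are hypothetical names introduced here.

```python
def resolve_eval_settings(env: dict, agent_model: str = "") -> dict:
    """Sketch of the documented fallbacks (hypothetical helper).

    - AZURE_EXISTING_AGENT_ID is preferred; otherwise the agent is
      looked up by the name in AZURE_AI_AGENT_NAME.
    - AZURE_AI_AGENT_DEPLOYMENT_NAME selects the evaluator model;
      otherwise the agent's own model (agent_model here) is used.
    """
    # The project endpoint is required, so a missing value raises KeyError.
    settings = {"endpoint": env["AZURE_EXISTING_AIPROJECT_ENDPOINT"]}

    agent_id = env.get("AZURE_EXISTING_AGENT_ID")
    if agent_id:
        settings["agent"] = {"id": agent_id}
    else:
        # Fallback: resolve the agent by name instead of by ID.
        settings["agent"] = {"name": env["AZURE_AI_AGENT_NAME"]}

    # The AI-assisted evaluators fall back to the agent's own model.
    settings["eval_model"] = env.get("AZURE_AI_AGENT_DEPLOYMENT_NAME") or agent_model
    return settings
```

In practice you would pass `os.environ` as `env`; a plain dict is used here so the fallback behaviour is easy to test in isolation.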
