- Visual Studio 2022 or higher
- .NET Desktop Development Workload
- .NET 8 SDK (may already be included with your Visual Studio installation)
- Foundry Local:
  ```shell
  winget install Microsoft.FoundryLocal
  ```
Double-click the FoundryLocalLabDemo.sln file in the root directory to open the solution.
Click the Start Debugging button to build and launch the app. We'll keep it running while we edit, so we can see changes appear live!
The app should look like this when it launches.
Notice that no models appear in the middle section. We're going to add the ability to use local LLMs using Foundry Local!
Back in Visual Studio, open the Solution Explorer and inspect the project's dependencies. Notice that the Foundry Local NuGet package is installed alongside some Azure AI (cloud) libraries. In a new project you would install the Foundry Local NuGet package yourself; it's pre-installed here because the sample project uses some of its types.
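For reference, if you were starting from scratch, the package reference in your project file might look something like the following. This is a sketch: the package name shown matches the Foundry Local C# SDK at the time of writing, and the version number is a placeholder you'd replace with the latest release.

```xml
<!-- Hypothetical .csproj fragment; version is a placeholder -->
<ItemGroup>
  <PackageReference Include="Microsoft.AI.Foundry.Local" Version="x.y.z" />
</ItemGroup>
```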
Further down in the Solution Explorer, find and open the ExecutionLogic.cs file. Notice that we have a static FoundryLocalManager initialized, but the rest has not been implemented.
First, we have to start the Foundry Local service. Update StartServiceAsync to return the result from manager.StartServiceAsync().
```csharp
public static Task StartServiceAsync()
{
    // Start the Foundry Local service
    return manager.StartServiceAsync();
}
```

Then, we need to implement the method to list all the models available in the catalog. This returns every local model your device can run, even those that haven't been downloaded yet.
Within ListCatalogModelsAsync, simply call
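As a sketch, the completed method could simply forward to the manager's catalog call. Note that the exact return type of the SDK method may differ from what's shown here; treat this as an illustration rather than the definitive signature:

```csharp
// Sketch: assumes FoundryLocalManager exposes a ListCatalogModelsAsync()
// that returns the catalog of locally runnable models.
public static Task<List<ModelInfo>> ListCatalogModelsAsync()
{
    // Return every catalog model this device can run,
    // including models that haven't been downloaded yet.
    return manager.ListCatalogModelsAsync();
}
```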