Leveraging Azure AI Studio for .NET Developers: Building End-to-End AI Solutions

Summary: This post explores how .NET developers can leverage Azure AI Studio to build comprehensive AI solutions. Learn how to use this unified platform to develop, test, and deploy AI applications with seamless integration with your .NET projects.

Introduction

Azure AI Studio represents a significant evolution in Microsoft’s AI development ecosystem, providing a unified platform for building end-to-end AI solutions. For .NET developers, this platform offers powerful capabilities to integrate advanced AI features into applications without having to manage multiple disparate services.

In this post, we’ll walk through the platform from a .NET perspective: setting up your environment, deploying and calling models, building prompt flows, and assembling a retrieval-augmented generation (RAG) application. By the end of this article, you’ll understand how to use Azure AI Studio to enhance your .NET applications with sophisticated AI capabilities.

Understanding Azure AI Studio

Before diving into implementation, let’s understand what Azure AI Studio is and how it fits into the broader Azure AI ecosystem.

What is Azure AI Studio?

Azure AI Studio is a unified platform that brings together various Azure AI services, tools, and infrastructure to simplify the development of AI solutions. It provides a comprehensive environment for:

  • Building and fine-tuning AI models
  • Creating and managing prompts
  • Developing and testing AI applications
  • Deploying and monitoring AI solutions
  • Managing the entire AI application lifecycle

Key Components of Azure AI Studio

Azure AI Studio consists of several key components:

  1. Model Catalog: Access to a wide range of AI models, including Azure OpenAI models, open-source models, and custom models.
  2. Prompt Flow: A tool for creating, testing, and managing prompts for large language models.
  3. Evaluation Tools: Capabilities for evaluating model performance and comparing different models.
  4. Projects: Organizational units for managing AI solutions, including resources, models, and deployments.
  5. Deployments: Tools for deploying AI solutions to production environments.
  6. Monitoring: Features for monitoring the performance and usage of deployed AI solutions.

How Azure AI Studio Benefits .NET Developers

For .NET developers, Azure AI Studio offers several specific benefits:

  • Seamless Integration: Easy integration with .NET applications through SDKs and APIs.
  • Unified Development Experience: A single platform for all AI development needs.
  • Simplified Deployment: Streamlined deployment of AI solutions to Azure.
  • Comprehensive Monitoring: Built-in monitoring and logging capabilities.
  • Cost Management: Tools for managing and optimizing AI-related costs.

Setting Up Your Development Environment

Let’s start by setting up your development environment for working with Azure AI Studio.

Prerequisites

To follow along with this tutorial, you’ll need:

  • Visual Studio 2022 or Visual Studio Code
  • .NET 8 SDK
  • An Azure subscription with access to Azure AI Studio and Azure OpenAI
  • Azure CLI installed locally

Creating an Azure AI Studio Project

First, let’s create a new project in Azure AI Studio:

  1. Navigate to the Azure AI Studio portal
  2. Click on “Create a new project”
  3. Fill in the project details:
    • Name: DotNetAIDemo
    • Description: A demo project for .NET developers
    • Resource Group: Create a new one or select an existing one
    • Region: Select a region where Azure OpenAI is available
  4. Click “Create”

Setting Up Your .NET Project

Now, let’s create a new .NET project that will integrate with Azure AI Studio:

bash

# Create a new .NET Web API project
dotnet new webapi -n AzureAIStudioDemo
cd AzureAIStudioDemo

# Add the necessary packages
dotnet add package Azure.AI.OpenAI
dotnet add package Microsoft.Extensions.Azure
dotnet add package Azure.Identity

Configuring Azure Authentication

Let’s configure Azure authentication in our .NET application:

csharp

// Program.cs
using Azure.Identity;
using Microsoft.Extensions.Azure;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

// Add Azure services
builder.Services.AddAzureClients(clientBuilder =>
{
    clientBuilder.UseCredential(new DefaultAzureCredential());
    
    // Add OpenAI client
    clientBuilder.AddOpenAIClient(new Uri(builder.Configuration["AzureOpenAI:Endpoint"]));
});

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();

app.Run();
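
DefaultAzureCredential tries a chain of credential sources in order (environment variables, managed identity, Visual Studio, the Azure CLI, and so on), so the same code works on your machine after az login and in Azure with a managed identity, without any secrets in code. If you want a tighter, more predictable chain, you can pass DefaultAzureCredentialOptions; the snippet below is an optional tweak inside the AddAzureClients callback, not something this demo requires.

csharp

// Optional: narrow the DefaultAzureCredential chain inside AddAzureClients.
// (ExcludeInteractiveBrowserCredential already defaults to true; it's shown here for clarity.)
clientBuilder.UseCredential(new DefaultAzureCredential(new DefaultAzureCredentialOptions
{
    ExcludeInteractiveBrowserCredential = true,
    ExcludeSharedTokenCacheCredential = true
}));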

Setting Up Configuration

Create or update your appsettings.json file:

json

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "AzureOpenAI": {
    "Endpoint": "https://your-resource-name.openai.azure.com/",
    "DeploymentName": "your-deployment-name"
  }
}
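
If you prefer strongly typed configuration over raw string keys like "AzureOpenAI:DeploymentName", you can bind the AzureOpenAI section to an options class. This is an optional sketch; the class name is just an illustration.

csharp

// Optional: a typed options class bound to the "AzureOpenAI" section (illustrative name).
public class AzureOpenAIOptions
{
    public string Endpoint { get; set; } = string.Empty;
    public string DeploymentName { get; set; } = string.Empty;
}

// In Program.cs:
builder.Services.Configure<AzureOpenAIOptions>(
    builder.Configuration.GetSection("AzureOpenAI"));

// Consumers can then inject IOptions<AzureOpenAIOptions> instead of IConfiguration.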

Working with Models in Azure AI Studio

Now, let’s explore how to work with AI models in Azure AI Studio and integrate them into your .NET application.

Exploring the Model Catalog

Azure AI Studio provides a comprehensive catalog of AI models, including:

  1. Azure OpenAI Models: GPT-4, GPT-3.5-Turbo, etc.
  2. Open Source Models: Llama 2, Mistral, etc.
  3. Custom Models: Models that you’ve fine-tuned or imported

To explore the model catalog:

  1. In Azure AI Studio, navigate to “Model catalog” in the left sidebar
  2. Browse through the available models
  3. Click on a model to view its details, including capabilities, parameters, and usage examples

Deploying a Model

Before we can use a model in our .NET application, we need to deploy it:

  1. In the model catalog, select a model (e.g., GPT-4)
  2. Click “Deploy”
  3. Fill in the deployment details:
    • Deployment name: gpt4-deployment
    • Model version: Select the latest version
    • Content filter: Enable as needed
    • Deployment type: Standard
  4. Click “Deploy”

Creating a Basic Controller for Model Interaction

Let’s create a controller in our .NET application to interact with the deployed model:

csharp

using Azure;
using Azure.AI.OpenAI;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class OpenAIController : ControllerBase
{
    private readonly OpenAIClient _openAIClient;
    private readonly IConfiguration _configuration;
    
    public OpenAIController(OpenAIClient openAIClient, IConfiguration configuration)
    {
        _openAIClient = openAIClient;
        _configuration = configuration;
    }
    
    [HttpPost("chat")]
    public async Task<IActionResult> Chat([FromBody] ChatRequest request)
    {
        try
        {
            var deploymentName = _configuration["AzureOpenAI:DeploymentName"];
            
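            // Note: the ChatCompletionsOptions/ChatMessage shapes used below match the
            // Azure.AI.OpenAI preview SDK at the time of writing; later preview releases
            // renamed some of these types, so compare against the samples for the version you install.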
            var chatCompletionsOptions = new ChatCompletionsOptions
            {
                Messages =
                {
                    new ChatMessage(ChatRole.System, "You are a helpful AI assistant."),
                    new ChatMessage(ChatRole.User, request.Message)
                },
                Temperature = 0.7f,
                MaxTokens = 800,
                NucleusSamplingFactor = 0.95f
            };
            
            var response = await _openAIClient.GetChatCompletionsAsync(deploymentName, chatCompletionsOptions);
            var responseMessage = response.Value.Choices[0].Message.Content;
            
            return Ok(new ChatResponse { Message = responseMessage });
        }
        catch (Exception ex)
        {
            return StatusCode(500, $"An error occurred: {ex.Message}");
        }
    }
}

public class ChatRequest
{
    public string Message { get; set; }
}

public class ChatResponse
{
    public string Message { get; set; }
}
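
With the API running (dotnet run), you can exercise the endpoint with a small console client. This is just a local smoke test; the port is an assumption taken from your launch profile, and the route follows from the [Route("api/[controller]")] attribute above.

csharp

// Minimal smoke test for the chat endpoint.
using System.Net.Http.Json;

var client = new HttpClient { BaseAddress = new Uri("https://localhost:7443") }; // adjust the port to your launch profile

var response = await client.PostAsJsonAsync("api/OpenAI/chat", new { Message = "What is Azure AI Studio?" });
response.EnsureSuccessStatusCode();

var result = await response.Content.ReadFromJsonAsync<ChatResponse>();
Console.WriteLine(result?.Message);

// Mirrors the response shape returned by OpenAIController.
public record ChatResponse(string Message);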

Working with Prompt Flow

Prompt Flow is a key component of Azure AI Studio that helps you create, test, and manage prompts for large language models. Let’s explore how to use Prompt Flow in your .NET applications.

Understanding Prompt Flow

Prompt Flow allows you to:

  • Create and manage prompts
  • Test prompts with different inputs
  • Chain prompts together in a workflow
  • Evaluate prompt performance
  • Deploy prompts as APIs

Creating a Prompt Flow

Let’s create a simple prompt flow in Azure AI Studio:

  1. In your Azure AI Studio project, navigate to “Prompt flow” in the left sidebar
  2. Click “Create”
  3. Select “Standard flow”
  4. Name your flow ProductDescriptionGenerator
  5. Click “Create”

Now, let’s design our prompt flow:

  1. Add an input node for product information
  2. Add a prompt node with the following template:
You are a professional product description writer.
Write a compelling product description for the following product:

Product Name: {{product_name}}
Product Category: {{product_category}}
Key Features: {{key_features}}

The description should be engaging, highlight the key features, and be approximately 200 words.
  3. Add an output node to return the generated description
  4. Connect the nodes to create a flow

Testing the Prompt Flow

Once you’ve created your prompt flow, you can test it:

  1. Click on the “Test” tab
  2. Enter test values for the inputs:
    • product_name: “EcoBoost Pro 5000”
    • product_category: “Smart Home Energy Monitor”
    • key_features: “Real-time energy tracking, AI-powered usage predictions, Smart device integration, Mobile app control”
  3. Click “Run” to see the generated product description

Exporting the Prompt Flow as an API

After testing and refining your prompt flow, you can deploy it as an API:

  1. Click on the “Deploy” button
  2. Select “Deploy as API”
  3. Fill in the deployment details:
    • Endpoint name: product-description-api
    • Compute: Select an appropriate compute option
  4. Click “Deploy”

Integrating Prompt Flow API in .NET

Now, let’s create a controller to interact with our deployed Prompt Flow API:

csharp

using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class ProductDescriptionController : ControllerBase
{
    private readonly HttpClient _httpClient;
    private readonly IConfiguration _configuration;
    
    public ProductDescriptionController(HttpClient httpClient, IConfiguration configuration)
    {
        _httpClient = httpClient;
        _configuration = configuration;
    }
    
    [HttpPost("generate" )]
    public async Task<IActionResult> GenerateDescription([FromBody] ProductDescriptionRequest request)
    {
        try
        {
            var promptFlowEndpoint = _configuration["PromptFlow:Endpoint"];
            var apiKey = _configuration["PromptFlow:ApiKey"];
            
            var requestBody = new
            {
                product_name = request.ProductName,
                product_category = request.ProductCategory,
                key_features = request.KeyFeatures
            };
            
            // Build a per-request message so we don't mutate the shared HttpClient's default headers.
            using var httpRequest = new HttpRequestMessage(HttpMethod.Post, promptFlowEndpoint)
            {
                Content = new StringContent(
                    JsonSerializer.Serialize(requestBody),
                    Encoding.UTF8,
                    "application/json")
            };
            httpRequest.Headers.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);
            
            var response = await _httpClient.SendAsync(httpRequest);
            response.EnsureSuccessStatusCode();
            
            var responseContent = await response.Content.ReadAsStringAsync();
            var responseObject = JsonSerializer.Deserialize<PromptFlowResponse>(responseContent);
            
            return Ok(new ProductDescriptionResponse { Description = responseObject.output });
        }
        catch (Exception ex)
        {
            return StatusCode(500, $"An error occurred: {ex.Message}");
        }
    }
}

public class ProductDescriptionRequest
{
    public string ProductName { get; set; }
    public string ProductCategory { get; set; }
    public string KeyFeatures { get; set; }
}

public class ProductDescriptionResponse
{
    public string Description { get; set; }
}

public class PromptFlowResponse
{
    public string output { get; set; }
}

Update your appsettings.json to include the Prompt Flow API details:

json

{
  "PromptFlow": {
    "Endpoint": "https://your-prompt-flow-endpoint.azurewebsites.net/api/v1/flows/product-description-api/invoke",
    "ApiKey": "your-api-key"
  }
}
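
One wiring detail: the controller above takes HttpClient from dependency injection, so make sure it is registered in Program.cs. The parameterless AddHttpClient() call wires up IHttpClientFactory and a default injectable HttpClient.

csharp

// In Program.cs: register HttpClient / IHttpClientFactory so it can be injected into controllers.
builder.Services.AddHttpClient();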

Building a RAG Application with Azure AI Studio

Let’s explore how to build a Retrieval-Augmented Generation (RAG) application using Azure AI Studio and integrate it with your .NET application.

Understanding RAG in Azure AI Studio

Azure AI Studio provides built-in support for RAG applications, including:

  • Document processing and chunking
  • Vector storage and search
  • Integration with large language models
  • End-to-end RAG workflows
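
Conceptually, the pattern is straightforward: embed the user’s question, retrieve the most relevant chunks from a vector index, and ground the model’s prompt in that retrieved context before generating an answer. The sketch below is purely illustrative; the three delegates are placeholders for whichever embedding model, vector index, and chat model you wire up, not calls from a specific SDK.

csharp

// Conceptual sketch of the RAG pattern (illustrative only).
public static class RagSketch
{
    public static async Task<string> AnswerAsync(
        string question,
        Func<string, Task<float[]>> embedAsync,                  // question -> embedding vector
        Func<float[], Task<IReadOnlyList<string>>> searchAsync,  // vector -> top matching chunks
        Func<string, Task<string>> completeAsync)                // grounded prompt -> answer
    {
        // 1. Embed the question and retrieve similar chunks from the index.
        float[] queryVector = await embedAsync(question);
        IReadOnlyList<string> chunks = await searchAsync(queryVector);

        // 2. Ground the prompt in the retrieved context.
        string prompt =
            "Use the following retrieved information to answer the question. " +
            "If the answer is not in the context, say so.\n\n" +
            string.Join("\n---\n", chunks) +
            $"\n\nQuestion: {question}";

        // 3. Generate the answer with the chat model.
        return await completeAsync(prompt);
    }
}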

Setting Up a Vector Store

First, let’s set up a vector store in Azure AI Studio:

  1. In your Azure AI Studio project, navigate to “Data” in the left sidebar
  2. Click “Create”
  3. Select “Vector index”
  4. Fill in the details:
    • Name: dotnet-documentation-index
    • Description: Vector index for .NET documentation
    • Azure AI Search resource: Create a new one or select an existing one
  5. Click “Create”

Ingesting Documents

Now, let’s ingest some documents into our vector store:

  1. In the vector index view, click on the “Ingest data” tab
  2. Select “Upload files”
  3. Upload some .NET documentation files (PDFs, Markdown, etc.)
  4. Configure the chunking settings (the sketch after this list shows what these settings mean):
    • Chunk size: 1000
    • Chunk overlap: 100
  5. Click “Ingest”
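
To make the chunking settings concrete: the chunk size controls how much text goes into each piece (measured in tokens or characters depending on the tool), and the overlap repeats the tail of one chunk at the start of the next so that a sentence isn’t cut off without context. Here is a rough character-based illustration, not the exact splitter Azure AI Studio uses:

csharp

// Rough character-based illustration of chunking with overlap (not Azure AI Studio's actual splitter).
static IEnumerable<string> Chunk(string text, int chunkSize = 1000, int overlap = 100)
{
    for (int start = 0; start < text.Length; start += chunkSize - overlap)
    {
        int length = Math.Min(chunkSize, text.Length - start);
        yield return text.Substring(start, length);

        if (start + length >= text.Length)
            yield break; // reached the last chunk
    }
}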

Creating a RAG Flow

Let’s create a RAG flow in Prompt Flow:

  1. In your Azure AI Studio project, navigate to “Prompt flow” in the left sidebar
  2. Click “Create”
  3. Select “RAG flow”
  4. Name your flow DotNetDocumentationRAG
  5. Click “Create”

Now, let’s configure our RAG flow:

  1. In the “Retrieve” step, connect to our dotnet-documentation-index vector store
  2. In the “Generate” step, use a prompt like:
You are a helpful AI assistant specializing in .NET development.
Use the following retrieved information to answer the user's question.
If you don't know the answer based on the retrieved information, say so.

Retrieved information:
{{retrieved_documents}}

User question: {{question}}

Answer:
  3. Save and test the flow with a sample question like “How do I use dependency injection in ASP.NET Core?”

Deploying the RAG Flow

After testing and refining your RAG flow, deploy it as an API:

  1. Click on the “Deploy” button
  2. Select “Deploy as API”
  3. Fill in the deployment details:
    • Endpoint name: dotnet-documentation-rag-api
    • Compute: Select an appropriate compute option
  4. Click “Deploy”