Summary: This post introduces the newly GA Azure OpenAI Service and demonstrates how to integrate it into .NET applications, covering authentication, basic API calls, and best practices for getting started with AI-powered features.
Introduction
The AI landscape changed dramatically this week with Microsoft’s announcement of Azure OpenAI Service general availability on January 17, 2023. This service brings the power of OpenAI’s large language models like GPT-3.5, Codex, and DALL-E 2 to Azure, with added enterprise-grade security, compliance features, and the reliability developers expect from Azure services.
For .NET developers, this represents an exciting opportunity to integrate advanced AI capabilities into applications with the security and scalability of the Azure ecosystem. In this post, we’ll explore how to get started with Azure OpenAI Service in .NET applications, from setting up the necessary resources to making your first API calls.
Setting Up Azure OpenAI Service
Before writing any code, you’ll need to set up Azure OpenAI Service in your Azure subscription. Currently, access requires an application process due to high demand and Microsoft’s responsible AI framework.
- Apply for access through the Azure OpenAI Service request form
- Once approved, create an Azure OpenAI resource in the Azure portal
- Deploy a model (like text-davinci-003 or gpt-35-turbo) through Azure OpenAI Studio
- Note your endpoint URL and API key for authentication
Installing the Required Packages
To interact with Azure OpenAI Service from your .NET application, you’ll need the Azure OpenAI client library. Add it to your project using NuGet:
```shell
dotnet add package Azure.AI.OpenAI --prerelease
```
This package provides a convenient .NET client for the Azure OpenAI REST API, handling authentication, serialization, and other low-level concerns.
Authentication and Configuration
Let’s start by setting up the client with proper authentication. Create a new console application and add the following code:
```csharp
using Azure;
using Azure.AI.OpenAI;
using System;
using System.Threading.Tasks;

namespace AzureOpenAIDemo
{
    class Program
    {
        // Store these in configuration, not in code
        private static string endpoint = "https://your-resource-name.openai.azure.com/";
        private static string key = "your-api-key";
        private static string deploymentName = "your-deployment-name"; // e.g., "text-davinci-003"

        static async Task Main(string[] args)
        {
            // Initialize the Azure OpenAI client
            OpenAIClient client = new OpenAIClient(
                new Uri(endpoint),
                new AzureKeyCredential(key));

            // Ready to make API calls
            Console.WriteLine("Azure OpenAI client initialized successfully!");
        }
    }
}
```
In a production environment, you should store the endpoint, key, and deployment name in Azure Key Vault or another secure configuration system, not hardcoded in your application.
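As a minimal sketch of that advice, the secrets can at least be pulled from environment variables at startup instead of living in source. The variable names below are illustrative choices, not an Azure convention:

```csharp
using System;

static class OpenAIConfig
{
    // Variable names are hypothetical; use whatever your deployment pipeline sets
    public static string Endpoint =>
        Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
        ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set");

    public static string Key =>
        Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")
        ?? throw new InvalidOperationException("AZURE_OPENAI_KEY is not set");
}
```

Failing fast with an exception when a variable is missing beats silently calling the service with an empty key and debugging a 401 later.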
Making Your First Completion Request
Now let’s make a simple completion request to generate text based on a prompt:
```csharp
static async Task Main(string[] args)
{
    // Initialize the Azure OpenAI client
    OpenAIClient client = new OpenAIClient(
        new Uri(endpoint),
        new AzureKeyCredential(key));

    // Create completion options
    CompletionsOptions completionsOptions = new CompletionsOptions()
    {
        Prompts = { "Write a short poem about coding in C#" },
        MaxTokens = 100,
        Temperature = 0.7f,
        DeploymentName = deploymentName
    };

    // Send the request
    Response<Completions> completionsResponse = await client.GetCompletionsAsync(completionsOptions);

    // Process the response
    Completions completions = completionsResponse.Value;
    Console.WriteLine("Generated text:");
    Console.WriteLine(completions.Choices[0].Text.Trim());
}
```
When you run this code, you should see a poem about C# generated by the AI model. The parameters in CompletionsOptions control various aspects of the generation:
- Prompts: The input text that the model will complete
- MaxTokens: The maximum length of the generated text
- Temperature: Controls randomness (higher values = more creative, lower values = more deterministic)
- DeploymentName: The name of your model deployment in Azure OpenAI Service
Working with Chat Models
If you’ve deployed a chat model like gpt-35-turbo, you’ll use a slightly different approach:
```csharp
static async Task Main(string[] args)
{
    // Initialize the Azure OpenAI client
    OpenAIClient client = new OpenAIClient(
        new Uri(endpoint),
        new AzureKeyCredential(key));

    // Create chat completion options
    ChatCompletionsOptions chatCompletionsOptions = new ChatCompletionsOptions()
    {
        Messages =
        {
            new ChatMessage(ChatRole.System, "You are a helpful assistant that provides information about C# programming."),
            new ChatMessage(ChatRole.User, "What are the new features in C# 11?")
        },
        MaxTokens = 150,
        Temperature = 0.7f,
        DeploymentName = "gpt-35-turbo" // Your chat model deployment name
    };

    // Send the request
    Response<ChatCompletions> chatCompletionsResponse = await client.GetChatCompletionsAsync(chatCompletionsOptions);

    // Process the response
    ChatCompletions chatCompletions = chatCompletionsResponse.Value;
    Console.WriteLine("Assistant's response:");
    Console.WriteLine(chatCompletions.Choices[0].Message.Content);
}
```
The chat API allows for more structured conversations with system messages (instructions to the AI), user messages (input from the user), and assistant messages (previous responses from the AI).
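One practical use of assistant messages is few-shot priming: you seed the conversation with an example exchange you wrote yourself, so the model imitates its style and length. A hypothetical sketch (the deployment name and message contents are placeholders):

```csharp
// Seed the conversation with a worked example so the model copies
// its one-sentence answer style (few-shot priming)
ChatCompletionsOptions primedOptions = new ChatCompletionsOptions()
{
    DeploymentName = "gpt-35-turbo",
    MaxTokens = 100,
    Messages =
    {
        new ChatMessage(ChatRole.System, "You answer .NET questions in one sentence."),
        // Example turn written by us, not generated by the model:
        new ChatMessage(ChatRole.User, "What is LINQ?"),
        new ChatMessage(ChatRole.Assistant, "LINQ is a set of C# language features for querying data in a declarative style."),
        // The real question:
        new ChatMessage(ChatRole.User, "What is async/await?")
    }
};
```

The model has no memory of its own; it simply sees the assistant message as something "it" said and continues in the same voice.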
Error Handling and Best Practices
When working with AI services, robust error handling is essential. Here’s an improved version of our code with proper error handling:
```csharp
static async Task Main(string[] args)
{
    try
    {
        // Initialize the Azure OpenAI client
        OpenAIClient client = new OpenAIClient(
            new Uri(endpoint),
            new AzureKeyCredential(key));

        // Create completion options
        CompletionsOptions completionsOptions = new CompletionsOptions()
        {
            Prompts = { "Write a short poem about coding in C#" },
            MaxTokens = 100,
            Temperature = 0.7f,
            DeploymentName = deploymentName
        };

        // Send the request
        Response<Completions> completionsResponse = await client.GetCompletionsAsync(completionsOptions);

        // Process the response
        Completions completions = completionsResponse.Value;
        Console.WriteLine("Generated text:");
        Console.WriteLine(completions.Choices[0].Text.Trim());
    }
    catch (RequestFailedException ex) when (ex.Status == 401)
    {
        Console.WriteLine("Authentication error: Please check your API key and endpoint.");
    }
    catch (RequestFailedException ex) when (ex.Status == 400)
    {
        Console.WriteLine($"Bad request: {ex.Message}");
    }
    catch (RequestFailedException ex)
    {
        Console.WriteLine($"Azure OpenAI service error: {ex.Message}");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Unexpected error: {ex.Message}");
    }
}
```
Additional best practices to consider:
- Implement retry logic for transient failures using the Azure SDK’s built-in retry policies
- Monitor token usage to control costs and stay within your quota limits
- Cache responses when appropriate to reduce API calls
- Validate user inputs before sending them to the API
- Implement content filtering for user-facing applications
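To make the first two of these concrete, here is a hedged sketch: OpenAIClientOptions inherits the Azure SDK's retry settings, and the completions response carries a Usage object with token counts. Property names are as in the beta SDK, so verify them against the version you install:

```csharp
// Tune the built-in Azure SDK retry policy for transient failures
OpenAIClientOptions clientOptions = new OpenAIClientOptions();
clientOptions.Retry.MaxRetries = 5;                          // default is 3
clientOptions.Retry.Delay = TimeSpan.FromSeconds(1);         // initial backoff
clientOptions.Retry.Mode = Azure.Core.RetryMode.Exponential; // back off exponentially

OpenAIClient client = new OpenAIClient(
    new Uri(endpoint),
    new AzureKeyCredential(key),
    clientOptions);

// After a request, log token usage to watch costs and quota
Response<Completions> completionsResponse = await client.GetCompletionsAsync(completionsOptions);
CompletionsUsage usage = completionsResponse.Value.Usage;
Console.WriteLine($"Prompt: {usage.PromptTokens}, " +
                  $"Completion: {usage.CompletionTokens}, " +
                  $"Total: {usage.TotalTokens}");
```

This snippet assumes the endpoint, key, and completionsOptions variables from the earlier examples are in scope.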
Building a Simple Q&A Application
Let’s put everything together to build a simple Q&A console application:
```csharp
using Azure;
using Azure.AI.OpenAI;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace AzureOpenAIQnA
{
    class Program
    {
        private static string endpoint = "https://your-resource-name.openai.azure.com/";
        private static string key = "your-api-key";
        private static string deploymentName = "gpt-35-turbo"; // Your chat model deployment

        static async Task Main(string[] args)
        {
            try
            {
                // Initialize the Azure OpenAI client
                OpenAIClient client = new OpenAIClient(
                    new Uri(endpoint),
                    new AzureKeyCredential(key));

                Console.WriteLine("Azure OpenAI Q&A Bot (type 'exit' to quit)");
                Console.WriteLine("------------------------------------------");

                // Create a conversation with a system message
                var messages = new List<ChatMessage>
                {
                    new ChatMessage(ChatRole.System, "You are a helpful assistant that provides concise answers about .NET development.")
                };

                while (true)
                {
                    // Get user input
                    Console.Write("\nYour question: ");
                    string userInput = Console.ReadLine();

                    if (string.IsNullOrEmpty(userInput) || userInput.ToLower() == "exit")
                        break;

                    // Add user message to conversation
                    messages.Add(new ChatMessage(ChatRole.User, userInput));

                    // Create chat completion options; Messages is a read-only
                    // collection, so copy the conversation history into it
                    ChatCompletionsOptions chatCompletionsOptions = new ChatCompletionsOptions()
                    {
                        MaxTokens = 150,
                        Temperature = 0.7f,
                        DeploymentName = deploymentName
                    };
                    foreach (ChatMessage message in messages)
                    {
                        chatCompletionsOptions.Messages.Add(message);
                    }

                    // Send the request
                    Response<ChatCompletions> chatCompletionsResponse =
                        await client.GetChatCompletionsAsync(chatCompletionsOptions);

                    // Process the response
                    ChatCompletions chatCompletions = chatCompletionsResponse.Value;
                    string assistantResponse = chatCompletions.Choices[0].Message.Content;

                    // Display the response
                    Console.WriteLine($"\nAssistant: {assistantResponse}");

                    // Add assistant response to conversation history
                    messages.Add(new ChatMessage(ChatRole.Assistant, assistantResponse));
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Error: {ex.Message}");
            }
        }
    }
}
```
This application maintains a conversation history, allowing for more contextual interactions with the AI assistant.
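One caveat: the message list grows without bound, and every past message counts against the model's context window and your token bill. A simple, hypothetical trimming strategy is to keep the system message plus only the most recent turns:

```csharp
using System.Collections.Generic;

static class ConversationHistory
{
    // Keep the system message plus the last N user/assistant turns so the
    // prompt stays within the model's context window (maxTurns is a tuning choice)
    public static void Trim(List<ChatMessage> messages, int maxTurns = 10)
    {
        // messages[0] is the system message; each turn is a user/assistant pair
        int maxMessages = 1 + (maxTurns * 2);
        while (messages.Count > maxMessages)
        {
            messages.RemoveAt(1); // drop the oldest non-system message
        }
    }
}
```

Calling Trim(messages) at the end of each loop iteration keeps the conversation bounded while preserving the assistant's instructions; fancier schemes (summarizing old turns, counting actual tokens) can come later.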
Conclusion
Azure OpenAI Service brings powerful AI capabilities to .NET developers with the enterprise-grade security and compliance features of Azure. In this post, we’ve covered the basics of getting started with the service, from setting up resources to building a simple Q&A application.
As you continue exploring Azure OpenAI Service, consider how these AI capabilities can enhance your applications. Whether you’re building chatbots, content generation tools, or data analysis systems, the combination of .NET and Azure OpenAI Service provides a robust foundation for AI-powered features.
In future posts, we’ll dive deeper into advanced topics like prompt engineering, fine-tuning models, and integrating Azure OpenAI Service with other Azure services for comprehensive AI solutions.