Developing Cross-Platform AI Applications with .NET MAUI and Azure AI

Summary: This post explores how to build cross-platform AI-powered applications using .NET MAUI and Azure AI services. Learn how to create intelligent mobile and desktop apps that leverage AI capabilities while maintaining a single codebase.

Introduction

The demand for intelligent applications that work seamlessly across multiple platforms continues to grow. For .NET developers, the combination of .NET Multi-platform App UI (.NET MAUI) and Azure AI services provides a powerful toolkit for building cross-platform AI-powered applications with a single codebase.

In this post, we’ll explore how to develop cross-platform AI applications using .NET MAUI and Azure AI services. We’ll cover everything from setting up your development environment to implementing various AI capabilities, including image recognition, text analysis, and conversational AI. By the end of this article, you’ll have the knowledge to build sophisticated AI-powered applications that run on iOS, Android, macOS, and Windows from a single codebase.

Understanding .NET MAUI and Azure AI

Before diving into implementation, let’s understand the key technologies we’ll be working with.

What is .NET MAUI?

.NET Multi-platform App UI (.NET MAUI) is Microsoft’s evolution of Xamarin.Forms, providing a framework for building native mobile and desktop apps with C# and XAML. Key features include:

  • Single codebase for iOS, Android, macOS, and Windows
  • Native UI controls and performance
  • Access to platform-specific APIs when needed
  • Integration with .NET 6+ ecosystem
  • Modern development patterns and tools

Azure AI Services Overview

Azure AI services provide a comprehensive suite of AI capabilities that can be integrated into applications:

  • Azure OpenAI Service: Access to advanced language models like GPT-4
  • Azure Cognitive Services: Pre-built AI capabilities spanning vision, speech, language, and decision-making
  • Azure AI Search: Intelligent search with semantic capabilities
  • Azure Bot Service: Framework for building conversational experiences
  • Azure Machine Learning: Platform for training and deploying custom ML models

Benefits of Combining .NET MAUI and Azure AI

The combination of .NET MAUI and Azure AI offers several advantages:

  1. Unified Development: Build once, deploy everywhere with a single codebase
  2. Native Performance: Pair cloud-hosted AI with native UI controls and performance on each platform
  3. Simplified Integration: Easy integration with Azure services through .NET SDKs
  4. Offline Capabilities: Support for on-device AI when connectivity is limited
  5. Consistent User Experience: Deliver consistent AI experiences across platforms

Setting Up Your Development Environment

Let’s start by setting up your development environment for .NET MAUI and Azure AI development.

Prerequisites

To follow along with this tutorial, you’ll need:

  • Visual Studio 2022 (17.8 or later, required for .NET 8) with the .NET MAUI workload installed
  • .NET 8 SDK
  • An Azure subscription
  • Android SDK and/or Xcode (depending on your target platforms)

Creating a New .NET MAUI Project

Let’s create a new .NET MAUI project:

  1. Open Visual Studio 2022
  2. Select “Create a new project”
  3. Search for “MAUI” and select “.NET MAUI App”
  4. Name your project “MauiAIDemo” and click “Create”
  5. Select .NET 8 as the target framework and click “Create”

Installing Required NuGet Packages

Add the necessary packages to your project:

xml

<!-- In your .csproj file -->
<ItemGroup>
    <PackageReference Include="Azure.AI.OpenAI" Version="1.0.0-beta.9" />
    <PackageReference Include="Microsoft.Azure.CognitiveServices.Vision.ComputerVision" Version="7.0.1" />
    <PackageReference Include="Microsoft.Azure.CognitiveServices.Language.TextAnalytics" Version="5.1.0" />
    <PackageReference Include="Microsoft.CognitiveServices.Speech" Version="1.31.0" />
    <PackageReference Include="CommunityToolkit.Mvvm" Version="8.2.1" />
    <PackageReference Include="CommunityToolkit.Maui" Version="6.0.0" />
</ItemGroup>

Setting Up Azure Services

Before we start coding, let’s set up the Azure services we’ll need:

  1. Azure OpenAI Service:
    • Go to the Azure portal and create a new Azure OpenAI resource
    • Deploy a model (e.g., GPT-4) and note the endpoint and API key
  2. Azure Computer Vision:
    • Create a new Computer Vision resource
    • Note the endpoint and API key
  3. Azure Text Analytics:
    • Create a new Language Service resource
    • Note the endpoint and API key
  4. Azure Speech Service:
    • Create a new Speech Service resource
    • Note the region and API key

Configuring App Settings

Let’s set up our app settings to store the Azure service credentials:

csharp

// Create a new file: AppSettings.cs
namespace MauiAIDemo
{
    public static class AppSettings
    {
        // Azure OpenAI
        public static string OpenAIEndpoint = "https://your-openai-resource.openai.azure.com/";
        public static string OpenAIKey = "your-openai-key";
        public static string OpenAIDeploymentName = "your-deployment-name";
        
        // Azure Computer Vision
        public static string ComputerVisionEndpoint = "https://your-vision-resource.cognitiveservices.azure.com/";
        public static string ComputerVisionKey = "your-vision-key";
        
        // Azure Text Analytics
        public static string TextAnalyticsEndpoint = "https://your-language-resource.cognitiveservices.azure.com/";
        public static string TextAnalyticsKey = "your-language-key";
        
        // Azure Speech Service
        public static string SpeechServiceRegion = "your-speech-region";
        public static string SpeechServiceKey = "your-speech-key";
    }
}

In a production application, never hardcode credentials like this. Store them in platform secure storage or retrieve them at runtime from a service such as Azure Key Vault.
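As one illustration of the secure-storage option, .NET MAUI's built-in SecureStorage API keeps secrets in the platform keychain (iOS Keychain, Android encrypted preferences, Windows DataProtection). The `SecretStore` class and key names below are illustrative, not part of the sample project; this is a minimal sketch:

```csharp
// Sketch: load credentials from MAUI SecureStorage instead of hardcoding them.
// The class name and key names here are illustrative.
using Microsoft.Maui.Storage;

namespace MauiAIDemo
{
    public static class SecretStore
    {
        public static async Task<string?> GetOpenAIKeyAsync()
        {
            // Returns null if the secret has not been provisioned yet.
            return await SecureStorage.Default.GetAsync("openai-key");
        }

        public static async Task SaveOpenAIKeyAsync(string key)
        {
            await SecureStorage.Default.SetAsync("openai-key", key);
        }
    }
}
```

You could provision the secrets on first launch (for example, from a settings page or a remote configuration service) and fall back to prompting the user if `GetAsync` returns null.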

Implementing Image Recognition with Azure Computer Vision

Let’s start by implementing image recognition capabilities using Azure Computer Vision.

Creating the Image Recognition Service

First, let’s create a service to handle image recognition:

csharp

// Create a new file: Services/ComputerVisionService.cs
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
using System.Text;

namespace MauiAIDemo.Services
{
    public class ComputerVisionService
    {
        private readonly ComputerVisionClient _client;
        
        public ComputerVisionService()
        {
            _client = new ComputerVisionClient(
                new ApiKeyServiceClientCredentials(AppSettings.ComputerVisionKey))
            {
                Endpoint = AppSettings.ComputerVisionEndpoint
            };
        }
        
        public async Task<string> AnalyzeImageAsync(Stream imageStream)
        {
            try
            {
                // Define visual features to analyze
                var features = new List<VisualFeatureTypes?>
                {
                    VisualFeatureTypes.Categories,
                    VisualFeatureTypes.Description,
                    VisualFeatureTypes.Tags,
                    VisualFeatureTypes.Objects,
                    VisualFeatureTypes.Faces
                };
                
                // Analyze the image
                var result = await _client.AnalyzeImageInStreamAsync(imageStream, features);
                
                // Build a description of the image
                var sb = new StringBuilder();
                
                // Add image description
                if (result.Description?.Captions.Count > 0)
                {
                    var caption = result.Description.Captions[0];
                    sb.AppendLine($"Description: {caption.Text} (Confidence: {caption.Confidence:P1})");
                    sb.AppendLine();
                }
                
                // Add tags
                if (result.Tags.Count > 0)
                {
                    sb.AppendLine("Tags:");
                    foreach (var tag in result.Tags.OrderByDescending(t => t.Confidence).Take(10))
                    {
                        sb.AppendLine($"- {tag.Name} (Confidence: {tag.Confidence:P1})");
                    }
                    sb.AppendLine();
                }
                
                // Add objects
                if (result.Objects.Count > 0)
                {
                    sb.AppendLine("Objects:");
                    foreach (var obj in result.Objects)
                    {
                        sb.AppendLine($"- {obj.ObjectProperty} (Confidence: {obj.Confidence:P1})");
                    }
                    sb.AppendLine();
                }
                
                // Add faces
                // Note: recent Computer Vision API versions no longer return age and
                // gender estimates, so face.Age may be zero depending on your API version.
                if (result.Faces.Count > 0)
                {
                    sb.AppendLine($"Detected {result.Faces.Count} faces:");
                    foreach (var face in result.Faces)
                    {
                        sb.AppendLine($"- Face at ({face.FaceRectangle.Left}, {face.FaceRectangle.Top}), size {face.FaceRectangle.Width}x{face.FaceRectangle.Height}");
                    }
                }
                
                return sb.ToString();
            }
            catch (Exception ex)
            {
                return $"Error analyzing image: {ex.Message}";
            }
        }
        
        public async Task<string> RecognizeTextAsync(Stream imageStream)
        {
            try
            {
                // Read text using the legacy OCR endpoint (printed text only);
                // the newer asynchronous Read API is recommended for most scenarios.
                var result = await _client.RecognizePrintedTextInStreamAsync(true, imageStream);
                
                // Extract text
                var sb = new StringBuilder();
                sb.AppendLine("Extracted Text:");
                
                foreach (var region in result.Regions)
                {
                    foreach (var line in region.Lines)
                    {
                        var lineText = string.Join(" ", line.Words.Select(w => w.Text));
                        sb.AppendLine(lineText);
                    }
                    sb.AppendLine();
                }
                
                return sb.ToString();
            }
            catch (Exception ex)
            {
                return $"Error recognizing text: {ex.Message}";
            }
        }
    }
}
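The `RecognizePrintedTextInStreamAsync` call above targets the legacy OCR endpoint. The same v7 SDK also exposes the newer asynchronous Read API, which handles both printed and handwritten text. A hedged sketch of an alternative method for `ComputerVisionService` (the polling interval and method name are illustrative):

```csharp
// Sketch: OCR via the asynchronous Read API (ComputerVisionClient v7 SDK).
// ReadInStreamAsync starts the operation; the result is then polled by operation ID.
public async Task<string> ReadTextAsync(Stream imageStream)
{
    var headers = await _client.ReadInStreamAsync(imageStream);

    // The operation ID is the trailing GUID of the Operation-Location URL.
    string operationLocation = headers.OperationLocation;
    string operationId = operationLocation.Substring(operationLocation.Length - 36);

    ReadOperationResult result;
    do
    {
        await Task.Delay(500); // illustrative polling interval
        result = await _client.GetReadResultAsync(Guid.Parse(operationId));
    }
    while (result.Status == OperationStatusCodes.Running ||
           result.Status == OperationStatusCodes.NotStarted);

    // Flatten the recognized lines across all pages.
    var sb = new StringBuilder();
    foreach (var page in result.AnalyzeResult.ReadResults)
        foreach (var line in page.Lines)
            sb.AppendLine(line.Text);
    return sb.ToString();
}
```

The trade-off is latency: the Read API is asynchronous and requires polling, so the legacy endpoint can feel snappier for simple printed text, but the Read API is more accurate and also covers handwriting.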

Creating the Image Recognition View Model

Next, let’s create a view model for our image recognition page:

csharp

// Create a new file: ViewModels/ImageRecognitionViewModel.cs
using CommunityToolkit.Mvvm.ComponentModel;
using CommunityToolkit.Mvvm.Input;
using MauiAIDemo.Services;

namespace MauiAIDemo.ViewModels
{
    public partial class ImageRecognitionViewModel : ObservableObject
    {
        private readonly ComputerVisionService _visionService;
        
        [ObservableProperty]
        private string _imageSource;
        
        [ObservableProperty]
        private string _analysisResult;
        
        [ObservableProperty]
        private bool _isAnalyzing;
        
        [ObservableProperty]
        private bool _isImageSelected;
        
        public ImageRecognitionViewModel(ComputerVisionService visionService)
        {
            _visionService = visionService;
            IsImageSelected = false;
            AnalysisResult = "Select an image to analyze.";
        }
        
        [RelayCommand]
        private Task PickAndAnalyzeImage() =>
            ProcessImageAsync(() => MediaPicker.PickPhotoAsync(), "Analyzing image...", _visionService.AnalyzeImageAsync);
        
        [RelayCommand]
        private Task PickAndRecognizeText() =>
            ProcessImageAsync(() => MediaPicker.PickPhotoAsync(), "Recognizing text...", _visionService.RecognizeTextAsync);
        
        [RelayCommand]
        private Task CaptureAndAnalyzeImage() =>
            ProcessImageAsync(() => MediaPicker.CapturePhotoAsync(), "Analyzing image...", _visionService.AnalyzeImageAsync);
        
        // Shared pipeline: obtain a photo (gallery pick or camera capture), display it,
        // run the requested analysis, and surface the result or any error.
        private async Task ProcessImageAsync(
            Func<Task<FileResult>> getPhoto,
            string statusMessage,
            Func<Stream, Task<string>> analyze)
        {
            try
            {
                var photo = await getPhoto();
                if (photo == null)
                    return;
                
                // Display the selected image and show progress
                ImageSource = photo.FullPath;
                IsImageSelected = true;
                AnalysisResult = statusMessage;
                IsAnalyzing = true;
                
                // Run the analysis and display the result
                using var stream = await photo.OpenReadAsync();
                AnalysisResult = await analyze(stream);
            }
            catch (Exception ex)
            {
                AnalysisResult = $"Error: {ex.Message}";
            }
            finally
            {
                IsAnalyzing = false;
            }
        }
    }
}
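One piece the walkthrough doesn't show: ImageRecognitionViewModel receives ComputerVisionService through its constructor, so both must be registered with MAUI's dependency injection container. A minimal sketch of MauiProgram.cs follows; the commented-out page registration is hypothetical, and the lifetimes shown (singleton service, transient view model) are a reasonable default rather than the only option:

```csharp
// MauiProgram.cs: register the vision service and view model so that
// constructor injection can resolve them at runtime.
using MauiAIDemo.Services;
using MauiAIDemo.ViewModels;

namespace MauiAIDemo
{
    public static class MauiProgram
    {
        public static MauiApp CreateMauiApp()
        {
            var builder = MauiApp.CreateBuilder();
            builder.UseMauiApp<App>();

            builder.Services.AddSingleton<ComputerVisionService>();
            builder.Services.AddTransient<ImageRecognitionViewModel>();
            // Hypothetical page that binds to the view model:
            // builder.Services.AddTransient<ImageRecognitionPage>();

            return builder.Build();
        }
    }
}
```

With this in place, a page registered in the container can simply declare the view model as a constructor parameter and assign it to its BindingContext.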