Ensure compatibility with multiple platforms, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core functionalities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

One possible way to capture microphone audio for this example is sketched at the end of this article.

Utilizing LeMUR for LLM Applications

The SDK integrates with LeMUR, enabling developers to build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

For more information, visit the official AssemblyAI blog.

Image source: Shutterstock
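A closing note on the real-time example above: the article leaves microphone capture as pseudocode. The sketch below shows one possible way to feed microphone audio into the RealtimeTranscriber using the NAudio library. NAudio itself, the 16-bit mono capture format, and the assumption that SendAudioAsync accepts a byte-array chunk are not part of the original example, so treat this as a minimal illustration rather than the SDK's prescribed approach.

using NAudio.Wave;

// Capture 16 kHz, 16-bit, mono PCM to match the SampleRate configured on the transcriber.
// NAudio and the SendAudioAsync(byte[]) call are assumptions, not taken from the article.
var waveIn = new WaveInEvent
{
    WaveFormat = new WaveFormat(16_000, 16, 1)
};

waveIn.DataAvailable += async (_, args) =>
{
    // Copy only the bytes actually recorded before handing them to the transcriber
    var chunk = new byte[args.BytesRecorded];
    Array.Copy(args.Buffer, chunk, args.BytesRecorded);
    await transcriber.SendAudioAsync(chunk);
};

waveIn.StartRecording();

// ... keep the application alive while audio streams, e.g. await Console.In.ReadLineAsync() ...

waveIn.StopRecording();
await transcriber.CloseAsync();

Here the DataAvailable event fires with raw PCM buffers as they are recorded; each buffer is trimmed to the recorded byte count and forwarded to the transcriber, mirroring the GetAudio pseudocode in the real-time section.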