I'm learning to build a Flutter/Firebase speech-to-text app using the google_speech ^2.2.0 package with the medical_dictation model. I created a service account on GCP, enabled the Speech-to-Text API, and saved the API key. I placed the key file in the Flutter project under 'assets/test_service_account.json' and the audio file under 'assets/audio/test.wav', and I added both paths to the pubspec.yaml file. But I'm getting this exception:
Unable to load asset: "assets/service_account_key.json" (The asset does not exist or has empty data.)
I get a similar exception for 'test.wav'.
Here is my code:
import 'package:flutter/services.dart' show rootBundle;
import 'package:flutter/material.dart';
import 'package:firebase_core/firebase_core.dart';
import 'package:google_speech/google_speech.dart' as gs;
import 'package:google_speech/speech_client_authenticator.dart';

String audioFilePath = 'assets/audio/test.wav';

Future<String> transcribeAudio(String audioFilePath) async {
  final serviceAccountData =
      await rootBundle.loadString('assets/service_account_key.json');
  final serviceAccount = ServiceAccount.fromString(serviceAccountData);

  final data = await rootBundle.load('assets/audio/test.wav');
  final audio = data.buffer.asUint8List();

  final config = gs.RecognitionConfig(
    encoding: gs.AudioEncoding.LINEAR16,
    model: gs.RecognitionModel.medical_dictation,
    enableAutomaticPunctuation: true,
    sampleRateHertz: 16000,
    languageCode: 'en-US',
  );

  final speechToText = gs.SpeechToText.viaServiceAccount(serviceAccount);
  final response = await speechToText.recognize(config, audio);
  final transcript = response.results
      .map((result) => result.alternatives.first.transcript)
      .join(' ');
  return transcript;
}

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await Firebase.initializeApp();
  String transcript = await transcribeAudio(audioFilePath);
  print('Transcript: $transcript');
}
1 Answer

icomxhvb1:
Issue resolved. The problem was the location and indentation of the assets/ entries in the pubspec.yaml file. Thanks!
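For anyone hitting the same error, here is a minimal sketch of how the assets section is expected to look in pubspec.yaml (the file names are taken from the question; adjust them to your project). The `assets:` key must sit nested under the top-level `flutter:` key, and each asset path must be indented one further level as a list item:

```yaml
flutter:
  # "assets:" must be nested under "flutter:" (indented two spaces),
  # not placed at the top level of pubspec.yaml.
  assets:
    # Each entry is indented a further level under "assets:".
    - assets/test_service_account.json
    - assets/audio/test.wav
```

After editing pubspec.yaml, run `flutter pub get` and restart the app (a hot reload may not pick up asset changes). Note also that the question's code loads 'assets/service_account_key.json' while the question says the key was saved as 'assets/test_service_account.json'; the file on disk, the path declared in pubspec.yaml, and the path passed to rootBundle must all match exactly.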