iOS: AVSpeechSynthesizer does not work after recording with SFSpeechRecognizer

Problem description:

I am building an app that does both text-to-speech and speech-to-text.

The problem I am running into is that text-to-speech with AVSpeechSynthesizer works fine on its own, but after I record and transcribe with SFSpeechRecognizer, text-to-speech stops working (i.e. nothing is spoken back).

I am also quite new to Swift. I took this code from a few different tutorials and tried to merge them together.

Here is my code:

import AVFoundation
import Speech

private var speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
private var recognitionTask: SFSpeechRecognitionTask?
private var audioEngine = AVAudioEngine()

@objc(speak:location:date:callback:)
func speak(name: String, location: String, date: NSNumber, _ callback: @escaping (NSObject) -> ()) {
    let utterance = AVSpeechUtterance(string: name)
    let synthesizer = AVSpeechSynthesizer()
    synthesizer.speak(utterance)
}

@available(iOS 10.0, *)
@objc(startListening:location:date:callback:)
func startListening(name: String, location: String, date: NSNumber, _ callback: @escaping (NSObject) -> ()) {
    if audioEngine.isRunning {
        audioEngine.stop()
        recognitionRequest?.endAudio()
    } else {
        if recognitionTask != nil { //1
            recognitionTask?.cancel()
            recognitionTask = nil
        }

        // Configure the shared audio session for recording.
        let audioSession = AVAudioSession.sharedInstance() //2
        do {
            try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
            try audioSession.setMode(AVAudioSessionModeMeasurement)
            try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        } catch {
            print("audioSession properties weren't set because of an error.")
        }

        recognitionRequest = SFSpeechAudioBufferRecognitionRequest() //3

        guard let inputNode = audioEngine.inputNode else {
            fatalError("Audio engine has no input node")
        } //4

        guard let recognitionRequest = recognitionRequest else {
            fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
        } //5

        recognitionRequest.shouldReportPartialResults = true //6

        recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in //7

            var isFinal = false //8

            if result != nil {
                print(result?.bestTranscription.formattedString) //9
                isFinal = (result?.isFinal)!
            }

            if error != nil || isFinal { //10
                // Stop the engine and tear down the request/task when done.
                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)

                self.recognitionRequest = nil
                self.recognitionTask = nil
            }
        })

        // Feed microphone buffers from the input node into the recognition request.
        let recordingFormat = inputNode.outputFormat(forBus: 0) //11
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }

        audioEngine.prepare() //12

        do {
            try audioEngine.start()
        } catch {
            print("audioEngine couldn't start because of an error.")
        }
    }
}

Where do you call the 'speak' function? –


Has your problem been solved, @Samuel Mendes? –

Answer:

They both use an AVAudioSession.

For AVSpeechSynthesizer I think it has to be set to:

_audioSession.SetCategory(AVAudioSessionCategory.Playback,
    AVAudioSessionCategoryOptions.MixWithOthers);

and for SFSpeechRecognizer:

_audioSession.SetCategory(AVAudioSessionCategory.PlayAndRecord,
    AVAudioSessionCategoryOptions.MixWithOthers);

Hope it helps.
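
In Swift, the same idea would look roughly like the sketch below. This is only an illustration of the answer's suggestion, not code from the question: the helper names are made up, and it uses the newer enum-based AVAudioSession API (setCategory(_:mode:options:)). The key point is to reconfigure the shared session for playback before handing an utterance to AVSpeechSynthesizer, because the recognizer code leaves it in a playAndRecord/measurement configuration.

import AVFoundation

// Hypothetical helper: call before AVSpeechSynthesizer.speak(_:),
// so the session is no longer in the recognizer's configuration.
func configureSessionForSpeaking() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, options: [.mixWithOthers])
    try session.setActive(true, options: .notifyOthersOnDeactivation)
}

// Hypothetical helper: call before starting the AVAudioEngine tap
// for SFSpeechRecognizer.
func configureSessionForRecognition() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .measurement, options: [.mixWithOthers])
    try session.setActive(true, options: .notifyOthersOnDeactivation)
}

With this approach, speak(...) would call try? configureSessionForSpeaking() before synthesizer.speak(utterance), and startListening(...) would call try? configureSessionForRecognition() in place of its current audio-session setup block.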