Class: Google::Apis::DialogflowV2::GoogleCloudDialogflowV2InputAudioConfig
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in: lib/google/apis/dialogflow_v2/classes.rb,
  lib/google/apis/dialogflow_v2/representations.rb
Overview
Instructs the speech recognizer how to process the audio content.
Instance Attribute Summary
- #audio_encoding ⇒ String
  Required. Audio encoding of the audio content to process.
- #disable_no_speech_recognized_event ⇒ Boolean (also: #disable_no_speech_recognized_event?)
  Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent.
- #enable_automatic_punctuation ⇒ Boolean (also: #enable_automatic_punctuation?)
  Enable automatic punctuation option at the speech backend.
- #enable_word_info ⇒ Boolean (also: #enable_word_info?)
  If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets.
- #language_code ⇒ String
  Required. The language of the supplied audio.
- #model ⇒ String
  Optional. Which Speech model to select for the given request.
- #model_variant ⇒ String
  Which variant of the Speech model to use.
- #opt_out_conformer_model_migration ⇒ Boolean (also: #opt_out_conformer_model_migration?)
  If true, the request will opt out of the STT conformer model migration.
- #phrase_hints ⇒ Array<String>
  A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
- #sample_rate_hertz ⇒ Fixnum
  Required. Sample rate (in Hertz) of the audio content sent in the query.
- #single_utterance ⇒ Boolean (also: #single_utterance?)
  If false (default), recognition does not cease until the client closes the stream.
- #speech_contexts ⇒ Array<Google::Apis::DialogflowV2::GoogleCloudDialogflowV2SpeechContext>
  Context information to assist speech recognition.
Instance Method Summary
- #initialize(**args) ⇒ GoogleCloudDialogflowV2InputAudioConfig (constructor)
  A new instance of GoogleCloudDialogflowV2InputAudioConfig.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ GoogleCloudDialogflowV2InputAudioConfig
Returns a new instance of GoogleCloudDialogflowV2InputAudioConfig.
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 12004

def initialize(**args)
  update!(**args)
end
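A minimal usage sketch of the keyword-argument constructor, assuming typical values for the required fields; the specific encoding name, sample rate, and language code below are illustrative assumptions, not requirements of this class:

require "google/apis/dialogflow_v2"

# Illustrative values; audio_encoding takes one of the API's enum names
# ("AUDIO_ENCODING_LINEAR_16" is assumed here as an example).
config = Google::Apis::DialogflowV2::GoogleCloudDialogflowV2InputAudioConfig.new(
  audio_encoding:    "AUDIO_ENCODING_LINEAR_16",
  sample_rate_hertz: 16_000,
  language_code:     "en-US"
)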
Instance Attribute Details
#audio_encoding ⇒ String
Required. Audio encoding of the audio content to process.
Corresponds to the JSON property audioEncoding
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11912

def audio_encoding
  @audio_encoding
end
#disable_no_speech_recognized_event ⇒ Boolean Also known as: disable_no_speech_recognized_event?
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger the NO_SPEECH_RECOGNIZED event to the Dialogflow agent.
Corresponds to the JSON property disableNoSpeechRecognizedEvent
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11919

def disable_no_speech_recognized_event
  @disable_no_speech_recognized_event
end
#enable_automatic_punctuation ⇒ Boolean Also known as: enable_automatic_punctuation?
Enable automatic punctuation option at the speech backend.
Corresponds to the JSON property enableAutomaticPunctuation
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11925

def enable_automatic_punctuation
  @enable_automatic_punctuation
end
#enable_word_info ⇒ Boolean Also known as: enable_word_info?
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
Corresponds to the JSON property enableWordInfo
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11934

def enable_word_info
  @enable_word_info
end
#language_code ⇒ String
Required. The language of the supplied audio. Dialogflow does not do
translations. See Language Support for a list of the currently supported language codes. Note
that queries in the same session do not necessarily need to specify the same
language.
Corresponds to the JSON property languageCode
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11944

def language_code
  @language_code
end
#model ⇒ String
Optional. Which Speech model to select for the given request. For more
information, see Speech models.
Corresponds to the JSON property model
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11951

def model
  @model
end
#model_variant ⇒ String
Which variant of the Speech model to use.
Corresponds to the JSON property modelVariant
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11956

def model_variant
  @model_variant
end
#opt_out_conformer_model_migration ⇒ Boolean Also known as: opt_out_conformer_model_migration?
If true, the request will opt out of the STT conformer model migration. This field will be deprecated once the forced migration takes place in June 2024. Please refer to Dialogflow ES Speech model migration.
Corresponds to the JSON property optOutConformerModelMigration
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11964

def opt_out_conformer_model_migration
  @opt_out_conformer_model_migration
end
#phrase_hints ⇒ Array<String>
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
Corresponds to the JSON property phraseHints
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11975

def phrase_hints
  @phrase_hints
end
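A sketch of the migration suggested above, reusing the config object from the constructor sketch and assuming SpeechContext accepts a phrases list (that attribute is not documented in this section); both forms are intended to bias recognition toward the same phrases:

# Deprecated: phrase_hints as a flat list of strings.
config.phrase_hints = ["account balance", "wire transfer"]

# Preferred (assumed equivalent): a single additional SpeechContext.
config.speech_contexts = [
  Google::Apis::DialogflowV2::GoogleCloudDialogflowV2SpeechContext.new(
    phrases: ["account balance", "wire transfer"]
  )
]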
#sample_rate_hertz ⇒ Fixnum
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer
to Cloud Speech API documentation for more details.
Corresponds to the JSON property sampleRateHertz
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11982

def sample_rate_hertz
  @sample_rate_hertz
end
#single_utterance ⇒ Boolean Also known as: single_utterance?
If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
Corresponds to the JSON property singleUtterance
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11994

def single_utterance
  @single_utterance
end
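Because of the precedence note above, the flag can be set on this config rather than on the streaming request; a brief illustrative sketch:

# Streaming only: ask the recognizer to stop after a single utterance.
# Takes precedence over StreamingDetectIntentRequest.single_utterance.
config.single_utterance = true
config.single_utterance?  # => true (boolean alias for this attribute)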
#speech_contexts ⇒ Array<Google::Apis::DialogflowV2::GoogleCloudDialogflowV2SpeechContext>
Context information to assist speech recognition. See the Cloud Speech
documentation for more details.
Corresponds to the JSON property speechContexts
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 12002

def speech_contexts
  @speech_contexts
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 12009

def update!(**args)
  @audio_encoding = args[:audio_encoding] if args.key?(:audio_encoding)
  @disable_no_speech_recognized_event = args[:disable_no_speech_recognized_event] if args.key?(:disable_no_speech_recognized_event)
  @enable_automatic_punctuation = args[:enable_automatic_punctuation] if args.key?(:enable_automatic_punctuation)
  @enable_word_info = args[:enable_word_info] if args.key?(:enable_word_info)
  @language_code = args[:language_code] if args.key?(:language_code)
  @model = args[:model] if args.key?(:model)
  @model_variant = args[:model_variant] if args.key?(:model_variant)
  @opt_out_conformer_model_migration = args[:opt_out_conformer_model_migration] if args.key?(:opt_out_conformer_model_migration)
  @phrase_hints = args[:phrase_hints] if args.key?(:phrase_hints)
  @sample_rate_hertz = args[:sample_rate_hertz] if args.key?(:sample_rate_hertz)
  @single_utterance = args[:single_utterance] if args.key?(:single_utterance)
  @speech_contexts = args[:speech_contexts] if args.key?(:speech_contexts)
end
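A short usage sketch of update!: as the source above shows, only the keys present in the argument hash are overwritten, so it can be used for partial updates of an existing config (values shown are illustrative):

config.update!(language_code: "fr-FR")
config.language_code      # => "fr-FR"
config.sample_rate_hertz  # unchanged, since :sample_rate_hertz was not passed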