Class: OpenAI::Models::Audio::TranscriptionCreateParams::ChunkingStrategy::VadConfig
- Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Audio::TranscriptionCreateParams::ChunkingStrategy::VadConfig
- Defined in: lib/openai/models/audio/transcription_create_params.rb
Defined Under Namespace
Modules: Type
Instance Attribute Summary
- #prefix_padding_ms ⇒ Integer?
  Amount of audio to include before the VAD detected speech (in milliseconds).
- #silence_duration_ms ⇒ Integer?
  Duration of silence to detect speech stop (in milliseconds).
- #threshold ⇒ Float?
  Sensitivity threshold (0.0 to 1.0) for voice activity detection.
- #type ⇒ Symbol, OpenAI::Models::Audio::TranscriptionCreateParams::ChunkingStrategy::VadConfig::Type
  Must be set to `server_vad` to enable manual chunking using server-side VAD.
Instance Method Summary
- #initialize(type:, prefix_padding_ms: nil, silence_duration_ms: nil, threshold: nil) ⇒ Object
  constructor
  Some parameter documentation has been truncated; see VadConfig for more details.
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, #inspect, inspect, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(type:, prefix_padding_ms: nil, silence_duration_ms: nil, threshold: nil) ⇒ Object
Some parameter documentation has been truncated; see OpenAI::Models::Audio::TranscriptionCreateParams::ChunkingStrategy::VadConfig for more details.
# File 'lib/openai/models/audio/transcription_create_params.rb', line 180
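A minimal construction sketch based on the signature above; the attribute values are illustrative, not defaults:

  vad_config = OpenAI::Models::Audio::TranscriptionCreateParams::ChunkingStrategy::VadConfig.new(
    type: :server_vad,        # required; enables server-side VAD chunking
    prefix_padding_ms: 300,   # keep 300 ms of audio before detected speech
    silence_duration_ms: 500, # close a chunk after 500 ms of silence
    threshold: 0.5            # mid-range VAD sensitivity
  )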
Instance Attribute Details
#prefix_padding_ms ⇒ Integer?
Amount of audio to include before the VAD detected speech (in milliseconds).
# File 'lib/openai/models/audio/transcription_create_params.rb', line 162
optional :prefix_padding_ms, Integer
#silence_duration_ms ⇒ Integer?
Duration of silence to detect speech stop (in milliseconds). With shorter values the model will respond more quickly, but may jump in on short pauses from the user.
# File 'lib/openai/models/audio/transcription_create_params.rb', line 170
optional :silence_duration_ms, Integer
#threshold ⇒ Float?
Sensitivity threshold (0.0 to 1.0) for voice activity detection. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.
# File 'lib/openai/models/audio/transcription_create_params.rb', line 178
optional :threshold, Float
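An illustrative contrast between a responsive and a conservative tuning of the two attributes above (hypothetical values, not recommendations):

  # Responsive: chunks close quickly but may cut in on brief pauses
  responsive_vad = { type: :server_vad, silence_duration_ms: 200, threshold: 0.3 }

  # Conservative: waits out pauses and ignores quieter background speech
  conservative_vad = { type: :server_vad, silence_duration_ms: 900, threshold: 0.8 }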
#type ⇒ Symbol, OpenAI::Models::Audio::TranscriptionCreateParams::ChunkingStrategy::VadConfig::Type
Must be set to `server_vad` to enable manual chunking using server-side VAD.
# File 'lib/openai/models/audio/transcription_create_params.rb', line 153
required :type,
         enum: -> { OpenAI::Audio::TranscriptionCreateParams::ChunkingStrategy::VadConfig::Type }
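A usage sketch, assuming the gem's standard client entry point (OpenAI::Client with audio.transcriptions.create) and that chunking-strategy hashes are coerced into this model; the file path and model name are placeholders:

  client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

  transcription = client.audio.transcriptions.create(
    file: Pathname("meeting.mp3"),  # placeholder audio file
    model: "gpt-4o-transcribe",     # placeholder model name
    chunking_strategy: {
      type: :server_vad,            # required to enable server-side VAD chunking
      prefix_padding_ms: 300,       # hypothetical tuning values
      silence_duration_ms: 700,
      threshold: 0.8
    }
  )

Passing the VadConfig instance from the constructor sketch in place of the hash should be equivalent, on the assumption that request params accept either form via the coercion methods listed under Internal::Type::Converter.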