Class: Aws::TranscribeStreamingService::Types::StartMedicalStreamTranscriptionRequest
- Inherits: Struct
  - Object
  - Struct
  - Aws::TranscribeStreamingService::Types::StartMedicalStreamTranscriptionRequest
- Includes: Structure
- Defined in: lib/aws-sdk-transcribestreamingservice/types.rb
Overview
Constant Summary
- SENSITIVE = []
Instance Attribute Summary
- #audio_stream ⇒ Types::AudioStream
  An encoded stream of audio blobs.
- #content_identification_type ⇒ String
  Labels all personal health information (PHI) identified in your transcript.
- #enable_channel_identification ⇒ Boolean
  Enables channel identification in multi-channel audio.
- #language_code ⇒ String
  Specify the language code that represents the language spoken in your audio.
- #media_encoding ⇒ String
  Specify the encoding used for the input audio.
- #media_sample_rate_hertz ⇒ Integer
  The sample rate of the input audio (in hertz).
- #number_of_channels ⇒ Integer
  Specify the number of channels in your audio stream.
- #session_id ⇒ String
  Specify a name for your transcription session.
- #show_speaker_label ⇒ Boolean
  Enables speaker partitioning (diarization) in your transcription output.
- #specialty ⇒ String
  Specify the medical specialty contained in your audio.
- #type ⇒ String
  Specify the type of input audio.
- #vocabulary_name ⇒ String
  Specify the name of the custom vocabulary that you want to use when processing your transcription.
Instance Attribute Details
#audio_stream ⇒ Types::AudioStream
An encoded stream of audio blobs. Audio streams are encoded as either HTTP/2 or WebSocket data frames.
For more information, see [Transcribing streaming audio][1].

[1]: https://docs.aws.amazon.com/transcribe/latest/dg/streaming.html
# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1320

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end
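The audio blobs themselves are plain byte chunks; how you produce them is up to your application. A minimal sketch of splitting raw audio bytes into fixed-size blobs (the 3,200-byte chunk size is an illustrative choice, roughly 100 ms of 16 kHz mono 16-bit PCM, not an SDK requirement):

```ruby
# Split raw audio bytes into fixed-size blobs for streaming as audio
# events. The chunk size here is illustrative, not an SDK requirement.
def audio_blobs(bytes, chunk_size: 3200)
  (0...bytes.bytesize).step(chunk_size).map do |offset|
    bytes.byteslice(offset, chunk_size)
  end
end

blobs = audio_blobs("\x00".b * 8000)
# 8,000 bytes in 3,200-byte chunks => blobs of 3200, 3200, and 1600 bytes
```

Consult the SDK's `AudioStream` event API for how each blob is actually sent on the wire.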
#content_identification_type ⇒ String
Labels all personal health information (PHI) identified in your transcript.
Content identification is performed at the segment level; PHI is flagged upon complete transcription of an audio segment.
For more information, see [Identifying personal health information (PHI) in a transcription].
#enable_channel_identification ⇒ Boolean
Enables channel identification in multi-channel audio.
Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.
If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript is not separated by channel.
For more information, see [Transcribing multi-channel audio][1].

[1]: https://docs.aws.amazon.com/transcribe/latest/dg/channel-id.html
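Conceptually, two-channel PCM interleaves one sample per channel; channel identification transcribes each resulting stream independently. A pure-Ruby sketch of that split, for illustration only:

```ruby
# Deinterleave two-channel signed 16-bit little-endian PCM into the two
# per-channel sample streams that channel identification transcribes
# independently.
def split_channels(stereo_pcm)
  samples = stereo_pcm.unpack('s<*') # one left, one right, repeating
  [samples.each_slice(2).map(&:first), samples.each_slice(2).map(&:last)]
end

left, right = split_channels([100, -100, 200, -200].pack('s<*'))
# left => [100, 200], right => [-100, -200]
```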
#language_code ⇒ String
Specify the language code that represents the language spoken in your audio.
Amazon Transcribe Medical only supports US English (`en-US`).
#media_encoding ⇒ String
Specify the encoding used for the input audio. Supported formats are:

- FLAC
- OPUS-encoded audio in an Ogg container
- PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

For more information, see [Media formats][1].

[1]: https://docs.aws.amazon.com/transcribe/latest/dg/how-input.html#how-input-audio
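"Signed 16-bit little-endian PCM" maps directly onto Ruby's `Array#pack` with the `s<` directive, so a chunk of samples can be serialized for the stream like this (a sketch, not SDK code):

```ruby
# Serialize signed 16-bit samples as little-endian PCM: two bytes per
# sample, least-significant byte first.
samples   = [0, 1, -1, 32_767, -32_768]
pcm_bytes = samples.pack('s<*')

pcm_bytes.bytesize      # => 10 (2 bytes per sample)
pcm_bytes.unpack('s<*') # round-trips to the original samples
```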
#media_sample_rate_hertz ⇒ Integer
The sample rate of the input audio (in hertz). Amazon Transcribe Medical supports a range from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.
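For sizing buffers and chunks, the byte rate of PCM audio follows directly from the sample rate: sample_rate × channels × bytes per sample. A sketch, assuming the 16-bit (2-byte) samples of the PCM format above:

```ruby
# Bytes per second of PCM audio: one sample per channel per tick of the
# sample rate, two bytes per 16-bit sample.
def pcm_bytes_per_second(sample_rate_hz, channels: 1, bytes_per_sample: 2)
  sample_rate_hz * channels * bytes_per_sample
end

pcm_bytes_per_second(16_000)               # => 32_000 (16 kHz mono)
pcm_bytes_per_second(48_000, channels: 2)  # => 192_000 (48 kHz stereo)
```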
#number_of_channels ⇒ Integer
Specify the number of channels in your audio stream. Up to two channels are supported.
#session_id ⇒ String
Specify a name for your transcription session. If you don’t include this parameter in your request, Amazon Transcribe Medical generates an ID and returns it in the response.
You can use a session ID to retry a streaming session.
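If you supply your own session ID rather than letting the service generate one, a UUID is a natural format; whether the service requires a UUID specifically is not stated here, so treat that as an assumption. Ruby's stdlib generates one:

```ruby
require 'securerandom'

# A client-generated session ID; reusing the same ID lets you retry a
# streaming session. UUID format is an assumption, not a stated rule.
session_id = SecureRandom.uuid
```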
#show_speaker_label ⇒ Boolean
Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.
For more information, see [Partitioning speakers (diarization)][1].

[1]: https://docs.aws.amazon.com/transcribe/latest/dg/diarization.html
#specialty ⇒ String
Specify the medical specialty contained in your audio.
#type ⇒ String
Specify the type of input audio. For example, choose `DICTATION` for a provider dictating patient notes and `CONVERSATION` for a dialogue between a patient and a medical professional.
#vocabulary_name ⇒ String
Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.