Class: Google::Cloud::Dialogflow::CX::V3::SafetySettings::RaiSettings
- Inherits: Object
- Extended by: Google::Protobuf::MessageExts::ClassMethods
- Includes: Google::Protobuf::MessageExts
- Defined in: proto_docs/google/cloud/dialogflow/cx/v3/safety_settings.rb
Overview
Settings for Responsible AI.
Defined Under Namespace
Modules: SafetyCategory, SafetyFilterLevel
Classes: CategoryFilter
Instance Attribute Summary
Instance Attribute Details
#category_filters ⇒ ::Array<::Google::Cloud::Dialogflow::CX::V3::SafetySettings::RaiSettings::CategoryFilter>
Returns Optional. RAI blocking configurations.
# File 'proto_docs/google/cloud/dialogflow/cx/v3/safety_settings.rb', line 63

class RaiSettings
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods

  # Configuration of the sensitivity level for blocking an RAI category.
  # @!attribute [rw] category
  #   @return [::Google::Cloud::Dialogflow::CX::V3::SafetySettings::RaiSettings::SafetyCategory]
  #     RAI category to configure.
  # @!attribute [rw] filter_level
  #   @return [::Google::Cloud::Dialogflow::CX::V3::SafetySettings::RaiSettings::SafetyFilterLevel]
  #     Blocking sensitivity level to configure for the RAI category.
  class CategoryFilter
    include ::Google::Protobuf::MessageExts
    extend ::Google::Protobuf::MessageExts::ClassMethods
  end

  # Sensitivity level for RAI categories.
  module SafetyFilterLevel
    # Unspecified -- uses default sensitivity levels.
    SAFETY_FILTER_LEVEL_UNSPECIFIED = 0

    # Block no text -- effectively disables the category.
    BLOCK_NONE = 1

    # Block a few suspicious texts.
    BLOCK_FEW = 2

    # Block some suspicious texts.
    BLOCK_SOME = 3

    # Block most suspicious texts.
    BLOCK_MOST = 4
  end

  # RAI categories to configure.
  module SafetyCategory
    # Unspecified.
    SAFETY_CATEGORY_UNSPECIFIED = 0

    # Dangerous content.
    DANGEROUS_CONTENT = 1

    # Hate speech.
    HATE_SPEECH = 2

    # Harassment.
    HARASSMENT = 3

    # Sexually explicit content.
    SEXUALLY_EXPLICIT_CONTENT = 4
  end
end
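For illustration, a minimal sketch of how a blocking configuration could be assembled from the documented enum values. The enums are mirrored here as plain Ruby hashes so the snippet runs without the gem; the `RaiSettings.new` call shown in the trailing comment assumes the `google-cloud-dialogflow-cx` gem and the standard protobuf Ruby constructor, and is not verified here.

```ruby
# The documented SafetyFilterLevel values, mirrored as a plain hash.
SAFETY_FILTER_LEVEL = {
  SAFETY_FILTER_LEVEL_UNSPECIFIED: 0,
  BLOCK_NONE: 1,
  BLOCK_FEW:  2,
  BLOCK_SOME: 3,
  BLOCK_MOST: 4
}.freeze

# The documented SafetyCategory values, mirrored as a plain hash.
SAFETY_CATEGORY = {
  SAFETY_CATEGORY_UNSPECIFIED: 0,
  DANGEROUS_CONTENT: 1,
  HATE_SPEECH: 2,
  HARASSMENT: 3,
  SEXUALLY_EXPLICIT_CONTENT: 4
}.freeze

# A hypothetical blocking configuration: one CategoryFilter-shaped hash
# per category, pairing a category with a sensitivity level.
category_filters = [
  { category: SAFETY_CATEGORY[:HATE_SPEECH],
    filter_level: SAFETY_FILTER_LEVEL[:BLOCK_MOST] },
  { category: SAFETY_CATEGORY[:DANGEROUS_CONTENT],
    filter_level: SAFETY_FILTER_LEVEL[:BLOCK_SOME] }
]

# With the google-cloud-dialogflow-cx gem installed, the corresponding
# protobuf message would be built roughly as (assumption, not verified):
#   rai = Google::Cloud::Dialogflow::CX::V3::SafetySettings::RaiSettings.new(
#     category_filters: category_filters
#   )
puts category_filters.length
```

Protobuf-generated Ruby messages generally accept keyword arguments (and hashes for nested messages), which is why a plain array of hashes is a reasonable stand-in for the repeated `CategoryFilter` field.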