# frozen_string_literal: true # Copyright 2021 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Auto-generated by gapic-generator-ruby. DO NOT EDIT! module Google module Cloud module Dialogflow module CX module V3 # Stores information about feedback provided by users about a response. # @!attribute [rw] rating # @return [::Google::Cloud::Dialogflow::CX::V3::AnswerFeedback::Rating] # Optional. Rating from user for the specific Dialogflow response. # @!attribute [rw] rating_reason # @return [::Google::Cloud::Dialogflow::CX::V3::AnswerFeedback::RatingReason] # Optional. In case of thumbs down rating provided, users can optionally # provide context about the rating. # @!attribute [rw] custom_rating # @return [::String] # Optional. Custom rating from the user about the provided answer, with # maximum length of 1024 characters. For example, client could use a # customized JSON object to indicate the rating. class AnswerFeedback include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods # Stores extra information about why users provided thumbs down rating. # @!attribute [rw] reason_labels # @return [::Array<::String>] # Optional. Custom reason labels for thumbs down rating provided by the # user. The maximum number of labels allowed is 10 and the maximum length # of a single label is 128 characters. # @!attribute [rw] feedback # @return [::String] # Optional. Additional feedback about the rating. # This field can be populated without choosing a predefined `reason`. class RatingReason include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Represents thumbs up/down rating provided by user about a response. module Rating # Rating not specified. RATING_UNSPECIFIED = 0 # Thumbs up feedback from user. THUMBS_UP = 1 # Thumbs down feedback from user. THUMBS_DOWN = 2 end end # The request to set the feedback for a bot answer. # @!attribute [rw] session # @return [::String] # Required. The name of the session the feedback was sent to. # @!attribute [rw] response_id # @return [::String] # Required. ID of the response to update its feedback. This is the same as # DetectIntentResponse.response_id. # @!attribute [rw] answer_feedback # @return [::Google::Cloud::Dialogflow::CX::V3::AnswerFeedback] # Required. Feedback provided for a bot answer. # @!attribute [rw] update_mask # @return [::Google::Protobuf::FieldMask] # Optional. The mask to control which fields to update. If the mask is not # present, all fields will be updated. class SubmitAnswerFeedbackRequest include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # The request to detect user's intent. # @!attribute [rw] session # @return [::String] # Required. The name of the session this query is sent to. # Format: `projects//locations//agents//sessions/` or `projects//locations//agents//environments//sessions/`. # If `Environment ID` is not specified, we assume default 'draft' # environment. 
# It's up to the API caller to choose an appropriate `Session ID`. It can be # a random number or some type of session identifiers (preferably hashed). # The length of the `Session ID` must not exceed 36 characters. # # For more information, see the [sessions # guide](https://cloud.google.com/dialogflow/cx/docs/concept/session). # # Note: Always use agent versions for production traffic. # See [Versions and # environments](https://cloud.google.com/dialogflow/cx/docs/concept/version). # @!attribute [rw] query_params # @return [::Google::Cloud::Dialogflow::CX::V3::QueryParameters] # The parameters of this query. # @!attribute [rw] query_input # @return [::Google::Cloud::Dialogflow::CX::V3::QueryInput] # Required. The input specification. # @!attribute [rw] output_audio_config # @return [::Google::Cloud::Dialogflow::CX::V3::OutputAudioConfig] # Instructs the speech synthesizer how to generate the output audio. class DetectIntentRequest include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # The message returned from the DetectIntent method. # @!attribute [rw] response_id # @return [::String] # Output only. The unique identifier of the response. It can be used to # locate a response in the training example set or for reporting issues. # @!attribute [rw] query_result # @return [::Google::Cloud::Dialogflow::CX::V3::QueryResult] # The result of the conversational query. # @!attribute [rw] output_audio # @return [::String] # The audio data bytes encoded as specified in the request. # Note: The output audio is generated based on the values of default platform # text responses found in the # {::Google::Cloud::Dialogflow::CX::V3::QueryResult#response_messages `query_result.response_messages`} # field. If multiple default text responses exist, they will be concatenated # when generating audio. If no default platform text responses exist, the # generated audio content will be empty. # # In some scenarios, multiple output audio fields may be present in the # response structure. In these cases, only the top-most-level audio output # has content. # @!attribute [rw] output_audio_config # @return [::Google::Cloud::Dialogflow::CX::V3::OutputAudioConfig] # The config used by the speech synthesizer to generate the output audio. # @!attribute [rw] response_type # @return [::Google::Cloud::Dialogflow::CX::V3::DetectIntentResponse::ResponseType] # Response type. # @!attribute [rw] allow_cancellation # @return [::Boolean] # Indicates whether the partial response can be cancelled when a later # response arrives. e.g. if the agent specified some music as partial # response, it can be cancelled. class DetectIntentResponse include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods # Represents different DetectIntentResponse types. module ResponseType # Not specified. This should never happen. RESPONSE_TYPE_UNSPECIFIED = 0 # Partial response. e.g. Aggregated responses in a Fulfillment that enables # `return_partial_response` can be returned as partial response. # WARNING: partial response is not eligible for barge-in. PARTIAL = 1 # Final response. FINAL = 2 end end # The top-level message sent by the client to the # {::Google::Cloud::Dialogflow::CX::V3::Sessions::Client#streaming_detect_intent Sessions.StreamingDetectIntent} # method. # # Multiple request messages should be sent in order: # # 1. 
The first message must contain # {::Google::Cloud::Dialogflow::CX::V3::StreamingDetectIntentRequest#session session}, # {::Google::Cloud::Dialogflow::CX::V3::StreamingDetectIntentRequest#query_input query_input} # plus optionally # {::Google::Cloud::Dialogflow::CX::V3::StreamingDetectIntentRequest#query_params query_params}. # If the client wants to receive an audio response, it should also contain # {::Google::Cloud::Dialogflow::CX::V3::StreamingDetectIntentRequest#output_audio_config output_audio_config}. # # 2. If # {::Google::Cloud::Dialogflow::CX::V3::StreamingDetectIntentRequest#query_input query_input} # was set to # {::Google::Cloud::Dialogflow::CX::V3::AudioInput#config query_input.audio.config}, # all subsequent messages must contain # {::Google::Cloud::Dialogflow::CX::V3::AudioInput#audio query_input.audio.audio} # to continue with Speech recognition. If you decide to rather detect an # intent from text input after you already started Speech recognition, # please send a message with # {::Google::Cloud::Dialogflow::CX::V3::QueryInput#text query_input.text}. # # However, note that: # # * Dialogflow will bill you for the audio duration so far. # * Dialogflow discards all Speech recognition results in favor of the # input text. # * Dialogflow will use the language code from the first message. # # After you sent all input, you must half-close or abort the request stream. # @!attribute [rw] session # @return [::String] # The name of the session this query is sent to. # Format: `projects//locations//agents//sessions/` or `projects//locations//agents//environments//sessions/`. # If `Environment ID` is not specified, we assume default 'draft' # environment. # It's up to the API caller to choose an appropriate `Session ID`. It can be # a random number or some type of session identifiers (preferably hashed). # The length of the `Session ID` must not exceed 36 characters. # Note: session must be set in the first request. # # For more information, see the [sessions # guide](https://cloud.google.com/dialogflow/cx/docs/concept/session). # # Note: Always use agent versions for production traffic. # See [Versions and # environments](https://cloud.google.com/dialogflow/cx/docs/concept/version). # @!attribute [rw] query_params # @return [::Google::Cloud::Dialogflow::CX::V3::QueryParameters] # The parameters of this query. # @!attribute [rw] query_input # @return [::Google::Cloud::Dialogflow::CX::V3::QueryInput] # Required. The input specification. # @!attribute [rw] output_audio_config # @return [::Google::Cloud::Dialogflow::CX::V3::OutputAudioConfig] # Instructs the speech synthesizer how to generate the output audio. # @!attribute [rw] enable_partial_response # @return [::Boolean] # Enable partial detect intent response. If this flag is not enabled, # response stream still contains only one final `DetectIntentResponse` even # if some `Fulfillment`s in the agent have been configured to return partial # responses. # @!attribute [rw] enable_debugging_info # @return [::Boolean] # If true, `StreamingDetectIntentResponse.debugging_info` will get populated. class StreamingDetectIntentRequest include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Cloud conversation info for easier debugging. # It will get populated in `StreamingDetectIntentResponse` or # `StreamingAnalyzeContentResponse` when the flag `enable_debugging_info` is # set to true in corresponding requests. 
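#
# A minimal sketch of how this message may be requested (an illustrative
# assumption, not part of the generated definitions in this file): it assumes
# an already configured `Sessions::Client` in `client`, a session name in
# `session`, and placeholder audio settings. The first streaming request
# carries the session, the query input config and `enable_debugging_info:
# true`; later requests carry only audio bytes, as described for
# `StreamingDetectIntentRequest` above.
#
# @example Populating debugging info on a streaming call (illustrative sketch)
#   requests = [
#     ::Google::Cloud::Dialogflow::CX::V3::StreamingDetectIntentRequest.new(
#       session: session,
#       enable_debugging_info: true,
#       query_input: {
#         language_code: "en-US",
#         audio: {
#           config: { audio_encoding: :AUDIO_ENCODING_LINEAR_16, sample_rate_hertz: 16_000 }
#         }
#       }
#     ),
#     ::Google::Cloud::Dialogflow::CX::V3::StreamingDetectIntentRequest.new(
#       query_input: { audio: { audio: File.binread("utterance.raw") } }
#     )
#   ]
#   client.streaming_detect_intent(requests).each do |response|
#     p response.debugging_info unless response.debugging_info.nil?
#   end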
# @!attribute [rw] audio_data_chunks # @return [::Integer] # Number of input audio data chunks in streaming requests. # @!attribute [rw] result_end_time_offset # @return [::Google::Protobuf::Duration] # Time offset of the end of speech utterance relative to the # beginning of the first audio chunk. # @!attribute [rw] first_audio_duration # @return [::Google::Protobuf::Duration] # Duration of first audio chunk. # @!attribute [rw] single_utterance # @return [::Boolean] # Whether client used single utterance mode. # @!attribute [rw] speech_partial_results_end_times # @return [::Array<::Google::Protobuf::Duration>] # Time offsets of the speech partial results relative to the beginning of # the stream. # @!attribute [rw] speech_final_results_end_times # @return [::Array<::Google::Protobuf::Duration>] # Time offsets of the speech final results (is_final=true) relative to the # beginning of the stream. # @!attribute [rw] partial_responses # @return [::Integer] # Total number of partial responses. # @!attribute [rw] speaker_id_passive_latency_ms_offset # @return [::Integer] # Time offset of Speaker ID stream close time relative to the Speech stream # close time in milliseconds. Only meaningful for conversations involving # passive verification. # @!attribute [rw] bargein_event_triggered # @return [::Boolean] # Whether a barge-in event is triggered in this request. # @!attribute [rw] speech_single_utterance # @return [::Boolean] # Whether speech uses single utterance mode. # @!attribute [rw] dtmf_partial_results_times # @return [::Array<::Google::Protobuf::Duration>] # Time offsets of the DTMF partial results relative to the beginning of # the stream. # @!attribute [rw] dtmf_final_results_times # @return [::Array<::Google::Protobuf::Duration>] # Time offsets of the DTMF final results relative to the beginning of # the stream. # @!attribute [rw] single_utterance_end_time_offset # @return [::Google::Protobuf::Duration] # Time offset of the end-of-single-utterance signal relative to the # beginning of the stream. # @!attribute [rw] no_speech_timeout # @return [::Google::Protobuf::Duration] # No speech timeout settings for the stream. # @!attribute [rw] endpointing_timeout # @return [::Google::Protobuf::Duration] # Speech endpointing timeout settings for the stream. # @!attribute [rw] is_input_text # @return [::Boolean] # Whether the streaming terminates with an injected text query. # @!attribute [rw] client_half_close_time_offset # @return [::Google::Protobuf::Duration] # Client half close time in terms of input audio duration. # @!attribute [rw] client_half_close_streaming_time_offset # @return [::Google::Protobuf::Duration] # Client half close time in terms of API streaming duration. class CloudConversationDebuggingInfo include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # The top-level message returned from the # {::Google::Cloud::Dialogflow::CX::V3::Sessions::Client#streaming_detect_intent StreamingDetectIntent} # method. # # Multiple response messages (N) can be returned in order. # # The first (N-1) responses set either the `recognition_result` or # `detect_intent_response` field, depending on the request: # # * If the `StreamingDetectIntentRequest.query_input.audio` field was # set, and the `StreamingDetectIntentRequest.enable_partial_response` # field was false, the `recognition_result` field is populated for each # of the (N-1) responses. 
# See the
# {::Google::Cloud::Dialogflow::CX::V3::StreamingRecognitionResult StreamingRecognitionResult}
# message for details about the result message sequence.
#
# * If the `StreamingDetectIntentRequest.enable_partial_response` field was
#   true, the `detect_intent_response` field is populated for each
#   of the (N-1) responses, where 1 <= N <= 4.
#   These responses set the
#   {::Google::Cloud::Dialogflow::CX::V3::DetectIntentResponse#response_type DetectIntentResponse.response_type}
#   field to `PARTIAL`.
#
# For the final Nth response message, the `detect_intent_response` is fully
# populated, and
# {::Google::Cloud::Dialogflow::CX::V3::DetectIntentResponse#response_type DetectIntentResponse.response_type}
# is set to `FINAL`.
# @!attribute [rw] recognition_result
# @return [::Google::Cloud::Dialogflow::CX::V3::StreamingRecognitionResult]
# The result of speech recognition.
# @!attribute [rw] detect_intent_response
# @return [::Google::Cloud::Dialogflow::CX::V3::DetectIntentResponse]
# The response from detect intent.
# @!attribute [rw] debugging_info
# @return [::Google::Cloud::Dialogflow::CX::V3::CloudConversationDebuggingInfo]
# Debugging info that would get populated when
# `StreamingDetectIntentRequest.enable_debugging_info` is set to true.
class StreamingDetectIntentResponse
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
end
# Contains a speech recognition result corresponding to a portion of the audio
# that is currently being processed or an indication that this is the end
# of the single requested utterance.
#
# While end-user audio is being processed, Dialogflow sends a series of
# results. Each result may contain a `transcript` value. A transcript
# represents a portion of the utterance. While the recognizer is processing
# audio, transcript values may be interim values or finalized values.
# Once a transcript is finalized, the `is_final` value is set to true and
# processing continues for the next transcript.
#
# If `StreamingDetectIntentRequest.query_input.audio.config.single_utterance`
# was true, and the recognizer has completed processing audio,
# the `message_type` value is set to `END_OF_SINGLE_UTTERANCE` and the
# following (last) result contains the last finalized transcript.
#
# The complete end-user utterance is determined by concatenating the
# finalized transcript values received for the series of results.
#
# In the following example, single utterance is enabled. In the case where
# single utterance is not enabled, result 7 would not occur.
#
# ```
# Num | transcript              | message_type            | is_final
# --- | ----------------------- | ----------------------- | --------
# 1   | "tube"                  | TRANSCRIPT              | false
# 2   | "to be a"               | TRANSCRIPT              | false
# 3   | "to be"                 | TRANSCRIPT              | false
# 4   | "to be or not to be"    | TRANSCRIPT              | true
# 5   | "that's"                | TRANSCRIPT              | false
# 6   | "that is"               | TRANSCRIPT              | false
# 7   | unset                   | END_OF_SINGLE_UTTERANCE | unset
# 8   | " that is the question" | TRANSCRIPT              | true
# ```
#
# Concatenating the finalized transcripts with `is_final` set to true,
# the complete utterance becomes "to be or not to be that is the question".
# @!attribute [rw] message_type
# @return [::Google::Cloud::Dialogflow::CX::V3::StreamingRecognitionResult::MessageType]
# Type of the result message.
# @!attribute [rw] transcript
# @return [::String]
# Transcript text representing the words that the user spoke.
# Populated if and only if `message_type` = `TRANSCRIPT`.
# @!attribute [rw] is_final # @return [::Boolean] # If `false`, the `StreamingRecognitionResult` represents an # interim result that may change. If `true`, the recognizer will not return # any further hypotheses about this piece of the audio. May only be populated # for `message_type` = `TRANSCRIPT`. # @!attribute [rw] confidence # @return [::Float] # The Speech confidence between 0.0 and 1.0 for the current portion of audio. # A higher number indicates an estimated greater likelihood that the # recognized words are correct. The default of 0.0 is a sentinel value # indicating that confidence was not set. # # This field is typically only provided if `is_final` is true and you should # not rely on it being accurate or even set. # @!attribute [rw] stability # @return [::Float] # An estimate of the likelihood that the speech recognizer will # not change its guess about this interim recognition result: # * If the value is unspecified or 0.0, Dialogflow didn't compute the # stability. In particular, Dialogflow will only provide stability for # `TRANSCRIPT` results with `is_final = false`. # * Otherwise, the value is in (0.0, 1.0] where 0.0 means completely # unstable and 1.0 means completely stable. # @!attribute [rw] speech_word_info # @return [::Array<::Google::Cloud::Dialogflow::CX::V3::SpeechWordInfo>] # Word-specific information for the words recognized by Speech in # {::Google::Cloud::Dialogflow::CX::V3::StreamingRecognitionResult#transcript transcript}. # Populated if and only if `message_type` = `TRANSCRIPT` and # [InputAudioConfig.enable_word_info] is set. # @!attribute [rw] speech_end_offset # @return [::Google::Protobuf::Duration] # Time offset of the end of this Speech recognition result relative to the # beginning of the audio. Only populated for `message_type` = # `TRANSCRIPT`. # @!attribute [rw] language_code # @return [::String] # Detected language code for the transcript. class StreamingRecognitionResult include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods # Type of the response message. module MessageType # Not specified. Should never be used. MESSAGE_TYPE_UNSPECIFIED = 0 # Message contains a (possibly partial) transcript. TRANSCRIPT = 1 # Event indicates that the server has detected the end of the user's speech # utterance and expects no additional speech. Therefore, the server will # not process additional audio (although it may subsequently return # additional results). The client should stop sending additional audio # data, half-close the gRPC connection, and wait for any additional results # until the server closes the gRPC connection. This message is only sent if # {::Google::Cloud::Dialogflow::CX::V3::InputAudioConfig#single_utterance `single_utterance`} # was set to `true`, and is not used otherwise. END_OF_SINGLE_UTTERANCE = 2 end end # Represents the parameters of a conversational query. # @!attribute [rw] time_zone # @return [::String] # The time zone of this conversational query from the [time zone # database](https://www.iana.org/time-zones), e.g., America/New_York, # Europe/Paris. If not provided, the time zone specified in the agent is # used. # @!attribute [rw] geo_location # @return [::Google::Type::LatLng] # The geo location of this conversational query. # @!attribute [rw] session_entity_types # @return [::Array<::Google::Cloud::Dialogflow::CX::V3::SessionEntityType>] # Additional session entity types to replace or extend developer entity types # with. 
The entity synonyms apply to all languages and persist for the # session of this query. # @!attribute [rw] payload # @return [::Google::Protobuf::Struct] # This field can be used to pass custom data into the webhook associated with # the agent. Arbitrary JSON objects are supported. # Some integrations that query a Dialogflow agent may provide additional # information in the payload. # In particular, for the Dialogflow Phone Gateway integration, this field has # the form: # ``` # { # "telephony": { # "caller_id": "+18558363987" # } # } # ``` # @!attribute [rw] parameters # @return [::Google::Protobuf::Struct] # Additional parameters to be put into [session # parameters][SessionInfo.parameters]. To remove a # parameter from the session, clients should explicitly set the parameter # value to null. # # You can reference the session parameters in the agent with the following # format: $session.params.parameter-id. # # Depending on your protocol or client library language, this is a # map, associative array, symbol table, dictionary, or JSON object # composed of a collection of (MapKey, MapValue) pairs: # # * MapKey type: string # * MapKey value: parameter name # * MapValue type: If parameter's entity type is a composite entity then use # map, otherwise, depending on the parameter value type, it could be one of # string, number, boolean, null, list or map. # * MapValue value: If parameter's entity type is a composite entity then use # map from composite entity property names to property values, otherwise, # use parameter value. # @!attribute [rw] current_page # @return [::String] # The unique identifier of the {::Google::Cloud::Dialogflow::CX::V3::Page page} to # override the [current page][QueryResult.current_page] in the session. # Format: `projects//locations//agents//flows//pages/`. # # If `current_page` is specified, the previous state of the session will be # ignored by Dialogflow, including the [previous # page][QueryResult.current_page] and the [previous session # parameters][QueryResult.parameters]. # In most cases, # {::Google::Cloud::Dialogflow::CX::V3::QueryParameters#current_page current_page} # and {::Google::Cloud::Dialogflow::CX::V3::QueryParameters#parameters parameters} # should be configured together to direct a session to a specific state. # @!attribute [rw] disable_webhook # @return [::Boolean] # Whether to disable webhook calls for this request. # @!attribute [rw] analyze_query_text_sentiment # @return [::Boolean] # Configures whether sentiment analysis should be performed. If not # provided, sentiment analysis is not performed. # @!attribute [rw] webhook_headers # @return [::Google::Protobuf::Map{::String => ::String}] # This field can be used to pass HTTP headers for a webhook # call. These headers will be sent to webhook along with the headers that # have been configured through Dialogflow web console. The headers defined # within this field will overwrite the headers configured through Dialogflow # console if there is a conflict. Header names are case-insensitive. # Google's specified headers are not allowed. Including: "Host", # "Content-Length", "Connection", "From", "User-Agent", "Accept-Encoding", # "If-Modified-Since", "If-None-Match", "X-Forwarded-For", etc. # @!attribute [rw] flow_versions # @return [::Array<::String>] # A list of flow versions to override for the request. # Format: `projects//locations//agents//flows//versions/`. 
#
# If version 1 of flow X is included in this list, the traffic of
# flow X will go through version 1 regardless of the version configuration in
# the environment. Each flow can have at most one version specified in this
# list.
# @!attribute [rw] channel
# @return [::String]
# The channel which this query is for.
#
# If specified, only the
# {::Google::Cloud::Dialogflow::CX::V3::ResponseMessage ResponseMessage} associated
# with the channel will be returned. If no
# {::Google::Cloud::Dialogflow::CX::V3::ResponseMessage ResponseMessage} is
# associated with the channel, it falls back to the
# {::Google::Cloud::Dialogflow::CX::V3::ResponseMessage ResponseMessage} with
# unspecified channel.
#
# If unspecified, the
# {::Google::Cloud::Dialogflow::CX::V3::ResponseMessage ResponseMessage} with
# unspecified channel will be returned.
# @!attribute [rw] session_ttl
# @return [::Google::Protobuf::Duration]
# Optional. Sets Dialogflow session lifetime.
# By default, a Dialogflow session remains active and its data is stored for
# 30 minutes after the last request is sent for the session.
# This value should be no longer than 1 day.
# @!attribute [rw] end_user_metadata
# @return [::Google::Protobuf::Struct]
# Optional. Information about the end-user to improve the relevance and
# accuracy of generative answers.
#
# This will be interpreted and used by a language model, so, for good
# results, the data should be self-descriptive, and in a simple structure.
#
# Example:
#
# ```json
# {
#   "subscription plan": "Business Premium Plus",
#   "devices owned": [
#     \\{"model": "Google Pixel 7"},
#     \\{"model": "Google Pixel Tablet"}
#   ]
# }
# ```
# @!attribute [rw] search_config
# @return [::Google::Cloud::Dialogflow::CX::V3::SearchConfig]
# Optional. Search configuration for UCS search queries.
class QueryParameters
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods

  # @!attribute [rw] key
  # @return [::String]
  # @!attribute [rw] value
  # @return [::String]
  class WebhookHeadersEntry
    include ::Google::Protobuf::MessageExts
    extend ::Google::Protobuf::MessageExts::ClassMethods
  end
end
# Search configuration for UCS search queries.
# @!attribute [rw] boost_specs
# @return [::Array<::Google::Cloud::Dialogflow::CX::V3::BoostSpecs>]
# Optional. Boosting configuration for the datastores.
# @!attribute [rw] filter_specs
# @return [::Array<::Google::Cloud::Dialogflow::CX::V3::FilterSpecs>]
# Optional. Filter configuration for the datastores.
class SearchConfig
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods
end
# Boost specification to boost certain documents.
# A copy of google.cloud.discoveryengine.v1main.BoostSpec, field documentation
# is available at
# https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1alpha/BoostSpec
# @!attribute [rw] condition_boost_specs
# @return [::Array<::Google::Cloud::Dialogflow::CX::V3::BoostSpec::ConditionBoostSpec>]
# Optional. Condition boost specifications. If a document matches multiple
# conditions in the specifications, boost scores from these specifications are
# all applied and combined in a non-linear way. Maximum number of
# specifications is 20.
class BoostSpec
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods

  # Boost applies to documents which match a condition.
  # @!attribute [rw] condition
  # @return [::String]
  # Optional. An expression which specifies a boost condition.
The syntax and # supported fields are the same as a filter expression. # Examples: # # * To boost documents with document ID "doc_1" or "doc_2", and # color # "Red" or "Blue": # * (id: ANY("doc_1", "doc_2")) AND (color: ANY("Red","Blue")) # @!attribute [rw] boost # @return [::Float] # Optional. Strength of the condition boost, which should be in [-1, 1]. # Negative boost means demotion. Default is 0.0. # # Setting to 1.0 gives the document a big promotion. However, it does not # necessarily mean that the boosted document will be the top result at # all times, nor that other documents will be excluded. Results could # still be shown even when none of them matches the condition. And # results that are significantly more relevant to the search query can # still trump your heavily favored but irrelevant documents. # # Setting to -1.0 gives the document a big demotion. However, results # that are deeply relevant might still be shown. The document will have # an upstream battle to get a fairly high ranking, but it is not blocked # out completely. # # Setting to 0.0 means no boost applied. The boosting condition is # ignored. class ConditionBoostSpec include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end end # Boost specifications for data stores. # @!attribute [rw] data_stores # @return [::Array<::String>] # Optional. Data Stores where the boosting configuration is applied. The full # names of the referenced data stores. Formats: # `projects/{project}/locations/{location}/collections/{collection}/dataStores/{data_store}` # `projects/{project}/locations/{location}/dataStores/{data_store}` # @!attribute [rw] spec # @return [::Array<::Google::Cloud::Dialogflow::CX::V3::BoostSpec>] # Optional. A list of boosting specifications. class BoostSpecs include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Filter specifications for data stores. # @!attribute [rw] data_stores # @return [::Array<::String>] # Optional. Data Stores where the boosting configuration is applied. The full # names of the referenced data stores. Formats: # `projects/{project}/locations/{location}/collections/{collection}/dataStores/{data_store}` # `projects/{project}/locations/{location}/dataStores/{data_store}` # @!attribute [rw] filter # @return [::String] # Optional. The filter expression to be applied. # Expression syntax is documented at # https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata#filter-expression-syntax class FilterSpecs include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Represents the query input. It can contain one of: # # 1. A conversational query in the form of text. # # 2. An intent query that specifies which intent to trigger. # # 3. Natural language speech audio to be processed. # # 4. An event to be triggered. # # 5. DTMF digits to invoke an intent and fill in parameter value. # @!attribute [rw] text # @return [::Google::Cloud::Dialogflow::CX::V3::TextInput] # The natural language text to be processed. # @!attribute [rw] intent # @return [::Google::Cloud::Dialogflow::CX::V3::IntentInput] # The intent to be triggered. # @!attribute [rw] audio # @return [::Google::Cloud::Dialogflow::CX::V3::AudioInput] # The natural language speech audio to be processed. # @!attribute [rw] event # @return [::Google::Cloud::Dialogflow::CX::V3::EventInput] # The event to be triggered. 
# @!attribute [rw] dtmf # @return [::Google::Cloud::Dialogflow::CX::V3::DtmfInput] # The DTMF event to be handled. # @!attribute [rw] language_code # @return [::String] # Required. The language of the input. See [Language # Support](https://cloud.google.com/dialogflow/cx/docs/reference/language) # for a list of the currently supported language codes. Note that queries in # the same session do not necessarily need to specify the same language. class QueryInput include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Represents the result of a conversational query. # @!attribute [rw] text # @return [::String] # If {::Google::Cloud::Dialogflow::CX::V3::TextInput natural language text} was # provided as input, this field will contain a copy of the text. # @!attribute [rw] trigger_intent # @return [::String] # If an {::Google::Cloud::Dialogflow::CX::V3::IntentInput intent} was provided as # input, this field will contain a copy of the intent identifier. Format: # `projects//locations//agents//intents/`. # @!attribute [rw] transcript # @return [::String] # If [natural language speech # audio][google.cloud.dialogflow.cx.v3.AudioInput] was provided as input, # this field will contain the transcript for the audio. # @!attribute [rw] trigger_event # @return [::String] # If an {::Google::Cloud::Dialogflow::CX::V3::EventInput event} was provided as # input, this field will contain the name of the event. # @!attribute [rw] dtmf # @return [::Google::Cloud::Dialogflow::CX::V3::DtmfInput] # If a {::Google::Cloud::Dialogflow::CX::V3::DtmfInput DTMF} was provided as # input, this field will contain a copy of the # {::Google::Cloud::Dialogflow::CX::V3::DtmfInput DtmfInput}. # @!attribute [rw] language_code # @return [::String] # The language that was triggered during intent detection. # See [Language # Support](https://cloud.google.com/dialogflow/cx/docs/reference/language) # for a list of the currently supported language codes. # @!attribute [rw] parameters # @return [::Google::Protobuf::Struct] # The collected [session # parameters][google.cloud.dialogflow.cx.v3.SessionInfo.parameters]. # # Depending on your protocol or client library language, this is a # map, associative array, symbol table, dictionary, or JSON object # composed of a collection of (MapKey, MapValue) pairs: # # * MapKey type: string # * MapKey value: parameter name # * MapValue type: If parameter's entity type is a composite entity then use # map, otherwise, depending on the parameter value type, it could be one of # string, number, boolean, null, list or map. # * MapValue value: If parameter's entity type is a composite entity then use # map from composite entity property names to property values, otherwise, # use parameter value. # @!attribute [rw] response_messages # @return [::Array<::Google::Cloud::Dialogflow::CX::V3::ResponseMessage>] # The list of rich messages returned to the client. Responses vary from # simple text messages to more sophisticated, structured payloads used # to drive complex logic. # @!attribute [rw] webhook_statuses # @return [::Array<::Google::Rpc::Status>] # The list of webhook call status in the order of call sequence. # @!attribute [rw] webhook_payloads # @return [::Array<::Google::Protobuf::Struct>] # The list of webhook payload in # {::Google::Cloud::Dialogflow::CX::V3::WebhookResponse#payload WebhookResponse.payload}, # in the order of call sequence. If some webhook call fails or doesn't return # any payload, an empty `Struct` would be used instead. 
# @!attribute [rw] current_page
# @return [::Google::Cloud::Dialogflow::CX::V3::Page]
# The current {::Google::Cloud::Dialogflow::CX::V3::Page Page}. Some, not all
# fields are filled in this message, including but not limited to `name` and
# `display_name`.
# @!attribute [rw] intent
# @deprecated This field is deprecated and may be removed in the next major version update.
# @return [::Google::Cloud::Dialogflow::CX::V3::Intent]
# The {::Google::Cloud::Dialogflow::CX::V3::Intent Intent} that matched the
# conversational query. Some, not all fields are filled in this message,
# including but not limited to: `name` and `display_name`. This field is
# deprecated, please use
# {::Google::Cloud::Dialogflow::CX::V3::QueryResult#match QueryResult.match}
# instead.
# @!attribute [rw] intent_detection_confidence
# @deprecated This field is deprecated and may be removed in the next major version update.
# @return [::Float]
# The intent detection confidence. Values range from 0.0 (completely
# uncertain) to 1.0 (completely certain).
# This value is for informational purpose only and is only used to
# help match the best intent within the classification threshold.
# This value may change for the same end-user expression at any time due to a
# model retraining or change in implementation.
# This field is deprecated, please use
# {::Google::Cloud::Dialogflow::CX::V3::QueryResult#match QueryResult.match}
# instead.
# @!attribute [rw] match
# @return [::Google::Cloud::Dialogflow::CX::V3::Match]
# Intent match result, could be an intent or an event.
# @!attribute [rw] diagnostic_info
# @return [::Google::Protobuf::Struct]
# The free-form diagnostic info. For example, this field could contain
# webhook call latency. The fields of this data can change without notice,
# so you should not write code that depends on its structure.
#
# One of the fields is called "Alternative Matched Intents", which may
# aid with debugging. The following describes these intent results:
#
# - The list is empty if no intent was matched to end-user input.
# - Only intents that are referenced in the currently active flow are
#   included.
# - The matched intent is included.
# - Other intents that could have matched end-user input, but did not match
#   because they are referenced by intent routes that are out of
#   [scope](https://cloud.google.com/dialogflow/cx/docs/concept/handler#scope),
#   are included.
# - Other intents referenced by intent routes in scope that matched end-user
#   input, but had a lower confidence score.
# @!attribute [rw] sentiment_analysis_result
# @return [::Google::Cloud::Dialogflow::CX::V3::SentimentAnalysisResult]
# The sentiment analysis result, which depends on
# [`analyze_query_text_sentiment`]
# [google.cloud.dialogflow.cx.v3.QueryParameters.analyze_query_text_sentiment],
# specified in the request.
# @!attribute [rw] advanced_settings
# @return [::Google::Cloud::Dialogflow::CX::V3::AdvancedSettings]
# Returns the current advanced settings including IVR settings. Even though
# the operations configured by these settings are performed by Dialogflow,
# the client may need to perform special logic at the moment. For example, if
# Dialogflow exports audio to Google Cloud Storage, then the client may need
# to wait for the resulting object to appear in the bucket before proceeding.
# @!attribute [rw] allow_answer_feedback
# @return [::Boolean]
# Indicates whether the Thumbs up/Thumbs down rating controls need to be
# shown for the response in the Dialogflow Messenger widget.
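#
# A minimal sketch of how a query result is typically obtained and inspected
# (an illustrative assumption; the project/agent identifiers and the
# `session_path` helper arguments below are placeholders, not values defined
# in this file):
#
# @example Reading a query result returned by detect_intent (illustrative sketch)
#   client = ::Google::Cloud::Dialogflow::CX::V3::Sessions::Client.new
#   session = client.session_path project: "my-project", location: "global",
#                                 agent: "my-agent-id", session: "my-session-id"
#   response = client.detect_intent(
#     session: session,
#     query_input: { text: { text: "I'd like to book a flight" }, language_code: "en-US" }
#   )
#   result = response.query_result
#   result.response_messages.each { |m| puts m.text.text.join(" ") if m.text }
#   puts "Matched intent: #{result.match.intent.display_name}" if result.match&.intent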
class QueryResult include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Represents the natural language text to be processed. # @!attribute [rw] text # @return [::String] # Required. The UTF-8 encoded natural language text to be processed. Text # length must not exceed 256 characters. class TextInput include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Represents the intent to trigger programmatically rather than as a result of # natural language processing. # @!attribute [rw] intent # @return [::String] # Required. The unique identifier of the intent. # Format: `projects//locations//agents//intents/`. class IntentInput include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Represents the natural speech audio to be processed. # @!attribute [rw] config # @return [::Google::Cloud::Dialogflow::CX::V3::InputAudioConfig] # Required. Instructs the speech recognizer how to process the speech audio. # @!attribute [rw] audio # @return [::String] # The natural language speech audio to be processed. # A single request can contain up to 2 minutes of speech audio data. # The [transcribed # text][google.cloud.dialogflow.cx.v3.QueryResult.transcript] cannot contain # more than 256 bytes. # # For non-streaming audio detect intent, both `config` and `audio` must be # provided. # For streaming audio detect intent, `config` must be provided in # the first request and `audio` must be provided in all following requests. class AudioInput include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Represents the event to trigger. # @!attribute [rw] event # @return [::String] # Name of the event. class EventInput include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Represents the input for dtmf event. # @!attribute [rw] digits # @return [::String] # The dtmf digits. # @!attribute [rw] finish_digit # @return [::String] # The finish digit (if any). class DtmfInput include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Represents one match result of [MatchIntent][]. # @!attribute [rw] intent # @return [::Google::Cloud::Dialogflow::CX::V3::Intent] # The {::Google::Cloud::Dialogflow::CX::V3::Intent Intent} that matched the query. # Some, not all fields are filled in this message, including but not limited # to: `name` and `display_name`. Only filled for # {::Google::Cloud::Dialogflow::CX::V3::Match::MatchType `INTENT`} match type. # @!attribute [rw] event # @return [::String] # The event that matched the query. Filled for # {::Google::Cloud::Dialogflow::CX::V3::Match::MatchType `EVENT`}, # {::Google::Cloud::Dialogflow::CX::V3::Match::MatchType `NO_MATCH`} and # {::Google::Cloud::Dialogflow::CX::V3::Match::MatchType `NO_INPUT`} match types. # @!attribute [rw] parameters # @return [::Google::Protobuf::Struct] # The collection of parameters extracted from the query. # # Depending on your protocol or client library language, this is a # map, associative array, symbol table, dictionary, or JSON object # composed of a collection of (MapKey, MapValue) pairs: # # * MapKey type: string # * MapKey value: parameter name # * MapValue type: If parameter's entity type is a composite entity then use # map, otherwise, depending on the parameter value type, it could be one of # string, number, boolean, null, list or map. 
# * MapValue value: If parameter's entity type is a composite entity then use # map from composite entity property names to property values, otherwise, # use parameter value. # @!attribute [rw] resolved_input # @return [::String] # Final text input which was matched during MatchIntent. This value can be # different from original input sent in request because of spelling # correction or other processing. # @!attribute [rw] match_type # @return [::Google::Cloud::Dialogflow::CX::V3::Match::MatchType] # Type of this {::Google::Cloud::Dialogflow::CX::V3::Match Match}. # @!attribute [rw] confidence # @return [::Float] # The confidence of this match. Values range from 0.0 (completely uncertain) # to 1.0 (completely certain). # This value is for informational purpose only and is only used to help match # the best intent within the classification threshold. This value may change # for the same end-user expression at any time due to a model retraining or # change in implementation. class Match include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods # Type of a Match. module MatchType # Not specified. Should never be used. MATCH_TYPE_UNSPECIFIED = 0 # The query was matched to an intent. INTENT = 1 # The query directly triggered an intent. DIRECT_INTENT = 2 # The query was used for parameter filling. PARAMETER_FILLING = 3 # No match was found for the query. NO_MATCH = 4 # Indicates an empty query. NO_INPUT = 5 # The query directly triggered an event. EVENT = 6 end end # Request of [MatchIntent][]. # @!attribute [rw] session # @return [::String] # Required. The name of the session this query is sent to. # Format: `projects//locations//agents//sessions/` or `projects//locations//agents//environments//sessions/`. # If `Environment ID` is not specified, we assume default 'draft' # environment. # It's up to the API caller to choose an appropriate `Session ID`. It can be # a random number or some type of session identifiers (preferably hashed). # The length of the `Session ID` must not exceed 36 characters. # # For more information, see the [sessions # guide](https://cloud.google.com/dialogflow/cx/docs/concept/session). # @!attribute [rw] query_params # @return [::Google::Cloud::Dialogflow::CX::V3::QueryParameters] # The parameters of this query. # @!attribute [rw] query_input # @return [::Google::Cloud::Dialogflow::CX::V3::QueryInput] # Required. The input specification. # @!attribute [rw] persist_parameter_changes # @return [::Boolean] # Persist session parameter changes from `query_params`. class MatchIntentRequest include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Response of [MatchIntent][]. # @!attribute [rw] text # @return [::String] # If {::Google::Cloud::Dialogflow::CX::V3::TextInput natural language text} was # provided as input, this field will contain a copy of the text. # @!attribute [rw] trigger_intent # @return [::String] # If an {::Google::Cloud::Dialogflow::CX::V3::IntentInput intent} was provided as # input, this field will contain a copy of the intent identifier. Format: # `projects//locations//agents//intents/`. # @!attribute [rw] transcript # @return [::String] # If [natural language speech # audio][google.cloud.dialogflow.cx.v3.AudioInput] was provided as input, # this field will contain the transcript for the audio. 
# @!attribute [rw] trigger_event # @return [::String] # If an {::Google::Cloud::Dialogflow::CX::V3::EventInput event} was provided as # input, this field will contain a copy of the event name. # @!attribute [rw] matches # @return [::Array<::Google::Cloud::Dialogflow::CX::V3::Match>] # Match results, if more than one, ordered descendingly by the confidence # we have that the particular intent matches the query. # @!attribute [rw] current_page # @return [::Google::Cloud::Dialogflow::CX::V3::Page] # The current {::Google::Cloud::Dialogflow::CX::V3::Page Page}. Some, not all # fields are filled in this message, including but not limited to `name` and # `display_name`. class MatchIntentResponse include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Request of [FulfillIntent][] # @!attribute [rw] match_intent_request # @return [::Google::Cloud::Dialogflow::CX::V3::MatchIntentRequest] # Must be same as the corresponding MatchIntent request, otherwise the # behavior is undefined. # @!attribute [rw] match # @return [::Google::Cloud::Dialogflow::CX::V3::Match] # The matched intent/event to fulfill. # @!attribute [rw] output_audio_config # @return [::Google::Cloud::Dialogflow::CX::V3::OutputAudioConfig] # Instructs the speech synthesizer how to generate output audio. class FulfillIntentRequest include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # Response of [FulfillIntent][] # @!attribute [rw] response_id # @return [::String] # Output only. The unique identifier of the response. It can be used to # locate a response in the training example set or for reporting issues. # @!attribute [rw] query_result # @return [::Google::Cloud::Dialogflow::CX::V3::QueryResult] # The result of the conversational query. # @!attribute [rw] output_audio # @return [::String] # The audio data bytes encoded as specified in the request. # Note: The output audio is generated based on the values of default platform # text responses found in the # {::Google::Cloud::Dialogflow::CX::V3::QueryResult#response_messages `query_result.response_messages`} # field. If multiple default text responses exist, they will be concatenated # when generating audio. If no default platform text responses exist, the # generated audio content will be empty. # # In some scenarios, multiple output audio fields may be present in the # response structure. In these cases, only the top-most-level audio output # has content. # @!attribute [rw] output_audio_config # @return [::Google::Cloud::Dialogflow::CX::V3::OutputAudioConfig] # The config used by the speech synthesizer to generate the output audio. class FulfillIntentResponse include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end # The result of sentiment analysis. Sentiment analysis inspects user input # and identifies the prevailing subjective opinion, especially to determine a # user's attitude as positive, negative, or neutral. # @!attribute [rw] score # @return [::Float] # Sentiment score between -1.0 (negative sentiment) and 1.0 (positive # sentiment). # @!attribute [rw] magnitude # @return [::Float] # A non-negative number in the [0, +inf) range, which represents the absolute # magnitude of sentiment, regardless of score (positive or negative). class SentimentAnalysisResult include ::Google::Protobuf::MessageExts extend ::Google::Protobuf::MessageExts::ClassMethods end end end end end end
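
# Usage sketch for the MatchIntent/FulfillIntent pair (illustrative only;
# assumes a configured `Sessions::Client` in `client` and a session name in
# `session`). FulfillIntent is intended to receive the same request that was
# previously sent to MatchIntent, together with one of the returned matches.
#
#   match_request = {
#     session: session,
#     query_input: { text: { text: "book a flight" }, language_code: "en-US" }
#   }
#   match_response = client.match_intent match_request
#   if (best = match_response.matches.first)
#     fulfill_response = client.fulfill_intent match_intent_request: match_request,
#                                              match: best
#     p fulfill_response.query_result
#   end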