# frozen_string_literal: true

# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Auto-generated by gapic-generator-ruby. DO NOT EDIT!


module Google
  module Cloud
    module AIPlatform
      module V1
        # A job that uses a
        # {::Google::Cloud::AIPlatform::V1::BatchPredictionJob#model Model} to produce
        # predictions on multiple [input
        # instances][google.cloud.aiplatform.v1.BatchPredictionJob.input_config]. If
        # predictions for a significant portion of the instances fail, the job may
        # finish without attempting predictions for all remaining instances.
        # @!attribute [r] name
        #   @return [::String]
        #     Output only. Resource name of the BatchPredictionJob.
        # @!attribute [rw] display_name
        #   @return [::String]
        #     Required. The user-defined name of this BatchPredictionJob.
        # @!attribute [rw] model
        #   @return [::String]
        #     The name of the Model resource that produces the predictions via this
        #     job; it must share the same ancestor Location.
        #     Starting this job has no impact on any existing deployments of the
        #     Model and their resources.
        #     Exactly one of model and unmanaged_container_model must be set.
        #
        #     The model resource name may contain a version ID or version alias to
        #     specify the version.
        #     Example: `projects/{project}/locations/{location}/models/{model}@2`
        #     or
        #     `projects/{project}/locations/{location}/models/{model}@golden`
        #     If no version is specified, the default version will be deployed.
        #
        #     The model resource could also be a publisher model.
        #     Example: `publishers/{publisher}/models/{model}`
        #     or
        #     `projects/{project}/locations/{location}/publishers/{publisher}/models/{model}`
        # @!attribute [r] model_version_id
        #   @return [::String]
        #     Output only. The version ID of the Model that produces the predictions
        #     via this job.
        # @!attribute [rw] unmanaged_container_model
        #   @return [::Google::Cloud::AIPlatform::V1::UnmanagedContainerModel]
        #     Contains model information necessary to perform batch prediction
        #     without requiring uploading to Model Registry.
        #     Exactly one of model and unmanaged_container_model must be set.
        # @!attribute [rw] input_config
        #   @return [::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InputConfig]
        #     Required. Input configuration of the instances on which predictions
        #     are performed. The schema of any single instance may be specified via
        #     the [Model's][google.cloud.aiplatform.v1.BatchPredictionJob.model]
        #     [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
        #     {::Google::Cloud::AIPlatform::V1::PredictSchemata#instance_schema_uri instance_schema_uri}.
        # @!attribute [rw] instance_config
        #   @return [::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig]
        #     Configuration for how to convert batch prediction input instances to
        #     the prediction instances that are sent to the Model.
        # @!attribute [rw] model_parameters
        #   @return [::Google::Protobuf::Value]
        #     The parameters that govern the predictions. The schema of the
        #     parameters may be specified via the
        #     [Model's][google.cloud.aiplatform.v1.BatchPredictionJob.model]
        #     [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
        #     {::Google::Cloud::AIPlatform::V1::PredictSchemata#parameters_schema_uri parameters_schema_uri}.
        # @!attribute [rw] output_config
        #   @return [::Google::Cloud::AIPlatform::V1::BatchPredictionJob::OutputConfig]
        #     Required. The configuration specifying where output predictions should
        #     be written.
        #     The schema of any single prediction may be specified as a
        #     concatenation of the
        #     [Model's][google.cloud.aiplatform.v1.BatchPredictionJob.model]
        #     [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata]
        #     {::Google::Cloud::AIPlatform::V1::PredictSchemata#instance_schema_uri instance_schema_uri}
        #     and
        #     {::Google::Cloud::AIPlatform::V1::PredictSchemata#prediction_schema_uri prediction_schema_uri}.
        # @!attribute [rw] dedicated_resources
        #   @return [::Google::Cloud::AIPlatform::V1::BatchDedicatedResources]
        #     The config of resources used by the Model during the batch prediction.
        #     If the Model
        #     {::Google::Cloud::AIPlatform::V1::Model#supported_deployment_resources_types supports}
        #     DEDICATED_RESOURCES, this config may be provided (and the job will use
        #     these resources); if the Model doesn't support AUTOMATIC_RESOURCES,
        #     this config must be provided.
        # @!attribute [rw] service_account
        #   @return [::String]
        #     The service account that the DeployedModel's container runs as. If not
        #     specified, a system-generated one will be used, which has minimal
        #     permissions; in that case the custom container, if used, may not have
        #     enough permission to access other Google Cloud resources.
        #
        #     Users deploying the Model must have the `iam.serviceAccounts.actAs`
        #     permission on this service account.
        # @!attribute [rw] manual_batch_tuning_parameters
        #   @return [::Google::Cloud::AIPlatform::V1::ManualBatchTuningParameters]
        #     Immutable. Parameters configuring the batch behavior. Currently only
        #     applicable when
        #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob#dedicated_resources dedicated_resources}
        #     are used (in other cases Vertex AI does the tuning itself).
        # @!attribute [rw] generate_explanation
        #   @return [::Boolean]
        #     Generate explanation with the batch prediction results.
        #
        #     When set to `true`, the batch prediction output changes based on the
        #     `predictions_format` field of the
        #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob#output_config BatchPredictionJob.output_config}
        #     object:
        #
        #     * `bigquery`: output includes a column named `explanation`. The value
        #       is a struct that conforms to the
        #       {::Google::Cloud::AIPlatform::V1::Explanation Explanation} object.
        #     * `jsonl`: The JSON objects on each line include an additional entry
        #       keyed `explanation`. The value of the entry is a JSON object that
        #       conforms to the {::Google::Cloud::AIPlatform::V1::Explanation Explanation}
        #       object.
        #     * `csv`: Generating explanations for CSV format is not supported.
        #
        #     If this field is set to true, either the
        #     {::Google::Cloud::AIPlatform::V1::Model#explanation_spec Model.explanation_spec}
        #     or
        #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob#explanation_spec explanation_spec}
        #     must be populated.
        # @!attribute [rw] explanation_spec
        #   @return [::Google::Cloud::AIPlatform::V1::ExplanationSpec]
        #     Explanation configuration for this BatchPredictionJob. Can be
        #     specified only if
        #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob#generate_explanation generate_explanation}
        #     is set to `true`.
        #
        #     This value overrides the value of
        #     {::Google::Cloud::AIPlatform::V1::Model#explanation_spec Model.explanation_spec}.
        #     All fields of
        #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob#explanation_spec explanation_spec}
        #     are optional in the request. If a field of the
        #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob#explanation_spec explanation_spec}
        #     object is not populated, the corresponding field of the
        #     {::Google::Cloud::AIPlatform::V1::Model#explanation_spec Model.explanation_spec}
        #     object is inherited.
        # @!attribute [r] output_info
        #   @return [::Google::Cloud::AIPlatform::V1::BatchPredictionJob::OutputInfo]
        #     Output only. Information further describing the output of this job.
        # @!attribute [r] state
        #   @return [::Google::Cloud::AIPlatform::V1::JobState]
        #     Output only. The detailed state of the job.
        # @!attribute [r] error
        #   @return [::Google::Rpc::Status]
        #     Output only. Only populated when the job's state is JOB_STATE_FAILED
        #     or JOB_STATE_CANCELLED.
        # @!attribute [r] partial_failures
        #   @return [::Array<::Google::Rpc::Status>]
        #     Output only. Partial failures encountered.
        #     For example, single files that can't be read.
        #     This field never exceeds 20 entries.
        #     Status details fields contain standard Google Cloud error details.
        # @!attribute [r] resources_consumed
        #   @return [::Google::Cloud::AIPlatform::V1::ResourcesConsumed]
        #     Output only. Information about resources that have been consumed by
        #     this job. Provided in real time on a best-effort basis, as well as a
        #     final value once the job completes.
        #
        #     Note: This field currently may not be populated for batch predictions
        #     that use AutoML Models.
        # @!attribute [r] completion_stats
        #   @return [::Google::Cloud::AIPlatform::V1::CompletionStats]
        #     Output only. Statistics on completed and failed prediction instances.
        # @!attribute [r] create_time
        #   @return [::Google::Protobuf::Timestamp]
        #     Output only. Time when the BatchPredictionJob was created.
        # @!attribute [r] start_time
        #   @return [::Google::Protobuf::Timestamp]
        #     Output only. Time when the BatchPredictionJob first entered the
        #     `JOB_STATE_RUNNING` state.
        # @!attribute [r] end_time
        #   @return [::Google::Protobuf::Timestamp]
        #     Output only. Time when the BatchPredictionJob entered any of the
        #     following states: `JOB_STATE_SUCCEEDED`, `JOB_STATE_FAILED`,
        #     `JOB_STATE_CANCELLED`.
        # @!attribute [r] update_time
        #   @return [::Google::Protobuf::Timestamp]
        #     Output only. Time when the BatchPredictionJob was most recently
        #     updated.
        # @!attribute [rw] labels
        #   @return [::Google::Protobuf::Map{::String => ::String}]
        #     The labels with user-defined metadata to organize BatchPredictionJobs.
        #
        #     Label keys and values can be no longer than 64 characters
        #     (Unicode codepoints), and can only contain lowercase letters, numeric
        #     characters, underscores and dashes. International characters are
        #     allowed.
        #
        #     See https://goo.gl/xmQnxf for more information and examples of labels.
        # @!attribute [rw] encryption_spec
        #   @return [::Google::Cloud::AIPlatform::V1::EncryptionSpec]
        #     Customer-managed encryption key options for a BatchPredictionJob. If
        #     this is set, then all resources created by the BatchPredictionJob will
        #     be encrypted with the provided encryption key.
        # @!attribute [rw] disable_container_logging
        #   @return [::Boolean]
        #     For custom-trained Models and AutoML Tabular Models, the container of
        #     the DeployedModel instances sends `stderr` and `stdout` streams to
        #     Cloud Logging by default. Note that these logs incur cost, which is
        #     subject to [Cloud Logging
        #     pricing](https://cloud.google.com/logging/pricing).
        #
        #     Users can disable container logging by setting this flag to true.
        class BatchPredictionJob
          include ::Google::Protobuf::MessageExts
          extend ::Google::Protobuf::MessageExts::ClassMethods

          # Configures the input to
          # {::Google::Cloud::AIPlatform::V1::BatchPredictionJob BatchPredictionJob}.
          # See
          # {::Google::Cloud::AIPlatform::V1::Model#supported_input_storage_formats Model.supported_input_storage_formats}
          # for the Model's supported input formats, and how instances should be
          # expressed via any of them.
          # @!attribute [rw] gcs_source
          #   @return [::Google::Cloud::AIPlatform::V1::GcsSource]
          #     The Cloud Storage location for the input instances.
          # @!attribute [rw] bigquery_source
          #   @return [::Google::Cloud::AIPlatform::V1::BigQuerySource]
          #     The BigQuery location of the input table.
          #     The schema of the table should be in the format described by the
          #     given context OpenAPI Schema, if one is provided. The table may
          #     contain additional columns that are not described by the schema,
          #     and they will be ignored.
          # @!attribute [rw] instances_format
          #   @return [::String]
          #     Required. The format in which instances are given; it must be one
          #     of the [Model's][google.cloud.aiplatform.v1.BatchPredictionJob.model]
          #     {::Google::Cloud::AIPlatform::V1::Model#supported_input_storage_formats supported_input_storage_formats}.
          class InputConfig
            include ::Google::Protobuf::MessageExts
            extend ::Google::Protobuf::MessageExts::ClassMethods
          end

          # Configuration defining how to transform batch prediction input
          # instances to the instances that the Model accepts.
          # @!attribute [rw] instance_type
          #   @return [::String]
          #     The format of the instance that the Model accepts. Vertex AI will
          #     convert compatible
          #     [batch prediction input instance
          #     formats][google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig.instances_format]
          #     to the specified format.
          #
          #     Supported values are:
          #
          #     * `object`: Each input is converted to JSON object format.
          #         * For `bigquery`, each row is converted to an object.
          #         * For `jsonl`, each line of the JSONL input must be an object.
          #         * Does not apply to `csv`, `file-list`, `tf-record`, or
          #           `tf-record-gzip`.
          #
          #     * `array`: Each input is converted to JSON array format.
          #         * For `bigquery`, each row is converted to an array. The order
          #           of columns is determined by the BigQuery column order, unless
          #           {::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig#included_fields included_fields}
          #           is populated.
          #           {::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig#included_fields included_fields}
          #           must be populated for specifying field orders.
          #         * For `jsonl`, if each line of the JSONL input is an object,
          #           {::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig#included_fields included_fields}
          #           must be populated for specifying field orders.
          #         * Does not apply to `csv`, `file-list`, `tf-record`, or
          #           `tf-record-gzip`.
          #
          #     If not specified, Vertex AI converts the batch prediction input as
          #     follows:
          #
          #     * For `bigquery` and `csv`, the behavior is the same as `array`.
          #       The order of columns is the same as defined in the file or table,
          #       unless
          #       {::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig#included_fields included_fields}
          #       is populated.
          #     * For `jsonl`, the prediction instance format is determined by
          #       each line of the input.
          #     * For `tf-record`/`tf-record-gzip`, each record will be converted
          #       to an object in the format of `{"b64": <value>}`, where `<value>`
          #       is the Base64-encoded string of the content of the record.
          #     * For `file-list`, each file in the list will be converted to an
          #       object in the format of `{"b64": <value>}`, where `<value>` is
          #       the Base64-encoded string of the content of the file.
          # @!attribute [rw] key_field
          #   @return [::String]
          #     The name of the field that is considered as a key.
          #
          #     The values identified by the key field are not included in the
          #     transformed instances that are sent to the Model. This is similar
          #     to specifying the name of the field in
          #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig#excluded_fields excluded_fields}.
          #     In addition, the batch prediction output will not include the
          #     instances. Instead the output will only include the value of the
          #     key field, in a field named `key` in the output:
          #
          #     * For `jsonl` output format, the output will have a `key` field
          #       instead of the `instance` field.
          #     * For `csv`/`bigquery` output format, the output will have a `key`
          #       column instead of the instance feature columns.
          #
          #     The input must be JSONL with objects at each line, CSV, BigQuery
          #     or TfRecord.
          # @!attribute [rw] included_fields
          #   @return [::Array<::String>]
          #     Fields that will be included in the prediction instance that is
          #     sent to the Model.
          #
          #     If
          #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig#instance_type instance_type}
          #     is `array`, the order of field names in included_fields also
          #     determines the order of the values in the array.
          #
          #     When included_fields is populated,
          #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig#excluded_fields excluded_fields}
          #     must be empty.
          #
          #     The input must be JSONL with objects at each line, BigQuery
          #     or TfRecord.
          # @!attribute [rw] excluded_fields
          #   @return [::Array<::String>]
          #     Fields that will be excluded from the prediction instance that is
          #     sent to the Model.
          #
          #     Excluded fields will be attached to the batch prediction output if
          #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig#key_field key_field}
          #     is not specified.
          #
          #     When excluded_fields is populated,
          #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob::InstanceConfig#included_fields included_fields}
          #     must be empty.
          #
          #     The input must be JSONL with objects at each line, BigQuery
          #     or TfRecord.
          class InstanceConfig
            include ::Google::Protobuf::MessageExts
            extend ::Google::Protobuf::MessageExts::ClassMethods
          end

          # Configures the output of
          # {::Google::Cloud::AIPlatform::V1::BatchPredictionJob BatchPredictionJob}.
          # See
          # {::Google::Cloud::AIPlatform::V1::Model#supported_output_storage_formats Model.supported_output_storage_formats}
          # for supported output formats, and how predictions are expressed via
          # any of them.
          # @!attribute [rw] gcs_destination
          #   @return [::Google::Cloud::AIPlatform::V1::GcsDestination]
          #     The Cloud Storage location of the directory where the output is to
          #     be written to. In the given directory a new directory is created.
          #     Its name is `prediction-<model-display-name>-<job-create-time>`,
          #     where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format.
          #     Inside of it files `predictions_0001.<extension>`,
          #     `predictions_0002.<extension>`, ..., `predictions_N.<extension>`
          #     are created, where `<extension>` depends on the chosen
          #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob::OutputConfig#predictions_format predictions_format},
          #     and N may equal 0001 and depends on the total number of
          #     successfully predicted instances. If the Model has both
          #     {::Google::Cloud::AIPlatform::V1::PredictSchemata#instance_schema_uri instance}
          #     and
          #     {::Google::Cloud::AIPlatform::V1::PredictSchemata#parameters_schema_uri prediction}
          #     schemata defined, then each such file contains predictions as per
          #     the
          #     {::Google::Cloud::AIPlatform::V1::BatchPredictionJob::OutputConfig#predictions_format predictions_format}.
          #     If prediction for any instance failed (partially or completely),
          #     then additional `errors_0001.<extension>`,
          #     `errors_0002.<extension>`, ..., `errors_N.<extension>` files are
          #     created (N depends on the total number of failed predictions).
          #     These files contain the failed instances, as per their schema,
          #     followed by an additional `error` field which as value has
          #     {::Google::Rpc::Status google.rpc.Status}
          #     containing only `code` and `message` fields.
          # @!attribute [rw] bigquery_destination
          #   @return [::Google::Cloud::AIPlatform::V1::BigQueryDestination]
          #     The BigQuery project or dataset location where the output is to be
          #     written to. If project is provided, a new dataset is created with
          #     name `prediction_<model-display-name>_<job-create-time>`, where
          #     `<model-display-name>` is made BigQuery-dataset-name compatible
          #     (for example, most special characters become underscores), and the
          #     timestamp is in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601"
          #     format. In the dataset two tables will be created, `predictions`
          #     and `errors`.
          #     If the Model has both
          #     {::Google::Cloud::AIPlatform::V1::PredictSchemata#instance_schema_uri instance}
          #     and
          #     {::Google::Cloud::AIPlatform::V1::PredictSchemata#parameters_schema_uri prediction}
          #     schemata defined, then the tables have columns as follows: the
          #     `predictions` table contains instances for which the prediction
          #     succeeded; it has columns as per a concatenation of the Model's
          #     instance and prediction schemata. The `errors` table contains rows
          #     for which the prediction has failed; it has instance columns, as
          #     per the instance schema, followed by a single "errors" column,
          #     which as values has {::Google::Rpc::Status google.rpc.Status}
          #     represented as a STRUCT, containing only `code` and `message`.
          # @!attribute [rw] predictions_format
          #   @return [::String]
          #     Required. The format in which Vertex AI gives the predictions; it
          #     must be one of the
          #     [Model's][google.cloud.aiplatform.v1.BatchPredictionJob.model]
          #     {::Google::Cloud::AIPlatform::V1::Model#supported_output_storage_formats supported_output_storage_formats}.
          class OutputConfig
            include ::Google::Protobuf::MessageExts
            extend ::Google::Protobuf::MessageExts::ClassMethods
          end

          # Further describes this job's output.
          # Supplements
          # {::Google::Cloud::AIPlatform::V1::BatchPredictionJob#output_config output_config}.
          # @!attribute [r] gcs_output_directory
          #   @return [::String]
          #     Output only. The full path of the Cloud Storage directory created,
          #     into which the prediction output is written.
          # @!attribute [r] bigquery_output_dataset
          #   @return [::String]
          #     Output only. The path of the BigQuery dataset created, in
          #     `bq://projectId.bqDatasetId` format, into which the prediction
          #     output is written.
          # @!attribute [r] bigquery_output_table
          #   @return [::String]
          #     Output only. The name of the BigQuery table created, in
          #     `predictions_<timestamp>` format, into which the prediction output
          #     is written. Can be used by UI to generate the BigQuery output path,
          #     for example.
          class OutputInfo
            include ::Google::Protobuf::MessageExts
            extend ::Google::Protobuf::MessageExts::ClassMethods
          end

          # @!attribute [rw] key
          #   @return [::String]
          # @!attribute [rw] value
          #   @return [::String]
          class LabelsEntry
            include ::Google::Protobuf::MessageExts
            extend ::Google::Protobuf::MessageExts::ClassMethods
          end
        end
      end
    end
  end
end
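The `InstanceConfig` semantics documented above (`instance_type`, `included_fields`, `excluded_fields`, `key_field`) can be illustrated with a small standalone sketch. This is not part of the generated client and performs no API calls; the `build_instance` helper, its parameter names, and the sample row are hypothetical, written only to mirror the documented conversion rules.

```ruby
# Sketch of how one input row (field name => value) would be transformed
# into the prediction instance sent to the Model, per the InstanceConfig
# documentation. Hypothetical helper; not part of the Vertex AI client.
def build_instance row, instance_type: "object", included_fields: nil, excluded_fields: [], key_field: nil
  # included_fields fixes both membership and order; when it is populated,
  # the API requires excluded_fields to be empty.
  fields = included_fields || (row.keys - excluded_fields)
  # The key field behaves like an excluded field: its value is not sent to
  # the Model (it is echoed back in a `key` output field instead).
  fields -= [key_field] if key_field

  case instance_type
  when "object" then fields.to_h { |f| [f, row[f]] }  # JSON-object instance
  when "array"  then fields.map { |f| row[f] }        # JSON-array instance
  else raise ArgumentError, "unsupported instance_type: #{instance_type.inspect}"
  end
end

row = { "age" => 42, "city" => "Paris", "id" => "u1" }

build_instance row, key_field: "id"
# => { "age" => 42, "city" => "Paris" }

build_instance row, instance_type: "array", included_fields: ["city", "age"]
# => ["Paris", 42]
```

Note how `included_fields` determines the value order for `array` instances, which is why the documentation requires it to be populated when the input format has no inherent column order.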