
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.dataproc/v1.getBatch


Gets the batch workload resource representation.

Using getBatch

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result. A short usage sketch follows the signatures below.

TypeScript

function getBatch(args: GetBatchArgs, opts?: InvokeOptions): Promise<GetBatchResult>
function getBatchOutput(args: GetBatchOutputArgs, opts?: InvokeOptions): Output<GetBatchResult>

Python

def get_batch(batch_id: Optional[str] = None,
              location: Optional[str] = None,
              project: Optional[str] = None,
              opts: Optional[InvokeOptions] = None) -> GetBatchResult
def get_batch_output(batch_id: Optional[pulumi.Input[str]] = None,
                     location: Optional[pulumi.Input[str]] = None,
                     project: Optional[pulumi.Input[str]] = None,
                     opts: Optional[InvokeOptions] = None) -> Output[GetBatchResult]

Go

func LookupBatch(ctx *Context, args *LookupBatchArgs, opts ...InvokeOption) (*LookupBatchResult, error)
func LookupBatchOutput(ctx *Context, args *LookupBatchOutputArgs, opts ...InvokeOption) LookupBatchResultOutput

> Note: This function is named LookupBatch in the Go SDK.

C#

public static class GetBatch
{
    public static Task<GetBatchResult> InvokeAsync(GetBatchArgs args, InvokeOptions? opts = null)
    public static Output<GetBatchResult> Invoke(GetBatchInvokeArgs args, InvokeOptions? opts = null)
}

Java

public static CompletableFuture<GetBatchResult> getBatch(GetBatchArgs args, InvokeOptions options)
public static Output<GetBatchResult> getBatch(GetBatchArgs args, InvokeOptions options)

YAML

fn::invoke:
  function: google-native:dataproc/v1:getBatch
  arguments:
    # arguments dictionary
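
For example, a minimal TypeScript sketch showing both forms (the batch ID, location, and project are placeholders):

import * as pulumi from "@pulumi/pulumi";
import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments, Promise-wrapped result.
const direct = google_native.dataproc.v1.getBatch({
    batchId: "example-batch",    // placeholder
    location: "us-central1",     // placeholder
    project: "example-project",  // optional; defaults to the provider's project
});
export const directState = direct.then(b => b.state);

// Output form: Input-wrapped arguments, Output-wrapped result,
// so it composes with values that are not yet known.
const wrapped = google_native.dataproc.v1.getBatchOutput({
    batchId: pulumi.output("example-batch"),
    location: "us-central1",
});
export const wrappedName = wrapped.name;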

The following arguments are supported:

C#

BatchId This property is required. string
Location This property is required. string
Project string

Go

BatchId This property is required. string
Location This property is required. string
Project string

Java

batchId This property is required. String
location This property is required. String
project String

TypeScript

batchId This property is required. string
location This property is required. string
project string

Python

batch_id This property is required. str
location This property is required. str
project str

YAML

batchId This property is required. String
location This property is required. String
project String

getBatch Result

The following output properties are available:

C#

CreateTime string
The time when the batch was created.
Creator string
The email address of the user who created the batch.
EnvironmentConfig Pulumi.GoogleNative.Dataproc.V1.Outputs.EnvironmentConfigResponse
Optional. Environment configuration for the batch execution.
Labels Dictionary<string, string>
Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
Name string
The resource name of the batch.
Operation string
The resource name of the operation associated with this batch.
PysparkBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.PySparkBatchResponse
Optional. PySpark batch config.
RuntimeConfig Pulumi.GoogleNative.Dataproc.V1.Outputs.RuntimeConfigResponse
Optional. Runtime configuration for the batch execution.
RuntimeInfo Pulumi.GoogleNative.Dataproc.V1.Outputs.RuntimeInfoResponse
Runtime information about batch execution.
SparkBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.SparkBatchResponse
Optional. Spark batch config.
SparkRBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.SparkRBatchResponse
Optional. SparkR batch config.
SparkSqlBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.SparkSqlBatchResponse
Optional. SparkSql batch config.
State string
The state of the batch.
StateHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.StateHistoryResponse>
Historical state information for the batch.
StateMessage string
Batch state details, such as a failure description if the state is FAILED.
StateTime string
The time when the batch entered its current state.
Uuid string
A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.

Go

CreateTime string
The time when the batch was created.
Creator string
The email address of the user who created the batch.
EnvironmentConfig EnvironmentConfigResponse
Optional. Environment configuration for the batch execution.
Labels map[string]string
Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
Name string
The resource name of the batch.
Operation string
The resource name of the operation associated with this batch.
PysparkBatch PySparkBatchResponse
Optional. PySpark batch config.
RuntimeConfig RuntimeConfigResponse
Optional. Runtime configuration for the batch execution.
RuntimeInfo RuntimeInfoResponse
Runtime information about batch execution.
SparkBatch SparkBatchResponse
Optional. Spark batch config.
SparkRBatch SparkRBatchResponse
Optional. SparkR batch config.
SparkSqlBatch SparkSqlBatchResponse
Optional. SparkSql batch config.
State string
The state of the batch.
StateHistory []StateHistoryResponse
Historical state information for the batch.
StateMessage string
Batch state details, such as a failure description if the state is FAILED.
StateTime string
The time when the batch entered its current state.
Uuid string
A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.

Java

createTime String
The time when the batch was created.
creator String
The email address of the user who created the batch.
environmentConfig EnvironmentConfigResponse
Optional. Environment configuration for the batch execution.
labels Map<String,String>
Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
name String
The resource name of the batch.
operation String
The resource name of the operation associated with this batch.
pysparkBatch PySparkBatchResponse
Optional. PySpark batch config.
runtimeConfig RuntimeConfigResponse
Optional. Runtime configuration for the batch execution.
runtimeInfo RuntimeInfoResponse
Runtime information about batch execution.
sparkBatch SparkBatchResponse
Optional. Spark batch config.
sparkRBatch SparkRBatchResponse
Optional. SparkR batch config.
sparkSqlBatch SparkSqlBatchResponse
Optional. SparkSql batch config.
state String
The state of the batch.
stateHistory List<StateHistoryResponse>
Historical state information for the batch.
stateMessage String
Batch state details, such as a failure description if the state is FAILED.
stateTime String
The time when the batch entered its current state.
uuid String
A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.

TypeScript

createTime string
The time when the batch was created.
creator string
The email address of the user who created the batch.
environmentConfig EnvironmentConfigResponse
Optional. Environment configuration for the batch execution.
labels {[key: string]: string}
Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
name string
The resource name of the batch.
operation string
The resource name of the operation associated with this batch.
pysparkBatch PySparkBatchResponse
Optional. PySpark batch config.
runtimeConfig RuntimeConfigResponse
Optional. Runtime configuration for the batch execution.
runtimeInfo RuntimeInfoResponse
Runtime information about batch execution.
sparkBatch SparkBatchResponse
Optional. Spark batch config.
sparkRBatch SparkRBatchResponse
Optional. SparkR batch config.
sparkSqlBatch SparkSqlBatchResponse
Optional. SparkSql batch config.
state string
The state of the batch.
stateHistory StateHistoryResponse[]
Historical state information for the batch.
stateMessage string
Batch state details, such as a failure description if the state is FAILED.
stateTime string
The time when the batch entered its current state.
uuid string
A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.

Python

create_time str
The time when the batch was created.
creator str
The email address of the user who created the batch.
environment_config EnvironmentConfigResponse
Optional. Environment configuration for the batch execution.
labels Mapping[str, str]
Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
name str
The resource name of the batch.
operation str
The resource name of the operation associated with this batch.
pyspark_batch PySparkBatchResponse
Optional. PySpark batch config.
runtime_config RuntimeConfigResponse
Optional. Runtime configuration for the batch execution.
runtime_info RuntimeInfoResponse
Runtime information about batch execution.
spark_batch SparkBatchResponse
Optional. Spark batch config.
spark_r_batch SparkRBatchResponse
Optional. SparkR batch config.
spark_sql_batch SparkSqlBatchResponse
Optional. SparkSql batch config.
state str
The state of the batch.
state_history Sequence[StateHistoryResponse]
Historical state information for the batch.
state_message str
Batch state details, such as a failure description if the state is FAILED.
state_time str
The time when the batch entered its current state.
uuid str
A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.

YAML

createTime String
The time when the batch was created.
creator String
The email address of the user who created the batch.
environmentConfig Property Map
Optional. Environment configuration for the batch execution.
labels Map<String>
Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
name String
The resource name of the batch.
operation String
The resource name of the operation associated with this batch.
pysparkBatch Property Map
Optional. PySpark batch config.
runtimeConfig Property Map
Optional. Runtime configuration for the batch execution.
runtimeInfo Property Map
Runtime information about batch execution.
sparkBatch Property Map
Optional. Spark batch config.
sparkRBatch Property Map
Optional. SparkR batch config.
sparkSqlBatch Property Map
Optional. SparkSql batch config.
state String
The state of the batch.
stateHistory List<Property Map>
Historical state information for the batch.
stateMessage String
Batch state details, such as a failure description if the state is FAILED.
stateTime String
The time when the batch entered its current state.
uuid String
A batch UUID (Universally Unique Identifier). The service generates this value when it creates the batch.
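
As a TypeScript sketch (identifiers are placeholders), scalar outputs can be exported directly, while map- and object-typed outputs are usually unwrapped with apply():

import * as google_native from "@pulumi/google-native";

const batch = google_native.dataproc.v1.getBatchOutput({
    batchId: "example-batch",  // placeholder
    location: "us-central1",   // placeholder
});

// Scalar outputs export directly.
export const state = batch.state;
export const created = batch.createTime;

// Map-typed outputs can be transformed with apply().
export const labelCount = batch.labels.apply(l => Object.keys(l ?? {}).length);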

Supporting Types

EnvironmentConfigResponse

C#

ExecutionConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfigResponse
Optional. Execution configuration for a workload.
PeripheralsConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfigResponse
Optional. Peripherals configuration that the workload has access to.

Go

ExecutionConfig This property is required. ExecutionConfigResponse
Optional. Execution configuration for a workload.
PeripheralsConfig This property is required. PeripheralsConfigResponse
Optional. Peripherals configuration that the workload has access to.

Java

executionConfig This property is required. ExecutionConfigResponse
Optional. Execution configuration for a workload.
peripheralsConfig This property is required. PeripheralsConfigResponse
Optional. Peripherals configuration that the workload has access to.

TypeScript

executionConfig This property is required. ExecutionConfigResponse
Optional. Execution configuration for a workload.
peripheralsConfig This property is required. PeripheralsConfigResponse
Optional. Peripherals configuration that the workload has access to.

Python

execution_config This property is required. ExecutionConfigResponse
Optional. Execution configuration for a workload.
peripherals_config This property is required. PeripheralsConfigResponse
Optional. Peripherals configuration that the workload has access to.

YAML

executionConfig This property is required. Property Map
Optional. Execution configuration for a workload.
peripheralsConfig This property is required. Property Map
Optional. Peripherals configuration that the workload has access to.

ExecutionConfigResponse

C#

IdleTtl This property is required. string
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
KmsKey This property is required. string
Optional. The Cloud KMS key to use for encryption.
NetworkTags This property is required. List<string>
Optional. Tags used for network traffic control.
NetworkUri This property is required. string
Optional. Network URI to connect workload to.
ServiceAccount This property is required. string
Optional. Service account used to execute the workload.
StagingBucket This property is required. string
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
SubnetworkUri This property is required. string
Optional. Subnetwork URI to connect workload to.
Ttl This property is required. string
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

Go

IdleTtl This property is required. string
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
KmsKey This property is required. string
Optional. The Cloud KMS key to use for encryption.
NetworkTags This property is required. []string
Optional. Tags used for network traffic control.
NetworkUri This property is required. string
Optional. Network URI to connect workload to.
ServiceAccount This property is required. string
Optional. Service account used to execute the workload.
StagingBucket This property is required. string
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
SubnetworkUri This property is required. string
Optional. Subnetwork URI to connect workload to.
Ttl This property is required. string
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

Java

idleTtl This property is required. String
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kmsKey This property is required. String
Optional. The Cloud KMS key to use for encryption.
networkTags This property is required. List<String>
Optional. Tags used for network traffic control.
networkUri This property is required. String
Optional. Network URI to connect workload to.
serviceAccount This property is required. String
Optional. Service account used to execute the workload.
stagingBucket This property is required. String
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetworkUri This property is required. String
Optional. Subnetwork URI to connect workload to.
ttl This property is required. String
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

TypeScript

idleTtl This property is required. string
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kmsKey This property is required. string
Optional. The Cloud KMS key to use for encryption.
networkTags This property is required. string[]
Optional. Tags used for network traffic control.
networkUri This property is required. string
Optional. Network URI to connect workload to.
serviceAccount This property is required. string
Optional. Service account used to execute the workload.
stagingBucket This property is required. string
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetworkUri This property is required. string
Optional. Subnetwork URI to connect workload to.
ttl This property is required. string
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

Python

idle_ttl This property is required. str
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kms_key This property is required. str
Optional. The Cloud KMS key to use for encryption.
network_tags This property is required. Sequence[str]
Optional. Tags used for network traffic control.
network_uri This property is required. str
Optional. Network URI to connect workload to.
service_account This property is required. str
Optional. Service account used to execute the workload.
staging_bucket This property is required. str
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetwork_uri This property is required. str
Optional. Subnetwork URI to connect workload to.
ttl This property is required. str
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

YAML

idleTtl This property is required. String
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kmsKey This property is required. String
Optional. The Cloud KMS key to use for encryption.
networkTags This property is required. List<String>
Optional. Tags used for network traffic control.
networkUri This property is required. String
Optional. Network URI to connect workload to.
serviceAccount This property is required. String
Optional. Service account used to execute the workload.
stagingBucket This property is required. String
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetworkUri This property is required. String
Optional. Subnetwork URI to connect workload to.
ttl This property is required. String
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
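
Both ttl and idle_ttl are encoded as the proto3 JSON representation of Duration: a decimal number of seconds followed by "s", such as "3600s". A small illustrative TypeScript helper, assuming well-formed input of that shape:

// Convert a proto3 JSON Duration string such as "3600s" or "0.5s" to milliseconds.
function durationToMillis(d: string): number {
    if (!d.endsWith("s")) {
        throw new Error(`not a JSON Duration string: ${d}`);
    }
    return Number(d.slice(0, -1)) * 1000;
}

// durationToMillis("600s") === 600000, i.e. the 10-minute minimum noted above.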

PeripheralsConfigResponse

C#

MetastoreService This property is required. string
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
SparkHistoryServerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.

Go

MetastoreService This property is required. string
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
SparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.

Java

metastoreService This property is required. String
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
sparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.

TypeScript

metastoreService This property is required. string
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
sparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.

Python

metastore_service This property is required. str
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
spark_history_server_config This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.

YAML

metastoreService This property is required. String
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
sparkHistoryServerConfig This property is required. Property Map
Optional. The Spark History Server configuration for the workload.

PyPiRepositoryConfigResponse

C#

PypiRepository This property is required. string
Optional. PyPi repository address.

Go

PypiRepository This property is required. string
Optional. PyPi repository address.

Java

pypiRepository This property is required. String
Optional. PyPi repository address.

TypeScript

pypiRepository This property is required. string
Optional. PyPi repository address.

Python

pypi_repository This property is required. str
Optional. PyPi repository address.

YAML

pypiRepository This property is required. String
Optional. PyPi repository address.

PySparkBatchResponse

C#

ArchiveUris This property is required. List<string>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. List<string>
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
FileUris This property is required. List<string>
Optional. HCFS URIs of files to be placed in the working directory of each executor.
JarFileUris This property is required. List<string>
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
MainPythonFileUri This property is required. string
The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
PythonFileUris This property is required. List<string>
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

Go

ArchiveUris This property is required. []string
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. []string
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
FileUris This property is required. []string
Optional. HCFS URIs of files to be placed in the working directory of each executor.
JarFileUris This property is required. []string
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
MainPythonFileUri This property is required. string
The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
PythonFileUris This property is required. []string
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

Java

archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
mainPythonFileUri This property is required. String
The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
pythonFileUris This property is required. List<String>
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

TypeScript

archiveUris This property is required. string[]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. string[]
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
fileUris This property is required. string[]
Optional. HCFS URIs of files to be placed in the working directory of each executor.
jarFileUris This property is required. string[]
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
mainPythonFileUri This property is required. string
The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
pythonFileUris This property is required. string[]
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

Python

archive_uris This property is required. Sequence[str]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. Sequence[str]
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
file_uris This property is required. Sequence[str]
Optional. HCFS URIs of files to be placed in the working directory of each executor.
jar_file_uris This property is required. Sequence[str]
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
main_python_file_uri This property is required. str
The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
python_file_uris This property is required. Sequence[str]
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

YAML

archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
mainPythonFileUri This property is required. String
The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
pythonFileUris This property is required. List<String>
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

RepositoryConfigResponse

C#

PypiRepositoryConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfigResponse
Optional. Configuration for PyPi repository.

Go

PypiRepositoryConfig This property is required. PyPiRepositoryConfigResponse
Optional. Configuration for PyPi repository.

Java

pypiRepositoryConfig This property is required. PyPiRepositoryConfigResponse
Optional. Configuration for PyPi repository.

TypeScript

pypiRepositoryConfig This property is required. PyPiRepositoryConfigResponse
Optional. Configuration for PyPi repository.

Python

pypi_repository_config This property is required. PyPiRepositoryConfigResponse
Optional. Configuration for PyPi repository.

YAML

pypiRepositoryConfig This property is required. Property Map
Optional. Configuration for PyPi repository.

RuntimeConfigResponse

C#

ContainerImage This property is required. string
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
Properties This property is required. Dictionary<string, string>
Optional. A mapping of property names to values, which are used to configure workload execution.
RepositoryConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.RepositoryConfigResponse
Optional. Dependency repository configuration.
Version This property is required. string
Optional. Version of the batch runtime.

Go

ContainerImage This property is required. string
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
Properties This property is required. map[string]string
Optional. A mapping of property names to values, which are used to configure workload execution.
RepositoryConfig This property is required. RepositoryConfigResponse
Optional. Dependency repository configuration.
Version This property is required. string
Optional. Version of the batch runtime.

Java

containerImage This property is required. String
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties This property is required. Map<String,String>
Optional. A mapping of property names to values, which are used to configure workload execution.
repositoryConfig This property is required. RepositoryConfigResponse
Optional. Dependency repository configuration.
version This property is required. String
Optional. Version of the batch runtime.

TypeScript

containerImage This property is required. string
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties This property is required. {[key: string]: string}
Optional. A mapping of property names to values, which are used to configure workload execution.
repositoryConfig This property is required. RepositoryConfigResponse
Optional. Dependency repository configuration.
version This property is required. string
Optional. Version of the batch runtime.

Python

container_image This property is required. str
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties This property is required. Mapping[str, str]
Optional. A mapping of property names to values, which are used to configure workload execution.
repository_config This property is required. RepositoryConfigResponse
Optional. Dependency repository configuration.
version This property is required. str
Optional. Version of the batch runtime.

YAML

containerImage This property is required. String
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties This property is required. Map<String>
Optional. A mapping of property names to values, which are used to configure workload execution.
repositoryConfig This property is required. Property Map
Optional. Dependency repository configuration.
version This property is required. String
Optional. Version of the batch runtime.

RuntimeInfoResponse

C#

ApproximateUsage This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageMetricsResponse
Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes, and other Dataproc developments).
CurrentUsage This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageSnapshotResponse
Snapshot of current workload resource usage.
DiagnosticOutputUri This property is required. string
A URI pointing to the location of the diagnostics tarball.
Endpoints This property is required. Dictionary<string, string>
Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
OutputUri This property is required. string
A URI pointing to the location of the stdout and stderr of the workload.

Go

ApproximateUsage This property is required. UsageMetricsResponse
Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes, and other Dataproc developments).
CurrentUsage This property is required. UsageSnapshotResponse
Snapshot of current workload resource usage.
DiagnosticOutputUri This property is required. string
A URI pointing to the location of the diagnostics tarball.
Endpoints This property is required. map[string]string
Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
OutputUri This property is required. string
A URI pointing to the location of the stdout and stderr of the workload.

Java

approximateUsage This property is required. UsageMetricsResponse
Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes, and other Dataproc developments).
currentUsage This property is required. UsageSnapshotResponse
Snapshot of current workload resource usage.
diagnosticOutputUri This property is required. String
A URI pointing to the location of the diagnostics tarball.
endpoints This property is required. Map<String,String>
Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
outputUri This property is required. String
A URI pointing to the location of the stdout and stderr of the workload.

TypeScript

approximateUsage This property is required. UsageMetricsResponse
Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes, and other Dataproc developments).
currentUsage This property is required. UsageSnapshotResponse
Snapshot of current workload resource usage.
diagnosticOutputUri This property is required. string
A URI pointing to the location of the diagnostics tarball.
endpoints This property is required. {[key: string]: string}
Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
outputUri This property is required. string
A URI pointing to the location of the stdout and stderr of the workload.

Python

approximate_usage This property is required. UsageMetricsResponse
Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes, and other Dataproc developments).
current_usage This property is required. UsageSnapshotResponse
Snapshot of current workload resource usage.
diagnostic_output_uri This property is required. str
A URI pointing to the location of the diagnostics tarball.
endpoints This property is required. Mapping[str, str]
Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
output_uri This property is required. str
A URI pointing to the location of the stdout and stderr of the workload.

YAML

approximateUsage This property is required. Property Map
Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes, and other Dataproc developments).
currentUsage This property is required. Property Map
Snapshot of current workload resource usage.
diagnosticOutputUri This property is required. String
A URI pointing to the location of the diagnostics tarball.
endpoints This property is required. Map<String>
Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
outputUri This property is required. String
A URI pointing to the location of the stdout and stderr of the workload.
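
For instance, a hedged TypeScript sketch that surfaces the runtime endpoints of a batch (names are placeholders; which keys appear in the endpoints map depends on the workload):

import * as google_native from "@pulumi/google-native";

const batch = google_native.dataproc.v1.getBatchOutput({
    batchId: "example-batch",  // placeholder
    location: "us-central1",   // placeholder
});

// endpoints maps endpoint names (such as a Spark UI) to their URIs.
export const endpointUris = batch.runtimeInfo.apply(info => info.endpoints);
// outputUri points at the workload's stdout and stderr.
export const driverOutput = batch.runtimeInfo.apply(info => info.outputUri);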

SparkBatchResponse

C#

ArchiveUris This property is required. List<string>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. List<string>
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
FileUris This property is required. List<string>
Optional. HCFS URIs of files to be placed in the working directory of each executor.
JarFileUris This property is required. List<string>
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
MainClass This property is required. string
Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
MainJarFileUri This property is required. string
Optional. The HCFS URI of the jar file that contains the main class.

Go

ArchiveUris This property is required. []string
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. []string
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
FileUris This property is required. []string
Optional. HCFS URIs of files to be placed in the working directory of each executor.
JarFileUris This property is required. []string
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
MainClass This property is required. string
Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
MainJarFileUri This property is required. string
Optional. The HCFS URI of the jar file that contains the main class.

Java

archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
mainClass This property is required. String
Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
mainJarFileUri This property is required. String
Optional. The HCFS URI of the jar file that contains the main class.

TypeScript

archiveUris This property is required. string[]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. string[]
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
fileUris This property is required. string[]
Optional. HCFS URIs of files to be placed in the working directory of each executor.
jarFileUris This property is required. string[]
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
mainClass This property is required. string
Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
mainJarFileUri This property is required. string
Optional. The HCFS URI of the jar file that contains the main class.

Python

archive_uris This property is required. Sequence[str]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. Sequence[str]
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
file_uris This property is required. Sequence[str]
Optional. HCFS URIs of files to be placed in the working directory of each executor.
jar_file_uris This property is required. Sequence[str]
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
main_class This property is required. str
Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
main_jar_file_uri This property is required. str
Optional. The HCFS URI of the jar file that contains the main class.

YAML

archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
mainClass This property is required. String
Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
mainJarFileUri This property is required. String
Optional. The HCFS URI of the jar file that contains the main class.
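
In the underlying Dataproc API, the batch-type configs (pysparkBatch, sparkBatch, sparkRBatch, sparkSqlBatch) form a oneof, so only one of them carries data for a given batch. A hedged TypeScript sketch for telling them apart (queryFileUri is assumed from the SparkSqlBatch message in the underlying API; it is not documented in this section):

import * as google_native from "@pulumi/google-native";

const batch = google_native.dataproc.v1.getBatchOutput({
    batchId: "example-batch",  // placeholder
    location: "us-central1",   // placeholder
});

// Check each config's distinguishing main field.
export const workloadType = batch.apply(b => {
    if (b.pysparkBatch?.mainPythonFileUri) return "pyspark";
    if (b.sparkBatch?.mainClass || b.sparkBatch?.mainJarFileUri) return "spark";
    if (b.sparkRBatch?.mainRFileUri) return "sparkR";
    if (b.sparkSqlBatch?.queryFileUri) return "sparkSql";
    return "unknown";
});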

SparkHistoryServerConfigResponse

C#

DataprocCluster This property is required. string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

Go

DataprocCluster This property is required. string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

Java

dataprocCluster This property is required. String
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

TypeScript

dataprocCluster This property is required. string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

Python

dataproc_cluster This property is required. str
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

YAML

dataprocCluster This property is required. String
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

SparkRBatchResponse

C#

ArchiveUris This property is required. List<string>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. List<string>
Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
FileUris This property is required. List<string>
Optional. HCFS URIs of files to be placed in the working directory of each executor.
MainRFileUri This property is required. string
The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
ArchiveUris This property is required. []string
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. []string
Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
FileUris This property is required. []string
Optional. HCFS URIs of files to be placed in the working directory of each executor.
MainRFileUri This property is required. string
The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor.
mainRFileUri This property is required. String
The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
archiveUris This property is required. string[]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. string[]
Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
fileUris This property is required. string[]
Optional. HCFS URIs of files to be placed in the working directory of each executor.
mainRFileUri This property is required. string
The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
archive_uris This property is required. Sequence[str]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. Sequence[str]
Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
file_uris This property is required. Sequence[str]
Optional. HCFS URIs of files to be placed in the working directory of each executor.
main_r_file_uri This property is required. str
The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor.
mainRFileUri This property is required. String
The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.

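Since mainRFileUri must point at a .R or .r file, a quick client-side check can catch bad URIs before submission. A hypothetical TypeScript helper:

// Hypothetical check: the SparkR driver file must end in .R or .r.
function isValidMainRFileUri(uri: string): boolean {
    return /\.[rR]$/.test(uri);
}

console.log(isValidMainRFileUri("gs://my-bucket/jobs/analysis.R"));  // true
console.log(isValidMainRFileUri("gs://my-bucket/jobs/analysis.py")); // false
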
SparkSqlBatchResponse

JarFileUris This property is required. List<string>
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
QueryFileUri This property is required. string
The HCFS URI of the script that contains Spark SQL queries to execute.
QueryVariables This property is required. Dictionary<string, string>
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
JarFileUris This property is required. []string
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
QueryFileUri This property is required. string
The HCFS URI of the script that contains Spark SQL queries to execute.
QueryVariables This property is required. map[string]string
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
queryFileUri This property is required. String
The HCFS URI of the script that contains Spark SQL queries to execute.
queryVariables This property is required. Map<String,String>
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
jarFileUris This property is required. string[]
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
queryFileUri This property is required. string
The HCFS URI of the script that contains Spark SQL queries to execute.
queryVariables This property is required. {[key: string]: string}
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
jar_file_uris This property is required. Sequence[str]
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
query_file_uri This property is required. str
The HCFS URI of the script that contains Spark SQL queries to execute.
query_variables This property is required. Mapping[str, str]
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
queryFileUri This property is required. String
The HCFS URI of the script that contains Spark SQL queries to execute.
queryVariables This property is required. Map<String>
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

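Each queryVariables entry behaves as if a SET name="value"; statement preceded the script in queryFileUri. A TypeScript sketch of that equivalence (variable names and values are illustrative):

// Illustrative variables; each entry acts like `SET name="value";`.
const queryVariables: Record<string, string> = {
    run_date: "2023-11-29",
    env: "prod",
};

// The Spark SQL preamble the service conceptually applies:
const preamble = Object.entries(queryVariables)
    .map(([name, value]) => `SET ${name}="${value}";`)
    .join("\n");
console.log(preamble);
// SET run_date="2023-11-29";
// SET env="prod";
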
StateHistoryResponse

State This property is required. string
The state of the batch at this point in history.
StateMessage This property is required. string
Details about the state at this point in history.
StateStartTime This property is required. string
The time when the batch entered the historical state.
State This property is required. string
The state of the batch at this point in history.
StateMessage This property is required. string
Details about the state at this point in history.
StateStartTime This property is required. string
The time when the batch entered the historical state.
state This property is required. String
The state of the batch at this point in history.
stateMessage This property is required. String
Details about the state at this point in history.
stateStartTime This property is required. String
The time when the batch entered the historical state.
state This property is required. string
The state of the batch at this point in history.
stateMessage This property is required. string
Details about the state at this point in history.
stateStartTime This property is required. string
The time when the batch entered the historical state.
state This property is required. str
The state of the batch at this point in history.
state_message This property is required. str
Details about the state at this point in history.
state_start_time This property is required. str
The time when the batch entered the historical state.
state This property is required. String
The state of the batch at this point in history.
stateMessage This property is required. String
Details about the state at this point in history.
stateStartTime This property is required. String
The time when the batch entered the historical state.

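A short TypeScript sketch of rendering a batch's state transitions. The batch identifiers are placeholders, and it assumes the result exposes the history as a stateHistory array:

import * as google_native from "@pulumi/google-native";

const batch = google_native.dataproc.v1.getBatchOutput({
    batchId: "my-spark-batch", // placeholder
    location: "us-central1",   // placeholder
});

// One line per historical transition: "<start time> <state>: <message>".
export const transitions = batch.stateHistory.apply(history =>
    history.map(h => `${h.stateStartTime} ${h.state}: ${h.stateMessage}`));
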
UsageMetricsResponse

AcceleratorType This property is required. string
Optional. Accelerator type being used, if any.
MilliAcceleratorSeconds This property is required. string
Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
MilliDcuSeconds This property is required. string
Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
ShuffleStorageGbSeconds This property is required. string
Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
AcceleratorType This property is required. string
Optional. Accelerator type being used, if any.
MilliAcceleratorSeconds This property is required. string
Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
MilliDcuSeconds This property is required. string
Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
ShuffleStorageGbSeconds This property is required. string
Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
acceleratorType This property is required. String
Optional. Accelerator type being used, if any.
milliAcceleratorSeconds This property is required. String
Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milliDcuSeconds This property is required. String
Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffleStorageGbSeconds This property is required. String
Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
acceleratorType This property is required. string
Optional. Accelerator type being used, if any.
milliAcceleratorSeconds This property is required. string
Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milliDcuSeconds This property is required. string
Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffleStorageGbSeconds This property is required. string
Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
accelerator_type This property is required. str
Optional. Accelerator type being used, if any.
milli_accelerator_seconds This property is required. str
Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milli_dcu_seconds This property is required. str
Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffle_storage_gb_seconds This property is required. str
Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
acceleratorType This property is required. String
Optional. Accelerator type being used, if any.
milliAcceleratorSeconds This property is required. String
Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milliDcuSeconds This property is required. String
Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffleStorageGbSeconds This property is required. String
Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).

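These aggregate metrics are strings of milli-unit x seconds, so converting them into the units the pricing page bills in is simple arithmetic. Two hypothetical TypeScript helpers (actual billing rounds according to the pricing page):

// 1 DCU-hour = 1000 milliDCU x 3600 s = 3,600,000 milliDCU-seconds.
function dcuHours(milliDcuSeconds: string): number {
    return Number(milliDcuSeconds) / 3_600_000;
}

// Shuffle storage is metered in GB x seconds; divide by 3600 for GB-hours.
function shuffleGbHours(shuffleStorageGbSeconds: string): number {
    return Number(shuffleStorageGbSeconds) / 3_600;
}

console.log(dcuHours("14400000"));   // 4 DCU-hours
console.log(shuffleGbHours("7200")); // 2 GB-hours
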
UsageSnapshotResponse

AcceleratorType This property is required. string
Optional. Accelerator type being used, if any.
MilliAccelerator This property is required. string
Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
MilliDcu This property is required. string
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
MilliDcuPremium This property is required. string
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
ShuffleStorageGb This property is required. string
Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
ShuffleStorageGbPremium This property is required. string
Optional. Shuffle storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
SnapshotTime This property is required. string
Optional. The timestamp of the usage snapshot.
AcceleratorType This property is required. string
Optional. Accelerator type being used, if any.
MilliAccelerator This property is required. string
Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
MilliDcu This property is required. string
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
MilliDcuPremium This property is required. string
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
ShuffleStorageGb This property is required. string
Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
ShuffleStorageGbPremium This property is required. string
Optional. Shuffle storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
SnapshotTime This property is required. string
Optional. The timestamp of the usage snapshot.
acceleratorType This property is required. String
Optional. Accelerator type being used, if any.
milliAccelerator This property is required. String
Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milliDcu This property is required. String
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milliDcuPremium This property is required. String
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffleStorageGb This property is required. String
Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffleStorageGbPremium This property is required. String
Optional. Shuffle storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
snapshotTime This property is required. String
Optional. The timestamp of the usage snapshot.
acceleratorType This property is required. string
Optional. Accelerator type being used, if any.
milliAccelerator This property is required. string
Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milliDcu This property is required. string
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milliDcuPremium This property is required. string
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffleStorageGb This property is required. string
Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffleStorageGbPremium This property is required. string
Optional. Shuffle storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
snapshotTime This property is required. string
Optional. The timestamp of the usage snapshot.
accelerator_type This property is required. str
Optional. Accelerator type being used, if any.
milli_accelerator This property is required. str
Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milli_dcu This property is required. str
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milli_dcu_premium This property is required. str
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffle_storage_gb This property is required. str
Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffle_storage_gb_premium This property is required. str
Optional. Shuffle storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
snapshot_time This property is required. str
Optional. The timestamp of the usage snapshot.
acceleratorType This property is required. String
Optional. Accelerator type being used, if any.
milliAccelerator This property is required. String
Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milliDcu This property is required. String
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
milliDcuPremium This property is required. String
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffleStorageGb This property is required. String
Optional. Shuffle storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
shuffleStorageGbPremium This property is required. String
Optional. Shuffle storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
snapshotTime This property is required. String
Optional. The timestamp of the usage snapshot.

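Unlike the aggregate metrics above, a usage snapshot is a point-in-time reading in milli-units. A hypothetical TypeScript formatter (the interface mirrors a subset of the fields documented above; the values are illustrative):

// Mirrors a subset of the UsageSnapshotResponse fields shown above.
interface UsageSnapshotLike {
    milliDcu: string;
    shuffleStorageGb: string;
    snapshotTime: string;
}

// Dividing milliDcu by 1000 yields whole DCUs in use at snapshotTime.
function describeSnapshot(s: UsageSnapshotLike): string {
    const dcus = Number(s.milliDcu) / 1000;
    return `${s.snapshotTime}: ${dcus} DCUs, ${s.shuffleStorageGb} GB shuffle storage`;
}

console.log(describeSnapshot({
    milliDcu: "4000",
    shuffleStorageGb: "400",
    snapshotTime: "2023-11-29T12:00:00Z",
}));
// 2023-11-29T12:00:00Z: 4 DCUs, 400 GB shuffle storage
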
Package Details

Repository: Google Cloud Native (pulumi/pulumi-google-native)
License: Apache-2.0
