Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.dataproc/v1.getBatch
Gets the batch workload resource representation.
Using getBatch
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getBatch(args: GetBatchArgs, opts?: InvokeOptions): Promise<GetBatchResult>
function getBatchOutput(args: GetBatchOutputArgs, opts?: InvokeOptions): Output<GetBatchResult>
def get_batch(batch_id: Optional[str] = None,
location: Optional[str] = None,
project: Optional[str] = None,
opts: Optional[InvokeOptions] = None) -> GetBatchResult
def get_batch_output(batch_id: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
opts: Optional[InvokeOptions] = None) -> Output[GetBatchResult]
func LookupBatch(ctx *Context, args *LookupBatchArgs, opts ...InvokeOption) (*LookupBatchResult, error)
func LookupBatchOutput(ctx *Context, args *LookupBatchOutputArgs, opts ...InvokeOption) LookupBatchResultOutput
> Note: This function is named LookupBatch in the Go SDK.
public static class GetBatch
{
public static Task<GetBatchResult> InvokeAsync(GetBatchArgs args, InvokeOptions? opts = null)
public static Output<GetBatchResult> Invoke(GetBatchInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetBatchResult> getBatch(GetBatchArgs args, InvokeOptions options)
public static Output<GetBatchResult> getBatch(GetBatchArgs args, InvokeOptions options)
fn::invoke:
function: google-native:dataproc/v1:getBatch
arguments:
# arguments dictionary
The following arguments are supported:
- batchId string - This property is required.
- location string - This property is required.
- project string - Optional.
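As a minimal usage sketch (TypeScript), the direct form fetches the batch and resolves to a plain result; the batch ID, location, and project below are placeholder values:

import * as google_native from "@pulumi/google-native";

// Look up an existing Dataproc Serverless batch (placeholder identifiers).
const batch = google_native.dataproc.v1.getBatch({
    batchId: "my-batch",       // hypothetical batch ID
    location: "us-central1",   // region the batch was submitted to
    project: "my-gcp-project", // optional; defaults to the provider's project
});

// The direct form returns a Promise<GetBatchResult>.
batch.then(b => console.log(`Batch ${b.name} is in state ${b.state}`));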
getBatch Result
The following output properties are available:
- CreateTime string - The time when the batch was created.
- Creator string - The email address of the user who created the batch.
- EnvironmentConfig Pulumi.GoogleNative.Dataproc.V1.Outputs.EnvironmentConfigResponse - Optional. Environment configuration for the batch execution.
- Labels Dictionary<string, string> - Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- Name string - The resource name of the batch.
- Operation string - The resource name of the operation associated with this batch.
- PysparkBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.PySparkBatchResponse - Optional. PySpark batch config.
- RuntimeConfig Pulumi.GoogleNative.Dataproc.V1.Outputs.RuntimeConfigResponse - Optional. Runtime configuration for the batch execution.
- RuntimeInfo Pulumi.GoogleNative.Dataproc.V1.Outputs.RuntimeInfoResponse - Runtime information about batch execution.
- SparkBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.SparkBatchResponse - Optional. Spark batch config.
- SparkRBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.SparkRBatchResponse - Optional. SparkR batch config.
- SparkSqlBatch Pulumi.GoogleNative.Dataproc.V1.Outputs.SparkSqlBatchResponse - Optional. SparkSql batch config.
- State string - The state of the batch.
- StateHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.StateHistoryResponse> - Historical state information for the batch.
- StateMessage string - Batch state details, such as a failure description if the state is FAILED.
- StateTime string - The time when the batch entered a current state.
- Uuid string - A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
- CreateTime string - The time when the batch was created.
- Creator string - The email address of the user who created the batch.
- EnvironmentConfig EnvironmentConfigResponse - Optional. Environment configuration for the batch execution.
- Labels map[string]string - Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- Name string - The resource name of the batch.
- Operation string - The resource name of the operation associated with this batch.
- PysparkBatch PySparkBatchResponse - Optional. PySpark batch config.
- RuntimeConfig RuntimeConfigResponse - Optional. Runtime configuration for the batch execution.
- RuntimeInfo RuntimeInfoResponse - Runtime information about batch execution.
- SparkBatch SparkBatchResponse - Optional. Spark batch config.
- SparkRBatch SparkRBatchResponse - Optional. SparkR batch config.
- SparkSqlBatch SparkSqlBatchResponse - Optional. SparkSql batch config.
- State string - The state of the batch.
- StateHistory []StateHistoryResponse - Historical state information for the batch.
- StateMessage string - Batch state details, such as a failure description if the state is FAILED.
- StateTime string - The time when the batch entered a current state.
- Uuid string - A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
- createTime String - The time when the batch was created.
- creator String - The email address of the user who created the batch.
- environmentConfig EnvironmentConfigResponse - Optional. Environment configuration for the batch execution.
- labels Map<String,String> - Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- name String - The resource name of the batch.
- operation String - The resource name of the operation associated with this batch.
- pysparkBatch PySparkBatchResponse - Optional. PySpark batch config.
- runtimeConfig RuntimeConfigResponse - Optional. Runtime configuration for the batch execution.
- runtimeInfo RuntimeInfoResponse - Runtime information about batch execution.
- sparkBatch SparkBatchResponse - Optional. Spark batch config.
- sparkRBatch SparkRBatchResponse - Optional. SparkR batch config.
- sparkSqlBatch SparkSqlBatchResponse - Optional. SparkSql batch config.
- state String - The state of the batch.
- stateHistory List<StateHistoryResponse> - Historical state information for the batch.
- stateMessage String - Batch state details, such as a failure description if the state is FAILED.
- stateTime String - The time when the batch entered a current state.
- uuid String - A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
- createTime string - The time when the batch was created.
- creator string - The email address of the user who created the batch.
- environmentConfig EnvironmentConfigResponse - Optional. Environment configuration for the batch execution.
- labels {[key: string]: string} - Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- name string - The resource name of the batch.
- operation string - The resource name of the operation associated with this batch.
- pysparkBatch PySparkBatchResponse - Optional. PySpark batch config.
- runtimeConfig RuntimeConfigResponse - Optional. Runtime configuration for the batch execution.
- runtimeInfo RuntimeInfoResponse - Runtime information about batch execution.
- sparkBatch SparkBatchResponse - Optional. Spark batch config.
- sparkRBatch SparkRBatchResponse - Optional. SparkR batch config.
- sparkSqlBatch SparkSqlBatchResponse - Optional. SparkSql batch config.
- state string - The state of the batch.
- stateHistory StateHistoryResponse[] - Historical state information for the batch.
- stateMessage string - Batch state details, such as a failure description if the state is FAILED.
- stateTime string - The time when the batch entered a current state.
- uuid string - A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
- create_time str - The time when the batch was created.
- creator str - The email address of the user who created the batch.
- environment_config EnvironmentConfigResponse - Optional. Environment configuration for the batch execution.
- labels Mapping[str, str] - Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- name str - The resource name of the batch.
- operation str - The resource name of the operation associated with this batch.
- pyspark_batch PySparkBatchResponse - Optional. PySpark batch config.
- runtime_config RuntimeConfigResponse - Optional. Runtime configuration for the batch execution.
- runtime_info RuntimeInfoResponse - Runtime information about batch execution.
- spark_batch SparkBatchResponse - Optional. Spark batch config.
- spark_r_batch SparkRBatchResponse - Optional. SparkR batch config.
- spark_sql_batch SparkSqlBatchResponse - Optional. SparkSql batch config.
- state str - The state of the batch.
- state_history Sequence[StateHistoryResponse] - Historical state information for the batch.
- state_message str - Batch state details, such as a failure description if the state is FAILED.
- state_time str - The time when the batch entered a current state.
- uuid str - A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
- createTime String - The time when the batch was created.
- creator String - The email address of the user who created the batch.
- environmentConfig Property Map - Optional. Environment configuration for the batch execution.
- labels Map<String> - Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a batch.
- name String - The resource name of the batch.
- operation String - The resource name of the operation associated with this batch.
- pysparkBatch Property Map - Optional. PySpark batch config.
- runtimeConfig Property Map - Optional. Runtime configuration for the batch execution.
- runtimeInfo Property Map - Runtime information about batch execution.
- sparkBatch Property Map - Optional. Spark batch config.
- sparkRBatch Property Map - Optional. SparkR batch config.
- sparkSqlBatch Property Map - Optional. SparkSql batch config.
- state String - The state of the batch.
- stateHistory List<Property Map> - Historical state information for the batch.
- stateMessage String - Batch state details, such as a failure description if the state is FAILED.
- stateTime String - The time when the batch entered a current state.
- uuid String - A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch.
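As a sketch of consuming these outputs (TypeScript, same placeholder identifiers as above), the output form lifts every property into a Pulumi Output, so derived values go through apply:

import * as google_native from "@pulumi/google-native";

const batch = google_native.dataproc.v1.getBatchOutput({
    batchId: "my-batch",
    location: "us-central1",
});

// Scalar properties can be exported directly as stack outputs.
export const batchState = batch.state;
export const batchUuid = batch.uuid;

// Derived values are computed inside apply().
export const createdBy = batch.creator.apply(c => `created by ${c}`);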
Supporting Types
EnvironmentConfigResponse
- ExecutionConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfigResponse - Optional. Execution configuration for a workload.
- PeripheralsConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfigResponse - Optional. Peripherals configuration that the workload has access to.
- ExecutionConfig This property is required. ExecutionConfigResponse - Optional. Execution configuration for a workload.
- PeripheralsConfig This property is required. PeripheralsConfigResponse - Optional. Peripherals configuration that the workload has access to.
- executionConfig This property is required. ExecutionConfigResponse - Optional. Execution configuration for a workload.
- peripheralsConfig This property is required. PeripheralsConfigResponse - Optional. Peripherals configuration that the workload has access to.
- executionConfig This property is required. ExecutionConfigResponse - Optional. Execution configuration for a workload.
- peripheralsConfig This property is required. PeripheralsConfigResponse - Optional. Peripherals configuration that the workload has access to.
- execution_config This property is required. ExecutionConfigResponse - Optional. Execution configuration for a workload.
- peripherals_config This property is required. PeripheralsConfigResponse - Optional. Peripherals configuration that the workload has access to.
- executionConfig This property is required. Property Map - Optional. Execution configuration for a workload.
- peripheralsConfig This property is required. Property Map - Optional. Peripherals configuration that the workload has access to.
ExecutionConfigResponse
- IdleTtl This property is required. string - Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- KmsKey This property is required. string - Optional. The Cloud KMS key to use for encryption.
- NetworkTags This property is required. List<string> - Optional. Tags used for network traffic control.
- NetworkUri This property is required. string - Optional. Network URI to connect workload to.
- ServiceAccount This property is required. string - Optional. Service account used to execute the workload.
- StagingBucket This property is required. string - Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- SubnetworkUri This property is required. string - Optional. Subnetwork URI to connect workload to.
- Ttl This property is required. string - Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- IdleTtl This property is required. string - Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- KmsKey This property is required. string - Optional. The Cloud KMS key to use for encryption.
- NetworkTags This property is required. []string - Optional. Tags used for network traffic control.
- NetworkUri This property is required. string - Optional. Network URI to connect workload to.
- ServiceAccount This property is required. string - Optional. Service account used to execute the workload.
- StagingBucket This property is required. string - Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- SubnetworkUri This property is required. string - Optional. Subnetwork URI to connect workload to.
- Ttl This property is required. string - Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl This property is required. String - Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey This property is required. String - Optional. The Cloud KMS key to use for encryption.
- networkTags This property is required. List<String> - Optional. Tags used for network traffic control.
- networkUri This property is required. String - Optional. Network URI to connect workload to.
- serviceAccount This property is required. String - Optional. Service account used to execute the workload.
- stagingBucket This property is required. String - Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri This property is required. String - Optional. Subnetwork URI to connect workload to.
- ttl This property is required. String - Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl This property is required. string - Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey This property is required. string - Optional. The Cloud KMS key to use for encryption.
- networkTags This property is required. string[] - Optional. Tags used for network traffic control.
- networkUri This property is required. string - Optional. Network URI to connect workload to.
- serviceAccount This property is required. string - Optional. Service account used to execute the workload.
- stagingBucket This property is required. string - Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri This property is required. string - Optional. Subnetwork URI to connect workload to.
- ttl This property is required. string - Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idle_ttl This property is required. str - Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kms_key This property is required. str - Optional. The Cloud KMS key to use for encryption.
- network_tags This property is required. Sequence[str] - Optional. Tags used for network traffic control.
- network_uri This property is required. str - Optional. Network URI to connect workload to.
- service_account This property is required. str - Optional. Service account used to execute the workload.
- staging_bucket This property is required. str - Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetwork_uri This property is required. str - Optional. Subnetwork URI to connect workload to.
- ttl This property is required. str - Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl This property is required. String - Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey This property is required. String - Optional. The Cloud KMS key to use for encryption.
- networkTags This property is required. List<String> - Optional. Tags used for network traffic control.
- networkUri This property is required. String - Optional. Network URI to connect workload to.
- serviceAccount This property is required. String - Optional. Service account used to execute the workload.
- stagingBucket This property is required. String - Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri This property is required. String - Optional. Subnetwork URI to connect workload to.
- ttl This property is required. String - Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
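The ttl and idle_ttl fields above are JSON-encoded Durations, e.g. "3600s" for one hour. A small sketch (TypeScript, placeholder identifiers) reading the execution configuration off a fetched batch:

import * as google_native from "@pulumi/google-native";

const batch = google_native.dataproc.v1.getBatchOutput({
    batchId: "my-batch",
    location: "us-central1",
});

// environmentConfig.executionConfig carries ttl, stagingBucket, etc.
export const workloadTtl = batch.environmentConfig.apply(
    env => env.executionConfig.ttl, // JSON Duration string, e.g. "14400s"
);
export const stagingBucket = batch.environmentConfig.apply(
    env => env.executionConfig.stagingBucket,
);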
PeripheralsConfigResponse
- MetastoreService This property is required. string - Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- SparkHistoryServerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse - Optional. The Spark History Server configuration for the workload.
- MetastoreService This property is required. string - Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- SparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse - Optional. The Spark History Server configuration for the workload.
- metastoreService This property is required. String - Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse - Optional. The Spark History Server configuration for the workload.
- metastoreService This property is required. string - Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse - Optional. The Spark History Server configuration for the workload.
- metastore_service This property is required. str - Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- spark_history_server_config This property is required. SparkHistoryServerConfigResponse - Optional. The Spark History Server configuration for the workload.
- metastoreService This property is required. String - Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig This property is required. Property Map - Optional. The Spark History Server configuration for the workload.
PyPiRepositoryConfigResponse
- PypiRepository This property is required. string - Optional. PyPi repository address.
- PypiRepository This property is required. string - Optional. PyPi repository address.
- pypiRepository This property is required. String - Optional. PyPi repository address.
- pypiRepository This property is required. string - Optional. PyPi repository address.
- pypi_repository This property is required. str - Optional. PyPi repository address.
- pypiRepository This property is required. String - Optional. PyPi repository address.
PySparkBatchResponse
- ArchiveUris This property is required. List<string> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args This property is required. List<string> - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris This property is required. List<string> - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris This property is required. List<string> - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- MainPythonFileUri This property is required. string - The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- PythonFileUris This property is required. List<string> - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- ArchiveUris This property is required. []string - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args This property is required. []string - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris This property is required. []string - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris This property is required. []string - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- MainPythonFileUri This property is required. string - The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- PythonFileUris This property is required. []string - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris This property is required. List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainPythonFileUri This property is required. String - The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- pythonFileUris This property is required. List<String> - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris This property is required. string[] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. string[] - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris This property is required. string[] - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris This property is required. string[] - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainPythonFileUri This property is required. string - The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- pythonFileUris This property is required. string[] - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archive_uris This property is required. Sequence[str] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- file_uris This property is required. Sequence[str] - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jar_file_uris This property is required. Sequence[str] - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- main_python_file_uri This property is required. str - The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- python_file_uris This property is required. Sequence[str] - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris This property is required. List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainPythonFileUri This property is required. String - The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file.
- pythonFileUris This property is required. List<String> - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
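For a PySpark workload, the fields above can be read back from the result; a sketch (TypeScript, hypothetical batch ID):

import * as google_native from "@pulumi/google-native";

const batch = google_native.dataproc.v1.getBatchOutput({
    batchId: "my-pyspark-batch", // hypothetical
    location: "us-central1",
});

// mainPythonFileUri is an HCFS URI (typically gs://...) to a .py driver file.
export const driverScript = batch.pysparkBatch.apply(p => p.mainPythonFileUri);
export const extraPythonFiles = batch.pysparkBatch.apply(p => p.pythonFileUris);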
RepositoryConfigResponse
- PypiRepositoryConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfigResponse - Optional. Configuration for PyPi repository.
- PypiRepositoryConfig This property is required. PyPiRepositoryConfigResponse - Optional. Configuration for PyPi repository.
- pypiRepositoryConfig This property is required. PyPiRepositoryConfigResponse - Optional. Configuration for PyPi repository.
- pypiRepositoryConfig This property is required. PyPiRepositoryConfigResponse - Optional. Configuration for PyPi repository.
- pypi_repository_config This property is required. PyPiRepositoryConfigResponse - Optional. Configuration for PyPi repository.
- pypiRepositoryConfig This property is required. Property Map - Optional. Configuration for PyPi repository.
RuntimeConfigResponse
- ContainerImage This property is required. string - Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- Properties This property is required. Dictionary<string, string> - Optional. A mapping of property names to values, which are used to configure workload execution.
- RepositoryConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.RepositoryConfigResponse - Optional. Dependency repository configuration.
- Version This property is required. string - Optional. Version of the batch runtime.
- ContainerImage This property is required. string - Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- Properties This property is required. map[string]string - Optional. A mapping of property names to values, which are used to configure workload execution.
- RepositoryConfig This property is required. RepositoryConfigResponse - Optional. Dependency repository configuration.
- Version This property is required. string - Optional. Version of the batch runtime.
- containerImage This property is required. String - Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties This property is required. Map<String,String> - Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig This property is required. RepositoryConfigResponse - Optional. Dependency repository configuration.
- version This property is required. String - Optional. Version of the batch runtime.
- containerImage This property is required. string - Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties This property is required. {[key: string]: string} - Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig This property is required. RepositoryConfigResponse - Optional. Dependency repository configuration.
- version This property is required. string - Optional. Version of the batch runtime.
- container_image This property is required. str - Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties This property is required. Mapping[str, str] - Optional. A mapping of property names to values, which are used to configure workload execution.
- repository_config This property is required. RepositoryConfigResponse - Optional. Dependency repository configuration.
- version This property is required. str - Optional. Version of the batch runtime.
- containerImage This property is required. String - Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties This property is required. Map<String> - Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig This property is required. Property Map - Optional. Dependency repository configuration.
- version This property is required. String - Optional. Version of the batch runtime.
RuntimeInfoResponse
- ApproximateUsage This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageMetricsResponse - Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- CurrentUsage This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageSnapshotResponse - Snapshot of current workload resource usage.
- DiagnosticOutputUri This property is required. string - A URI pointing to the location of the diagnostics tarball.
- Endpoints This property is required. Dictionary<string, string> - Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- OutputUri This property is required. string - A URI pointing to the location of the stdout and stderr of the workload.
- ApproximateUsage This property is required. UsageMetricsResponse - Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- CurrentUsage This property is required. UsageSnapshotResponse - Snapshot of current workload resource usage.
- DiagnosticOutputUri This property is required. string - A URI pointing to the location of the diagnostics tarball.
- Endpoints This property is required. map[string]string - Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- OutputUri This property is required. string - A URI pointing to the location of the stdout and stderr of the workload.
- approximateUsage This property is required. UsageMetricsResponse - Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- currentUsage This property is required. UsageSnapshotResponse - Snapshot of current workload resource usage.
- diagnosticOutputUri This property is required. String - A URI pointing to the location of the diagnostics tarball.
- endpoints This property is required. Map<String,String> - Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- outputUri This property is required. String - A URI pointing to the location of the stdout and stderr of the workload.
- approximateUsage This property is required. UsageMetricsResponse - Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- currentUsage This property is required. UsageSnapshotResponse - Snapshot of current workload resource usage.
- diagnosticOutputUri This property is required. string - A URI pointing to the location of the diagnostics tarball.
- endpoints This property is required. {[key: string]: string} - Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- outputUri This property is required. string - A URI pointing to the location of the stdout and stderr of the workload.
- approximate_usage This property is required. UsageMetricsResponse - Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- current_usage This property is required. UsageSnapshotResponse - Snapshot of current workload resource usage.
- diagnostic_output_uri This property is required. str - A URI pointing to the location of the diagnostics tarball.
- endpoints This property is required. Mapping[str, str] - Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- output_uri This property is required. str - A URI pointing to the location of the stdout and stderr of the workload.
- approximateUsage This property is required. Property Map - Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- currentUsage This property is required. Property Map - Snapshot of current workload resource usage.
- diagnosticOutputUri This property is required. String - A URI pointing to the location of the diagnostics tarball.
- endpoints This property is required. Map<String> - Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- outputUri This property is required. String - A URI pointing to the location of the stdout and stderr of the workload.
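A sketch (TypeScript, placeholder identifiers) surfacing the runtime endpoints and driver output location from these properties:

import * as google_native from "@pulumi/google-native";

const batch = google_native.dataproc.v1.getBatchOutput({
    batchId: "my-batch",
    location: "us-central1",
});

// endpoints maps endpoint names (web UIs, APIs) to their URIs.
export const endpointList = batch.runtimeInfo.apply(info =>
    Object.entries(info.endpoints).map(([name, uri]) => `${name}: ${uri}`));

// outputUri points at the workload's stdout and stderr.
export const driverOutputUri = batch.runtimeInfo.apply(info => info.outputUri);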
SparkBatchResponse
- ArchiveUris This property is required. List<string> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args This property is required. List<string> - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris This property is required. List<string> - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris This property is required. List<string> - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- MainClass This property is required. string - Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- MainJarFileUri This property is required. string - Optional. The HCFS URI of the jar file that contains the main class.
- ArchiveUris This property is required. []string - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args This property is required. []string - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris This property is required. []string - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- JarFileUris This property is required. []string - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- MainClass This property is required. string - Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- MainJarFileUri This property is required. string - Optional. The HCFS URI of the jar file that contains the main class.
- archiveUris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris This property is required. List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainClass This property is required. String - Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- mainJarFileUri This property is required. String - Optional. The HCFS URI of the jar file that contains the main class.
- archiveUris This property is required. string[] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. string[] - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris This property is required. string[] - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris This property is required. string[] - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainClass This property is required. string - Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- mainJarFileUri This property is required. string - Optional. The HCFS URI of the jar file that contains the main class.
- archive_uris This property is required. Sequence[str] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- file_uris This property is required. Sequence[str] - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jar_file_uris This property is required. Sequence[str] - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- main_class This property is required. str - Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- main_jar_file_uri This property is required. str - Optional. The HCFS URI of the jar file that contains the main class.
- archiveUris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris This property is required. List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- jarFileUris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks.
- mainClass This property is required. String - Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris.
- mainJarFileUri This property is required. String - Optional. The HCFS URI of the jar file that contains the main class.
SparkHistoryServerConfigResponse
- DataprocCluster (required) string - Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- DataprocCluster (required) string - Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster (required) String - Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster (required) string - Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataproc_cluster (required) str - Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster (required) String - Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
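The dataprocCluster value must follow the resource-name format shown above. A small helper like the following can compose it; the function name and arguments are illustrative, not part of the SDK.

// Compose the Dataproc cluster resource name expected by dataprocCluster.
function historyServerClusterName(projectId: string, region: string, clusterName: string): string {
    return `projects/${projectId}/regions/${region}/clusters/${clusterName}`;
}

// historyServerClusterName("my-project", "us-central1", "phs-cluster")
// => "projects/my-project/regions/us-central1/clusters/phs-cluster"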
SparkRBatchResponse
- ArchiveUris (required) List<string> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args (required) List<string> - Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris (required) List<string> - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- MainRFileUri (required) string - The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- ArchiveUris (required) []string - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args (required) []string - Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- FileUris (required) []string - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- MainRFileUri (required) string - The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archiveUris (required) List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args (required) List<String> - Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris (required) List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- mainRFileUri (required) String - The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archiveUris (required) string[] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args (required) string[] - Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris (required) string[] - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- mainRFileUri (required) string - The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archive_uris (required) Sequence[str] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args (required) Sequence[str] - Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- file_uris (required) Sequence[str] - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- main_r_file_uri (required) str - The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
- archiveUris (required) List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args (required) List<String> - Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission.
- fileUris (required) List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor.
- mainRFileUri (required) String - The HCFS URI of the main R file to use as the driver. Must be a .R or .r file.
SparkSqlBatchResponse
- JarFileUris (required) List<string> - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- QueryFileUri (required) string - The HCFS URI of the script that contains Spark SQL queries to execute.
- QueryVariables (required) Dictionary<string, string> - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- JarFileUris (required) []string - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- QueryFileUri (required) string - The HCFS URI of the script that contains Spark SQL queries to execute.
- QueryVariables (required) map[string]string - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris (required) List<String> - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- queryFileUri (required) String - The HCFS URI of the script that contains Spark SQL queries to execute.
- queryVariables (required) Map<String,String> - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris (required) string[] - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- queryFileUri (required) string - The HCFS URI of the script that contains Spark SQL queries to execute.
- queryVariables (required) {[key: string]: string} - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jar_file_uris (required) Sequence[str] - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- query_file_uri (required) str - The HCFS URI of the script that contains Spark SQL queries to execute.
- query_variables (required) Mapping[str, str] - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris (required) List<String> - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- queryFileUri (required) String - The HCFS URI of the script that contains Spark SQL queries to execute.
- queryVariables (required) Map<String,String> - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
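As an illustration of the SET equivalence noted above, a queryVariables map of { name: "value" } behaves as if the script began with SET name="value";. The variable names below are invented for the example.

// Assumed example variables; the script at queryFileUri can typically
// reference them with ${name} substitution syntax.
const queryVariables: Record<string, string> = {
    run_date: "2023-11-29",
    env: "prod",
};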
StateHistoryResponse
- State (required) string - The state of the batch at this point in history.
- StateMessage (required) string - Details about the state at this point in history.
- StateStartTime (required) string - The time when the batch entered the historical state.
- State (required) string - The state of the batch at this point in history.
- StateMessage (required) string - Details about the state at this point in history.
- StateStartTime (required) string - The time when the batch entered the historical state.
- state (required) String - The state of the batch at this point in history.
- stateMessage (required) String - Details about the state at this point in history.
- stateStartTime (required) String - The time when the batch entered the historical state.
- state (required) string - The state of the batch at this point in history.
- stateMessage (required) string - Details about the state at this point in history.
- stateStartTime (required) string - The time when the batch entered the historical state.
- state (required) str - The state of the batch at this point in history.
- state_message (required) str - Details about the state at this point in history.
- state_start_time (required) str - The time when the batch entered the historical state.
- state (required) String - The state of the batch at this point in history.
- stateMessage (required) String - Details about the state at this point in history.
- stateStartTime (required) String - The time when the batch entered the historical state.
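Since each entry pairs a state with a message and a start time, rendering a batch's lifecycle takes only a few lines. The interface below merely mirrors the fields listed above for a self-contained sketch; it is not an SDK type.

// Local mirror of StateHistoryResponse for a self-contained example.
interface StateHistoryEntry {
    state: string;
    stateMessage: string;
    stateStartTime: string;
}

// Render each historical state as "<time>: <state> (<details>)".
function formatLifecycle(history: StateHistoryEntry[]): string[] {
    return history.map(h => `${h.stateStartTime}: ${h.state} (${h.stateMessage})`);
}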
UsageMetricsResponse
- AcceleratorType (required) string - Optional. Accelerator type being used, if any.
- MilliAcceleratorSeconds (required) string - Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuSeconds (required) string - Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGbSeconds (required) string - Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- AcceleratorType (required) string - Optional. Accelerator type being used, if any.
- MilliAcceleratorSeconds (required) string - Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuSeconds (required) string - Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGbSeconds (required) string - Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- acceleratorType (required) String - Optional. Accelerator type being used, if any.
- milliAcceleratorSeconds (required) String - Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuSeconds (required) String - Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbSeconds (required) String - Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- acceleratorType (required) string - Optional. Accelerator type being used, if any.
- milliAcceleratorSeconds (required) string - Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuSeconds (required) string - Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbSeconds (required) string - Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- accelerator_type (required) str - Optional. Accelerator type being used, if any.
- milli_accelerator_seconds (required) str - Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milli_dcu_seconds (required) str - Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffle_storage_gb_seconds (required) str - Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- acceleratorType (required) String - Optional. Accelerator type being used, if any.
- milliAcceleratorSeconds (required) String - Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuSeconds (required) String - Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbSeconds (required) String - Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
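Because these counters are cumulative (unit x seconds) values serialized as strings, they need parsing and unit conversion before reporting. The sketch below assumes only the arithmetic 1 DCU-hour = 1,000 milliDCU x 3,600 seconds; it is not a billing formula, for which the pricing page above is authoritative.

// Local mirror of the two most commonly reported UsageMetrics fields.
interface UsageMetricsLike {
    milliDcuSeconds: string;
    shuffleStorageGbSeconds: string;
}

function summarizeUsage(m: UsageMetricsLike) {
    // (milliDCU x s) / 1000 -> DCU-seconds; / 3600 -> DCU-hours.
    const dcuHours = Number(m.milliDcuSeconds) / 1000 / 3600;
    // (GB x s) / 3600 -> GB-hours.
    const shuffleGbHours = Number(m.shuffleStorageGbSeconds) / 3600;
    return { dcuHours, shuffleGbHours };
}

// summarizeUsage({ milliDcuSeconds: "14400000", shuffleStorageGbSeconds: "720000" })
// => { dcuHours: 4, shuffleGbHours: 200 }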
UsageSnapshotResponse
- AcceleratorType (required) string - Optional. Accelerator type being used, if any.
- MilliAccelerator (required) string - Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcu (required) string - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuPremium (required) string - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGb (required) string - Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGbPremium (required) string - Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- SnapshotTime (required) string - Optional. The timestamp of the usage snapshot.
- AcceleratorType (required) string - Optional. Accelerator type being used, if any.
- MilliAccelerator (required) string - Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcu (required) string - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuPremium (required) string - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGb (required) string - Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGbPremium (required) string - Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- SnapshotTime (required) string - Optional. The timestamp of the usage snapshot.
- acceleratorType (required) String - Optional. Accelerator type being used, if any.
- milliAccelerator (required) String - Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcu (required) String - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuPremium (required) String - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGb (required) String - Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbPremium (required) String - Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- snapshotTime (required) String - Optional. The timestamp of the usage snapshot.
- acceleratorType (required) string - Optional. Accelerator type being used, if any.
- milliAccelerator (required) string - Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcu (required) string - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuPremium (required) string - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGb (required) string - Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbPremium (required) string - Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- snapshotTime (required) string - Optional. The timestamp of the usage snapshot.
- accelerator_type (required) str - Optional. Accelerator type being used, if any.
- milli_accelerator (required) str - Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milli_dcu (required) str - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milli_dcu_premium (required) str - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffle_storage_gb (required) str - Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffle_storage_gb_premium (required) str - Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- snapshot_time (required) str - Optional. The timestamp of the usage snapshot.
- acceleratorType (required) String - Optional. Accelerator type being used, if any.
- milliAccelerator (required) String - Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcu (required) String - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuPremium (required) String - Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGb (required) String - Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbPremium (required) String - Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- snapshotTime (required) String - Optional. The timestamp of the usage snapshot.
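Unlike the cumulative UsageMetrics above, a snapshot is a point-in-time reading, so its values convert directly to current resource levels. A minimal sketch, again with a local mirror type rather than the SDK's:

// Local mirror of the UsageSnapshot fields used in this example.
interface UsageSnapshotLike {
    milliDcu: string;
    shuffleStorageGb: string;
    snapshotTime: string;
}

// e.g. "2023-11-29T12:00:00Z: 4 DCUs, 200 GB shuffle storage"
function describeSnapshot(s: UsageSnapshotLike): string {
    const dcus = Number(s.milliDcu) / 1000; // milliDCU -> DCU
    return `${s.snapshotTime}: ${dcus} DCUs, ${s.shuffleStorageGb} GB shuffle storage`;
}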
Package Details
- Repository: Google Cloud Native pulumi/pulumi-google-native
- License: Apache-2.0