google-native.dataproc/v1.Cluster

Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

Creates a cluster in a project. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#clusteroperationmetadata). Auto-naming is currently not supported for this resource.
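
For orientation before the full placeholder reference below, here is a minimal TypeScript sketch of creating a cluster. The project ID, region, and machine types are illustrative assumptions, not defaults.

import * as google_native from "@pulumi/google-native";

// A minimal sketch: one master node and two workers.
// "my-gcp-project", the region, and the machine types are assumed values.
const cluster = new google_native.dataproc.v1.Cluster("example-cluster", {
    clusterName: "example-cluster",
    project: "my-gcp-project",
    region: "us-central1",
    config: {
        masterConfig: {
            numInstances: 1,
            machineTypeUri: "n1-standard-4",
        },
        workerConfig: {
            numInstances: 2,
            machineTypeUri: "n1-standard-4",
        },
    },
});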

Create Cluster Resource

Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

Constructor syntax

TypeScript:

new Cluster(name: string, args: ClusterArgs, opts?: CustomResourceOptions);

Python:

@overload
def Cluster(resource_name: str,
            args: ClusterArgs,
            opts: Optional[ResourceOptions] = None)

@overload
def Cluster(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            cluster_name: Optional[str] = None,
            region: Optional[str] = None,
            action_on_failed_primary_workers: Optional[str] = None,
            config: Optional[ClusterConfigArgs] = None,
            labels: Optional[Mapping[str, str]] = None,
            project: Optional[str] = None,
            request_id: Optional[str] = None,
            virtual_cluster_config: Optional[VirtualClusterConfigArgs] = None)

Go:

func NewCluster(ctx *Context, name string, args ClusterArgs, opts ...ResourceOption) (*Cluster, error)

C#:

public Cluster(string name, ClusterArgs args, CustomResourceOptions? opts = null)

Java:

public Cluster(String name, ClusterArgs args)
public Cluster(String name, ClusterArgs args, CustomResourceOptions options)

YAML:

type: google-native:dataproc/v1:Cluster
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.

Parameters

TypeScript

name (required): string
The unique name of the resource.
args (required): ClusterArgs
The arguments to resource properties.
opts: CustomResourceOptions
Bag of options to control resource's behavior.

Python

resource_name (required): str
The unique name of the resource.
args (required): ClusterArgs
The arguments to resource properties.
opts: ResourceOptions
Bag of options to control resource's behavior.

Go

ctx: Context
Context object for the current deployment.
name (required): string
The unique name of the resource.
args (required): ClusterArgs
The arguments to resource properties.
opts: ResourceOption
Bag of options to control resource's behavior.

C#

name (required): string
The unique name of the resource.
args (required): ClusterArgs
The arguments to resource properties.
opts: CustomResourceOptions
Bag of options to control resource's behavior.

Java

name (required): String
The unique name of the resource.
args (required): ClusterArgs
The arguments to resource properties.
options: CustomResourceOptions
Bag of options to control resource's behavior.
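
As an aside on the options bag, the sketch below shows two commonly used CustomResourceOptions in TypeScript; the cluster arguments are placeholder values.

import * as google_native from "@pulumi/google-native";

// A hedged sketch of resource options: `protect` makes `pulumi destroy`
// fail unless protection is removed first, and `retainOnDelete` leaves
// the underlying cluster running in GCP when the Pulumi resource is deleted.
const guarded = new google_native.dataproc.v1.Cluster("guarded-cluster", {
    clusterName: "guarded-cluster",
    region: "us-central1",
}, {
    protect: true,
    retainOnDelete: true,
});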

Constructor example

The following reference example uses placeholder values for all input properties. The same example is shown once per supported language.

C#:

var exampleclusterResourceResourceFromDataprocv1 = new GoogleNative.Dataproc.V1.Cluster("exampleclusterResourceResourceFromDataprocv1", new()
{
    ClusterName = "string",
    Region = "string",
    ActionOnFailedPrimaryWorkers = "string",
    Config = new GoogleNative.Dataproc.V1.Inputs.ClusterConfigArgs
    {
        AutoscalingConfig = new GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigArgs
        {
            PolicyUri = "string",
        },
        AuxiliaryNodeGroups = new[]
        {
            new GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupArgs
            {
                NodeGroup = new GoogleNative.Dataproc.V1.Inputs.NodeGroupArgs
                {
                    Roles = new[]
                    {
                        GoogleNative.Dataproc.V1.NodeGroupRolesItem.RoleUnspecified,
                    },
                    Labels = 
                    {
                        { "string", "string" },
                    },
                    Name = "string",
                    NodeGroupConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
                    {
                        Accelerators = new[]
                        {
                            new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                            {
                                AcceleratorCount = 0,
                                AcceleratorTypeUri = "string",
                            },
                        },
                        DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
                        {
                            BootDiskSizeGb = 0,
                            BootDiskType = "string",
                            LocalSsdInterface = "string",
                            NumLocalSsds = 0,
                        },
                        ImageUri = "string",
                        InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
                        {
                            InstanceSelectionList = new[]
                            {
                                new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                                {
                                    MachineTypes = new[]
                                    {
                                        "string",
                                    },
                                    Rank = 0,
                                },
                            },
                        },
                        MachineTypeUri = "string",
                        MinCpuPlatform = "string",
                        MinNumInstances = 0,
                        NumInstances = 0,
                        Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                        StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
                        {
                            RequiredRegistrationFraction = 0,
                        },
                    },
                },
                NodeGroupId = "string",
            },
        },
        ConfigBucket = "string",
        DataprocMetricConfig = new GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigArgs
        {
            Metrics = new[]
            {
                new GoogleNative.Dataproc.V1.Inputs.MetricArgs
                {
                    MetricSource = GoogleNative.Dataproc.V1.MetricMetricSource.MetricSourceUnspecified,
                    MetricOverrides = new[]
                    {
                        "string",
                    },
                },
            },
        },
        EncryptionConfig = new GoogleNative.Dataproc.V1.Inputs.EncryptionConfigArgs
        {
            GcePdKmsKeyName = "string",
            KmsKey = "string",
        },
        EndpointConfig = new GoogleNative.Dataproc.V1.Inputs.EndpointConfigArgs
        {
            EnableHttpPortAccess = false,
        },
        GceClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GceClusterConfigArgs
        {
            ConfidentialInstanceConfig = new GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigArgs
            {
                EnableConfidentialCompute = false,
            },
            InternalIpOnly = false,
            Metadata = 
            {
                { "string", "string" },
            },
            NetworkUri = "string",
            NodeGroupAffinity = new GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinityArgs
            {
                NodeGroupUri = "string",
            },
            PrivateIpv6GoogleAccess = GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
            ReservationAffinity = new GoogleNative.Dataproc.V1.Inputs.ReservationAffinityArgs
            {
                ConsumeReservationType = GoogleNative.Dataproc.V1.ReservationAffinityConsumeReservationType.TypeUnspecified,
                Key = "string",
                Values = new[]
                {
                    "string",
                },
            },
            ServiceAccount = "string",
            ServiceAccountScopes = new[]
            {
                "string",
            },
            ShieldedInstanceConfig = new GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfigArgs
            {
                EnableIntegrityMonitoring = false,
                EnableSecureBoot = false,
                EnableVtpm = false,
            },
            SubnetworkUri = "string",
            Tags = new[]
            {
                "string",
            },
            ZoneUri = "string",
        },
        GkeClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigArgs
        {
            GkeClusterTarget = "string",
            NodePoolTarget = new[]
            {
                new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetArgs
                {
                    NodePool = "string",
                    Roles = new[]
                    {
                        GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem.RoleUnspecified,
                    },
                    NodePoolConfig = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigArgs
                    {
                        Autoscaling = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigArgs
                        {
                            MaxNodeCount = 0,
                            MinNodeCount = 0,
                        },
                        Config = new GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigArgs
                        {
                            Accelerators = new[]
                            {
                                new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigArgs
                                {
                                    AcceleratorCount = "string",
                                    AcceleratorType = "string",
                                    GpuPartitionSize = "string",
                                },
                            },
                            BootDiskKmsKey = "string",
                            LocalSsdCount = 0,
                            MachineType = "string",
                            MinCpuPlatform = "string",
                            Preemptible = false,
                            Spot = false,
                        },
                        Locations = new[]
                        {
                            "string",
                        },
                    },
                },
            },
        },
        InitializationActions = new[]
        {
            new GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionArgs
            {
                ExecutableFile = "string",
                ExecutionTimeout = "string",
            },
        },
        LifecycleConfig = new GoogleNative.Dataproc.V1.Inputs.LifecycleConfigArgs
        {
            AutoDeleteTime = "string",
            AutoDeleteTtl = "string",
            IdleDeleteTtl = "string",
        },
        MasterConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
        {
            Accelerators = new[]
            {
                new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                {
                    AcceleratorCount = 0,
                    AcceleratorTypeUri = "string",
                },
            },
            DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
            {
                BootDiskSizeGb = 0,
                BootDiskType = "string",
                LocalSsdInterface = "string",
                NumLocalSsds = 0,
            },
            ImageUri = "string",
            InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
            {
                InstanceSelectionList = new[]
                {
                    new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                    {
                        MachineTypes = new[]
                        {
                            "string",
                        },
                        Rank = 0,
                    },
                },
            },
            MachineTypeUri = "string",
            MinCpuPlatform = "string",
            MinNumInstances = 0,
            NumInstances = 0,
            Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
            {
                RequiredRegistrationFraction = 0,
            },
        },
        MetastoreConfig = new GoogleNative.Dataproc.V1.Inputs.MetastoreConfigArgs
        {
            DataprocMetastoreService = "string",
        },
        SecondaryWorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
        {
            Accelerators = new[]
            {
                new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                {
                    AcceleratorCount = 0,
                    AcceleratorTypeUri = "string",
                },
            },
            DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
            {
                BootDiskSizeGb = 0,
                BootDiskType = "string",
                LocalSsdInterface = "string",
                NumLocalSsds = 0,
            },
            ImageUri = "string",
            InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
            {
                InstanceSelectionList = new[]
                {
                    new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                    {
                        MachineTypes = new[]
                        {
                            "string",
                        },
                        Rank = 0,
                    },
                },
            },
            MachineTypeUri = "string",
            MinCpuPlatform = "string",
            MinNumInstances = 0,
            NumInstances = 0,
            Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
            {
                RequiredRegistrationFraction = 0,
            },
        },
        SecurityConfig = new GoogleNative.Dataproc.V1.Inputs.SecurityConfigArgs
        {
            IdentityConfig = new GoogleNative.Dataproc.V1.Inputs.IdentityConfigArgs
            {
                UserServiceAccountMapping = 
                {
                    { "string", "string" },
                },
            },
            KerberosConfig = new GoogleNative.Dataproc.V1.Inputs.KerberosConfigArgs
            {
                CrossRealmTrustAdminServer = "string",
                CrossRealmTrustKdc = "string",
                CrossRealmTrustRealm = "string",
                CrossRealmTrustSharedPasswordUri = "string",
                EnableKerberos = false,
                KdcDbKeyUri = "string",
                KeyPasswordUri = "string",
                KeystorePasswordUri = "string",
                KeystoreUri = "string",
                KmsKeyUri = "string",
                Realm = "string",
                RootPrincipalPasswordUri = "string",
                TgtLifetimeHours = 0,
                TruststorePasswordUri = "string",
                TruststoreUri = "string",
            },
        },
        SoftwareConfig = new GoogleNative.Dataproc.V1.Inputs.SoftwareConfigArgs
        {
            ImageVersion = "string",
            OptionalComponents = new[]
            {
                GoogleNative.Dataproc.V1.SoftwareConfigOptionalComponentsItem.ComponentUnspecified,
            },
            Properties = 
            {
                { "string", "string" },
            },
        },
        TempBucket = "string",
        WorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
        {
            Accelerators = new[]
            {
                new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                {
                    AcceleratorCount = 0,
                    AcceleratorTypeUri = "string",
                },
            },
            DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
            {
                BootDiskSizeGb = 0,
                BootDiskType = "string",
                LocalSsdInterface = "string",
                NumLocalSsds = 0,
            },
            ImageUri = "string",
            InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
            {
                InstanceSelectionList = new[]
                {
                    new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                    {
                        MachineTypes = new[]
                        {
                            "string",
                        },
                        Rank = 0,
                    },
                },
            },
            MachineTypeUri = "string",
            MinCpuPlatform = "string",
            MinNumInstances = 0,
            NumInstances = 0,
            Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
            {
                RequiredRegistrationFraction = 0,
            },
        },
    },
    Labels = 
    {
        { "string", "string" },
    },
    Project = "string",
    RequestId = "string",
    VirtualClusterConfig = new GoogleNative.Dataproc.V1.Inputs.VirtualClusterConfigArgs
    {
        KubernetesClusterConfig = new GoogleNative.Dataproc.V1.Inputs.KubernetesClusterConfigArgs
        {
            GkeClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigArgs
            {
                GkeClusterTarget = "string",
                NodePoolTarget = new[]
                {
                    new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetArgs
                    {
                        NodePool = "string",
                        Roles = new[]
                        {
                            GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem.RoleUnspecified,
                        },
                        NodePoolConfig = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigArgs
                        {
                            Autoscaling = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigArgs
                            {
                                MaxNodeCount = 0,
                                MinNodeCount = 0,
                            },
                            Config = new GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigArgs
                            {
                                Accelerators = new[]
                                {
                                    new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigArgs
                                    {
                                        AcceleratorCount = "string",
                                        AcceleratorType = "string",
                                        GpuPartitionSize = "string",
                                    },
                                },
                                BootDiskKmsKey = "string",
                                LocalSsdCount = 0,
                                MachineType = "string",
                                MinCpuPlatform = "string",
                                Preemptible = false,
                                Spot = false,
                            },
                            Locations = new[]
                            {
                                "string",
                            },
                        },
                    },
                },
            },
            KubernetesNamespace = "string",
            KubernetesSoftwareConfig = new GoogleNative.Dataproc.V1.Inputs.KubernetesSoftwareConfigArgs
            {
                ComponentVersion = 
                {
                    { "string", "string" },
                },
                Properties = 
                {
                    { "string", "string" },
                },
            },
        },
        AuxiliaryServicesConfig = new GoogleNative.Dataproc.V1.Inputs.AuxiliaryServicesConfigArgs
        {
            MetastoreConfig = new GoogleNative.Dataproc.V1.Inputs.MetastoreConfigArgs
            {
                DataprocMetastoreService = "string",
            },
            SparkHistoryServerConfig = new GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigArgs
            {
                DataprocCluster = "string",
            },
        },
        StagingBucket = "string",
    },
});

Go:

example, err := dataproc.NewCluster(ctx, "exampleclusterResourceResourceFromDataprocv1", &dataproc.ClusterArgs{
	ClusterName:                  pulumi.String("string"),
	Region:                       pulumi.String("string"),
	ActionOnFailedPrimaryWorkers: pulumi.String("string"),
	Config: &dataproc.ClusterConfigArgs{
		AutoscalingConfig: &dataproc.AutoscalingConfigArgs{
			PolicyUri: pulumi.String("string"),
		},
		AuxiliaryNodeGroups: dataproc.AuxiliaryNodeGroupArray{
			&dataproc.AuxiliaryNodeGroupArgs{
				NodeGroup: &dataproc.NodeGroupTypeArgs{
					Roles: dataproc.NodeGroupRolesItemArray{
						dataproc.NodeGroupRolesItemRoleUnspecified,
					},
					Labels: pulumi.StringMap{
						"string": pulumi.String("string"),
					},
					Name: pulumi.String("string"),
					NodeGroupConfig: &dataproc.InstanceGroupConfigArgs{
						Accelerators: dataproc.AcceleratorConfigArray{
							&dataproc.AcceleratorConfigArgs{
								AcceleratorCount:   pulumi.Int(0),
								AcceleratorTypeUri: pulumi.String("string"),
							},
						},
						DiskConfig: &dataproc.DiskConfigArgs{
							BootDiskSizeGb:    pulumi.Int(0),
							BootDiskType:      pulumi.String("string"),
							LocalSsdInterface: pulumi.String("string"),
							NumLocalSsds:      pulumi.Int(0),
						},
						ImageUri: pulumi.String("string"),
						InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
							InstanceSelectionList: dataproc.InstanceSelectionArray{
								&dataproc.InstanceSelectionArgs{
									MachineTypes: pulumi.StringArray{
										pulumi.String("string"),
									},
									Rank: pulumi.Int(0),
								},
							},
						},
						MachineTypeUri:  pulumi.String("string"),
						MinCpuPlatform:  pulumi.String("string"),
						MinNumInstances: pulumi.Int(0),
						NumInstances:    pulumi.Int(0),
						Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
						StartupConfig: &dataproc.StartupConfigArgs{
							RequiredRegistrationFraction: pulumi.Float64(0),
						},
					},
				},
				NodeGroupId: pulumi.String("string"),
			},
		},
		ConfigBucket: pulumi.String("string"),
		DataprocMetricConfig: &dataproc.DataprocMetricConfigArgs{
			Metrics: dataproc.MetricArray{
				&dataproc.MetricArgs{
					MetricSource: dataproc.MetricMetricSourceMetricSourceUnspecified,
					MetricOverrides: pulumi.StringArray{
						pulumi.String("string"),
					},
				},
			},
		},
		EncryptionConfig: &dataproc.EncryptionConfigArgs{
			GcePdKmsKeyName: pulumi.String("string"),
			KmsKey:          pulumi.String("string"),
		},
		EndpointConfig: &dataproc.EndpointConfigArgs{
			EnableHttpPortAccess: pulumi.Bool(false),
		},
		GceClusterConfig: &dataproc.GceClusterConfigArgs{
			ConfidentialInstanceConfig: &dataproc.ConfidentialInstanceConfigArgs{
				EnableConfidentialCompute: pulumi.Bool(false),
			},
			InternalIpOnly: pulumi.Bool(false),
			Metadata: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
			NetworkUri: pulumi.String("string"),
			NodeGroupAffinity: &dataproc.NodeGroupAffinityArgs{
				NodeGroupUri: pulumi.String("string"),
			},
			PrivateIpv6GoogleAccess: dataproc.GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified,
			ReservationAffinity: &dataproc.ReservationAffinityArgs{
				ConsumeReservationType: dataproc.ReservationAffinityConsumeReservationTypeTypeUnspecified,
				Key:                    pulumi.String("string"),
				Values: pulumi.StringArray{
					pulumi.String("string"),
				},
			},
			ServiceAccount: pulumi.String("string"),
			ServiceAccountScopes: pulumi.StringArray{
				pulumi.String("string"),
			},
			ShieldedInstanceConfig: &dataproc.ShieldedInstanceConfigArgs{
				EnableIntegrityMonitoring: pulumi.Bool(false),
				EnableSecureBoot:          pulumi.Bool(false),
				EnableVtpm:                pulumi.Bool(false),
			},
			SubnetworkUri: pulumi.String("string"),
			Tags: pulumi.StringArray{
				pulumi.String("string"),
			},
			ZoneUri: pulumi.String("string"),
		},
		GkeClusterConfig: &dataproc.GkeClusterConfigArgs{
			GkeClusterTarget: pulumi.String("string"),
			NodePoolTarget: dataproc.GkeNodePoolTargetArray{
				&dataproc.GkeNodePoolTargetArgs{
					NodePool: pulumi.String("string"),
					Roles: dataproc.GkeNodePoolTargetRolesItemArray{
						dataproc.GkeNodePoolTargetRolesItemRoleUnspecified,
					},
					NodePoolConfig: &dataproc.GkeNodePoolConfigArgs{
						Autoscaling: &dataproc.GkeNodePoolAutoscalingConfigArgs{
							MaxNodeCount: pulumi.Int(0),
							MinNodeCount: pulumi.Int(0),
						},
						Config: &dataproc.GkeNodeConfigArgs{
							Accelerators: dataproc.GkeNodePoolAcceleratorConfigArray{
								&dataproc.GkeNodePoolAcceleratorConfigArgs{
									AcceleratorCount: pulumi.String("string"),
									AcceleratorType:  pulumi.String("string"),
									GpuPartitionSize: pulumi.String("string"),
								},
							},
							BootDiskKmsKey: pulumi.String("string"),
							LocalSsdCount:  pulumi.Int(0),
							MachineType:    pulumi.String("string"),
							MinCpuPlatform: pulumi.String("string"),
							Preemptible:    pulumi.Bool(false),
							Spot:           pulumi.Bool(false),
						},
						Locations: pulumi.StringArray{
							pulumi.String("string"),
						},
					},
				},
			},
		},
		InitializationActions: dataproc.NodeInitializationActionArray{
			&dataproc.NodeInitializationActionArgs{
				ExecutableFile:   pulumi.String("string"),
				ExecutionTimeout: pulumi.String("string"),
			},
		},
		LifecycleConfig: &dataproc.LifecycleConfigArgs{
			AutoDeleteTime: pulumi.String("string"),
			AutoDeleteTtl:  pulumi.String("string"),
			IdleDeleteTtl:  pulumi.String("string"),
		},
		MasterConfig: &dataproc.InstanceGroupConfigArgs{
			Accelerators: dataproc.AcceleratorConfigArray{
				&dataproc.AcceleratorConfigArgs{
					AcceleratorCount:   pulumi.Int(0),
					AcceleratorTypeUri: pulumi.String("string"),
				},
			},
			DiskConfig: &dataproc.DiskConfigArgs{
				BootDiskSizeGb:    pulumi.Int(0),
				BootDiskType:      pulumi.String("string"),
				LocalSsdInterface: pulumi.String("string"),
				NumLocalSsds:      pulumi.Int(0),
			},
			ImageUri: pulumi.String("string"),
			InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
				InstanceSelectionList: dataproc.InstanceSelectionArray{
					&dataproc.InstanceSelectionArgs{
						MachineTypes: pulumi.StringArray{
							pulumi.String("string"),
						},
						Rank: pulumi.Int(0),
					},
				},
			},
			MachineTypeUri:  pulumi.String("string"),
			MinCpuPlatform:  pulumi.String("string"),
			MinNumInstances: pulumi.Int(0),
			NumInstances:    pulumi.Int(0),
			Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
			StartupConfig: &dataproc.StartupConfigArgs{
				RequiredRegistrationFraction: pulumi.Float64(0),
			},
		},
		MetastoreConfig: &dataproc.MetastoreConfigArgs{
			DataprocMetastoreService: pulumi.String("string"),
		},
		SecondaryWorkerConfig: &dataproc.InstanceGroupConfigArgs{
			Accelerators: dataproc.AcceleratorConfigArray{
				&dataproc.AcceleratorConfigArgs{
					AcceleratorCount:   pulumi.Int(0),
					AcceleratorTypeUri: pulumi.String("string"),
				},
			},
			DiskConfig: &dataproc.DiskConfigArgs{
				BootDiskSizeGb:    pulumi.Int(0),
				BootDiskType:      pulumi.String("string"),
				LocalSsdInterface: pulumi.String("string"),
				NumLocalSsds:      pulumi.Int(0),
			},
			ImageUri: pulumi.String("string"),
			InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
				InstanceSelectionList: dataproc.InstanceSelectionArray{
					&dataproc.InstanceSelectionArgs{
						MachineTypes: pulumi.StringArray{
							pulumi.String("string"),
						},
						Rank: pulumi.Int(0),
					},
				},
			},
			MachineTypeUri:  pulumi.String("string"),
			MinCpuPlatform:  pulumi.String("string"),
			MinNumInstances: pulumi.Int(0),
			NumInstances:    pulumi.Int(0),
			Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
			StartupConfig: &dataproc.StartupConfigArgs{
				RequiredRegistrationFraction: pulumi.Float64(0),
			},
		},
		SecurityConfig: &dataproc.SecurityConfigArgs{
			IdentityConfig: &dataproc.IdentityConfigArgs{
				UserServiceAccountMapping: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
			},
			KerberosConfig: &dataproc.KerberosConfigArgs{
				CrossRealmTrustAdminServer:       pulumi.String("string"),
				CrossRealmTrustKdc:               pulumi.String("string"),
				CrossRealmTrustRealm:             pulumi.String("string"),
				CrossRealmTrustSharedPasswordUri: pulumi.String("string"),
				EnableKerberos:                   pulumi.Bool(false),
				KdcDbKeyUri:                      pulumi.String("string"),
				KeyPasswordUri:                   pulumi.String("string"),
				KeystorePasswordUri:              pulumi.String("string"),
				KeystoreUri:                      pulumi.String("string"),
				KmsKeyUri:                        pulumi.String("string"),
				Realm:                            pulumi.String("string"),
				RootPrincipalPasswordUri:         pulumi.String("string"),
				TgtLifetimeHours:                 pulumi.Int(0),
				TruststorePasswordUri:            pulumi.String("string"),
				TruststoreUri:                    pulumi.String("string"),
			},
		},
		SoftwareConfig: &dataproc.SoftwareConfigArgs{
			ImageVersion: pulumi.String("string"),
			OptionalComponents: dataproc.SoftwareConfigOptionalComponentsItemArray{
				dataproc.SoftwareConfigOptionalComponentsItemComponentUnspecified,
			},
			Properties: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
		},
		TempBucket: pulumi.String("string"),
		WorkerConfig: &dataproc.InstanceGroupConfigArgs{
			Accelerators: dataproc.AcceleratorConfigArray{
				&dataproc.AcceleratorConfigArgs{
					AcceleratorCount:   pulumi.Int(0),
					AcceleratorTypeUri: pulumi.String("string"),
				},
			},
			DiskConfig: &dataproc.DiskConfigArgs{
				BootDiskSizeGb:    pulumi.Int(0),
				BootDiskType:      pulumi.String("string"),
				LocalSsdInterface: pulumi.String("string"),
				NumLocalSsds:      pulumi.Int(0),
			},
			ImageUri: pulumi.String("string"),
			InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
				InstanceSelectionList: dataproc.InstanceSelectionArray{
					&dataproc.InstanceSelectionArgs{
						MachineTypes: pulumi.StringArray{
							pulumi.String("string"),
						},
						Rank: pulumi.Int(0),
					},
				},
			},
			MachineTypeUri:  pulumi.String("string"),
			MinCpuPlatform:  pulumi.String("string"),
			MinNumInstances: pulumi.Int(0),
			NumInstances:    pulumi.Int(0),
			Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
			StartupConfig: &dataproc.StartupConfigArgs{
				RequiredRegistrationFraction: pulumi.Float64(0),
			},
		},
	},
	Labels: pulumi.StringMap{
		"string": pulumi.String("string"),
	},
	Project:   pulumi.String("string"),
	RequestId: pulumi.String("string"),
	VirtualClusterConfig: &dataproc.VirtualClusterConfigArgs{
		KubernetesClusterConfig: &dataproc.KubernetesClusterConfigArgs{
			GkeClusterConfig: &dataproc.GkeClusterConfigArgs{
				GkeClusterTarget: pulumi.String("string"),
				NodePoolTarget: dataproc.GkeNodePoolTargetArray{
					&dataproc.GkeNodePoolTargetArgs{
						NodePool: pulumi.String("string"),
						Roles: dataproc.GkeNodePoolTargetRolesItemArray{
							dataproc.GkeNodePoolTargetRolesItemRoleUnspecified,
						},
						NodePoolConfig: &dataproc.GkeNodePoolConfigArgs{
							Autoscaling: &dataproc.GkeNodePoolAutoscalingConfigArgs{
								MaxNodeCount: pulumi.Int(0),
								MinNodeCount: pulumi.Int(0),
							},
							Config: &dataproc.GkeNodeConfigArgs{
								Accelerators: dataproc.GkeNodePoolAcceleratorConfigArray{
									&dataproc.GkeNodePoolAcceleratorConfigArgs{
										AcceleratorCount: pulumi.String("string"),
										AcceleratorType:  pulumi.String("string"),
										GpuPartitionSize: pulumi.String("string"),
									},
								},
								BootDiskKmsKey: pulumi.String("string"),
								LocalSsdCount:  pulumi.Int(0),
								MachineType:    pulumi.String("string"),
								MinCpuPlatform: pulumi.String("string"),
								Preemptible:    pulumi.Bool(false),
								Spot:           pulumi.Bool(false),
							},
							Locations: pulumi.StringArray{
								pulumi.String("string"),
							},
						},
					},
				},
			},
			KubernetesNamespace: pulumi.String("string"),
			KubernetesSoftwareConfig: &dataproc.KubernetesSoftwareConfigArgs{
				ComponentVersion: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
				Properties: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
			},
		},
		AuxiliaryServicesConfig: &dataproc.AuxiliaryServicesConfigArgs{
			MetastoreConfig: &dataproc.MetastoreConfigArgs{
				DataprocMetastoreService: pulumi.String("string"),
			},
			SparkHistoryServerConfig: &dataproc.SparkHistoryServerConfigArgs{
				DataprocCluster: pulumi.String("string"),
			},
		},
		StagingBucket: pulumi.String("string"),
	},
})

Java:

var exampleclusterResourceResourceFromDataprocv1 = new Cluster("exampleclusterResourceResourceFromDataprocv1", ClusterArgs.builder()
    .clusterName("string")
    .region("string")
    .actionOnFailedPrimaryWorkers("string")
    .config(ClusterConfigArgs.builder()
        .autoscalingConfig(AutoscalingConfigArgs.builder()
            .policyUri("string")
            .build())
        .auxiliaryNodeGroups(AuxiliaryNodeGroupArgs.builder()
            .nodeGroup(NodeGroupArgs.builder()
                .roles("ROLE_UNSPECIFIED")
                .labels(Map.of("string", "string"))
                .name("string")
                .nodeGroupConfig(InstanceGroupConfigArgs.builder()
                    .accelerators(AcceleratorConfigArgs.builder()
                        .acceleratorCount(0)
                        .acceleratorTypeUri("string")
                        .build())
                    .diskConfig(DiskConfigArgs.builder()
                        .bootDiskSizeGb(0)
                        .bootDiskType("string")
                        .localSsdInterface("string")
                        .numLocalSsds(0)
                        .build())
                    .imageUri("string")
                    .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                        .instanceSelectionList(InstanceSelectionArgs.builder()
                            .machineTypes("string")
                            .rank(0)
                            .build())
                        .build())
                    .machineTypeUri("string")
                    .minCpuPlatform("string")
                    .minNumInstances(0)
                    .numInstances(0)
                    .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
                    .startupConfig(StartupConfigArgs.builder()
                        .requiredRegistrationFraction(0)
                        .build())
                    .build())
                .build())
            .nodeGroupId("string")
            .build())
        .configBucket("string")
        .dataprocMetricConfig(DataprocMetricConfigArgs.builder()
            .metrics(MetricArgs.builder()
                .metricSource("METRIC_SOURCE_UNSPECIFIED")
                .metricOverrides("string")
                .build())
            .build())
        .encryptionConfig(EncryptionConfigArgs.builder()
            .gcePdKmsKeyName("string")
            .kmsKey("string")
            .build())
        .endpointConfig(EndpointConfigArgs.builder()
            .enableHttpPortAccess(false)
            .build())
        .gceClusterConfig(GceClusterConfigArgs.builder()
            .confidentialInstanceConfig(ConfidentialInstanceConfigArgs.builder()
                .enableConfidentialCompute(false)
                .build())
            .internalIpOnly(false)
            .metadata(Map.of("string", "string"))
            .networkUri("string")
            .nodeGroupAffinity(NodeGroupAffinityArgs.builder()
                .nodeGroupUri("string")
                .build())
            .privateIpv6GoogleAccess("PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED")
            .reservationAffinity(ReservationAffinityArgs.builder()
                .consumeReservationType("TYPE_UNSPECIFIED")
                .key("string")
                .values("string")
                .build())
            .serviceAccount("string")
            .serviceAccountScopes("string")
            .shieldedInstanceConfig(ShieldedInstanceConfigArgs.builder()
                .enableIntegrityMonitoring(false)
                .enableSecureBoot(false)
                .enableVtpm(false)
                .build())
            .subnetworkUri("string")
            .tags("string")
            .zoneUri("string")
            .build())
        .gkeClusterConfig(GkeClusterConfigArgs.builder()
            .gkeClusterTarget("string")
            .nodePoolTarget(GkeNodePoolTargetArgs.builder()
                .nodePool("string")
                .roles("ROLE_UNSPECIFIED")
                .nodePoolConfig(GkeNodePoolConfigArgs.builder()
                    .autoscaling(GkeNodePoolAutoscalingConfigArgs.builder()
                        .maxNodeCount(0)
                        .minNodeCount(0)
                        .build())
                    .config(GkeNodeConfigArgs.builder()
                        .accelerators(GkeNodePoolAcceleratorConfigArgs.builder()
                            .acceleratorCount("string")
                            .acceleratorType("string")
                            .gpuPartitionSize("string")
                            .build())
                        .bootDiskKmsKey("string")
                        .localSsdCount(0)
                        .machineType("string")
                        .minCpuPlatform("string")
                        .preemptible(false)
                        .spot(false)
                        .build())
                    .locations("string")
                    .build())
                .build())
            .build())
        .initializationActions(NodeInitializationActionArgs.builder()
            .executableFile("string")
            .executionTimeout("string")
            .build())
        .lifecycleConfig(LifecycleConfigArgs.builder()
            .autoDeleteTime("string")
            .autoDeleteTtl("string")
            .idleDeleteTtl("string")
            .build())
        .masterConfig(InstanceGroupConfigArgs.builder()
            .accelerators(AcceleratorConfigArgs.builder()
                .acceleratorCount(0)
                .acceleratorTypeUri("string")
                .build())
            .diskConfig(DiskConfigArgs.builder()
                .bootDiskSizeGb(0)
                .bootDiskType("string")
                .localSsdInterface("string")
                .numLocalSsds(0)
                .build())
            .imageUri("string")
            .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                .instanceSelectionList(InstanceSelectionArgs.builder()
                    .machineTypes("string")
                    .rank(0)
                    .build())
                .build())
            .machineTypeUri("string")
            .minCpuPlatform("string")
            .minNumInstances(0)
            .numInstances(0)
            .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
            .startupConfig(StartupConfigArgs.builder()
                .requiredRegistrationFraction(0)
                .build())
            .build())
        .metastoreConfig(MetastoreConfigArgs.builder()
            .dataprocMetastoreService("string")
            .build())
        .secondaryWorkerConfig(InstanceGroupConfigArgs.builder()
            .accelerators(AcceleratorConfigArgs.builder()
                .acceleratorCount(0)
                .acceleratorTypeUri("string")
                .build())
            .diskConfig(DiskConfigArgs.builder()
                .bootDiskSizeGb(0)
                .bootDiskType("string")
                .localSsdInterface("string")
                .numLocalSsds(0)
                .build())
            .imageUri("string")
            .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                .instanceSelectionList(InstanceSelectionArgs.builder()
                    .machineTypes("string")
                    .rank(0)
                    .build())
                .build())
            .machineTypeUri("string")
            .minCpuPlatform("string")
            .minNumInstances(0)
            .numInstances(0)
            .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
            .startupConfig(StartupConfigArgs.builder()
                .requiredRegistrationFraction(0)
                .build())
            .build())
        .securityConfig(SecurityConfigArgs.builder()
            .identityConfig(IdentityConfigArgs.builder()
                .userServiceAccountMapping(Map.of("string", "string"))
                .build())
            .kerberosConfig(KerberosConfigArgs.builder()
                .crossRealmTrustAdminServer("string")
                .crossRealmTrustKdc("string")
                .crossRealmTrustRealm("string")
                .crossRealmTrustSharedPasswordUri("string")
                .enableKerberos(false)
                .kdcDbKeyUri("string")
                .keyPasswordUri("string")
                .keystorePasswordUri("string")
                .keystoreUri("string")
                .kmsKeyUri("string")
                .realm("string")
                .rootPrincipalPasswordUri("string")
                .tgtLifetimeHours(0)
                .truststorePasswordUri("string")
                .truststoreUri("string")
                .build())
            .build())
        .softwareConfig(SoftwareConfigArgs.builder()
            .imageVersion("string")
            .optionalComponents("COMPONENT_UNSPECIFIED")
            .properties(Map.of("string", "string"))
            .build())
        .tempBucket("string")
        .workerConfig(InstanceGroupConfigArgs.builder()
            .accelerators(AcceleratorConfigArgs.builder()
                .acceleratorCount(0)
                .acceleratorTypeUri("string")
                .build())
            .diskConfig(DiskConfigArgs.builder()
                .bootDiskSizeGb(0)
                .bootDiskType("string")
                .localSsdInterface("string")
                .numLocalSsds(0)
                .build())
            .imageUri("string")
            .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                .instanceSelectionList(InstanceSelectionArgs.builder()
                    .machineTypes("string")
                    .rank(0)
                    .build())
                .build())
            .machineTypeUri("string")
            .minCpuPlatform("string")
            .minNumInstances(0)
            .numInstances(0)
            .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
            .startupConfig(StartupConfigArgs.builder()
                .requiredRegistrationFraction(0)
                .build())
            .build())
        .build())
    .labels(Map.of("string", "string"))
    .project("string")
    .requestId("string")
    .virtualClusterConfig(VirtualClusterConfigArgs.builder()
        .kubernetesClusterConfig(KubernetesClusterConfigArgs.builder()
            .gkeClusterConfig(GkeClusterConfigArgs.builder()
                .gkeClusterTarget("string")
                .nodePoolTarget(GkeNodePoolTargetArgs.builder()
                    .nodePool("string")
                    .roles("ROLE_UNSPECIFIED")
                    .nodePoolConfig(GkeNodePoolConfigArgs.builder()
                        .autoscaling(GkeNodePoolAutoscalingConfigArgs.builder()
                            .maxNodeCount(0)
                            .minNodeCount(0)
                            .build())
                        .config(GkeNodeConfigArgs.builder()
                            .accelerators(GkeNodePoolAcceleratorConfigArgs.builder()
                                .acceleratorCount("string")
                                .acceleratorType("string")
                                .gpuPartitionSize("string")
                                .build())
                            .bootDiskKmsKey("string")
                            .localSsdCount(0)
                            .machineType("string")
                            .minCpuPlatform("string")
                            .preemptible(false)
                            .spot(false)
                            .build())
                        .locations("string")
                        .build())
                    .build())
                .build())
            .kubernetesNamespace("string")
            .kubernetesSoftwareConfig(KubernetesSoftwareConfigArgs.builder()
                .componentVersion(Map.of("string", "string"))
                .properties(Map.of("string", "string"))
                .build())
            .build())
        .auxiliaryServicesConfig(AuxiliaryServicesConfigArgs.builder()
            .metastoreConfig(MetastoreConfigArgs.builder()
                .dataprocMetastoreService("string")
                .build())
            .sparkHistoryServerConfig(SparkHistoryServerConfigArgs.builder()
                .dataprocCluster("string")
                .build())
            .build())
        .stagingBucket("string")
        .build())
    .build());

Python:

examplecluster_resource_resource_from_dataprocv1 = google_native.dataproc.v1.Cluster("exampleclusterResourceResourceFromDataprocv1",
    cluster_name="string",
    region="string",
    action_on_failed_primary_workers="string",
    config={
        "autoscaling_config": {
            "policy_uri": "string",
        },
        "auxiliary_node_groups": [{
            "node_group": {
                "roles": [google_native.dataproc.v1.NodeGroupRolesItem.ROLE_UNSPECIFIED],
                "labels": {
                    "string": "string",
                },
                "name": "string",
                "node_group_config": {
                    "accelerators": [{
                        "accelerator_count": 0,
                        "accelerator_type_uri": "string",
                    }],
                    "disk_config": {
                        "boot_disk_size_gb": 0,
                        "boot_disk_type": "string",
                        "local_ssd_interface": "string",
                        "num_local_ssds": 0,
                    },
                    "image_uri": "string",
                    "instance_flexibility_policy": {
                        "instance_selection_list": [{
                            "machine_types": ["string"],
                            "rank": 0,
                        }],
                    },
                    "machine_type_uri": "string",
                    "min_cpu_platform": "string",
                    "min_num_instances": 0,
                    "num_instances": 0,
                    "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
                    "startup_config": {
                        "required_registration_fraction": 0,
                    },
                },
            },
            "node_group_id": "string",
        }],
        "config_bucket": "string",
        "dataproc_metric_config": {
            "metrics": [{
                "metric_source": google_native.dataproc.v1.MetricMetricSource.METRIC_SOURCE_UNSPECIFIED,
                "metric_overrides": ["string"],
            }],
        },
        "encryption_config": {
            "gce_pd_kms_key_name": "string",
            "kms_key": "string",
        },
        "endpoint_config": {
            "enable_http_port_access": False,
        },
        "gce_cluster_config": {
            "confidential_instance_config": {
                "enable_confidential_compute": False,
            },
            "internal_ip_only": False,
            "metadata": {
                "string": "string",
            },
            "network_uri": "string",
            "node_group_affinity": {
                "node_group_uri": "string",
            },
            "private_ipv6_google_access": google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED,
            "reservation_affinity": {
                "consume_reservation_type": google_native.dataproc.v1.ReservationAffinityConsumeReservationType.TYPE_UNSPECIFIED,
                "key": "string",
                "values": ["string"],
            },
            "service_account": "string",
            "service_account_scopes": ["string"],
            "shielded_instance_config": {
                "enable_integrity_monitoring": False,
                "enable_secure_boot": False,
                "enable_vtpm": False,
            },
            "subnetwork_uri": "string",
            "tags": ["string"],
            "zone_uri": "string",
        },
        "gke_cluster_config": {
            "gke_cluster_target": "string",
            "node_pool_target": [{
                "node_pool": "string",
                "roles": [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.ROLE_UNSPECIFIED],
                "node_pool_config": {
                    "autoscaling": {
                        "max_node_count": 0,
                        "min_node_count": 0,
                    },
                    "config": {
                        "accelerators": [{
                            "accelerator_count": "string",
                            "accelerator_type": "string",
                            "gpu_partition_size": "string",
                        }],
                        "boot_disk_kms_key": "string",
                        "local_ssd_count": 0,
                        "machine_type": "string",
                        "min_cpu_platform": "string",
                        "preemptible": False,
                        "spot": False,
                    },
                    "locations": ["string"],
                },
            }],
        },
        "initialization_actions": [{
            "executable_file": "string",
            "execution_timeout": "string",
        }],
        "lifecycle_config": {
            "auto_delete_time": "string",
            "auto_delete_ttl": "string",
            "idle_delete_ttl": "string",
        },
        "master_config": {
            "accelerators": [{
                "accelerator_count": 0,
                "accelerator_type_uri": "string",
            }],
            "disk_config": {
                "boot_disk_size_gb": 0,
                "boot_disk_type": "string",
                "local_ssd_interface": "string",
                "num_local_ssds": 0,
            },
            "image_uri": "string",
            "instance_flexibility_policy": {
                "instance_selection_list": [{
                    "machine_types": ["string"],
                    "rank": 0,
                }],
            },
            "machine_type_uri": "string",
            "min_cpu_platform": "string",
            "min_num_instances": 0,
            "num_instances": 0,
            "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
            "startup_config": {
                "required_registration_fraction": 0,
            },
        },
        "metastore_config": {
            "dataproc_metastore_service": "string",
        },
        "secondary_worker_config": {
            "accelerators": [{
                "accelerator_count": 0,
                "accelerator_type_uri": "string",
            }],
            "disk_config": {
                "boot_disk_size_gb": 0,
                "boot_disk_type": "string",
                "local_ssd_interface": "string",
                "num_local_ssds": 0,
            },
            "image_uri": "string",
            "instance_flexibility_policy": {
                "instance_selection_list": [{
                    "machine_types": ["string"],
                    "rank": 0,
                }],
            },
            "machine_type_uri": "string",
            "min_cpu_platform": "string",
            "min_num_instances": 0,
            "num_instances": 0,
            "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
            "startup_config": {
                "required_registration_fraction": 0,
            },
        },
        "security_config": {
            "identity_config": {
                "user_service_account_mapping": {
                    "string": "string",
                },
            },
            "kerberos_config": {
                "cross_realm_trust_admin_server": "string",
                "cross_realm_trust_kdc": "string",
                "cross_realm_trust_realm": "string",
                "cross_realm_trust_shared_password_uri": "string",
                "enable_kerberos": False,
                "kdc_db_key_uri": "string",
                "key_password_uri": "string",
                "keystore_password_uri": "string",
                "keystore_uri": "string",
                "kms_key_uri": "string",
                "realm": "string",
                "root_principal_password_uri": "string",
                "tgt_lifetime_hours": 0,
                "truststore_password_uri": "string",
                "truststore_uri": "string",
            },
        },
        "software_config": {
            "image_version": "string",
            "optional_components": [google_native.dataproc.v1.SoftwareConfigOptionalComponentsItem.COMPONENT_UNSPECIFIED],
            "properties": {
                "string": "string",
            },
        },
        "temp_bucket": "string",
        "worker_config": {
            "accelerators": [{
                "accelerator_count": 0,
                "accelerator_type_uri": "string",
            }],
            "disk_config": {
                "boot_disk_size_gb": 0,
                "boot_disk_type": "string",
                "local_ssd_interface": "string",
                "num_local_ssds": 0,
            },
            "image_uri": "string",
            "instance_flexibility_policy": {
                "instance_selection_list": [{
                    "machine_types": ["string"],
                    "rank": 0,
                }],
            },
            "machine_type_uri": "string",
            "min_cpu_platform": "string",
            "min_num_instances": 0,
            "num_instances": 0,
            "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
            "startup_config": {
                "required_registration_fraction": 0,
            },
        },
    },
    labels={
        "string": "string",
    },
    project="string",
    request_id="string",
    virtual_cluster_config={
        "kubernetes_cluster_config": {
            "gke_cluster_config": {
                "gke_cluster_target": "string",
                "node_pool_target": [{
                    "node_pool": "string",
                    "roles": [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.ROLE_UNSPECIFIED],
                    "node_pool_config": {
                        "autoscaling": {
                            "max_node_count": 0,
                            "min_node_count": 0,
                        },
                        "config": {
                            "accelerators": [{
                                "accelerator_count": "string",
                                "accelerator_type": "string",
                                "gpu_partition_size": "string",
                            }],
                            "boot_disk_kms_key": "string",
                            "local_ssd_count": 0,
                            "machine_type": "string",
                            "min_cpu_platform": "string",
                            "preemptible": False,
                            "spot": False,
                        },
                        "locations": ["string"],
                    },
                }],
            },
            "kubernetes_namespace": "string",
            "kubernetes_software_config": {
                "component_version": {
                    "string": "string",
                },
                "properties": {
                    "string": "string",
                },
            },
        },
        "auxiliary_services_config": {
            "metastore_config": {
                "dataproc_metastore_service": "string",
            },
            "spark_history_server_config": {
                "dataproc_cluster": "string",
            },
        },
        "staging_bucket": "string",
    })
const exampleclusterResourceResourceFromDataprocv1 = new google_native.dataproc.v1.Cluster("exampleclusterResourceResourceFromDataprocv1", {
    clusterName: "string",
    region: "string",
    actionOnFailedPrimaryWorkers: "string",
    config: {
        autoscalingConfig: {
            policyUri: "string",
        },
        auxiliaryNodeGroups: [{
            nodeGroup: {
                roles: [google_native.dataproc.v1.NodeGroupRolesItem.RoleUnspecified],
                labels: {
                    string: "string",
                },
                name: "string",
                nodeGroupConfig: {
                    accelerators: [{
                        acceleratorCount: 0,
                        acceleratorTypeUri: "string",
                    }],
                    diskConfig: {
                        bootDiskSizeGb: 0,
                        bootDiskType: "string",
                        localSsdInterface: "string",
                        numLocalSsds: 0,
                    },
                    imageUri: "string",
                    instanceFlexibilityPolicy: {
                        instanceSelectionList: [{
                            machineTypes: ["string"],
                            rank: 0,
                        }],
                    },
                    machineTypeUri: "string",
                    minCpuPlatform: "string",
                    minNumInstances: 0,
                    numInstances: 0,
                    preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                    startupConfig: {
                        requiredRegistrationFraction: 0,
                    },
                },
            },
            nodeGroupId: "string",
        }],
        configBucket: "string",
        dataprocMetricConfig: {
            metrics: [{
                metricSource: google_native.dataproc.v1.MetricMetricSource.MetricSourceUnspecified,
                metricOverrides: ["string"],
            }],
        },
        encryptionConfig: {
            gcePdKmsKeyName: "string",
            kmsKey: "string",
        },
        endpointConfig: {
            enableHttpPortAccess: false,
        },
        gceClusterConfig: {
            confidentialInstanceConfig: {
                enableConfidentialCompute: false,
            },
            internalIpOnly: false,
            metadata: {
                string: "string",
            },
            networkUri: "string",
            nodeGroupAffinity: {
                nodeGroupUri: "string",
            },
            privateIpv6GoogleAccess: google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
            reservationAffinity: {
                consumeReservationType: google_native.dataproc.v1.ReservationAffinityConsumeReservationType.TypeUnspecified,
                key: "string",
                values: ["string"],
            },
            serviceAccount: "string",
            serviceAccountScopes: ["string"],
            shieldedInstanceConfig: {
                enableIntegrityMonitoring: false,
                enableSecureBoot: false,
                enableVtpm: false,
            },
            subnetworkUri: "string",
            tags: ["string"],
            zoneUri: "string",
        },
        gkeClusterConfig: {
            gkeClusterTarget: "string",
            nodePoolTarget: [{
                nodePool: "string",
                roles: [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.RoleUnspecified],
                nodePoolConfig: {
                    autoscaling: {
                        maxNodeCount: 0,
                        minNodeCount: 0,
                    },
                    config: {
                        accelerators: [{
                            acceleratorCount: "string",
                            acceleratorType: "string",
                            gpuPartitionSize: "string",
                        }],
                        bootDiskKmsKey: "string",
                        localSsdCount: 0,
                        machineType: "string",
                        minCpuPlatform: "string",
                        preemptible: false,
                        spot: false,
                    },
                    locations: ["string"],
                },
            }],
        },
        initializationActions: [{
            executableFile: "string",
            executionTimeout: "string",
        }],
        lifecycleConfig: {
            autoDeleteTime: "string",
            autoDeleteTtl: "string",
            idleDeleteTtl: "string",
        },
        masterConfig: {
            accelerators: [{
                acceleratorCount: 0,
                acceleratorTypeUri: "string",
            }],
            diskConfig: {
                bootDiskSizeGb: 0,
                bootDiskType: "string",
                localSsdInterface: "string",
                numLocalSsds: 0,
            },
            imageUri: "string",
            instanceFlexibilityPolicy: {
                instanceSelectionList: [{
                    machineTypes: ["string"],
                    rank: 0,
                }],
            },
            machineTypeUri: "string",
            minCpuPlatform: "string",
            minNumInstances: 0,
            numInstances: 0,
            preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            startupConfig: {
                requiredRegistrationFraction: 0,
            },
        },
        metastoreConfig: {
            dataprocMetastoreService: "string",
        },
        secondaryWorkerConfig: {
            accelerators: [{
                acceleratorCount: 0,
                acceleratorTypeUri: "string",
            }],
            diskConfig: {
                bootDiskSizeGb: 0,
                bootDiskType: "string",
                localSsdInterface: "string",
                numLocalSsds: 0,
            },
            imageUri: "string",
            instanceFlexibilityPolicy: {
                instanceSelectionList: [{
                    machineTypes: ["string"],
                    rank: 0,
                }],
            },
            machineTypeUri: "string",
            minCpuPlatform: "string",
            minNumInstances: 0,
            numInstances: 0,
            preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            startupConfig: {
                requiredRegistrationFraction: 0,
            },
        },
        securityConfig: {
            identityConfig: {
                userServiceAccountMapping: {
                    string: "string",
                },
            },
            kerberosConfig: {
                crossRealmTrustAdminServer: "string",
                crossRealmTrustKdc: "string",
                crossRealmTrustRealm: "string",
                crossRealmTrustSharedPasswordUri: "string",
                enableKerberos: false,
                kdcDbKeyUri: "string",
                keyPasswordUri: "string",
                keystorePasswordUri: "string",
                keystoreUri: "string",
                kmsKeyUri: "string",
                realm: "string",
                rootPrincipalPasswordUri: "string",
                tgtLifetimeHours: 0,
                truststorePasswordUri: "string",
                truststoreUri: "string",
            },
        },
        softwareConfig: {
            imageVersion: "string",
            optionalComponents: [google_native.dataproc.v1.SoftwareConfigOptionalComponentsItem.ComponentUnspecified],
            properties: {
                string: "string",
            },
        },
        tempBucket: "string",
        workerConfig: {
            accelerators: [{
                acceleratorCount: 0,
                acceleratorTypeUri: "string",
            }],
            diskConfig: {
                bootDiskSizeGb: 0,
                bootDiskType: "string",
                localSsdInterface: "string",
                numLocalSsds: 0,
            },
            imageUri: "string",
            instanceFlexibilityPolicy: {
                instanceSelectionList: [{
                    machineTypes: ["string"],
                    rank: 0,
                }],
            },
            machineTypeUri: "string",
            minCpuPlatform: "string",
            minNumInstances: 0,
            numInstances: 0,
            preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            startupConfig: {
                requiredRegistrationFraction: 0,
            },
        },
    },
    labels: {
        string: "string",
    },
    project: "string",
    requestId: "string",
    virtualClusterConfig: {
        kubernetesClusterConfig: {
            gkeClusterConfig: {
                gkeClusterTarget: "string",
                nodePoolTarget: [{
                    nodePool: "string",
                    roles: [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.RoleUnspecified],
                    nodePoolConfig: {
                        autoscaling: {
                            maxNodeCount: 0,
                            minNodeCount: 0,
                        },
                        config: {
                            accelerators: [{
                                acceleratorCount: "string",
                                acceleratorType: "string",
                                gpuPartitionSize: "string",
                            }],
                            bootDiskKmsKey: "string",
                            localSsdCount: 0,
                            machineType: "string",
                            minCpuPlatform: "string",
                            preemptible: false,
                            spot: false,
                        },
                        locations: ["string"],
                    },
                }],
            },
            kubernetesNamespace: "string",
            kubernetesSoftwareConfig: {
                componentVersion: {
                    string: "string",
                },
                properties: {
                    string: "string",
                },
            },
        },
        auxiliaryServicesConfig: {
            metastoreConfig: {
                dataprocMetastoreService: "string",
            },
            sparkHistoryServerConfig: {
                dataprocCluster: "string",
            },
        },
        stagingBucket: "string",
    },
});
type: google-native:dataproc/v1:Cluster
properties:
    actionOnFailedPrimaryWorkers: string
    clusterName: string
    config:
        autoscalingConfig:
            policyUri: string
        auxiliaryNodeGroups:
            - nodeGroup:
                labels:
                    string: string
                name: string
                nodeGroupConfig:
                    accelerators:
                        - acceleratorCount: 0
                          acceleratorTypeUri: string
                    diskConfig:
                        bootDiskSizeGb: 0
                        bootDiskType: string
                        localSsdInterface: string
                        numLocalSsds: 0
                    imageUri: string
                    instanceFlexibilityPolicy:
                        instanceSelectionList:
                            - machineTypes:
                                - string
                              rank: 0
                    machineTypeUri: string
                    minCpuPlatform: string
                    minNumInstances: 0
                    numInstances: 0
                    preemptibility: PREEMPTIBILITY_UNSPECIFIED
                    startupConfig:
                        requiredRegistrationFraction: 0
                roles:
                    - ROLE_UNSPECIFIED
              nodeGroupId: string
        configBucket: string
        dataprocMetricConfig:
            metrics:
                - metricOverrides:
                    - string
                  metricSource: METRIC_SOURCE_UNSPECIFIED
        encryptionConfig:
            gcePdKmsKeyName: string
            kmsKey: string
        endpointConfig:
            enableHttpPortAccess: false
        gceClusterConfig:
            confidentialInstanceConfig:
                enableConfidentialCompute: false
            internalIpOnly: false
            metadata:
                string: string
            networkUri: string
            nodeGroupAffinity:
                nodeGroupUri: string
            privateIpv6GoogleAccess: PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
            reservationAffinity:
                consumeReservationType: TYPE_UNSPECIFIED
                key: string
                values:
                    - string
            serviceAccount: string
            serviceAccountScopes:
                - string
            shieldedInstanceConfig:
                enableIntegrityMonitoring: false
                enableSecureBoot: false
                enableVtpm: false
            subnetworkUri: string
            tags:
                - string
            zoneUri: string
        gkeClusterConfig:
            gkeClusterTarget: string
            nodePoolTarget:
                - nodePool: string
                  nodePoolConfig:
                    autoscaling:
                        maxNodeCount: 0
                        minNodeCount: 0
                    config:
                        accelerators:
                            - acceleratorCount: string
                              acceleratorType: string
                              gpuPartitionSize: string
                        bootDiskKmsKey: string
                        localSsdCount: 0
                        machineType: string
                        minCpuPlatform: string
                        preemptible: false
                        spot: false
                    locations:
                        - string
                  roles:
                    - ROLE_UNSPECIFIED
        initializationActions:
            - executableFile: string
              executionTimeout: string
        lifecycleConfig:
            autoDeleteTime: string
            autoDeleteTtl: string
            idleDeleteTtl: string
        masterConfig:
            accelerators:
                - acceleratorCount: 0
                  acceleratorTypeUri: string
            diskConfig:
                bootDiskSizeGb: 0
                bootDiskType: string
                localSsdInterface: string
                numLocalSsds: 0
            imageUri: string
            instanceFlexibilityPolicy:
                instanceSelectionList:
                    - machineTypes:
                        - string
                      rank: 0
            machineTypeUri: string
            minCpuPlatform: string
            minNumInstances: 0
            numInstances: 0
            preemptibility: PREEMPTIBILITY_UNSPECIFIED
            startupConfig:
                requiredRegistrationFraction: 0
        metastoreConfig:
            dataprocMetastoreService: string
        secondaryWorkerConfig:
            accelerators:
                - acceleratorCount: 0
                  acceleratorTypeUri: string
            diskConfig:
                bootDiskSizeGb: 0
                bootDiskType: string
                localSsdInterface: string
                numLocalSsds: 0
            imageUri: string
            instanceFlexibilityPolicy:
                instanceSelectionList:
                    - machineTypes:
                        - string
                      rank: 0
            machineTypeUri: string
            minCpuPlatform: string
            minNumInstances: 0
            numInstances: 0
            preemptibility: PREEMPTIBILITY_UNSPECIFIED
            startupConfig:
                requiredRegistrationFraction: 0
        securityConfig:
            identityConfig:
                userServiceAccountMapping:
                    string: string
            kerberosConfig:
                crossRealmTrustAdminServer: string
                crossRealmTrustKdc: string
                crossRealmTrustRealm: string
                crossRealmTrustSharedPasswordUri: string
                enableKerberos: false
                kdcDbKeyUri: string
                keyPasswordUri: string
                keystorePasswordUri: string
                keystoreUri: string
                kmsKeyUri: string
                realm: string
                rootPrincipalPasswordUri: string
                tgtLifetimeHours: 0
                truststorePasswordUri: string
                truststoreUri: string
        softwareConfig:
            imageVersion: string
            optionalComponents:
                - COMPONENT_UNSPECIFIED
            properties:
                string: string
        tempBucket: string
        workerConfig:
            accelerators:
                - acceleratorCount: 0
                  acceleratorTypeUri: string
            diskConfig:
                bootDiskSizeGb: 0
                bootDiskType: string
                localSsdInterface: string
                numLocalSsds: 0
            imageUri: string
            instanceFlexibilityPolicy:
                instanceSelectionList:
                    - machineTypes:
                        - string
                      rank: 0
            machineTypeUri: string
            minCpuPlatform: string
            minNumInstances: 0
            numInstances: 0
            preemptibility: PREEMPTIBILITY_UNSPECIFIED
            startupConfig:
                requiredRegistrationFraction: 0
    labels:
        string: string
    project: string
    region: string
    requestId: string
    virtualClusterConfig:
        auxiliaryServicesConfig:
            metastoreConfig:
                dataprocMetastoreService: string
            sparkHistoryServerConfig:
                dataprocCluster: string
        kubernetesClusterConfig:
            gkeClusterConfig:
                gkeClusterTarget: string
                nodePoolTarget:
                    - nodePool: string
                      nodePoolConfig:
                        autoscaling:
                            maxNodeCount: 0
                            minNodeCount: 0
                        config:
                            accelerators:
                                - acceleratorCount: string
                                  acceleratorType: string
                                  gpuPartitionSize: string
                            bootDiskKmsKey: string
                            localSsdCount: 0
                            machineType: string
                            minCpuPlatform: string
                            preemptible: false
                            spot: false
                        locations:
                            - string
                      roles:
                        - ROLE_UNSPECIFIED
            kubernetesNamespace: string
            kubernetesSoftwareConfig:
                componentVersion:
                    string: string
                properties:
                    string: string
        stagingBucket: string

Cluster Resource Properties

To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

Inputs

In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
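For example, the following sketch builds the same nested config both ways; the resource names, region, and zone are placeholder values:

import pulumi_google_native as google_native

# Typed argument classes: field names are checked by the SDK.
with_args_classes = google_native.dataproc.v1.Cluster("with-args-classes",
    cluster_name="with-args-classes",
    region="us-central1",
    config=google_native.dataproc.v1.ClusterConfigArgs(
        gce_cluster_config=google_native.dataproc.v1.GceClusterConfigArgs(
            zone_uri="us-central1-a",
        ),
    ))

# Dictionary literals: the same shape, keyed by snake_case property names.
with_dicts = google_native.dataproc.v1.Cluster("with-dicts",
    cluster_name="with-dicts",
    region="us-central1",
    config={
        "gce_cluster_config": {
            "zone_uri": "us-central1-a",
        },
    })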

The Cluster resource accepts the following input properties:

ClusterName This property is required. string
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
Region This property is required. Changes to this property will trigger replacement. string
ActionOnFailedPrimaryWorkers string
Optional. Failure action when primary worker creation fails.
Config Pulumi.GoogleNative.Dataproc.V1.Inputs.ClusterConfig
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
Labels Dictionary<string, string>
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
Project string
The Google Cloud Platform project ID that the cluster belongs to.
RequestId string
Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
VirtualClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.VirtualClusterConfig
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
ClusterName This property is required. string
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
Region This property is required. Changes to this property will trigger replacement. string
ActionOnFailedPrimaryWorkers string
Optional. Failure action when primary worker creation fails.
Config ClusterConfigArgs
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
Labels map[string]string
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
Project string
The Google Cloud Platform project ID that the cluster belongs to.
RequestId string
Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
VirtualClusterConfig VirtualClusterConfigArgs
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
clusterName This property is required. String
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
region This property is required. Changes to this property will trigger replacement. String
actionOnFailedPrimaryWorkers String
Optional. Failure action when primary worker creation fails.
config ClusterConfig
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
labels Map<String,String>
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
project String
The Google Cloud Platform project ID that the cluster belongs to.
requestId String
Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
virtualClusterConfig VirtualClusterConfig
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
clusterName This property is required. string
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
region This property is required. Changes to this property will trigger replacement. string
actionOnFailedPrimaryWorkers string
Optional. Failure action when primary worker creation fails.
config ClusterConfig
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
labels {[key: string]: string}
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
project string
The Google Cloud Platform project ID that the cluster belongs to.
requestId string
Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
virtualClusterConfig VirtualClusterConfig
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
cluster_name This property is required. str
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
region This property is required. Changes to this property will trigger replacement. str
action_on_failed_primary_workers str
Optional. Failure action when primary worker creation fails.
config ClusterConfigArgs
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
labels Mapping[str, str]
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
project str
The Google Cloud Platform project ID that the cluster belongs to.
request_id str
Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
virtual_cluster_config VirtualClusterConfigArgs
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
clusterName This property is required. String
The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
region This property is required. Changes to this property will trigger replacement. String
actionOnFailedPrimaryWorkers String
Optional. Failure action when primary worker creation fails.
config Property Map
Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
labels Map<String>
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
project String
The Google Cloud Platform project ID that the cluster belongs to.
requestId String
Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
virtualClusterConfig Property Map
Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
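Putting the required and commonly used inputs together, a minimal create might look like the following sketch; the project, machine types, and request ID are placeholder values, and exactly one of config or virtual_cluster_config is set:

import pulumi_google_native as google_native

cluster = google_native.dataproc.v1.Cluster("minimal-cluster",
    cluster_name="minimal-cluster",  # required; must be unique within the project
    region="us-central1",            # required; changing it triggers replacement
    project="my-project",            # placeholder project ID
    labels={"env": "dev"},           # keys and values must conform to RFC 1035
    request_id="11111111-2222-3333-4444-555555555555",  # placeholder UUID; a stable value makes retries idempotent
    # Exactly one of `config` or `virtual_cluster_config` may be specified.
    config=google_native.dataproc.v1.ClusterConfigArgs(
        master_config=google_native.dataproc.v1.InstanceGroupConfigArgs(
            num_instances=1,
            machine_type_uri="n1-standard-4",
        ),
        worker_config=google_native.dataproc.v1.InstanceGroupConfigArgs(
            num_instances=2,
            machine_type_uri="n1-standard-4",
        ),
    ))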

Outputs

All input properties are implicitly available as output properties. Additionally, the Cluster resource produces the following output properties:

ClusterUuid string
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
Id string
The provider-assigned unique ID for this managed resource.
Metrics Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
Status Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse
Cluster status.
StatusHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse>
The previous cluster status.
ClusterUuid string
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
Id string
The provider-assigned unique ID for this managed resource.
Metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
Status ClusterStatusResponse
Cluster status.
StatusHistory []ClusterStatusResponse
The previous cluster status.
clusterUuid String
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
id String
The provider-assigned unique ID for this managed resource.
metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
status ClusterStatusResponse
Cluster status.
statusHistory List<ClusterStatusResponse>
The previous cluster status.
clusterUuid string
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
id string
The provider-assigned unique ID for this managed resource.
metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
status ClusterStatusResponse
Cluster status.
statusHistory ClusterStatusResponse[]
The previous cluster status.
cluster_uuid str
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
id str
The provider-assigned unique ID for this managed resource.
metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
status ClusterStatusResponse
Cluster status.
status_history Sequence[ClusterStatusResponse]
The previous cluster status.
clusterUuid String
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
id String
The provider-assigned unique ID for this managed resource.
metrics Property Map
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
status Property Map
Cluster status.
statusHistory List<Property Map>
The previous cluster status.
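As a brief sketch, the generated outputs can be exported like any other Pulumi outputs. This continues the placeholder cluster above and assumes the state field of the ClusterStatusResponse output:

import pulumi

pulumi.export("cluster_uuid", cluster.cluster_uuid)   # generated by Dataproc at creation time
pulumi.export("cluster_state", cluster.status.state)  # lifted attribute access on the status output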

Supporting Types

AcceleratorConfig, AcceleratorConfigArgs

AcceleratorCount int
The number of the accelerator cards of this type exposed to this instance.
AcceleratorTypeUri string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AcceleratorCount int
The number of the accelerator cards of this type exposed to this instance.
AcceleratorTypeUri string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount Integer
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount number
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
accelerator_count int
The number of the accelerator cards of this type exposed to this instance.
accelerator_type_uri str
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount Number
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
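For instance, a worker group exposing one GPU per node might be sketched as follows; the machine and accelerator types are placeholder values, and the short-name form is used as Auto Zone Placement requires:

worker_config = google_native.dataproc.v1.InstanceGroupConfigArgs(
    num_instances=2,
    machine_type_uri="n1-standard-4",  # placeholder machine type
    accelerators=[google_native.dataproc.v1.AcceleratorConfigArgs(
        accelerator_count=1,
        accelerator_type_uri="nvidia-tesla-k80",  # short name; full or partial URIs also work
    )],
)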

AcceleratorConfigResponse, AcceleratorConfigResponseArgs

AcceleratorCount This property is required. int
The number of the accelerator cards of this type exposed to this instance.
AcceleratorTypeUri This property is required. string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AcceleratorCount This property is required. int
The number of the accelerator cards of this type exposed to this instance.
AcceleratorTypeUri This property is required. string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount This property is required. Integer
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri This property is required. String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount This property is required. number
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri This property is required. string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
accelerator_count This property is required. int
The number of the accelerator cards of this type exposed to this instance.
accelerator_type_uri This property is required. str
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount This property is required. Number
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri This property is required. String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

AutoscalingConfig, AutoscalingConfigArgs

PolicyUri string
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
PolicyUri string
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri String
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri string
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policy_uri str
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri String
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
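A minimal sketch, assuming a hypothetical project and policy name in the cluster's own region:

autoscaling_config = google_native.dataproc.v1.AutoscalingConfigArgs(
    # The policy must be in the same project and Dataproc region as the cluster.
    policy_uri="projects/my-project/locations/us-central1/autoscalingPolicies/my-policy",
)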

AutoscalingConfigResponse, AutoscalingConfigResponseArgs

PolicyUri This property is required. string
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
PolicyUri This property is required. string
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri This property is required. String
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri This property is required. string
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policy_uri This property is required. str
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri This property is required. String
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

AuxiliaryNodeGroup, AuxiliaryNodeGroupArgs

NodeGroup This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroup
Node group configuration.
NodeGroupId string
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.
NodeGroup This property is required. NodeGroupType
Node group configuration.
NodeGroupId string
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.
nodeGroup This property is required. NodeGroup
Node group configuration.
nodeGroupId String
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.
nodeGroup This property is required. NodeGroup
Node group configuration.
nodeGroupId string
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.
node_group This property is required. NodeGroup
Node group configuration.
node_group_id str
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.
nodeGroup This property is required. Property Map
Node group configuration.
nodeGroupId String
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.
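For example, a driver node group could be declared as below; this is a sketch in which the node group ID is a hypothetical value satisfying the 3-33 character rule, and DRIVER is the role this provider defines alongside ROLE_UNSPECIFIED:

aux_node_group = google_native.dataproc.v1.AuxiliaryNodeGroupArgs(
    node_group_id="driver-pool-0",  # optional; generated when omitted
    node_group=google_native.dataproc.v1.NodeGroupArgs(
        roles=[google_native.dataproc.v1.NodeGroupRolesItem.DRIVER],
        node_group_config=google_native.dataproc.v1.InstanceGroupConfigArgs(
            num_instances=2,
            machine_type_uri="n1-standard-4",  # placeholder machine type
        ),
    ),
)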

AuxiliaryNodeGroupResponse, AuxiliaryNodeGroupResponseArgs

NodeGroup This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupResponse
Node group configuration.
NodeGroupId This property is required. string
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.
NodeGroup This property is required. NodeGroupResponse
Node group configuration.
NodeGroupId This property is required. string
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.
nodeGroup This property is required. NodeGroupResponse
Node group configuration.
nodeGroupId This property is required. String
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.
nodeGroup This property is required. NodeGroupResponse
Node group configuration.
nodeGroupId This property is required. string
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.
node_group This property is required. NodeGroupResponse
Node group configuration.
node_group_id This property is required. str
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.
nodeGroup This property is required. Property Map
Node group configuration.
nodeGroupId This property is required. String
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of 3 to 33 characters.

AuxiliaryServicesConfig
, AuxiliaryServicesConfigArgs

MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfig
Optional. The Hive Metastore configuration for this workload.
SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfig
Optional. The Spark History Server configuration for the workload.
MetastoreConfig MetastoreConfig
Optional. The Hive Metastore configuration for this workload.
SparkHistoryServerConfig SparkHistoryServerConfig
Optional. The Spark History Server configuration for the workload.
metastoreConfig MetastoreConfig
Optional. The Hive Metastore configuration for this workload.
sparkHistoryServerConfig SparkHistoryServerConfig
Optional. The Spark History Server configuration for the workload.
metastoreConfig MetastoreConfig
Optional. The Hive Metastore configuration for this workload.
sparkHistoryServerConfig SparkHistoryServerConfig
Optional. The Spark History Server configuration for the workload.
metastore_config MetastoreConfig
Optional. The Hive Metastore configuration for this workload.
spark_history_server_config SparkHistoryServerConfig
Optional. The Spark History Server configuration for the workload.
metastoreConfig Property Map
Optional. The Hive Metastore configuration for this workload.
sparkHistoryServerConfig Property Map
Optional. The Spark History Server configuration for the workload.
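
AuxiliaryServicesConfig is consumed by virtualClusterConfig. A hedged TypeScript sketch wiring a Dataproc Metastore service and a Spark History Server cluster follows; the dataprocMetastoreService and dataprocCluster fields come from the MetastoreConfig and SparkHistoryServerConfig messages, and every resource name below is a placeholder:

import * as google_native from "@pulumi/google-native";

// Sketch: a GKE-based virtual cluster with auxiliary services attached.
// The GKE target, namespace, metastore, and history-server names are all
// placeholder values.
const virtualCluster = new google_native.dataproc.v1.Cluster("virtual", {
    region: "us-central1",
    clusterName: "virtual",
    virtualClusterConfig: {
        kubernetesClusterConfig: {
            gkeClusterConfig: {
                gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke",
            },
            kubernetesNamespace: "dataproc",
        },
        auxiliaryServicesConfig: {
            metastoreConfig: {
                dataprocMetastoreService: "projects/my-project/locations/us-central1/services/my-metastore",
            },
            sparkHistoryServerConfig: {
                dataprocCluster: "projects/my-project/regions/us-central1/clusters/my-phs",
            },
        },
    },
});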

AuxiliaryServicesConfigResponse
, AuxiliaryServicesConfigResponseArgs

MetastoreConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse
Optional. The Hive Metastore configuration for this workload.
SparkHistoryServerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
MetastoreConfig This property is required. MetastoreConfigResponse
Optional. The Hive Metastore configuration for this workload.
SparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
metastoreConfig This property is required. MetastoreConfigResponse
Optional. The Hive Metastore configuration for this workload.
sparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
metastoreConfig This property is required. MetastoreConfigResponse
Optional. The Hive Metastore configuration for this workload.
sparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
metastore_config This property is required. MetastoreConfigResponse
Optional. The Hive Metastore configuration for this workload.
spark_history_server_config This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
metastoreConfig This property is required. Property Map
Optional. The Hive Metastore configuration for this workload.
sparkHistoryServerConfig This property is required. Property Map
Optional. The Spark History Server configuration for the workload.

ClusterConfig
, ClusterConfigArgs

AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfig
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroup>
Optional. The node group settings.
ConfigBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfig
Optional. The config for Dataproc metrics.
EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfig
Optional. Encryption settings for the cluster.
EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfig
Optional. Port/endpoint configuration for this cluster.
GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfig
Optional. The shared Compute Engine config settings for all instances in a cluster.
GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfig
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationAction>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfig
Optional. Lifecycle setting for the cluster.
MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
Optional. The Compute Engine config settings for the cluster's master instance.
MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfig
Optional. Metastore configuration.
SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfig
Optional. Security settings for the cluster.
SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfig
Optional. The config settings for cluster software.
TempBucket string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
Optional. The Compute Engine config settings for the cluster's worker instances.
AutoscalingConfig AutoscalingConfig
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
AuxiliaryNodeGroups []AuxiliaryNodeGroup
Optional. The node group settings.
ConfigBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
DataprocMetricConfig DataprocMetricConfig
Optional. The config for Dataproc metrics.
EncryptionConfig EncryptionConfig
Optional. Encryption settings for the cluster.
EndpointConfig EndpointConfig
Optional. Port/endpoint configuration for this cluster.
GceClusterConfig GceClusterConfig
Optional. The shared Compute Engine config settings for all instances in a cluster.
GkeClusterConfig GkeClusterConfig
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
InitializationActions []NodeInitializationAction
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
LifecycleConfig LifecycleConfig
Optional. Lifecycle setting for the cluster.
MasterConfig InstanceGroupConfig
Optional. The Compute Engine config settings for the cluster's master instance.
MetastoreConfig MetastoreConfig
Optional. Metastore configuration.
SecondaryWorkerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
SecurityConfig SecurityConfig
Optional. Security settings for the cluster.
SoftwareConfig SoftwareConfig
Optional. The config settings for cluster software.
TempBucket string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
WorkerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for the cluster's worker instances.
autoscalingConfig AutoscalingConfig
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
auxiliaryNodeGroups List<AuxiliaryNodeGroup>
Optional. The node group settings.
configBucket String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
dataprocMetricConfig DataprocMetricConfig
Optional. The config for Dataproc metrics.
encryptionConfig EncryptionConfig
Optional. Encryption settings for the cluster.
endpointConfig EndpointConfig
Optional. Port/endpoint configuration for this cluster.
gceClusterConfig GceClusterConfig
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig GkeClusterConfig
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions List<NodeInitializationAction>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
lifecycleConfig LifecycleConfig
Optional. Lifecycle setting for the cluster.
masterConfig InstanceGroupConfig
Optional. The Compute Engine config settings for the cluster's master instance.
metastoreConfig MetastoreConfig
Optional. Metastore configuration.
secondaryWorkerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
securityConfig SecurityConfig
Optional. Security settings for the cluster.
softwareConfig SoftwareConfig
Optional. The config settings for cluster software.
tempBucket String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
workerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for the cluster's worker instances.
autoscalingConfig AutoscalingConfig
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
auxiliaryNodeGroups AuxiliaryNodeGroup[]
Optional. The node group settings.
configBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
dataprocMetricConfig DataprocMetricConfig
Optional. The config for Dataproc metrics.
encryptionConfig EncryptionConfig
Optional. Encryption settings for the cluster.
endpointConfig EndpointConfig
Optional. Port/endpoint configuration for this cluster.
gceClusterConfig GceClusterConfig
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig GkeClusterConfig
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions NodeInitializationAction[]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
lifecycleConfig LifecycleConfig
Optional. Lifecycle setting for the cluster.
masterConfig InstanceGroupConfig
Optional. The Compute Engine config settings for the cluster's master instance.
metastoreConfig MetastoreConfig
Optional. Metastore configuration.
secondaryWorkerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
securityConfig SecurityConfig
Optional. Security settings for the cluster.
softwareConfig SoftwareConfig
Optional. The config settings for cluster software.
tempBucket string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
workerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for the cluster's worker instances.
autoscaling_config AutoscalingConfig
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
auxiliary_node_groups Sequence[AuxiliaryNodeGroup]
Optional. The node group settings.
config_bucket str
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
dataproc_metric_config DataprocMetricConfig
Optional. The config for Dataproc metrics.
encryption_config EncryptionConfig
Optional. Encryption settings for the cluster.
endpoint_config EndpointConfig
Optional. Port/endpoint configuration for this cluster.
gce_cluster_config GceClusterConfig
Optional. The shared Compute Engine config settings for all instances in a cluster.
gke_cluster_config GkeClusterConfig
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initialization_actions Sequence[NodeInitializationAction]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
lifecycle_config LifecycleConfig
Optional. Lifecycle setting for the cluster.
master_config InstanceGroupConfig
Optional. The Compute Engine config settings for the cluster's master instance.
metastore_config MetastoreConfig
Optional. Metastore configuration.
secondary_worker_config InstanceGroupConfig
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
security_config SecurityConfig
Optional. Security settings for the cluster.
software_config SoftwareConfig
Optional. The config settings for cluster software.
temp_bucket str
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
worker_config InstanceGroupConfig
Optional. The Compute Engine config settings for the cluster's worker instances.
autoscalingConfig Property Map
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
auxiliaryNodeGroups List<Property Map>
Optional. The node group settings.
configBucket String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
dataprocMetricConfig Property Map
Optional. The config for Dataproc metrics.
encryptionConfig Property Map
Optional. Encryption settings for the cluster.
endpointConfig Property Map
Optional. Port/endpoint configuration for this cluster.
gceClusterConfig Property Map
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig Property Map
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions List<Property Map>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
lifecycleConfig Property Map
Optional. Lifecycle setting for the cluster.
masterConfig Property Map
Optional. The Compute Engine config settings for the cluster's master instance.
metastoreConfig Property Map
Optional. Metastore configuration.
secondaryWorkerConfig Property Map
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
securityConfig Property Map
Optional. Security settings for the cluster.
softwareConfig Property Map
Optional. The config settings for cluster software.
tempBucket String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
workerConfig Property Map
Optional. The Compute Engine config settings for the cluster's worker instances.
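
Pulling the pieces above together, a hedged TypeScript sketch of a small Compute Engine-based cluster; every bucket, zone, machine type, and script path is a placeholder, and the initialization action is a natural home for the role-testing script quoted under initializationActions:

import * as google_native from "@pulumi/google-native";

// Sketch: a minimal 1-master / 2-worker cluster. configBucket takes a
// bucket name (not a gs:// URI); the init action path, by contrast, is a
// gs:// URI. All values are placeholders.
const smallCluster = new google_native.dataproc.v1.Cluster("small-cluster", {
    region: "us-central1",
    clusterName: "small-cluster",
    config: {
        configBucket: "my-staging-bucket",
        gceClusterConfig: {
            zoneUri: "us-central1-a",
        },
        masterConfig: { numInstances: 1, machineTypeUri: "n1-standard-4" },
        workerConfig: { numInstances: 2, machineTypeUri: "n1-standard-4" },
        initializationActions: [{
            executableFile: "gs://my-staging-bucket/scripts/init.sh",
            executionTimeout: "600s",
        }],
        softwareConfig: { imageVersion: "2.1-debian11" },
    },
});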

ClusterConfigResponse
, ClusterConfigResponseArgs

AutoscalingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
AuxiliaryNodeGroups This property is required. List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupResponse>
Optional. The node group settings.
ConfigBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
DataprocMetricConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigResponse
Optional. The config for Dataproc metrics.
EncryptionConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfigResponse
Optional. Encryption settings for the cluster.
EndpointConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster.
GceClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
GkeClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
InitializationActions This property is required. List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionResponse>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
LifecycleConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfigResponse
Optional. Lifecycle setting for the cluster.
MasterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's master instance.
MetastoreConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse
Optional. Metastore configuration.
SecondaryWorkerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
SecurityConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfigResponse
Optional. Security settings for the cluster.
SoftwareConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfigResponse
Optional. The config settings for cluster software.
TempBucket This property is required. string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
WorkerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's worker instances.
AutoscalingConfig This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
AuxiliaryNodeGroups This property is required. []AuxiliaryNodeGroupResponse
Optional. The node group settings.
ConfigBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
DataprocMetricConfig This property is required. DataprocMetricConfigResponse
Optional. The config for Dataproc metrics.
EncryptionConfig This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
EndpointConfig This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster.
GceClusterConfig This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
GkeClusterConfig This property is required. GkeClusterConfigResponse
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
InitializationActions This property is required. []NodeInitializationActionResponse
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
LifecycleConfig This property is required. LifecycleConfigResponse
Optional. Lifecycle setting for the cluster.
MasterConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's master instance.
MetastoreConfig This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
SecondaryWorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
SecurityConfig This property is required. SecurityConfigResponse
Optional. Security settings for the cluster.
SoftwareConfig This property is required. SoftwareConfigResponse
Optional. The config settings for cluster software.
TempBucket This property is required. string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
WorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's worker instances.
autoscalingConfig This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
auxiliaryNodeGroups This property is required. List<AuxiliaryNodeGroupResponse>
Optional. The node group settings.
configBucket This property is required. String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
dataprocMetricConfig This property is required. DataprocMetricConfigResponse
Optional. The config for Dataproc metrics.
encryptionConfig This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
endpointConfig This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster.
gceClusterConfig This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig This property is required. GkeClusterConfigResponse
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions This property is required. List<NodeInitializationActionResponse>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
lifecycleConfig This property is required. LifecycleConfigResponse
Optional. Lifecycle setting for the cluster.
masterConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's master instance.
metastoreConfig This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
secondaryWorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
securityConfig This property is required. SecurityConfigResponse
Optional. Security settings for the cluster.
softwareConfig This property is required. SoftwareConfigResponse
Optional. The config settings for cluster software.
tempBucket This property is required. String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
workerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's worker instances.
autoscalingConfig This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
auxiliaryNodeGroups This property is required. AuxiliaryNodeGroupResponse[]
Optional. The node group settings.
configBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
dataprocMetricConfig This property is required. DataprocMetricConfigResponse
Optional. The config for Dataproc metrics.
encryptionConfig This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
endpointConfig This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster.
gceClusterConfig This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig This property is required. GkeClusterConfigResponse
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions This property is required. NodeInitializationActionResponse[]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
lifecycleConfig This property is required. LifecycleConfigResponse
Optional. Lifecycle setting for the cluster.
masterConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's master instance.
metastoreConfig This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
secondaryWorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
securityConfig This property is required. SecurityConfigResponse
Optional. Security settings for the cluster.
softwareConfig This property is required. SoftwareConfigResponse
Optional. The config settings for cluster software.
tempBucket This property is required. string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
workerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's worker instances.
autoscaling_config This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
auxiliary_node_groups This property is required. Sequence[AuxiliaryNodeGroupResponse]
Optional. The node group settings.
config_bucket This property is required. str
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
dataproc_metric_config This property is required. DataprocMetricConfigResponse
Optional. The config for Dataproc metrics.
encryption_config This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
endpoint_config This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster.
gce_cluster_config This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
gke_cluster_config This property is required. GkeClusterConfigResponse
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initialization_actions This property is required. Sequence[NodeInitializationActionResponse]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
lifecycle_config This property is required. LifecycleConfigResponse
Optional. Lifecycle setting for the cluster.
master_config This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's master instance.
metastore_config This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
secondary_worker_config This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
security_config This property is required. SecurityConfigResponse
Optional. Security settings for the cluster.
software_config This property is required. SoftwareConfigResponse
Optional. The config settings for cluster software.
temp_bucket This property is required. str
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
worker_config This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the cluster's worker instances.
autoscalingConfig This property is required. Property Map
Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.
auxiliaryNodeGroups This property is required. List<Property Map>
Optional. The node group settings.
configBucket This property is required. String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
dataprocMetricConfig This property is required. Property Map
Optional. The config for Dataproc metrics.
encryptionConfig This property is required. Property Map
Optional. Encryption settings for the cluster.
endpointConfig This property is required. Property Map
Optional. Port/endpoint configuration for this cluster.
gceClusterConfig This property is required. Property Map
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig This property is required. Property Map
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions This property is required. List<Property Map>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master-specific actions ... else ... worker-specific actions ... fi
lifecycleConfig This property is required. Property Map
Optional. Lifecycle setting for the cluster.
masterConfig This property is required. Property Map
Optional. The Compute Engine config settings for the cluster's master instance.
metastoreConfig This property is required. Property Map
Optional. Metastore configuration.
secondaryWorkerConfig This property is required. Property Map
Optional. The Compute Engine config settings for a cluster's secondary worker instances.
securityConfig This property is required. Property Map
Optional. Security settings for the cluster.
softwareConfig This property is required. Property Map
Optional. The config settings for cluster software.
tempBucket This property is required. String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
workerConfig This property is required. Property Map
Optional. The Compute Engine config settings for the cluster's worker instances.

ClusterMetricsResponse
, ClusterMetricsResponseArgs

HdfsMetrics This property is required. Dictionary<string, string>
The HDFS metrics.
YarnMetrics This property is required. Dictionary<string, string>
YARN metrics.
HdfsMetrics This property is required. map[string]string
The HDFS metrics.
YarnMetrics This property is required. map[string]string
YARN metrics.
hdfsMetrics This property is required. Map<String,String>
The HDFS metrics.
yarnMetrics This property is required. Map<String,String>
YARN metrics.
hdfsMetrics This property is required. {[key: string]: string}
The HDFS metrics.
yarnMetrics This property is required. {[key: string]: string}
YARN metrics.
hdfs_metrics This property is required. Mapping[str, str]
The HDFS metrics.
yarn_metrics This property is required. Mapping[str, str]
YARN metrics.
hdfsMetrics This property is required. Map<String>
The HDFS metrics.
yarnMetrics This property is required. Map<String>
YARN metrics.

ClusterStatusResponse
, ClusterStatusResponseArgs

Detail This property is required. string
Optional. Output only. Details of the cluster's state.
State This property is required. string
The cluster's state.
StateStartTime This property is required. string
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
Substate This property is required. string
Additional state information that includes status reported by the agent.
Detail This property is required. string
Optional. Output only. Details of the cluster's state.
State This property is required. string
The cluster's state.
StateStartTime This property is required. string
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
Substate This property is required. string
Additional state information that includes status reported by the agent.
detail This property is required. String
Optional. Output only. Details of the cluster's state.
state This property is required. String
The cluster's state.
stateStartTime This property is required. String
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
substate This property is required. String
Additional state information that includes status reported by the agent.
detail This property is required. string
Optional. Output only. Details of the cluster's state.
state This property is required. string
The cluster's state.
stateStartTime This property is required. string
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
substate This property is required. string
Additional state information that includes status reported by the agent.
detail This property is required. str
Optional. Output only. Details of the cluster's state.
state This property is required. str
The cluster's state.
state_start_time This property is required. str
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
substate This property is required. str
Additional state information that includes status reported by the agent.
detail This property is required. String
Optional. Output only. Details of the cluster's state.
state This property is required. String
The cluster's state.
stateStartTime This property is required. String
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
substate This property is required. String
Additional state information that includes status reported by the agent.
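
Response types such as this one surface as outputs on the resource. A short sketch, assuming the Cluster resource lifts status into an output shaped like ClusterStatusResponse:

import * as google_native from "@pulumi/google-native";

// Sketch only: a real deployment also needs a config or
// virtualClusterConfig; see the ClusterConfig sketch above.
const statusDemo = new google_native.dataproc.v1.Cluster("status-demo", {
    region: "us-central1",
    clusterName: "status-demo",
});

// Output lifting lets nested response fields be exported directly.
export const clusterState = statusDemo.status.state;
export const stateSince = statusDemo.status.stateStartTime;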

ConfidentialInstanceConfig
, ConfidentialInstanceConfigArgs

EnableConfidentialCompute bool
Optional. Defines whether the instance should have confidential compute enabled.
EnableConfidentialCompute bool
Optional. Defines whether the instance should have confidential compute enabled.
enableConfidentialCompute Boolean
Optional. Defines whether the instance should have confidential compute enabled.
enableConfidentialCompute boolean
Optional. Defines whether the instance should have confidential compute enabled.
enable_confidential_compute bool
Optional. Defines whether the instance should have confidential compute enabled.
enableConfidentialCompute Boolean
Optional. Defines whether the instance should have confidential compute enabled.
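
A hedged sketch of enabling Confidential VMs through gceClusterConfig; it assumes the cluster's machine types must support confidential compute (for example, the N2D series), which is a Compute Engine constraint rather than something this schema enforces:

import * as google_native from "@pulumi/google-native";

// Sketch: Confidential VMs (AMD SEV) require a supported machine series
// such as n2d; the machine types and sizes here are placeholders.
const confidentialCluster = new google_native.dataproc.v1.Cluster("confidential", {
    region: "us-central1",
    clusterName: "confidential",
    config: {
        gceClusterConfig: {
            confidentialInstanceConfig: { enableConfidentialCompute: true },
        },
        masterConfig: { numInstances: 1, machineTypeUri: "n2d-standard-4" },
        workerConfig: { numInstances: 2, machineTypeUri: "n2d-standard-4" },
    },
});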

ConfidentialInstanceConfigResponse
, ConfidentialInstanceConfigResponseArgs

EnableConfidentialCompute This property is required. bool
Optional. Defines whether the instance should have confidential compute enabled.
EnableConfidentialCompute This property is required. bool
Optional. Defines whether the instance should have confidential compute enabled.
enableConfidentialCompute This property is required. Boolean
Optional. Defines whether the instance should have confidential compute enabled.
enableConfidentialCompute This property is required. boolean
Optional. Defines whether the instance should have confidential compute enabled.
enable_confidential_compute This property is required. bool
Optional. Defines whether the instance should have confidential compute enabled.
enableConfidentialCompute This property is required. Boolean
Optional. Defines whether the instance should have confidential compute enabled.

DataprocMetricConfig
, DataprocMetricConfigArgs

Metrics This property is required. List<Pulumi.GoogleNative.Dataproc.V1.Inputs.Metric>
Metrics sources to enable.
Metrics This property is required. []Metric
Metrics sources to enable.
metrics This property is required. List<Metric>
Metrics sources to enable.
metrics This property is required. Metric[]
Metrics sources to enable.
metrics This property is required. Sequence[Metric]
Metrics sources to enable.
metrics This property is required. List<Property Map>
Metrics sources to enable.
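
For example, a minimal sketch that enables one metric source; the "SPARK" value is assumed from the Metric type's metricSource enum (documented elsewhere on this page), and all names are placeholders:

import * as google_native from "@pulumi/google-native";

const cluster = new google_native.dataproc.v1.Cluster("metrics-example", {
    region: "us-central1",
    clusterName: "metrics-example-cluster",
    config: {
        dataprocMetricConfig: {
            // Each entry names a source whose metrics Dataproc should collect.
            metrics: [{
                metricSource: "SPARK",
            }],
        },
    },
});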

DataprocMetricConfigResponse
, DataprocMetricConfigResponseArgs

Metrics This property is required. List<Pulumi.GoogleNative.Dataproc.V1.Inputs.MetricResponse>
Metrics sources to enable.
Metrics This property is required. []MetricResponse
Metrics sources to enable.
metrics This property is required. List<MetricResponse>
Metrics sources to enable.
metrics This property is required. MetricResponse[]
Metrics sources to enable.
metrics This property is required. Sequence[MetricResponse]
Metrics sources to enable.
metrics This property is required. List<Property Map>
Metrics sources to enable.

DiskConfig
, DiskConfigArgs

BootDiskSizeGb int
Optional. Size in GB of the boot disk (default is 500GB).
BootDiskType string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
LocalSsdInterface string
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
NumLocalSsds int
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
BootDiskSizeGb int
Optional. Size in GB of the boot disk (default is 500GB).
BootDiskType string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
LocalSsdInterface string
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
NumLocalSsds int
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
bootDiskSizeGb Integer
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
localSsdInterface String
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
numLocalSsds Integer
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
bootDiskSizeGb number
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
localSsdInterface string
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
numLocalSsds number
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
boot_disk_size_gb int
Optional. Size in GB of the boot disk (default is 500GB).
boot_disk_type str
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
local_ssd_interface str
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
num_local_ssds int
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
bootDiskSizeGb Number
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
localSsdInterface String
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
numLocalSsds Number
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
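
As a concrete shape for these fields, a hedged sketch that gives the master a larger SSD boot disk and attaches one NVMe local SSD to each worker; sizes and counts are illustrative placeholders, not recommendations:

import * as google_native from "@pulumi/google-native";

const cluster = new google_native.dataproc.v1.Cluster("disk-example", {
    region: "us-central1",
    clusterName: "disk-example-cluster",
    config: {
        masterConfig: {
            diskConfig: {
                bootDiskType: "pd-ssd",
                bootDiskSizeGb: 200,
            },
        },
        workerConfig: {
            numInstances: 2,
            diskConfig: {
                bootDiskSizeGb: 100,
                // With a local SSD attached, HDFS and shuffle data move off the boot disk.
                numLocalSsds: 1,
                localSsdInterface: "nvme",
            },
        },
    },
});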

DiskConfigResponse
, DiskConfigResponseArgs

BootDiskSizeGb This property is required. int
Optional. Size in GB of the boot disk (default is 500GB).
BootDiskType This property is required. string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
LocalSsdInterface This property is required. string
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
NumLocalSsds This property is required. int
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
BootDiskSizeGb This property is required. int
Optional. Size in GB of the boot disk (default is 500GB).
BootDiskType This property is required. string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
LocalSsdInterface This property is required. string
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
NumLocalSsds This property is required. int
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
bootDiskSizeGb This property is required. Integer
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType This property is required. String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
localSsdInterface This property is required. String
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
numLocalSsds This property is required. Integer
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
bootDiskSizeGb This property is required. number
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType This property is required. string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
localSsdInterface This property is required. string
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
numLocalSsds This property is required. number
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
boot_disk_size_gb This property is required. int
Optional. Size in GB of the boot disk (default is 500GB).
boot_disk_type This property is required. str
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
local_ssd_interface This property is required. str
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
num_local_ssds This property is required. int
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
bootDiskSizeGb This property is required. Number
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType This property is required. String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
localSsdInterface This property is required. String
Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
numLocalSsds This property is required. Number
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.

EncryptionConfig
, EncryptionConfigArgs

GcePdKmsKeyName string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
KmsKey string
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
GcePdKmsKeyName string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
KmsKey string
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
gcePdKmsKeyName String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
kmsKey String
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
gcePdKmsKeyName string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
kmsKey string
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
gce_pd_kms_key_name str
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
kms_key str
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
gcePdKmsKeyName String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
kmsKey String
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
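
A minimal CMEK sketch; the project, key ring, and key names are placeholders for an existing Cloud KMS key that the Dataproc service agent has permission to use:

import * as google_native from "@pulumi/google-native";

const cluster = new google_native.dataproc.v1.Cluster("cmek-example", {
    region: "us-central1",
    clusterName: "cmek-example-cluster",
    config: {
        encryptionConfig: {
            // Encrypt the PD disks of all cluster instances with a customer-managed key.
            gcePdKmsKeyName: "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
        },
    },
});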

EncryptionConfigResponse
, EncryptionConfigResponseArgs

GcePdKmsKeyName This property is required. string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
KmsKey This property is required. string
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
GcePdKmsKeyName This property is required. string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
KmsKey This property is required. string
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
gcePdKmsKeyName This property is required. String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
kmsKey This property is required. String
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
gcePdKmsKeyName This property is required. string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
kmsKey This property is required. string
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
gce_pd_kms_key_name This property is required. str
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
kms_key This property is required. str
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
gcePdKmsKeyName This property is required. String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
kmsKey This property is required. String
Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.

EndpointConfig
, EndpointConfigArgs

EnableHttpPortAccess bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
EnableHttpPortAccess bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
enableHttpPortAccess Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
enableHttpPortAccess boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
enable_http_port_access bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
enableHttpPortAccess Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
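
Setting this flag turns on the Dataproc Component Gateway, so cluster web UIs (YARN, Spark History Server, and so on) become reachable over HTTPS; a minimal sketch with placeholder names:

import * as google_native from "@pulumi/google-native";

const cluster = new google_native.dataproc.v1.Cluster("gateway-example", {
    region: "us-central1",
    clusterName: "gateway-example-cluster",
    config: {
        endpointConfig: {
            // Expose cluster web endpoints through the Component Gateway.
            enableHttpPortAccess: true,
        },
    },
});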

EndpointConfigResponse
, EndpointConfigResponseArgs

EnableHttpPortAccess This property is required. bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
HttpPorts This property is required. Dictionary<string, string>
The map of port descriptions to URLs. It is only populated if enable_http_port_access is true.
EnableHttpPortAccess This property is required. bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
HttpPorts This property is required. map[string]string
The map of port descriptions to URLs. It is only populated if enable_http_port_access is true.
enableHttpPortAccess This property is required. Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
httpPorts This property is required. Map<String,String>
The map of port descriptions to URLs. It is only populated if enable_http_port_access is true.
enableHttpPortAccess This property is required. boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
httpPorts This property is required. {[key: string]: string}
The map of port descriptions to URLs. It is only populated if enable_http_port_access is true.
enable_http_port_access This property is required. bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
http_ports This property is required. Mapping[str, str]
The map of port descriptions to URLs. It is only populated if enable_http_port_access is true.
enableHttpPortAccess This property is required. Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
httpPorts This property is required. Map<String>
The map of port descriptions to URLs. It is only populated if enable_http_port_access is true.
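
Because httpPorts is output-only, it is read back from the provisioned resource. Continuing the Component Gateway sketch above, and assuming the resource mirrors this response type under its config output:

// Map of port descriptions (e.g. "YARN ResourceManager") to gateway URLs;
// only populated when enableHttpPortAccess is true.
export const componentGatewayUrls = cluster.config.apply(c => c.endpointConfig.httpPorts);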

GceClusterConfig
, GceClusterConfigArgs

ConfidentialInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfig
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
InternalIpOnly bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
Metadata Dictionary<string, string>
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
NetworkUri string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinity
Optional. Node Group Affinity for sole-tenant clusters.
PrivateIpv6GoogleAccess Pulumi.GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess
Optional. The type of IPv6 access for a cluster.
ReservationAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.ReservationAffinity
Optional. Reservation Affinity for consuming Zonal reservation.
ServiceAccount string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
ServiceAccountScopes List<string>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfig
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
SubnetworkUri string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
Tags List<string>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
ZoneUri string
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
ConfidentialInstanceConfig ConfidentialInstanceConfig
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
InternalIpOnly bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
Metadata map[string]string
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
NetworkUri string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
NodeGroupAffinity NodeGroupAffinity
Optional. Node Group Affinity for sole-tenant clusters.
PrivateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
Optional. The type of IPv6 access for a cluster.
ReservationAffinity ReservationAffinity
Optional. Reservation Affinity for consuming Zonal reservation.
ServiceAccount string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
ServiceAccountScopes []string
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
ShieldedInstanceConfig ShieldedInstanceConfig
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
SubnetworkUri string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
Tags []string
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
ZoneUri string
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
confidentialInstanceConfig ConfidentialInstanceConfig
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
internalIpOnly Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata Map<String,String>
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
nodeGroupAffinity NodeGroupAffinity
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
Optional. The type of IPv6 access for a cluster.
reservationAffinity ReservationAffinity
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig ShieldedInstanceConfig
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
tags List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri String
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
confidentialInstanceConfig ConfidentialInstanceConfig
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
internalIpOnly boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata {[key: string]: string}
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
nodeGroupAffinity NodeGroupAffinity
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
Optional. The type of IPv6 access for a cluster.
reservationAffinity ReservationAffinity
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes string[]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig ShieldedInstanceConfig
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
tags string[]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri string
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
confidential_instance_config ConfidentialInstanceConfig
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
internal_ip_only bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata Mapping[str, str]
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
network_uri str
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
node_group_affinity NodeGroupAffinity
Optional. Node Group Affinity for sole-tenant clusters.
private_ipv6_google_access GceClusterConfigPrivateIpv6GoogleAccess
Optional. The type of IPv6 access for a cluster.
reservation_affinity ReservationAffinity
Optional. Reservation Affinity for consuming Zonal reservation.
service_account str
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
service_account_scopes Sequence[str]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shielded_instance_config ShieldedInstanceConfig
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetwork_uri str
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
tags Sequence[str]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zone_uri str
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
confidentialInstanceConfig Property Map
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
internalIpOnly Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata Map<String>
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
nodeGroupAffinity Property Map
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED" | "INHERIT_FROM_SUBNETWORK" | "OUTBOUND" | "BIDIRECTIONAL"
Optional. The type of IPv6 access for a cluster.
reservationAffinity Property Map
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig Property Map
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
tags List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri String
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
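
Pulling several of these fields together, a hedged sketch of a private-IP cluster on an existing subnetwork; every resource name is a placeholder, and since networkUri and subnetworkUri are mutually exclusive only the latter is set:

import * as google_native from "@pulumi/google-native";

const cluster = new google_native.dataproc.v1.Cluster("network-example", {
    region: "us-central1",
    clusterName: "network-example-cluster",
    config: {
        gceClusterConfig: {
            // Short-name form; do not set networkUri as well.
            subnetworkUri: "sub0",
            // Instances get internal IPs only; off-cluster dependencies must be
            // reachable without external IP addresses.
            internalIpOnly: true,
            serviceAccount: "dataproc-vms@my-project.iam.gserviceaccount.com",
            tags: ["dataproc"],
            metadata: {
                "enable-oslogin": "true",
            },
        },
    },
});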

GceClusterConfigPrivateIpv6GoogleAccess
, GceClusterConfigPrivateIpv6GoogleAccessArgs

PrivateIpv6GoogleAccessUnspecified
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
InheritFromSubnetwork
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
Outbound
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
Bidirectional
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
GceClusterConfigPrivateIpv6GoogleAccessInheritFromSubnetwork
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
GceClusterConfigPrivateIpv6GoogleAccessOutbound
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
GceClusterConfigPrivateIpv6GoogleAccessBidirectional
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
PrivateIpv6GoogleAccessUnspecified
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
InheritFromSubnetwork
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
Outbound
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
Bidirectional
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
PrivateIpv6GoogleAccessUnspecified
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
InheritFromSubnetwork
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
Outbound
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
Bidirectional
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
INHERIT_FROM_SUBNETWORK
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
OUTBOUND
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
BIDIRECTIONAL
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
"PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED"
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
"INHERIT_FROM_SUBNETWORK"
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
"OUTBOUND"
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
"BIDIRECTIONAL"
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
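
In the typed SDKs these constants map onto the string values above; a TypeScript sketch (the enum path follows this page's naming conventions and is an assumption; passing the raw string "OUTBOUND" is equivalent):

import * as google_native from "@pulumi/google-native";

const cluster = new google_native.dataproc.v1.Cluster("ipv6-example", {
    region: "us-central1",
    clusterName: "ipv6-example-cluster",
    config: {
        gceClusterConfig: {
            // Allow outbound-only private IPv6 access to Google Services.
            privateIpv6GoogleAccess: google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.Outbound,
        },
    },
});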

GceClusterConfigResponse
, GceClusterConfigResponseArgs

ConfidentialInstanceConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigResponse
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
InternalIpOnly This property is required. bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
Metadata This property is required. Dictionary<string, string>
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
NetworkUri This property is required. string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
NodeGroupAffinity This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
PrivateIpv6GoogleAccess This property is required. string
Optional. The type of IPv6 access for a cluster.
ReservationAffinity This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
ServiceAccount This property is required. string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
ServiceAccountScopes This property is required. List<string>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
ShieldedInstanceConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
SubnetworkUri This property is required. string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
Tags This property is required. List<string>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
ZoneUri This property is required. string
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
ConfidentialInstanceConfig This property is required. ConfidentialInstanceConfigResponse
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
InternalIpOnly This property is required. bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
Metadata This property is required. map[string]string
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
NetworkUri This property is required. string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
NodeGroupAffinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
PrivateIpv6GoogleAccess This property is required. string
Optional. The type of IPv6 access for a cluster.
ReservationAffinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
ServiceAccount This property is required. string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
ServiceAccountScopes This property is required. []string
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
ShieldedInstanceConfig This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
SubnetworkUri This property is required. string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
Tags This property is required. []string
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
ZoneUri This property is required. string
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
confidentialInstanceConfig This property is required. ConfidentialInstanceConfigResponse
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
internalIpOnly This property is required. Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. Map<String,String>
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri This property is required. String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
nodeGroupAffinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess This property is required. String
Optional. The type of IPv6 access for a cluster.
reservationAffinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming a zonal reservation.
serviceAccount This property is required. String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes This property is required. List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri This property is required. String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
tags This property is required. List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri This property is required. String
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
confidentialInstanceConfig This property is required. ConfidentialInstanceConfigResponse
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
internalIpOnly This property is required. boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. {[key: string]: string}
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri This property is required. string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
nodeGroupAffinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess This property is required. string
Optional. The type of IPv6 access for a cluster.
reservationAffinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming a zonal reservation.
serviceAccount This property is required. string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes This property is required. string[]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri This property is required. string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
tags This property is required. string[]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri This property is required. string
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
confidential_instance_config This property is required. ConfidentialInstanceConfigResponse
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
internal_ip_only This property is required. bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. Mapping[str, str]
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
network_uri This property is required. str
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
node_group_affinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
private_ipv6_google_access This property is required. str
Optional. The type of IPv6 access for a cluster.
reservation_affinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming a zonal reservation.
service_account This property is required. str
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
service_account_scopes This property is required. Sequence[str]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shielded_instance_config This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetwork_uri This property is required. str
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
tags This property is required. Sequence[str]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zone_uri This property is required. str
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
confidentialInstanceConfig This property is required. Property Map
Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
internalIpOnly This property is required. Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. Map<String>
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri This property is required. String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
nodeGroupAffinity This property is required. Property Map
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess This property is required. String
Optional. The type of IPv6 access for a cluster.
reservationAffinity This property is required. Property Map
Optional. Reservation Affinity for consuming a zonal reservation.
serviceAccount This property is required. String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes This property is required. List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig This property is required. Property Map
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri This property is required. String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
tags This property is required. List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri This property is required. String
Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
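
The network fields above are mutually constrained: network_uri and subnetwork_uri cannot both be set, and internal_ip_only requires that all off-cluster dependencies be reachable without external IPs. The following Python sketch shows how these fields are typically supplied through the corresponding GceClusterConfig input type; the project, subnetwork, and service account values are placeholders, and it assumes the pulumi_google_native SDK's ClusterConfigArgs and GceClusterConfigArgs input classes.

import pulumi_google_native.dataproc.v1 as dataproc

# Minimal sketch (placeholder values). subnetwork_uri and network_uri are
# mutually exclusive; only one is set here.
cluster = dataproc.Cluster(
    "example-cluster",
    cluster_name="example-cluster",
    region="us-central1",
    config=dataproc.ClusterConfigArgs(
        gce_cluster_config=dataproc.GceClusterConfigArgs(
            subnetwork_uri="projects/my-project/regions/us-central1/subnetworks/sub0",
            internal_ip_only=True,  # off-cluster dependencies must be reachable without external IPs
            service_account="dataproc-vm@my-project.iam.gserviceaccount.com",
            tags=["dataproc"],
            zone_uri="us-central1-a",
        ),
    ),
)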

GkeClusterConfig
, GkeClusterConfigArgs

GkeClusterTarget string
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTarget
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTarget>
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
GkeClusterTarget string
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

NodePoolTarget []GkeNodePoolTarget
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
gkeClusterTarget String
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget List<GkeNodePoolTarget>
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
gkeClusterTarget string
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget GkeNodePoolTarget[]
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
gke_cluster_target str
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
namespaced_gke_deployment_target NamespacedGkeDeploymentTarget
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

node_pool_target Sequence[GkeNodePoolTarget]
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
gkeClusterTarget String
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
namespacedGkeDeploymentTarget Property Map
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget List<Property Map>
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
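
GkeClusterConfig is consumed through virtualClusterConfig when a Dataproc cluster runs on GKE. The Python sketch below uses placeholder project, cluster, and node pool names, and assumes the pulumi_google_native SDK's VirtualClusterConfigArgs, KubernetesClusterConfigArgs, and GkeNodePoolTargetArgs input classes; it shows a target GKE cluster with a single node pool holding the DEFAULT role.

import pulumi_google_native.dataproc.v1 as dataproc

# Minimal sketch (placeholder names). At least one node pool target must
# carry the DEFAULT role; Dataproc constructs one if none is given.
cluster = dataproc.Cluster(
    "gke-backed-cluster",
    cluster_name="gke-backed-cluster",
    region="us-central1",
    virtual_cluster_config=dataproc.VirtualClusterConfigArgs(
        kubernetes_cluster_config=dataproc.KubernetesClusterConfigArgs(
            gke_cluster_config=dataproc.GkeClusterConfigArgs(
                gke_cluster_target="projects/my-project/locations/us-central1/clusters/my-gke-cluster",
                node_pool_target=[
                    dataproc.GkeNodePoolTargetArgs(
                        node_pool="projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/default-pool",
                        roles=["DEFAULT"],
                    ),
                ],
            ),
        ),
    ),
)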

GkeClusterConfigResponse
, GkeClusterConfigResponseArgs

GkeClusterTarget This property is required. string
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
NamespacedGkeDeploymentTarget This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTargetResponse
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

NodePoolTarget This property is required. List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetResponse>
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
GkeClusterTarget This property is required. string
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
NamespacedGkeDeploymentTarget This property is required. NamespacedGkeDeploymentTargetResponse
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

NodePoolTarget This property is required. []GkeNodePoolTargetResponse
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
gkeClusterTarget This property is required. String
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
namespacedGkeDeploymentTarget This property is required. NamespacedGkeDeploymentTargetResponse
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget This property is required. List<GkeNodePoolTargetResponse>
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
gkeClusterTarget This property is required. string
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
namespacedGkeDeploymentTarget This property is required. NamespacedGkeDeploymentTargetResponse
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget This property is required. GkeNodePoolTargetResponse[]
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
gke_cluster_target This property is required. str
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
namespaced_gke_deployment_target This property is required. NamespacedGkeDeploymentTargetResponse
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

node_pool_target This property is required. Sequence[GkeNodePoolTargetResponse]
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
gkeClusterTarget This property is required. String
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
namespacedGkeDeploymentTarget This property is required. Property Map
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

nodePoolTarget This property is required. List<Property Map>
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

GkeNodeConfig
, GkeNodeConfigArgs

Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfig>
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
BootDiskKmsKey string
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
LocalSsdCount int
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
MachineType string
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
MinCpuPlatform string
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
Preemptible bool
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
Spot bool
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
Accelerators []GkeNodePoolAcceleratorConfig
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
BootDiskKmsKey string
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
LocalSsdCount int
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
MachineType string
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
MinCpuPlatform string
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
Preemptible bool
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
Spot bool
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
accelerators List<GkeNodePoolAcceleratorConfig>
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
bootDiskKmsKey String
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
localSsdCount Integer
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
machineType String
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
minCpuPlatform String
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
preemptible Boolean
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
spot Boolean
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
accelerators GkeNodePoolAcceleratorConfig[]
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
bootDiskKmsKey string
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
localSsdCount number
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
machineType string
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
minCpuPlatform string
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
preemptible boolean
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
spot boolean
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
accelerators Sequence[GkeNodePoolAcceleratorConfig]
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
boot_disk_kms_key str
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
local_ssd_count int
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
machine_type str
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
min_cpu_platform str
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
preemptible bool
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
spot bool
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
accelerators List<Property Map>
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
bootDiskKmsKey String
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
localSsdCount Number
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
machineType String
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
minCpuPlatform String
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
preemptible Boolean
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
spot Boolean
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
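
The following Python fragment sketches how the fields above combine in a worker node pool definition. The node pool path and machine values are placeholders, and GkeNodePoolTargetArgs, GkeNodePoolConfigArgs, and GkeNodeConfigArgs are assumed from the pulumi_google_native SDK. Note the constraint documented above: a pool holding the CONTROLLER role cannot use preemptible or Spot nodes.

import pulumi_google_native.dataproc.v1 as dataproc

# Minimal sketch (placeholder values). spot=True is acceptable for a
# DEFAULT/worker pool, but not for a pool with the CONTROLLER role.
workers = dataproc.GkeNodePoolTargetArgs(
    node_pool="projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/workers",
    roles=["DEFAULT"],
    node_pool_config=dataproc.GkeNodePoolConfigArgs(
        config=dataproc.GkeNodeConfigArgs(
            machine_type="n1-standard-8",
            spot=True,
            local_ssd_count=1,
            min_cpu_platform="Intel Haswell",
        ),
    ),
)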

GkeNodeConfigResponse
, GkeNodeConfigResponseArgs

Accelerators This property is required. List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigResponse>
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
BootDiskKmsKey This property is required. string
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
LocalSsdCount This property is required. int
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
MachineType This property is required. string
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
MinCpuPlatform This property is required. string
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
Preemptible This property is required. bool
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
Spot This property is required. bool
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
Accelerators This property is required. []GkeNodePoolAcceleratorConfigResponse
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
BootDiskKmsKey This property is required. string
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
LocalSsdCount This property is required. int
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
MachineType This property is required. string
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
MinCpuPlatform This property is required. string
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
Preemptible This property is required. bool
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
Spot This property is required. bool
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
accelerators This property is required. List<GkeNodePoolAcceleratorConfigResponse>
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
bootDiskKmsKey This property is required. String
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
localSsdCount This property is required. Integer
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
machineType This property is required. String
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
minCpuPlatform This property is required. String
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
preemptible This property is required. Boolean
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
spot This property is required. Boolean
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
accelerators This property is required. GkeNodePoolAcceleratorConfigResponse[]
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
bootDiskKmsKey This property is required. string
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
localSsdCount This property is required. number
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
machineType This property is required. string
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
minCpuPlatform This property is required. string
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
preemptible This property is required. boolean
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
spot This property is required. boolean
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
accelerators This property is required. Sequence[GkeNodePoolAcceleratorConfigResponse]
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
boot_disk_kms_key This property is required. str
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
local_ssd_count This property is required. int
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
machine_type This property is required. str
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
min_cpu_platform This property is required. str
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
preemptible This property is required. bool
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
spot This property is required. bool
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
accelerators This property is required. List<Property Map>
Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
bootDiskKmsKey This property is required. String
Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
localSsdCount This property is required. Number
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
machineType This property is required. String
Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
minCpuPlatform This property is required. String
Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
preemptible This property is required. Boolean
Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
spot This property is required. Boolean
Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

GkeNodePoolAcceleratorConfig
, GkeNodePoolAcceleratorConfigArgs

AcceleratorCount string
The number of accelerator cards exposed to an instance.
AcceleratorType string
The accelerator type resource name (see GPUs on Compute Engine).
GpuPartitionSize string
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
AcceleratorCount string
The number of accelerator cards exposed to an instance.
AcceleratorType string
The accelerator type resource name (see GPUs on Compute Engine).
GpuPartitionSize string
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
acceleratorCount String
The number of accelerator cards exposed to an instance.
acceleratorType String
The accelerator type resource name (see GPUs on Compute Engine).
gpuPartitionSize String
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
acceleratorCount string
The number of accelerator cards exposed to an instance.
acceleratorType string
The accelerator type resource name (see GPUs on Compute Engine).
gpuPartitionSize string
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
accelerator_count str
The number of accelerator cards exposed to an instance.
accelerator_type str
The accelerator type resource name (see GPUs on Compute Engine).
gpu_partition_size str
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
acceleratorCount String
The number of accelerator cards exposed to an instance.
acceleratorType String
The accelerator type resource name (see GPUs on Compute Engine).
gpuPartitionSize String
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
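As a sketch (same assumption about the SDK's generated type paths), one accelerator entry might look like the following; the GPU type and MIG partition size are hypothetical examples whose valid values depend on the GPU model:

import * as google_native from "@pulumi/google-native";

// One hardware accelerator attached to each node in the pool.
const accelerator: google_native.types.input.dataproc.v1.GkeNodePoolAcceleratorConfigArgs = {
    acceleratorCount: "1",                // a string in this API, not a number
    acceleratorType: "nvidia-tesla-a100", // accelerator type resource name
    gpuPartitionSize: "1g.5gb",           // optional MIG partition size
};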

GkeNodePoolAcceleratorConfigResponse
, GkeNodePoolAcceleratorConfigResponseArgs

AcceleratorCount This property is required. string
The number of accelerator cards exposed to an instance.
AcceleratorType This property is required. string
The accelerator type resource name (see GPUs on Compute Engine).
GpuPartitionSize This property is required. string
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
AcceleratorCount This property is required. string
The number of accelerator cards exposed to an instance.
AcceleratorType This property is required. string
The accelerator type resource name (see GPUs on Compute Engine).
GpuPartitionSize This property is required. string
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
acceleratorCount This property is required. String
The number of accelerator cards exposed to an instance.
acceleratorType This property is required. String
The accelerator type resource name (see GPUs on Compute Engine).
gpuPartitionSize This property is required. String
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
acceleratorCount This property is required. string
The number of accelerator cards exposed to an instance.
acceleratorType This property is required. string
The accelerator type resource name (see GPUs on Compute Engine).
gpuPartitionSize This property is required. string
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
accelerator_count This property is required. str
The number of accelerator cards exposed to an instance.
accelerator_type This property is required. str
The accelerator type resource name (see GPUs on Compute Engine).
gpu_partition_size This property is required. str
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
acceleratorCount This property is required. String
The number of accelerator cards exposed to an instance.
acceleratorType This property is required. String
The accelerator type resource name (see GPUs on Compute Engine).
gpuPartitionSize This property is required. String
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

GkeNodePoolAutoscalingConfig
, GkeNodePoolAutoscalingConfigArgs

MaxNodeCount int
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
MinNodeCount int
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
MaxNodeCount int
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
MinNodeCount int
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
maxNodeCount Integer
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
minNodeCount Integer
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
maxNodeCount number
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
minNodeCount number
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
max_node_count int
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
min_node_count int
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
maxNodeCount Number
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
minNodeCount Number
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
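A minimal sketch of the bounds, under the same assumption about the SDK's type paths; the numbers are arbitrary examples:

import * as google_native from "@pulumi/google-native";

// Autoscaling is enabled simply by supplying a valid configuration.
const autoscaling: google_native.types.input.dataproc.v1.GkeNodePoolAutoscalingConfigArgs = {
    minNodeCount: 1,  // must be >= 0 and <= maxNodeCount
    maxNodeCount: 10, // must be > 0; quota must cover the scale-up
};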

GkeNodePoolAutoscalingConfigResponse
, GkeNodePoolAutoscalingConfigResponseArgs

MaxNodeCount This property is required. int
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
MinNodeCount This property is required. int
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
MaxNodeCount This property is required. int
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
MinNodeCount This property is required. int
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
maxNodeCount This property is required. Integer
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
minNodeCount This property is required. Integer
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
maxNodeCount This property is required. number
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
minNodeCount This property is required. number
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
max_node_count This property is required. int
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
min_node_count This property is required. int
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
maxNodeCount This property is required. Number
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
minNodeCount This property is required. Number
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

GkeNodePoolConfig
, GkeNodePoolConfigArgs

Autoscaling Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfig
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
Config Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfig
Optional. The node pool configuration.
Locations List<string>
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
Autoscaling GkeNodePoolAutoscalingConfig
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
Config GkeNodeConfig
Optional. The node pool configuration.
Locations []string
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
autoscaling GkeNodePoolAutoscalingConfig
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
config GkeNodeConfig
Optional. The node pool configuration.
locations List<String>
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
autoscaling GkeNodePoolAutoscalingConfig
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
config GkeNodeConfig
Optional. The node pool configuration.
locations string[]
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
autoscaling GkeNodePoolAutoscalingConfig
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
config GkeNodeConfig
Optional. The node pool configuration.
locations Sequence[str]
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
autoscaling Property Map
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
config Property Map
Optional. The node pool configuration.
locations List<String>
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
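Putting the pieces together, a hypothetical node pool config might look like the following sketch (the zone and machine type are placeholders; all pools of a virtual cluster must share one zone within the cluster's region):

import * as google_native from "@pulumi/google-native";

const nodePoolConfig: google_native.types.input.dataproc.v1.GkeNodePoolConfigArgs = {
    config: { machineType: "n1-standard-4" },          // per-node settings
    autoscaling: { minNodeCount: 1, maxNodeCount: 5 }, // enables the autoscaler
    locations: ["us-central1-a"],                      // one zone in the cluster's region
};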

GkeNodePoolConfigResponse
, GkeNodePoolConfigResponseArgs

Autoscaling This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigResponse
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
Config This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigResponse
Optional. The node pool configuration.
Locations This property is required. List<string>
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
Autoscaling This property is required. GkeNodePoolAutoscalingConfigResponse
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
Config This property is required. GkeNodeConfigResponse
Optional. The node pool configuration.
Locations This property is required. []string
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
autoscaling This property is required. GkeNodePoolAutoscalingConfigResponse
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
config This property is required. GkeNodeConfigResponse
Optional. The node pool configuration.
locations This property is required. List<String>
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
autoscaling This property is required. GkeNodePoolAutoscalingConfigResponse
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
config This property is required. GkeNodeConfigResponse
Optional. The node pool configuration.
locations This property is required. string[]
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
autoscaling This property is required. GkeNodePoolAutoscalingConfigResponse
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
config This property is required. GkeNodeConfigResponse
Optional. The node pool configuration.
locations This property is required. Sequence[str]
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
autoscaling This property is required. Property Map
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
config This property is required. Property Map
Optional. The node pool configuration.
locations This property is required. List<String>
Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

GkeNodePoolTarget
, GkeNodePoolTargetArgs

NodePool This property is required. string
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
Roles This property is required. List<Pulumi.GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem>
The roles associated with the GKE node pool.
NodePoolConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfig
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
NodePool This property is required. string
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
Roles This property is required. []GkeNodePoolTargetRolesItem
The roles associated with the GKE node pool.
NodePoolConfig GkeNodePoolConfig
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
nodePool This property is required. String
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
roles This property is required. List<GkeNodePoolTargetRolesItem>
The roles associated with the GKE node pool.
nodePoolConfig GkeNodePoolConfig
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
nodePool This property is required. string
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
roles This property is required. GkeNodePoolTargetRolesItem[]
The roles associated with the GKE node pool.
nodePoolConfig GkeNodePoolConfig
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
node_pool This property is required. str
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
roles This property is required. Sequence[GkeNodePoolTargetRolesItem]
The roles associated with the GKE node pool.
node_pool_config GkeNodePoolConfig
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
nodePool This property is required. String
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
roles This property is required. List<"ROLE_UNSPECIFIED" | "DEFAULT" | "CONTROLLER" | "SPARK_DRIVER" | "SPARK_EXECUTOR">
The roles associated with the GKE node pool.
nodePoolConfig Property Map
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
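A sketch of a complete target, with a hypothetical project, cluster, and pool name. The generated roles enum is a plain string union in the Node.js SDK, so the literal "DEFAULT" is accepted here:

import * as google_native from "@pulumi/google-native";

const target: google_native.types.input.dataproc.v1.GkeNodePoolTargetArgs = {
    // Existing pools with this name are verified; missing ones are created.
    nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/dp-default",
    roles: ["DEFAULT"], // at least one target must carry the DEFAULT role
    nodePoolConfig: {   // input only; never returned by the API
        autoscaling: { minNodeCount: 1, maxNodeCount: 5 },
        locations: ["us-central1-a"],
    },
};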

GkeNodePoolTargetResponse
, GkeNodePoolTargetResponseArgs

NodePool This property is required. string
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
NodePoolConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigResponse
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
Roles This property is required. List<string>
The roles associated with the GKE node pool.
NodePool This property is required. string
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
NodePoolConfig This property is required. GkeNodePoolConfigResponse
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
Roles This property is required. []string
The roles associated with the GKE node pool.
nodePool This property is required. String
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
nodePoolConfig This property is required. GkeNodePoolConfigResponse
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
roles This property is required. List<String>
The roles associated with the GKE node pool.
nodePool This property is required. string
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
nodePoolConfig This property is required. GkeNodePoolConfigResponse
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
roles This property is required. string[]
The roles associated with the GKE node pool.
node_pool This property is required. str
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
node_pool_config This property is required. GkeNodePoolConfigResponse
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
roles This property is required. Sequence[str]
The roles associated with the GKE node pool.
nodePool This property is required. String
The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
nodePoolConfig This property is required. Property Map
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API.
roles This property is required. List<String>
The roles associated with the GKE node pool.

GkeNodePoolTargetRolesItem
, GkeNodePoolTargetRolesItemArgs

RoleUnspecified
ROLE_UNSPECIFIED: Role is unspecified.
Default
DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
Controller
CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
SparkDriver
SPARK_DRIVER: Run work associated with a Spark driver of a job.
SparkExecutor
SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
GkeNodePoolTargetRolesItemRoleUnspecified
ROLE_UNSPECIFIED: Role is unspecified.
GkeNodePoolTargetRolesItemDefault
DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
GkeNodePoolTargetRolesItemController
CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
GkeNodePoolTargetRolesItemSparkDriver
SPARK_DRIVER: Run work associated with a Spark driver of a job.
GkeNodePoolTargetRolesItemSparkExecutor
SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
RoleUnspecified
ROLE_UNSPECIFIED: Role is unspecified.
Default
DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
Controller
CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
SparkDriver
SPARK_DRIVER: Run work associated with a Spark driver of a job.
SparkExecutor
SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
RoleUnspecified
ROLE_UNSPECIFIED: Role is unspecified.
Default
DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
Controller
CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
SparkDriver
SPARK_DRIVER: Run work associated with a Spark driver of a job.
SparkExecutor
SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
ROLE_UNSPECIFIED
ROLE_UNSPECIFIED: Role is unspecified.
DEFAULT
DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
CONTROLLER
CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
SPARK_DRIVER
SPARK_DRIVER: Run work associated with a Spark driver of a job.
SPARK_EXECUTOR
SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
"ROLE_UNSPECIFIED"
ROLE_UNSPECIFIED: Role is unspecified.
"DEFAULT"
DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
"CONTROLLER"
CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
"SPARK_DRIVER"
SPARK_DRIVER: Run work associated with a Spark driver of a job.
"SPARK_EXECUTOR"
SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
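To illustrate the DEFAULT fallback rule, a hypothetical two-target layout: since no target below claims the CONTROLLER role, control plane work lands on the DEFAULT pool (all names are placeholders):

import * as google_native from "@pulumi/google-native";

const targets: google_native.types.input.dataproc.v1.GkeNodePoolTargetArgs[] = [
    // Also receives CONTROLLER work, since no pool claims that role.
    { nodePool: "projects/p/locations/us-central1/clusters/c/nodePools/core", roles: ["DEFAULT"] },
    // Dedicated pool for Spark executor tasks.
    { nodePool: "projects/p/locations/us-central1/clusters/c/nodePools/exec", roles: ["SPARK_EXECUTOR"] },
];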

IdentityConfig
, IdentityConfigArgs

UserServiceAccountMapping This property is required. Dictionary<string, string>
Map of user to service account.
UserServiceAccountMapping This property is required. map[string]string
Map of user to service account.
userServiceAccountMapping This property is required. Map<String,String>
Map of user to service account.
userServiceAccountMapping This property is required. {[key: string]: string}
Map of user to service account.
user_service_account_mapping This property is required. Mapping[str, str]
Map of user to service account.
userServiceAccountMapping This property is required. Map<String>
Map of user to service account.
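A one-entry sketch of the mapping, assuming the same SDK type paths; both identities are hypothetical:

import * as google_native from "@pulumi/google-native";

// Maps each cluster user to the service account it should act as.
const identityConfig: google_native.types.input.dataproc.v1.IdentityConfigArgs = {
    userServiceAccountMapping: {
        "alice@example.com": "alice-sa@my-project.iam.gserviceaccount.com",
    },
};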

IdentityConfigResponse
, IdentityConfigResponseArgs

UserServiceAccountMapping This property is required. Dictionary<string, string>
Map of user to service account.
UserServiceAccountMapping This property is required. map[string]string
Map of user to service account.
userServiceAccountMapping This property is required. Map<String,String>
Map of user to service account.
userServiceAccountMapping This property is required. {[key: string]: string}
Map of user to service account.
user_service_account_mapping This property is required. Mapping[str, str]
Map of user to service account.
userServiceAccountMapping This property is required. Map<String>
Map of user to service account.

InstanceFlexibilityPolicy
, InstanceFlexibilityPolicyArgs

InstanceSelectionList List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelection>
Optional. List of instance selection options that the group will use when creating new VMs.
InstanceSelectionList []InstanceSelection
Optional. List of instance selection options that the group will use when creating new VMs.
instanceSelectionList List<InstanceSelection>
Optional. List of instance selection options that the group will use when creating new VMs.
instanceSelectionList InstanceSelection[]
Optional. List of instance selection options that the group will use when creating new VMs.
instance_selection_list Sequence[InstanceSelection]
Optional. List of instance selection options that the group will use when creating new VMs.
instanceSelectionList List<Property Map>
Optional. List of instance selection options that the group will use when creating new VMs.
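A sketch with two hypothetical selections, assuming each InstanceSelection entry takes a machineTypes list plus a rank, where a lower rank means higher preference:

import * as google_native from "@pulumi/google-native";

const flexPolicy: google_native.types.input.dataproc.v1.InstanceFlexibilityPolicyArgs = {
    instanceSelectionList: [
        { machineTypes: ["n2-standard-8"], rank: 1 }, // preferred shape
        { machineTypes: ["n1-standard-8"], rank: 2 }, // fallback shape
    ],
};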

InstanceFlexibilityPolicyResponse
, InstanceFlexibilityPolicyResponseArgs

InstanceSelectionList This property is required. List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelectionResponse>
Optional. List of instance selection options that the group will use when creating new VMs.
InstanceSelectionResults This property is required. List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelectionResultResponse>
A list of instance selection results in the group.
InstanceSelectionList This property is required. []InstanceSelectionResponse
Optional. List of instance selection options that the group will use when creating new VMs.
InstanceSelectionResults This property is required. []InstanceSelectionResultResponse
A list of instance selection results in the group.
instanceSelectionList This property is required. List<InstanceSelectionResponse>
Optional. List of instance selection options that the group will use when creating new VMs.
instanceSelectionResults This property is required. List<InstanceSelectionResultResponse>
A list of instance selection results in the group.
instanceSelectionList This property is required. InstanceSelectionResponse[]
Optional. List of instance selection options that the group will use when creating new VMs.
instanceSelectionResults This property is required. InstanceSelectionResultResponse[]
A list of instance selection results in the group.
instance_selection_list This property is required. Sequence[InstanceSelectionResponse]
Optional. List of instance selection options that the group will use when creating new VMs.
instance_selection_results This property is required. Sequence[InstanceSelectionResultResponse]
A list of instance selection results in the group.
instanceSelectionList This property is required. List<Property Map>
Optional. List of instance selection options that the group will use when creating new VMs.
instanceSelectionResults This property is required. List<Property Map>
A list of instance selection results in the group.

InstanceGroupConfig
, InstanceGroupConfigArgs

Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AcceleratorConfig>
Optional. The Compute Engine accelerator configuration for these instances.
DiskConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DiskConfig
Optional. Disk option config settings.
ImageUri string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
InstanceFlexibilityPolicy Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicy
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
MachineTypeUri string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
MinCpuPlatform string
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
MinNumInstances int
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
NumInstances int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
Preemptibility Pulumi.GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
StartupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.StartupConfig
Optional. Configuration to handle the startup of instances during the cluster create and update process.
Accelerators []AcceleratorConfig
Optional. The Compute Engine accelerator configuration for these instances.
DiskConfig DiskConfig
Optional. Disk option config settings.
ImageUri string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
InstanceFlexibilityPolicy InstanceFlexibilityPolicy
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
MachineTypeUri string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
MinCpuPlatform string
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
MinNumInstances int
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
NumInstances int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
Preemptibility InstanceGroupConfigPreemptibility
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
StartupConfig StartupConfig
Optional. Configuration to handle the startup of instances during the cluster create and update process.
accelerators List<AcceleratorConfig>
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig DiskConfig
Optional. Disk option config settings.
imageUri String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceFlexibilityPolicy InstanceFlexibilityPolicy
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
machineTypeUri String
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
minCpuPlatform String
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
minNumInstances Integer
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
numInstances Integer
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility InstanceGroupConfigPreemptibility
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
startupConfig StartupConfig
Optional. Configuration to handle the startup of instances during the cluster create and update process.
accelerators AcceleratorConfig[]
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig DiskConfig
Optional. Disk option config settings.
imageUri string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceFlexibilityPolicy InstanceFlexibilityPolicy
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
machineTypeUri string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
minCpuPlatform string
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
minNumInstances number
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
numInstances number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility InstanceGroupConfigPreemptibility
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
startupConfig StartupConfig
Optional. Configuration to handle the startup of instances during the cluster create and update process.
accelerators Sequence[AcceleratorConfig]
Optional. The Compute Engine accelerator configuration for these instances.
disk_config DiskConfig
Optional. Disk option config settings.
image_uri str
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instance_flexibility_policy InstanceFlexibilityPolicy
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
machine_type_uri str
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
min_cpu_platform str
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
min_num_instances int
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
num_instances int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility InstanceGroupConfigPreemptibility
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
startup_config StartupConfig
Optional. Configuration to handle the startup of instances during the cluster create and update process.
accelerators List<Property Map>
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig Property Map
Optional. Disk option config settings.
imageUri String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceFlexibilityPolicy Property Map
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
machineTypeUri String
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
minCpuPlatform String
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
minNumInstances Number
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
numInstances Number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility "PREEMPTIBILITY_UNSPECIFIED" | "NON_PREEMPTIBLE" | "PREEMPTIBLE" | "SPOT"
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
startupConfig Property Map
Optional. Configuration to handle the startup of instances during the cluster create and update process.
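For example, a hypothetical primary worker group sketch (field values are placeholders; the short machine type name keeps it compatible with Auto Zone Placement):

import * as google_native from "@pulumi/google-native";

const workerConfig: google_native.types.input.dataproc.v1.InstanceGroupConfigArgs = {
    numInstances: 5,                 // requested group size
    minNumInstances: 3,              // creation succeeds if at least 3 workers come up
    machineTypeUri: "n1-standard-2", // short name form of the machine type
    diskConfig: { bootDiskSizeGb: 100 },
};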

InstanceGroupConfigPreemptibility
, InstanceGroupConfigPreemptibilityArgs

PreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
NonPreemptible
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
Preemptible
PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
Spot
SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.
InstanceGroupConfigPreemptibilityPreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
InstanceGroupConfigPreemptibilityNonPreemptible
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
InstanceGroupConfigPreemptibilityPreemptible
PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
InstanceGroupConfigPreemptibilitySpot
SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.
PreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
NonPreemptible
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
Preemptible
PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
Spot
SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.
PreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
NonPreemptible
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
Preemptible
PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
Spot
SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.
PREEMPTIBILITY_UNSPECIFIED
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
NON_PREEMPTIBLE
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
PREEMPTIBLE
PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
SPOT
SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.
"PREEMPTIBILITY_UNSPECIFIED"
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
"NON_PREEMPTIBLE"
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
"PREEMPTIBLE"
PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
"SPOT"
SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible) and provide additional features.
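
As a hedged sketch of how these enum values are applied (placeholder sizing; the string-literal form documented above is used directly), the following TypeScript fragment puts the secondary worker group on Spot VMs while the master and primary worker groups keep their NON_PREEMPTIBLE default:

// Only the secondary worker group may use PREEMPTIBLE or SPOT;
// master and primary worker groups stay NON_PREEMPTIBLE.
const spotCluster = new google_native.dataproc.v1.Cluster("spot-secondaries", {
    region: "us-central1",
    config: {
        masterConfig: { numInstances: 1 },
        workerConfig: { numInstances: 2 },
        secondaryWorkerConfig: {
            numInstances: 4,
            preemptibility: "SPOT",
        },
    },
});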

InstanceGroupConfigResponse
, InstanceGroupConfigResponseArgs

Accelerators This property is required. List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigResponse>
Optional. The Compute Engine accelerator configuration for these instances.
DiskConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.DiskConfigResponse
Optional. Disk option config settings.
ImageUri This property is required. string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
InstanceFlexibilityPolicy This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyResponse
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
InstanceNames This property is required. List<string>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
InstanceReferences This property is required. List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceReferenceResponse>
List of references to Compute Engine instances.
IsPreemptible This property is required. bool
Specifies that this instance group contains preemptible instances.
MachineTypeUri This property is required. string
Optional. The Compute Engine machine type used for cluster instances. A full URL, a partial URI, or a short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
ManagedGroupConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
MinCpuPlatform This property is required. string
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
MinNumInstances This property is required. int
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
NumInstances This property is required. int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
Preemptibility This property is required. string
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
StartupConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.StartupConfigResponse
Optional. Configuration to handle the startup of instances during the cluster create and update process.
Accelerators This property is required. []AcceleratorConfigResponse
Optional. The Compute Engine accelerator configuration for these instances.
DiskConfig This property is required. DiskConfigResponse
Optional. Disk option config settings.
ImageUri This property is required. string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
InstanceFlexibilityPolicy This property is required. InstanceFlexibilityPolicyResponse
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
InstanceNames This property is required. []string
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
InstanceReferences This property is required. []InstanceReferenceResponse
List of references to Compute Engine instances.
IsPreemptible This property is required. bool
Specifies that this instance group contains preemptible instances.
MachineTypeUri This property is required. string
Optional. The Compute Engine machine type used for cluster instances. A full URL, a partial URI, or a short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
ManagedGroupConfig This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
MinCpuPlatform This property is required. string
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
MinNumInstances This property is required. int
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
NumInstances This property is required. int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
Preemptibility This property is required. string
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
StartupConfig This property is required. StartupConfigResponse
Optional. Configuration to handle the startup of instances during the cluster create and update process.
accelerators This property is required. List<AcceleratorConfigResponse>
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig This property is required. DiskConfigResponse
Optional. Disk option config settings.
imageUri This property is required. String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceFlexibilityPolicy This property is required. InstanceFlexibilityPolicyResponse
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
instanceNames This property is required. List<String>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instanceReferences This property is required. List<InstanceReferenceResponse>
List of references to Compute Engine instances.
isPreemptible This property is required. Boolean
Specifies that this instance group contains preemptible instances.
machineTypeUri This property is required. String
Optional. The Compute Engine machine type used for cluster instances. A full URL, a partial URI, or a short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managedGroupConfig This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
minCpuPlatform This property is required. String
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
minNumInstances This property is required. Integer
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
numInstances This property is required. Integer
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. String
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
startupConfig This property is required. StartupConfigResponse
Optional. Configuration to handle the startup of instances during the cluster create and update process.
accelerators This property is required. AcceleratorConfigResponse[]
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig This property is required. DiskConfigResponse
Optional. Disk option config settings.
imageUri This property is required. string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceFlexibilityPolicy This property is required. InstanceFlexibilityPolicyResponse
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
instanceNames This property is required. string[]
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instanceReferences This property is required. InstanceReferenceResponse[]
List of references to Compute Engine instances.
isPreemptible This property is required. boolean
Specifies that this instance group contains preemptible instances.
machineTypeUri This property is required. string
Optional. The Compute Engine machine type used for cluster instances. A full URL, a partial URI, or a short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managedGroupConfig This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
minCpuPlatform This property is required. string
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
minNumInstances This property is required. number
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
numInstances This property is required. number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. string
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
startupConfig This property is required. StartupConfigResponse
Optional. Configuration to handle the startup of instances during the cluster create and update process.
accelerators This property is required. Sequence[AcceleratorConfigResponse]
Optional. The Compute Engine accelerator configuration for these instances.
disk_config This property is required. DiskConfigResponse
Optional. Disk option config settings.
image_uri This property is required. str
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instance_flexibility_policy This property is required. InstanceFlexibilityPolicyResponse
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
instance_names This property is required. Sequence[str]
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instance_references This property is required. Sequence[InstanceReferenceResponse]
List of references to Compute Engine instances.
is_preemptible This property is required. bool
Specifies that this instance group contains preemptible instances.
machine_type_uri This property is required. str
Optional. The Compute Engine machine type used for cluster instances. A full URL, a partial URI, or a short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managed_group_config This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
min_cpu_platform This property is required. str
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
min_num_instances This property is required. int
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
num_instances This property is required. int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. str
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
startup_config This property is required. StartupConfigResponse
Optional. Configuration to handle the startup of instances during the cluster create and update process.
accelerators This property is required. List<Property Map>
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig This property is required. Property Map
Optional. Disk option config settings.
imageUri This property is required. String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceFlexibilityPolicy This property is required. Property Map
Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
instanceNames This property is required. List<String>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instanceReferences This property is required. List<Property Map>
List of references to Compute Engine instances.
isPreemptible This property is required. Boolean
Specifies that this instance group contains preemptible instances.
machineTypeUri This property is required. String
Optional. The Compute Engine machine type used for cluster instances. A full URL, a partial URI, or a short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: if you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managedGroupConfig This property is required. Property Map
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
minCpuPlatform This property is required. String
Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
minNumInstances This property is required. Number
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
numInstances This property is required. Number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. String
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
startupConfig This property is required. Property Map
Optional. Configuration to handle the startup of instances during the cluster create and update process.
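
These response fields are read-only outputs populated by the service after the cluster is created. A small sketch, reusing the cluster resource from the earlier example and assuming the property path shown in this listing, of exporting the derived worker instance names:

// instanceNames is derived by the service from cluster_name,
// num_instances, and the instance group; it is only known after
// deployment, so it is read through apply().
export const workerInstanceNames = cluster.config.apply(
    c => c?.workerConfig?.instanceNames,
);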

InstanceReferenceResponse
, InstanceReferenceResponseArgs

InstanceId This property is required. string
The unique identifier of the Compute Engine instance.
InstanceName This property is required. string
The user-friendly name of the Compute Engine instance.
PublicEciesKey This property is required. string
The public ECIES key used for sharing data with this instance.
PublicKey This property is required. string
The public RSA key used for sharing data with this instance.
InstanceId This property is required. string
The unique identifier of the Compute Engine instance.
InstanceName This property is required. string
The user-friendly name of the Compute Engine instance.
PublicEciesKey This property is required. string
The public ECIES key used for sharing data with this instance.
PublicKey This property is required. string
The public RSA key used for sharing data with this instance.
instanceId This property is required. String
The unique identifier of the Compute Engine instance.
instanceName This property is required. String
The user-friendly name of the Compute Engine instance.
publicEciesKey This property is required. String
The public ECIES key used for sharing data with this instance.
publicKey This property is required. String
The public RSA key used for sharing data with this instance.
instanceId This property is required. string
The unique identifier of the Compute Engine instance.
instanceName This property is required. string
The user-friendly name of the Compute Engine instance.
publicEciesKey This property is required. string
The public ECIES key used for sharing data with this instance.
publicKey This property is required. string
The public RSA key used for sharing data with this instance.
instance_id This property is required. str
The unique identifier of the Compute Engine instance.
instance_name This property is required. str
The user-friendly name of the Compute Engine instance.
public_ecies_key This property is required. str
The public ECIES key used for sharing data with this instance.
public_key This property is required. str
The public RSA key used for sharing data with this instance.
instanceId This property is required. String
The unique identifier of the Compute Engine instance.
instanceName This property is required. String
The user-friendly name of the Compute Engine instance.
publicEciesKey This property is required. String
The public ECIES key used for sharing data with this instance.
publicKey This property is required. String
The public RSA key used for sharing data with this instance.

InstanceSelection
, InstanceSelectionArgs

MachineTypes List<string>
Optional. Full machine-type names, e.g. "n1-standard-16".
Rank int
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
MachineTypes []string
Optional. Full machine-type names, e.g. "n1-standard-16".
Rank int
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
machineTypes List<String>
Optional. Full machine-type names, e.g. "n1-standard-16".
rank Integer
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
machineTypes string[]
Optional. Full machine-type names, e.g. "n1-standard-16".
rank number
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
machine_types Sequence[str]
Optional. Full machine-type names, e.g. "n1-standard-16".
rank int
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
machineTypes List<String>
Optional. Full machine-type names, e.g. "n1-standard-16".
rank Number
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
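
A hedged sketch of ranked instance selections inside a secondary worker group's instance flexibility policy; the machine types and ranks are illustrative only, and the instanceSelectionList field name is assumed from the Dataproc v1 API rather than stated in this section:

const flexCluster = new google_native.dataproc.v1.Cluster("flex-mix", {
    region: "us-central1",
    config: {
        secondaryWorkerConfig: {
            numInstances: 10,
            instanceFlexibilityPolicy: {
                instanceSelectionList: [
                    // Rank 0 is tried first; rank 1 is the fallback
                    // when rank-0 machine types are unavailable.
                    { machineTypes: ["n2-standard-8"], rank: 0 },
                    { machineTypes: ["n2d-standard-8", "n1-standard-8"], rank: 1 },
                ],
            },
        },
    },
});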

InstanceSelectionResponse
, InstanceSelectionResponseArgs

MachineTypes This property is required. List<string>
Optional. Full machine-type names, e.g. "n1-standard-16".
Rank This property is required. int
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
MachineTypes This property is required. []string
Optional. Full machine-type names, e.g. "n1-standard-16".
Rank This property is required. int
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
machineTypes This property is required. List<String>
Optional. Full machine-type names, e.g. "n1-standard-16".
rank This property is required. Integer
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
machineTypes This property is required. string[]
Optional. Full machine-type names, e.g. "n1-standard-16".
rank This property is required. number
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
machine_types This property is required. Sequence[str]
Optional. Full machine-type names, e.g. "n1-standard-16".
rank This property is required. int
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
machineTypes This property is required. List<String>
Optional. Full machine-type names, e.g. "n1-standard-16".
rank This property is required. Number
Optional. Preference of this instance selection. A lower number means higher preference. Dataproc will first try to create a VM based on the machine type with the highest-priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.

InstanceSelectionResultResponse
, InstanceSelectionResultResponseArgs

MachineType This property is required. string
Full machine-type name, e.g. "n1-standard-16".
VmCount This property is required. int
Number of VMs provisioned with the machine_type.
MachineType This property is required. string
Full machine-type name, e.g. "n1-standard-16".
VmCount This property is required. int
Number of VMs provisioned with the machine_type.
machineType This property is required. String
Full machine-type name, e.g. "n1-standard-16".
vmCount This property is required. Integer
Number of VMs provisioned with the machine_type.
machineType This property is required. string
Full machine-type name, e.g. "n1-standard-16".
vmCount This property is required. number
Number of VMs provisioned with the machine_type.
machine_type This property is required. str
Full machine-type name, e.g. "n1-standard-16".
vm_count This property is required. int
Number of VMs provisioned with the machine_type.
machineType This property is required. String
Full machine-type name, e.g. "n1-standard-16".
vmCount This property is required. Number
Number of VMs provisioned with the machine_type.
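
Because these results are populated by the service, they can only be observed after deployment. A short sketch under the same assumptions as the flexCluster example above, with the nested property path assumed from the listings in this section:

// Export which machine types were actually provisioned, and how many
// VMs of each, once the instance flexibility policy has been applied.
export const provisionedMix = flexCluster.config.apply(
    c => c?.secondaryWorkerConfig?.instanceFlexibilityPolicy?.instanceSelectionResults,
);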

KerberosConfig
, KerberosConfigArgs

CrossRealmTrustAdminServer string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustKdc string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustRealm string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
CrossRealmTrustSharedPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
EnableKerberos bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
KdcDbKeyUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
KeyPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
KeystorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
KeystoreUri string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
KmsKeyUri string
Optional. The URI of the KMS key used to encrypt various sensitive files.
Realm string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
RootPrincipalPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
TgtLifetimeHours int
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 hours will be used.
TruststorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
TruststoreUri string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
CrossRealmTrustAdminServer string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustKdc string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustRealm string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
CrossRealmTrustSharedPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
EnableKerberos bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
KdcDbKeyUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
KeyPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
KeystorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
KeystoreUri string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
KmsKeyUri string
Optional. The URI of the KMS key used to encrypt various sensitive files.
Realm string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
RootPrincipalPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
TgtLifetimeHours int
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 hours will be used.
TruststorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
TruststoreUri string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri String
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours Integer
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 hours will be used.
truststorePasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri string
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours number
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 hours will be used.
truststorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
cross_realm_trust_admin_server str
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
cross_realm_trust_kdc str
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
cross_realm_trust_realm str
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
cross_realm_trust_shared_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enable_kerberos bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdc_db_key_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
key_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystore_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystore_uri str
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kms_key_uri str
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm str
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
root_principal_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgt_lifetime_hours int
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 hours will be used.
truststore_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststore_uri str
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri String
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours Number
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 hours will be used.
truststorePasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
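
A hedged sketch of wiring KerberosConfig into a cluster through its security config; the Cloud Storage URIs, project, and KMS key name are placeholders, and each secret file is expected to be KMS-encrypted before upload:

const kerberizedCluster = new google_native.dataproc.v1.Cluster("kerberized", {
    region: "us-central1",
    config: {
        securityConfig: {
            kerberosConfig: {
                enableKerberos: true,
                // KMS-encrypted root principal password staged in Cloud Storage.
                rootPrincipalPasswordUri: "gs://my-secrets/root-password.encrypted",
                // KMS key used to decrypt the sensitive files above.
                kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
            },
        },
    },
});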

KerberosConfigResponse
, KerberosConfigResponseArgs

CrossRealmTrustAdminServer This property is required. string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustKdc This property is required. string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustRealm This property is required. string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
CrossRealmTrustSharedPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
EnableKerberos This property is required. bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
KdcDbKeyUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
KeyPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
KeystorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
KeystoreUri This property is required. string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
KmsKeyUri This property is required. string
Optional. The URI of the KMS key used to encrypt various sensitive files.
Realm This property is required. string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
RootPrincipalPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
TgtLifetimeHours This property is required. int
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 hours will be used.
TruststorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
TruststoreUri This property is required. string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
CrossRealmTrustAdminServer This property is required. string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustKdc This property is required. string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustRealm This property is required. string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
CrossRealmTrustSharedPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
EnableKerberos This property is required. bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
KdcDbKeyUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
KeyPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
KeystorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
KeystoreUri This property is required. string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
KmsKeyUri This property is required. string
Optional. The URI of the KMS key used to encrypt various sensitive files.
Realm This property is required. string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
RootPrincipalPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
TgtLifetimeHours This property is required. int
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 hours will be used.
TruststorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
TruststoreUri This property is required. string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer This property is required. String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc This property is required. String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm This property is required. String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos This property is required. Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri This property is required. String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri This property is required. String
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours This property is required. Integer
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if set to 0, the default value of 10 is used.
truststorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri This property is required. String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer This property is required. string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc This property is required. string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm This property is required. string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos This property is required. boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri This property is required. string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri This property is required. string
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours This property is required. number
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if set to 0, the default value of 10 is used.
truststorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri This property is required. string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
cross_realm_trust_admin_server This property is required. str
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
cross_realm_trust_kdc This property is required. str
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
cross_realm_trust_realm This property is required. str
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
cross_realm_trust_shared_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enable_kerberos This property is required. bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdc_db_key_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
key_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystore_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystore_uri This property is required. str
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kms_key_uri This property is required. str
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. str
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
root_principal_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgt_lifetime_hours This property is required. int
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if set to 0, the default value of 10 is used.
truststore_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststore_uri This property is required. str
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer This property is required. String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc This property is required. String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm This property is required. String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos This property is required. Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri This property is required. String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri This property is required. String
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours This property is required. Number
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if set to 0, the default value of 10 is used.
truststorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri This property is required. String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

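The example below is a minimal sketch of wiring these fields into a cluster. It assumes kerberosConfig is nested under config.securityConfig, as in the Dataproc v1 API; the project, KMS key, and Cloud Storage URIs are placeholders.

import * as google_native from "@pulumi/google-native";

// Sketch only: placeholder project, key ring, and bucket names.
const kerberized = new google_native.dataproc.v1.Cluster("kerberized-cluster", {
    region: "us-central1",
    clusterName: "kerberized-cluster",
    config: {
        securityConfig: {
            kerberosConfig: {
                enableKerberos: true,
                // KMS key that encrypted the password files referenced below.
                kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
                rootPrincipalPasswordUri: "gs://my-secrets/root-principal-password.encrypted",
                kdcDbKeyUri: "gs://my-secrets/kdc-db-key.encrypted",
                tgtLifetimeHours: 10,
            },
        },
    },
});
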
KubernetesClusterConfig, KubernetesClusterConfigArgs

GkeClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfig
The configuration for running the Dataproc cluster on GKE.
KubernetesNamespace string
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
KubernetesSoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesSoftwareConfig
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
GkeClusterConfig This property is required. GkeClusterConfig
The configuration for running the Dataproc cluster on GKE.
KubernetesNamespace string
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
KubernetesSoftwareConfig KubernetesSoftwareConfig
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
gkeClusterConfig This property is required. GkeClusterConfig
The configuration for running the Dataproc cluster on GKE.
kubernetesNamespace String
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
kubernetesSoftwareConfig KubernetesSoftwareConfig
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
gkeClusterConfig This property is required. GkeClusterConfig
The configuration for running the Dataproc cluster on GKE.
kubernetesNamespace string
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
kubernetesSoftwareConfig KubernetesSoftwareConfig
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
gke_cluster_config This property is required. GkeClusterConfig
The configuration for running the Dataproc cluster on GKE.
kubernetes_namespace str
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
kubernetes_software_config KubernetesSoftwareConfig
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
gkeClusterConfig This property is required. Property Map
The configuration for running the Dataproc cluster on GKE.
kubernetesNamespace String
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
kubernetesSoftwareConfig Property Map
Optional. The software configuration for this Dataproc cluster running on Kubernetes.

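A sketch of a Dataproc virtual cluster on GKE follows. It assumes kubernetesClusterConfig is supplied through the resource's virtualClusterConfig input and that GkeClusterConfig takes gkeClusterTarget and nodePoolTarget fields, per the Dataproc v1 API; those fields are not documented in this section, and all resource names are placeholders.

import * as google_native from "@pulumi/google-native";

// Sketch only: gkeClusterTarget and nodePoolTarget are assumed field names.
const virtualCluster = new google_native.dataproc.v1.Cluster("gke-virtual-cluster", {
    region: "us-central1",
    clusterName: "dataproc-on-gke",
    virtualClusterConfig: {
        kubernetesClusterConfig: {
            // Created if missing; defaults to the cluster name when omitted.
            kubernetesNamespace: "dataproc",
            gkeClusterConfig: {
                gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
                nodePoolTarget: [{
                    nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/default-pool",
                    roles: ["DEFAULT"],
                }],
            },
            kubernetesSoftwareConfig: {
                componentVersion: { SPARK: "3.1-dataproc-14" }, // placeholder version
            },
        },
    },
});
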
KubernetesClusterConfigResponse, KubernetesClusterConfigResponseArgs

GkeClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse
The configuration for running the Dataproc cluster on GKE.
KubernetesNamespace This property is required. string
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
KubernetesSoftwareConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesSoftwareConfigResponse
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
GkeClusterConfig This property is required. GkeClusterConfigResponse
The configuration for running the Dataproc cluster on GKE.
KubernetesNamespace This property is required. string
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
KubernetesSoftwareConfig This property is required. KubernetesSoftwareConfigResponse
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
gkeClusterConfig This property is required. GkeClusterConfigResponse
The configuration for running the Dataproc cluster on GKE.
kubernetesNamespace This property is required. String
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
kubernetesSoftwareConfig This property is required. KubernetesSoftwareConfigResponse
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
gkeClusterConfig This property is required. GkeClusterConfigResponse
The configuration for running the Dataproc cluster on GKE.
kubernetesNamespace This property is required. string
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
kubernetesSoftwareConfig This property is required. KubernetesSoftwareConfigResponse
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
gke_cluster_config This property is required. GkeClusterConfigResponse
The configuration for running the Dataproc cluster on GKE.
kubernetes_namespace This property is required. str
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
kubernetes_software_config This property is required. KubernetesSoftwareConfigResponse
Optional. The software configuration for this Dataproc cluster running on Kubernetes.
gkeClusterConfig This property is required. Property Map
The configuration for running the Dataproc cluster on GKE.
kubernetesNamespace This property is required. String
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
kubernetesSoftwareConfig This property is required. Property Map
Optional. The software configuration for this Dataproc cluster running on Kubernetes.

KubernetesSoftwareConfig, KubernetesSoftwareConfigArgs

ComponentVersion Dictionary<string, string>
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
Properties Dictionary<string, string>
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
ComponentVersion map[string]string
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
Properties map[string]string
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
componentVersion Map<String,String>
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
properties Map<String,String>
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
componentVersion {[key: string]: string}
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
properties {[key: string]: string}
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
component_version Mapping[str, str]
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
properties Mapping[str, str]
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
componentVersion Map<String>
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
properties Map<String>
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

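As a shape reference, a KubernetesSoftwareConfig value might look like the following sketch; the component version and container image are placeholders.

// Sketch only: version string and image are placeholders.
const kubernetesSoftwareConfig = {
    componentVersion: {
        // Key must be a KubernetesComponent enum name; at least one entry is required.
        SPARK: "3.1-dataproc-14",
    },
    properties: {
        // prefix:property form; the "spark:" prefix maps to spark-defaults.conf.
        "spark:spark.kubernetes.container.image": "gcr.io/my-project/my-spark-image:latest",
    },
};
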
KubernetesSoftwareConfigResponse, KubernetesSoftwareConfigResponseArgs

ComponentVersion This property is required. Dictionary<string, string>
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
Properties This property is required. Dictionary<string, string>
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
ComponentVersion This property is required. map[string]string
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
Properties This property is required. map[string]string
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
componentVersion This property is required. Map<String,String>
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
properties This property is required. Map<String,String>
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
componentVersion This property is required. {[key: string]: string}
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
properties This property is required. {[key: string]: string}
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
component_version This property is required. Mapping[str, str]
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
properties This property is required. Mapping[str, str]
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
componentVersion This property is required. Map<String>
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
properties This property is required. Map<String>
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

LifecycleConfig, LifecycleConfigArgs

AutoDeleteTime string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTtl string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleDeleteTtl string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTime string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTtl string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleDeleteTtl string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
auto_delete_time str
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
auto_delete_ttl str
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idle_delete_ttl str
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).

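For instance, a cluster that deletes itself after 30 idle minutes or 24 hours total could be sketched as follows, assuming lifecycleConfig sits under the cluster's config input; the durations are arbitrary and use the protobuf JSON Duration form these fields expect.

import * as google_native from "@pulumi/google-native";

const ephemeral = new google_native.dataproc.v1.Cluster("ephemeral-cluster", {
    region: "us-central1",
    clusterName: "ephemeral-cluster",
    config: {
        lifecycleConfig: {
            idleDeleteTtl: "1800s",  // delete after 30 idle minutes (minimum 5 minutes)
            autoDeleteTtl: "86400s", // hard stop after 24 hours (maximum 14 days)
        },
    },
});
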
LifecycleConfigResponse, LifecycleConfigResponseArgs

AutoDeleteTime This property is required. string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTtl This property is required. string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleDeleteTtl This property is required. string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleStartTime This property is required. string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTime This property is required. string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTtl This property is required. string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleDeleteTtl This property is required. string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleStartTime This property is required. string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime This property is required. String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl This property is required. String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl This property is required. String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleStartTime This property is required. String
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime This property is required. string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl This property is required. string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl This property is required. string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleStartTime This property is required. string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
auto_delete_time This property is required. str
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
auto_delete_ttl This property is required. str
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idle_delete_ttl This property is required. str
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idle_start_time This property is required. str
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime This property is required. String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl This property is required. String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl This property is required. String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleStartTime This property is required. String
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

ManagedGroupConfigResponse, ManagedGroupConfigResponseArgs

InstanceGroupManagerName This property is required. string
The name of the Instance Group Manager for this group.
InstanceGroupManagerUri This property is required. string
The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
InstanceTemplateName This property is required. string
The name of the Instance Template used for the Managed Instance Group.
InstanceGroupManagerName This property is required. string
The name of the Instance Group Manager for this group.
InstanceGroupManagerUri This property is required. string
The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
InstanceTemplateName This property is required. string
The name of the Instance Template used for the Managed Instance Group.
instanceGroupManagerName This property is required. String
The name of the Instance Group Manager for this group.
instanceGroupManagerUri This property is required. String
The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
instanceTemplateName This property is required. String
The name of the Instance Template used for the Managed Instance Group.
instanceGroupManagerName This property is required. string
The name of the Instance Group Manager for this group.
instanceGroupManagerUri This property is required. string
The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
instanceTemplateName This property is required. string
The name of the Instance Template used for the Managed Instance Group.
instance_group_manager_name This property is required. str
The name of the Instance Group Manager for this group.
instance_group_manager_uri This property is required. str
The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
instance_template_name This property is required. str
The name of the Instance Template used for the Managed Instance Group.
instanceGroupManagerName This property is required. String
The name of the Instance Group Manager for this group.
instanceGroupManagerUri This property is required. String
The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
instanceTemplateName This property is required. String
The name of the Instance Template used for the Managed Instance Group.

MetastoreConfig, MetastoreConfigArgs

DataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
DataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataproc_metastore_service This property is required. str
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

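A minimal sketch of attaching an existing Dataproc Metastore service, assuming metastoreConfig is set on the cluster's config input; the service name below is a placeholder in the documented format.

import * as google_native from "@pulumi/google-native";

const withMetastore = new google_native.dataproc.v1.Cluster("metastore-cluster", {
    region: "us-central1",
    clusterName: "metastore-cluster",
    config: {
        metastoreConfig: {
            // projects/[project_id]/locations/[dataproc_region]/services/[service-name]
            dataprocMetastoreService: "projects/my-project/locations/us-central1/services/my-metastore",
        },
    },
});
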
MetastoreConfigResponse, MetastoreConfigResponseArgs

DataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
DataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataproc_metastore_service This property is required. str
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

Metric, MetricArgs

MetricSource This property is required. Pulumi.GoogleNative.Dataproc.V1.MetricMetricSource
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
MetricOverrides List<string>
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelCase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
MetricSource This property is required. MetricMetricSource
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
MetricOverrides []string
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelCase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
metricSource This property is required. MetricMetricSource
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
metricOverrides List<String>
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelCase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
metricSource This property is required. MetricMetricSource
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
metricOverrides string[]
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelCase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
metric_source This property is required. MetricMetricSource
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
metric_overrides Sequence[str]
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelCase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
metricSource This property is required. "METRIC_SOURCE_UNSPECIFIED" | "MONITORING_AGENT_DEFAULTS" | "HDFS" | "SPARK" | "YARN" | "SPARK_HISTORY_SERVER" | "HIVESERVER2" | "HIVEMETASTORE" | "FLINK"
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
metricOverrides List<String>
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelCase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.

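The sketch below collects only two Spark metric overrides while keeping the default YARN metric set. It assumes Metric values are passed through a dataprocMetricConfig.metrics list on the cluster config, per the Dataproc v1 API; the metric names are illustrative.

import * as google_native from "@pulumi/google-native";

const monitored = new google_native.dataproc.v1.Cluster("monitored-cluster", {
    region: "us-central1",
    clusterName: "monitored-cluster",
    config: {
        dataprocMetricConfig: {
            metrics: [
                {
                    metricSource: "SPARK",
                    // Once overrides are set, only these Spark metrics are collected.
                    metricOverrides: [
                        "spark:driver:DAGScheduler:job.allJobs",
                        "spark:driver:DAGScheduler:job.activeJobs",
                    ],
                },
                // No overrides: the standard YARN metric set is collected.
                { metricSource: "YARN" },
            ],
        },
    },
});
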
MetricMetricSource, MetricMetricSourceArgs

MetricSourceUnspecified
METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
MonitoringAgentDefaults
MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
Hdfs
HDFS: HDFS metric source.
Spark
SPARK: Spark metric source.
Yarn
YARN: YARN metric source.
SparkHistoryServer
SPARK_HISTORY_SERVER: Spark History Server metric source.
Hiveserver2
HIVESERVER2: Hiveserver2 metric source.
Hivemetastore
HIVEMETASTORE: Hivemetastore metric source.
Flink
FLINK: Flink metric source.
MetricMetricSourceMetricSourceUnspecified
METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
MetricMetricSourceMonitoringAgentDefaults
MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
MetricMetricSourceHdfs
HDFS: HDFS metric source.
MetricMetricSourceSpark
SPARK: Spark metric source.
MetricMetricSourceYarn
YARN: YARN metric source.
MetricMetricSourceSparkHistoryServer
SPARK_HISTORY_SERVER: Spark History Server metric source.
MetricMetricSourceHiveserver2
HIVESERVER2: Hiveserver2 metric source.
MetricMetricSourceHivemetastore
HIVEMETASTORE: Hivemetastore metric source.
MetricMetricSourceFlink
FLINK: Flink metric source.
MetricSourceUnspecified
METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
MonitoringAgentDefaults
MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
Hdfs
HDFS: HDFS metric source.
Spark
SPARK: Spark metric source.
Yarn
YARN: YARN metric source.
SparkHistoryServer
SPARK_HISTORY_SERVER: Spark History Server metric source.
Hiveserver2
HIVESERVER2: Hiveserver2 metric source.
Hivemetastore
HIVEMETASTORE: Hivemetastore metric source.
Flink
FLINK: Flink metric source.
MetricSourceUnspecified
METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
MonitoringAgentDefaults
MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
Hdfs
HDFS: HDFS metric source.
Spark
SPARK: Spark metric source.
Yarn
YARN: YARN metric source.
SparkHistoryServer
SPARK_HISTORY_SERVER: Spark History Server metric source.
Hiveserver2
HIVESERVER2: Hiveserver2 metric source.
Hivemetastore
HIVEMETASTORE: Hivemetastore metric source.
Flink
FLINK: Flink metric source.
METRIC_SOURCE_UNSPECIFIED
METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
MONITORING_AGENT_DEFAULTS
MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
HDFS
HDFS: HDFS metric source.
SPARK
SPARK: Spark metric source.
YARN
YARN: YARN metric source.
SPARK_HISTORY_SERVER
SPARK_HISTORY_SERVER: Spark History Server metric source.
HIVESERVER2
HIVESERVER2: Hiveserver2 metric source.
HIVEMETASTORE
HIVEMETASTORE: Hivemetastore metric source.
FLINK
FLINK: Flink metric source.
"METRIC_SOURCE_UNSPECIFIED"
METRIC_SOURCE_UNSPECIFIEDRequired unspecified metric source.
"MONITORING_AGENT_DEFAULTS"
MONITORING_AGENT_DEFAULTSMonitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
"HDFS"
HDFSHDFS metric source.
"SPARK"
SPARKSpark metric source.
"YARN"
YARNYARN metric source.
"SPARK_HISTORY_SERVER"
SPARK_HISTORY_SERVERSpark History Server metric source.
"HIVESERVER2"
HIVESERVER2Hiveserver2 metric source.
"HIVEMETASTORE"
HIVEMETASTOREhivemetastore metric source
"FLINK"
FLINKflink metric source

MetricResponse, MetricResponseArgs

MetricOverrides This property is required. List<string>
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelCase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
MetricSource This property is required. string
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
MetricOverrides This property is required. []string
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelCase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
MetricSource This property is required. string
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
metricOverrides This property is required. List<String>
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source. For the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified. Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
metricSource This property is required. String
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
metricOverrides This property is required. string[]
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source. For the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified. Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
metricSource This property is required. string
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
metric_overrides This property is required. Sequence[str]
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source. For the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified. Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
metric_source This property is required. str
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
metricOverrides This property is required. List<String>
Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source. For the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified. Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
metricSource This property is required. String
A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
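The override format above is easier to read in a concrete configuration. The following TypeScript sketch is illustrative only (cluster name, region, and metric choices are placeholders; the dataprocMetricConfig placement follows the v1 API): only the listed Spark metric is collected for the SPARK source, while the enabled YARN source keeps its standard metric set.

import * as google_native from "@pulumi/google-native";

// Collect a single overridden Spark metric; YARN keeps its standard set.
const cluster = new google_native.dataproc.v1.Cluster("metrics-cluster", {
    clusterName: "metrics-cluster",   // placeholder
    region: "us-central1",            // placeholder
    config: {
        dataprocMetricConfig: {
            metrics: [
                {
                    metricSource: "SPARK",
                    // Format: METRIC_SOURCE:INSTANCE:GROUP:METRIC (see above).
                    metricOverrides: ["spark:driver:DAGScheduler:job.allJobs"],
                },
                { metricSource: "YARN" }, // no overrides: standard YARN metrics
            ],
        },
    },
});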

NamespacedGkeDeploymentTarget
, NamespacedGkeDeploymentTargetArgs

ClusterNamespace string
Optional. A namespace within the GKE cluster to deploy into.
TargetGkeCluster string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
ClusterNamespace string
Optional. A namespace within the GKE cluster to deploy into.
TargetGkeCluster string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace String
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster String
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace string
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
cluster_namespace str
Optional. A namespace within the GKE cluster to deploy into.
target_gke_cluster str
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace String
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster String
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
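As a quick illustration, a deployment target in the format above might be declared as follows in TypeScript; the project, location, cluster, and namespace values are placeholders, and the types.input path assumes the provider's generated input type names.

import * as google_native from "@pulumi/google-native";

// Deploy into the "dataproc" namespace of an existing GKE cluster (placeholders).
const target: google_native.types.input.dataproc.v1.NamespacedGkeDeploymentTargetArgs = {
    targetGkeCluster: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
    clusterNamespace: "dataproc",
};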

NamespacedGkeDeploymentTargetResponse
, NamespacedGkeDeploymentTargetResponseArgs

ClusterNamespace This property is required. string
Optional. A namespace within the GKE cluster to deploy into.
TargetGkeCluster This property is required. string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
ClusterNamespace This property is required. string
Optional. A namespace within the GKE cluster to deploy into.
TargetGkeCluster This property is required. string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace This property is required. String
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster This property is required. String
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace This property is required. string
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster This property is required. string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
cluster_namespace This property is required. str
Optional. A namespace within the GKE cluster to deploy into.
target_gke_cluster This property is required. str
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace This property is required. String
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster This property is required. String
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

NodeGroup
, NodeGroupArgs

Roles This property is required. List<Pulumi.GoogleNative.Dataproc.V1.NodeGroupRolesItem>
Node group roles.
Labels Dictionary<string, string>
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
Name string
The Node group resource name (https://aip.dev/122).
NodeGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
Optional. The node group instance group configuration.
Roles This property is required. []NodeGroupRolesItem
Node group roles.
Labels map[string]string
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
Name string
The Node group resource name (https://aip.dev/122).
NodeGroupConfig InstanceGroupConfig
Optional. The node group instance group configuration.
roles This property is required. List<NodeGroupRolesItem>
Node group roles.
labels Map<String,String>
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
name String
The Node group resource name (https://aip.dev/122).
nodeGroupConfig InstanceGroupConfig
Optional. The node group instance group configuration.
roles This property is required. NodeGroupRolesItem[]
Node group roles.
labels {[key: string]: string}
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
name string
The Node group resource name (https://aip.dev/122).
nodeGroupConfig InstanceGroupConfig
Optional. The node group instance group configuration.
roles This property is required. Sequence[NodeGroupRolesItem]
Node group roles.
labels Mapping[str, str]
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
name str
The Node group resource name (https://aip.dev/122).
node_group_config InstanceGroupConfig
Optional. The node group instance group configuration.
roles This property is required. List<"ROLE_UNSPECIFIED" | "DRIVER">
Node group roles.
labels Map<String>
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
name String
The Node group resource name (https://aip.dev/122).
nodeGroupConfig Property Map
Optional. The node group instance group configuration.
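A node group input might look like the TypeScript sketch below; the labels, machine type, and instance count are placeholders, and the types.input path assumes the provider's generated input type names. A group like this is typically attached to a cluster through the config's auxiliary node groups.

import * as google_native from "@pulumi/google-native";

// A driver node group: job drivers run on this pool.
const driverGroup: google_native.types.input.dataproc.v1.NodeGroupArgs = {
    roles: ["DRIVER"],
    labels: { team: "data-eng" },      // placeholder; 1-63 chars, RFC 1035
    nodeGroupConfig: {
        numInstances: 2,               // placeholder sizing
        machineTypeUri: "n1-standard-4",
    },
};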

NodeGroupAffinity
, NodeGroupAffinityArgs

NodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
NodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
nodeGroupUri This property is required. String
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
nodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
node_group_uri This property is required. str
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
nodeGroupUri This property is required. String
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
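For example, the partial-URI form could be wired into a cluster's Compute Engine config as in this TypeScript sketch (project, zone, and node group names are placeholders):

import * as google_native from "@pulumi/google-native";

// Create the cluster's VMs on an existing sole-tenant node group.
const cluster = new google_native.dataproc.v1.Cluster("sole-tenant-cluster", {
    clusterName: "sole-tenant-cluster",   // placeholder
    region: "us-central1",                // placeholder
    config: {
        gceClusterConfig: {
            zoneUri: "us-central1-a",
            nodeGroupAffinity: {
                nodeGroupUri: "projects/my-project/zones/us-central1-a/nodeGroups/node-group-1",
            },
        },
    },
});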

NodeGroupAffinityResponse
, NodeGroupAffinityResponseArgs

NodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
NodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
nodeGroupUri This property is required. String
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
nodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
node_group_uri This property is required. str
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
nodeGroupUri This property is required. String
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, a partial URI, or a node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

NodeGroupResponse
, NodeGroupResponseArgs

Labels This property is required. Dictionary<string, string>
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
Name This property is required. string
The Node group resource name (https://aip.dev/122).
NodeGroupConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
Optional. The node group instance group configuration.
Roles This property is required. List<string>
Node group roles.
Labels This property is required. map[string]string
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
Name This property is required. string
The Node group resource name (https://aip.dev/122).
NodeGroupConfig This property is required. InstanceGroupConfigResponse
Optional. The node group instance group configuration.
Roles This property is required. []string
Node group roles.
labels This property is required. Map<String,String>
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
name This property is required. String
The Node group resource name (https://aip.dev/122).
nodeGroupConfig This property is required. InstanceGroupConfigResponse
Optional. The node group instance group configuration.
roles This property is required. List<String>
Node group roles.
labels This property is required. {[key: string]: string}
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
name This property is required. string
The Node group resource name (https://aip.dev/122).
nodeGroupConfig This property is required. InstanceGroupConfigResponse
Optional. The node group instance group configuration.
roles This property is required. string[]
Node group roles.
labels This property is required. Mapping[str, str]
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
name This property is required. str
The Node group resource name (https://aip.dev/122).
node_group_config This property is required. InstanceGroupConfigResponse
Optional. The node group instance group configuration.
roles This property is required. Sequence[str]
Node group roles.
labels This property is required. Map<String>
Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty; if specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
name This property is required. String
The Node group resource name (https://aip.dev/122).
nodeGroupConfig This property is required. Property Map
Optional. The node group instance group configuration.
roles This property is required. List<String>
Node group roles.

NodeGroupRolesItem
, NodeGroupRolesItemArgs

RoleUnspecified
ROLE_UNSPECIFIED: Required unspecified role.
Driver
DRIVER: Job drivers run on the node pool.
NodeGroupRolesItemRoleUnspecified
ROLE_UNSPECIFIED: Required unspecified role.
NodeGroupRolesItemDriver
DRIVER: Job drivers run on the node pool.
RoleUnspecified
ROLE_UNSPECIFIED: Required unspecified role.
Driver
DRIVER: Job drivers run on the node pool.
RoleUnspecified
ROLE_UNSPECIFIED: Required unspecified role.
Driver
DRIVER: Job drivers run on the node pool.
ROLE_UNSPECIFIED
ROLE_UNSPECIFIED: Required unspecified role.
DRIVER
DRIVER: Job drivers run on the node pool.
"ROLE_UNSPECIFIED"
ROLE_UNSPECIFIED: Required unspecified role.
"DRIVER"
DRIVER: Job drivers run on the node pool.

NodeInitializationAction
, NodeInitializationActionArgs

ExecutableFile This property is required. string
Cloud Storage URI of the executable file.
ExecutionTimeout string
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
ExecutableFile This property is required. string
Cloud Storage URI of the executable file.
ExecutionTimeout string
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. String
Cloud Storage URI of the executable file.
executionTimeout String
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. string
Cloud Storage URI of the executable file.
executionTimeout string
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executable_file This property is required. str
Cloud Storage URI of the executable file.
execution_timeout str
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. String
Cloud Storage URI of the executable file.
executionTimeout String
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
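As an illustration, the TypeScript sketch below registers a single initialization action with a five-minute timeout; the bucket and script paths are placeholders.

import * as google_native from "@pulumi/google-native";

// Run a startup script on every node; creation fails if it exceeds 5 minutes.
const cluster = new google_native.dataproc.v1.Cluster("init-cluster", {
    clusterName: "init-cluster",   // placeholder
    region: "us-central1",         // placeholder
    config: {
        initializationActions: [{
            executableFile: "gs://my-bucket/scripts/install-deps.sh", // placeholder
            executionTimeout: "300s", // JSON Duration format
        }],
    },
});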

NodeInitializationActionResponse
, NodeInitializationActionResponseArgs

ExecutableFile This property is required. string
Cloud Storage URI of the executable file.
ExecutionTimeout This property is required. string
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
ExecutableFile This property is required. string
Cloud Storage URI of the executable file.
ExecutionTimeout This property is required. string
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. String
Cloud Storage URI of the executable file.
executionTimeout This property is required. String
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. string
Cloud Storage URI of the executable file.
executionTimeout This property is required. string
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executable_file This property is required. str
Cloud Storage URI of the executable file.
execution_timeout This property is required. str
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. String
Cloud Storage URI of the executable file.
executionTimeout This property is required. String
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

ReservationAffinity
, ReservationAffinityArgs

ConsumeReservationType Pulumi.GoogleNative.Dataproc.V1.ReservationAffinityConsumeReservationType
Optional. Type of reservation to consume.
Key string
Optional. Corresponds to the label key of the reservation resource.
Values List<string>
Optional. Corresponds to the label values of the reservation resource.
ConsumeReservationType ReservationAffinityConsumeReservationType
Optional. Type of reservation to consume.
Key string
Optional. Corresponds to the label key of the reservation resource.
Values []string
Optional. Corresponds to the label values of the reservation resource.
consumeReservationType ReservationAffinityConsumeReservationType
Optional. Type of reservation to consume.
key String
Optional. Corresponds to the label key of the reservation resource.
values List<String>
Optional. Corresponds to the label values of the reservation resource.
consumeReservationType ReservationAffinityConsumeReservationType
Optional. Type of reservation to consume.
key string
Optional. Corresponds to the label key of the reservation resource.
values string[]
Optional. Corresponds to the label values of the reservation resource.
consume_reservation_type ReservationAffinityConsumeReservationType
Optional. Type of reservation to consume.
key str
Optional. Corresponds to the label key of the reservation resource.
values Sequence[str]
Optional. Corresponds to the label values of the reservation resource.
consumeReservationType "TYPE_UNSPECIFIED" | "NO_RESERVATION" | "ANY_RESERVATION" | "SPECIFIC_RESERVATION"
Optional. Type of reservation to consume.
key String
Optional. Corresponds to the label key of the reservation resource.
values List<String>
Optional. Corresponds to the label values of the reservation resource.
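For instance, targeting a specific reservation might look like the TypeScript sketch below; the project and reservation names are placeholders, and the key shown is the Compute Engine convention for named reservations.

import * as google_native from "@pulumi/google-native";

// Pin cluster VMs to a specific Compute Engine reservation.
const cluster = new google_native.dataproc.v1.Cluster("reserved-cluster", {
    clusterName: "reserved-cluster",   // placeholder
    region: "us-central1",             // placeholder
    config: {
        gceClusterConfig: {
            reservationAffinity: {
                consumeReservationType: "SPECIFIC_RESERVATION",
                key: "compute.googleapis.com/reservation-name",
                values: ["my-reservation"], // placeholder
            },
        },
    },
});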

ReservationAffinityConsumeReservationType
, ReservationAffinityConsumeReservationTypeArgs

TypeUnspecified
TYPE_UNSPECIFIED
NoReservation
NO_RESERVATION: Do not consume from any allocated capacity.
AnyReservation
ANY_RESERVATION: Consume any reservation available.
SpecificReservation
SPECIFIC_RESERVATION: Must consume from a specific reservation; the key and values fields must identify the reservation.
ReservationAffinityConsumeReservationTypeTypeUnspecified
TYPE_UNSPECIFIED
ReservationAffinityConsumeReservationTypeNoReservation
NO_RESERVATION: Do not consume from any allocated capacity.
ReservationAffinityConsumeReservationTypeAnyReservation
ANY_RESERVATION: Consume any reservation available.
ReservationAffinityConsumeReservationTypeSpecificReservation
SPECIFIC_RESERVATION: Must consume from a specific reservation; the key and values fields must identify the reservation.
TypeUnspecified
TYPE_UNSPECIFIED
NoReservation
NO_RESERVATION: Do not consume from any allocated capacity.
AnyReservation
ANY_RESERVATION: Consume any reservation available.
SpecificReservation
SPECIFIC_RESERVATION: Must consume from a specific reservation; the key and values fields must identify the reservation.
TypeUnspecified
TYPE_UNSPECIFIED
NoReservation
NO_RESERVATION: Do not consume from any allocated capacity.
AnyReservation
ANY_RESERVATION: Consume any reservation available.
SpecificReservation
SPECIFIC_RESERVATION: Must consume from a specific reservation; the key and values fields must identify the reservation.
TYPE_UNSPECIFIED
TYPE_UNSPECIFIED
NO_RESERVATION
NO_RESERVATION: Do not consume from any allocated capacity.
ANY_RESERVATION
ANY_RESERVATION: Consume any reservation available.
SPECIFIC_RESERVATION
SPECIFIC_RESERVATION: Must consume from a specific reservation; the key and values fields must identify the reservation.
"TYPE_UNSPECIFIED"
TYPE_UNSPECIFIED
"NO_RESERVATION"
NO_RESERVATION: Do not consume from any allocated capacity.
"ANY_RESERVATION"
ANY_RESERVATION: Consume any reservation available.
"SPECIFIC_RESERVATION"
SPECIFIC_RESERVATION: Must consume from a specific reservation; the key and values fields must identify the reservation.

ReservationAffinityResponse
, ReservationAffinityResponseArgs

ConsumeReservationType This property is required. string
Optional. Type of reservation to consume.
Key This property is required. string
Optional. Corresponds to the label key of the reservation resource.
Values This property is required. List<string>
Optional. Corresponds to the label values of the reservation resource.
ConsumeReservationType This property is required. string
Optional. Type of reservation to consume.
Key This property is required. string
Optional. Corresponds to the label key of the reservation resource.
Values This property is required. []string
Optional. Corresponds to the label values of the reservation resource.
consumeReservationType This property is required. String
Optional. Type of reservation to consume.
key This property is required. String
Optional. Corresponds to the label key of the reservation resource.
values This property is required. List<String>
Optional. Corresponds to the label values of the reservation resource.
consumeReservationType This property is required. string
Optional. Type of reservation to consume.
key This property is required. string
Optional. Corresponds to the label key of the reservation resource.
values This property is required. string[]
Optional. Corresponds to the label values of the reservation resource.
consume_reservation_type This property is required. str
Optional. Type of reservation to consume.
key This property is required. str
Optional. Corresponds to the label key of the reservation resource.
values This property is required. Sequence[str]
Optional. Corresponds to the label values of the reservation resource.
consumeReservationType This property is required. String
Optional. Type of reservation to consume.
key This property is required. String
Optional. Corresponds to the label key of the reservation resource.
values This property is required. List<String>
Optional. Corresponds to the label values of the reservation resource.

SecurityConfig
, SecurityConfigArgs

IdentityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.IdentityConfig
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
KerberosConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KerberosConfig
Optional. Kerberos-related configuration.
IdentityConfig IdentityConfig
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
KerberosConfig KerberosConfig
Optional. Kerberos-related configuration.
identityConfig IdentityConfig
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
kerberosConfig KerberosConfig
Optional. Kerberos-related configuration.
identityConfig IdentityConfig
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
kerberosConfig KerberosConfig
Optional. Kerberos-related configuration.
identity_config IdentityConfig
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
kerberos_config KerberosConfig
Optional. Kerberos-related configuration.
identityConfig Property Map
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
kerberosConfig Property Map
Optional. Kerberos-related configuration.
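A minimal Kerberos setup might be sketched as follows in TypeScript; the bucket, key ring, and key names are placeholders, and the kerberosConfig fields follow the v1 API.

import * as google_native from "@pulumi/google-native";

// Enable Kerberos with a KMS-encrypted root principal password.
const cluster = new google_native.dataproc.v1.Cluster("secure-cluster", {
    clusterName: "secure-cluster",   // placeholder
    region: "us-central1",           // placeholder
    config: {
        securityConfig: {
            kerberosConfig: {
                enableKerberos: true,
                rootPrincipalPasswordUri: "gs://my-bucket/kerberos-root-password.encrypted", // placeholder
                kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key", // placeholder
            },
        },
    },
});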

SecurityConfigResponse
, SecurityConfigResponseArgs

IdentityConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.IdentityConfigResponse
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
KerberosConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.KerberosConfigResponse
Optional. Kerberos-related configuration.
IdentityConfig This property is required. IdentityConfigResponse
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
KerberosConfig This property is required. KerberosConfigResponse
Optional. Kerberos-related configuration.
identityConfig This property is required. IdentityConfigResponse
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
kerberosConfig This property is required. KerberosConfigResponse
Optional. Kerberos-related configuration.
identityConfig This property is required. IdentityConfigResponse
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
kerberosConfig This property is required. KerberosConfigResponse
Optional. Kerberos-related configuration.
identity_config This property is required. IdentityConfigResponse
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
kerberos_config This property is required. KerberosConfigResponse
Optional. Kerberos-related configuration.
identityConfig This property is required. Property Map
Optional. Identity-related configuration, including service-account-based secure multi-tenancy user mappings.
kerberosConfig This property is required. Property Map
Optional. Kerberos-related configuration.

ShieldedInstanceConfig
, ShieldedInstanceConfigArgs

EnableIntegrityMonitoring bool
Optional. Defines whether instances have integrity monitoring enabled.
EnableSecureBoot bool
Optional. Defines whether instances have Secure Boot enabled.
EnableVtpm bool
Optional. Defines whether instances have the vTPM enabled.
EnableIntegrityMonitoring bool
Optional. Defines whether instances have integrity monitoring enabled.
EnableSecureBoot bool
Optional. Defines whether instances have Secure Boot enabled.
EnableVtpm bool
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring Boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot Boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm Boolean
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm boolean
Optional. Defines whether instances have the vTPM enabled.
enable_integrity_monitoring bool
Optional. Defines whether instances have integrity monitoring enabled.
enable_secure_boot bool
Optional. Defines whether instances have Secure Boot enabled.
enable_vtpm bool
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring Boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot Boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm Boolean
Optional. Defines whether instances have the vTPM enabled.
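Shielded instance settings sit under the cluster's Compute Engine config; a TypeScript sketch (cluster name and region are placeholders):

import * as google_native from "@pulumi/google-native";

// Enable all three Shielded VM features on cluster instances.
const cluster = new google_native.dataproc.v1.Cluster("shielded-cluster", {
    clusterName: "shielded-cluster",   // placeholder
    region: "us-central1",             // placeholder
    config: {
        gceClusterConfig: {
            shieldedInstanceConfig: {
                enableSecureBoot: true,
                enableVtpm: true,
                enableIntegrityMonitoring: true,
            },
        },
    },
});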

ShieldedInstanceConfigResponse
, ShieldedInstanceConfigResponseArgs

EnableIntegrityMonitoring This property is required. bool
Optional. Defines whether instances have integrity monitoring enabled.
EnableSecureBoot This property is required. bool
Optional. Defines whether instances have Secure Boot enabled.
EnableVtpm This property is required. bool
Optional. Defines whether instances have the vTPM enabled.
EnableIntegrityMonitoring This property is required. bool
Optional. Defines whether instances have integrity monitoring enabled.
EnableSecureBoot This property is required. bool
Optional. Defines whether instances have Secure Boot enabled.
EnableVtpm This property is required. bool
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring This property is required. Boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot This property is required. Boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm This property is required. Boolean
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring This property is required. boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot This property is required. boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm This property is required. boolean
Optional. Defines whether instances have the vTPM enabled.
enable_integrity_monitoring This property is required. bool
Optional. Defines whether instances have integrity monitoring enabled.
enable_secure_boot This property is required. bool
Optional. Defines whether instances have Secure Boot enabled.
enable_vtpm This property is required. bool
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring This property is required. Boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot This property is required. Boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm This property is required. Boolean
Optional. Defines whether instances have the vTPM enabled.

SoftwareConfig
, SoftwareConfigArgs

ImageVersion string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
OptionalComponents List<Pulumi.GoogleNative.Dataproc.V1.SoftwareConfigOptionalComponentsItem>
Optional. The set of components to activate on the cluster.
Properties Dictionary<string, string>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
ImageVersion string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
OptionalComponents []SoftwareConfigOptionalComponentsItem
Optional. The set of components to activate on the cluster.
Properties map[string]string
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion String
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents List<SoftwareConfigOptionalComponentsItem>
Optional. The set of components to activate on the cluster.
properties Map<String,String>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents SoftwareConfigOptionalComponentsItem[]
Optional. The set of components to activate on the cluster.
properties {[key: string]: string}
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
image_version str
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optional_components Sequence[SoftwareConfigOptionalComponentsItem]
Optional. The set of components to activate on the cluster.
properties Mapping[str, str]
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion String
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents List<"COMPONENT_UNSPECIFIED" | "ANACONDA" | "DOCKER" | "DRUID" | "FLINK" | "HBASE" | "HIVE_WEBHCAT" | "HUDI" | "JUPYTER" | "PRESTO" | "TRINO" | "RANGER" | "SOLR" | "ZEPPELIN" | "ZOOKEEPER">
Optional. The set of components to activate on the cluster.
properties Map<String>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
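Putting these pieces together, a software config might be declared as in the TypeScript sketch below; the image version, component list, and property values are illustrative only.

import * as google_native from "@pulumi/google-native";

// Pin an image version, enable optional components, and set daemon properties.
const cluster = new google_native.dataproc.v1.Cluster("tuned-cluster", {
    clusterName: "tuned-cluster",   // placeholder
    region: "us-central1",          // placeholder
    config: {
        softwareConfig: {
            imageVersion: "2.1-debian11",                 // example supported version
            optionalComponents: ["JUPYTER", "ZOOKEEPER"],
            properties: {
                "spark:spark.executor.memory": "4g",      // prefix:property format
                "yarn:yarn.nodemanager.resource.memory-mb": "8192",
            },
        },
    },
});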

SoftwareConfigOptionalComponentsItem
, SoftwareConfigOptionalComponentsItemArgs

ComponentUnspecified
COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause cluster creation to fail.
Anaconda
ANACONDA: The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image; the 2.0 image is pre-installed with Miniconda.
Docker
DOCKER: Docker.
Druid
DRUID: The Druid query engine (alpha).
Flink
FLINK: Flink.
Hbase
HBASE: HBase (beta).
HiveWebhcat
HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
Hudi
HUDI: Hudi.
Jupyter
JUPYTER: The Jupyter Notebook.
Presto
PRESTO: The Presto query engine.
Trino
TRINO: The Trino query engine.
Ranger
RANGER: The Ranger service.
Solr
SOLR: The Solr service.
Zeppelin
ZEPPELIN: The Zeppelin notebook.
Zookeeper
ZOOKEEPER: The ZooKeeper service.
SoftwareConfigOptionalComponentsItemComponentUnspecified
COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause cluster creation to fail.
SoftwareConfigOptionalComponentsItemAnaconda
ANACONDA: The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image; the 2.0 image is pre-installed with Miniconda.
SoftwareConfigOptionalComponentsItemDocker
DOCKER: Docker.
SoftwareConfigOptionalComponentsItemDruid
DRUID: The Druid query engine (alpha).
SoftwareConfigOptionalComponentsItemFlink
FLINK: Flink.
SoftwareConfigOptionalComponentsItemHbase
HBASE: HBase (beta).
SoftwareConfigOptionalComponentsItemHiveWebhcat
HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
SoftwareConfigOptionalComponentsItemHudi
HUDI: Hudi.
SoftwareConfigOptionalComponentsItemJupyter
JUPYTER: The Jupyter Notebook.
SoftwareConfigOptionalComponentsItemPresto
PRESTO: The Presto query engine.
SoftwareConfigOptionalComponentsItemTrino
TRINO: The Trino query engine.
SoftwareConfigOptionalComponentsItemRanger
RANGER: The Ranger service.
SoftwareConfigOptionalComponentsItemSolr
SOLR: The Solr service.
SoftwareConfigOptionalComponentsItemZeppelin
ZEPPELIN: The Zeppelin notebook.
SoftwareConfigOptionalComponentsItemZookeeper
ZOOKEEPER: The ZooKeeper service.
ComponentUnspecified
COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause cluster creation to fail.
Anaconda
ANACONDA: The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image; the 2.0 image is pre-installed with Miniconda.
Docker
DOCKER: Docker.
Druid
DRUID: The Druid query engine (alpha).
Flink
FLINK: Flink.
Hbase
HBASE: HBase (beta).
HiveWebhcat
HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
Hudi
HUDI: Hudi.
Jupyter
JUPYTER: The Jupyter Notebook.
Presto
PRESTO: The Presto query engine.
Trino
TRINO: The Trino query engine.
Ranger
RANGER: The Ranger service.
Solr
SOLR: The Solr service.
Zeppelin
ZEPPELIN: The Zeppelin notebook.
Zookeeper
ZOOKEEPER: The ZooKeeper service.
ComponentUnspecified
COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause cluster creation to fail.
Anaconda
ANACONDA: The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image; the 2.0 image is pre-installed with Miniconda.
Docker
DOCKER: Docker.
Druid
DRUID: The Druid query engine (alpha).
Flink
FLINK: Flink.
Hbase
HBASE: HBase (beta).
HiveWebhcat
HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
Hudi
HUDI: Hudi.
Jupyter
JUPYTER: The Jupyter Notebook.
Presto
PRESTO: The Presto query engine.
Trino
TRINO: The Trino query engine.
Ranger
RANGER: The Ranger service.
Solr
SOLR: The Solr service.
Zeppelin
ZEPPELIN: The Zeppelin notebook.
Zookeeper
ZOOKEEPER: The ZooKeeper service.
COMPONENT_UNSPECIFIED
COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause cluster creation to fail.
ANACONDA
ANACONDA: The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image; the 2.0 image is pre-installed with Miniconda.
DOCKER
DOCKER: Docker.
DRUID
DRUID: The Druid query engine (alpha).
FLINK
FLINK: Flink.
HBASE
HBASE: HBase (beta).
HIVE_WEBHCAT
HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
HUDI
HUDI: Hudi.
JUPYTER
JUPYTER: The Jupyter Notebook.
PRESTO
PRESTO: The Presto query engine.
TRINO
TRINO: The Trino query engine.
RANGER
RANGER: The Ranger service.
SOLR
SOLR: The Solr service.
ZEPPELIN
ZEPPELIN: The Zeppelin notebook.
ZOOKEEPER
ZOOKEEPER: The ZooKeeper service.
"COMPONENT_UNSPECIFIED"
COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause cluster creation to fail.
"ANACONDA"
ANACONDA: The Anaconda Python distribution. The Anaconda component is not supported in the Dataproc 2.0 image; the 2.0 image is pre-installed with Miniconda.
"DOCKER"
DOCKER: Docker.
"DRUID"
DRUID: The Druid query engine (alpha).
"FLINK"
FLINK: Flink.
"HBASE"
HBASE: HBase (beta).
"HIVE_WEBHCAT"
HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
"HUDI"
HUDI: Hudi.
"JUPYTER"
JUPYTER: The Jupyter Notebook.
"PRESTO"
PRESTO: The Presto query engine.
"TRINO"
TRINO: The Trino query engine.
"RANGER"
RANGER: The Ranger service.
"SOLR"
SOLR: The Solr service.
"ZEPPELIN"
ZEPPELIN: The Zeppelin notebook.
"ZOOKEEPER"
ZOOKEEPER: The ZooKeeper service.

SoftwareConfigResponse
, SoftwareConfigResponseArgs

ImageVersion This property is required. string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
OptionalComponents This property is required. List<string>
Optional. The set of components to activate on the cluster.
Properties This property is required. Dictionary<string, string>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
ImageVersion This property is required. string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
OptionalComponents This property is required. []string
Optional. The set of components to activate on the cluster.
Properties This property is required. map[string]string
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion This property is required. String
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents This property is required. List<String>
Optional. The set of components to activate on the cluster.
properties This property is required. Map<String,String>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion This property is required. string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents This property is required. string[]
Optional. The set of components to activate on the cluster.
properties This property is required. {[key: string]: string}
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
image_version This property is required. str
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optional_components This property is required. Sequence[str]
Optional. The set of components to activate on the cluster.
properties This property is required. Mapping[str, str]
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion This property is required. String
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents This property is required. List<String>
Optional. The set of components to activate on the cluster.
properties This property is required. Map<String>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
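
For orientation, here is a minimal TypeScript sketch of how these three fields fit together on a cluster's softwareConfig. The region, cluster name, component choices, and property values are placeholder assumptions, not prescribed values:

import * as googleNative from "@pulumi/google-native";

// A hedged sketch: pin the image version, enable optional components, and
// override daemon properties. All names and values below are illustrative.
const cluster = new googleNative.dataproc.v1.Cluster("software-config-example", {
    region: "us-central1",                           // placeholder region
    clusterName: "example-cluster",                  // placeholder name
    config: {
        softwareConfig: {
            imageVersion: "2.1",                     // any supported Dataproc image version
            optionalComponents: ["JUPYTER", "ZEPPELIN"], // values from the Component list above
            properties: {
                // prefix:property keys map onto the daemon config files listed above
                "core:hadoop.tmp.dir": "/tmp/hadoop",
                "spark:spark.executor.memory": "4g",
            },
        },
    },
});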

SparkHistoryServerConfig, SparkHistoryServerConfigArgs

DataprocCluster string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
DataprocCluster string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster String
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataproc_cluster str
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster String
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
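
As a small sketch of the input shape (with placeholder project, region, and cluster names), the value is just the full resource name of the history-server cluster; it nests under virtualClusterConfig.auxiliaryServicesConfig, as the VirtualClusterConfig example further below shows:

// Placeholder resource name; nest this object under
// virtualClusterConfig.auxiliaryServicesConfig.sparkHistoryServerConfig.
const sparkHistoryServerConfig = {
    dataprocCluster:
        "projects/my-project/regions/us-central1/clusters/my-history-server",
};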

SparkHistoryServerConfigResponse, SparkHistoryServerConfigResponseArgs

DataprocCluster This property is required. string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
DataprocCluster This property is required. string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster This property is required. String
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster This property is required. string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataproc_cluster This property is required. str
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster This property is required. String
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

StartupConfig, StartupConfigArgs

RequiredRegistrationFraction double
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).
RequiredRegistrationFraction float64
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).
requiredRegistrationFraction Double
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).
requiredRegistrationFraction number
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).
required_registration_fraction float
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).
requiredRegistrationFraction Number
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).
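
Since this setting currently applies only to secondary workers, a hedged sketch would attach it to the secondary worker group. This assumes (as in the underlying API) that startupConfig is a field of the instance group config; the 0.75 fraction, instance count, and names are arbitrary placeholders:

import * as googleNative from "@pulumi/google-native";

// A hedged sketch: require 75% of secondary workers (3 of 4 here) to register
// before the cluster create/update is considered successful.
const cluster = new googleNative.dataproc.v1.Cluster("startup-config-example", {
    region: "us-central1",
    clusterName: "startup-example",
    config: {
        secondaryWorkerConfig: {
            numInstances: 4,
            startupConfig: {
                requiredRegistrationFraction: 0.75,
            },
        },
    },
});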

StartupConfigResponse, StartupConfigResponseArgs

RequiredRegistrationFraction This property is required. double
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).
RequiredRegistrationFraction This property is required. float64
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).
requiredRegistrationFraction This property is required. Double
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).
requiredRegistrationFraction This property is required. number
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).
required_registration_fraction This property is required. float
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).
requiredRegistrationFraction This property is required. Number
Optional. The config setting that allows cluster creation or update to succeed only after required_registration_fraction of instances are up and running. For now, this configuration applies only to secondary workers. The cluster fails if required_registration_fraction of instances are not available; this includes instance creation, agent registration, and service registration (if enabled).

VirtualClusterConfig, VirtualClusterConfigArgs

KubernetesClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesClusterConfig
The configuration for running the Dataproc cluster on Kubernetes.
AuxiliaryServicesConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryServicesConfig
Optional. Configuration of auxiliary services used by this cluster.
StagingBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
KubernetesClusterConfig This property is required. KubernetesClusterConfig
The configuration for running the Dataproc cluster on Kubernetes.
AuxiliaryServicesConfig AuxiliaryServicesConfig
Optional. Configuration of auxiliary services used by this cluster.
StagingBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
kubernetesClusterConfig This property is required. KubernetesClusterConfig
The configuration for running the Dataproc cluster on Kubernetes.
auxiliaryServicesConfig AuxiliaryServicesConfig
Optional. Configuration of auxiliary services used by this cluster.
stagingBucket String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
kubernetesClusterConfig This property is required. KubernetesClusterConfig
The configuration for running the Dataproc cluster on Kubernetes.
auxiliaryServicesConfig AuxiliaryServicesConfig
Optional. Configuration of auxiliary services used by this cluster.
stagingBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
kubernetes_cluster_config This property is required. KubernetesClusterConfig
The configuration for running the Dataproc cluster on Kubernetes.
auxiliary_services_config AuxiliaryServicesConfig
Optional. Configuration of auxiliary services used by this cluster.
staging_bucket str
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
kubernetesClusterConfig This property is required. Property Map
The configuration for running the Dataproc cluster on Kubernetes.
auxiliaryServicesConfig Property Map
Optional. Configuration of auxiliary services used by this cluster.
stagingBucket String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
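
Putting the three fields together, a hedged sketch of a Dataproc-on-GKE virtual cluster might look like the following. The GKE cluster path, namespace, bucket, and history-server reference are all placeholder assumptions, and additional settings (for example node-pool targets or kubernetesSoftwareConfig) may be required in practice:

import * as googleNative from "@pulumi/google-native";

// All resource names below are illustrative placeholders.
const virtualCluster = new googleNative.dataproc.v1.Cluster("virtual-cluster-example", {
    region: "us-central1",
    clusterName: "gke-backed-cluster",
    virtualClusterConfig: {
        stagingBucket: "my-dataproc-staging-bucket", // bucket name, not a gs:// URI
        kubernetesClusterConfig: {
            gkeClusterConfig: {
                gkeClusterTarget:
                    "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
            },
            kubernetesNamespace: "dataproc",
        },
        auxiliaryServicesConfig: {
            sparkHistoryServerConfig: {
                dataprocCluster:
                    "projects/my-project/regions/us-central1/clusters/my-history-server",
            },
        },
    },
});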

VirtualClusterConfigResponse, VirtualClusterConfigResponseArgs

AuxiliaryServicesConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryServicesConfigResponse
Optional. Configuration of auxiliary services used by this cluster.
KubernetesClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesClusterConfigResponse
The configuration for running the Dataproc cluster on Kubernetes.
StagingBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
AuxiliaryServicesConfig This property is required. AuxiliaryServicesConfigResponse
Optional. Configuration of auxiliary services used by this cluster.
KubernetesClusterConfig This property is required. KubernetesClusterConfigResponse
The configuration for running the Dataproc cluster on Kubernetes.
StagingBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
auxiliaryServicesConfig This property is required. AuxiliaryServicesConfigResponse
Optional. Configuration of auxiliary services used by this cluster.
kubernetesClusterConfig This property is required. KubernetesClusterConfigResponse
The configuration for running the Dataproc cluster on Kubernetes.
stagingBucket This property is required. String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
auxiliaryServicesConfig This property is required. AuxiliaryServicesConfigResponse
Optional. Configuration of auxiliary services used by this cluster.
kubernetesClusterConfig This property is required. KubernetesClusterConfigResponse
The configuration for running the Dataproc cluster on Kubernetes.
stagingBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
auxiliary_services_config This property is required. AuxiliaryServicesConfigResponse
Optional. Configuration of auxiliary services used by this cluster.
kubernetes_cluster_config This property is required. KubernetesClusterConfigResponse
The configuration for running the Dataproc cluster on Kubernetes.
staging_bucket This property is required. str
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
auxiliaryServicesConfig This property is required. Property Map
Optional. Configuration of auxiliary services used by this cluster.
kubernetesClusterConfig This property is required. Property Map
The configuration for running the Dataproc cluster on Kubernetes.
stagingBucket This property is required. String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
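
Response types such as this one describe what the provider returns after creation; the resolved values can be read back from the resource's outputs. A small sketch, reusing the virtualCluster resource from the example above:

// Export the staging bucket the service resolved (it may have been
// auto-created if stagingBucket was left unset in the inputs).
export const resolvedStagingBucket =
    virtualCluster.virtualClusterConfig.apply(vcc => vcc.stagingBucket);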

Package Details

Repository: Google Cloud Native (pulumi/pulumi-google-native)
License: Apache-2.0
