
Add DatabricksInstancePool


external help file: azure.databricks.cicd.tools-help.xml
Module Name: azure.databricks.cicd.tools
online version:
schema: 2.0.0

Add-DatabricksInstancePool

SYNOPSIS

Creates a new Databricks instance pool

SYNTAX

Add-DatabricksInstancePool [[-BearerToken] <String>] [[-Region] <String>] [-InstancePoolName] <String>
 [[-MinIdleInstances] <Int32>] [-MaxCapacity] <Int32> [-NodeType] <String> [[-CustomTags] <Hashtable>]
 [[-IdleInstanceAutoterminationMinutes] <Int32>] [[-PreloadedSparkVersions] <String[]>] [<CommonParameters>]

DESCRIPTION

Creates a new instance pool in your workspace. If an instance pool with the same name already exists, it is updated instead.

EXAMPLES

Example 1

PS C:\> Add-DatabricksInstancePool -BearerToken $BearerToken -Region northeurope -InstancePoolName 'DevPool' -MinIdleInstances 2 -MaxCapacity 10 -NodeType 'Standard_DS3_v2' -IdleInstanceAutoterminationMinutes 30

Creates (or updates, if the name already exists) an instance pool called DevPool that keeps two idle instances warm, allows at most ten instances in total, and terminates excess idle instances after 30 minutes. The pool name, node type, and numeric values are illustrative; substitute values appropriate to your workspace.
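
Example 2

Because -InstancePoolName accepts pipeline input by value, several pools can be created in one pass. A minimal sketch, assuming the same region, capacity, and node type suit every pool (names and values are illustrative):

PS C:\> 'DevPool','TestPool' | Add-DatabricksInstancePool -BearerToken $BearerToken -Region northeurope -MaxCapacity 10 -NodeType 'Standard_DS3_v2'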

PARAMETERS

-BearerToken

Your Databricks Bearer token to authenticate to your workspace (see User Settings in Databricks WebUI)

Type: String
Parameter Sets: (All)
Aliases:

Required: False
Position: 1
Default value: None
Accept pipeline input: False
Accept wildcard characters: False

-Region

Azure Region - must match the URL of your Databricks workspace, for example northeurope

Type: String
Parameter Sets: (All)
Aliases:

Required: False
Position: 2
Default value: None
Accept pipeline input: False
Accept wildcard characters: False

-InstancePoolName

The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and fewer than 100 characters. NOTE: If an instance pool with this name already exists, it will be updated rather than created.

Type: String
Parameter Sets: (All)
Aliases:

Required: True
Position: 3
Default value: None
Accept pipeline input: True (ByValue)
Accept wildcard characters: False

-MinIdleInstances

The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.

Type: Int32
Parameter Sets: (All)
Aliases:

Required: False
Position: 4
Default value: 0
Accept pipeline input: False
Accept wildcard characters: False

-MaxCapacity

The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling.

Type: Int32
Parameter Sets: (All)
Aliases:

Required: True
Position: 5
Default value: 0
Accept pipeline input: False
Accept wildcard characters: False

-NodeType

The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool's idle instances are allocated based on this type. You can retrieve a list of available node types by using the List Node Types API call.

Type: String
Parameter Sets: (All)
Aliases:

Required: True
Position: 6
Default value: None
Accept pipeline input: False
Accept wildcard characters: False

-CustomTags

Additional tags for instance pool resources. Azure Databricks tags all pool resources (e.g. VM disk volumes) with these tags in addition to default_tags.

Azure Databricks allows up to 41 custom tags.
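
For example, tags are passed as a hashtable (the key names here are illustrative, not required keys):

PS C:\> Add-DatabricksInstancePool -BearerToken $BearerToken -Region northeurope -InstancePoolName 'DevPool' -MaxCapacity 10 -NodeType 'Standard_DS3_v2' -CustomTags @{CostCentre = 'CC1234'; Environment = 'Dev'}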

Type: Hashtable
Parameter Sets: (All)
Aliases:

Required: False
Position: 7
Default value: None
Accept pipeline input: False
Accept wildcard characters: False

-IdleInstanceAutoterminationMinutes

The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If 0 is supplied, excess idle instances are removed as soon as possible.

Type: Int32
Parameter Sets: (All)
Aliases:

Required: False
Position: 8
Default value: 0
Accept pipeline input: False
Accept wildcard characters: False

-PreloadedSparkVersions

A list of Spark image versions the pool installs on each instance. Pool clusters that use one of the preloaded Spark versions start faster, as they do not have to wait for the Spark image to download. You can retrieve a list of available Spark versions by using the Spark Versions API call.
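
For example (the version string is illustrative; use the Spark Versions API call to list the values valid for your workspace):

PS C:\> Add-DatabricksInstancePool -BearerToken $BearerToken -Region northeurope -InstancePoolName 'DevPool' -MaxCapacity 10 -NodeType 'Standard_DS3_v2' -PreloadedSparkVersions @('5.5.x-scala2.11')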

Type: String[]
Parameter Sets: (All)
Aliases:

Required: False
Position: 9
Default value: None
Accept pipeline input: False
Accept wildcard characters: False

CommonParameters

This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters.

INPUTS

OUTPUTS

NOTES

Author: Simon D'Morias / Data Thirst Ltd

RELATED LINKS
