Support login to MRS cluster node with password #628

Merged · 1 commit · Nov 3, 2020
230 changes: 113 additions & 117 deletions docs/resources/mrs_cluster.md
@@ -7,7 +7,7 @@ subcategory: "MapReduce Service (MRS)"
Manages resource cluster within HuaweiCloud MRS.
This is an alternative to `huaweicloud_mrs_cluster_v1`

## Example Usage: Creating a MRS cluster
## Example Usage: Creating an MRS cluster

```hcl
resource "huaweicloud_mrs_cluster" "cluster1" {
@@ -26,8 +26,8 @@ resource "huaweicloud_mrs_cluster" "cluster1" {
safe_mode = 0
cluster_admin_secret = var.admin_secret
node_public_cert_name = var.keypair
vpc_id = "51edfb75-f9f0-4bbc-b4dc-21466b93f60d"
subnet_id = "1d7a8646-43ee-455a-a3ab-40da87a1304c"
vpc_id = var.vpc_id
subnet_id = var.subnet_id

component_list {
component_name = "Hadoop"
@@ -45,133 +45,129 @@ resource "huaweicloud_mrs_cluster" "cluster1" {

The following arguments are supported:

* `region` - (Optional) The region in which to obtain the msr cluster resource. If omitted, the provider-level region will work as default. Changing this creates a new msr cluster resource.
* `region` - (Optional) The region in which to obtain the MRS cluster resource.
If omitted, the provider-level region will be used as the default.
Changing this creates a new MRS cluster resource.

* `billing_type` - (Required) The value is 12, indicating on-demand payment.

* `region` - (Required) Cluster region information. Obtain the value from
Regions and Endpoints.

* `cluster_name` - (Required) Cluster name, which is globally unique and contains
only 1 to 64 letters, digits, hyphens (-), and underscores (_).

* `cluster_version` - (Optional) Version of the clusters. Currently, MRS 1.8.10, MRS 1.9.2
and MRS 2.1.0 are supported.

* `cluster_type` - (Optional) Type of clusters 0: analysis cluster 1: streaming
cluster The default value is 0.
* `cluster_type` - (Optional) Type of clusters. 0: analysis cluster; 1: streaming
cluster. The default value is 0.

* `master_node_num` - (Required) Number of Master nodes The value is 2.
* `master_node_num` - (Required) Number of Master nodes. Set this parameter to 2 to enable cluster HA,
or set it to 1 to disable cluster HA.
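
  A fragment (not a complete resource) showing the HA layout; the Core node count is chosen for illustration:

  ```hcl
  # Fragment: two Master nodes enable cluster HA; one Master node disables it.
  master_node_num = 2
  core_node_num   = 3
  ```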

* `master_node_size` - (Required) Instance specification of a Master node. Host specifications
are determined by CPUs, memory, and disk space. MRS supports the instance specifications
detailed in [MRS specifications](https://support.huaweicloud.com/en-us/api-mrs/mrs_01_9006.html).

* `core_node_num` - (Required) Number of Core nodes Value range: 3 to 100 A
maximum of 100 Core nodes are supported by default. If more than 100 Core nodes
are required, contact technical support engineers or invoke background APIs
to modify the database.
* `core_node_num` - (Required) Number of Core nodes. Value range: 1 to 500.

* `core_node_size` - (Required) Instance specification of a Core node. The configuration
method of this parameter is identical to that of `master_node_size`.

* `available_zone_id` - (Required) ID of an available zone. Obtain the value
from Regions and Endpoints.
North China AZ1 (cn-north-1a): ae04cf9d61544df3806a3feeb401b204,
North China AZ2 (cn-north-1b): d573142f24894ef3bd3664de068b44b0,
East China AZ1 (cn-east-2a): 72d50cedc49846b9b42c21495f38d81c,
East China AZ2 (cn-east-2b): 38b0f7a602344246bcb0da47b5d548e7,
East China AZ3 (cn-east-2c): 5547fd6bf8f84bb5a7f9db062ad3d015,
South China AZ1(cn-south-1a): 34f5ff4865cf4ed6b270f15382ebdec5,
South China AZ2(cn-south-2b): 043c7e39ecb347a08dc8fcb6c35a274e,
South China AZ3(cn-south-1c): af1687643e8c4ec1b34b688e4e3b8901,

* `vpc_id` - (Required) ID of the VPC where the subnet locates Obtain the VPC
ID from the management console as follows: Register an account and log in to
the management console. Click Virtual Private Cloud and select Virtual Private
Cloud from the left list. On the Virtual Private Cloud page, obtain the VPC
ID from the list.

* `subnet_id` - (Required) Subnet ID Obtain the subnet ID from the management
console as follows: Register an account and log in to the management console.
Click Virtual Private Cloud and select Virtual Private Cloud from the left list.
On the Virtual Private Cloud page, obtain the subnet ID from the list.

* `volume_type` - (Required) Type of disks SATA and SSD are supported. SATA:
common I/O SSD: super high-speed I/O

* `volume_size` - (Required) Data disk storage space of a Core node Users can
add disks to expand storage capacity when creating a cluster. There are the
following scenarios: Separation of data storage and computing: Data is stored
in the OBS system. Costs of clusters are relatively low but computing performance
is poor. The clusters can be deleted at any time. It is recommended when data
computing is not frequently performed. Integration of data storage and computing:
Data is stored in the HDFS system. Costs of clusters are relatively high but
computing performance is good. The clusters cannot be deleted in a short term.
It is recommended when data computing is frequently performed. Value range:
100 GB to 32000 GB

* `node_public_cert_name` - (Required) Name of a key pair You can use a key
* `available_zone_id` - (Required) ID of an available zone. The values are as follows:

CN North-Beijing1 AZ1 (cn-north-1a): ae04cf9d61544df3806a3feeb401b204<br>
CN North-Beijing1 AZ2 (cn-north-1b): d573142f24894ef3bd3664de068b44b0<br>
CN North-Beijing4 AZ1 (cn-north-4a): effdcbc7d4d64a02aa1fa26b42f56533<br>
CN North-Beijing4 AZ2 (cn-north-4b): a0865121f83b41cbafce65930a22a6e8<br>
CN North-Beijing4 AZ3 (cn-north-4c): 2dcb154ac2724a6d92e9bcc859657c1e<br>
CN East-Shanghai1 AZ1 (cn-east-3a): e7afd64502d64fe3bfb60c2c82ec0ec6<br>
CN East-Shanghai1 AZ2 (cn-east-3b): d90ff6d692954373bf53be49cf3900cb<br>
CN East-Shanghai1 AZ3 (cn-east-3c): 2dafb4c708da4d509d0ad24864ae1c6d<br>
CN East-Shanghai2 AZ1 (cn-east-2a): 72d50cedc49846b9b42c21495f38d81c<br>
CN East-Shanghai2 AZ2 (cn-east-2b): 38b0f7a602344246bcb0da47b5d548e7<br>
CN East-Shanghai2 AZ3 (cn-east-2c): 5547fd6bf8f84bb5a7f9db062ad3d015<br>
CN South-Guangzhou AZ1 (cn-south-1a): 34f5ff4865cf4ed6b270f15382ebdec5<br>
CN South-Guangzhou AZ2 (cn-south-2b): 043c7e39ecb347a08dc8fcb6c35a274e<br>
CN South-Guangzhou AZ3 (cn-south-1c): af1687643e8c4ec1b34b688e4e3b8901<br>
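
  For example, to place a cluster in CN North-Beijing4 AZ1, a fragment would look like:

  ```hcl
  # Fragment: availability zone ID for cn-north-4a (CN North-Beijing4 AZ1).
  available_zone_id = "effdcbc7d4d64a02aa1fa26b42f56533"
  ```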

* `vpc_id` - (Required) Specifies the VPC ID. Changing this parameter will create a new resource.

* `subnet_id` - (Required) Specifies the network id of a subnet. Changing this parameter will create a new resource.

* `volume_type` - (Required) Type of disks. SATA and SSD are supported. SATA: common I/O;
SSD: super high-speed I/O.

* `volume_size` - (Required) Data disk storage space of a Core node. Value range: 100 GB to 32,000 GB.
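
  A fragment combining the two disk arguments, with values chosen for illustration:

  ```hcl
  # Fragment: 100 GB SATA (common I/O) data disks on each Core node.
  volume_type = "SATA"
  volume_size = 100
  ```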

* `safe_mode` - (Required) Running mode of an MRS cluster.
- 0: indicates that the Kerberos authentication is disabled.
Users can use all functions provided by the cluster.
- 1: indicates that the Kerberos authentication is enabled.
Common users cannot use the file management or job management functions of an MRS cluster
and cannot view cluster resource usage or the job records of Hadoop and Spark.
To use these functions, the users must obtain the relevant permissions from the MRS Manager administrator.

* `cluster_admin_secret` - (Required) Indicates the password of the MRS Manager administrator.
This parameter must meet the following requirements:
- Must contain 8 to 26 characters.
- Must contain at least three of the following: uppercase letters, lowercase letters,
digits, and special characters: `~!@#$%^&*()-_=+\|[{}];:'",<.>/? and space.
- Cannot be the username or the username spelled backwards.

* `node_password` - (Optional) Password of user **root** for logging in to a cluster node.
This parameter and `node_public_cert_name` are mutually exclusive; specify exactly one of them.
A password must meet the following requirements:
- Must be 8 to 26 characters.
- Must contain at least three of the following: uppercase letters, lowercase letters,
digits, and special characters (!@$%^-_=+[{}]:,./?), but must not contain spaces.
- Cannot be the username or the username spelled backwards.

* `node_public_cert_name` - (Optional) Name of a key pair. You can use a key
to log in to the Master node in the cluster.

* `safe_mode` - (Required) MRS cluster running mode 0: common mode The value
indicates that the Kerberos authentication is disabled. Users can use all functions
provided by the cluster. 1: safe mode The value indicates that the Kerberos
authentication is enabled. Common users cannot use the file management or job
management functions of an MRS cluster and cannot view cluster resource usage
or the job records of Hadoop and Spark. To use these functions, the users must
obtain the relevant permissions from the MRS Manager administrator. The request
has the cluster_admin_secret parameter only when safe_mode is set to 1.

* `cluster_admin_secret` - (Optional) Indicates the password of the MRS Manager
administrator. The password for MRS 1.5.0: Must contain 6 to 32 characters.
Must contain at least two types of the following: Lowercase letters Uppercase
letters Digits Special characters of `~!@#$%^&*()-_=+\|[{}];:'",<.>/? Spaces
Must be different from the username. Must be different from the username written
in reverse order. The password for MRS 1.3.0: Must contain 8 to 64 characters.
Must contain at least four types of the following: Lowercase letters Uppercase
letters Digits Special characters of `~!@#$%^&*()-_=+\|[{}];:'",<.>/? Spaces
Must be different from the username. Must be different from the username written
in reverse order. This parameter needs to be configured only when safe_mode
is set to 1.
This parameter and `node_password` are mutually exclusive; specify exactly one of them.
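
Because the two login arguments are mutually exclusive, a password-based cluster omits `node_public_cert_name`. A minimal sketch of what this PR enables (the flavor, availability zone, and variable names are assumptions for illustration):

```hcl
resource "huaweicloud_mrs_cluster" "password_login" {
  cluster_name      = "mrs-password-demo"
  cluster_version   = "MRS 1.9.2"
  billing_type      = 12
  master_node_num   = 2
  master_node_size  = "c3.4xlarge.2.linux.bigdata" # assumed flavor; see MRS specifications
  core_node_num     = 3
  core_node_size    = "c3.4xlarge.2.linux.bigdata"
  volume_type       = "SATA"
  volume_size       = 100
  available_zone_id = "effdcbc7d4d64a02aa1fa26b42f56533" # cn-north-4a
  vpc_id            = var.vpc_id
  subnet_id         = var.subnet_id

  safe_mode            = 0
  cluster_admin_secret = var.admin_secret

  # Log in to cluster nodes as root with a password; this replaces
  # node_public_cert_name, which would supply a key pair instead.
  node_password = var.node_password

  component_list {
    component_name = "Hadoop"
  }
}
```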

* `log_collection` - (Optional) Indicates whether logs are collected when cluster
installation fails. 0: not collected 1: collected The default value is 0. If
log_collection is set to 1, OBS buckets will be created to collect the MRS logs.
These buckets will be charged.
installation fails. 0: not collected; 1: collected. The default value is 1,
indicating that OBS buckets will be created and only used to collect logs that
record MRS cluster creation failures.

* `component_list` - (Required) Service component list.
* `component_list` - (Required) List of service components to be installed.
Structure is documented below.

* `add_jobs` - (Optional) You can submit a job when you create a cluster to
save time and use MRS easily. Only one job can be added.
* `add_jobs` - (Optional) Jobs can be submitted when a cluster is created.
Currently, only one job can be created. Structure is documented below.

* `tags` - (Optional) The key/value pairs to associate with the cluster.

The `component_list` block supports:

* `component_name` - (Required) Component name Currently, Hadoop, Spark, HBase,
Hive, Hue, Loader, Flume, Kafka and Storm are supported.
* `component_name` - (Required) Component name.
- MRS 2.1.0 supports: Presto, Hadoop, Spark, HBase, Hive, Tez, Hue, Loader, Flink, Impala, Kudu, Flume, Kafka, and Storm.
- MRS 1.9.2 supports: Presto, Hadoop, Spark, HBase, OpenTSDB, Hive, Hue, Loader, Tez, Flink, Alluxio, Ranger, Flume, Kafka, KafkaManager, and Storm.
- MRS 1.8.10 supports: Presto, Hadoop, Spark, HBase, OpenTSDB, Hive, Hue, Loader, Flink, Flume, Kafka, KafkaManager, and Storm.
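
For example, a cluster could combine several of the supported components (a fragment, using MRS 1.9.2 component names from the list above):

```hcl
# Fragment: one component_list block per component to install.
component_list {
  component_name = "Hadoop"
}

component_list {
  component_name = "Spark"
}

component_list {
  component_name = "Hive"
}
```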


The `add_jobs` block supports:
* `job_type` - (Required) Job type 1: MapReduce 2: Spark 3: Hive Script 4: HiveQL
(not supported currently) 5: DistCp, importing and exporting data (not supported
in this API currently). 6: Spark Script 7: Spark SQL, submitting Spark SQL statements
(not supported in this API currently). NOTE: Spark and Hive jobs can be added
to only clusters including Spark and Hive components.
* `job_type` - (Required) Job type code. 1: MapReduce; 2: Spark; 3: Hive Script; 4: HiveQL
(not supported currently); 5: DistCp, importing and exporting data (not supported
currently); 6: Spark Script; 7: Spark SQL, submitting Spark SQL statements
(not supported currently).
NOTE: Spark and Hive jobs can be added to only clusters including Spark and Hive components.

* `job_name` - (Required) Job name It contains only 1 to 64 letters, digits,
hyphens (-), and underscores (_). NOTE: Identical job names are allowed but
not recommended.
* `job_name` - (Required) Job name. It contains 1 to 64 characters. Only letters, digits,
hyphens (-), and underscores (_) are allowed.
NOTE: Identical job names are allowed but not recommended.

* `jar_path` - (Required) Path of the .jar file or .sql file for program execution
The parameter must meet the following requirements: Contains a maximum of 1023
characters, excluding special characters such as ;|&><'$. The address cannot
be empty or full of spaces. Starts with / or s3a://. Spark Script must end with
.sql; while MapReduce and Spark Jar must end with .jar. sql and jar are case-insensitive.

* `arguments` - (Optional) Key parameter for program execution The parameter
The parameter must meet the following requirements:
- Contains a maximum of 1,023 characters, excluding special characters such as ;|&><'$.
The parameter value cannot be empty or full of spaces.
- Files can be stored in HDFS or OBS. The path varies depending on the file system.
OBS: The path must start with s3a://. Files or programs encrypted by KMS are not supported.
HDFS: The path starts with a slash (/).
- Spark Script must end with .sql while MapReduce and Spark Jar must end with .jar.
sql and jar are case-insensitive.

* `arguments` - (Optional) Key parameter for program execution. The parameter
is specified by the function of the user's program. MRS is only responsible
for loading the parameter. The parameter contains a maximum of 2047 characters,
excluding special characters such as ;|&>'<$, and can be empty.
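
Taken together, a hedged sketch of a single Spark job submitted at creation time; the bucket, file names, and arguments are placeholders, and the `submit_job_once_cluster_run` and `shutdown_cluster` flags are described below:

```hcl
# Fragment: one Spark job (job_type = 2) submitted while the cluster is created.
add_jobs {
  job_type                    = 2
  job_name                    = "spark-demo-job"
  jar_path                    = "s3a://demo-bucket/program/demo.jar" # OBS paths start with s3a://
  arguments                   = "s3a://demo-bucket/input/"
  submit_job_once_cluster_run = true
  shutdown_cluster            = false
}
```
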
@@ -191,15 +187,14 @@ The `add_jobs` block supports:
;|&>'<$, and can be empty.

* `shutdown_cluster` - (Optional) Whether to delete the cluster after the jobs
are complete true: Yes false: No
are complete.

* `file_action` - (Optional) Data import and export import export
* `file_action` - (Optional) Data import and export. Valid values include: import, export.

* `submit_job_once_cluster_run` - (Required) true: A job is submitted when a
cluster is created. false: A job is submitted separately. The parameter is set
to true in this example.
* `submit_job_once_cluster_run` - (Required) Whether the job is submitted during the cluster
creation or after the cluster is created.

* `hql` - (Optional) HiveQL statement
* `hql` - (Optional) HiveQL statement.

* `hive_script_path` - (Optional) SQL program path. This parameter is needed
by Spark Script and Hive Script jobs only and must meet the following requirements:
@@ -211,30 +206,31 @@ The `add_jobs` block supports:

The following attributes are exported:

* `id` - Indicates the MRS cluster ID.
* `region` - See Argument Reference above.
* `available_zone_id` - See Argument Reference above.
* `available_zone_name` - Indicates the name of an availability zone.
* `billing_type` - See Argument Reference above.
* `data_center` - See Argument Reference above.
* `cluster_name` - See Argument Reference above.
* `cluster_version` - See Argument Reference above.
* `cluster_type` - See Argument Reference above.
* `master_node_num` - See Argument Reference above.
* `master_node_size` - See Argument Reference above.
* `core_node_num` - See Argument Reference above.
* `core_node_size` - See Argument Reference above.
* `available_zone_id` - See Argument Reference above.
* `cluster_name` - See Argument Reference above.
* `vpc_id` - See Argument Reference above.
* `subnet_id` - See Argument Reference above.
* `cluster_version` - See Argument Reference above.
* `cluster_type` - See Argument Reference above.
* `volume_type` - See Argument Reference above.
* `volume_size` - See Argument Reference above.
* `node_public_cert_name` - See Argument Reference above.
* `safe_mode` - See Argument Reference above.
* `cluster_admin_secret` - See Argument Reference above.
* `node_password` - See Argument Reference above.
* `node_public_cert_name` - See Argument Reference above.
* `log_collection` - See Argument Reference above.
* `vpc_id` - See Argument Reference above.
* `subnet_id` - See Argument Reference above.
* `component_list` - See Argument Reference below.
* `add_jobs` - See Argument Reference above.
* `tags` - See Argument Reference above.
* `order_id` - Order ID for creating clusters.
* `cluster_id` - Cluster ID.
* `available_zone_name` - Name of an availability zone.
* `instance_id` - Instance ID.
* `hadoop_version` - Hadoop version.
* `master_node_ip` - IP address of a Master node.
@@ -252,16 +248,16 @@ The following attributes are exported:
* `vnc` - URI address for remote login of the elastic cloud server.
* `fee` - Cluster creation fee, which is automatically calculated.
* `deployment_id` - Deployment ID of a cluster.
* `cluster_state` - Cluster status Valid values include: existing history starting
running terminated failed abnormal terminating rebooting shutdown frozen scaling-out
scaling-in scaling-error.
* `tenant_id` - Project ID.
* `cluster_state` - Cluster status. Valid values include: starting, running, terminating, terminated,
failed, abnormal, frozen, scaling-out, scaling-in.
* `create_at` - Cluster creation time.
* `update_at` - Cluster update time.
* `error_info` - Error information.
* `charging_start_time` - Time when charging starts.
* `remark` - Remarks of a cluster.

The component_list attributes:
* `component_name` - (Required) Component name Currently, Hadoop, Spark, HBase,
Hive, Hue, Loader, Flume, Kafka and Storm are supported.
The `component_list` block exports the following attributes:
* `component_id` - Indicates the component ID.
* `component_name` - Indicates the component name.
* `component_version` - Indicates the component version.
* `component_desc` - Indicates the component description.
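
Exported attributes can then be referenced elsewhere in a configuration; for instance (output names assumed, using the `cluster1` resource from the example above):

```hcl
# Read back a few exported attributes from the example cluster.
output "master_node_ip" {
  value = huaweicloud_mrs_cluster.cluster1.master_node_ip
}

output "cluster_state" {
  value = huaweicloud_mrs_cluster.cluster1.cluster_state
}
```
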
2 changes: 1 addition & 1 deletion go.mod
@@ -7,7 +7,7 @@ require (
github.com/hashicorp/errwrap v1.0.0
github.com/hashicorp/go-cleanhttp v0.5.1
github.com/hashicorp/terraform-plugin-sdk v1.13.0
github.com/huaweicloud/golangsdk v0.0.0-20201030072716-cb31520416ba
github.com/huaweicloud/golangsdk v0.0.0-20201103063720-d2a60b0922dd
github.com/jen20/awspolicyequivalence v0.0.0-20170831201602-3d48364a137a
github.com/mitchellh/go-homedir v1.1.0
github.com/smartystreets/goconvey v0.0.0-20190222223459-a17d461953aa // indirect
2 changes: 2 additions & 0 deletions go.sum
@@ -131,6 +131,8 @@ github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d h1:kJCB4vdITiW1eC1
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/huaweicloud/golangsdk v0.0.0-20201030072716-cb31520416ba h1:HYHfyOIFOG8Y3y1XZoiyFhsTUxkg56IxPn+D9ME60pE=
github.com/huaweicloud/golangsdk v0.0.0-20201030072716-cb31520416ba/go.mod h1:fcOI5u+0f62JtJd7zkCch/Z57BNC6bhqb32TKuiF4r0=
github.com/huaweicloud/golangsdk v0.0.0-20201103063720-d2a60b0922dd h1:RBbFW2weIB9TXSLu2nAqtW13T86TafHZxEOglqRbHc4=
github.com/huaweicloud/golangsdk v0.0.0-20201103063720-d2a60b0922dd/go.mod h1:fcOI5u+0f62JtJd7zkCch/Z57BNC6bhqb32TKuiF4r0=
github.com/jen20/awspolicyequivalence v0.0.0-20170831201602-3d48364a137a h1:FyS/ubzBR5xJlnJGRTwe7GUHpJOR4ukYK3y+LFNffuA=
github.com/jen20/awspolicyequivalence v0.0.0-20170831201602-3d48364a137a/go.mod h1:uoIMjNxUfXi48Ci40IXkPRbghZ1vbti6v9LCbNqRgHY=
github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=