
[FEATURE] Support Partition/Bucket/Sorting when creating Hive table by Trino #1510

Closed
Tracked by #1512
yuqi1129 opened this issue Jan 16, 2024 · 0 comments · Fixed by #1539
@yuqi1129 (Contributor):

Describe the feature

Support creating a Hive table with partitioning, bucketing, and sorting via Trino.
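For context, the Trino Hive connector expresses partitioning, bucketing, and sorting as table properties in the `WITH` clause. A sketch of what this feature request enables (table and column names are illustrative, not from the issue; `partitioned_by`, `bucketed_by`, `bucket_count`, and `sorted_by` are standard Trino Hive connector properties):

```sql
-- Illustrative only: a Hive table that is partitioned, bucketed, and
-- sorted within each bucket. In Trino's Hive connector, partition
-- columns must be the last columns in the column list.
CREATE TABLE employees (
    id   int,
    name varchar,
    dept varchar
)
WITH (
    partitioned_by = ARRAY['dept'],  -- Hive partition column
    bucketed_by    = ARRAY['id'],    -- hash-bucket rows on id
    bucket_count   = 8,
    sorted_by      = ARRAY['name']   -- sort rows within each bucket
);
```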

Motivation

See above

Describe the solution

See above

Additional context

No response

@jerryshao jerryshao added this to the Gravitino 0.4.0 milestone Jan 19, 2024
jerryshao added a commit that referenced this issue Jan 22, 2024
…rt order of Hive table created by Trino (#1539)

### What changes were proposed in this pull request?

This PR allows creating a Hive table with partitioning, distribution, and sort
order via Trino.

### Why are the changes needed?

It's a crucial feature of the Trino connector. 

Fix: #1510 

### Does this PR introduce _any_ user-facing change?

Users can create a Hive table in Trino with:
```sql
create table t10 (id int, name varchar) with (partitioned_by = ARRAY['name'], bucketed_by = ARRAY['id'], bucket_count = 50);
```
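The PR title also covers sort order, which the example above omits. Presumably a sorted, bucketed table can be created in the same way (a hedged sketch, not taken from the PR; `sorted_by` is the Trino Hive connector property for sorting within buckets and requires `bucketed_by`):

```sql
-- Sketch only: add within-bucket sort order via sorted_by.
create table t12 (id int, name varchar)
with (bucketed_by = ARRAY['id'], bucket_count = 50, sorted_by = ARRAY['name']);
```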
Or load a Hive table created elsewhere and inspect it in Trino:
```
trino:db2> show create table t11;
                                   Create Table
----------------------------------------------------------------------------------
 CREATE TABLE "test.hive_catalog".db2.t11 (
    id integer,
    name varchar(65535)
 )
 COMMENT ''
 WITH (
    bucket_count = 50,
    bucketed_by = ARRAY['id'],
    input_format = 'org.apache.hadoop.mapred.TextInputFormat',
    location = 'hdfs://localhost:9000/user/hive/warehouse/db2.db/t11',
    output_format = 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat',
    partitioned_by = ARRAY['name'],
    serde_lib = 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe',
    serde_name = 't11',
    table_type = 'MANAGED_TABLE'
 )
(1 row)
```

### How was this patch tested?

Added integration tests.

---------

Co-authored-by: Jerry Shao <[email protected]>
mchades pushed a commit to mchades/gravitino that referenced this issue Jan 24, 2024
…and sort order of Hive table created by Trino (apache#1539)