
[Spark] Fix inaccurate COLUMN_SIZE reported for decimal fields in the Spark engine's JDBC metadata #5750

Closed
wants to merge 3 commits

Conversation

waywtdcc
Contributor

🔍 Description

Issue References 🔗

This pull request fixes #

Describe Your Solution 🔧

Fix the inaccurate COLUMN_SIZE reported for decimal fields by the Spark engine's JDBC metadata. Currently, the column size lookup falls back to a default size for decimal fields, which is wrong: for decimal(20,3) the reported column size is 16, while the actual precision is 20.
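To illustrate the kind of fix described above, here is a minimal standalone sketch (not Kyuubi's actual implementation; the class name, regex helper, and default constant are all hypothetical) that derives the JDBC COLUMN_SIZE from a decimal type string instead of returning a fixed default. For DECIMAL columns, COLUMN_SIZE should report the declared precision, e.g. 20 for decimal(20,3):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DecimalColumnSize {
    // Matches type strings like "decimal(20,3)" and captures precision and scale.
    private static final Pattern DECIMAL =
        Pattern.compile("decimal\\((\\d+),\\s*(\\d+)\\)");

    // Illustrative stand-in for the inaccurate default size the PR describes.
    static final int DEFAULT_COLUMN_SIZE = 16;

    // Returns the declared precision for decimal types; otherwise the default.
    static int columnSize(String typeName) {
        Matcher m = DECIMAL.matcher(typeName.toLowerCase());
        if (m.matches()) {
            return Integer.parseInt(m.group(1)); // precision, e.g. 20 for decimal(20,3)
        }
        return DEFAULT_COLUMN_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(columnSize("decimal(20,3)")); // 20
        System.out.println(columnSize("string"));        // 16
    }
}
```

The point of the sketch is only the contract: a decimal(p,s) column should surface p as COLUMN_SIZE, rather than an engine-wide fallback.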

Types of changes 🔖

  • Bugfix (non-breaking change which fixes an issue)

Test Plan 🧪

Behavior Without This Pull Request ⚰️

Behavior With This Pull Request 🎉

Related Unit Tests


@yaooqinn yaooqinn modified the milestones: v1.9.0, v1.7.4 Nov 22, 2023
@yaooqinn
Member

Nice catch! Hi @waywtdcc, can we restore the checklist in the PR description?

@codecov-commenter

codecov-commenter commented Nov 22, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (a23b16a) 61.36% compared to head (2d288f5) 61.35%.
Report is 12 commits behind head on master.

Additional details and impacted files
@@             Coverage Diff              @@
##             master    #5750      +/-   ##
============================================
- Coverage     61.36%   61.35%   -0.01%     
  Complexity       23       23              
============================================
  Files           607      607              
  Lines         35897    35942      +45     
  Branches       4923     4933      +10     
============================================
+ Hits          22027    22052      +25     
- Misses        11479    11506      +27     
+ Partials       2391     2384       -7     

☔ View full report in Codecov by Sentry.

@waywtdcc waywtdcc requested a review from yaooqinn November 23, 2023 00:21
Member

@yaooqinn yaooqinn left a comment


Hi, please restore the PR description

@waywtdcc waywtdcc requested a review from yaooqinn November 24, 2023 06:36
@waywtdcc
Contributor Author

Hi, please restore the PR description

ok

@yaooqinn yaooqinn closed this in eb9e88b Nov 29, 2023
yaooqinn pushed a commit that referenced this pull request Nov 29, 2023
…IZE in the decimal field jdbc of spark engine

# 🔍 Description
## Issue References 🔗

This pull request fixes #

## Describe Your Solution 🔧

Fix the inaccurate COLUMN_SIZE reported for decimal fields by the Spark engine's JDBC metadata. Currently, the column size lookup falls back to a default size for decimal fields, which is wrong: for decimal(20,3) the reported column size is 16, while the actual precision is 20.

## Types of changes 🔖

- [X] Bugfix (non-breaking change which fixes an issue)

## Test Plan 🧪

#### Behavior Without This Pull Request ⚰️

#### Behavior With This Pull Request 🎉

#### Related Unit Tests

---

Closes #5750 from waywtdcc/support_spark_decimal2.

Closes #5750

2d288f5 [waywtdcc] [Spark] Fix the inaccurate issue of obtaining COLUMN_SIZE in the decimal field jdbc of spark engine
4286354 [waywtdcc] Support flink engine under the select statement, the results can be read in a stream
e5b74b0 [waywtdcc] Support flink engine under the select statement, the results can be read in a stream

Authored-by: waywtdcc <[email protected]>
Signed-off-by: Kent Yao <[email protected]>
(cherry picked from commit eb9e88b)
Signed-off-by: Kent Yao <[email protected]>
yaooqinn pushed a commit that referenced this pull request Nov 29, 2023
…IZE in the decimal field jdbc of spark engine

@yaooqinn
Member

Merged to master, 1.8, 1.7

pan3793 added a commit that referenced this pull request Feb 22, 2024
…se notes

# 🔍 Description
## Issue References 🔗

Currently, we write release notes by hand from scratch; some of that mechanical, repetitive work can be simplified by scripting.

## Describe Your Solution 🔧

Adds a script to simplify the process of creating release notes.

Note: it only simplifies parts of the process; the release manager still needs to tune the output by hand.

## Types of changes 🔖

- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

```
RELEASE_TAG=v1.8.1 PREVIOUS_RELEASE_TAG=v1.8.0 build/release/pre_gen_release_notes.py
```

```
$ head build/release/commits-v1.8.1.txt
[KYUUBI #5981] Deploy Spark Hive connector with Scala 2.13 to Maven Central
[KYUUBI #6058] Make Jetty server stop timeout configurable
[KYUUBI #5952][1.8] Disconnect connections without running operations after engine maxlife time graceful period
[KYUUBI #6048] Assign serviceNode and add volatile for variables
[KYUUBI #5991] Error on reading Atlas properties composed of multi values
[KYUUBI #6045] [REST] Sync the AdminRestApi with the AdminResource Apis
[KYUUBI #6047] [CI] Free up disk space
[KYUUBI #6036] JDBC driver conditional sets fetchSize on opening session
[KYUUBI #6028] Exited spark-submit process should not block batch submit queue
[KYUUBI #6018] Speed up GetTables operation for Spark session catalog
```

```
$ head build/release/contributors-v1.8.1.txt
* Shaoyun Chen        -- [KYUUBI #5857][KYUUBI #5720][KYUUBI #5785][KYUUBI #5617]
* Chao Chen           -- [KYUUBI #5750]
* Flyangz             -- [KYUUBI #5832]
* Pengqi Li           -- [KYUUBI #5713]
* Bowen Liang         -- [KYUUBI #5730][KYUUBI #5802][KYUUBI #5767][KYUUBI #5831][KYUUBI #5801][KYUUBI #5754][KYUUBI #5626][KYUUBI #5811][KYUUBI #5853][KYUUBI #5765]
* Paul Lin            -- [KYUUBI #5799][KYUUBI #5814]
* Senmiao Liu         -- [KYUUBI #5969][KYUUBI #5244]
* Xiao Liu            -- [KYUUBI #5962]
* Peiyue Liu          -- [KYUUBI #5331]
* Junjie Ma           -- [KYUUBI #5789]
```
---

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

**Be nice. Be informative.**

Closes #6074 from pan3793/release-script.

Closes #6074

3d5ec20 [Cheng Pan] credits
1765279 [Cheng Pan] Add a script to simplify the process of creating release notes

Authored-by: Cheng Pan <[email protected]>
Signed-off-by: Cheng Pan <[email protected]>
zhaohehuhu pushed a commit to zhaohehuhu/incubator-kyuubi that referenced this pull request Mar 21, 2024
… release notes

beryllw pushed a commit to beryllw/incubator-kyuubi that referenced this pull request Jun 7, 2024
…LUMN_SIZE in the decimal field jdbc of spark engine
