
Make the https:// optional
Fokko committed May 6, 2021
1 parent 27409c4 commit e88f0f7
Showing 2 changed files with 10 additions and 4 deletions.
3 changes: 2 additions & 1 deletion CHANGELOG.md
@@ -7,11 +7,12 @@
### Under the hood

- Parse information returned by `list_relations_without_caching` macro to speed up catalog generation ([#93](https://github.com/fishtown-analytics/dbt-spark/issues/93), [#160](https://github.com/fishtown-analytics/dbt-spark/pull/160))
+ - More flexible host passing, https:// can be omitted ([#153](https://github.com/fishtown-analytics/dbt-spark/issues/153))

### Contributors
- [@friendofasquid](https://github.com/friendofasquid) ([#159](https://github.com/fishtown-analytics/dbt-spark/pull/159))
- [@franloza](https://github.com/franloza) ([#160](https://github.com/fishtown-analytics/dbt-spark/pull/160))
+ - [@Fokko](https://github.com/Fokko) ([#165](https://github.com/fishtown-analytics/dbt-spark/pull/165))

## dbt-spark 0.19.1 (Release TBD)

11 changes: 8 additions & 3 deletions dbt/adapters/spark/connections.py
@@ -254,7 +254,7 @@ class SparkConnectionManager(SQLConnectionManager):
SPARK_CLUSTER_HTTP_PATH = "/sql/protocolv1/o/{organization}/{cluster}"
SPARK_SQL_ENDPOINT_HTTP_PATH = "/sql/1.0/endpoints/{endpoint}"
  SPARK_CONNECTION_URL = (
-     "https://{host}:{port}" + SPARK_CLUSTER_HTTP_PATH
+     "{host}:{port}" + SPARK_CLUSTER_HTTP_PATH
  )

@contextmanager
@@ -320,8 +320,14 @@ def open(cls, connection):
cls.validate_creds(creds, ['token', 'host', 'port',
'cluster', 'organization'])

+ # Prepend https:// if it is missing
+ if creds.host.startswith('https://'):
+     host = creds.host
+ else:
+     host = 'https://' + creds.host

conn_url = cls.SPARK_CONNECTION_URL.format(
-     host=creds.host,
+     host=host,
port=creds.port,
organization=creds.organization,
cluster=creds.cluster
@@ -350,7 +356,6 @@ def open(cls, connection):
kerberos_service_name=creds.kerberos_service_name) # noqa
handle = PyhiveConnectionWrapper(conn)
elif creds.method == SparkConnectionMethod.ODBC:
- http_path = None
if creds.cluster is not None:
required_fields = ['driver', 'host', 'port', 'token',
'organization', 'cluster']
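The change above can be exercised in isolation. The sketch below reproduces the scheme-normalization logic and the URL template from the diff as a standalone snippet; the helper name `normalize_host` and the example host, port, organization, and cluster values are hypothetical, chosen only to illustrate the behavior.

```python
# Constants copied from the diff; values below them are illustrative only.
SPARK_CLUSTER_HTTP_PATH = "/sql/protocolv1/o/{organization}/{cluster}"
SPARK_CONNECTION_URL = "{host}:{port}" + SPARK_CLUSTER_HTTP_PATH


def normalize_host(host: str) -> str:
    # Prepend https:// only when the scheme is missing, mirroring the
    # if/else added in SparkConnectionManager.open.
    if host.startswith('https://'):
        return host
    return 'https://' + host


# With or without the scheme, the resulting connection URL is the same:
conn_url = SPARK_CONNECTION_URL.format(
    host=normalize_host('example.cloud.databricks.com'),
    port=443,
    organization='0',
    cluster='abc-123',
)
# → https://example.cloud.databricks.com:443/sql/protocolv1/o/0/abc-123
```

Passing `'https://example.cloud.databricks.com'` instead would yield the identical URL, which is the point of the commit: the scheme becomes optional in the profile.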
