
JS Extension API #463

Merged
merged 1 commit into master from emtwo/datasource-url-extensions
Aug 24, 2018

Conversation


@emtwo emtwo commented Jul 18, 2018

No description provided.


jezdez commented Jul 20, 2018

As mentioned in our chat, I think this is a great placeholder, and we can replace it with a more widely accepted bridge while working on the bulk of a frontend extension API.

Python 3.7 has a new module called importlib.resources, which ports setuptools' old pkg_resources "Basic Resource Access" API to the Python stdlib. In short:

In our terminology, a resource is a file that is located within an importable Python package. Resources can live on the file system, in a zip file, or in any place that has a loader supporting the appropriate API for reading resources. Directories are not resources.

For future compatibility we should base the code that gets the files from specific Redash extensions on the backport for Python < 3.7 that was released to PyPI as importlib_resources.

In addition, I think the pattern we should target is that individual Python Redash extensions can ship a js directory (or web, since it can also contain non-.js files?), so that the mechanism to "build" the bundles would be:

  1. iterate over all extensions using the entrypoints to get the location of web files
  2. configure webpack to use the locations of all extensions during the build process (by writing a config file (if webpack allows that?), symlinking, or (at worst) copying the files).

I don't think copying files from the Python package into the client app directory should be something we do; maybe we can steal the method pywebpack uses to cross those two worlds.
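The two steps above could be sketched roughly like this. The entry point group name redash.extensions, the web directory name, and the output filename are illustrative assumptions, not an agreed-upon API:

```python
# Rough sketch of the two-step bundle mechanism. The group name
# "redash.extensions" is a hypothetical choice for illustration.
import json
import os
import pkg_resources


def collect_extension_web_dirs(group="redash.extensions"):
    """Step 1: find each installed extension's web/ directory."""
    web_dirs = []
    for entry_point in pkg_resources.iter_entry_points(group):
        # resource_filename also extracts zipped packages to a real
        # directory that webpack can read from.
        path = pkg_resources.resource_filename(entry_point.module_name, "web")
        if os.path.isdir(path):
            web_dirs.append(path)
    return web_dirs


def write_webpack_paths(web_dirs, filename="extension-paths.json"):
    """Step 2 (one variant): hand the locations to webpack via a file."""
    with open(filename, "w") as fh:
        json.dump(web_dirs, fh)
```

The symlinking and copying variants discussed below would replace `write_webpack_paths` with a filesystem step.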


emtwo commented Jul 27, 2018

@jezdez, I’ve tried your suggestions and there are some things that worked and some didn’t. Here’s a summary:

Problem with using importlib_resources:

  • My assumption is that you meant we use this for referencing the path where the extensions are installed (e.g. instead of using redash_stmo.__file__).
  • The most useful function I found for this was importlib_resources.path(). However, this function only works for accessing files in the top-level directory of a package or in subdirectories with __init__.py files, so I could not use it to access a folder of web content.
  • Its predecessor, pkg_resources.resource_filename(), worked well for this purpose though: http://importlib-resources.readthedocs.io/en/latest/migration.html#pkg-resources-resource-filename
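A minimal illustration of that workaround, using the stdlib email package as a stand-in for an extension package such as redash_stmo (which may not be installed everywhere):

```python
import os
import pkg_resources

# pkg_resources.resource_filename() resolves a filesystem path for any
# resource, including a subdirectory of a package, which is what the
# web-content folder needs. "email" stands in for an extension package.
mime_dir = pkg_resources.resource_filename("email", "mime")
print(os.path.isdir(mime_dir))
```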

Problem with directly running webpack on the extension packages where they were installed:

  • This code does not have access to any of the node modules that are already installed in redash
  • This might work if the packages shipped with an entirely different webpack setup of their own, but this would duplicate a lot of module installations and make porting to extensions more complex.

Problems with symlinking:

  • Docker cannot translate symlinks from container to host. E.g. when I symlinked the js/html package content from where it's installed to client/app on the container, the extensions folder didn't exist on the host, so running npm run build locally was broken.
  • Similar to the problem with directly running webpack on extensions above, webpack follows the symlink in the container and discovers that there are missing node modules.

Given the problems above, the latest code I’ve pushed works locally and addresses the following issues:

  • A new client/app/extensions directory is created to copy the extensions into, using the environment variable + alias method described by @arikfr in getredash/redash#2565 ("Add front-end extension capability", comment). This alleviates some concerns about overwriting content in client/app/components if we were to copy over files with the same names.
  • I’ve used setup.py entry points in the same way Pywebpack does in order to expose different paths for different bundles.
  • Using pkg_resources.resource_filename() to ensure safe access to the extensions' resource files.
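For reference, a hypothetical setup.py fragment showing the pywebpack-style entry points pattern; the group name webpack.bundles and the module path are illustrative only, not the actual redash-stmo layout:

```python
# Illustrative packaging config, not the real redash-stmo setup.py.
from setuptools import setup, find_packages

setup(
    name="redash-stmo",
    packages=find_packages(),
    entry_points={
        # Hypothetical group name; each entry point exposes the module
        # whose package directory contains a bundle's web assets.
        "webpack.bundles": [
            "extensions = redash_stmo.bundles",
        ],
    },
)
```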

I also want to point out that pywebpack itself works by copying bundles into the directory where they will be built, so unfortunately I didn't see any novel ideas that could help us there. Here is the code path:

  1. A call to create() on a new project calls collect(): https://github.com/inveniosoftware/pywebpack/blob/master/pywebpack/project.py#L210
  2. collect() calls run() on storage_cls(): https://github.com/inveniosoftware/pywebpack/blob/master/pywebpack/project.py#L210
  3. run() uses the copy command from shutil: https://github.com/inveniosoftware/pywebpack/blob/master/pywebpack/storage.py#L57
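For context, that shutil-based collect step boils down to something like this sketch (not pywebpack's actual code; dirs_exist_ok needs Python 3.8+):

```python
# Sketch of a copy-based "collect" step in the spirit of pywebpack's
# storage.py: copy each bundle directory's contents into the client
# build directory. Names and structure are illustrative.
import os
import shutil


def collect_bundles(bundle_dirs, destination):
    """Copy the contents of every bundle directory into destination."""
    os.makedirs(destination, exist_ok=True)
    for bundle_dir in bundle_dirs:
        for name in os.listdir(bundle_dir):
            src = os.path.join(bundle_dir, name)
            dst = os.path.join(destination, name)
            if os.path.isdir(src):
                shutil.copytree(src, dst, dirs_exist_ok=True)
            else:
                shutil.copy2(src, dst)
```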

@emtwo emtwo force-pushed the emtwo/datasource-url-extensions branch from ddc55bc to de53afe Compare July 27, 2018 17:16
@emtwo emtwo force-pushed the emtwo/datasource-url-extensions branch 2 times, most recently from 5d977f3 to 5133c6b Compare July 31, 2018 14:25

emtwo commented Jul 31, 2018

@arikfr, as you can see, we're still working through some kinks with how this might work, but I'm cc'ing you early in the process in case you have some insights that we might be missing.

Here is a link to where the extension code lives: https://github.com/mozilla/redash-stmo/pull/6/files

@jezdez jezdez left a comment

rwc+ ✨

buildscript.py Outdated

# Make a directory for extensions and set it as an environment variable
# to be picked up by webpack.
EXTENSIONS_RELATIVE_PATH = "client/app/extensions"

Let's use os.path.join('client', 'app', 'extensions') instead here for compat reasons and less chance of breakage on weird platforms.

buildscript.py Outdated
# Make a directory for extensions and set it as an environment variable
# to be picked up by webpack.
EXTENSIONS_RELATIVE_PATH = "client/app/extensions"
EXTENSIONS_DIRECTORY = os.path.join(os.getcwd(), EXTENSIONS_RELATIVE_PATH)

Safer is to use os.path.join(os.path.dirname(__file__), EXTENSIONS_RELATIVE_PATH) in case the buildscript is called from a different working dir.

Author

I think this still doesn't quite work because os.path.dirname(__file__) returns /app/bin, where the script lives, which is then joined with EXTENSIONS_RELATIVE_PATH, giving /app/bin/client/app/extensions, which isn't what we want either. Perhaps in the Dockerfile we can do something like ENV ROOT_DIR=/app and use that environment variable in the buildscript instead. What do you think?
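A sketch of that ROOT_DIR idea (the variable name is just the suggestion above, not an existing Redash convention): prefer the environment variable set in the Dockerfile, falling back to the parent of the directory containing the script:

```python
# Hypothetical sketch: resolve the extensions directory from ROOT_DIR
# (e.g. ENV ROOT_DIR=/app in the Dockerfile), falling back to the
# parent of the bin/ directory where the script lives.
import os

EXTENSIONS_RELATIVE_PATH = os.path.join("client", "app", "extensions")


def extensions_directory(script_path):
    root = os.environ.get(
        "ROOT_DIR",
        os.path.dirname(os.path.dirname(os.path.abspath(script_path))),
    )
    return os.path.join(root, EXTENSIONS_RELATIVE_PATH)
```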


Ah, right, this needs to use the parent dir when we move the file into bin/:
os.path.join(os.path.dirname(os.path.dirname(__file__)), EXTENSIONS_RELATIVE_PATH)

buildscript.py Outdated
@@ -0,0 +1,34 @@
import os

Let's move the script to bin/, rename to just bundle-extensions, make it executable and add a shebang line as the first line:

#!/usr/bin/env python

That way the script is self-contained and can be called from npm.

buildscript.py Outdated

copy_tree(content_folder, destination)

call("npm run build", shell=True)

Let's remove the call to npm run build here and instead add a new item to the scripts section in the package.json file called prebuild with the content bin/bundle-extensions so it's automatically called before the build script in package.json.

For the watch script in package.json we could also add a prewatch step that calls bin/bundle-extensions with a --link option which would create symlinks instead of copies. I think. What do you think?
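For illustration, the scripts section suggested above might look like this (the build and watch commands shown are placeholders for whatever Redash's package.json actually runs):

```json
{
  "scripts": {
    "prebuild": "bin/bundle-extensions",
    "build": "webpack",
    "prewatch": "bin/bundle-extensions --link",
    "watch": "webpack --watch"
  }
}
```

npm runs `pre<name>` scripts automatically before `<name>`, which is what makes this wiring work without touching CI.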

Author

There are a few problems I came across here with this:

  1. npm run build is called from docker-compose build when building the Dockerfile, and this adds a new /app/client/app/extensions directory in the container. However, the docker-compose.yml file then overrides this by mounting the volume .:/app, since locally the extensions folder has not yet been created. This results in the extensions folder not being visible as expected after running docker-compose build.

  2. npm run build won't be callable locally because bundle-extensions requires redash-stmo to be installed so that its contents can be copied over.

For these two reasons I made this buildscript a separate executable (https://github.com/mozilla/redash/pull/463/files#diff-76ffdf10e3a9d8396a3951f272eaf403R75) that is not run from the Dockerfile or locally.

Is there perhaps some Docker magic that you know of to get around this?

In terms of prewatch, I'm not quite sure what the purpose would be. Could you elaborate a bit? It seemed that once the client/app/extensions directory exists, changes in it will already be captured by watch.

@emtwo emtwo Aug 14, 2018

I think this might work if the docker-compose.yml command field is used to run this buildscript as well as the Dockerfile. I will try that.

Edit: This still doesn't quite work because /app/client/app/extensions will not be available until docker-compose up is called, which could happen after npm run build, so the new extensions directory will not be built. We could add npm run build to the command in docker-compose.yml, but this would make each call to docker-compose up very slow.

@jezdez jezdez Aug 16, 2018

Okay, to summarize our discussion just now: the problem you hinted at in 1. and 2. is basically the result of an inconsistency in the build environment between when the image is built (calling npm run build) and when a developer updates the build from outside the container based on that image.

On top of that, docker-compose overlays the folder that is created inside the image when mounting the working directory over it. As you correctly said, that's why Redash asks developers to run npm run build from the host machine, since that's the only place that has access to the build directory during development.

A strategy to solve that inconsistency between image and development (or: disparity between development and production environment) is:

  1. move the folders that are used by both outside the path where docker-compose is mounting the working directory

    For ATMO we solved this by creating a new separate directory and installing the npm modules there, and then running the build process both when the image is created and during development.

  2. run all build processes (npm build and bundle-extensions) inside the container for both image creation and development

  3. provide helpers such as a script that can be called from the host machine to trigger the container-side build process to help counteract the impact on the development workflow

So I think we only need /opt/npm for the NPM modules and then provide a docker entrypoint command that provides the ability to run bundle-extensions from the host machine via docker-compose, e.g. what the user would call to build the bundle including the extensions: docker-compose run server bundle.
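A rough docker-compose sketch of that layout (service name, paths, and command are assumptions, not the final config); the anonymous volume keeps the image's /opt/npm contents container-side even though the working directory is mounted over /app:

```yaml
services:
  server:
    build: .
    volumes:
      - .:/app        # host working directory mounted over the image's copy
      - /opt/npm      # anonymous volume: NPM modules stay container-side
# Developers would then trigger the container-side build from the host:
#   docker-compose run server bundle
```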

circle.yml Outdated
@@ -7,6 +7,7 @@ machine:
6.9.1
dependencies:
override:
- python buildscript.py

No extra need to call this here once we call bin/bundle-extensions from package.json.

@emtwo emtwo force-pushed the emtwo/datasource-url-extensions branch from ac95d9a to a306cc9 Compare August 16, 2018 17:53
@emtwo emtwo changed the title [WIP] Datasource link extension. Datasource link extension API Aug 16, 2018
@emtwo emtwo changed the title Datasource link extension API Extension API Aug 16, 2018
@emtwo emtwo changed the title Extension API JS Extension API Aug 16, 2018
@emtwo emtwo force-pushed the emtwo/datasource-url-extensions branch 4 times, most recently from 0f98fbd to d92556c Compare August 21, 2018 19:34
@emtwo
Copy link
Author

emtwo commented Aug 21, 2018

@jezdez So it turns out that telemetry-analysis-service is slightly different because it does not use webpack. webpack.config.js seemed to require some modifications, and it was still not trivial to get webpack to build and work correctly from a different directory. Instead of spending too much time on this now, I propose making it a separate issue from the extensions work, since I think this PR is not affected by it.

However, I've added a Makefile with a few useful commands. Most importantly make build will both bundle the extensions and run npm run build such that the outputs of both are available on the container for both image creation and development.

In case it's not clear, the reason I called the npm script bundle and not prebuild is that bundle can only be run inside the container, while build is then run on the host.

Now building looks like this:

  1. make compose_build
  2. make create_database
  3. make build

@emtwo emtwo force-pushed the emtwo/datasource-url-extensions branch from d92556c to 46aaefe Compare August 21, 2018 20:01
Makefile Outdated
docker-compose run server tests

build: bundle
npm run build

These tasks need to be marked as .PHONY to prevent Make from skipping over the steps in case it finds files with the task names, e.g.:

.PHONY: build bundle compose_build create_database test test_db 

# ...


Any chance to add a clean task? That's a common task added to Makefiles and is often super useful to blow away everything.
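A hypothetical clean task in the same Makefile style (the exact paths to blow away are assumptions about the Redash layout):

```makefile
.PHONY: clean

clean:
	rm -rf node_modules client/dist client/app/extensions
```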

@jezdez jezdez left a comment

Nicely done!

@emtwo emtwo force-pushed the emtwo/datasource-url-extensions branch from 46aaefe to 46fbb79 Compare August 24, 2018 13:02
@emtwo emtwo merged commit 9424dad into master Aug 24, 2018
@jezdez jezdez deleted the emtwo/datasource-url-extensions branch February 25, 2019 11:23