
Unify user session lifetime configuration #11745

Closed

Conversation

@michalmisiewicz (Contributor) commented on Oct 22, 2020

In the current version of Airflow, the user session lifetime can be configured with the `session_lifetime_days` and `force_log_out_after` options. In practice only `session_lifetime_days` has an impact on the session lifetime, and it is limited to whole-day values. In this PR I have simplified the user session lifetime configuration.

@boring-cyborg (bot) added the area:webserver (Webserver related Issues) label on Oct 22, 2020
@mik-laj (Member) commented on Oct 22, 2020

@tooptoop4 @zacharya19 Can you look at it?

Review comment on tests/www/test_app.py (outdated, resolved)
@zacharya19 (Contributor) left a comment


LGTM

Review comment on airflow/www/app.py (outdated, resolved)
```diff
@@ -66,8 +66,16 @@ def create_app(config=None, testing=False, app_name="Airflow"):
     flask_app = Flask(__name__)
     flask_app.secret_key = conf.get('webserver', 'SECRET_KEY')

-    session_lifetime_days = conf.getint('webserver', 'SESSION_LIFETIME_DAYS', fallback=30)
-    flask_app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(days=session_lifetime_days)
+    if conf.has_option('webserver', 'SESSION_LIFETIME_DAYS'):
```
@mik-laj (Member) commented:

I think we should not warn users about configuration options that no longer work if the user already has the correct configuration options for Airflow 2.0. This will allow us to use one configuration file for both Airflow 1.10 and 2.0. This is essential if users want to use the Helm Chart and other generic deployment tools.

So I propose that you check whether there is a SESSION_LIFETIME_MINUTES option in the config file. If so, use it.
If it is not there, we should display a deprecation warning and try to maintain backward compatibility. In this case, that means the warning should state that the SESSION_LIFETIME_DAYS option has been renamed to SESSION_LIFETIME_MINUTES, that the unit has changed from days to minutes, and that the FORCE_LOG_OUT_AFTER option has been removed.

When this change has been merged, please create a ticket requesting a new rule for airflow upgrade-check.
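
A rough sketch of the proposed fallback logic (the function name, warning text, and conversion are illustrative, not the final implementation):

```python
from datetime import timedelta
import warnings

from airflow.configuration import conf


def get_session_lifetime_config():
    """Prefer the new minutes-based option, fall back to the deprecated days-based one."""
    if conf.has_option('webserver', 'SESSION_LIFETIME_MINUTES'):
        minutes = conf.getint('webserver', 'SESSION_LIFETIME_MINUTES')
    else:
        warnings.warn(
            "The 'session_lifetime_days' and 'force_log_out_after' options are deprecated; "
            "please set 'session_lifetime_minutes' in the [webserver] section instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        days = conf.getint('webserver', 'SESSION_LIFETIME_DAYS', fallback=30)
        minutes = days * 24 * 60
    return timedelta(minutes=minutes)
```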

@michalmisiewicz (Contributor, Author) commented:

@mik-laj I've updated the code

@michalmisiewicz (Contributor, Author) commented:

@mik-laj are we ready to go?

ephraimbuddy and others added 22 commits October 25, 2020 22:35
First time preparing backports after converting scripts to
also support regular providers. Some small bugs were found
and fixed.
* Add uSmart Securities to the INTHEWILD.md

* change airflow dag to airflow dags in 2.0
The tests were not running for a few days because the
dependency (needs) was missing - it was needed to make the
selective checks work after the recent refactor in apache#11656.
…che#11854)

The semver module doesn't like Python version specifiers such as `0.0.2a1`
-- since the packaging module is already a dependency via setuptools, and is what
the Python ecosystem uses for version handling, it makes sense to use
it.
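
For illustration only (this snippet is not from the commit): packaging understands PEP 440 pre-releases such as `0.0.2a1`, which strict SemVer parsers reject.

```python
from packaging.version import Version

version = Version("0.0.2a1")
print(version.is_prerelease)        # True
print(version < Version("0.0.2"))   # True: the pre-release sorts before the final release
```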
…1835)

With this change we attempt to better diagnose some occasional
network docker-compose issues that have been plaguing us after
we solved or worked around other CI-related issues. Sometimes
the docker compose jobs fail when checking whether a container is
up and running, with either of two errors:

 * `forward host lookup failed: Unknown host`
 * `DNS fwd/rev mismatch`

Usually this happens in the rabbitMQ and openldap containers.

Both indicate a problem with the DNS of the docker engine, or maybe
some remnants of a previous docker run that do not allow us
to start those containers.

This change introduces a few improvements:

* added --volumes to the `docker system prune` command, which might
  clean up some anonymous volumes left by the containers between
  runs

* removed the docker-compose down --remove-orphans command
  after failure, as we currently always run it anyway a few
  lines earlier (before the test). As a result, our mechanism of
  logging container logs after a failure will likely give us more
  information in case the root cause is the rabbitmq or openldap
  container failing to start

* increased the number of tries to 5 in case of failed containers.
Scheduler HA uses Serialized DAGs and hence they are a strict
requirement for 2.0.

It also has performance benefits for the Webserver and so should
be used by default anyway.

Task execution on workers will continue to use the actual files for execution.

Scheduler, Experimental API and Webserver will read the DAGs from DB using
`DagBag(read_dags_from_db=True)`
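
A minimal sketch of that pattern; the keyword argument comes from this commit message, while the surrounding usage and the DAG id are illustrative:

```python
from airflow.models import DagBag

dagbag = DagBag(read_dags_from_db=True)  # load serialized DAGs from the metadata DB
dag = dagbag.get_dag("example_dag_id")   # hypothetical DAG id
```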
`PostgresSQL` -> `PostgreSQL`
This change is needed to implement the optimisation of running only a
limited version of the matrix tests for PRs that are not yet
approved by committers.
francescomucio and others added 28 commits October 30, 2020 14:39
This was merged in apache#11472, but for some reason that didn't run tests so
the failure wasn't noticed
This was merged in apache#11472, but for some reason that didn't run tests so
the failure wasn't noticed.

a7ad204 fixed 1 error but the docs error was missed
* Fix issue rendering k8s V1Pod

Adds a return statement for the JSON serialization of k8s.V1Pod, which
previously caused errors in the UI

* fix task_instances
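
A generic illustration of the bug pattern described in this commit message; the helper name and structure are assumptions, not the actual Airflow code:

```python
from kubernetes.client import models as k8s


def json_default(obj):
    """Fallback serializer for objects that json.dumps cannot handle natively."""
    if isinstance(obj, k8s.V1Pod):
        # The bug class being fixed: the serialized value was computed but never
        # returned, so callers received None and the UI failed to render the pod.
        return obj.to_dict()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")
```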
Static checks are failing because of a bad merge to master.
…#11866)

* Conserve horizontal space by adding hide/reveal "links" in DAGs table

* Reverse the order of links to reduce mouse distance for most popular
This is an odd one -- in making a change in another file, this started
erroring - I guess it gets confused about the fork, but I'm not sure
what changed to cause this to become a problem.
Previously it looked like this

    INFO - Marking task as SUCCESS.dag_id=scenario1_case1_1_1, ...

Now it looks like

    INFO - Marking task as SUCCESS. dag_id=scenario1_case1_1_1, ...
Right now, when we are using the CLI parser in LocalTaskJob and
executor, having to re-create the parser each time adds a noticeable
overhead for quick tasks -- by caching this we save 0.07s

(I know that doesn't sound like much, but for an `echo true`
BashOperator task that takes it from 0.25s to 0.19s - i.e. it cuts the
time down by about 20%!)
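
A sketch of the caching idea; the decorator approach and parser contents are assumptions, not necessarily the exact change made here:

```python
import argparse
from functools import lru_cache


@lru_cache(maxsize=None)
def get_parser() -> argparse.ArgumentParser:
    # Building the full CLI parser is relatively expensive, so build it once per
    # process and reuse the cached instance on subsequent calls.
    parser = argparse.ArgumentParser(prog="airflow")
    # ... subcommands and arguments would be registered here ...
    return parser
```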
…e#11955)

Werkzeug 1.0 removed the werkzeug.contrib.caching module, so we need at
least 1.5.0 of flask-caching to deal with this -- see

https://github.com/sh4nks/flask-caching/blob/v1.9.0/CHANGES#L97-L107
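
For illustration, a setup.py-style pin expressing that constraint (only the >=1.5.0 lower bound comes from the commit message; the upper bound is an assumption):

```python
install_requires = [
    # flask-caching 1.5.0+ no longer relies on the removed werkzeug.contrib.caching module
    "flask-caching>=1.5.0, <2.0.0",
]
```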
…pache#11956)

These modules add about 0.3s to _every_ task run time, which is nothing
for a long task, but significant if you want to run a lot of short tasks
very quickly.

These modules were discovered by running with PYTHONPROFILEIMPORTTIME=1
environment variable set and looking at what was imported and expensive.
…11959)

Currently, if someone uses OperatorLinks that are not registered,
it will break the UI when someone clicks on that DAG.

This commit will instead log an error in the Webserver logs so that
someone can still see the DAG in different Views (graph, tree, etc).
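
The general pattern being described, as an illustration only (the helper function and call site are hypothetical, not the actual webserver code):

```python
import logging

log = logging.getLogger(__name__)


def get_operator_extra_link_url(task, link_name, dttm):
    # Instead of letting an unregistered or broken OperatorLink raise and break
    # the whole view, log the error and render the page without that link.
    try:
        return task.get_extra_links(dttm, link_name)  # hypothetical call site
    except Exception:
        log.exception("Error fetching link '%s' for task %s", link_name, task.task_id)
        return None
```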
* Clean up command-line arguments

- The `--do-pickle` argument is exclusively used by the scheduler, the `worker` command completely ignores it.
- Remove the `ARG_DAG_ID_OPT` entry entirely. The scheduler doesn't take a `--dag-id` option, and while the `dags list-runs` and `dags list-jobs` commands *do* take such a switch, the help text here is incorrect, they simply need to use `ARG_DAG_ID`.

* Put ARG_DAG_ID_OPT back

It differs from ARG_DAG_ID in that it is an optional command-line switch. I changed the help text to match its use: to select a specific DAG when listing runs or jobs. The DAG is not actually triggered.
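
A generic argparse illustration of the distinction between a required positional DAG id and an optional `--dag-id` filter switch (subcommand names are illustrative, not the real CLI layout):

```python
import argparse

parser = argparse.ArgumentParser(prog="airflow")
subparsers = parser.add_subparsers(dest="command")

trigger = subparsers.add_parser("trigger")
trigger.add_argument("dag_id")                     # like ARG_DAG_ID: required positional

list_runs = subparsers.add_parser("list-runs")
list_runs.add_argument("--dag-id", dest="dag_id")  # like ARG_DAG_ID_OPT: optional filter switch
```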
…che#11948)

Some tests (testing the structure and importability of
examples) should always run even if the core part was not modified.

That's why we move them to the "always" directory.
Labels
area:webserver (Webserver related Issues), full tests needed (We need to run full set of tests for this PR to merge)