From 1d9769b6370080d7c6d25b30b71c438ccb845f61 Mon Sep 17 00:00:00 2001
From: LaunchDarklyReleaseBot <86431345+LaunchDarklyReleaseBot@users.noreply.github.com>
Date: Fri, 30 Dec 2022 12:17:05 -0800
Subject: [PATCH] prepare 8.0.0 release (#190)

* make verify_ssl=False turn off certificate verification too (#129)
* add more TLS config options and collect HTTP/HTTPS config options in a class (#130)
* make stream retry/backoff/jitter behavior consistent with other SDKs + improve testing (#131)
* streams shouldn't use the same read timeout as the rest of the SDK (#132)
* implement our own retry logic & logging for event posts, don't use urllib3.Retry (#133)
* remove support for indirect/patch and indirect/put
* remove unused logic for individual flag/segment poll for indirect/patch
* Ehaisley/84082/remove python2 (#136)
* remove all references to six and remove queue fallback imports
* remove NullHandler logger backwards compat
* update circleci config to remove python 2.7 tests
* remove ordereddict backwards compat
* update setup.py to no longer list python 2.7 as compatible
* no longer inherit from object for python 2 backwards compat
* update readme and manifest to reflect python 2.7 removal
* remove unicode type compatibility
* remove 2.7 support from circleci
* Allow authenticating with proxy

  This commit allows for authenticating with a proxy configured with the
  `http_proxy` environment variable. Authentication requires passing a header;
  urllib3 does not parse credentials out of the proxy_url.

* reimplement proxy tests for DRY and add test of proxy auth params
* doc comment on auth params in proxy URL
* add type hints to some of the public facing api. update some docs
* Revert "add type hints to some of the public facing api."

  This reverts commit c35fa6184ce1a274fd5c6d226cb3f1f7a795901a.
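The proxy-authentication change above notes that credentials must be sent as a header, since urllib3 does not derive them from the proxy_url. A minimal sketch of that approach using urllib3's public helpers (the proxy host and credentials are illustrative, not from this patch):

```python
import urllib3

# Build a Proxy-Authorization header from illustrative credentials;
# urllib3 will not extract these from the proxy URL itself.
proxy_headers = urllib3.make_headers(proxy_basic_auth="user:secret")

# Route requests through the proxy, sending the auth header with each
# proxied request (and on CONNECT for HTTPS tunneling).
proxy = urllib3.ProxyManager("http://proxy.example.com:3128",
                             proxy_headers=proxy_headers)
```

`make_headers` base64-encodes the `user:pass` pair into a `proxy-authorization: Basic ...` header, which is the mechanism this commit wires through the SDK's HTTP configuration.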
* Ehaisley/ch86857/type hints (#138)
* add typehints to the public API
* validate typehints in the public api and tests with mypy
* remove all current deprecations (#139)
* remove all currently deprecated classes, methods, arguments, and tests
* also update semver usage to remove calls to deprecated functions and classes
* remove global set_sdk_key, make SDK key required in Config (#140)
* Removed the guides link
* Pinning mypy and running it against different python versions (#141)
* fix time zone mishandling that could make event debugging not work (#142)
* fix 6.x build (#143)
* fix time zone mishandling that could make event debugging not work (6.x) (#144)
* prepare 6.13.3 release (#154)
* Releasing version 6.13.3
* [ch99756] Add alias events (#145)
* add support for experiment rollouts
* fix unit test
* address PR comments
* use Releaser v2 config
* Use newer docker images (#147)
* Updates docs URLs
* Add support for 3.10 (#150)
* started work on FlagBuilder as part of test data source implementation
* finished FlagBuilder implementation and added FlagRuleBuilder implementation
* added initial TestData interface and updated tests to not rely on test data internals
* started data source implementation
* changed FlagBuilder to public class; changed FlagBuilder attributes to be initialized in __init__ and eliminated use of try ... except: pass for handling empty attributes
* (big segments 1) add public config/interface types
* added implementation of test data source
* docstring
* formatting
* ensure property doesn't return None
* (big segments 2) implement evaluation, refactor eval logic & modules
* linting
* (big segments 3) implement big segment status tracking, wire up components
* typing fixes
* typing fixes
* implement SSE contract tests
* fix CI
* fix CI again
* fix CI
* disable SSE tests in Python 3.5
* make test service port configurable
* better SSE implementation that fixes linefeed and multi-byte char issues
* fix constructor parameters in test service
* comment
* test improvements
* rm obsolete default config logic
* (big segments 4) implement big segment stores in Redis+DynamoDB, refactor db tests (#158)
* converted ldclient.integrations module from file to directory; started moving public classes out of ldclient.impl.integrations.test_data* and instead into ldclient.integrations.test_data*; started adding TestData documentation
* removed setup/teardown functions leftover from test scaffold
* added TestData, FlagBuilder, and FlagRuleBuilder documentation; minor adjustments to implementation details
* removed warning suppression from TestData tests
* fix big segments user hash algorithm to use SHA256
* update mypy version
* updates to tests and related bug fixes
* always cache Big Segment query result even if it's None
* fix test assertion
* lint
* fix big segment ref format
* fix big segments cache TTL being set to wrong value
* fixed structure of fallthrough variation in result of FlagBuilder.build()
* moved __test__ attribute into TestData class definition to prevent mypy from complaining about a missing class attribute
* minor doc comment fix
* Apply suggestions related to Sphinx docstring formatting from code review

  Co-authored-by: Eli Bishop

* fixed errors in the implementation of FlagBuilder's fallthrough_variation and off_variation when passing boolean variation values; updated tests to assert the expected behavior
* added missing value_for_all_users() method to FlagBuilder class
* Fix operator parsing errors (#169)
* identify should not emit event if user key is empty (#164)
* secondary should be treated as a built-in attribute (#168)
* URIs should have trailing slashes trimmed (#165)
* all_flags_state should always include flag version (#166)
* output event should not include a null prereqOf key (#167)
* Account for traffic allocation on all flags (#171)
* Add SDK contract tests (#170)
* misc fixes to test data docs + add type hints
* more type hints
* remove some methods from the public test_data API
* can't use "x|y" shortcut in typehints in older Pythons; use Union
* fix misc type mistakes because I forgot to run the linter
* update CONTRIBUTING.md and provide make targets
* fixed a bug with flag rule clause builder internals; added unit test to verify rule evaluation
* added ready argument to _TestDataSource class and indicated ready upon start to avoid delays in TestData initialization
* Update contract tests to latest flask version (#176)

  Our contract tests depend on flask v1, which in turn depends on Jinja 2. Both
  of these are terribly dated and no longer supported. Jinja depends on
  markupsafe. markupsafe recently updated its code to no longer provide
  soft_unicode, which in turn broke Jinja. Updating to the latest flask keeps
  all transitive dependencies better aligned and addresses this mismatch.

* Adds link to Relay Proxy docs
* Handle explicit None values in test payload (#179)

  The test harness may send explicit None values, which should be treated the
  same as if the value was omitted entirely.

* Fix "unhandled response" error in test harness (#180)

  When we return a `('', 204)` response from the flask handler, [Werkzeug
  intentionally removes the 'Content-Type' header][1], which causes the
  response to be created as a chunked response.
  The test harness is likely seeing a 204 response and isn't trying to read
  anything more from the stream. But since we are re-using connections, the
  next time it reads from the stream, it sees the `0\r\n\r\n` chunk and
  outputs an error:

  > 2022/04/20 14:23:39 Unsolicited response received on idle HTTP channel starting with "0\r\n\r\n"; err=

  Changing this response to 202 causes Werkzeug to return an empty response
  and silences the error.

  [1]: https://github.com/pallets/werkzeug/blob/560dd5f320bff318175f209595d42f5a80045417/src/werkzeug/wrappers/response.py#L540

* Exclude booleans when getting bucketable value (#181)

  When calculating a bucket, we get the bucketable value from the specified
  bucket-by attribute. If this value is a string or an int, we can use it.
  Otherwise, we return None. Python considers a bool an instance of an int,
  which isn't what we want, so we need to add an explicit exclusion for it.

* master -> main (#182)
* Loosen restriction on expiringdict (#183)

  Originally this was pinned to a max version to deal with the incompatibility
  of Python 3.3 and the `typing` package. See [this PR][1]. Now that we only
  support >=3.5, we can safely relax this restriction again.

  [1]: https://github.com/launchdarkly/python-server-sdk-private/pull/120

* Fix mypy type checking (#184)

  A [customer requested][original-pr] that we start including a py.typed file
  in our repository. This would enable mypy to take advantage of our
  typehints. Unfortunately, this didn't completely solve the customer's issue.
  A [second PR][second-pr] was opened to address the missing step of including
  the py.typed file in the `Manifest.in` file. However, this change alone is
  not sufficient. According to the [documentation][include_package_data], you
  must also include the `include_package_data=True` directive so that files
  specified in the `Manifest.in` file are included in the distribution.
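The boolean exclusion described for #181 above can be sketched as follows (the helper name is hypothetical, not the SDK's actual internal function):

```python
from typing import Optional, Union

def bucketable_value(value: object) -> Optional[Union[str, int]]:
    # bool is a subclass of int in Python, so isinstance(True, int) is True;
    # check for bool first and reject it before accepting ints.
    if isinstance(value, bool):
        return None
    if isinstance(value, (str, int)):
        return value
    # Anything else (floats, dicts, None, ...) is not bucketable.
    return None

assert bucketable_value("user-key") == "user-key"
assert bucketable_value(42) == 42
assert bucketable_value(True) is None
```

Without the explicit `bool` check, `True`/`False` would slip through the `isinstance(value, (str, int))` test and be bucketed as `1`/`0`.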
  [original-pr]: https://github.com/launchdarkly/python-server-sdk/pull/166
  [second-pr]: https://github.com/launchdarkly/python-server-sdk/pull/172
  [include_package_data]: https://setuptools.pypa.io/en/latest/userguide/datafiles.html#include-package-data

* Add support for extra Redis connection parameters (#185)
* Include wheel artifact when publishing package (#186)
* remove warn-level logging done for every Big Segments query
* skip tests that use a self-signed TLS cert in Python 3.7
* (U2C 1) drop EOL Python versions (#189)
* drop EOL Python versions
* misc cleanup, show Python version in CI
* add Python 3.11 CI job
* add Python 3.11 to package metadata
* (U2C 2) remove alias event functionality (#187)
* (U2C 3) remove inline users in events (#188)
* (U2C 4) remove deprecated things (#192)
* remove warn-level logging done for every Big Segments query (#190)
* remove warn-level logging done for every Big Segments query
* skip tests that use a self-signed TLS cert in Python 3.7
* implement context model
* fix exports
* specify exports
* add copy constructor
* minimal changes for SDK methods & evaluator to accept Context
* update tests, add subscript method
* lint
* in type hints, must use Dict[A, B] rather than dict[A, B] for Python <3.9
* support context kind in clauses + enable v2 contract tests
* misc fixes
* misc fixes
* support contextTargets
* support contextKind in rollouts/experiments
* support includedContexts/excludedContexts in segment
* comment copyedit

  Co-authored-by: Matthew M. Keeler

* comment fixes
* rm unused

  Co-authored-by: Matthew M. Keeler

* fix create_multi to support flattening
* lint
* use custom classes for flag/segment data model
* use store adapter for safety
* misc cleanup
* misc fixes for persistent stores
* more database store fixes
* support attribute reference lookups in evaluations
* pass logger from client
* context kind logic for big segments + enable big segment contract tests
* formatting fixes + test cleanup
* prerequisite cycle detection
* segment recursion
* define custom classes for event data
* add module init file
* linting
* fix prereq stack logic
* (U2C 17) U2C changes for events, not including private attributes (#205)
* private attribute redaction
* move a lot of code out of top-level modules
* TestData changes for contexts
* general doc comment fixes for 8.0
* U2C configuration updates
* update release metadata
* store flag/segment target lists as sets
* fix type hint
* preprocess clause values for time/regex/semver operators
* fix type checking for matches operator

Co-authored-by: Eli Bishop
Co-authored-by: LaunchDarklyCI
Co-authored-by: Elliot <35050275+Apache-HB@users.noreply.github.com>
Co-authored-by: Gabor Angeli
Co-authored-by: Elliot
Co-authored-by: Ben Woskow <48036130+bwoskow-ld@users.noreply.github.com>
Co-authored-by: LaunchDarklyCI
Co-authored-by: hroederld
Co-authored-by: Robert J. Neal
Co-authored-by: Robert J. Neal
Co-authored-by: Ember Stevens
Co-authored-by: ember-stevens <79482775+ember-stevens@users.noreply.github.com>
Co-authored-by: Matthew M. Keeler
Co-authored-by: charukiewicz
Co-authored-by: LaunchDarklyReleaseBot
Co-authored-by: Christian Charukiewicz
Co-authored-by: Matthew M. Keeler
---
 .circleci/config.yml                          |   13 +-
 .ldrelease/config.yml                         |    3 +-
 .readthedocs.yml                              |    2 +-
 Makefile                                      |    8 +-
 README.md                                     |    2 +-
 contract-tests/big_segment_store_fixture.py   |   37 +
 contract-tests/client_entity.py               |   99 +-
 contract-tests/service.py                     |   57 +-
 docs/api-deprecated.rst                       |   12 -
 docs/api-main.rst                             |    6 +-
 docs/index.rst                                |    3 +-
 ldclient/__init__.py                          |   21 +-
 ldclient/client.py                            |  218 ++--
 ldclient/config.py                            |  144 ++-
 ldclient/context.py                           | 1034 +++++++++++++++++
 ldclient/event_summarizer.py                  |   45 -
 ldclient/feature_store.py                     |   15 +-
 ldclient/feature_store_helpers.py             |   49 +-
 ldclient/flag.py                              |   27 -
 ldclient/flags_state.py                       |    8 -
 ldclient/impl/big_segments.py                 |    6 +-
 ldclient/impl/datasource/__init__.py          |    0
 .../datasource}/feature_requester.py          |    6 +-
 ldclient/{ => impl/datasource}/polling.py     |    3 +-
 ldclient/{ => impl/datasource}/streaming.py   |    2 +-
 ldclient/impl/evaluator.py                    |  529 ++++++---
 ldclient/impl/event_factory.py                |  119 --
 ldclient/impl/events/__init__.py              |    0
 ldclient/{ => impl/events}/diagnostics.py     |   10 +-
 .../impl/events/event_context_formatter.py    |   95 ++
 ldclient/{ => impl/events}/event_processor.py |  312 ++---
 ldclient/impl/events/event_summarizer.py      |   98 ++
 ldclient/impl/events/types.py                 |  173 +++
 ldclient/{ => impl}/fixed_thread_pool.py      |    7 +-
 .../integrations/files/file_data_source.py    |   17 +-
 .../test_data/test_data_source.py             |    6 +-
 ldclient/impl/listeners.py                    |    2 +-
 ldclient/{ => impl}/lru_cache.py              |    5 -
 ldclient/impl/model/__init__.py               |    6 +
 ldclient/impl/model/attribute_ref.py          |   98 ++
 ldclient/impl/model/clause.py                 |   73 ++
 ldclient/impl/model/encoder.py                |   17 +
 ldclient/impl/model/entity.py                 |  105 ++
 ldclient/impl/model/feature_flag.py           |  158 +++
 ldclient/impl/model/segment.py                |  122 ++
 ldclient/impl/model/value_parsing.py          |   85 ++
 ldclient/impl/model/variation_or_rollout.py   |   74 ++
 ldclient/impl/operators.py                    |  112 ++
 ldclient/impl/repeating_task.py               |    2 +-
 ldclient/{ => impl}/rwlock.py                 |    5 -
 ldclient/impl/sse.py                          |    4 +-
 ldclient/{ => impl}/util.py                   |   17 +-
 ldclient/integrations/__init__.py             |   31 +-
 ldclient/integrations/test_data.py            |  227 +++-
 ldclient/interfaces.py                        |   44 +-
 ldclient/operators.py                         |  195 ----
 ldclient/repeating_timer.py                   |   16 -
 ldclient/sse_client.py                        |  213 ----
 ldclient/user_filter.py                       |   41 -
 ldclient/versioned_data_kind.py               |   21 +-
 setup.py                                      |    3 +-
 testing/builders.py                           |  212 ++++
 testing/feature_store_test_base.py            |   28 +-
 testing/impl/datasource/__init__.py           |    0
 .../datasource}/test_feature_requester.py     |    5 +-
 .../datasource}/test_polling_processor.py     |   20 +-
 testing/{ => impl/datasource}/test_streaming.py |   26 +-
 testing/impl/evaluator_util.py                |   76 +-
 testing/impl/events/__init__.py               |    0
 testing/{ => impl/events}/test_diagnostics.py |   12 +-
 .../events/test_event_context_formatter.py    |   82 ++
 testing/{ => impl/events}/test_event_factory.py |   54 +-
 testing/impl/events/test_event_processor.py   |  517 +++++++++
 testing/impl/events/test_event_summarizer.py  |   52 +
 testing/impl/test_attribute_ref.py            |   54 +
 testing/impl/test_big_segments.py             |    6 +-
 testing/impl/test_evaluator.py                |  433 +------
 testing/impl/test_evaluator_big_segment.py    |   91 +-
 testing/impl/test_evaluator_bucketing.py      |  131 +++
 testing/impl/test_evaluator_clause.py         |   72 ++
 testing/impl/test_evaluator_prerequisites.py  |   74 ++
 testing/impl/test_evaluator_segment.py        |  292 ++--
 testing/impl/test_evaluator_target.py         |   80 ++
 testing/{ => impl}/test_lru_cache.py          |    4 +-
 testing/impl/test_model_decode.py             |   47 +
 testing/impl/test_model_encoder.py            |   17 +
 testing/{ => impl}/test_operators.py          |   14 +-
 testing/{ => impl}/test_retry_delay.py        |    0
 .../persistent_feature_store_test_base.py     |    8 +-
 testing/integrations/test_test_data_source.py |  499 ++++----
 testing/stub_util.py                          |   11 +-
 testing/test_context.py                       |  301 +++++
 testing/test_event_processor.py               |  696 -----------
 testing/test_event_summarizer.py              |   63 -
 testing/test_flags_state.py                   |    2 +-
 testing/test_ldclient.py                      |  377 +----
 testing/test_ldclient_end_to_end.py           |   12 +-
 testing/test_ldclient_evaluation.py           |   75 +-
 testing/test_ldclient_events.py               |  290 +++++
 testing/test_user_filter.py                   |  117 --
 100 files changed, 6083 insertions(+), 3629 deletions(-)
 create mode 100644 contract-tests/big_segment_store_fixture.py
 delete mode 100644 docs/api-deprecated.rst
 create mode 100644 ldclient/context.py
 delete mode 100644 ldclient/event_summarizer.py
 delete mode 100644 ldclient/flag.py
 delete mode 100644 ldclient/flags_state.py
 create mode 100644 ldclient/impl/datasource/__init__.py
 rename ldclient/{ => impl/datasource}/feature_requester.py (88%)
 rename ldclient/{ => impl/datasource}/polling.py (93%)
 rename ldclient/{ => impl/datasource}/streaming.py (98%)
 delete mode 100644 ldclient/impl/event_factory.py
 create mode 100644 ldclient/impl/events/__init__.py
 rename ldclient/{ => impl/events}/diagnostics.py (93%)
 create mode 100644 ldclient/impl/events/event_context_formatter.py
 rename ldclient/{ => impl/events}/event_processor.py (64%)
 create mode 100644 ldclient/impl/events/event_summarizer.py
 create mode 100644 ldclient/impl/events/types.py
 rename ldclient/{ => impl}/fixed_thread_pool.py (92%)
 rename ldclient/{ => impl}/lru_cache.py (89%)
 create mode 100644 ldclient/impl/model/__init__.py
 create mode 100644 ldclient/impl/model/attribute_ref.py
 create mode 100644 ldclient/impl/model/clause.py
 create mode 100644 ldclient/impl/model/encoder.py
 create mode 100644 ldclient/impl/model/entity.py
 create mode 100644 ldclient/impl/model/feature_flag.py
 create mode 100644 ldclient/impl/model/segment.py
 create mode 100644 ldclient/impl/model/value_parsing.py
 create mode 100644 ldclient/impl/model/variation_or_rollout.py
 create mode 100644 ldclient/impl/operators.py
 rename ldclient/{ => impl}/rwlock.py (91%)
 rename ldclient/{ => impl}/util.py (92%)
 delete mode 100644 ldclient/operators.py
 delete mode 100644 ldclient/repeating_timer.py
 delete mode 100644 ldclient/sse_client.py
 delete mode 100644 ldclient/user_filter.py
 create mode 100644 testing/builders.py
 create mode 100644
testing/impl/datasource/__init__.py rename testing/{ => impl/datasource}/test_feature_requester.py (97%) rename testing/{ => impl/datasource}/test_polling_processor.py (89%) rename testing/{ => impl/datasource}/test_streaming.py (94%) create mode 100644 testing/impl/events/__init__.py rename testing/{ => impl/events}/test_diagnostics.py (91%) create mode 100644 testing/impl/events/test_event_context_formatter.py rename testing/{ => impl/events}/test_event_factory.py (65%) create mode 100644 testing/impl/events/test_event_processor.py create mode 100644 testing/impl/events/test_event_summarizer.py create mode 100644 testing/impl/test_attribute_ref.py create mode 100644 testing/impl/test_evaluator_bucketing.py create mode 100644 testing/impl/test_evaluator_clause.py create mode 100644 testing/impl/test_evaluator_prerequisites.py create mode 100644 testing/impl/test_evaluator_target.py rename testing/{ => impl}/test_lru_cache.py (94%) create mode 100644 testing/impl/test_model_decode.py create mode 100644 testing/impl/test_model_encoder.py rename testing/{ => impl}/test_operators.py (88%) rename testing/{ => impl}/test_retry_delay.py (100%) create mode 100644 testing/test_context.py delete mode 100644 testing/test_event_processor.py delete mode 100644 testing/test_event_summarizer.py create mode 100644 testing/test_ldclient_events.py delete mode 100644 testing/test_user_filter.py diff --git a/.circleci/config.yml b/.circleci/config.yml index 007b5fb2..7b9bd2db 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -6,14 +6,6 @@ orbs: workflows: test: jobs: - - test-linux: - name: Python 3.5 - docker-image: cimg/python:3.5 - skip-sse-contract-tests: true # the test service app has dependencies that aren't available in 3.5, which is EOL anyway - skip-contract-tests: true # the test service app has dependencies that aren't available in 3.5, which is EOL anyway - - test-linux: - name: Python 3.6 - docker-image: cimg/python:3.6 - test-linux: name: Python 3.7 
docker-image: cimg/python:3.7 @@ -26,6 +18,9 @@ workflows: - test-linux: name: Python 3.10 docker-image: cimg/python:3.10 + - test-linux: + name: Python 3.11 + docker-image: cimg/python:3.11 - test-windows: name: Windows Python 3 py3: true @@ -57,6 +52,7 @@ jobs: - image: consul steps: - checkout + - run: python --version - run: name: install requirements command: | @@ -142,6 +138,7 @@ jobs: - run: name: install Python 3 command: choco install python --no-progress + - run: python --version - run: name: set up DynamoDB command: | diff --git a/.ldrelease/config.yml b/.ldrelease/config.yml index 9021210c..cc14b358 100644 --- a/.ldrelease/config.yml +++ b/.ldrelease/config.yml @@ -12,7 +12,8 @@ publications: branches: - name: main - description: 7.x + description: 8.x + - name: 7.x - name: 6.x jobs: diff --git a/.readthedocs.yml b/.readthedocs.yml index 2739d0f6..56781a23 100644 --- a/.readthedocs.yml +++ b/.readthedocs.yml @@ -1,7 +1,7 @@ version: 2 python: - version: 3.5 + version: 3.7 install: - requirements: docs/requirements.txt - requirements: requirements.txt diff --git a/Makefile b/Makefile index ca4fa068..e007f6d9 100644 --- a/Makefile +++ b/Makefile @@ -18,6 +18,10 @@ docs: TEMP_TEST_OUTPUT=/tmp/contract-test-service.log +# TEST_HARNESS_PARAMS can be set to add -skip parameters for any contract tests that cannot yet pass +# TEST_HARNESS_PARAMS := $(TEST_HARNESS_PARAMS) \ + + # port 8000 and 9000 is already used in the CI environment because we're # running a DynamoDB container and an SSE contract test PORT=10000 @@ -33,8 +37,8 @@ start-contract-test-service-bg: @make start-contract-test-service >$(TEMP_TEST_OUTPUT) 2>&1 & run-contract-tests: - @curl -s https://raw.githubusercontent.com/launchdarkly/sdk-test-harness/v1.0.0/downloader/run.sh \ - | VERSION=v1 PARAMS="-url http://localhost:$(PORT) -debug -stop-service-at-end $(TEST_HARNESS_PARAMS)" sh + curl -s https://raw.githubusercontent.com/launchdarkly/sdk-test-harness/v2/downloader/run.sh \ + | VERSION=v2 
PARAMS="-url http://localhost:$(PORT) -debug -stop-service-at-end $(TEST_HARNESS_PARAMS)" sh contract-tests: build-contract-tests start-contract-test-service-bg run-contract-tests diff --git a/README.md b/README.md index 8ea3a283..d5edcbad 100644 --- a/README.md +++ b/README.md @@ -13,7 +13,7 @@ ## Supported Python versions -This version of the LaunchDarkly SDK is compatible with Python 3.5 through 3.10. It is tested with the most recent patch releases of those versions. Python versions 2.7 to 3.4 are no longer supported. +This version of the LaunchDarkly SDK is compatible with Python 3.7 through 3.11. It is tested with the most recent patch releases of those versions. Python versions 2.7 to 3.6 are no longer supported. ## Getting started diff --git a/contract-tests/big_segment_store_fixture.py b/contract-tests/big_segment_store_fixture.py new file mode 100644 index 00000000..14907573 --- /dev/null +++ b/contract-tests/big_segment_store_fixture.py @@ -0,0 +1,37 @@ +import json +import os +import sys +from typing import Optional +import urllib3 + +# Import ldclient from parent directory +sys.path.insert(1, os.path.join(sys.path[0], '..')) +from ldclient.interfaces import BigSegmentStore, BigSegmentStoreMetadata + + +http = urllib3.PoolManager() + + +class BigSegmentStoreFixture(BigSegmentStore): + def __init__(self, callback_uri: str): + self._callback_uri = callback_uri + + def get_metadata(self) -> BigSegmentStoreMetadata: + resp_data = self._post_callback('/getMetadata', None) + return BigSegmentStoreMetadata(resp_data.get("lastUpToDate")) + + def get_membership(self, context_hash: str) -> Optional[dict]: + resp_data = self._post_callback('/getMembership', {'contextHash': context_hash}) + return resp_data.get("values") + + def _post_callback(self, path: str, params: Optional[dict]) -> dict: + url = self._callback_uri + path + resp = http.request('POST', url, + body=None if params is None else json.dumps(params), + headers=None if params is None else 
{'Content-Type': 'application/json'}) + if resp.status != 200: + raise Exception("HTTP error %d from callback to %s" % (resp.status, url)) + return json.loads(resp.data.decode('utf-8')) + + def stop(self): + pass diff --git a/contract-tests/client_entity.py b/contract-tests/client_entity.py index 5d2d5220..dac8625f 100644 --- a/contract-tests/client_entity.py +++ b/contract-tests/client_entity.py @@ -1,6 +1,12 @@ +import json import logging import os import sys +from typing import Optional + +from big_segment_store_fixture import BigSegmentStoreFixture + +from ldclient.config import BigSegmentsConfig # Import ldclient from parent directory sys.path.insert(1, os.path.join(sys.path[0], '..')) @@ -16,8 +22,7 @@ def __init__(self, tag, config): streaming = config["streaming"] if streaming.get("baseUri") is not None: opts["stream_uri"] = streaming["baseUri"] - if streaming.get("initialRetryDelayMs") is not None: - opts["initial_reconnect_delay"] = streaming["initialRetryDelayMs"] / 1000.0 + _set_optional_time_prop(streaming, "initialRetryDelayMs", opts, "initial_reconnect_delay") if config.get("events") is not None: events = config["events"] @@ -28,13 +33,22 @@ def __init__(self, tag, config): opts["diagnostic_opt_out"] = not events.get("enableDiagnostics", False) opts["all_attributes_private"] = events.get("allAttributesPrivate", False) opts["private_attribute_names"] = events.get("globalPrivateAttributes", {}) - if events.get("flushIntervalMs") is not None: - opts["flush_interval"] = events["flushIntervalMs"] / 1000.0 - if events.get("inlineUsers") is not None: - opts["inline_users_in_events"] = events["inlineUsers"] + _set_optional_time_prop(events, "flushIntervalMs", opts, "flush_interval") else: opts["send_events"] = False + if config.get("bigSegments") is not None: + big_params = config["bigSegments"] + big_config = { + "store": BigSegmentStoreFixture(big_params["callbackUri"]) + } + if big_params.get("userCacheSize") is not None: + 
big_config["context_cache_size"] = big_params["userCacheSize"] + _set_optional_time_prop(big_params, "userCacheTimeMs", big_config, "context_cache_time") + _set_optional_time_prop(big_params, "statusPollIntervalMs", big_config, "status_poll_interval") + _set_optional_time_prop(big_params, "staleAfterMs", big_config, "stale_after") + opts["big_segments"] = BigSegmentsConfig(**big_config) + start_wait = config.get("startWaitTimeMs") or 5000 config = Config(**opts) @@ -43,41 +57,90 @@ def __init__(self, tag, config): def is_initializing(self) -> bool: return self.client.is_initialized() - def evaluate(self, params) -> dict: + def evaluate(self, params: dict) -> dict: response = {} if params.get("detail", False): - detail = self.client.variation_detail(params["flagKey"], params["user"], params["defaultValue"]) + detail = self.client.variation_detail(params["flagKey"], params["context"], params["defaultValue"]) response["value"] = detail.value response["variationIndex"] = detail.variation_index response["reason"] = detail.reason else: - response["value"] = self.client.variation(params["flagKey"], params["user"], params["defaultValue"]) + response["value"] = self.client.variation(params["flagKey"], params["context"], params["defaultValue"]) return response - def evaluate_all(self, params): + def evaluate_all(self, params: dict): opts = {} opts["client_side_only"] = params.get("clientSideOnly", False) opts["with_reasons"] = params.get("withReasons", False) opts["details_only_for_tracked_flags"] = params.get("detailsOnlyForTrackedFlags", False) - state = self.client.all_flags_state(params["user"], **opts) + state = self.client.all_flags_state(params["context"], **opts) return {"state": state.to_json_dict()} - def track(self, params): - self.client.track(params["eventKey"], params["user"], params["data"], params.get("metricValue", None)) - - def identify(self, params): - self.client.identify(params["user"]) + def track(self, params: dict): + 
self.client.track(params["eventKey"], params["context"], params["data"], params.get("metricValue", None)) - def alias(self, params): - self.client.alias(params["user"], params["previousUser"]) + def identify(self, params: dict): + self.client.identify(params["context"]) def flush(self): self.client.flush() + def secure_mode_hash(self, params: dict) -> dict: + return {"result": self.client.secure_mode_hash(params["context"])} + + def context_build(self, params: dict) -> dict: + if params.get("multi"): + b = Context.multi_builder() + for c in params.get("multi"): + b.add(self._context_build_single(c)) + return self._context_response(b.build()) + return self._context_response(self._context_build_single(params["single"])) + + def _context_build_single(self, params: dict) -> Context: + b = Context.builder(params["key"]) + if "kind" in params: + b.kind(params["kind"]) + if "name" in params: + b.name(params["name"]) + if "anonymous" in params: + b.anonymous(params["anonymous"]) + if "custom" in params: + for k, v in params.get("custom").items(): + b.set(k, v) + if "private" in params: + for attr in params.get("private"): + b.private(attr) + return b.build() + + def context_convert(self, params: dict) -> dict: + input = params["input"] + try: + props = json.loads(input) + return self._context_response(Context.from_dict(props)) + except Exception as e: + return {"error": str(e)} + + def _context_response(self, c: Context) -> dict: + if c.valid: + return {"output": c.to_json_string()} + return {"error": c.error} + + def get_big_segment_store_status(self) -> dict: + status = self.client.big_segment_store_status_provider.status + return { + "available": status.available, + "stale": status.stale + } + def close(self): self.client.close() self.log.info('Test ended') + +def _set_optional_time_prop(params_in: dict, name_in: str, params_out: dict, name_out: str): + if params_in.get(name_in) is not None: + params_out[name_out] = params_in[name_in] / 1000.0 + return None diff --git 
a/contract-tests/service.py b/contract-tests/service.py index d9f8e0a5..e455d8ad 100644 --- a/contract-tests/service.py +++ b/contract-tests/service.py @@ -12,6 +12,7 @@ default_port = 8000 + # logging configuration dictConfig({ 'version': 1, @@ -30,11 +31,10 @@ 'level': 'INFO', 'handlers': ['console'] }, - 'ldclient.util': { - 'level': 'INFO', - 'handlers': ['console'] - }, 'loggers': { + 'ldclient': { + 'level': 'INFO', # change to 'DEBUG' to enable SDK debug logging + }, 'werkzeug': { 'level': 'ERROR' } # disable irrelevant Flask app logging } }) @@ -53,6 +53,7 @@ def handle_exception(e): if isinstance(e, HTTPException): return e + app.logger.exception(e) return str(e), 500 @app.route('/', methods=['GET']) @@ -63,6 +64,9 @@ def status(): 'all-flags-with-reasons', 'all-flags-client-side-only', 'all-flags-details-only-for-tracked-flags', + 'big-segments', + 'context-type', + 'secure-mode-hash', ] } return (json.dumps(body), 200, {'Content-type': 'application/json'}) @@ -102,26 +106,35 @@ def post_client_command(id): if client is None: return ('', 404) - if params.get('command') == "evaluate": - response = client.evaluate(params.get("evaluate")) - return (json.dumps(response), 200) - elif params.get("command") == "evaluateAll": - response = client.evaluate_all(params.get("evaluateAll")) - return (json.dumps(response), 200) - elif params.get("command") == "customEvent": - client.track(params.get("customEvent")) - return ('', 201) - elif params.get("command") == "identifyEvent": - client.identify(params.get("identifyEvent")) - return ('', 201) - elif params.get("command") == "aliasEvent": - client.alias(params.get("aliasEvent")) - return ('', 201) - elif params.get('command') == "flushEvents": + command = params.get('command') + sub_params = params.get(command) + + response = None + + if command == "evaluate": + response = client.evaluate(sub_params) + elif command == "evaluateAll": + response = client.evaluate_all(sub_params) + elif command == "customEvent": + 
client.track(sub_params) + elif command == "identifyEvent": + client.identify(sub_params) + elif command == "flushEvents": client.flush() + elif command == "secureModeHash": + response = client.secure_mode_hash(sub_params) + elif command == "contextBuild": + response = client.context_build(sub_params) + elif command == "contextConvert": + response = client.context_convert(sub_params) + elif command == "getBigSegmentStoreStatus": + response = client.get_big_segment_store_status() + else: + return ('', 400) + + if response is None: return ('', 201) - - return ('', 400) + return (json.dumps(response), 200) @app.route('/clients/', methods=['DELETE']) def delete_client(id): diff --git a/docs/api-deprecated.rst b/docs/api-deprecated.rst deleted file mode 100644 index 4b24254b..00000000 --- a/docs/api-deprecated.rst +++ /dev/null @@ -1,12 +0,0 @@ -Deprecated modules -=============================== - -ldclient.flag module --------------------- - -This module is deprecated. For the :class:`~ldclient.evaluation.EvaluationDetail` type, please use :mod:`ldclient.evaluation`. - -ldclient.flags_state module ---------------------------- - -This module is deprecated. For the :class:`~ldclient.evaluation.FeatureFlagsState` type, please use :mod:`ldclient.evaluation`. diff --git a/docs/api-main.rst b/docs/api-main.rst index 15ddca1b..514dc698 100644 --- a/docs/api-main.rst +++ b/docs/api-main.rst @@ -5,22 +5,26 @@ ldclient module --------------- .. automodule:: ldclient - :members: get,set_config + :members: Context,ContextBuilder,ContextMultiBuilder,get,set_config + :special-members: __init__ ldclient.client module ---------------------- .. automodule:: ldclient.client :members: LDClient + :special-members: __init__ ldclient.config module ---------------------- .. automodule:: ldclient.config :members: + :special-members: __init__ ldclient.evaluation module -------------------------- .. 
automodule:: ldclient.evaluation :members: + :special-members: __init__ diff --git a/docs/index.rst b/docs/index.rst index aa03075e..9c6e4ecb 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -12,6 +12,8 @@ The latest version of the SDK can be found on `PyPI `_ and `SDK Reference Guide `_. +Any types, functions, or constants that are not specifically described in this API reference should be considered implementation details that are not supported for external use; LaunchDarkly reserves the right to change them at any time and application code should not rely on them. + .. toctree:: :maxdepth: 2 :caption: Contents: @@ -20,4 +22,3 @@ For more information, see LaunchDarkly's `Quickstart bool: return self.store.initialized +def _get_store_item(store, kind: VersionedDataKind, key: str) -> Any: + # This decorator around store.get provides backward compatibility with any custom data + # store implementation that might still be returning a dict, instead of our data model + # classes like FeatureFlag. + item = store.get(kind, key, lambda x: x) + return kind.decode(item) if isinstance(item, dict) else item + + class LDClient: """The LaunchDarkly SDK client object. 
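The `_get_store_item` helper introduced above exists for backward compatibility: a pre-8.0 custom feature store may still return plain dicts, which now need decoding into model classes via `kind.decode`. A stdlib-only sketch of that pattern (all class names here are hypothetical stand-ins, not SDK code):

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class FeatureFlag:
    """Stand-in for the SDK's FeatureFlag data-model class."""
    key: str
    version: int


class FlagKind:
    """Stand-in for VersionedDataKind: knows how to decode raw dicts."""
    @staticmethod
    def decode(raw: dict) -> FeatureFlag:
        return FeatureFlag(key=raw["key"], version=raw["version"])


class LegacyStore:
    """A custom store that still returns plain dicts, as pre-8.0 stores could."""
    def __init__(self, items: dict):
        self._items = items

    def get(self, kind, key: str, callback):
        return callback(self._items.get(key))


def get_store_item(store, kind, key: str) -> Optional[Any]:
    # Mirrors the wrapper in the diff: pass the item through unchanged, then
    # decode only if the store handed back a raw dict instead of a model object.
    item = store.get(kind, key, lambda x: x)
    return kind.decode(item) if isinstance(item, dict) else item


store = LegacyStore({"my-flag": {"key": "my-flag", "version": 3}})
flag = get_store_item(store, FlagKind, "my-flag")
```

A store that already returns `FeatureFlag` instances would pass through the `isinstance` check untouched, so both old and new store implementations work.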
@@ -84,8 +93,8 @@ def __init__(self, config: Config, start_wait: float=5): self._event_processor = None self._lock = Lock() - self._event_factory_default = _EventFactory(False) - self._event_factory_with_reasons = _EventFactory(True) + self._event_factory_default = EventFactory(False) + self._event_factory_with_reasons = EventFactory(True) store = _FeatureStoreClientWrapper(self._config.feature_store) self._store = store # type: FeatureStore @@ -94,9 +103,10 @@ def __init__(self, config: Config, start_wait: float=5): self.__big_segment_store_manager = big_segment_store_manager self._evaluator = Evaluator( - lambda key: store.get(FEATURES, key, lambda x: x), - lambda key: store.get(SEGMENTS, key, lambda x: x), - lambda key: big_segment_store_manager.get_user_membership(key) + lambda key: _get_store_item(store, FEATURES, key), + lambda key: _get_store_item(store, SEGMENTS, key), + lambda key: big_segment_store_manager.get_user_membership(key), + log ) if self._config.offline: @@ -179,57 +189,62 @@ def __exit__(self, type, value, traceback): def _send_event(self, event): self._event_processor.send_event(event) - def track(self, event_name: str, user: dict, data: Optional[Any]=None, metric_value: Optional[AnyNum]=None): - """Tracks that a user performed an event. + def track(self, event_name: str, context: Union[dict, Context], data: Optional[Any]=None, + metric_value: Optional[AnyNum]=None): + """Tracks that an application-defined event occurred. + + This method creates a "custom" analytics event containing the specified event name (key) + and context properties. You may attach arbitrary data or a metric value to the event with the + optional ``data`` and ``metric_value`` parameters. - LaunchDarkly automatically tracks pageviews and clicks that are specified in the Goals - section of the dashboard. This can be used to track custom goals or other events that do - not currently have goals. 
+ Note that event delivery is asynchronous, so the event may not actually be sent until later; + see :func:`flush()`. - :param event_name: the name of the event, which may correspond to a goal in A/B tests - :param user: the attributes of the user + If you pass a dictionary of user attributes instead of a :class:`ldclient.Context`, + the SDK will convert the user to a Context. There is some overhead to this conversion, + so it is more efficient to pass a Context. + + :param event_name: the name of the event + :param context: the evaluation context or user associated with the event :param data: optional additional data associated with the event :param metric_value: a numeric value used by the LaunchDarkly experimentation feature in - numeric custom metrics. Can be omitted if this event is used by only non-numeric metrics. - This field will also be returned as part of the custom event for Data Export. + numeric custom metrics; can be omitted if this event is used by only non-numeric metrics """ - if user is None or user.get('key') is None: - log.warning("Missing user or user key when calling track().") + if not isinstance(context, Context): + context = Context.from_dict(context) + if not context.valid: + log.warning("Invalid context for track (%s)" % context.error) else: - self._send_event(self._event_factory_default.new_custom_event(event_name, user, data, metric_value)) + self._send_event(self._event_factory_default.new_custom_event(event_name, + context, data, metric_value)) - def alias(self, current_user: dict, previous_user: dict): - """Associates two users for analytics purposes. + def identify(self, context: Union[Context, dict]): + """Reports details about an evaluation context. - This can be helpful in the situation where a person is represented by multiple - LaunchDarkly users. 
This may happen, for example, when a person initially logs into - an application, the person might be represented by an anonymous user prior to logging - in and a different user after logging in, as denoted by a different user key. + This method simply creates an analytics event containing the context properties, so + that LaunchDarkly will know about that context if it does not already. - :param current_user: The new version of a user. - :param previous_user: The old version of a user. - """ - if current_user is None or current_user.get('key') is None: - log.warning("Missing current_user or current_user key when calling alias().") - return None - if previous_user is None or previous_user.get('key') is None: - log.warning("Missing previous_user or previous_user key when calling alias().") - return None - self._send_event(self._event_factory_default.new_alias_event(current_user, previous_user)) - - def identify(self, user: dict): - """Registers the user. + Evaluating a flag, by calling :func:`variation()` or :func:`variation_detail()`, also + sends the context information to LaunchDarkly (if events are enabled), so you only + need to use :func:`identify()` if you want to identify the context without evaluating a + flag. - This simply creates an analytics event that will transmit the given user properties to - LaunchDarkly, so that the user will be visible on your dashboard even if you have not - evaluated any flags for that user. It has no other effect. + If you pass a dictionary of user attributes instead of a :class:`ldclient.Context`, + the SDK will convert the user to a Context. There is some overhead to this conversion, + so it is more efficient to pass a Context.
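With `alias()` removed, the contexts model covers the same use case by carrying both identities in a single multi-context instead of linking two users with a separate event. A sketch of the JSON shape such a context serializes to (the `user`/`device` kinds and key values are illustrative):

```python
import json

# One multi-context carrying both the logged-in user and the anonymous device
# identity that would previously have been linked with an alias event.
multi_context = {
    "kind": "multi",
    "user": {"key": "real-user-key", "name": "Sandy"},
    "device": {"key": "anon-device-key", "anonymous": True},
}

# Round-trip through JSON, as the event pipeline ultimately would.
decoded = json.loads(json.dumps(multi_context))
```

Sending one identify or evaluation event with this context associates both kinds, which is why the removed `aliasEvent` command no longer appears in the contract-test service above.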
- :param user: attributes of the user to register + :param context: the context to register """ - if user is None or user.get('key') is None or len(str(user.get('key'))) == 0: - log.warning("Missing user or user key when calling identify().") + if not isinstance(context, Context): + context = Context.from_dict(context) + if not context.valid: + log.warning("Invalid context for identify (%s)" % context.error) + elif context.key == '' and not context.multiple: + # This could be a valid context for evaluations (if it was using the old user schema) + # but an identify event with an empty key is no good. + log.warning("Empty user key for identify") else: - self._send_event(self._event_factory_default.new_identify_event(user)) + self._send_event(self._event_factory_default.new_identify_event(context)) def is_offline(self) -> bool: """Returns true if the client is in offline mode. @@ -258,34 +273,42 @@ def flush(self): return return self._event_processor.flush() - def variation(self, key: str, user: dict, default: Any) -> Any: - """Determines the variation of a feature flag for a user. + def variation(self, key: str, context: Union[Context, dict], default: Any) -> Any: + """Calculates the value of a feature flag for a given context. + + If you pass a dictionary of user attributes instead of a :class:`ldclient.Context`, + the SDK will convert the user to a Context. There is some overhead to this conversion, + so it is more efficient to pass a Context. 
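The rewritten `identify()` above applies two guards before sending an event: an invalid context is dropped with a warning, and a single-kind context with an empty key is dropped even though it could be valid for evaluation. A stdlib sketch of that decision logic (`FakeContext` is a hypothetical stand-in, not the SDK's `Context`):

```python
from typing import Optional


class FakeContext:
    """Minimal stand-in for ldclient.Context: just the fields the guard reads."""
    def __init__(self, key: str, valid: bool = True, multiple: bool = False,
                 error: Optional[str] = None):
        self.key = key
        self.valid = valid
        self.multiple = multiple
        self.error = error


def identify_disposition(context: FakeContext) -> str:
    # Mirrors the checks in LDClient.identify(): invalid contexts are dropped,
    # a single-kind context with an empty key is dropped, anything else
    # results in an identify event being sent.
    if not context.valid:
        return "invalid"
    if context.key == '' and not context.multiple:
        return "empty-key"
    return "send-event"
```

Note the asymmetry: the empty-key check is skipped for multi-contexts, whose individual parts carry their own keys.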
:param key: the unique key for the feature flag - :param user: a dictionary containing parameters for the end user requesting the flag + :param context: the evaluation context or user :param default: the default value of the flag, to be used if the value is not available from LaunchDarkly - :return: one of the flag's variation values, or the default value + :return: the variation for the given context, or the ``default`` value if the flag cannot be evaluated """ - return self._evaluate_internal(key, user, default, self._event_factory_default).value + return self._evaluate_internal(key, context, default, self._event_factory_default).value - def variation_detail(self, key: str, user: dict, default: Any) -> EvaluationDetail: - """Determines the variation of a feature flag for a user, like :func:`variation()`, but also - provides additional information about how this value was calculated, in the form of an - :class:`ldclient.flag.EvaluationDetail` object. + def variation_detail(self, key: str, context: Union[Context, dict], default: Any) -> EvaluationDetail: + """Calculates the value of a feature flag for a given context, and returns an object that + describes the way the value was determined. - Calling this method also causes the "reason" data to be included in analytics events, - if you are capturing detailed event data for this flag. + The ``reason`` property in the result will also be included in analytics events, if you are + capturing detailed event data for this flag. + + If you pass a dictionary of user attributes instead of a :class:`ldclient.Context`, + the SDK will convert the user to a Context. There is some overhead to this conversion, + so it is more efficient to pass a Context. 
:param key: the unique key for the feature flag - :param user: a dictionary containing parameters for the end user requesting the flag + :param context: the evaluation context or user :param default: the default value of the flag, to be used if the value is not available from LaunchDarkly - :return: an object describing the result + :return: an :class:`ldclient.evaluation.EvaluationDetail` object that includes the feature + flag value and evaluation reason """ - return self._evaluate_internal(key, user, default, self._event_factory_with_reasons) + return self._evaluate_internal(key, context, default, self._event_factory_with_reasons) - def _evaluate_internal(self, key, user, default, event_factory): + def _evaluate_internal(self, key: str, context: Union[Context, dict], default: Any, event_factory): default = self._config.get_default(key, default) if self._config.offline: @@ -298,47 +321,45 @@ def _evaluate_internal(self, key, user, default, event_factory): log.warning("Feature Flag evaluation attempted before client has initialized! Feature store unavailable - returning default: " + str(default) + " for feature key: " + key) reason = error_reason('CLIENT_NOT_READY') - self._send_event(event_factory.new_unknown_flag_event(key, user, default, reason)) + self._send_event(event_factory.new_unknown_flag_event(key, context, default, reason)) return EvaluationDetail(default, None, reason) - if user is not None and user.get('key', "") == "": - log.warning("User key is blank. 
Flag evaluation will proceed, but the user will not be stored in LaunchDarkly.") + if not isinstance(context, Context): + context = Context.from_dict(context) + if not context.valid: + log.warning("Context was invalid for flag evaluation (%s); returning default value" % context.error) + return EvaluationDetail(default, None, error_reason('USER_NOT_SPECIFIED')) try: - flag = self._store.get(FEATURES, key, lambda x: x) + flag = _get_store_item(self._store, FEATURES, key) except Exception as e: log.error("Unexpected error while retrieving feature flag \"%s\": %s" % (key, repr(e))) log.debug(traceback.format_exc()) reason = error_reason('EXCEPTION') - self._send_event(event_factory.new_unknown_flag_event(key, user, default, reason)) + self._send_event(event_factory.new_unknown_flag_event(key, context, default, reason)) return EvaluationDetail(default, None, reason) if not flag: reason = error_reason('FLAG_NOT_FOUND') - self._send_event(event_factory.new_unknown_flag_event(key, user, default, reason)) + self._send_event(event_factory.new_unknown_flag_event(key, context, default, reason)) return EvaluationDetail(default, None, reason) else: - if user is None or user.get('key') is None: - reason = error_reason('USER_NOT_SPECIFIED') - self._send_event(event_factory.new_default_event(flag, user, default, reason)) - return EvaluationDetail(default, None, reason) - try: - result = self._evaluator.evaluate(flag, user, event_factory) + result = self._evaluator.evaluate(flag, context, event_factory) for event in result.events or []: self._send_event(event) detail = result.detail if detail.is_default_value(): detail = EvaluationDetail(default, None, detail.reason) - self._send_event(event_factory.new_eval_event(flag, user, detail, default)) + self._send_event(event_factory.new_eval_event(flag, context, detail, default)) return detail except Exception as e: log.error("Unexpected error while evaluating feature flag \"%s\": %s" % (key, repr(e))) log.debug(traceback.format_exc()) 
reason = error_reason('EXCEPTION') - self._send_event(event_factory.new_default_event(flag, user, default, reason)) + self._send_event(event_factory.new_default_event(flag, context, default, reason)) return EvaluationDetail(default, None, reason) - def all_flags_state(self, user: dict, **kwargs) -> FeatureFlagsState: + def all_flags_state(self, context: Union[Context, dict], **kwargs) -> FeatureFlagsState: """Returns an object that encapsulates the state of all feature flags for a given user, including the flag values and also metadata that can be used on the front end. See the JavaScript SDK Reference Guide on @@ -374,8 +395,10 @@ def all_flags_state(self, user: dict, **kwargs) -> FeatureFlagsState: log.warning("all_flags_state() called before client has finished initializing! Feature store unavailable - returning empty state") return FeatureFlagsState(False) - if user is None or user.get('key') is None: - log.warning("User or user key is None when calling all_flags_state(). Returning empty state.") + if not isinstance(context, Context): + context = Context.from_dict(context) + if not context.valid: + log.warning("Context was invalid for all_flags_state (%s); returning default value" % context.error) return FeatureFlagsState(False) state = FeatureFlagsState(True) @@ -394,14 +417,14 @@ def all_flags_state(self, user: dict, **kwargs) -> FeatureFlagsState: if client_only and not flag.get('clientSide', False): continue try: - detail = self._evaluator.evaluate(flag, user, self._event_factory_default).detail + detail = self._evaluator.evaluate(flag, context, self._event_factory_default).detail except Exception as e: log.error("Error evaluating flag \"%s\" in all_flags_state: %s" % (key, repr(e))) log.debug(traceback.format_exc()) reason = {'kind': 'ERROR', 'errorKind': 'EXCEPTION'} detail = EvaluationDetail(None, None, reason) - requires_experiment_data = _EventFactory.is_experiment(flag, detail.reason) + requires_experiment_data = EventFactory.is_experiment(flag, 
detail.reason) flag_state = { 'key': flag['key'], 'value': detail.value, @@ -417,20 +440,21 @@ def all_flags_state(self, user: dict, **kwargs) -> FeatureFlagsState: return state - def secure_mode_hash(self, user: dict) -> str: - """Computes an HMAC signature of a user signed with the client's SDK key, - for use with the JavaScript SDK. + def secure_mode_hash(self, context: Union[Context, dict]) -> str: + """Creates a hash string that can be used by the JavaScript SDK to identify a context. - For more information, see the JavaScript SDK Reference Guide on - `Secure mode `_. + For more information, see the documentation on + `Secure mode `_. - :param user: the attributes of the user - :return: a hash string that can be passed to the front end + :param context: the evaluation context or user + :return: the hash string """ - key = user.get('key') - if key is None or self._config.sdk_key is None: + if not isinstance(context, Context): + context = Context.from_dict(context) + if not context.valid: + log.warning("Context was invalid for secure_mode_hash (%s); returning empty hash" % context.error) return "" - return hmac.new(self._config.sdk_key.encode(), key.encode(), hashlib.sha256).hexdigest() + return hmac.new(str(self._config.sdk_key).encode(), context.fully_qualified_key.encode(), hashlib.sha256).hexdigest() @property def big_segment_store_status_provider(self) -> BigSegmentStoreStatusProvider: diff --git a/ldclient/config.py b/ldclient/config.py index dfe1a29a..3ae0c04c 100644 --- a/ldclient/config.py +++ b/ldclient/config.py @@ -4,11 +4,11 @@ Note that the same class can also be imported from the ``ldclient.client`` submodule. 
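The `secure_mode_hash` change above now signs the context's fully-qualified key rather than a bare user key, but the construction is still plain HMAC-SHA256 keyed with the SDK key. A stdlib sketch with placeholder values (the SDK key and context key here are made up):

```python
import hashlib
import hmac


def secure_mode_hash(sdk_key: str, fully_qualified_key: str) -> str:
    # Same construction as the updated LDClient.secure_mode_hash: HMAC-SHA256
    # of the context's fully-qualified key, keyed with the SDK key, hex-encoded.
    return hmac.new(sdk_key.encode(), fully_qualified_key.encode(),
                    hashlib.sha256).hexdigest()


# For a single-kind user context, the fully-qualified key is just its key.
digest = secure_mode_hash("sdk-example-key", "user-key-123")
```

The JavaScript SDK recomputes the same digest server-side, so any change to either input (SDK key or context key) invalidates the hash.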
""" -from typing import Optional, Callable, List, Any, Set +from typing import Optional, Callable, List, Set from ldclient.feature_store import InMemoryFeatureStore -from ldclient.util import log -from ldclient.interfaces import BigSegmentStore, EventProcessor, FeatureStore, UpdateProcessor, FeatureRequester +from ldclient.impl.util import log +from ldclient.interfaces import BigSegmentStore, EventProcessor, FeatureStore, UpdateProcessor GET_LATEST_FEATURES_PATH = '/sdk/latest-flags' STREAM_FLAGS_PATH = '/flags' @@ -17,12 +17,12 @@ class BigSegmentsConfig: """Configuration options related to Big Segments. - Big Segments are a specific type of user segments. For more information, read the LaunchDarkly + Big Segments are a specific type of segments. For more information, read the LaunchDarkly documentation: https://docs.launchdarkly.com/home/users/big-segments - If your application uses Big Segments, you will need to create a `BigSegmentsConfig` that at a - minimum specifies what database integration to use, and then pass the `BigSegmentsConfig` - object as the `big_segments` parameter when creating a :class:`Config`. + If your application uses Big Segments, you will need to create a ``BigSegmentsConfig`` that at a + minimum specifies what database integration to use, and then pass the ``BigSegmentsConfig`` + object as the ``big_segments`` parameter when creating a :class:`Config`. 
This example shows Big Segments being configured to use Redis: :: @@ -34,25 +34,29 @@ class BigSegmentsConfig: """ def __init__(self, store: Optional[BigSegmentStore] = None, - user_cache_size: int=1000, - user_cache_time: float=5, + context_cache_size: int=1000, + context_cache_time: float=5, + user_cache_size: Optional[int]=None, + user_cache_time: Optional[float]=None, status_poll_interval: float=5, stale_after: float=120): """ :param store: the implementation of :class:`ldclient.interfaces.BigSegmentStore` that will be used to query the Big Segments database - :param user_cache_size: the maximum number of users whose Big Segment state will be cached + :param context_cache_size: the maximum number of contexts whose Big Segment state will be cached by the SDK at any given time - :param user_cache_time: the maximum length of time (in seconds) that the Big Segment state - for a user will be cached by the SDK + :param context_cache_time: the maximum length of time (in seconds) that the Big Segment state + for a context will be cached by the SDK + :param user_cache_size: deprecated alias for `context_cache_size` + :param user_cache_time: deprecated alias for `context_cache_time` :param status_poll_interval: the interval (in seconds) at which the SDK will poll the Big Segment store to make sure it is available and to determine how long ago it was updated :param stale_after: the maximum length of time between updates of the Big Segments data before the data is considered out of date """ self.__store = store - self.__user_cache_size = user_cache_size - self.__user_cache_time = user_cache_time + self.__context_cache_size = context_cache_size if user_cache_size is None else user_cache_size + self.__context_cache_time = context_cache_time if user_cache_time is None else user_cache_time self.__status_poll_interval = status_poll_interval self.__stale_after = stale_after pass @@ -61,13 +65,23 @@ def __init__(self, def store(self) -> Optional[BigSegmentStore]: return 
self.__store + @property + def context_cache_size(self) -> int: + return self.__context_cache_size + + @property + def context_cache_time(self) -> float: + return self.__context_cache_time + @property def user_cache_size(self) -> int: - return self.__user_cache_size + """Deprecated alias for :attr:`context_cache_size`.""" + return self.context_cache_size @property def user_cache_time(self) -> float: - return self.__user_cache_time + """Deprecated alias for :attr:`context_cache_time`.""" + return self.context_cache_time @property def status_poll_interval(self) -> float: @@ -81,7 +95,7 @@ class HTTPConfig: """Advanced HTTP configuration options for the SDK client. This class groups together HTTP/HTTPS-related configuration properties that rarely need to be changed. - If you need to set these, construct an `HTTPConfig` instance and pass it as the `http` parameter when + If you need to set these, construct an ``HTTPConfig`` instance and pass it as the ``http`` parameter when you construct the main :class:`Config` for the SDK client. """ def __init__(self, @@ -95,7 +109,7 @@ def __init__(self, :param connect_timeout: The connect timeout for network connections in seconds. :param read_timeout: The read timeout for network connections in seconds. :param http_proxy: Use a proxy when connecting to LaunchDarkly. This is the full URI of the - proxy; for example: http://my-proxy.com:1234. Note that unlike the standard `http_proxy` environment + proxy; for example: http://my-proxy.com:1234. Note that unlike the standard ``http_proxy`` environment variable, this is used regardless of whether the target URI is HTTP or HTTPS (the actual LaunchDarkly service uses HTTPS, but a Relay Proxy instance could use HTTP). Setting this Config parameter will override any proxy specified by an environment variable, but only for LaunchDarkly SDK connections. @@ -105,7 +119,7 @@ def __init__(self, certificate. 
:param disable_ssl_verification: If true, completely disables SSL verification and certificate verification for secure requests. This is unsafe and should not be used in a production environment; - instead, use a self-signed certificate and set `ca_certs`. + instead, use a self-signed certificate and set ``ca_certs``. """ self.__connect_timeout = connect_timeout self.__read_timeout = read_timeout @@ -155,19 +169,20 @@ def __init__(self, initial_reconnect_delay: float=1, defaults: dict={}, send_events: Optional[bool]=None, - events_enabled: bool=True, update_processor_class: Optional[Callable[[str, 'Config', FeatureStore], UpdateProcessor]]=None, poll_interval: float=30, use_ldd: bool=False, feature_store: Optional[FeatureStore]=None, feature_requester_class=None, - event_processor_class: Callable[['Config'], EventProcessor]=None, + event_processor_class: Callable[['Config'], EventProcessor]=None, + private_attributes: Set[str]=set(), private_attribute_names: Set[str]=set(), all_attributes_private: bool=False, offline: bool=False, - user_keys_capacity: int=1000, - user_keys_flush_interval: float=300, - inline_users_in_events: bool=False, + context_keys_capacity: int=1000, + context_keys_flush_interval: float=300, + user_keys_capacity: Optional[int] = None, + user_keys_flush_interval: Optional[float] = None, diagnostic_opt_out: bool=False, diagnostic_recording_interval: int=900, wrapper_name: Optional[str]=None, @@ -194,31 +209,33 @@ def __init__(self, to be reestablished. The delay for the first reconnection will start near this value, and then increase exponentially for any subsequent connection failures. :param send_events: Whether or not to send events back to LaunchDarkly. This differs from - `offline` in that it affects only the sending of client-side events, not streaming or polling for + ``offline`` in that it affects only the sending of client-side events, not streaming or polling for events from the server. By default, events will be sent. 
- :param events_enabled: Obsolete name for `send_events`. :param offline: Whether the client should be initialized in offline mode. In offline mode, default values are returned for all flags and no remote network requests are made. By default, this is false. :param poll_interval: The number of seconds between polls for flag updates if streaming is off. :param use_ldd: Whether you are using the LaunchDarkly Relay Proxy in daemon mode. In this configuration, the client will not use a streaming connection to listen for updates, but instead - will get feature state from a Redis instance. The `stream` and `poll_interval` options will be + will get feature state from a Redis instance. The ``stream`` and ``poll_interval`` options will be ignored if this option is set to true. By default, this is false. For more information, read the LaunchDarkly documentation: https://docs.launchdarkly.com/home/relay-proxy/using#using-daemon-mode - :param array private_attribute_names: Marks a set of attribute names private. Any users sent to - LaunchDarkly with this configuration active will have attributes with these names removed. + :param array private_attributes: Marks a set of attributes private. Any users sent to LaunchDarkly + with this configuration active will have these attributes removed. Each item can be either the + name of an attribute ("email"), or a slash-delimited path ("/address/street") to mark a + property within a JSON object value as private. + :param array private_attribute_names: Deprecated alias for ``private_attributes`` ("names" is no longer + strictly accurate because these could also be attribute reference paths). + :param all_attributes_private: If true, all user attributes (other than the key) will be - private, not just the attributes specified in `private_attribute_names`. + private, not just the attributes specified in ``private_attributes``.
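As documented above, each `private_attributes` entry is either a bare attribute name ("email") or a slash-delimited reference path ("/address/street") into a nested JSON object value. An illustrative sketch of how such entries could be applied (this is not the SDK's actual event-redaction code, just the rule the parameter documentation describes):

```python
import copy


def redact(attributes: dict, private: list) -> dict:
    """Remove each listed attribute; entries starting with '/' are treated as
    slash-delimited paths into nested JSON object values."""
    result = copy.deepcopy(attributes)
    for entry in private:
        parts = entry.lstrip('/').split('/') if entry.startswith('/') else [entry]
        target = result
        # Walk down to the parent of the attribute being redacted.
        for part in parts[:-1]:
            target = target.get(part, {}) if isinstance(target, dict) else {}
        if isinstance(target, dict):
            target.pop(parts[-1], None)
    return result


attrs = {"email": "s@example.com",
         "address": {"street": "Main St", "city": "Oakland"}}
redacted = redact(attrs, ["email", "/address/street"])
```

A bare name only matches a top-level attribute, which is why the path form exists for properties inside object values.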
:param feature_store: A FeatureStore implementation - :param user_keys_capacity: The number of user keys that the event processor can remember at any - one time, so that duplicate user details will not be sent in analytics events. - :param user_keys_flush_interval: The interval in seconds at which the event processor will - reset its set of known user keys. - :param inline_users_in_events: Whether to include full user details in every analytics event. - By default, events will only include the user key, except for one "index" event that provides the - full details for the user. + :param context_keys_capacity: The number of context keys that the event processor can remember at any + one time, so that duplicate context details will not be sent in analytics events. + :param context_keys_flush_interval: The interval in seconds at which the event processor will + reset its set of known context keys. + :param user_keys_capacity: Deprecated alias for ``context_keys_capacity``. + :param user_keys_flush_interval: Deprecated alias for ``context_keys_flush_interval``. :param feature_requester_class: A factory for a FeatureRequester implementation taking the sdk key and config :param event_processor_class: A factory for an EventProcessor implementation taking the config :param update_processor_class: A factory for an UpdateProcessor implementation taking the sdk key, @@ -234,8 +251,8 @@ def __init__(self, being used. This will be sent in HTTP headers during requests to the LaunchDarkly servers to allow recording metrics on the usage of these wrapper libraries. :param wrapper_version: For use by wrapper libraries to report the version of the library in - use. If `wrapper_name` is not set, this field will be ignored. Otherwise the version string will - be included in the HTTP headers along with the `wrapper_name` during requests to the LaunchDarkly + use. If ``wrapper_name`` is not set, this field will be ignored. 
Otherwise the version string will + be included in the HTTP headers along with the ``wrapper_name`` during requests to the LaunchDarkly servers. :param http: Optional properties for customizing the client's HTTP/HTTPS behavior. See :class:`HTTPConfig`. @@ -258,13 +275,12 @@ def __init__(self, self.__defaults = defaults if offline is True: send_events = False - self.__send_events = events_enabled if send_events is None else send_events - self.__private_attribute_names = private_attribute_names + self.__send_events = True if send_events is None else send_events + self.__private_attributes = private_attributes or private_attribute_names self.__all_attributes_private = all_attributes_private self.__offline = offline - self.__user_keys_capacity = user_keys_capacity - self.__user_keys_flush_interval = user_keys_flush_interval - self.__inline_users_in_events = inline_users_in_events + self.__context_keys_capacity = context_keys_capacity if user_keys_capacity is None else user_keys_capacity + self.__context_keys_flush_interval = context_keys_flush_interval if user_keys_flush_interval is None else user_keys_flush_interval self.__diagnostic_opt_out = diagnostic_opt_out self.__diagnostic_recording_interval = max(diagnostic_recording_interval, 60) self.__wrapper_name = wrapper_name @@ -293,12 +309,11 @@ def copy_with_new_sdk_key(self, new_sdk_key: str) -> 'Config': feature_store=self.__feature_store, feature_requester_class=self.__feature_requester_class, event_processor_class=self.__event_processor_class, - private_attribute_names=self.__private_attribute_names, + private_attributes=self.__private_attributes, all_attributes_private=self.__all_attributes_private, offline=self.__offline, - user_keys_capacity=self.__user_keys_capacity, - user_keys_flush_interval=self.__user_keys_flush_interval, - inline_users_in_events=self.__inline_users_in_events, + context_keys_capacity=self.__context_keys_capacity, + context_keys_flush_interval=self.__context_keys_flush_interval, 
diagnostic_opt_out=self.__diagnostic_opt_out, diagnostic_recording_interval=self.__diagnostic_recording_interval, wrapper_name=self.__wrapper_name, @@ -374,10 +389,6 @@ def event_processor_class(self) -> Optional[Callable[['Config'], EventProcessor] def feature_requester_class(self) -> Callable: return self.__feature_requester_class - @property - def events_enabled(self) -> bool: - return self.__send_events - @property def send_events(self) -> bool: return self.__send_events @@ -391,8 +402,12 @@ def flush_interval(self) -> float: return self.__flush_interval @property - def private_attribute_names(self) -> list: - return list(self.__private_attribute_names) + def private_attributes(self) -> List[str]: + return list(self.__private_attributes) + + @property + def private_attribute_names(self) -> List[str]: + return self.private_attributes @property def all_attributes_private(self) -> bool: @@ -403,16 +418,22 @@ def offline(self) -> bool: return self.__offline @property - def user_keys_capacity(self) -> int: - return self.__user_keys_capacity + def context_keys_capacity(self) -> int: + return self.__context_keys_capacity @property - def user_keys_flush_interval(self) -> float: - return self.__user_keys_flush_interval + def context_keys_flush_interval(self) -> float: + return self.__context_keys_flush_interval + + @property + def user_keys_capacity(self) -> int: + """Deprecated name for :attr:`context_keys_capacity`.""" + return self.context_keys_capacity @property - def inline_users_in_events(self) -> bool: - return self.__inline_users_in_events + def user_keys_flush_interval(self) -> float: + """Deprecated name for :attr:`context_keys_flush_interval`.""" + return self.context_keys_flush_interval @property def diagnostic_opt_out(self) -> bool: @@ -441,3 +462,6 @@ def big_segments(self) -> BigSegmentsConfig: def _validate(self): if self.offline is False and self.sdk_key is None or self.sdk_key == '': log.warning("Missing or blank sdk_key.") + + +__all__ = ['Config', 
'BigSegmentsConfig', 'HTTPConfig'] diff --git a/ldclient/context.py b/ldclient/context.py new file mode 100644 index 00000000..6b789318 --- /dev/null +++ b/ldclient/context.py @@ -0,0 +1,1034 @@ +""" +This submodule implements the SDK's evaluation context model. +""" + +from __future__ import annotations +from collections.abc import Iterable +import json +import re +from typing import Any, Dict, Optional, Union + + +_INVALID_KIND_REGEX = re.compile('[^-a-zA-Z0-9._]') +_USER_STRING_ATTRS = {'name', 'firstName', 'lastName', 'email', 'country', 'avatar', 'ip'} + +def _escape_key_for_fully_qualified_key(key: str) -> str: + # When building a fully-qualified key, ':' and '%' are percent-escaped; we do not use a full + # URL-encoding function because implementations of this are inconsistent across platforms. + return key.replace('%', '%25').replace(':', '%3A') + +def _validate_kind(kind: str) -> Optional[str]: + if kind == '': + return 'context kind must not be empty' + if kind == 'kind': + return '"kind" is not a valid context kind' + if kind == 'multi': + return 'context of kind "multi" must be created with create_multi or multi_builder' + if _INVALID_KIND_REGEX.search(kind): + return 'context kind contains disallowed characters' + return None + + +class Context: + """ + A collection of attributes that can be referenced in flag evaluations and analytics events. + This entity is also called an "evaluation context." + + To create a Context of a single kind, such as a user, you may use :func:`create()` when only the + key and the kind are relevant; or, to specify other attributes, use :func:`builder()`. + + To create a Context with multiple kinds (a multi-context), use :func:`create_multi()` or + :func:`multi_builder()`. + + A Context can be in an error state if it was built with invalid attributes. See :attr:`valid` + and :attr:`error`. + + A Context is immutable once created. 
+ """ + + DEFAULT_KIND = 'user' + """A constant for the default context kind of "user".""" + + MULTI_KIND = 'multi' + """A constant for the kind that all multi-contexts have.""" + + def __init__( + self, + kind: Optional[str], + key: str, + name: Optional[str] = None, + anonymous: bool = False, + attributes: Optional[dict] = None, + private_attributes: Optional[list[str]] = None, + multi_contexts: Optional[list[Context]] = None, + allow_empty_key: bool = False, + error: Optional[str] = None + ): + """ + Constructs an instance, setting all properties. Avoid using this constructor directly. + + Applications should not normally use this constructor; the intended pattern is to use + factory methods or builders. Calling this constructor directly may result in some context + validation being skipped. + """ + if error is not None: + self.__make_invalid(error) + return + if multi_contexts is not None: + if len(multi_contexts) == 0: + self.__make_invalid('multi-context must contain at least one kind') + return + # Sort them by kind; they need to be sorted for computing a fully-qualified key, but even + # if fully_qualified_key is never used, this is helpful for __eq__ and determinacy. 
+ multi_contexts = sorted(multi_contexts, key=lambda c: c.kind) + last_kind = None + errors = None # type: Optional[list[str]] + full_key = '' + for c in multi_contexts: + if c.error is not None: + if errors is None: + errors = [] + errors.append(c.error) + continue + if c.kind == last_kind: + self.__make_invalid('multi-kind context cannot have same kind more than once') + return + last_kind = c.kind + if full_key != '': + full_key += ':' + full_key += c.kind + ':' + _escape_key_for_fully_qualified_key(c.key) + if errors: + self.__make_invalid(', '.join(errors)) + return + self.__kind = 'multi' + self.__multi = multi_contexts # type: Optional[list[Context]] + self.__key = '' + self.__name = None + self.__anonymous = False + self.__attributes = None + self.__private = None + self.__full_key = full_key + self.__error = None # type: Optional[str] + return + if kind is None: + kind = Context.DEFAULT_KIND + kind_error = _validate_kind(kind) + if kind_error: + self.__make_invalid(kind_error) + return + if key == '' and not allow_empty_key: + self.__make_invalid('context key must not be None or empty') + return + self.__key = key + self.__kind = kind + self.__name = name + self.__anonymous = anonymous + self.__attributes = attributes + self.__private = private_attributes + self.__multi = None + self.__full_key = key if kind == Context.DEFAULT_KIND else \ + '%s:%s' % (kind, _escape_key_for_fully_qualified_key(key)) + self.__error = None + + @classmethod + def create(cls, key: str, kind: Optional[str] = None) -> Context: + """ + Creates a single-kind Context with only the key and the kind specified. + + If you omit the kind, it defaults to "user" (:const:`DEFAULT_KIND`). 
+ + :param key: the context key + :param kind: the context kind; if omitted, it is :const:`DEFAULT_KIND` ("user") + :return: a context + + :see: :func:`builder()` + :see: :func:`create_multi()` + """ + return Context(kind, key, None, False, None, None, None, False) + + @classmethod + def create_multi(cls, *contexts: Context) -> Context: + """ + Creates a multi-context out of the specified single-kind Contexts. + + To create a Context for a single context kind, use :func:`create()` or + :func:`builder()`. + + You may use :func:`multi_builder()` instead if you want to add contexts one at a time + using a builder pattern. + + For the returned Context to be valid, the contexts list must not be empty, and all of its + elements must be valid Contexts. Otherwise, the returned Context will be invalid as + reported by :func:`error()`. + + If only one context parameter is given, the method returns that same context. + + If a nested context is a multi-context, this is exactly equivalent to adding each of the + individual kinds from it separately. See :func:`ldclient.ContextMultiBuilder.add()`. + + :param contexts: the individual contexts + :return: a multi-context + + :see: :func:`create()` + :see: :func:`multi_builder()` + """ + # implementing this via multi_builder gives us the flattening behavior for free + builder = ContextMultiBuilder() + for c in contexts: + builder.add(c) + return builder.build() + + @classmethod + def from_dict(cls, props: dict) -> Context: + """ + Creates a Context from properties in a dictionary, corresponding to the JSON + representation of a context or a user. + + If the dictionary has a "kind" property, then it is interpreted as a context using + the LaunchDarkly JSON schema for contexts. If it does not have a "kind" property, it + is interpreted as a context with "user" kind using the somewhat different LaunchDarkly + JSON schema for users in older LaunchDarkly SDKs. 
+ + :param props: the context/user properties + :return: a context + """ + if props is None: + return Context.__create_with_error('Cannot use None as a context') + if 'kind' not in props: + return Context.__from_dict_old_user(props) + kind = props['kind'] + if not isinstance(kind, str): + return Context.__create_with_schema_type_error('kind') + if kind == 'multi': + b = ContextMultiBuilder() + for k, v in props.items(): + if k != 'kind': + if not isinstance(v, dict): + return Context.__create_with_schema_type_error(k) + c = Context.__from_dict_single(v, k) + b.add(c) + return b.build() + return Context.__from_dict_single(props, props['kind']) + + @classmethod + def builder(cls, key: str) -> ContextBuilder: + """ + Creates a builder for building a Context. + + You may use :class:`ldclient.ContextBuilder` methods to set additional attributes and/or + change the context kind before calling :func:`ldclient.ContextBuilder.build()`. If you + do not change any values, the defaults for the Context are that its ``kind`` is :const:`DEFAULT_KIND`, + its :attr:`key` is set to the key parameter specified here, :attr:`anonymous` is False, and it has no values for + any other attributes. + + This method is for building a Context that has only a single kind. To define a multi-context, + use :func:`create_multi()` or :func:`multi_builder()`. + + :param key: the context key + :return: a new builder + + :see: :func:`create()` + :see: :func:`create_multi()` + + """ + return ContextBuilder(key) + + @classmethod + def builder_from_context(cls, context: Context) -> ContextBuilder: + """ + Creates a builder whose properties are the same as an existing single-kind Context. + + You may then change the builder's state in any way and call :func:`ldclient.ContextBuilder.build()` + to create a new independent Context. 
+ + :param context: the context to copy from + :return: a new builder + """ + return ContextBuilder(context.key, context) + + @classmethod + def multi_builder(cls) -> ContextMultiBuilder: + """ + Creates a builder for building a multi-context. + + This method is for building a Context that contains multiple contexts, each for a different + context kind. To define a single context, use :func:`create()` or :func:`builder()` instead. + + The difference between this method and :func:`create_multi()` is simply that the builder + allows you to add contexts one at a time, if that is more convenient for your logic. + + :return: a new builder + + :see: :func:`builder()` + :see: :func:`create_multi()` + """ + return ContextMultiBuilder() + + @property + def valid(self) -> bool: + """ + True for a valid Context, or False for an invalid one. + + A valid context is one that can be used in SDK operations. An invalid context is one that + is missing necessary attributes or has invalid attributes, indicating an incorrect usage + of the SDK API. The only ways for a context to be invalid are: + + * The :attr:`kind` property had a disallowed value. See :func:`ldclient.ContextBuilder.kind()`. + * For a single context, the :attr:`key` property was None or empty. + * You tried to create a multi-context without specifying any contexts. + * You tried to create a multi-context using the same context kind more than once. + * You tried to create a multi-context where at least one of the individual Contexts was invalid. + + In any of these cases, :attr:`valid` will be False, and :attr:`error` will return a + description of the error. + + Since in normal usage it is easy for applications to be sure they are using context kinds + correctly, and because throwing an exception is undesirable in application code that uses + LaunchDarkly, the SDK stores the error state in the Context itself and checks for such + errors at the time the Context is used, such as in a flag evaluation. 
At that point, if + the context is invalid, the operation will fail in some well-defined way as described in + the documentation for that method, and the SDK will generally log a warning as well. But + in any situation where you are not sure if you have a valid Context, you can check + :attr:`valid` or :attr:`error`. + """ + return self.__error is None + + @property + def error(self) -> Optional[str]: + """ + Returns None for a valid Context, or an error message for an invalid one. + + If this is None, then :attr:`valid` is True. If it is not None, then :attr:`valid` is + False. + """ + return self.__error + + @property + def multiple(self) -> bool: + """ + True if this is a multi-context. + + If this value is True, then :attr:`kind` is guaranteed to be :const:`MULTI_KIND`, and + you can inspect the individual context for each kind with :func:`get_individual_context()`. + + If this value is False, then :attr:`kind` is guaranteed to return a value that is not + :const:`MULTI_KIND`. + + :see: :func:`create_multi()` + """ + return self.__multi is not None + + @property + def kind(self) -> str: + """ + Returns the context's ``kind`` attribute. + + Every valid context has a non-empty kind. For multi-contexts, this value is + :const:`MULTI_KIND` and the kinds within the context can be inspected with + :func:`get_individual_context()`. + + :see: :func:`ldclient.ContextBuilder.kind()` + :see: :func:`create()` + """ + return self.__kind + + @property + def key(self) -> str: + """ + Returns the context's ``key`` attribute. + + For a single context, this value is set by :func:`create`, or :func:`ldclient.ContextBuilder.key()`. + + For a multi-context, there is no single value and :attr:`key` returns an empty string. Use + :func:`get_individual_context()` to get the Context for a particular kind, then get the + :attr:`key` of that Context. 
+ + :see: :func:`ldclient.ContextBuilder.key()` + :see: :func:`create()` + """ + return self.__key + + @property + def name(self) -> Optional[str]: + """ + Returns the context's ``name`` attribute. + + For a single context, this value is set by :func:`ldclient.ContextBuilder.name()`. It is + None if no value was set. + + For a multi-context, there is no single value and :attr:`name` returns None. Use + :func:`get_individual_context()` to get the Context for a particular kind, then get the + :attr:`name` of that Context. + + :see: :func:`ldclient.ContextBuilder.name()` + """ + return self.__name + + @property + def anonymous(self) -> bool: + """ + Returns True if this context is only intended for flag evaluations and will not be + indexed by LaunchDarkly. + + The default value is False. False means that this Context represents an entity such as a + user that you want to be able to see on the LaunchDarkly dashboard. + + Setting ``anonymous`` to True excludes this context from the database that is + used by the dashboard. It does not exclude it from analytics event data, so it is + not the same as making attributes private; all non-private attributes will still be + included in events and data export. There is no limitation on what other attributes + may be included (so, for instance, ``anonymous`` does not mean there is no :attr:`name`), + and the context will still have whatever :attr:`key` you have given it. + + This value is also addressable in evaluations as the attribute name "anonymous". It + is always treated as a boolean true or false in evaluations. + + :see: :func:`ldclient.ContextBuilder.anonymous()` + """ + return self.__anonymous + + def get(self, attribute: str) -> Any: + """ + Looks up the value of any attribute of the context by name. + + For a single-kind context, the attribute name can be any custom attribute that was set + by :func:`ldclient.ContextBuilder.set()`. 
It can also be one of the built-in ones + like "kind", "key", or "name"; in such cases, it is equivalent to :attr:`kind`, + :attr:`key`, or :attr:`name`. + + For a multi-context, the only supported attribute name is "kind". Use + :func:`get_individual_context()` to get the context for a particular kind and then get + its attributes. + + If the value is found, the return value is the attribute value. If there is no such + attribute, the return value is None. An attribute that actually exists cannot have a + value of None. + + Context has a ``__getitem__`` magic method equivalent to ``get``, so ``context['attr']`` + behaves the same as ``context.get('attr')``. + + :param attribute: the desired attribute name + :return: the attribute value, or None if there is no such attribute + + :see: :func:`ldclient.ContextBuilder.set()` + """ + if attribute == 'key': + return self.__key + if attribute == 'kind': + return self.__kind + if attribute == 'name': + return self.__name + if attribute == 'anonymous': + return self.__anonymous + if self.__attributes is None: + return None + return self.__attributes.get(attribute) + + @property + def individual_context_count(self) -> int: + """ + Returns the number of context kinds in this context. + + For a valid individual context, this returns 1. For a multi-context, it returns the number + of context kinds. For an invalid context, it returns zero. + + :return: the number of context kinds + + :see: :func:`get_individual_context()` + """ + if self.__error is not None: + return 0 + if self.__multi is None: + return 1 + return len(self.__multi) + + def get_individual_context(self, kind: Union[int, str]) -> Optional[Context]: + """ + Returns the single-kind Context corresponding to one of the kinds in this context. + + The ``kind`` parameter can be either a number representing a zero-based index, or a string + representing a context kind. 
+ + If this method is called on a single-kind Context, then the only allowable value for + ``kind`` is either zero or the same value as the Context's :attr:`kind`, and the return + value on success is the same Context. + + If the method is called on a multi-context, and ``kind`` is a number, it must be a + non-negative index that is less than the number of kinds (that is, less than the value + of :attr:`individual_context_count`), and the return value on success is one of the + individual Contexts within. Or, if ``kind`` is a string, it must match the context + kind of one of the individual contexts. + + If there is no context corresponding to ``kind``, the method returns None. + + :param kind: the index or string value of a context kind + :return: the context corresponding to that index or kind, or None + + :see: :attr:`individual_context_count` + """ + if self.__error is not None: + return None + if isinstance(kind, str): + if self.__multi is None: + return self if kind == self.__kind else None + for c in self.__multi: + if c.kind == kind: + return c + return None + if self.__multi is None: + return self if kind == 0 else None + if kind < 0 or kind >= len(self.__multi): + return None + return self.__multi[kind] + + @property + def custom_attributes(self) -> Iterable[str]: + """ + Gets the names of all non-built-in attributes that have been set in this context. + + For a single-kind context, this includes all the names that were passed to + :func:`ldclient.ContextBuilder.set()` as long as the values were not None (since a + value of None in LaunchDarkly is equivalent to the attribute not being set). + + For a multi-context, there are no such names. 
+ + :return: an iterable + """ + return () if self.__attributes is None else self.__attributes + + @property + def _attributes(self) -> Optional[dict[str, Any]]: + # for internal use by ContextBuilder - we don't want to expose the original dict + # since that would break immutability + return self.__attributes + + @property + def private_attributes(self) -> Iterable[str]: + """ + Gets the list of all attribute references marked as private for this specific Context. + + This includes all attribute names/paths that were specified with + :func:`ldclient.ContextBuilder.private()`. + + :return: an iterable + """ + return () if self.__private is None else self.__private + + @property + def _private_attributes(self) -> Optional[list[str]]: + # for internal use by ContextBuilder - we don't want to expose the original list otherwise + # since that would break immutability + return self.__private + + @property + def fully_qualified_key(self) -> str: + """ + A string that describes the Context uniquely based on ``kind`` and ``key`` values. + + This value is used whenever LaunchDarkly needs a string identifier based on all of the + :attr:`kind` and :attr:`key` values in the context. Applications typically do not need to use it. + """ + return self.__full_key + + def to_dict(self) -> dict[str, Any]: + """ + Returns a dictionary of properties corresponding to the JSON representation of the + context (as an associative array), in the standard format used by LaunchDarkly SDKs. + + Use this method if you are passing context data to the front end for use with the + LaunchDarkly JavaScript SDK. 
+
+        :return: a dictionary corresponding to the JSON representation
+        """
+        if not self.valid:
+            return {}
+        if self.__multi is not None:
+            ret = {"kind": "multi"}  # type: dict[str, Any]
+            for c in self.__multi:
+                ret[c.kind] = c.__to_dict_single(False)
+            return ret
+        return self.__to_dict_single(True)
+
+    def to_json_string(self) -> str:
+        """
+        Returns the JSON representation of the context as a string, in the standard format
+        used by LaunchDarkly SDKs.
+
+        This is equivalent to calling :func:`to_dict()` and then ``json.dumps()``.
+
+        :return: the JSON representation as a string
+        """
+        return json.dumps(self.to_dict(), separators=(',', ':'))
+
+    def __to_dict_single(self, with_kind: bool) -> dict[str, Any]:
+        ret = {"key": self.__key}  # type: Dict[str, Any]
+        if with_kind:
+            ret["kind"] = self.__kind
+        if self.__name is not None:
+            ret["name"] = self.__name
+        if self.__anonymous:
+            ret["anonymous"] = True
+        if self.__attributes is not None:
+            for k, v in self.__attributes.items():
+                ret[k] = v
+        if self.__private is not None:
+            ret["_meta"] = {"privateAttributes": self.__private}
+        return ret
+
+    @classmethod
+    def __from_dict_single(cls, props: dict, kind: Optional[str]) -> Context:
+        b = ContextBuilder('')
+        if kind is not None:
+            b.kind(kind)
+        for k, v in props.items():
+            if k == '_meta':
+                if v is None:
+                    continue
+                if not isinstance(v, dict):
+                    return Context.__create_with_schema_type_error(k)
+                p = v.get("privateAttributes")
+                if p is not None:
+                    if not isinstance(p, list):
+                        return Context.__create_with_schema_type_error("privateAttributes")
+                    for pa in p:
+                        if not isinstance(pa, str):
+                            return Context.__create_with_schema_type_error("privateAttributes")
+                        b.private(pa)
+            else:
+                if not b.try_set(k, v):
+                    return Context.__create_with_schema_type_error(k)
+        return b.build()
+
+    @classmethod
+    def __from_dict_old_user(cls, props: dict) -> Context:
+        b = ContextBuilder('').kind('user')
+        has_key = False
+        for k, v in props.items():
+            if k == 'custom':
if v is None: + continue + if not isinstance(v, dict): + return Context.__create_with_schema_type_error(k) + for k1, v1 in v.items(): + b.set(k1, v1) + elif k == 'privateAttributeNames': + if v is None: + continue + if not isinstance(v, list): + return Context.__create_with_schema_type_error(k) + for pa in v: + if not isinstance(pa, str): + return Context.__create_with_schema_type_error(k) + b.private(pa) + elif k in _USER_STRING_ATTRS: + if v is None: + continue + if not isinstance(v, str): + return Context.__create_with_schema_type_error(k) + b.set(k, v) + else: + if k == 'anonymous' and v is None: + v = False # anonymous: null was allowed in the old user model + if not b.try_set(k, v): + return Context.__create_with_schema_type_error(k) + if k == 'key': + has_key = True + b._allow_empty_key(has_key) + return b.build() + + def __getitem__(self, attribute) -> Any: + return self.get(attribute) if isinstance(attribute, str) else None + + def __repr__(self) -> str: + """ + Returns a standard string representation of a context. + + For a valid Context, this is currently defined as being the same as the JSON representation, + since that is the simplest way to represent all of the Context properties. However, application + code should not rely on ``__repr__`` always being the same as the JSON representation. If you + specifically want the latter, use :func:`to_json_string()`. For an invalid Context, ``__repr__`` + returns a description of why it is invalid. + + :return: a string representation + """ + if not self.valid: + return "[invalid context: %s]" % self.__error + return self.to_json_string() + + def __eq__(self, other) -> bool: + """ + Compares contexts for deep equality of their attributes. 
+ + :return: true if the Contexts are equal + """ + if not isinstance(other, Context): + return False + if self.__kind != other.__kind or self.__key != other.__key or self.__name != other.__name or \ + self.__anonymous != other.__anonymous or self.__attributes != other.__attributes or \ + self.__private != other.__private or self.__error != other.__error: + return False + # Note that it's OK to compare __attributes because Python does a deep-equality check for dicts, + # and it's OK to compare __private_attributes because we have canonicalized them by sorting. + if self.__multi is None: + return True # we already know the other context isn't a multi-context due to checking kind + if other.__multi is None or len(other.__multi) != len(self.__multi): + return False + for i in range(len(self.__multi)): + if other.__multi[i] != self.__multi[i]: + return False + return True + + def __ne__(self, other) -> bool: + return not self.__eq__(other) + + def __make_invalid(self, error: str): + self.__error = error + self.__kind = '' + self.__key = '' + self.__name = None + self.__anonymous = False + self.__attributes = None + self.__private = None + self.__multi = None + self.__full_key = '' + + @classmethod + def __create_with_error(cls, error: str) -> Context: + return Context('', '', None, False, None, None, None, False, error) + + @classmethod + def __create_with_schema_type_error(cls, propname: str) -> Context: + return Context.__create_with_error('invalid data type for "%s"' % propname) + + +class ContextBuilder: + """ + A mutable object that uses the builder pattern to specify properties for :class:`ldclient.Context`. + + Use this type if you need to construct a context that has only a single kind. To define a + multi-context, use :func:`ldclient.Context.create_multi()` or :func:`ldclient.Context.multi_builder()`. + + Obtain an instance of ContextBuilder by calling :func:`ldclient.Context.builder()`. 
Then, call
+    setter methods such as :func:`name()` or :func:`set()` to specify any additional attributes. Then,
+    call :func:`build()` to create the context. ContextBuilder setters return a reference to the same
+    builder, so calls can be chained:
+    ::
+
+            context = Context.builder('user-key') \
+                .name('my-name') \
+                .set('country', 'us') \
+                .build()
+
+    :param key: the context key
+    """
+    def __init__(self, key: str, copy_from: Optional[Context] = None):
+        self.__key = key
+        if copy_from is None:
+            self.__kind = Context.DEFAULT_KIND
+            self.__name = None  # type: Optional[str]
+            self.__anonymous = False
+            self.__attributes = None  # type: Optional[Dict[str, Any]]
+            self.__private = None  # type: Optional[list[str]]
+            self.__copy_on_write_attrs = False
+            self.__copy_on_write_private = False
+        else:
+            self.__kind = copy_from.kind
+            self.__name = copy_from.name
+            self.__anonymous = copy_from.anonymous
+            self.__attributes = copy_from._attributes
+            self.__private = copy_from._private_attributes
+            self.__copy_on_write_attrs = self.__attributes is not None
+            self.__copy_on_write_private = self.__private is not None
+        self.__allow_empty_key = False
+
+    def build(self) -> Context:
+        """
+        Creates a Context from the current builder properties.
+
+        The Context is immutable and will not be affected by any subsequent actions on the builder.
+
+        It is possible to specify invalid attributes for a ContextBuilder, such as an empty key.
+        Instead of throwing an exception, the ContextBuilder always returns a Context and you can
+        check :attr:`ldclient.Context.valid` or :attr:`ldclient.Context.error` to see if it has
+        an error. See :attr:`ldclient.Context.valid` for more information about invalid conditions.
+        If you pass an invalid Context to an SDK method, the SDK will detect this and will log a
+        description of the error.
+ + :return: a new :class:`ldclient.Context` + """ + self.__copy_on_write_attrs = (self.__attributes is not None) + self.__copy_on_write_private = (self.__private is not None) + return Context(self.__kind, self.__key, self.__name, self.__anonymous, self.__attributes, self.__private, + None, self.__allow_empty_key) + + def key(self, key: str) -> ContextBuilder: + """ + Sets the context's key attribute. + + Every context has a key, which is always a string. It cannot be an empty string, but + there are no other restrictions on its value. + + The key attribute can be referenced by flag rules, flag target lists, and segments. + + :param key: the context key + :return: the builder + """ + self.__key = key + return self + + def kind(self, kind: str) -> ContextBuilder: + """ + Sets the context's kind attribute. + + Every context has a kind. Setting it to an empty string or None is equivalent to + :const:`ldclient.Context.DEFAULT_KIND` ("user"). This value is case-sensitive. + + The meaning of the context kind is completely up to the application. Validation rules are + as follows: + + * It may only contain letters, numbers, and the characters ``.``, ``_``, and ``-``. + * It cannot equal the literal string "kind". + * For a single context, it cannot equal "multi". + + :param kind: the context kind + :return: the builder + """ + self.__kind = kind + return self + + def name(self, name: Optional[str]) -> ContextBuilder: + """ + Sets the context's name attribute. + + This attribute is optional. It has the following special rules: + + * Unlike most other attributes, it is always a string if it is specified. + * The LaunchDarkly dashboard treats this attribute as the preferred display name for + contexts. 
+
+        :param name: the context name (None to unset the attribute)
+        :return: the builder
+        """
+        self.__name = name
+        return self
+
+    def anonymous(self, anonymous: bool) -> ContextBuilder:
+        """
+        Sets whether the context is only intended for flag evaluations and should not be
+        indexed by LaunchDarkly.
+
+        The default value is False. False means that this Context represents an entity
+        such as a user that you want to be able to see on the LaunchDarkly dashboard.
+
+        Setting ``anonymous`` to True excludes this context from the database that is
+        used by the dashboard. It does not exclude it from analytics event data, so it is
+        not the same as making attributes private; all non-private attributes will still be
+        included in events and data export. There is no limitation on what other attributes
+        may be included (so, for instance, ``anonymous`` does not mean there is no ``name``),
+        and the context will still have whatever ``key`` you have given it.
+
+        This value is also addressable in evaluations as the attribute name "anonymous". It
+        is always treated as a boolean true or false in evaluations.
+
+        :param anonymous: true if the context should be excluded from the LaunchDarkly database
+        :return: the builder
+
+        :see: :attr:`ldclient.Context.anonymous`
+        """
+        self.__anonymous = anonymous
+        return self
+
+    def set(self, attribute: str, value: Any) -> ContextBuilder:
+        """
+        Sets the value of any attribute for the context.
+
+        This includes only attributes that are addressable in evaluations, not metadata such
+        as :func:`private()`. If ``attribute`` is ``"private"``, you will be setting an attribute
+        with that name which you can use in evaluations or to record data for your own purposes,
+        but it will be unrelated to :func:`private()`.
+
+        The allowable types for context attributes are equivalent to JSON types: boolean, number,
+        string, array (list), or object (dictionary).
For all attribute names that do not have + special meaning to LaunchDarkly, you may use any of those types. Values of different JSON + types are always treated as different values: for instance, the number 1 is not the same + as the string "1". + + The following attribute names have special restrictions on their value types, and + any value of an unsupported type will be ignored (leaving the attribute unchanged): + + * ``"kind"``, ``"key"``: Must be a string. See :func:`kind()` and :func:`key()`. + * ``"name"``: Must be a string or None. See :func:`name()`. + * ``"anonymous"``: Must be a boolean. See :func:`anonymous()`. + + The attribute name ``"_meta"`` is not allowed, because it has special meaning in the + JSON schema for contexts; any attempt to set an attribute with this name has no + effect. + + Values that are JSON arrays or objects have special behavior when referenced in + flag/segment rules. + + A value of None is equivalent to removing any current non-default value of the + attribute. Null/None is not a valid attribute value in the LaunchDarkly model; any + expressions in feature flags that reference an attribute with a null value will + behave as if the attribute did not exist. + + :param attribute: the attribute name to set + :param value: the value to set + :return: the builder + """ + self.try_set(attribute, value) + return self + + def try_set(self, attribute: str, value: Any) -> bool: + """ + Same as :func:`set()`, but returns a boolean indicating whether the attribute was + successfully set. 
+ + :param attribute: the attribute name to set + :param value: the value to set + :return: True if successful; False if the name was invalid or the value was not an + allowed type for that attribute + """ + if attribute == '' or attribute == '_meta': + return False + if attribute == 'key': + if isinstance(value, str): + self.__key = value + return True + return False + if attribute == 'kind': + if isinstance(value, str): + self.__kind = value + return True + return False + if attribute == 'name': + if value is None or isinstance(value, str): + self.__name = value + return True + return False + if attribute == 'anonymous': + if isinstance(value, bool): + self.__anonymous = value + return True + return False + if self.__copy_on_write_attrs: + self.__copy_on_write_attrs = False + self.__attributes = self.__attributes and self.__attributes.copy() + if self.__attributes is None: + self.__attributes = {} + if value is None: + self.__attributes.pop(attribute, None) + else: + self.__attributes[attribute] = value + return True + + def private(self, *attributes: str) -> ContextBuilder: + """ + Designates any number of Context attributes, or properties within them, as private: that is, + their values will not be sent to LaunchDarkly. + + Each parameter can be either a simple attribute name, or a slash-delimited path referring to + a JSON object property within an attribute. + + :param attributes: attribute names or references to mark as private + :return: the builder + """ + if len(attributes) != 0: + if self.__copy_on_write_private: + self.__copy_on_write_private = False + self.__private = self.__private and self.__private.copy() + if self.__private is None: + self.__private = [] + self.__private.extend(attributes) + return self + + def _allow_empty_key(self, allow: bool): + # This is used internally in Context.__from_dict_old_user to support old-style users with an + # empty key, which was allowed in the user model. 
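The attribute-validation rules implemented by ``try_set`` above can be condensed into a standalone sketch. This is illustrative only: it mirrors the builder's validation logic with a plain dict rather than reproducing the SDK's actual class, and the free function ``try_set`` here is a hypothetical stand-in.

```python
# Simplified sketch of ContextBuilder.try_set's validation rules, operating
# on a plain dict instead of the builder's private state.

def try_set(attrs: dict, attribute: str, value) -> bool:
    if attribute in ('', '_meta'):
        return False  # reserved / invalid names are rejected outright
    if attribute in ('kind', 'key'):
        ok = isinstance(value, str)                    # must be a string
    elif attribute == 'name':
        ok = value is None or isinstance(value, str)   # string or None
    elif attribute == 'anonymous':
        ok = isinstance(value, bool)                   # must be a boolean
    else:
        ok = True  # custom attributes accept any JSON-equivalent type
    if not ok:
        return False
    if value is None:
        attrs.pop(attribute, None)  # None removes any existing value
    else:
        attrs[attribute] = value
    return True
```

Built-in attributes (``kind``, ``key``, ``name``, ``anonymous``) are type-checked, the empty string and ``_meta`` are rejected, and setting a custom attribute to ``None`` deletes it rather than storing a null.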
+ self.__allow_empty_key = allow + + +class ContextMultiBuilder: + """ + A mutable object that uses the builder pattern to specify properties for a multi-context. + + Use this builder if you need to construct a :class:`ldclient.Context` that contains multiple contexts, + each for a different context kind. To define a regular context for a single kind, use + :func:`ldclient.Context.create()` or :func:`ldclient.Context.builder()`. + + Obtain an instance of ContextMultiBuilder by calling :func:`ldclient.Context.multi_builder()`; + then, call :func:`add()` to specify the individual context for each kind. The method returns a + reference to the same builder, so calls can be chained: + :: + + context = Context.multi_builder() \ + .add(Context.new("my-user-key")) \ + .add(Context.new("my-org-key", "organization")) \ + .build() + """ + def __init__(self): + self.__contexts = [] # type: list[Context] + self.__copy_on_write = False + + def build(self) -> Context: + """ + Creates a Context from the current builder properties. + + The Context is immutable and will not be affected by any subsequent actions on the builder. + + It is possible for a ContextMultiBuilder to represent an invalid state. Instead of throwing + an exception, the ContextMultiBuilder always returns a Context, and you can check + :attr:`ldclient.Context.valid` or :attr:`ldclient.Context.error` to see if it has an + error. See :attr:`ldclient.Context.valid` for more information about invalid context + conditions. If you pass an invalid context to an SDK method, the SDK will detect this and + will log a description of the error. + + If only one context was added to the builder, this method returns that context rather + than a multi-context.
+ + :return: a new Context + """ + if len(self.__contexts) == 1: + return self.__contexts[0] # multi-context with only one context is the same as just that context + self.__copy_on_write = True + # Context constructor will handle validation + return Context(None, '', None, False, None, None, self.__contexts) + + def add(self, context: Context) -> ContextMultiBuilder: + """ + Adds an individual Context for a specific kind to the builder. + + It is invalid to add more than one Context for the same kind, or to add a Context + that is itself invalid. This error is detected when you call :func:`build()`. + + If the nested context is a multi-context, this is exactly equivalent to adding each of the + individual contexts from it separately. For instance, in the following example, ``multi1`` and + ``multi2`` end up being exactly the same: + :: + + c1 = Context.new("key1", "kind1") + c2 = Context.new("key2", "kind2") + c3 = Context.new("key3", "kind3") + + multi1 = Context.multi_builder().add(c1).add(c2).add(c3).build() + + c1plus2 = Context.multi_builder().add(c1).add(c2).build() + multi2 = Context.multi_builder().add(c1plus2).add(c3).build() + + :param context: the context to add + :return: the builder + """ + if context.multiple: + for i in range(context.individual_context_count): + c = context.get_individual_context(i) + if c is not None: + self.add(c) + else: + if self.__copy_on_write: + self.__copy_on_write = False + self.__contexts = self.__contexts.copy() + self.__contexts.append(context) + return self + + +__all__ = ['Context', 'ContextBuilder', 'ContextMultiBuilder'] diff --git a/ldclient/event_summarizer.py b/ldclient/event_summarizer.py deleted file mode 100644 index c0b10eef..00000000 --- a/ldclient/event_summarizer.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -Implementation details of the analytics event delivery component.
-""" -# currently excluded from documentation - see docs/README.md - -from collections import namedtuple - - -EventSummary = namedtuple('EventSummary', ['start_date', 'end_date', 'counters']) - - -class EventSummarizer: - def __init__(self): - self.start_date = 0 - self.end_date = 0 - self.counters = dict() - - """ - Add this event to our counters, if it is a type of event we need to count. - """ - def summarize_event(self, event): - if event['kind'] == 'feature': - counter_key = (event['key'], event.get('variation'), event.get('version')) - counter_val = self.counters.get(counter_key) - if counter_val is None: - counter_val = { 'count': 1, 'value': event['value'], 'default': event.get('default') } - self.counters[counter_key] = counter_val - else: - counter_val['count'] = counter_val['count'] + 1 - date = event['creationDate'] - if self.start_date == 0 or date < self.start_date: - self.start_date = date - if date > self.end_date: - self.end_date = date - - """ - Return the current summarized event data. 
- """ - def snapshot(self): - return EventSummary(start_date = self.start_date, end_date = self.end_date, counters = self.counters) - - def clear(self): - self.start_date = 0 - self.end_date = 0 - self.counters = dict() diff --git a/ldclient/feature_store.py b/ldclient/feature_store.py index dcac232d..401010db 100644 --- a/ldclient/feature_store.py +++ b/ldclient/feature_store.py @@ -9,9 +9,9 @@ from typing import Callable, Any from collections import OrderedDict, defaultdict -from ldclient.util import log +from ldclient.impl.rwlock import ReadWriteLock +from ldclient.impl.util import log from ldclient.interfaces import DiagnosticDescription, FeatureStore -from ldclient.rwlock import ReadWriteLock from ldclient.versioned_data_kind import VersionedDataKind @@ -108,10 +108,16 @@ def all(self, kind, callback): def init(self, all_data): """ """ + all_decoded = {} + for kind, items in all_data.items(): + items_decoded = {} + for key, item in items.items(): + items_decoded[key] = kind.decode(item) + all_decoded[kind] = items_decoded try: self._lock.rlock() self._items.clear() - self._items.update(all_data) + self._items.update(all_decoded) self._initialized = True for k in all_data: log.debug("Initialized '%s' store with %d items", k.namespace, len(all_data[k])) @@ -135,13 +141,14 @@ def delete(self, kind, key: str, version: int): def upsert(self, kind, item): """ """ + decoded_item = kind.decode(item) key = item['key'] try: self._lock.rlock() itemsOfKind = self._items[kind] i = itemsOfKind.get(key) if i is None or i['version'] < item['version']: - itemsOfKind[key] = item + itemsOfKind[key] = decoded_item log.debug("Updated %s in '%s' to version %d", key, kind.namespace, item['version']) finally: self._lock.runlock() diff --git a/ldclient/feature_store_helpers.py b/ldclient/feature_store_helpers.py index 1904f59d..2ed911f9 100644 --- a/ldclient/feature_store_helpers.py +++ b/ldclient/feature_store_helpers.py @@ -9,6 +9,12 @@ from ldclient.versioned_data_kind import 
VersionedDataKind from ldclient.feature_store import CacheConfig +def _ensure_encoded(kind, item): + return item if isinstance(item, dict) else kind.encode(item) + +def _is_deleted(item): + return item is not None and item.get('deleted') is True + class CachingStoreWrapper(DiagnosticDescription, FeatureStore): """A partial implementation of :class:`ldclient.interfaces.FeatureStore`. @@ -32,16 +38,20 @@ def __init__(self, core: FeatureStoreCore, cache_config: CacheConfig): self._cache = None self._inited = False - def init(self, all_data: Mapping[VersionedDataKind, Mapping[str, Dict[Any, Any]]]): + def init(self, all_encoded_data: Mapping[VersionedDataKind, Mapping[str, Dict[Any, Any]]]): """ """ - self._core.init_internal(all_data) + self._core.init_internal(all_encoded_data) # currently FeatureStoreCore expects to receive dicts if self._cache is not None: self._cache.clear() - for kind, items in all_data.items(): - self._cache[self._all_cache_key(kind)] = self._items_if_not_deleted(items) + for kind, items in all_encoded_data.items(): + decoded_items = {} # we don't want to cache dicts, we want to cache FeatureFlags/Segments for key, item in items.items(): - self._cache[self._item_cache_key(kind, key)] = [item] # note array wrapper + decoded_item = kind.decode(item) + self._cache[self._item_cache_key(kind, key)] = [decoded_item] # note array wrapper + if not _is_deleted(decoded_item): + decoded_items[key] = decoded_item + self._cache[self._all_cache_key(kind)] = decoded_items self._inited = True def get(self, kind, key, callback=lambda x: x): @@ -52,11 +62,13 @@ def get(self, kind, key, callback=lambda x: x): cached_item = self._cache.get(cache_key) # note, cached items are wrapped in an array so we can cache None values if cached_item is not None: - return callback(self._item_if_not_deleted(cached_item[0])) - item = self._core.get_internal(kind, key) + item = cached_item[0] + return callback(None if _is_deleted(item) else item) + encoded_item = 
self._core.get_internal(kind, key) # currently FeatureStoreCore returns dicts + item = None if encoded_item is None else kind.decode(encoded_item) if self._cache is not None: self._cache[cache_key] = [item] - return callback(self._item_if_not_deleted(item)) + return callback(None if _is_deleted(item) else item) def all(self, kind, callback=lambda x: x): """ @@ -66,7 +78,12 @@ def all(self, kind, callback=lambda x: x): cached_items = self._cache.get(cache_key) if cached_items is not None: return callback(cached_items) - items = self._items_if_not_deleted(self._core.get_all_internal(kind)) + encoded_items = self._core.get_all_internal(kind) + all_items = {} + if encoded_items is not None: + for key, item in encoded_items.items(): + all_items[key] = kind.decode(item) + items = self._items_if_not_deleted(all_items) if self._cache is not None: self._cache[cache_key] = items return callback(items) @@ -77,12 +94,14 @@ def delete(self, kind, key, version): deleted_item = { "key": key, "version": version, "deleted": True } self.upsert(kind, deleted_item) - def upsert(self, kind, item): + def upsert(self, kind, encoded_item): """ """ - new_state = self._core.upsert_internal(kind, item) + encoded_item = _ensure_encoded(kind, encoded_item) + new_state = self._core.upsert_internal(kind, encoded_item) + new_decoded_item = kind.decode(new_state) if self._cache is not None: - self._cache[self._item_cache_key(kind, item.get('key'))] = [new_state] + self._cache[self._item_cache_key(kind, new_decoded_item.get('key'))] = [new_decoded_item] self._cache.pop(self._all_cache_key(kind), None) @property @@ -115,12 +134,6 @@ def _item_cache_key(kind, key): def _all_cache_key(kind): return kind.namespace - @staticmethod - def _item_if_not_deleted(item): - if item is not None and item.get('deleted', False): - return None - return item - @staticmethod def _items_if_not_deleted(items): results = {} diff --git a/ldclient/flag.py b/ldclient/flag.py deleted file mode 100644 index 67dfa838..00000000 
--- a/ldclient/flag.py +++ /dev/null @@ -1,27 +0,0 @@ - -# This module exists only for historical reasons. Previously, ldclient.flag contained a -# combination of public API types (EvaluationDetail) and implementation details (the evaluate() -# function, etc.). Our new convention is to keep all such implementation details within -# ldclient.impl and its submodules, to make it clear that applications should never try to -# reference them directly. Since some application code may have done so in the past, and since -# we do not want to move anything in the public API yet, we are retaining this module as a -# deprecated entry point and re-exporting some symbols. -# -# In the future, ldclient.evaluation will be the preferred entry point for the public types and -# ldclient.flag will be removed. - -from ldclient.evaluation import BigSegmentsStatus, EvaluationDetail -from ldclient.impl.evaluator import Evaluator, EvalResult, error_reason -from ldclient.versioned_data_kind import FEATURES, SEGMENTS - -# Deprecated internal function for evaluating flags. -def evaluate(flag, user, store, event_factory) -> EvalResult: - evaluator = Evaluator( - lambda key: store.get(FEATURES, key), - lambda key: store.get(SEGMENTS, key), - lambda key: (None, BigSegmentsStatus.NOT_CONFIGURED) - ) - return evaluator.evaluate(flag, user, event_factory) - - -__all__ = ['EvaluationDetail', 'evaluate', 'error_reason', 'EvalResult'] diff --git a/ldclient/flags_state.py b/ldclient/flags_state.py deleted file mode 100644 index 4701031e..00000000 --- a/ldclient/flags_state.py +++ /dev/null @@ -1,8 +0,0 @@ - -# This module exists only for historical reasons. It only contained the FeatureFlagsState class, -# which is now in the ldclient.evaluation module. We are retaining this module as a deprecated -# entry point and re-exporting the class from ldclient.evaluation. -# -# In the future, ldclient.evaluation will be the preferred entry point and ldclient.flags_state -# will be removed. 
-from ldclient.evaluation import FeatureFlagsState diff --git a/ldclient/impl/big_segments.py b/ldclient/impl/big_segments.py index b6a013d3..b4f4ce1e 100644 --- a/ldclient/impl/big_segments.py +++ b/ldclient/impl/big_segments.py @@ -2,8 +2,8 @@ from ldclient.evaluation import BigSegmentsStatus from ldclient.impl.listeners import Listeners from ldclient.impl.repeating_task import RepeatingTask +from ldclient.impl.util import log from ldclient.interfaces import BigSegmentStoreStatus, BigSegmentStoreStatusProvider -from ldclient.util import log import base64 from expiringdict import ExpiringDict @@ -61,7 +61,7 @@ def __init__(self, config: BigSegmentsConfig): self.__poll_task = None # type: Optional[RepeatingTask] if self.__store: - self.__cache = ExpiringDict(max_len = config.user_cache_size, max_age_seconds=config.user_cache_time) + self.__cache = ExpiringDict(max_len = config.context_cache_size, max_age_seconds=config.context_cache_time) self.__poll_task = RepeatingTask(config.status_poll_interval, 0, self.poll_store_and_update_status) self.__poll_task.start() @@ -81,7 +81,6 @@ def get_user_membership(self, user_key: str) -> Tuple[Optional[dict], str]: membership = self.__cache.get(user_key) if membership is None: user_hash = _hash_for_user_key(user_key) - log.warn("*** querying Big Segments for user hash: %s" % user_hash) try: membership = self.__store.get_membership(user_hash) if membership is None: @@ -89,6 +88,7 @@ def get_user_membership(self, user_key: str) -> Tuple[Optional[dict], str]: self.__cache[user_key] = membership except Exception as e: log.exception("Big Segment store membership query returned error: %s" % e) + return (None, BigSegmentsStatus.STORE_ERROR) status = self.__last_status if not status: status = self.poll_store_and_update_status() diff --git a/ldclient/impl/datasource/__init__.py b/ldclient/impl/datasource/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/ldclient/feature_requester.py 
b/ldclient/impl/datasource/feature_requester.py similarity index 88% rename from ldclient/feature_requester.py rename to ldclient/impl/datasource/feature_requester.py index b526a332..dbb4f16c 100644 --- a/ldclient/feature_requester.py +++ b/ldclient/impl/datasource/feature_requester.py @@ -1,18 +1,14 @@ """ Default implementation of feature flag polling requests. """ -# currently excluded from documentation - see docs/README.md from collections import namedtuple import json import urllib3 from ldclient.impl.http import _http_factory +from ldclient.impl.util import _headers, log, throw_if_unsuccessful_response from ldclient.interfaces import FeatureRequester -from ldclient.util import UnsuccessfulResponseException -from ldclient.util import _headers -from ldclient.util import log -from ldclient.util import throw_if_unsuccessful_response from ldclient.versioned_data_kind import FEATURES, SEGMENTS diff --git a/ldclient/polling.py b/ldclient/impl/datasource/polling.py similarity index 93% rename from ldclient/polling.py rename to ldclient/impl/datasource/polling.py index 5b2a3c44..95d4d3ff 100644 --- a/ldclient/polling.py +++ b/ldclient/impl/datasource/polling.py @@ -7,9 +7,8 @@ from ldclient.config import Config from ldclient.impl.repeating_task import RepeatingTask +from ldclient.impl.util import UnsuccessfulResponseException, http_error_message, is_http_error_recoverable, log from ldclient.interfaces import FeatureRequester, FeatureStore, UpdateProcessor -from ldclient.util import log -from ldclient.util import UnsuccessfulResponseException, http_error_message, is_http_error_recoverable class PollingUpdateProcessor(UpdateProcessor): diff --git a/ldclient/streaming.py b/ldclient/impl/datasource/streaming.py similarity index 98% rename from ldclient/streaming.py rename to ldclient/impl/datasource/streaming.py index 2255b419..3dbef2ff 100644 --- a/ldclient/streaming.py +++ b/ldclient/impl/datasource/streaming.py @@ -14,8 +14,8 @@ from ldclient.impl.http import 
HTTPFactory, _http_factory from ldclient.impl.retry_delay import RetryDelayStrategy, DefaultBackoffStrategy, DefaultJitterStrategy from ldclient.impl.sse import SSEClient +from ldclient.impl.util import log, UnsuccessfulResponseException, http_error_message, is_http_error_recoverable from ldclient.interfaces import UpdateProcessor -from ldclient.util import log, UnsuccessfulResponseException, http_error_message, is_http_error_recoverable from ldclient.versioned_data_kind import FEATURES, SEGMENTS # allows for up to 5 minutes to elapse without any data sent across the stream. The heartbeats sent as comments on the diff --git a/ldclient/impl/evaluator.py b/ldclient/impl/evaluator.py index 0fa9f088..0ceed119 100644 --- a/ldclient/impl/evaluator.py +++ b/ldclient/impl/evaluator.py @@ -1,15 +1,15 @@ -from ldclient import operators +from ldclient.context import Context from ldclient.evaluation import BigSegmentsStatus, EvaluationDetail -from ldclient.impl.event_factory import _EventFactory -from ldclient.util import stringify_attrs +from ldclient.impl import operators +from ldclient.impl.events.types import EventFactory, EventInputEvaluation +from ldclient.impl.model import * -from collections import namedtuple import hashlib import logging -from typing import Callable, Optional, Tuple +from typing import Any, Callable, Dict, List, Optional, Tuple # For consistency with past logging behavior, we are pretending that the evaluation logic still lives in -# the ldclient.flag module. +# the ldclient.evaluation module. log = logging.getLogger('ldclient.flag') __LONG_SCALE__ = float(0xFFFFFFFFFFFFFFF) @@ -17,27 +17,46 @@ __BUILTINS__ = ["key", "secondary", "ip", "country", "email", "firstName", "lastName", "avatar", "name", "anonymous"] -__USER_ATTRS_TO_STRINGIFY_FOR_EVALUATION__ = [ "key", "secondary" ] -# Currently we are not stringifying the rest of the built-in attributes prior to evaluation, only for events. 
-# This is because it could affect evaluation results for existing users (ch35206). - # EvalResult is used internally to hold the EvaluationDetail result of an evaluation along with # other side effects that are not exposed to the application, such as events generated by # prerequisite evaluations, and the cached state of any Big Segments query that we may have -# ended up having to do for the user. +# ended up having to do for the context. class EvalResult: + __slots__ = ['detail', 'events', 'big_segments_status', 'big_segments_membership', + 'original_flag_key', 'prereq_stack', 'segment_stack'] + def __init__(self): self.detail = None - self.events = None - self.big_segments_status = None - self.big_segments_membership = None - - def add_event(self, event): + self.events = None # type: Optional[List[EventInputEvaluation]] + self.big_segments_status = None # type: Optional[str] + self.big_segments_membership = None # type: Optional[Dict[str, Optional[dict]]] + self.original_flag_key = None # type: Optional[str] + self.prereq_stack = None # type: Optional[List[str]] + self.segment_stack = None # type: Optional[List[str]] + + def add_event(self, event: EventInputEvaluation): if self.events is None: self.events = [] self.events.append(event) + def __repr__(self) -> str: # used only in test debugging + return "EvalResult(detail=%s, events=%s)" % (self.detail, self.events) + + +class EvaluationException(Exception): + def __init__(self, message: str, error_kind: str = 'MALFORMED_FLAG'): + self._message = message + self._error_kind = error_kind + + @property + def message(self) -> str: + return self._message + + @property + def error_kind(self) -> str: + return self._error_kind + class Evaluator: """ @@ -48,97 +67,219 @@ class Evaluator: """ def __init__( self, - get_flag: Callable[[str], Optional[dict]], - get_segment: Callable[[str], Optional[dict]], - get_big_segments_membership: Callable[[str], Tuple[Optional[dict], str]] + get_flag: Callable[[str], 
Optional[FeatureFlag]], + get_segment: Callable[[str], Optional[Segment]], + get_big_segments_membership: Callable[[str], Tuple[Optional[dict], str]], + logger: Optional[logging.Logger] = None ): """ :param get_flag: function provided by LDClient that takes a flag key and returns either the flag or None :param get_segment: same as get_flag but for segments - :param get_big_segments_membership: takes a user key (not a user hash) and returns a tuple of + :param get_big_segments_membership: takes a context key (not a context hash) and returns a tuple of (membership, status) where membership is as defined in BigSegmentStore, and status is one of the BigSegmentStoreStatus constants """ self.__get_flag = get_flag self.__get_segment = get_segment self.__get_big_segments_membership = get_big_segments_membership + self.__logger = logger - def evaluate(self, flag: dict, user: dict, event_factory: _EventFactory) -> EvalResult: - sanitized_user = stringify_attrs(user, __USER_ATTRS_TO_STRINGIFY_FOR_EVALUATION__) + def evaluate(self, flag: FeatureFlag, context: Context, event_factory: EventFactory) -> EvalResult: state = EvalResult() - state.detail = self._evaluate(flag, sanitized_user, state, event_factory) + state.original_flag_key = flag.key + try: + state.detail = self._evaluate(flag, context, state, event_factory) + except EvaluationException as e: + if self.__logger is not None: + self.__logger.error('Could not evaluate flag "%s": %s' % (flag.key, e.message)) + state.detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': e.error_kind}) + return state if state.big_segments_status is not None: state.detail.reason['bigSegmentsStatus'] = state.big_segments_status return state - def _evaluate(self, flag: dict, user: dict, state: EvalResult, event_factory: _EventFactory): - if not flag.get('on', False): + def _evaluate(self, flag: FeatureFlag, context: Context, state: EvalResult, event_factory: EventFactory) -> EvaluationDetail: + if not flag.on: return 
_get_off_value(flag, {'kind': 'OFF'}) - prereq_failure_reason = self._check_prerequisites(flag, user, state, event_factory) + prereq_failure_reason = self._check_prerequisites(flag, context, state, event_factory) if prereq_failure_reason is not None: return _get_off_value(flag, prereq_failure_reason) - # Check to see if any user targets match: - for target in flag.get('targets') or []: - for value in target.get('values') or []: - if value == user['key']: - return _get_variation(flag, target.get('variation'), {'kind': 'TARGET_MATCH'}) + # Check to see if any context targets match: + target_result = self._check_targets(flag, context) + if target_result is not None: + return target_result # Now walk through the rules to see if any match - for index, rule in enumerate(flag.get('rules') or []): - if self._rule_matches_user(rule, user, state): - return _get_value_for_variation_or_rollout(flag, rule, user, - {'kind': 'RULE_MATCH', 'ruleIndex': index, 'ruleId': rule.get('id')}) + for index, rule in enumerate(flag.rules): + if self._rule_matches_context(rule, context, state): + return _get_value_for_variation_or_rollout(flag, rule.variation_or_rollout, context, + {'kind': 'RULE_MATCH', 'ruleIndex': index, 'ruleId': rule.id}) # Walk through fallthrough and see if it matches - if flag.get('fallthrough') is not None: - return _get_value_for_variation_or_rollout(flag, flag['fallthrough'], user, {'kind': 'FALLTHROUGH'}) + return _get_value_for_variation_or_rollout(flag, flag.fallthrough, context, {'kind': 'FALLTHROUGH'}) - def _check_prerequisites(self, flag: dict, user: dict, state: EvalResult, event_factory: _EventFactory): + def _check_prerequisites(self, flag: FeatureFlag, context: Context, state: EvalResult, event_factory: EventFactory) -> Optional[dict]: failed_prereq = None prereq_res = None - for prereq in flag.get('prerequisites') or []: - prereq_flag = self.__get_flag(prereq.get('key')) - if prereq_flag is None: - log.warning("Missing prereq flag: " + 
prereq.get('key')) - failed_prereq = prereq - else: - prereq_res = self._evaluate(prereq_flag, user, state, event_factory) - # Note that if the prerequisite flag is off, we don't consider it a match no matter what its - # off variation was. But we still need to evaluate it in order to generate an event. - if (not prereq_flag.get('on', False)) or prereq_res.variation_index != prereq.get('variation'): + if len(flag.prerequisites) == 0: + return None + + try: + # We use the state object to guard against circular references in prerequisites. To avoid + # the overhead of creating the state.prereq_stack list in the most common case where + # there's only a single level prerequisites, we treat state.original_flag_key as the first + # element in the stack. + flag_key = flag.key + if flag_key != state.original_flag_key: + if state.prereq_stack is None: + state.prereq_stack = [] + state.prereq_stack.append(flag_key) + + for prereq in flag.prerequisites: + prereq_key = prereq.key + if (prereq_key == state.original_flag_key or + (state.prereq_stack is not None and prereq.key in state.prereq_stack)): + raise EvaluationException(('prerequisite relationship to "%s" caused a circular reference;' + + ' this is probably a temporary condition due to an incomplete update') % prereq_key) + + prereq_flag = self.__get_flag(prereq_key) + if prereq_flag is None: + log.warning("Missing prereq flag: " + prereq_key) failed_prereq = prereq - event = event_factory.new_eval_event(prereq_flag, user, prereq_res, None, flag) - state.add_event(event) - if failed_prereq: - return {'kind': 'PREREQUISITE_FAILED', 'prerequisiteKey': failed_prereq.get('key')} + else: + prereq_res = self._evaluate(prereq_flag, context, state, event_factory) + # Note that if the prerequisite flag is off, we don't consider it a match no matter what its + # off variation was. But we still need to evaluate it in order to generate an event.
+                if (not prereq_flag.on) or prereq_res.variation_index != prereq.variation:
+                    failed_prereq = prereq
+                event = event_factory.new_eval_event(prereq_flag, context, prereq_res, None, flag)
+                state.add_event(event)
+                if failed_prereq:
+                    return {'kind': 'PREREQUISITE_FAILED', 'prerequisiteKey': failed_prereq.key}
+            return None
+        finally:
+            if state.prereq_stack is not None and len(state.prereq_stack) != 0:
+                state.prereq_stack.pop()
+
+    def _check_targets(self, flag: FeatureFlag, context: Context) -> Optional[EvaluationDetail]:
+        user_targets = flag.targets
+        context_targets = flag.context_targets
+        if len(context_targets) == 0:
+            # old-style data has only targets for users
+            if len(user_targets) != 0:
+                user_context = context.get_individual_context(Context.DEFAULT_KIND)
+                if user_context is None:
+                    return None
+                key = user_context.key
+                for t in user_targets:
+                    if key in t.values:
+                        return _target_match_result(flag, t.variation)
+            return None
+
+        for t in context_targets:
+            kind = t.context_kind or Context.DEFAULT_KIND
+            var = t.variation
+            actual_context = context.get_individual_context(kind)
+            if actual_context is None:
+                continue
+            key = actual_context.key
+            if kind == Context.DEFAULT_KIND:
+                for ut in user_targets:
+                    if ut.variation == var:
+                        if key in ut.values:
+                            return _target_match_result(flag, var)
+                        break
+                continue
+            if key in t.values:
+                return _target_match_result(flag, var)
         return None

-    def _rule_matches_user(self, rule: dict, user: dict, state: EvalResult):
-        for clause in rule.get('clauses') or []:
-            if clause.get('attribute') is not None:
-                if not self._clause_matches_user(clause, user, state):
-                    return False
+    def _rule_matches_context(self, rule: FlagRule, context: Context, state: EvalResult) -> bool:
+        for clause in rule.clauses:
+            if not self._clause_matches_context(clause, context, state):
+                return False
         return True

-    def _clause_matches_user(self, clause: dict, user: dict, state: EvalResult):
-        if clause.get('op') == 'segmentMatch':
-            for seg_key in clause.get('values') or []:
+    def _clause_matches_context(self, clause: Clause, context: Context, state: EvalResult) -> bool:
+        if clause.op == 'segmentMatch':
+            for seg_key in clause.values:
                 segment = self.__get_segment(seg_key)
-                if segment is not None and self._segment_matches_user(segment, user, state):
+                if segment is not None and self._segment_matches_context(segment, context, state):
+                    return _maybe_negate(clause, True)
+            return _maybe_negate(clause, False)
+
+        attr = clause.attribute
+        if attr is None:
+            return False
+        if attr.depth == 1 and attr[0] == 'kind':
+            return _maybe_negate(clause, _match_clause_by_kind(clause, context))
+        actual_context = context.get_individual_context(clause.context_kind or Context.DEFAULT_KIND)
+        if actual_context is None:
+            return False
+        context_value = _get_context_value_by_attr_ref(actual_context, attr)
+        if context_value is None:
+            return False
+
+        # is the attr an array?
+        if isinstance(context_value, (list, tuple)):
+            for v in context_value:
+                if _match_single_context_value(clause, v):
                     return _maybe_negate(clause, True)
             return _maybe_negate(clause, False)
-        else:
-            return _clause_matches_user_no_segments(clause, user)
+        return _maybe_negate(clause, _match_single_context_value(clause, context_value))
+
+    def _segment_matches_context(self, segment: Segment, context: Context, state: EvalResult) -> bool:
+        if state.segment_stack is not None and segment.key in state.segment_stack:
+            raise EvaluationException(('segment rule referencing segment "%s" caused a circular reference;' +
+                                       ' this is probably a temporary condition due to an incomplete update') % segment.key)
+        if segment.unbounded:
+            return self._big_segment_match_context(segment, context, state)
+        return self._simple_segment_match_context(segment, context, state, True)
+
+    def _simple_segment_match_context(self, segment: Segment, context: Context, state: EvalResult, use_includes_and_excludes: bool) -> bool:
+        if use_includes_and_excludes:
+            if _context_key_is_in_target_list(context, None, segment.included):
+                return True
+            for t in segment.included_contexts:
+                if _context_key_is_in_target_list(context, t.context_kind, t.values):
+                    return True
+            if _context_key_is_in_target_list(context, None, segment.excluded):
+                return False
+            for t in segment.excluded_contexts:
+                if _context_key_is_in_target_list(context, t.context_kind, t.values):
+                    return False
+        if len(segment.rules) != 0:
+            # Evaluating rules means we might be doing recursive segment matches, so we'll push the current
+            # segment key onto the stack for cycle detection.
+            if state.segment_stack is None:
+                state.segment_stack = []
+            state.segment_stack.append(segment.key)
+            try:
+                for rule in segment.rules:
+                    if self._segment_rule_matches_context(rule, context, state, segment.key, segment.salt):
+                        return True
+                return False
+            finally:
+                state.segment_stack.pop()
+        return False
+
+    def _segment_rule_matches_context(self, rule: SegmentRule, context: Context, state: EvalResult, segment_key: str, salt: str) -> bool:
+        for clause in rule.clauses:
+            if not self._clause_matches_context(clause, context, state):
+                return False

-    def _segment_matches_user(self, segment: dict, user: dict, state: EvalResult):
-        if segment.get('unbounded', False):
-            return self._big_segment_match_user(segment, user, state)
-        return _simple_segment_match_user(segment, user, True)

+        # If the weight is absent, this rule matches
+        if rule.weight is None:
+            return True
+
+        # All of the clauses are met. See if the context buckets in
+        bucket = _bucket_context(None, context, rule.rollout_context_kind, segment_key, salt, rule.bucket_by)
+        weight = rule.weight / 100000.0
+        return bucket < weight

-    def _big_segment_match_user(self, segment: dict, user: dict, state: EvalResult):
-        generation = segment.get('generation', None)
+    def _big_segment_match_context(self, segment: Segment, context: Context, state: EvalResult) -> bool:
+        generation = segment.generation
         if generation is None:
             # Big segment queries can only be done if the generation is known. If it's unset,
             # that probably means the data store was populated by an older SDK that doesn't know
@@ -146,103 +287,133 @@ def _big_segment_match_user(self, segment: dict, user: dict, state: EvalResult):
             # that as a "not configured" condition.
             state.big_segments_status = BigSegmentsStatus.NOT_CONFIGURED
             return False
-        if state.big_segments_status is None:
-            user_key = str(user.get('key'))
-            result = self.__get_big_segments_membership(user_key)
-            state.big_segments_membership, state.big_segments_status = result
-        segment_ref = _make_big_segment_ref(segment)
-        membership = state.big_segments_membership
-        included = None if membership is None else membership.get(segment_ref, None)
+
+        # A big segment can only apply to one context kind, so if we don't have a key for that kind,
+        # we don't need to bother querying the data.
+        match_context = context.get_individual_context(segment.unbounded_context_kind or Context.DEFAULT_KIND)
+        if match_context is None:
+            return False
+        key = match_context.key
+
+        membership = None
+        has_cached_membership = False
+        if state.big_segments_membership is not None:
+            if key in state.big_segments_membership:
+                has_cached_membership = True
+                membership = state.big_segments_membership[key]
+                # Note that we could have cached a None result from a query, in which case membership
+                # will be None but has_cached_membership will be True.
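The caching behavior described in the comment above (query each context key at most once per evaluation, and treat a cached None result as a valid answer) can be sketched as a standalone pattern. This is an illustrative sketch only; `MembershipCache` and `fake_query` are hypothetical names, not part of the SDK:

```python
from typing import Callable, Dict, Optional

class MembershipCache:
    """Illustrative per-evaluation cache: query each key at most once,
    remembering a None result as a real (negative) answer."""

    def __init__(self, query: Callable[[str], Optional[dict]]):
        self._query = query
        self._cache: Dict[str, Optional[dict]] = {}

    def get(self, key: str) -> Optional[dict]:
        # Use `in`, not a truthiness check: a cached None is still a cache hit.
        if key not in self._cache:
            self._cache[key] = self._query(key)
        return self._cache[key]

calls = []

def fake_query(key: str) -> Optional[dict]:
    calls.append(key)
    return None  # simulate "this context is in no big segments"

cache = MembershipCache(fake_query)
cache.get("user-key")
cache.get("user-key")  # second lookup is served from the cache
```

The point of the `in` test is exactly the `has_cached_membership` flag in the diff: a store query that legitimately returned "no memberships" must not be retried on every segment match.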
+        if not has_cached_membership:
+            if self.__get_big_segments_membership is None:
+                state.big_segments_status = BigSegmentsStatus.NOT_CONFIGURED
+                return False
+            result = self.__get_big_segments_membership(key)
+            # Note that this query is just by key; the context kind doesn't matter because any given
+            # Big Segment can only reference one context kind. So if segment A for the "user" kind
+            # includes a "user" context with key X, and segment B for the "org" kind includes an "org"
+            # context with the same key X, it is fine to say that the membership for key X is
+            # segment A and segment B -- there is no ambiguity.
+            membership, state.big_segments_status = result
+            if state.big_segments_membership is None:
+                state.big_segments_membership = {}
+            state.big_segments_membership[key] = membership
+
+        included = None if membership is None else membership.get(_make_big_segment_ref(segment), None)
         if included is not None:
             return included
-        return _simple_segment_match_user(segment, user, False)
+        return self._simple_segment_match_context(segment, context, state, False)


 # The following functions are declared outside Evaluator because they do not depend on any
 # of Evaluator's state.
-def _get_variation(flag, variation, reason):
-    vars = flag.get('variations') or []
+def _get_variation(flag: FeatureFlag, variation: int, reason: dict) -> EvaluationDetail:
+    vars = flag.variations
     if variation < 0 or variation >= len(vars):
         return EvaluationDetail(None, None, error_reason('MALFORMED_FLAG'))
     return EvaluationDetail(vars[variation], variation, reason)

-def _get_off_value(flag, reason):
-    off_var = flag.get('offVariation')
+def _get_off_value(flag: FeatureFlag, reason: dict) -> EvaluationDetail:
+    off_var = flag.off_variation
     if off_var is None:
         return EvaluationDetail(None, None, reason)
     return _get_variation(flag, off_var, reason)

-def _get_value_for_variation_or_rollout(flag, vr, user, reason):
-    index, inExperiment = _variation_index_for_user(flag, vr, user)
+def _get_value_for_variation_or_rollout(flag: FeatureFlag, vr: VariationOrRollout, context: Context, reason: dict) -> EvaluationDetail:
+    index, inExperiment = _variation_index_for_context(flag, vr, context)
     if index is None:
         return EvaluationDetail(None, None, error_reason('MALFORMED_FLAG'))
     if inExperiment:
         reason['inExperiment'] = inExperiment
     return _get_variation(flag, index, reason)

-def _get_user_attribute(user, attr):
-    if attr in __BUILTINS__:
-        return user.get(attr), False
-    else:  # custom attribute
-        if user.get('custom') is None or user['custom'].get(attr) is None:
-            return None, True
-        return user['custom'][attr], False
-
-def _variation_index_for_user(feature, rule, user):
-    if rule.get('variation') is not None:
-        return (rule['variation'], False)
+def _variation_index_for_context(flag: FeatureFlag, vr: VariationOrRollout, context: Context) -> Tuple[Optional[int], bool]:
+    var = vr.variation
+    if var is not None:
+        return (var, False)

-    rollout = rule.get('rollout')
+    rollout = vr.rollout
     if rollout is None:
         return (None, False)
-    variations = rollout.get('variations')
-    seed = rollout.get('seed')
-    if variations is not None and len(variations) > 0:
-        bucket_by = 'key'
-        if rollout.get('bucketBy') is not None:
-            bucket_by = rollout['bucketBy']
-        bucket = _bucket_user(seed, user, feature['key'], feature['salt'], bucket_by)
-        is_experiment = rollout.get('kind') == 'experiment'
-        sum = 0.0
-        for wv in variations:
-            sum += wv.get('weight', 0.0) / 100000.0
-            if bucket < sum:
-                is_experiment_partition = is_experiment and not wv.get('untracked')
-                return (wv.get('variation'), is_experiment_partition)
-
-        # The user's bucket value was greater than or equal to the end of the last bucket. This could happen due
-        # to a rounding error, or due to the fact that we are scaling to 100000 rather than 99999, or the flag
-        # data could contain buckets that don't actually add up to 100000. Rather than returning an error in
-        # this case (or changing the scaling, which would potentially change the results for *all* users), we
-        # will simply put the user in the last bucket.
-        is_experiment_partition = is_experiment and not variations[-1].get('untracked')
-        return (variations[-1].get('variation'), is_experiment_partition)
-
-    return (None, False)
-
-def _bucket_user(seed, user, key, salt, bucket_by):
-    u_value, should_pass = _get_user_attribute(user, bucket_by)
-    bucket_by_value = _bucketable_string_value(u_value)
-
-    if should_pass or bucket_by_value is None:
+    variations = rollout.variations
+    if len(variations) == 0:
+        return (None, False)
+
+    bucket_by = None if rollout.is_experiment else rollout.bucket_by
+    bucket = _bucket_context(
+        rollout.seed,
+        context,
+        rollout.context_kind,
+        flag.key,
+        flag.salt,
+        bucket_by
+    )
+    is_experiment = rollout.is_experiment and bucket >= 0
+    # _bucket_context returns a negative value if the context didn't exist, in which case we
+    # still end up returning the first bucket, but we will force the "in experiment" state to be false.
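The weighted-rollout scheme used here (hash a stable string into [0, 1), then walk the cumulative variation weights, which are expressed out of 100000) can be illustrated with a standalone sketch. This is a simplified, hypothetical version for illustration; `bucket_value` and `choose_variation` are not the SDK's actual functions:

```python
import hashlib

LONG_SCALE = 0xFFFFFFFFFFFFFFF  # 15 hex digits, matching the sha1 hexdigest[:15] truncation

def bucket_value(seed, flag_key: str, salt: str, id_value: str) -> float:
    # Hash "<prefix>.<id>" with SHA-1, keep the first 15 hex digits, scale to [0, 1].
    # The prefix is either an experiment seed or "<flag key>.<salt>", so the same
    # context gets a stable bucket per flag but different buckets across flags.
    prefix = str(seed) if seed is not None else '%s.%s' % (flag_key, salt)
    hash_key = '%s.%s' % (prefix, id_value)
    hash_val = int(hashlib.sha1(hash_key.encode('utf-8')).hexdigest()[:15], 16)
    return hash_val / LONG_SCALE

def choose_variation(bucket: float, weighted_variations):
    # weighted_variations: list of (variation_index, weight), weights summing to 100000.
    total = 0.0
    for variation, weight in weighted_variations:
        total += weight / 100000.0
        if bucket < total:
            return variation
    # Rounding fallback: a bucket past the last boundary lands in the last variation.
    return weighted_variations[-1][0]
```

Because the bucket is a deterministic function of the hash input, the same context key always lands in the same variation for a given flag and salt, which is what makes percentage rollouts sticky without any stored state.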
+
+    sum = 0.0
+    for wv in variations:
+        sum += wv.weight / 100000.0
+        if bucket < sum:
+            is_experiment_partition = is_experiment and not wv.untracked
+            return (wv.variation, is_experiment_partition)
+
+    # The context's bucket value was greater than or equal to the end of the last bucket. This could happen due
+    # to a rounding error, or due to the fact that we are scaling to 100000 rather than 99999, or the flag
+    # data could contain buckets that don't actually add up to 100000. Rather than returning an error in
+    # this case (or changing the scaling, which would potentially change the results for *all* contexts), we
+    # will simply put the context in the last bucket.
+    is_experiment_partition = is_experiment and not variations[-1].untracked
+    return (variations[-1].variation, is_experiment_partition)
+
+def _bucket_context(
+        seed: Optional[int],
+        context: Context,
+        context_kind: Optional[str],
+        key: str,
+        salt: str,
+        bucket_by: Optional[AttributeRef]
+    ) -> float:
+    match_context = context.get_individual_context(context_kind or Context.DEFAULT_KIND)
+    if match_context is None:
+        return -1
+    clause_value = match_context.key if bucket_by is None else _get_context_value_by_attr_ref(match_context, bucket_by)
+    if clause_value is None:
         return 0.0
-
-    id_hash = u_value
-    if user.get('secondary') is not None:
-        id_hash = id_hash + '.' + user['secondary']
-
+    bucket_by_value = _bucketable_string_value(clause_value)
+    if bucket_by_value is None:
+        return 0.0
+    id_hash = clause_value
     if seed is not None:
         prefix = str(seed)
     else:
         prefix = '%s.%s' % (key, salt)
-
     hash_key = '%s.%s' % (prefix, id_hash)
     hash_val = int(hashlib.sha1(hash_key.encode('utf-8')).hexdigest()[:15], 16)
     result = hash_val / __LONG_SCALE__
     return result

-def _bucketable_string_value(u_value):
+def _bucketable_string_value(u_value) -> Optional[str]:
     if isinstance(u_value, bool):
         return None
     elif isinstance(u_value, (str, int)):
@@ -250,66 +421,62 @@ def _bucketable_string_value(u_value):
         return None

-def _clause_matches_user_no_segments(clause, user):
-    u_value, should_pass = _get_user_attribute(user, clause.get('attribute'))
-    if should_pass is True:
+def _context_key_is_in_target_list(context: Context, context_kind: Optional[str], keys: Set[str]) -> bool:
+    if keys is None or len(keys) == 0:
         return False
-    if u_value is None:
+    match_context = context.get_individual_context(context_kind or Context.DEFAULT_KIND)
+    return match_context is not None and match_context.key in keys
+
+def _get_context_value_by_attr_ref(context: Context, attr: AttributeRef) -> Any:
+    if attr is None:
+        raise EvaluationException("rule clause did not specify an attribute")
+    if attr.error is not None:
+        raise EvaluationException("invalid attribute reference: " + attr.error)
+    name = attr[0]
+    if name is None:
         return None
-    # is the attr an array?
-    op_fn = operators.ops[clause['op']]
-    if isinstance(u_value, (list, tuple)):
-        for u in u_value:
-            if _match_any(op_fn, u, clause.get('values') or []):
-                return _maybe_negate(clause, True)
-        return _maybe_negate(clause, False)
-    else:
-        return _maybe_negate(clause, _match_any(op_fn, u_value, clause.get('values') or []))
-
-def _simple_segment_match_user(segment, user, use_includes_and_excludes):
-    key = user.get('key')
-    if key is not None:
-        if use_includes_and_excludes:
-            if key in segment.get('included', []):
-                return True
-            if key in segment.get('excluded', []):
-                return False
-    for rule in segment.get('rules', []):
-        if _segment_rule_matches_user(rule, user, segment.get('key'), segment.get('salt')):
-            return True
+    value = context.get(name)
+    depth = attr.depth
+    i = 1
+    while i < depth:
+        if not isinstance(value, dict):
+            return None  # can't get subproperty if we're not in a JSON object
+        value = value.get(attr[i])
+        i += 1
+    return value
+
+def _match_single_context_value(clause: Clause, context_value: Any) -> bool:
+    op_fn = operators.ops.get(clause.op)
+    if op_fn is None:
+        return False
+    values_preprocessed = clause.values_preprocessed
+    for i, v in enumerate(clause.values):
+        preprocessed = None if values_preprocessed is None else values_preprocessed[i]
+        if op_fn(context_value, v, preprocessed):
+            return True
     return False

-def _segment_rule_matches_user(rule, user, segment_key, salt):
-    for clause in rule.get('clauses') or []:
-        if not _clause_matches_user_no_segments(clause, user):
-            return False
-
-    # If the weight is absent, this rule matches
-    if 'weight' not in rule or rule['weight'] is None:
-        return True
-
-    # All of the clauses are met. See if the user buckets in
-    bucket_by = 'key' if rule.get('bucketBy') is None else rule['bucketBy']
-    bucket = _bucket_user(None, user, segment_key, salt, bucket_by)
-    weight = rule['weight'] / 100000.0
-    return bucket < weight
-
-def _match_any(op_fn, u, vals):
-    for v in vals:
-        if op_fn(u, v):
+def _match_clause_by_kind(clause: Clause, context: Context) -> bool:
+    # If attribute is "kind", then we treat operator and values as a match expression against a list
+    # of all individual kinds in the context. That is, for a multi-kind context with kinds of "org"
+    # and "user", it is a match if either of those strings is a match with Operator and Values.
+    for i in range(context.individual_context_count):
+        c = context.get_individual_context(i)
+        if c is not None and _match_single_context_value(clause, c.kind):
             return True
     return False

-def _maybe_negate(clause, val):
-    if clause.get('negate', False) is True:
-        return not val
-    return val
+def _maybe_negate(clause: Clause, val: bool) -> bool:
+    return not val if clause.negate else val

-def _make_big_segment_ref(segment: dict) -> str:
+def _make_big_segment_ref(segment: Segment) -> str:
     # The format of Big Segment references is independent of what store implementation is being
     # used; the store implementation receives only this string and does not know the details of
     # the data model. The Relay Proxy will use the same format when writing to the store.
-    return "%s.g%d" % (segment.get('key', ''), segment.get('generation', 0))
+    return "%s.g%d" % (segment.key, segment.generation or 0)
+
+def _target_match_result(flag: FeatureFlag, var: int) -> EvaluationDetail:
+    return _get_variation(flag, var, {'kind': 'TARGET_MATCH'})

 def error_reason(error_kind: str) -> dict:
     return {'kind': 'ERROR', 'errorKind': error_kind}
diff --git a/ldclient/impl/event_factory.py b/ldclient/impl/event_factory.py
deleted file mode 100644
index 12823bed..00000000
--- a/ldclient/impl/event_factory.py
+++ /dev/null
@@ -1,119 +0,0 @@
-
-# Event constructors are centralized here to avoid mistakes and repetitive logic.
-# The LDClient owns two instances of _EventFactory: one that always embeds evaluation reasons
-# in the events (for when variation_detail is called) and one that doesn't.
-#
-# Note that none of these methods fill in the "creationDate" property, because in the Python
-# client, that is done by DefaultEventProcessor.send_event().
-
-class _EventFactory:
-    def __init__(self, with_reasons):
-        self._with_reasons = with_reasons
-
-    def new_eval_event(self, flag, user, detail, default_value, prereq_of_flag = None):
-        add_experiment_data = self.is_experiment(flag, detail.reason)
-        e = {
-            'kind': 'feature',
-            'key': flag.get('key'),
-            'user': user,
-            'value': detail.value,
-            'variation': detail.variation_index,
-            'default': default_value,
-            'version': flag.get('version')
-        }
-        # the following properties are handled separately so we don't waste bandwidth on unused keys
-        if add_experiment_data or flag.get('trackEvents', False):
-            e['trackEvents'] = True
-        if flag.get('debugEventsUntilDate', None):
-            e['debugEventsUntilDate'] = flag.get('debugEventsUntilDate')
-        if prereq_of_flag is not None:
-            e['prereqOf'] = prereq_of_flag.get('key')
-        if add_experiment_data or self._with_reasons:
-            e['reason'] = detail.reason
-        if user is not None and user.get('anonymous'):
-            e['contextKind'] = self._user_to_context_kind(user)
-        return e
-
-    def new_default_event(self, flag, user, default_value, reason):
-        e = {
-            'kind': 'feature',
-            'key': flag.get('key'),
-            'user': user,
-            'value': default_value,
-            'default': default_value,
-            'version': flag.get('version')
-        }
-        # the following properties are handled separately so we don't waste bandwidth on unused keys
-        if flag.get('trackEvents', False):
-            e['trackEvents'] = True
-        if flag.get('debugEventsUntilDate', None):
-            e['debugEventsUntilDate'] = flag.get('debugEventsUntilDate')
-        if self._with_reasons:
-            e['reason'] = reason
-        if user is not None and user.get('anonymous'):
-            e['contextKind'] = self._user_to_context_kind(user)
-        return e
-
-    def new_unknown_flag_event(self, key, user, default_value, reason):
-        e = {
-            'kind': 'feature',
-            'key': key,
-            'user': user,
-            'value': default_value,
-            'default': default_value
-        }
-        if self._with_reasons:
-            e['reason'] = reason
-        if user is not None and user.get('anonymous'):
-            e['contextKind'] = self._user_to_context_kind(user)
-        return e
-
-    def new_identify_event(self, user):
-        return {
-            'kind': 'identify',
-            'key': str(user.get('key')),
-            'user': user
-        }
-
-    def new_custom_event(self, event_name, user, data, metric_value):
-        e = {
-            'kind': 'custom',
-            'key': event_name,
-            'user': user
-        }
-        if data is not None:
-            e['data'] = data
-        if metric_value is not None:
-            e['metricValue'] = metric_value
-        if user.get('anonymous'):
-            e['contextKind'] = self._user_to_context_kind(user)
-        return e
-
-    def new_alias_event(self, current_user, previous_user):
-        return {
-            'kind': 'alias',
-            'key': current_user.get('key'),
-            'contextKind': self._user_to_context_kind(current_user),
-            'previousKey': previous_user.get('key'),
-            'previousContextKind': self._user_to_context_kind(previous_user)
-        }
-
-    def _user_to_context_kind(self, user):
-        if user.get('anonymous'):
-            return "anonymousUser"
-        else:
-            return "user"
-
-    @staticmethod
-    def is_experiment(flag, reason):
-        if reason is not None:
-            if reason.get('inExperiment'):
-                return True
-            kind = reason['kind']
-            if kind == 'RULE_MATCH':
-                index = reason['ruleIndex']
-                rules = flag.get('rules') or []
-                return index >= 0 and index < len(rules) and rules[index].get('trackEvents', False)
-            elif kind == 'FALLTHROUGH':
-                return flag.get('trackEventsFallthrough', False)
-        return False
diff --git a/ldclient/impl/events/__init__.py b/ldclient/impl/events/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/ldclient/diagnostics.py b/ldclient/impl/events/diagnostics.py
similarity index 93%
rename from ldclient/diagnostics.py
rename to ldclient/impl/events/diagnostics.py
index e40b4ff0..d6ce98c9 100644
--- a/ldclient/diagnostics.py
+++ b/ldclient/impl/events/diagnostics.py
@@ -1,8 +1,3 @@
-"""
-Implementation details of the diagnostic event generation.
-"""
-# currently excluded from documentation - see docs/README.md
-
 import threading
 import time
 import uuid
@@ -76,9 +71,8 @@ def _create_diagnostic_config_object(config):
             'usingRelayDaemon': config.use_ldd,
             'allAttributesPrivate': config.all_attributes_private,
             'pollingIntervalMillis': config.poll_interval * 1000,
-            'userKeysCapacity': config.user_keys_capacity,
-            'userKeysFlushIntervalMillis': config.user_keys_flush_interval * 1000,
-            'inlineUsersInEvents': config.inline_users_in_events,
+            'userKeysCapacity': config.context_keys_capacity,
+            'userKeysFlushIntervalMillis': config.context_keys_flush_interval * 1000,
             'diagnosticRecordingIntervalMillis': config.diagnostic_recording_interval * 1000,
             'dataStoreType': _get_component_type_name(config.feature_store, config, 'memory')}
diff --git a/ldclient/impl/events/event_context_formatter.py b/ldclient/impl/events/event_context_formatter.py
new file mode 100644
index 00000000..7af7b50b
--- /dev/null
+++ b/ldclient/impl/events/event_context_formatter.py
@@ -0,0 +1,95 @@
+from typing import Any, List, Optional
+
+from ldclient.context import Context
+from ldclient.impl.model import AttributeRef
+
+
+class EventContextFormatter:
+    IGNORE_ATTRS = frozenset(['key', 'custom', 'anonymous'])
+    ALLOWED_TOP_LEVEL_ATTRS = frozenset(['key', 'secondary', 'ip', 'country', 'email',
+                                         'firstName', 'lastName', 'avatar', 'name', 'anonymous', 'custom'])
+
+    def __init__(self, all_attributes_private: bool, private_attributes: List[str]):
+        self._all_attributes_private = all_attributes_private
+        self._private_attributes = []  # type: List[AttributeRef]
+        for p in private_attributes:
+            ar = AttributeRef.from_path(p)
+            if ar.valid:
+                self._private_attributes.append(ar)
+
+    def format_context(self, context: Context) -> dict:
+        if context.multiple:
+            out = {'kind': 'multi'}  # type: dict[str, Any]
+            for i in range(context.individual_context_count):
+                c = context.get_individual_context(i)
+                if c is not None:
+                    out[c.kind] = self._format_context_single(c, False)
+            return out
+        else:
+            return self._format_context_single(context, True)
+
+    def _format_context_single(self, context: Context, include_kind: bool) -> dict:
+        out = {'key': context.key}  # type: dict[str, Any]
+        if include_kind:
+            out['kind'] = context.kind
+        if context.anonymous:
+            out['anonymous'] = True
+
+        redacted = []  # type: List[str]
+        all_private = self._private_attributes
+        for p in context.private_attributes:
+            if all_private is self._private_attributes:
+                all_private = all_private.copy()
+            ar = AttributeRef.from_path(p)
+            if ar.valid:
+                all_private.append(ar)
+
+        if context.name is not None and not self._check_whole_attr_private('name', all_private, redacted):
+            out['name'] = context.name
+
+        for attr in context.custom_attributes:
+            if not self._check_whole_attr_private(attr, all_private, redacted):
+                value = context.get(attr)
+                out[attr] = self._redact_json_value(None, attr, value, all_private, redacted)
+
+        if len(redacted) != 0:
+            out['_meta'] = {'redactedAttributes': redacted}
+
+        return out
+
+    def _check_whole_attr_private(self, attr: str, all_private: List[AttributeRef], redacted: List[str]) -> bool:
+        if self._all_attributes_private:
+            redacted.append(attr)
+            return True
+        for p in all_private:
+            if p.depth == 1 and p[0] == attr:
+                redacted.append(attr)
+                return True
+        return False
+
+    def _redact_json_value(self, parent_path: Optional[List[str]], name: str, value: Any, all_private: List[AttributeRef],
+                           redacted: List[str]) -> Any:
+        if not isinstance(value, dict) or len(value) == 0:
+            return value
+        ret = {}
+        current_path = parent_path.copy() if parent_path else []
+        current_path.append(name)
+        for k, v in value.items():
+            was_redacted = False
+            for p in all_private:
+                if p.depth != len(current_path) + 1:
+                    continue
+                if p[len(current_path)] != k:
+                    continue
+                match = True
+                for i, component in enumerate(current_path):
+                    if p[i] != component:
+                        match = False
+                        break
+                if match:
+                    redacted.append(p.path)
+                    was_redacted = True
+                    break
+            if not was_redacted:
+                ret[k] = self._redact_json_value(current_path, k, v, all_private, redacted)
+        return ret
diff --git a/ldclient/event_processor.py b/ldclient/impl/events/event_processor.py
similarity index 64%
rename from ldclient/event_processor.py
rename to ldclient/impl/events/event_processor.py
index e13a0d2d..bf246153 100644
--- a/ldclient/event_processor.py
+++ b/ldclient/impl/events/event_processor.py
@@ -1,128 +1,120 @@
 """
 Implementation details of the analytics event delivery component.
""" -# currently excluded from documentation - see docs/README.md from calendar import timegm from collections import namedtuple from email.utils import parsedate -import errno import json from threading import Event, Lock, Thread +from typing import Any, List, Optional import time import uuid import queue import urllib3 - -from ldclient.event_summarizer import EventSummarizer -from ldclient.fixed_thread_pool import FixedThreadPool +from ldclient.config import Config + +from ldclient.context import Context +from ldclient.impl.events.diagnostics import create_diagnostic_init +from ldclient.impl.events.event_context_formatter import EventContextFormatter +from ldclient.impl.events.event_summarizer import EventSummarizer, EventSummary +from ldclient.impl.events.types import EventInput, EventInputCustom, EventInputEvaluation, EventInputIdentify +from ldclient.impl.fixed_thread_pool import FixedThreadPool from ldclient.impl.http import _http_factory +from ldclient.impl.lru_cache import SimpleLRUCache from ldclient.impl.repeating_task import RepeatingTask -from ldclient.lru_cache import SimpleLRUCache -from ldclient.user_filter import UserFilter +from ldclient.impl.util import check_if_error_is_recoverable_and_log, current_time_millis, is_http_error_recoverable, log, _headers from ldclient.interfaces import EventProcessor -from ldclient.util import log -from ldclient.util import check_if_error_is_recoverable_and_log, is_http_error_recoverable, stringify_attrs, throw_if_unsuccessful_response, _headers -from ldclient.diagnostics import create_diagnostic_init __MAX_FLUSH_THREADS__ = 5 -__CURRENT_EVENT_SCHEMA__ = 3 -__USER_ATTRS_TO_STRINGIFY_FOR_EVENTS__ = [ "key", "secondary", "ip", "country", "email", "firstName", "lastName", "avatar", "name" ] +__CURRENT_EVENT_SCHEMA__ = 4 EventProcessorMessage = namedtuple('EventProcessorMessage', ['type', 'param']) +class DebugEvent: + __slots__ = ['original_input'] + + def __init__(self, original_input: EventInputEvaluation): + 
self.original_input = original_input + +class IndexEvent: + __slots__ = ['timestamp', 'context'] + + def __init__(self, timestamp: int, context: Context): + self.timestamp = timestamp + self.context = context + + class EventOutputFormatter: - def __init__(self, config): - self._inline_users = config.inline_users_in_events - self._user_filter = UserFilter(config) + def __init__(self, config: Config): + self._context_formatter = EventContextFormatter(config.all_attributes_private, config.private_attributes) - def make_output_events(self, events, summary): + def make_output_events(self, events: List[Any], summary: EventSummary): events_out = [ self.make_output_event(e) for e in events ] - if len(summary.counters) > 0: + if not summary.is_empty(): events_out.append(self.make_summary_event(summary)) return events_out - def make_output_event(self, e): - kind = e['kind'] - if kind == 'feature': - is_debug = e.get('debug') - out = { - 'kind': 'debug' if is_debug else 'feature', - 'creationDate': e['creationDate'], - 'key': e['key'], - 'version': e.get('version'), - 'variation': e.get('variation'), - 'value': e.get('value'), - 'default': e.get('default') - } - if 'prereqOf' in e: - out['prereqOf'] = e.get('prereqOf') - if self._inline_users or is_debug: - out['user'] = self._process_user(e) - else: - out['userKey'] = self._get_userkey(e) - if e.get('reason'): - out['reason'] = e.get('reason') - if e.get('contextKind'): - out['contextKind'] = e.get('contextKind') + def make_output_event(self, e: Any): + if isinstance(e, EventInputEvaluation): + out = self._base_eval_props(e, 'feature') + out['contextKeys'] = self._context_keys(e.context) + return out + elif isinstance(e, DebugEvent): + out = self._base_eval_props(e.original_input, 'debug') + out['context'] = self._process_context(e.original_input.context) return out - elif kind == 'identify': + elif isinstance(e, EventInputIdentify): return { 'kind': 'identify', - 'creationDate': e['creationDate'], - 'key': 
self._get_userkey(e), - 'user': self._process_user(e) + 'creationDate': e.timestamp, + 'context': self._process_context(e.context) + } + elif isinstance(e, IndexEvent): + return { + 'kind': 'index', + 'creationDate': e.timestamp, + 'context': self._process_context(e.context) } - elif kind == 'custom': + elif isinstance(e, EventInputCustom): out = { 'kind': 'custom', - 'creationDate': e['creationDate'], - 'key': e['key'] + 'creationDate': e.timestamp, + 'key': e.key, + 'contextKeys': self._context_keys(e.context) } - if self._inline_users: - out['user'] = self._process_user(e) - else: - out['userKey'] = self._get_userkey(e) - if e.get('data') is not None: - out['data'] = e['data'] - if e.get('metricValue') is not None: - out['metricValue'] = e['metricValue'] - if e.get('contextKind'): - out['contextKind'] = e.get('contextKind') + if e.data is not None: + out['data'] = e.data + if e.metric_value is not None: + out['metricValue'] = e.metric_value return out - elif kind == 'index': - return { - 'kind': 'index', - 'creationDate': e['creationDate'], - 'user': self._process_user(e) - } - else: - return e + return None """ Transform summarizer data into the format used for the event payload. 
""" - def make_summary_event(self, summary): - flags_out = dict() - for ckey, cval in summary.counters.items(): - flag_key, variation, version = ckey - flag_data = flags_out.get(flag_key) - if flag_data is None: - flag_data = { 'default': cval['default'], 'counters': [] } - flags_out[flag_key] = flag_data - counter = { - 'count': cval['count'], - 'value': cval['value'] - } - if variation is not None: - counter['variation'] = variation - if version is None: - counter['unknown'] = True - else: - counter['version'] = version - flag_data['counters'].append(counter) + def make_summary_event(self, summary: EventSummary): + flags_out = dict() # type: dict[str, Any] + for key, flag_data in summary.flags.items(): + flag_data_out = {'default': flag_data.default, 'contextKinds': list(flag_data.context_kinds)} + counters = [] # type: list[dict[str, Any]] + for ckey, cval in flag_data.counters.items(): + variation, version = ckey + counter = { + 'count': cval.count, + 'value': cval.value + } + if variation is not None: + counter['variation'] = variation + if version is None: + counter['unknown'] = True + else: + counter['version'] = version + counters.append(counter) + flag_data_out['counters'] = counters + flags_out[key] = flag_data_out return { 'kind': 'summary', 'startDate': summary.start_date, @@ -130,12 +122,34 @@ def make_summary_event(self, summary): 'features': flags_out } - def _process_user(self, event): - filtered = self._user_filter.filter_user_props(event['user']) - return stringify_attrs(filtered, __USER_ATTRS_TO_STRINGIFY_FOR_EVENTS__) - - def _get_userkey(self, event): - return str(event['user'].get('key')) + def _process_context(self, context: Context): + return self._context_formatter.format_context(context) + + def _context_keys(self, context: Context): + out = {} + for i in range(context.individual_context_count): + c = context.get_individual_context(i) + if c is not None: + out[c.kind] = c.key + return out + + def _base_eval_props(self, e: 
EventInputEvaluation, kind: str) -> dict: + out = { + 'kind': kind, + 'creationDate': e.timestamp, + 'key': e.key, + 'value': e.value, + 'default': e.default_value + } + if e.flag is not None: + out['version'] = e.flag.version + if e.variation is not None: + out['variation'] = e.variation + if e.reason is not None: + out['reason'] = e.reason + if e.prereq_of is not None: + out['prereqOf'] = e.prereq_of.key + return out class EventPayloadSendTask: @@ -150,7 +164,7 @@ def run(self): try: output_events = self._formatter.make_output_events(self._payload.events, self._payload.summary) resp = self._do_send(output_events) - except Exception: + except Exception as e: log.warning( 'Unhandled exception in event processor. Analytics events were not processed.', exc_info=True) @@ -158,7 +172,7 @@ def run(self): def _do_send(self, output_events): # noinspection PyBroadException try: - json_body = json.dumps(output_events) + json_body = json.dumps(output_events, separators=(',',':')) log.debug('Sending events payload: ' + json_body) payload_id = str(uuid.uuid4()) r = _post_events_with_retry( @@ -212,7 +226,7 @@ def __init__(self, capacity): self._exceeded_capacity = False self._dropped_events = 0 - def add_event(self, event): + def add_event(self, event: Any): if len(self._events) >= self._capacity: self._dropped_events += 1 if not self._exceeded_capacity: @@ -222,7 +236,7 @@ def add_event(self, event): self._events.append(event) self._exceeded_capacity = False - def add_to_summary(self, event): + def add_to_summary(self, event: EventInputEvaluation): self._summarizer.summarize_event(event) def get_and_clear_dropped_count(self): @@ -246,10 +260,10 @@ def __init__(self, inbox, config, http_client, diagnostic_accumulator=None): self._close_http = (http_client is None) # so we know whether to close it later self._disabled = False self._outbox = EventBuffer(config.events_max_pending) - self._user_keys = SimpleLRUCache(config.user_keys_capacity) + self._context_keys = 
SimpleLRUCache(config.context_keys_capacity) self._formatter = EventOutputFormatter(config) self._last_known_past_time = 0 - self._deduplicated_users = 0 + self._deduplicated_contexts = 0 self._diagnostic_accumulator = None if config.diagnostic_opt_out else diagnostic_accumulator self._flush_workers = FixedThreadPool(__MAX_FLUSH_THREADS__, "ldclient.flush") @@ -274,8 +288,8 @@ def _run_main_loop(self): self._process_event(message.param) elif message.type == 'flush': self._trigger_flush() - elif message.type == 'flush_users': - self._user_keys.clear() + elif message.type == 'flush_contexts': + self._context_keys.clear() elif message.type == 'diagnostic': self._send_and_reset_diagnostics() elif message.type == 'test_sync': @@ -287,61 +301,58 @@ def _run_main_loop(self): self._do_shutdown() message.param.set() return - except Exception: + except Exception as e: log.error('Unhandled exception in event processor', exc_info=True) - def _process_event(self, event): + def _process_event(self, event: EventInput): if self._disabled: return - # Always record the event in the summarizer. - self._outbox.add_to_summary(event) - # Decide whether to add the event to the payload. Feature events may be added twice, once for # the event (if tracked) and once for debugging. - add_full_event = False - add_debug_event = False - add_index_event = False - if event['kind'] == "feature": - add_full_event = event.get('trackEvents') - add_debug_event = self._should_debug_event(event) - else: - add_full_event = True - - # For each user we haven't seen before, we add an index event - unless this is already - # an identify event for that user. 
- if not (add_full_event and self._config.inline_users_in_events): - user = event.get('user') - if user and 'key' in user: - is_index_event = event['kind'] == 'identify' - already_seen = self.notice_user(user) - add_index_event = not is_index_event and not already_seen - if not is_index_event and already_seen: - self._deduplicated_users += 1 - - if add_index_event: - ie = { 'kind': 'index', 'creationDate': event['creationDate'], 'user': user } - self._outbox.add_event(ie) - if add_full_event: - self._outbox.add_event(event) - if add_debug_event: - debug_event = event.copy() - debug_event['debug'] = True + context = None # type: Optional[Context] + can_add_index = True + full_event = None # type: Any + debug_event = None # type: Optional[DebugEvent] + + if isinstance(event, EventInputEvaluation): + context = event.context + self._outbox.add_to_summary(event) + if event.track_events: + full_event = event + if self._should_debug_event(event): + debug_event = DebugEvent(event) + elif isinstance(event, EventInputIdentify): + context = event.context + full_event = event + can_add_index = False # an index event would be redundant if there's an identify event + elif isinstance(event, EventInputCustom): + context = event.context + full_event = event + + # For each context we haven't seen before, we add an index event - unless this is already + # an identify event. + if context is not None: + already_seen = self._context_keys.put(context.fully_qualified_key, True) + if can_add_index: + if already_seen: + self._deduplicated_contexts += 1 + else: + self._outbox.add_event(IndexEvent(event.timestamp, context)) + + if full_event: + self._outbox.add_event(full_event) + + if debug_event: self._outbox.add_event(debug_event) - # Add to the set of users we've noticed, and return true if the user was already known to us. 
- def notice_user(self, user): - if user is None or 'key' not in user: + def _should_debug_event(self, event: EventInputEvaluation): + if event.flag is None: return False - key = user['key'] - return self._user_keys.put(key, True) - - def _should_debug_event(self, event): - debug_until = event.get('debugEventsUntilDate') + debug_until = event.flag.debug_events_until_date if debug_until is not None: last_past = self._last_known_past_time - now = int(time.time() * 1000) - if debug_until > last_past and debug_until > now: + if debug_until > last_past and debug_until > current_time_millis(): return True return False @@ -351,7 +362,7 @@ def _trigger_flush(self): payload = self._outbox.get_payload() if self._diagnostic_accumulator: self._diagnostic_accumulator.record_events_in_batch(len(payload.events)) - if len(payload.events) > 0 or len(payload.summary.counters) > 0: + if len(payload.events) > 0 or not payload.summary.is_empty(): task = EventPayloadSendTask(self._http, self._config, self._formatter, payload, self._handle_response) if self._flush_workers.execute(task.run): @@ -375,8 +386,8 @@ def _handle_response(self, r): def _send_and_reset_diagnostics(self): if self._diagnostic_accumulator is not None: dropped_event_count = self._outbox.get_and_clear_dropped_count() - stats_event = self._diagnostic_accumulator.create_event_and_reset(dropped_event_count, self._deduplicated_users) - self._deduplicated_users = 0 + stats_event = self._diagnostic_accumulator.create_event_and_reset(dropped_event_count, self._deduplicated_contexts) + self._deduplicated_contexts = 0 task = DiagnosticEventSendTask(self._http, self._config, stats_event) self._diagnostic_flush_workers.execute(task.run) @@ -392,9 +403,9 @@ def __init__(self, config, http=None, dispatcher_class=None, diagnostic_accumula self._inbox = queue.Queue(config.events_max_pending) self._inbox_full = False self._flush_timer = RepeatingTask(config.flush_interval, config.flush_interval, self.flush) - self._users_flush_timer 
= RepeatingTask(config.user_keys_flush_interval, config.user_keys_flush_interval, self._flush_users) + self._contexts_flush_timer = RepeatingTask(config.context_keys_flush_interval, config.context_keys_flush_interval, self._flush_contexts) self._flush_timer.start() - self._users_flush_timer.start() + self._contexts_flush_timer.start() if diagnostic_accumulator is not None: self._diagnostic_event_timer = RepeatingTask(config.diagnostic_recording_interval, config.diagnostic_recording_interval, self._send_diagnostic) @@ -407,8 +418,7 @@ def __init__(self, config, http=None, dispatcher_class=None, diagnostic_accumula (dispatcher_class or EventDispatcher)(self._inbox, config, http, diagnostic_accumulator) - def send_event(self, event): - event['creationDate'] = int(time.time() * 1000) + def send_event(self, event: EventInput): self._post_to_inbox(EventProcessorMessage('event', event)) def flush(self): @@ -420,7 +430,7 @@ def stop(self): return self._closed = True self._flush_timer.stop() - self._users_flush_timer.stop() + self._contexts_flush_timer.stop() if self._diagnostic_event_timer: self._diagnostic_event_timer.stop() self.flush() @@ -437,8 +447,8 @@ def _post_to_inbox(self, message): self._inbox_full = True log.warning("Events are being produced faster than they can be processed; some events will be dropped") - def _flush_users(self): - self._inbox.put(EventProcessorMessage('flush_users', None)) + def _flush_contexts(self): + self._inbox.put(EventProcessorMessage('flush_contexts', None)) def _send_diagnostic(self): self._inbox.put(EventProcessorMessage('diagnostic', None)) diff --git a/ldclient/impl/events/event_summarizer.py b/ldclient/impl/events/event_summarizer.py new file mode 100644 index 00000000..1577d278 --- /dev/null +++ b/ldclient/impl/events/event_summarizer.py @@ -0,0 +1,98 @@ +""" +Implementation details of the analytics event delivery component. 
+""" + +from collections import namedtuple +from typing import Any, Dict, List, Optional, Set, Tuple + +from ldclient.impl.events.types import EventInputEvaluation + + +class EventSummaryCounter: + __slots__ = ['count', 'value'] + + def __init__(self, count: int, value: Any): + self.count = count + self.value = value + + def __eq__(self, other: Any) -> bool: # used only in tests + return isinstance(other, EventSummaryCounter) and \ + other.count == self.count and other.value == self.value + + def __repr__(self) -> str: # used only in test debugging + return "EventSummaryCounter(%d, %s)" % (self.count, self.value) + + +class EventSummaryFlag: + __slots__ = ['context_kinds', 'default', 'counters'] + + def __init__(self, context_kinds: Set[str], default: Any, counters: Dict[Tuple[Optional[int], Optional[int]], EventSummaryCounter]): + self.context_kinds = context_kinds + self.counters = counters + self.default = default + + def __eq__(self, other: Any) -> bool: # used only in tests + return isinstance(other, EventSummaryFlag) and \ + other.context_kinds == self.context_kinds and other.counters == self.counters and other.default == self.default + + def __repr__(self) -> str: # used only in test debugging + return "EventSummaryFlag(%s, %s, %s)" % (self.context_kinds, self.counters, self.default) + + +class EventSummary: + __slots__ = ['start_date', 'end_date', 'flags'] + + def __init__(self, start_date: int, end_date: int, flags: Dict[str, EventSummaryFlag]): + self.start_date = start_date + self.end_date = end_date + self.flags = flags + + def is_empty(self) -> bool: + return len(self.flags) == 0 + + +class EventSummarizer: + def __init__(self): + self.start_date = 0 + self.end_date = 0 + self.flags = dict() # type: Dict[str, EventSummaryFlag] + + """ + Add this event to our counters, if it is a type of event we need to count. 
+ """ + def summarize_event(self, event: EventInputEvaluation): + flag_data = self.flags.get(event.key) + if flag_data is None: + flag_data = EventSummaryFlag(set(), event.default_value, dict()) + self.flags[event.key] = flag_data + + context = event.context + for i in range(context.individual_context_count): + c = context.get_individual_context(i) + if c is not None: + flag_data.context_kinds.add(c.kind) + + counter_key = (event.variation, None if event.flag is None else event.flag.version) + counter = flag_data.counters.get(counter_key) + if counter is None: + counter = EventSummaryCounter(1, event.value) + flag_data.counters[counter_key] = counter + else: + counter.count += 1 + + date = event.timestamp + if self.start_date == 0 or date < self.start_date: + self.start_date = date + if date > self.end_date: + self.end_date = date + + """ + Return the current summarized event data. + """ + def snapshot(self): + return EventSummary(start_date = self.start_date, end_date = self.end_date, flags = self.flags) + + def clear(self): + self.start_date = 0 + self.end_date = 0 + self.flags = dict() diff --git a/ldclient/impl/events/types.py b/ldclient/impl/events/types.py new file mode 100644 index 00000000..0526a7db --- /dev/null +++ b/ldclient/impl/events/types.py @@ -0,0 +1,173 @@ +from ldclient.context import Context +from ldclient.evaluation import EvaluationDetail +from ldclient.impl import AnyNum +from ldclient.impl.model import FeatureFlag +from ldclient.impl.util import current_time_millis + +import json +from typing import Any, Callable, Optional + +# These event types are not the event data that is sent to LaunchDarkly; they're the input +# parameters that are passed to EventProcessor, which translates them into event data (for +# instance, many evaluations may produce just one summary event). Since the SDK generates +# these at high volume, we want them to be efficient so we use attributes and slots rather +# than dictionaries. 
+ +class EventInput: + __slots__ = ['timestamp', 'context'] + + def __init__(self, timestamp: int, context: Context): + self.timestamp = timestamp + self.context = context + + def __repr__(self) -> str: # used only in test debugging + return "%s(%s)" % (self.__class__.__name__, json.dumps(self.to_debugging_dict())) + + def __eq__(self, other) -> bool: # used only in tests + return isinstance(other, EventInput) and self.to_debugging_dict() == other.to_debugging_dict() + + def to_debugging_dict(self) -> dict: + pass + +class EventInputEvaluation(EventInput): + __slots__ = ['key', 'flag', 'variation', 'value', 'reason', 'default_value', 'prereq_of', 'track_events'] + + def __init__(self, timestamp: int, context: Context, key: str, flag: Optional[FeatureFlag], + variation: Optional[int], value: Any, reason: Optional[dict], + default_value: Any, prereq_of: Optional[FeatureFlag] = None, track_events: bool = False): + super().__init__(timestamp, context) + self.key = key + self.flag = flag + self.variation = variation + self.value = value + self.reason = reason + self.default_value = default_value + self.prereq_of = prereq_of + self.track_events = track_events + + def to_debugging_dict(self) -> dict: + return { + "timestamp": self.timestamp, + "context": self.context.to_dict(), + "key": self.key, + "flag": {"key": self.flag.key} if self.flag else None, + "variation": self.variation, + "value": self.value, + "reason": self.reason, + "default_value": self.default_value, + "prereq_of": {"key": self.prereq_of.key} if self.prereq_of else None, + "track_events": self.track_events + } + +class EventInputIdentify(EventInput): + def to_debugging_dict(self) -> dict: + return { + "timestamp": self.timestamp, + "context": self.context.to_dict() + } + +class EventInputCustom(EventInput): + __slots__ = ['key', 'data', 'metric_value'] + + def __init__(self, timestamp: int, context: Context, key: str, data: Any = None, metric_value: Optional[AnyNum] = None): + super().__init__(timestamp, 
context) + self.key = key + self.data = data + self.metric_value = metric_value # type: Optional[int|float|complex] + + def to_debugging_dict(self) -> dict: + return { + "timestamp": self.timestamp, + "context": self.context.to_dict(), + "key": self.key, + "data": self.data, + "metric_value": self.metric_value + } + +# Event constructors are centralized here to avoid mistakes and repetitive logic. +# The LDClient owns two instances of EventFactory: one that always embeds evaluation reasons +# in the events (for when variation_detail is called) and one that doesn't. +# +# Note that none of these methods fill in the "creationDate" property, because in the Python +# client, that is done by DefaultEventProcessor.send_event(). + +class EventFactory: + def __init__(self, with_reasons: bool, timestamp_fn: Callable[[], int] = current_time_millis): + self._with_reasons = with_reasons + self._timestamp_fn = timestamp_fn + + def new_eval_event(self, flag: FeatureFlag, context: Context, detail: EvaluationDetail, + default_value: Any, prereq_of_flag: Optional[FeatureFlag] = None) -> EventInputEvaluation: + add_experiment_data = self.is_experiment(flag, detail.reason) + return EventInputEvaluation( + self._timestamp_fn(), + context, + flag.key, + flag, + detail.variation_index, + detail.value, + detail.reason if self._with_reasons or add_experiment_data else None, + default_value, + prereq_of_flag, + flag.track_events or add_experiment_data + ) + + def new_default_event(self, flag: FeatureFlag, context: Context, default_value: Any, + reason: Optional[dict]) -> EventInputEvaluation: + return EventInputEvaluation( + self._timestamp_fn(), + context, + flag.key, + flag, + None, + default_value, + reason if self._with_reasons else None, + default_value, + None, + flag.track_events + ) + + def new_unknown_flag_event(self, key: str, context: Context, default_value: Any, + reason: Optional[dict]) -> EventInputEvaluation: + return EventInputEvaluation( + self._timestamp_fn(), + context, 
+ key, + None, + None, + default_value, + reason if self._with_reasons else None, + default_value, + None, + False + ) + + def new_identify_event(self, context: Context) -> EventInputIdentify: + return EventInputIdentify( + self._timestamp_fn(), + context + ) + + def new_custom_event(self, event_name: str, context: Context, data: Any, metric_value: Optional[AnyNum]) \ + -> EventInputCustom: + return EventInputCustom( + self._timestamp_fn(), + context, + event_name, + data, + metric_value + ) + + @staticmethod + def is_experiment(flag: FeatureFlag, reason: Optional[dict]) -> bool: + if reason is not None: + if reason.get('inExperiment'): + return True + kind = reason['kind'] + if kind == 'RULE_MATCH': + index = reason['ruleIndex'] + rules = flag.rules + return index >= 0 and index < len(rules) and rules[index].track_events + elif kind == 'FALLTHROUGH': + return flag.track_events_fallthrough + return False diff --git a/ldclient/fixed_thread_pool.py b/ldclient/impl/fixed_thread_pool.py similarity index 92% rename from ldclient/fixed_thread_pool.py rename to ldclient/impl/fixed_thread_pool.py index 3428daa8..c545ed7f 100644 --- a/ldclient/fixed_thread_pool.py +++ b/ldclient/impl/fixed_thread_pool.py @@ -1,12 +1,7 @@ -""" -Internal helper class for thread management. -""" -# currently excluded from documentation - see docs/README.md - from threading import Event, Lock, Thread import queue -from ldclient.util import log +from ldclient.impl.util import log """ A simple fixed-size thread pool that rejects jobs when its limit is reached. 
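The `EventFactory.is_experiment` logic added above can be exercised in isolation. In this sketch, `FakeFlag` and `FakeRule` are hypothetical stand-ins for the patch's `FeatureFlag` and `FlagRule`, carrying only the fields the check reads:

```python
from typing import List, Optional


class FakeRule:
    # Hypothetical stand-in for FlagRule; only track_events matters here.
    def __init__(self, track_events: bool):
        self.track_events = track_events


class FakeFlag:
    # Hypothetical stand-in for FeatureFlag.
    def __init__(self, rules: List[FakeRule], track_events_fallthrough: bool):
        self.rules = rules
        self.track_events_fallthrough = track_events_fallthrough


def is_experiment(flag: FakeFlag, reason: Optional[dict]) -> bool:
    # Mirrors the patch's EventFactory.is_experiment: an evaluation counts as
    # an experiment if the reason says so explicitly, or if the matched rule
    # (or the fallthrough) has event tracking enabled.
    if reason is not None:
        if reason.get('inExperiment'):
            return True
        kind = reason.get('kind')
        if kind == 'RULE_MATCH':
            index = reason.get('ruleIndex', -1)
            return 0 <= index < len(flag.rules) and flag.rules[index].track_events
        if kind == 'FALLTHROUGH':
            return flag.track_events_fallthrough
    return False


flag = FakeFlag([FakeRule(True), FakeRule(False)], track_events_fallthrough=False)
print(is_experiment(flag, {'kind': 'RULE_MATCH', 'ruleIndex': 0}))  # True
print(is_experiment(flag, {'kind': 'RULE_MATCH', 'ruleIndex': 1}))  # False
print(is_experiment(flag, {'kind': 'FALLTHROUGH'}))                 # False
print(is_experiment(flag, {'kind': 'OFF', 'inExperiment': True}))   # True
```

When the result is true, `new_eval_event` above forces the reason into the event output and enables tracking, even when the caller did not request reasons.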
diff --git a/ldclient/impl/integrations/files/file_data_source.py b/ldclient/impl/integrations/files/file_data_source.py index f25eecae..1b292fee 100644 --- a/ldclient/impl/integrations/files/file_data_source.py +++ b/ldclient/impl/integrations/files/file_data_source.py @@ -19,10 +19,14 @@ pass from ldclient.impl.repeating_task import RepeatingTask +from ldclient.impl.util import log from ldclient.interfaces import UpdateProcessor -from ldclient.util import log from ldclient.versioned_data_kind import FEATURES, SEGMENTS +def _sanitize_json_item(item): + if not ('version' in item): + item['version'] = 1 + class _FileDataSource(UpdateProcessor): def __init__(self, store, ready, paths, auto_update, poll_interval, force_polling): self._store = store @@ -62,8 +66,12 @@ def _load_all(self): log.error('Unable to load flag data from "%s": %s' % (path, repr(e))) traceback.print_exc() return - self._store.init(all_data) - self._inited = True + try: + self._store.init(all_data) + self._inited = True + except Exception as e: + log.error('Unable to store data: %s' % repr(e)) + traceback.print_exc() def _load_file(self, path, all_data): content = None @@ -71,10 +79,12 @@ def _load_file(self, path, all_data): content = f.read() parsed = self._parse_content(content) for key, flag in parsed.get('flags', {}).items(): + _sanitize_json_item(flag) self._add_item(all_data, FEATURES, flag) for key, value in parsed.get('flagValues', {}).items(): self._add_item(all_data, FEATURES, self._make_flag_with_value(key, value)) for key, segment in parsed.get('segments', {}).items(): + _sanitize_json_item(segment) self._add_item(all_data, SEGMENTS, segment) def _parse_content(self, content): @@ -93,6 +103,7 @@ def _add_item(self, all_data, kind, item): def _make_flag_with_value(self, key, value): return { 'key': key, + 'version': 1, 'on': True, 'fallthrough': { 'variation': 0 diff --git a/ldclient/impl/integrations/test_data/test_data_source.py 
b/ldclient/impl/integrations/test_data/test_data_source.py index e6272925..36483a69 100644 --- a/ldclient/impl/integrations/test_data/test_data_source.py +++ b/ldclient/impl/integrations/test_data/test_data_source.py @@ -1,7 +1,9 @@ -import copy from ldclient.versioned_data_kind import FEATURES -from ldclient.rwlock import ReadWriteLock +# This is the internal component that's created when you initialize an SDK instance that is using +# TestData. The TestData object manages the setup of the fake data, and it broadcasts the data +# through _TestDataSource to inject it into the SDK. If there are multiple SDK instances connected +# to a TestData, each has its own _TestDataSource. class _TestDataSource(): diff --git a/ldclient/impl/listeners.py b/ldclient/impl/listeners.py index 6a1e5c86..dda5bf52 100644 --- a/ldclient/impl/listeners.py +++ b/ldclient/impl/listeners.py @@ -1,4 +1,4 @@ -from ldclient.util import log +from ldclient.impl.util import log from threading import RLock from typing import Any, Callable diff --git a/ldclient/lru_cache.py b/ldclient/impl/lru_cache.py similarity index 89% rename from ldclient/lru_cache.py rename to ldclient/impl/lru_cache.py index d65c59c4..387d6b91 100644 --- a/ldclient/lru_cache.py +++ b/ldclient/impl/lru_cache.py @@ -1,8 +1,3 @@ -""" -Internal helper class for caching. 
-""" -# currently excluded from documentation - see docs/README.md - from collections import OrderedDict class SimpleLRUCache: diff --git a/ldclient/impl/model/__init__.py b/ldclient/impl/model/__init__.py new file mode 100644 index 00000000..b485d21f --- /dev/null +++ b/ldclient/impl/model/__init__.py @@ -0,0 +1,6 @@ +from .clause import * +from .encoder import * +from .entity import * +from .feature_flag import * +from .segment import * +from .variation_or_rollout import * diff --git a/ldclient/impl/model/attribute_ref.py b/ldclient/impl/model/attribute_ref.py new file mode 100644 index 00000000..5349cfe7 --- /dev/null +++ b/ldclient/impl/model/attribute_ref.py @@ -0,0 +1,98 @@ +from __future__ import annotations +import re +from typing import List, Optional + + +def req_attr_ref_with_opt_context_kind(attr_ref_str: str, context_kind: Optional[str]) -> AttributeRef: + if context_kind is None or context_kind == '': + return AttributeRef.from_literal(attr_ref_str) + return AttributeRef.from_path(attr_ref_str) + +def opt_attr_ref_with_opt_context_kind(attr_ref_str: Optional[str], context_kind: Optional[str]) -> Optional[AttributeRef]: + if attr_ref_str is None or attr_ref_str == '': + return None + return req_attr_ref_with_opt_context_kind(attr_ref_str, context_kind) + + +_INVALID_ATTR_ESCAPE_REGEX = re.compile('(~[^01]|~$)') + + +class AttributeRef: + __slots__ = ['_raw', '_single_component', '_components', '_error'] + + _ERR_EMPTY = 'attribute reference cannot be empty' + + def __init__( + self, + raw: str, + single_component: Optional[str], + components: Optional[List[str]], + error: Optional[str] + ): + self._raw = raw + self._single_component = single_component + self._components = components + self._error = error + + @property + def valid(self) -> bool: + return self._error is None + + @property + def error(self) -> Optional[str]: + return self._error + + @property + def path(self) -> str: + return self._raw + + @property + def depth(self) -> int: + if 
self._error is not None: + return 0 + if self._components is not None: + return len(self._components) + return 1 + + def __getitem__(self, index) -> Optional[str]: + if self._error is not None: + return None + if self._components is not None: + return None if index < 0 or index >= len(self._components) else self._components[index] + return self._single_component if index == 0 else None + + @staticmethod + def from_path(path: str) -> AttributeRef: + if path == '' or path == '/': + return AttributeRef._from_error(AttributeRef._ERR_EMPTY) + if path[0] != '/': + return AttributeRef(path, path, None, None) + components = path[1:].split('/') + for i, c in enumerate(components): + if c == '': + return AttributeRef._from_error('attribute reference contained a double slash or a trailing slash') + unescaped = AttributeRef._unescape(c) + if unescaped is None: + return AttributeRef._from_error('attribute reference contained an escape character (~) that was not followed by 0 or 1') + components[i] = unescaped + return AttributeRef(path, None, components, None) + + @staticmethod + def from_literal(name: str) -> AttributeRef: + if name == '': + return AttributeRef._from_error(AttributeRef._ERR_EMPTY) + return AttributeRef(AttributeRef._escape(name), name, None, None) + + @staticmethod + def _from_error(error: str) -> AttributeRef: + return AttributeRef('', None, None, error) + + @staticmethod + def _unescape(s: str) -> Optional[str]: + if _INVALID_ATTR_ESCAPE_REGEX.search(s): + return None + return s.replace("~1", "/").replace("~0", "~") + + @staticmethod + def _escape(s: str) -> str: + return s.replace("~", "~0").replace("/", "~1") diff --git a/ldclient/impl/model/clause.py b/ldclient/impl/model/clause.py new file mode 100644 index 00000000..19ab52c5 --- /dev/null +++ b/ldclient/impl/model/clause.py @@ -0,0 +1,73 @@ +from re import Pattern +from semver import VersionInfo +from typing import Any, List, Optional + +from ldclient.impl.model.attribute_ref import AttributeRef, 
req_attr_ref_with_opt_context_kind +from ldclient.impl.model.entity import * +from ldclient.impl.model.value_parsing import parse_regex, parse_semver, parse_time + +class ClausePreprocessedValue: + __slots__ = ['_as_time', '_as_regex', '_as_semver'] + + def __init__(self, as_time: Optional[float]=None, as_regex: Optional[Pattern]=None, as_semver: Optional[VersionInfo]=None): + self._as_time = as_time + self._as_regex = as_regex + self._as_semver = as_semver + + @property + def as_time(self) -> Optional[float]: + return self._as_time + + @property + def as_regex(self) -> Optional[Pattern]: + return self._as_regex + + @property + def as_semver(self) -> Optional[VersionInfo]: + return self._as_semver + + +def _preprocess_clause_values(op: str, values: List[Any]) -> Optional[List[ClausePreprocessedValue]]: + if op == 'matches': + return list(ClausePreprocessedValue(as_regex=parse_regex(value)) for value in values) + if op == 'before' or op == 'after': + return list(ClausePreprocessedValue(as_time=parse_time(value)) for value in values) + if op == 'semVerEqual' or op == 'semVerGreaterThan' or op == 'semVerLessThan': + return list(ClausePreprocessedValue(as_semver=parse_semver(value)) for value in values) + return None + + +class Clause: + __slots__ = ['_context_kind', '_attribute', '_op', '_negate', '_values', '_values_preprocessed'] + + def __init__(self, data: dict): + self._context_kind = opt_str(data, 'contextKind') + self._attribute = req_attr_ref_with_opt_context_kind(req_str(data, 'attribute'), self._context_kind) + self._negate = opt_bool(data, 'negate') + self._op = req_str(data, 'op') + self._values = req_list(data, 'values') + self._values_preprocessed = _preprocess_clause_values(self._op, self._values) + + @property + def attribute(self) -> AttributeRef: + return self._attribute + + @property + def context_kind(self) -> Optional[str]: + return self._context_kind + + @property + def negate(self) -> bool: + return self._negate + + @property + def op(self) -> 
str: + return self._op + + @property + def values(self) -> List[Any]: + return self._values + + @property + def values_preprocessed(self) -> Optional[List[ClausePreprocessedValue]]: + return self._values_preprocessed diff --git a/ldclient/impl/model/encoder.py b/ldclient/impl/model/encoder.py new file mode 100644 index 00000000..af6ad258 --- /dev/null +++ b/ldclient/impl/model/encoder.py @@ -0,0 +1,17 @@ +from ldclient.impl.model.entity import ModelEntity + +import json + +class ModelEncoder(json.JSONEncoder): + """ + A JSON encoder customized to serialize our data model types correctly. We should + use this whenever we are writing flag data to a persistent store. + """ + + def __init__(self): + super().__init__(separators=(',',':')) + + def default(self, obj): + if isinstance(obj, ModelEntity): + return obj.to_json_dict() + return json.JSONEncoder.default(self, obj) diff --git a/ldclient/impl/model/entity.py b/ldclient/impl/model/entity.py new file mode 100644 index 00000000..03cf1085 --- /dev/null +++ b/ldclient/impl/model/entity.py @@ -0,0 +1,105 @@ +import json + +from typing import Any, List, Optional, Union + +# This file provides support for our data model classes. +# +# Top-level data model classes (FeatureFlag, Segment) should subclass ModelEntity. This +# provides a standard behavior where we decode the entity from a dict that corresponds to +# the JSON representation, and the constructor for each class does any necessary capturing +# and validation of individual properties, while the ModelEntity constructor also stores +# the original data as a dict so we can easily re-serialize it or inspect it as a dict. +# +# Lower-level classes such as Clause are not derived from ModelEntity because we don't +# need to serialize them outside of the enclosing FeatureFlag/Segment. 
+# +# All data model classes should use the opt_ and req_ functions so that any JSON values +# of invalid types will cause immediate rejection of the data set, rather than allowing +# invalid types to get into the evaluation/event logic where they would cause errors that +# are harder to diagnose. + +def opt_type(data: dict, name: str, desired_type) -> Any: + value = data.get(name) + if value is not None and not isinstance(value, desired_type): + raise ValueError('error in flag/segment data: property "%s" should be type %s but was %s"' % \ + (name, desired_type, value.__class__)) + return value + +def opt_bool(data: dict, name: str) -> bool: + return opt_type(data, name, bool) is True + +def opt_dict(data: dict, name: str) -> Optional[dict]: + return opt_type(data, name, dict) + +def opt_dict_list(data: dict, name: str) -> list: + return validate_list_type(opt_list(data, name), name, dict) + +def opt_int(data: dict, name: str) -> Optional[int]: + return opt_type(data, name, int) + +def opt_number(data: dict, name: str) -> Optional[Union[int, float]]: + value = data.get(name) + if value is not None and not isinstance(value, int) and not isinstance(value, float): + raise ValueError('error in flag/segment data: property "%s" should be a number but was %s"' % \ + (name, value.__class__)) + return value + +def opt_list(data: dict, name: str) -> list: + return opt_type(data, name, list) or [] + +def opt_str(data: dict, name: str) -> Optional[str]: + return opt_type(data, name, str) + +def opt_str_list(data: dict, name: str) -> List[str]: + return validate_list_type(opt_list(data, name), name, str) + +def req_type(data: dict, name: str, desired_type) -> Any: + value = opt_type(data, name, desired_type) + if value is None: + raise ValueError('error in flag/segment data: required property "%s" is missing' % name) + return value + +def req_dict_list(data: dict, name: str) -> list: + return validate_list_type(req_list(data, name), name, dict) + +def req_int(data: dict, name: 
str) -> int: + return req_type(data, name, int) + +def req_list(data: dict, name: str) -> list: + return req_type(data, name, list) + +def req_str(data: dict, name: str) -> str: + return req_type(data, name, str) + +def req_str_list(data: dict, name: str) -> List[str]: + return validate_list_type(req_list(data, name), name, str) + +def validate_list_type(items: list, name: str, desired_type) -> list: + for item in items: + if not isinstance(item, desired_type): + raise ValueError('error in flag/segment data: property %s should be an array of %s but an item was %s' % \ + (name, desired_type, item.__class__)) + return items + + +class ModelEntity: + def __init__(self, data: dict): + self._data = data + + def to_json_dict(self): + return self._data + + def get(self, attribute, default = None) -> Any: + return self._data.get(attribute, default) + + def __getitem__(self, attribute) -> Any: + return self._data[attribute] + + def __contains__(self, attribute) -> bool: + return attribute in self._data + + def __eq__(self, other) -> bool: + return self.__class__ == other.__class__ and self._data == other._data + + def __repr__(self) -> str: + return json.dumps(self._data, separators=(',',':')) diff --git a/ldclient/impl/model/feature_flag.py b/ldclient/impl/model/feature_flag.py new file mode 100644 index 00000000..a53c7689 --- /dev/null +++ b/ldclient/impl/model/feature_flag.py @@ -0,0 +1,158 @@ +from typing import Any, List, Optional, Set + +from ldclient.impl.model.clause import Clause +from ldclient.impl.model.entity import * +from ldclient.impl.model.variation_or_rollout import VariationOrRollout + + +class Prerequisite: + __slots__ = ['_key', '_variation'] + + def __init__(self, data: dict): + self._key = req_str(data, 'key') + self._variation = req_int(data, 'variation') + + @property + def key(self) -> str: + return self._key + + @property + def variation(self) -> int: + return self._variation + + +class Target: + __slots__ = ['_context_kind', '_variation', 
'_values'] + + def __init__(self, data: dict): + self._context_kind = opt_str(data, 'contextKind') + self._variation = req_int(data, 'variation') + self._values = set(req_str_list(data, 'values')) + + @property + def context_kind(self) -> Optional[str]: + return self._context_kind + + @property + def variation(self) -> int: + return self._variation + + @property + def values(self) -> Set[str]: + return self._values + + +class FlagRule: + __slots__ = ['_id', '_clauses', '_track_events', '_variation_or_rollout'] + + def __init__(self, data: dict): + self._id = opt_str(data, 'id') + self._variation_or_rollout = VariationOrRollout(data) + self._clauses = list(Clause(item) for item in req_dict_list(data, 'clauses')) + self._track_events = opt_bool(data, 'trackEvents') + + @property + def id(self) -> Optional[str]: + return self._id + + @property + def clauses(self) -> List[Clause]: + return self._clauses + + @property + def track_events(self) -> bool: + return self._track_events + + @property + def variation_or_rollout(self) -> VariationOrRollout: + return self._variation_or_rollout + + +class FeatureFlag(ModelEntity): + __slots__ = ['_data', '_key', '_version', '_deleted', '_variations', '_on', + '_off_variation', '_fallthrough', '_prerequisites', '_targets', '_context_targets', '_rules', + '_salt', '_track_events', '_debug_events_until_date'] + + def __init__(self, data: dict): + super().__init__(data) + # In the following logic, we're being somewhat lenient in terms of allowing most properties to + # be absent even if they are really required in the schema. That's for backward compatibility + # with test logic that constructed incomplete JSON, and also with the file data source which + # previously allowed users to get away with leaving out a lot of properties in the JSON. 
+ self._key = req_str(data, 'key') + self._version = req_int(data, 'version') + self._deleted = opt_bool(data, 'deleted') + if self._deleted: + return + self._variations = opt_list(data, 'variations') + self._on = opt_bool(data, 'on') + self._off_variation = opt_int(data, 'offVariation') + self._fallthrough = VariationOrRollout(opt_dict(data, 'fallthrough')) + self._prerequisites = list(Prerequisite(item) for item in opt_dict_list(data, 'prerequisites')) + self._rules = list(FlagRule(item) for item in opt_dict_list(data, 'rules')) + self._targets = list(Target(item) for item in opt_dict_list(data, 'targets')) + self._context_targets = list(Target(item) for item in opt_dict_list(data, 'contextTargets')) + self._salt = opt_str(data, 'salt') or '' + self._track_events = opt_bool(data, 'trackEvents') + self._track_events_fallthrough = opt_bool(data, 'trackEventsFallthrough') + self._debug_events_until_date = opt_number(data, 'debugEventsUntilDate') + + @property + def key(self) -> str: + return self._key + + @property + def version(self) -> int: + return self._version + + @property + def deleted(self) -> bool: + return self._deleted + + @property + def variations(self) -> List[Any]: + return self._variations + + @property + def on(self) -> bool: + return self._on + + @property + def off_variation(self) -> Optional[int]: + return self._off_variation + + @property + def fallthrough(self) -> VariationOrRollout: + return self._fallthrough + + @property + def prerequisites(self) -> List[Prerequisite]: + return self._prerequisites + + @property + def targets(self) -> List[Target]: + return self._targets + + @property + def context_targets(self) -> List[Target]: + return self._context_targets + + @property + def rules(self) -> List[FlagRule]: + return self._rules + + @property + def salt(self) -> str: + return self._salt + + @property + def track_events(self) -> bool: + return self._track_events + + @property + def track_events_fallthrough(self) -> bool: + return 
self._track_events_fallthrough + + @property + def debug_events_until_date(self) -> Optional[Union[int, float]]: + return self._debug_events_until_date diff --git a/ldclient/impl/model/segment.py b/ldclient/impl/model/segment.py new file mode 100644 index 00000000..f5c4fa2a --- /dev/null +++ b/ldclient/impl/model/segment.py @@ -0,0 +1,122 @@ +from typing import Any, List, Optional, Set + +from ldclient.impl.model.attribute_ref import AttributeRef, opt_attr_ref_with_opt_context_kind +from ldclient.impl.model.clause import Clause +from ldclient.impl.model.entity import * + + +class SegmentTarget: + __slots__ = ['_context_kind', '_values'] + + def __init__(self, data: dict, logger = None): + self._context_kind = opt_str(data, 'contextKind') + self._values = set(req_str_list(data, 'values')) + + @property + def context_kind(self) -> Optional[str]: + return self._context_kind + + @property + def values(self) -> Set[str]: + return self._values + + +class SegmentRule: + __slots__ = ['_bucket_by', '_clauses', '_rollout_context_kind', '_weight'] + + def __init__(self, data: dict): + self._clauses = list(Clause(item) for item in req_dict_list(data, 'clauses')) + self._rollout_context_kind = opt_str(data, 'rolloutContextKind') + self._bucket_by = opt_attr_ref_with_opt_context_kind(opt_str(data, 'bucketBy'), self._rollout_context_kind) + self._weight = opt_int(data, 'weight') + + @property + def bucket_by(self) -> Optional[AttributeRef]: + return self._bucket_by + + @property + def clauses(self) -> List[Clause]: + return self._clauses + + @property + def rollout_context_kind(self) -> Optional[str]: + return self._rollout_context_kind + + @property + def weight(self) -> Optional[int]: + return self._weight + + +class Segment(ModelEntity): + __slots__ = ['_data', '_key', '_version', '_deleted', '_included', '_excluded', + '_included_contexts', '_excluded_contexts', '_rules', '_salt', '_unbounded', + '_unbounded_context_kind', '_generation'] + + def __init__(self, data: dict): + 
super().__init__(data) + # In the following logic, we're being somewhat lenient in terms of allowing most properties to + # be absent even if they are really required in the schema. That's for backward compatibility + # with test logic that constructed incomplete JSON, and also with the file data source which + # previously allowed users to get away with leaving out a lot of properties in the JSON. + self._key = req_str(data, 'key') + self._version = req_int(data, 'version') + self._deleted = opt_bool(data, 'deleted') + if self._deleted: + return + self._included = set(opt_str_list(data, 'included')) + self._excluded = set(opt_str_list(data, 'excluded')) + self._included_contexts = list(SegmentTarget(item) for item in opt_dict_list(data, 'includedContexts')) + self._excluded_contexts = list(SegmentTarget(item) for item in opt_dict_list(data, 'excludedContexts')) + self._rules = list(SegmentRule(item) for item in opt_dict_list(data, 'rules')) + self._salt = opt_str(data, 'salt') or '' + self._unbounded = opt_bool(data, 'unbounded') + self._unbounded_context_kind = opt_str(data, 'unboundedContextKind') + self._generation = opt_int(data, 'generation') + + @property + def key(self) -> str: + return self._key + + @property + def version(self) -> int: + return self._version + + @property + def deleted(self) -> bool: + return self._deleted + + @property + def included(self) -> Set[str]: + return self._included + + @property + def excluded(self) -> Set[str]: + return self._excluded + + @property + def included_contexts(self) -> List[SegmentTarget]: + return self._included_contexts + + @property + def excluded_contexts(self) -> List[SegmentTarget]: + return self._excluded_contexts + + @property + def rules(self) -> List[Any]: + return self._rules + + @property + def salt(self) -> str: + return self._salt + + @property + def unbounded(self) -> bool: + return self._unbounded + + @property + def unbounded_context_kind(self) -> Optional[str]: + return 
self._unbounded_context_kind + + @property + def generation(self) -> Optional[int]: + return self._generation diff --git a/ldclient/impl/model/value_parsing.py b/ldclient/impl/model/value_parsing.py new file mode 100644 index 00000000..8fb843a0 --- /dev/null +++ b/ldclient/impl/model/value_parsing.py @@ -0,0 +1,85 @@ +import re +from re import Pattern +from semver import VersionInfo +from datetime import tzinfo, timedelta, datetime +from numbers import Number +from typing import Any, Optional + +import pyrfc3339 + +_ZERO = timedelta(0) + +# A UTC class. + +class _UTC(tzinfo): + """UTC""" + + def utcoffset(self, dt): + return _ZERO + + def tzname(self, dt): + return "UTC" + + def dst(self, dt): + return _ZERO + +_epoch = datetime.utcfromtimestamp(0).replace(tzinfo=_UTC()) + + +def is_number(input: Any) -> bool: + # bool is a subtype of int, and we don't want to try and treat it as a number. + return isinstance(input, Number) and not isinstance(input, bool) + + +def parse_regex(input: Any) -> Optional[Pattern]: + if isinstance(input, str): + try: + return re.compile(input) + except Exception: + return None + return None + + +def parse_time(input: Any) -> Optional[float]: + """ + :param input: Either a number as milliseconds since Unix Epoch, or a string as a valid RFC3339 timestamp + :return: milliseconds since Unix epoch, or None if input was invalid. 
+ """ + + if is_number(input): + return float(input) + + if isinstance(input, str): + try: + parsed_time = pyrfc3339.parse(input) + timestamp = (parsed_time - _epoch).total_seconds() + return timestamp * 1000.0 + except Exception as e: + return None + + return None + +def parse_semver(input: Any) -> Optional[VersionInfo]: + if not isinstance(input, str): + return None + try: + return VersionInfo.parse(input) + except TypeError: + return None + except ValueError as e: + try: + input = _add_zero_version_component(input) + return VersionInfo.parse(input) + except ValueError as e: + try: + input = _add_zero_version_component(input) + return VersionInfo.parse(input) + return input + except ValueError as e: + return None + +def _add_zero_version_component(input): + m = re.search("^([0-9.]*)(.*)", input) + if m is None: + return input + ".0" + return m.group(1) + ".0" + m.group(2) diff --git a/ldclient/impl/model/variation_or_rollout.py b/ldclient/impl/model/variation_or_rollout.py new file mode 100644 index 00000000..476d1df3 --- /dev/null +++ b/ldclient/impl/model/variation_or_rollout.py @@ -0,0 +1,74 @@ +from typing import Any, List, Optional + +from ldclient.impl.model.attribute_ref import AttributeRef, opt_attr_ref_with_opt_context_kind +from ldclient.impl.model.entity import * + + +class WeightedVariation: + __slots__ = ['_variation', '_weight', '_untracked'] + + def __init__(self, data: dict): + self._variation = req_int(data, 'variation') + self._weight = req_int(data, 'weight') + self._untracked = opt_bool(data, 'untracked') + + @property + def variation(self) -> int: + return self._variation + + @property + def weight(self) -> int: + return self._weight + + @property + def untracked(self) -> int: + return self._untracked + + +class Rollout: + __slots__ = ['_bucket_by', '_context_kind', '_is_experiment', '_seed', '_variations'] + + def __init__(self, data: dict): + self._context_kind = opt_str(data, 'contextKind') + self._bucket_by = 
opt_attr_ref_with_opt_context_kind(opt_str(data, 'bucketBy'), self._context_kind) + self._is_experiment = opt_str(data, 'kind') == 'experiment' + self._seed = opt_int(data, 'seed') + self._variations = list(WeightedVariation(item) for item in req_dict_list(data, 'variations')) + + @property + def bucket_by(self) -> Optional[AttributeRef]: + return self._bucket_by + + @property + def context_kind(self) -> Optional[str]: + return self._context_kind + + @property + def is_experiment(self) -> bool: + return self._is_experiment + + @property + def seed(self) -> Optional[int]: + return self._seed + + @property + def variations(self) -> List[WeightedVariation]: + return self._variations + + +class VariationOrRollout: + __slots__ = ['_variation', '_rollout'] + + def __init__(self, data): + data = {} if data is None else data + self._variation = opt_int(data, 'variation') + rollout = opt_dict(data, 'rollout') + self._rollout = None if rollout is None else Rollout(rollout) + + @property + def variation(self) -> Optional[int]: + return self._variation + + @property + def rollout(self) -> Optional[Rollout]: + return self._rollout diff --git a/ldclient/impl/operators.py b/ldclient/impl/operators.py new file mode 100644 index 00000000..bf4b5578 --- /dev/null +++ b/ldclient/impl/operators.py @@ -0,0 +1,112 @@ +from ldclient.impl.model.clause import ClausePreprocessedValue +from ldclient.impl.model.value_parsing import is_number, parse_semver, parse_time + +from collections import defaultdict +from numbers import Number +from semver import VersionInfo +from typing import Any, Callable, Optional + + +def _string_operator(context_value: Any, clause_value: Any, fn: Callable[[str, str], bool]) -> bool: + return isinstance(context_value, str) and isinstance(clause_value, str) and fn(context_value, clause_value) + + +def _numeric_operator(context_value: Any, clause_value: Any, fn: Callable[[float, float], bool]) -> bool: + return is_number(context_value) and is_number(clause_value) and 
fn(float(context_value), float(clause_value)) + + +def _time_operator(clause_preprocessed: Optional[ClausePreprocessedValue], + context_value: Any, fn: Callable[[float, float], bool]) -> bool: + clause_time = None if clause_preprocessed is None else clause_preprocessed.as_time + if clause_time is None: + return False + context_time = parse_time(context_value) + return context_time is not None and fn(context_time, clause_time) + + +def _semver_operator(clause_preprocessed: Optional[ClausePreprocessedValue], + context_value: Any, fn: Callable[[VersionInfo, VersionInfo], bool]) -> bool: + clause_ver = None if clause_preprocessed is None else clause_preprocessed.as_semver + if clause_ver is None: + return False + context_ver = parse_semver(context_value) + return context_ver is not None and fn(context_ver, clause_ver) + + +def _in(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]) -> bool: + return context_value == clause_value + + +def _starts_with(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]) -> bool: + return _string_operator(context_value, clause_value, lambda a, b: a.startswith(b)) + + +def _ends_with(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]): + return _string_operator(context_value, clause_value, lambda a, b: a.endswith(b)) + + +def _contains(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]): + return _string_operator(context_value, clause_value, lambda a, b: b in a) + + +def _matches(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]): + clause_regex = None if clause_preprocessed is None else clause_preprocessed.as_regex + if clause_regex is None: + return False + return isinstance(context_value, str) and clause_regex.search(context_value) is not None + + +def _less_than(context_value: Any, clause_value: Any, clause_preprocessed: 
Optional[ClausePreprocessedValue]): + return _numeric_operator(context_value, clause_value, lambda a, b: a < b) + + +def _less_than_or_equal(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]): + return _numeric_operator(context_value, clause_value, lambda a, b: a <= b) + + +def _greater_than(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]): + return _numeric_operator(context_value, clause_value, lambda a, b: a > b) + + +def _greater_than_or_equal(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]): + return _numeric_operator(context_value, clause_value, lambda a, b: a >= b) + + +def _before(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]): + return _time_operator(clause_preprocessed, context_value, lambda a, b: a < b) + + +def _after(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]): + return _time_operator(clause_preprocessed, context_value, lambda a, b: a > b) + + +def _semver_equal(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]): + return _semver_operator(clause_preprocessed, context_value, lambda a, b: a.compare(b) == 0) + + +def _semver_less_than(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]): + return _semver_operator(clause_preprocessed, context_value, lambda a, b: a.compare(b) < 0) + + +def _semver_greater_than(context_value: Any, clause_value: Any, clause_preprocessed: Optional[ClausePreprocessedValue]): + return _semver_operator(clause_preprocessed, context_value, lambda a, b: a.compare(b) > 0) + + +ops = { + "in": _in, + "endsWith": _ends_with, + "startsWith": _starts_with, + "matches": _matches, + "contains": _contains, + "lessThan": _less_than, + "lessThanOrEqual": _less_than_or_equal, + "greaterThan": _greater_than, + 
"greaterThanOrEqual": _greater_than_or_equal, + "before": _before, + "after": _after, + "semVerEqual": _semver_equal, + "semVerLessThan": _semver_less_than, + "semVerGreaterThan": _semver_greater_than +} + +ops = defaultdict(lambda: lambda l, r, p: False, ops) diff --git a/ldclient/impl/repeating_task.py b/ldclient/impl/repeating_task.py index 15794e3a..57d9a088 100644 --- a/ldclient/impl/repeating_task.py +++ b/ldclient/impl/repeating_task.py @@ -1,4 +1,4 @@ -from ldclient.util import log +from ldclient.impl.util import log from threading import Event, Thread import time diff --git a/ldclient/rwlock.py b/ldclient/impl/rwlock.py similarity index 91% rename from ldclient/rwlock.py rename to ldclient/impl/rwlock.py index 4365cd9a..8179d20a 100644 --- a/ldclient/rwlock.py +++ b/ldclient/impl/rwlock.py @@ -1,8 +1,3 @@ -""" -Internal helper class for locking. -""" -# currently excluded from documentation - see docs/README.md - import threading diff --git a/ldclient/impl/sse.py b/ldclient/impl/sse.py index 5a867096..1e37b659 100644 --- a/ldclient/impl/sse.py +++ b/ldclient/impl/sse.py @@ -1,8 +1,6 @@ -import urllib3 - from ldclient.config import HTTPConfig from ldclient.impl.http import HTTPFactory -from ldclient.util import throw_if_unsuccessful_response +from ldclient.impl.util import throw_if_unsuccessful_response class _BufferedLineReader: diff --git a/ldclient/util.py b/ldclient/impl/util.py similarity index 92% rename from ldclient/util.py rename to ldclient/impl/util.py index 66c0c70b..47820100 100644 --- a/ldclient/util.py +++ b/ldclient/impl/util.py @@ -1,16 +1,15 @@ -""" -General internal helper functions. 
-""" -# currently excluded from documentation - see docs/README.md - import logging -from os import environ import sys -import urllib3 +import time + +from ldclient.impl.http import _base_headers + + +def current_time_millis() -> int: + return int(time.time() * 1000) -from ldclient.impl.http import HTTPFactory, _base_headers -log = logging.getLogger(sys.modules[__name__].__name__) +log = logging.getLogger('ldclient.util') # historical logger name import queue diff --git a/ldclient/integrations/__init__.py b/ldclient/integrations/__init__.py index b45b2cb3..de2b10f8 100644 --- a/ldclient/integrations/__init__.py +++ b/ldclient/integrations/__init__.py @@ -41,9 +41,6 @@ def new_feature_store(host: str=None, store = Consul.new_feature_store() config = Config(feature_store=store) - Note that ``python-consul`` is not available for Python 3.3 or 3.4, so this feature cannot be - used in those Python versions. - :param host: hostname of the Consul server (uses ``localhost`` if omitted) :param port: port of the Consul server (uses 8500 if omitted) :param prefix: a namespace prefix to be prepended to all Consul keys @@ -164,24 +161,12 @@ def new_feature_store(url: str='redis://localhost:6379/0', :param url: the URL of the Redis host; defaults to ``DEFAULT_URL`` :param prefix: a namespace prefix to be prepended to all Redis keys; defaults to ``DEFAULT_PREFIX`` - :param max_connections: the maximum number of Redis connections to keep in the - connection pool; defaults to ``DEFAULT_MAX_CONNECTIONS``. This - parameter will later be dropped in favor of setting - redis_opts['max_connections'] :param caching: specifies whether local caching should be enabled and if so, sets the cache properties; defaults to :func:`ldclient.feature_store.CacheConfig.default()` :param redis_opts: extra options for initializing Redis connection from the url, - see `redis.connection.ConnectionPool.from_url` for more details. 
Note that - if you set max_connections, this will take precedence over the - deprecated max_connections parameter. + see `redis.connection.ConnectionPool.from_url` for more details. """ - # WARN(deprecated): Remove the max_connection parameter from - # this signature and clean up this bit of code. - if 'max_connections' not in redis_opts: - redis_opts = redis_opts.copy() - redis_opts['max_connections'] = max_connections - core = _RedisFeatureStoreCore(url, prefix, redis_opts) wrapper = CachingStoreWrapper(core, caching) wrapper._core = core # exposed for testing @@ -211,22 +196,10 @@ def new_big_segment_store(url: str='redis://localhost:6379/0', :param url: the URL of the Redis host; defaults to ``DEFAULT_URL`` :param prefix: a namespace prefix to be prepended to all Redis keys; defaults to ``DEFAULT_PREFIX`` - :param max_connections: the maximum number of Redis connections to keep in the - connection pool; defaults to ``DEFAULT_MAX_CONNECTIONS``. This - parameter will later be dropped in favor of setting - redis_opts['max_connections'] :param redis_opts: extra options for initializing Redis connection from the url, - see `redis.connection.ConnectionPool.from_url` for more details. Note that - if you set max_connections, this will take precedence over the - deprecated max_connections parameter. + see `redis.connection.ConnectionPool.from_url` for more details. """ - # WARN(deprecated): Remove the max_connection parameter from - # this signature and clean up this bit of code. 
- if 'max_connections' not in redis_opts: - redis_opts = redis_opts.copy() - redis_opts['max_connections'] = max_connections - return _RedisBigSegmentStore(url, prefix, redis_opts) class Files: diff --git a/ldclient/integrations/test_data.py b/ldclient/integrations/test_data.py index 0030cde6..ce9527eb 100644 --- a/ldclient/integrations/test_data.py +++ b/ldclient/integrations/test_data.py @@ -1,9 +1,10 @@ import copy -from typing import Any, Dict, List, Optional, Union +from typing import Any, Dict, List, Optional, Set, Union +from ldclient.context import Context from ldclient.versioned_data_kind import FEATURES -from ldclient.rwlock import ReadWriteLock from ldclient.impl.integrations.test_data.test_data_source import _TestDataSource +from ldclient.impl.rwlock import ReadWriteLock TRUE_VARIATION_INDEX = 0 FALSE_VARIATION_INDEX = 1 @@ -37,8 +38,8 @@ class TestData(): supports many of the ways a flag can be configured on the LaunchDarkly dashboard, but does not currently support 1. rule operators other than "in" and "not in", or 2. percentage rollouts. - If the same `TestData` instance is used to configure multiple `LDClient` instances, - any changes made to the data will propagate to all of the `LDClient` instances. + If the same ``TestData`` instance is used to configure multiple ``LDClient`` instances, + any changes made to the data will propagate to all of the ``LDClient`` instances. 
""" # Prevent pytest from treating this as a test class @@ -153,7 +154,7 @@ def __init__(self, key: str): self._variations = [] # type: List[Any] self._off_variation = None # type: Optional[int] self._fallthrough_variation = None # type: Optional[int] - self._targets = {} # type: Dict[int, List[str]] + self._targets = {} # type: Dict[str, Dict[int, Set[str]]] self._rules = [] # type: List[FlagRuleBuilder] # Note that _copy is private by convention, because we don't want developers to @@ -170,7 +171,9 @@ def _copy(self) -> 'FlagBuilder': to._variations = copy.copy(self._variations) to._off_variation = self._off_variation to._fallthrough_variation = self._fallthrough_variation - to._targets = copy.copy(self._targets) + to._targets = dict() + for k, v in self._targets.items(): + to._targets[k] = copy.copy(v) to._rules = copy.copy(self._rules) return to @@ -276,9 +279,17 @@ def variations(self, *variations) -> 'FlagBuilder': return self def variation_for_all_users(self, variation: Union[bool, int]) -> 'FlagBuilder': - """Sets the flag to always return the specified variation for all users. + """Deprecated name for variation_for_all(). - The variation is specified, Targeting is switched on, and any existing targets or rules are removed. + .. deprecated:: 8.0.0 + Use :meth:`ldclient.integrations.test_data.FlagBuilder.variation_for_all()`. + """ + return self.variation_for_all(variation) + + def variation_for_all(self, variation: Union[bool, int]) -> 'FlagBuilder': + """Sets the flag to always return the specified variation for all contexts. + + The variation is specified, targeting is switched on, and any existing targets or rules are removed. The fallthrough variation is set to the specified value. The off variation is left unchanged. 
If the flag was previously configured with other variations and the variation specified is a boolean, @@ -294,6 +305,14 @@ def variation_for_all_users(self, variation: Union[bool, int]) -> 'FlagBuilder': return self.clear_rules().clear_targets().on(True).fallthrough_variation(variation) def value_for_all_users(self, value: Any) -> 'FlagBuilder': + """Deprecated name for value_for_all(). + + .. deprecated:: 8.0.0 + Use :meth:`ldclient.integrations.test_data.FlagBuilder.value_for_all()`. + """ + return self.value_for_all(value) + + def value_for_all(self, value: Any) -> 'FlagBuilder': """ Sets the flag to always return the specified variation value for all users. @@ -321,37 +340,50 @@ def variation_for_user(self, user_key: str, variation: Union[bool, int]) -> 'Fla ``0`` for the first, ``1`` for the second, etc. :return: the flag builder """ - if isinstance(variation, bool): - # `variation` is True/False value - return self.boolean_flag().variation_for_user(user_key, _variation_for_boolean(variation)) - else: - # `variation` specifies the index of the variation to set - targets = self._targets + return self.variation_for_key(Context.DEFAULT_KIND, user_key, variation) - for idx, var in enumerate(self._variations): - if (idx == variation): - # If there is no set at the current variation, set it to be empty - target_for_variation = [] # type: List[str] - if idx in targets: - target_for_variation = targets[idx] + def variation_for_key(self, context_kind: str, context_key: str, variation: Union[bool, int]) -> 'FlagBuilder': + """Sets the flag to return the specified variation for a specific context, identified + by context kind and key, when targeting is on. - # If user is not in the current variation set, add them - if user_key not in target_for_variation: - target_for_variation.append(user_key) + This has no effect when targeting is turned off for the flag. 
- self._targets[idx] = target_for_variation + If the flag was previously configured with other variations and the variation specified is a boolean, + this also changes it to a boolean flag. - else: - # Remove user from the other variation set if necessary - if idx in targets: - target_for_variation = targets[idx] - if user_key in target_for_variation: - user_key_idx = target_for_variation.index(user_key) - del target_for_variation[user_key_idx] + :param context_kind: the context kind + :param context_key: the context key + :param bool|int variation: ``True`` or ``False`` or the desired variation index to return: + ``0`` for the first, ``1`` for the second, etc. + :return: the flag builder + """ + if isinstance(variation, bool): + # `variation` is True/False value + return self.boolean_flag().variation_for_key(context_kind, context_key, _variation_for_boolean(variation)) + + # `variation` specifies the index of the variation to set + targets = self._targets.get(context_kind) + if targets is None: + targets = {} + self._targets[context_kind] = targets + + for idx, var in enumerate(self._variations): + if (idx == variation): + # If there is no set at the current variation, set it to be empty + target_for_variation = targets.get(idx) + if target_for_variation is None: + target_for_variation = set() + targets[idx] = target_for_variation + + # If key is not in the current variation set, add it + target_for_variation.add(context_key) - self._targets[idx] = target_for_variation + else: + # Remove key from the other variation set if necessary + if idx in targets: + targets[idx].discard(context_key) - return self + return self def _add_rule(self, flag_rule_builder: 'FlagRuleBuilder'): self._rules.append(flag_rule_builder) @@ -359,6 +391,9 @@ def _add_rule(self, flag_rule_builder: 'FlagRuleBuilder'): def if_match(self, attribute: str, *values) -> 'FlagRuleBuilder': """Starts defining a flag rule, using the "is one of" operator. 
+ This is a shortcut for calling :meth:`ldclient.integrations.test_data.FlagBuilder.if_match_context()` + with "user" as the context kind. + + **Example:** create a rule that returns ``True`` if the name is "Patsy" or "Edina" :: @@ -370,12 +405,34 @@ def if_match(self, attribute: str, *values) -> 'FlagRuleBuilder': :param values: values to compare to :return: the flag rule builder """ + return self.if_match_context(Context.DEFAULT_KIND, attribute, *values) + + def if_match_context(self, context_kind: str, attribute: str, *values) -> 'FlagRuleBuilder': + """Starts defining a flag rule, using the "is one of" operator. This matching expression only + applies to contexts of a specific kind. + + **Example:** create a rule that returns ``True`` if the name attribute for the + "company" context is "Ella" or "Monsoon": + :: + + td.flag("flag") \\ + .if_match_context('company', 'name', 'Ella', 'Monsoon') \\ + .then_return(True) + + :param context_kind: the context kind + :param attribute: the context attribute to match against + :param values: values to compare to + :return: the flag rule builder + """ flag_rule_builder = FlagRuleBuilder(self) - return flag_rule_builder.and_match(attribute, *values) + return flag_rule_builder.and_match_context(context_kind, attribute, *values) def if_not_match(self, attribute: str, *values) -> 'FlagRuleBuilder': """Starts defining a flag rule, using the "is not one of" operator. + This is a shortcut for calling :meth:`ldclient.integrations.test_data.FlagBuilder.if_not_match_context()` + with "user" as the context kind.
+ **Example:** create a rule that returns ``True`` if the name is neither "Saffron" nor "Bubble" :: @@ -387,13 +444,32 @@ def if_not_match(self, attribute: str, *values) -> 'FlagRuleBuilder': :param values: values to compare to :return: the flag rule builder """ + return self.if_not_match_context(Context.DEFAULT_KIND, attribute, *values) + + def if_not_match_context(self, context_kind: str, attribute: str, *values) -> 'FlagRuleBuilder': + """Starts defining a flag rule, using the "is not one of" operator. This matching expression only + applies to contexts of a specific kind. + + **Example:** create a rule that returns ``True`` if the name attribute for the + "company" context is neither "Pendant" nor "Sterling Cooper": + :: + + td.flag("flag") \\ + .if_not_match_context('company', 'name', 'Pendant', 'Sterling Cooper') \\ + .then_return(True) + + :param context_kind: the context kind + :param attribute: the context attribute to match against + :param values: values to compare to + :return: the flag rule builder + """ flag_rule_builder = FlagRuleBuilder(self) - return flag_rule_builder.and_not_match(attribute, values) + return flag_rule_builder.and_not_match_context(context_kind, attribute, *values) def clear_rules(self) -> 'FlagBuilder': """Removes any existing rules from the flag. This undoes the effect of methods like - :meth:`ldclient.integrations.test_data.FlagBuilder.if_match()` + :meth:`ldclient.integrations.test_data.FlagBuilder.if_match()`. :return: the same flag builder """ @@ -403,7 +479,7 @@ def clear_rules(self) -> 'FlagBuilder': def clear_targets(self) -> 'FlagBuilder': """Removes any existing targets from the flag. This undoes the effect of methods like - :meth:`ldclient.integrations.test_data.FlagBuilder.variation_for_user()` + :meth:`ldclient.integrations.test_data.FlagBuilder.variation_for_user()`.
:return: the same flag builder """ @@ -422,7 +498,9 @@ def _build(self, version: int) -> dict: 'key': self._key, 'version': version, 'on': self._on, - 'variations': self._variations + 'variations': self._variations, + 'prerequisites': [], + 'salt': '' } base_flag_object['offVariation'] = self._off_variation @@ -431,12 +509,27 @@ def _build(self, version: int) -> dict: } targets = [] - for var_index, user_keys in self._targets.items(): - targets.append({ - 'variation': var_index, - 'values': user_keys - }) + context_targets = [] + for target_context_kind, target_variations in self._targets.items(): + for var_index, target_keys in target_variations.items(): + if target_context_kind == Context.DEFAULT_KIND: + targets.append({ + 'variation': var_index, + 'values': sorted(list(target_keys)) # sorting just for test determinacy + }) + context_targets.append({ + 'contextKind': target_context_kind, + 'variation': var_index, + 'values': [] + }) + else: + context_targets.append({ + 'contextKind': target_context_kind, + 'variation': var_index, + 'values': sorted(list(target_keys)) # sorting just for test determinacy + }) base_flag_object['targets'] = targets + base_flag_object['contextTargets'] = context_targets rules = [] for idx, rule in enumerate(self._rules): @@ -471,6 +564,9 @@ def __init__(self, flag_builder: FlagBuilder): def and_match(self, attribute: str, *values) -> 'FlagRuleBuilder': """Adds another clause, using the "is one of" operator. + This is a shortcut for calling :meth:`ldclient.integrations.test_data.FlagRuleBuilder.and_match_context()` + with "user" as the context kind. 
+ **Example:** create a rule that returns ``True`` if the name is "Patsy" and the country is "gb" :: @@ -483,7 +579,28 @@ def and_match(self, attribute: str, *values) -> 'FlagRuleBuilder': :param values: values to compare to :return: the flag rule builder """ + return self.and_match_context(Context.DEFAULT_KIND, attribute, *values) + + def and_match_context(self, context_kind: str, attribute: str, *values) -> 'FlagRuleBuilder': + """Adds another clause, using the "is one of" operator. This matching expression only + applies to contexts of a specific kind. + + **Example:** create a rule that returns ``True`` if the name attribute for the + "company" context is "Ella", and the country attribute for the "company" context is "gb": + :: + + td.flag('flag') \\ + .if_match_context('company', 'name', 'Ella') \\ + .and_match_context('company', 'country', 'gb') \\ + .then_return(True) + + :param context_kind: the context kind + :param attribute: the context attribute to match against + :param values: values to compare to + :return: the flag rule builder + """ self._clauses.append({ + 'contextKind': context_kind, 'attribute': attribute, 'op': 'in', 'values': list(values), @@ -494,6 +611,9 @@ def and_match(self, attribute: str, *values) -> 'FlagRuleBuilder': def and_not_match(self, attribute: str, *values) -> 'FlagRuleBuilder': """Adds another clause, using the "is not one of" operator. + This is a shortcut for calling :meth:`ldclient.integrations.test_data.FlagRuleBuilder.and_not_match_context()` + with "user" as the context kind. 
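The clause dictionaries these builder methods append pair the ``in`` operator with a ``negate`` flag (set by the ``and_not_match*`` variants) and a ``contextKind``. As an illustration only, a stdlib sketch of how such a clause could be evaluated against a plain context dict — ``clause_matches`` is a hypothetical helper, not part of the SDK:

```python
def clause_matches(clause: dict, context: dict) -> bool:
    # Only contexts of the clause's kind are considered ("user" is the default kind).
    if context.get('kind', 'user') != clause.get('contextKind', 'user'):
        return False
    # "in" means: the attribute's value is one of the listed values.
    result = context.get(clause['attribute']) in clause['values']
    # A negated clause inverts the outcome ("is not one of").
    return not result if clause.get('negate', False) else result

# Shape of a clause produced by and_not_match_context('company', 'country', 'gb'):
clause = {'contextKind': 'company', 'attribute': 'country', 'op': 'in',
          'values': ['gb'], 'negate': True}
```

A "company" context whose ``country`` is anything other than ``"gb"`` matches; contexts of a different kind never match, regardless of negation.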
+ **Example:** create a rule that returns ``True`` if the name is "Patsy" and the country is not "gb" :: @@ -506,7 +626,28 @@ def and_not_match(self, attribute: str, *values) -> 'FlagRuleBuilder': :param values: values to compare to :return: the flag rule builder """ + return self.and_not_match_context(Context.DEFAULT_KIND, attribute, *values) + + def and_not_match_context(self, context_kind: str, attribute: str, *values) -> 'FlagRuleBuilder': + """Adds another clause, using the "is not one of" operator. This matching expression only + applies to contexts of a specific kind. + + **Example:** create a rule that returns ``True`` if the name attribute for the + "company" context is "Ella", and the country attribute for the "company" context is not "gb": + :: + + td.flag('flag') \\ + .if_match_context('company', 'name', 'Ella') \\ + .and_not_match_context('company', 'country', 'gb') \\ + .then_return(True) + + :param context_kind: the context kind + :param attribute: the context attribute to match against + :param values: values to compare to + :return: the flag rule builder + """ self._clauses.append({ + 'contextKind': context_kind, 'attribute': attribute, 'op': 'in', 'values': list(values), diff --git a/ldclient/interfaces.py b/ldclient/interfaces.py index a863319f..c3b1f2f7 100644 --- a/ldclient/interfaces.py +++ b/ldclient/interfaces.py @@ -139,7 +139,7 @@ def upsert_internal(self, kind: VersionedDataKind, item: dict) -> dict: the old one. It should return the final state of the item, i.e. if the update succeeded then it returns the item that was passed in, and if the update failed due to the version check then it returns the item that is currently in the data store (this ensures that - `CachingStoreWrapper` will update the cache correctly). + ``CachingStoreWrapper`` will update the cache correctly). 
:param kind: The kind of object to update :param item: The object to update or insert @@ -152,7 +152,7 @@ def initialized_internal(self) -> bool: Returns true if this store has been initialized. In a shared data store, it should be able to detect this even if initInternal was called in a different process, i.e. the test should be based on looking at what is in the data store. The method does not need to worry about caching - this value; `CachingStoreWrapper` will only call it when necessary. + this value; ``CachingStoreWrapper`` will only call it when necessary. """ @@ -263,7 +263,7 @@ def __init__(self, last_up_to_date: Optional[int]): @property def last_up_to_date(self) -> Optional[int]: """ - The Unix epoch millisecond timestamp of the last update to the `BigSegmentStore`. It is + The Unix epoch millisecond timestamp of the last update to the ``BigSegmentStore``. It is None if the store has never been updated. """ return self.__last_up_to_date @@ -288,30 +288,30 @@ def get_metadata(self) -> BigSegmentStoreMetadata: pass @abstractmethod - def get_membership(self, user_hash: str) -> Optional[dict]: + def get_membership(self, context_hash: str) -> Optional[dict]: """ - Queries the store for a snapshot of the current segment state for a specific user. + Queries the store for a snapshot of the current segment state for a specific context. - The user_hash is a base64-encoded string produced by hashing the user key as defined by - the Big Segments specification; the store implementation does not need to know the details + The context_hash is a base64-encoded string produced by hashing the context key as defined + by the Big Segments specification; the store implementation does not need to know the details of how this is done, because it deals only with already-hashed keys, but the string can be assumed to only contain characters that are valid in base64. 
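The hashed-key convention described above — the store only ever sees a base64-encoded hash of the context key — can be sketched as follows. SHA-256 is chosen here purely for illustration; the actual hashing algorithm is defined by the Big Segments specification, not by this snippet:

```python
import base64
import hashlib

def hash_for_context_key(key: str) -> str:
    # Hash the context key and base64-encode the digest. The concrete hash
    # function is illustrative; the spec defines the real algorithm.
    digest = hashlib.sha256(key.encode('utf-8')).digest()
    return base64.b64encode(digest).decode('ascii')
```

The point for store implementers is that the input is an opaque base64 string; no knowledge of the hashing step is needed on the store side.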
- The return value should be either a `dict`, or None if the user is not referenced in any big + The return value should be either a ``dict``, or None if the context is not referenced in any big segments. Each key in the dictionary is a "segment reference", which is how segments are identified in Big Segment data. This string is not identical to the segment key-- the SDK will add other information. The store implementation should not be concerned with the - format of the string. Each value in the dictionary is True if the user is explicitly included - in the segment, False if the user is explicitly excluded from the segment-- and is not also + format of the string. Each value in the dictionary is True if the context is explicitly included + in the segment, False if the context is explicitly excluded from the segment-- and is not also explicitly included (that is, if both an include and an exclude existed in the data, the - include would take precedence). If the user's status in a particular segment is undefined, + include would take precedence). If the context's status in a particular segment is undefined, there should be no key or value for that segment. This dictionary may be cached by the SDK, so it should not be modified after it is created. It is a snapshot of the segment membership state at one point in time. - :param user_hash: the hashed user key - :return: True/False values for Big Segments that reference this user + :param context_hash: the hashed context key + :return: True/False values for Big Segments that reference this context """ pass @@ -341,9 +341,9 @@ def available(self) -> bool: If this property is False, the store is not able to make queries (for instance, it may not have a valid database connection). In this case, the SDK will treat any reference to a Big Segment - as if no users are included in that segment. Also, the :func:`ldclient.flag.EvaluationDetail.reason` + as if no users are included in that segment. 
Also, the :func:`ldclient.evaluation.EvaluationDetail.reason` associated with any flag evaluation that references a Big Segment when the store is not - available will have a `bigSegmentsStatus` of `"STORE_ERROR"`. + available will have a ``bigSegmentsStatus`` of ``"STORE_ERROR"``. """ return self.__available @@ -356,8 +356,8 @@ def stale(self) -> bool: This may indicate that the LaunchDarkly Relay Proxy, which populates the store, has stopped running or has become unable to receive fresh data from LaunchDarkly. Any feature flag evaluations that reference a Big Segment will be using the last known data, which may be out - of date. Also, the :func:`ldclient.flag.EvaluationDetail.reason` associated with those evaluations - will have a `bigSegmentsStatus` of `"STALE"`. + of date. Also, the :func:`ldclient.evaluation.EvaluationDetail.reason` associated with those evaluations + will have a ``bigSegmentsStatus`` of ``"STALE"``. """ return self.__stale @@ -375,13 +375,13 @@ class BigSegmentStoreStatusProvider: Application code never needs to implement this interface. There are two ways to interact with the status. One is to simply get the current status; if its - `available` property is true, then the SDK is able to evaluate user membership in Big Segments, - and the `stale`` property indicates whether the data might be out of date. + ``available`` property is true, then the SDK is able to evaluate user membership in Big Segments, + and the ``stale`` property indicates whether the data might be out of date. The other way is to subscribe to status change notifications. Applications may wish to know if there is an outage in the Big Segment store, or if it has become stale (the Relay Proxy has stopped updating it with new data), since then flag evaluations that reference a Big Segment - might return incorrect values. Use `add_listener` to register a callback for notifications.
Use :func:`add_listener()` to register a callback for notifications. """ @abstractproperty @@ -399,7 +399,7 @@ def add_listener(self, listener: Callable[[BigSegmentStoreStatus], None]) -> Non Subscribes for notifications of status changes. The listener is a function or method that will be called with a single parameter: the - new `BigSegmentStoreStatus`. + new ``BigSegmentStoreStatus``. :param listener: the listener to add """ @@ -410,7 +410,7 @@ def remove_listener(self, listener: Callable[[BigSegmentStoreStatus], None]) -> """ Unsubscribes from notifications of status changes. - :param listener: a listener that was previously added with `add_listener`; if it was not, + :param listener: a listener that was previously added with :func:`add_listener()`; if it was not, this method does nothing """ pass diff --git a/ldclient/operators.py b/ldclient/operators.py deleted file mode 100644 index 0fb45c68..00000000 --- a/ldclient/operators.py +++ /dev/null @@ -1,195 +0,0 @@ -""" -Implementation details of feature flag evaluation. -""" -# currently excluded from documentation - see docs/README.md - -import logging -import re -from semver import VersionInfo -import sys -from datetime import tzinfo, timedelta, datetime -from collections import defaultdict -from numbers import Number - -import pyrfc3339 - -log = logging.getLogger(sys.modules[__name__].__name__) - - -def _string_operator(u, c, fn): - return fn(u, c) if isinstance(u, str) and isinstance(c, str) else False - -def _numeric_operator(u, c, fn): - # bool is a subtype of int, and we don't want to try and compare it as a number. 
- if isinstance(input, bool): - log.warning("Got unexpected bool type when attempting to parse time") - return None - - if isinstance(u, Number): - if isinstance(c, Number): - return fn(u, c) - return False - - -def _parse_time(input): - """ - :param input: Either a number as milliseconds since Unix Epoch, or a string as a valid RFC3339 timestamp - :return: milliseconds since Unix epoch, or None if input was invalid. - """ - - # bool is a subtype of int, and we don't want to try and compare it as a time. - if isinstance(input, bool): - log.warning("Got unexpected bool type when attempting to parse time") - return None - - if isinstance(input, Number): - return float(input) - - if isinstance(input, str): - try: - parsed_time = pyrfc3339.parse(input) - timestamp = (parsed_time - epoch).total_seconds() - return timestamp * 1000.0 - except Exception as e: - log.warning("Couldn't parse timestamp:" + str(input) + " with message: " + str(e)) - return None - - log.warning("Got unexpected type: " + str(type(input)) + " with value: " + str(input) + " when attempting to parse time") - return None - -def _time_operator(u, c, fn): - u_time = _parse_time(u) - if u_time is not None: - c_time = _parse_time(c) - if c_time is not None: - return fn(u_time, c_time) - return False - -def _parse_semver(input): - try: - VersionInfo.parse(input) - return input - except TypeError: - return None - except ValueError as e: - try: - input = _add_zero_version_component(input) - VersionInfo.parse(input) - return input - except ValueError as e: - try: - input = _add_zero_version_component(input) - VersionInfo.parse(input) - return input - except ValueError as e: - return None - -def _add_zero_version_component(input): - m = re.search("^([0-9.]*)(.*)", input) - if m is None: - return input + ".0" - return m.group(1) + ".0" + m.group(2) - -def _semver_operator(u, c, fn): - u_ver = _parse_semver(u) - c_ver = _parse_semver(c) - if u_ver is not None and c_ver is not None: - return fn(u_ver, c_ver) - 
return False - - -def _in(u, c): - if u == c: - return True - return False - - -def _starts_with(u, c): - return _string_operator(u, c, lambda u, c: u.startswith(c)) - - -def _ends_with(u, c): - return _string_operator(u, c, lambda u, c: u.endswith(c)) - - -def _contains(u, c): - return _string_operator(u, c, lambda u, c: c in u) - - -def _matches(u, c): - return _string_operator(u, c, lambda u, c: re.search(c, u) is not None) - - -def _less_than(u, c): - return _numeric_operator(u, c, lambda u, c: u < c) - - -def _less_than_or_equal(u, c): - return _numeric_operator(u, c, lambda u, c: u <= c) - - -def _greater_than(u, c): - return _numeric_operator(u, c, lambda u, c: u > c) - - -def _greater_than_or_equal(u, c): - return _numeric_operator(u, c, lambda u, c: u >= c) - - -def _before(u, c): - return _time_operator(u, c, lambda u, c: u < c) - - -def _after(u, c): - return _time_operator(u, c, lambda u, c: u > c) - - -def _semver_equal(u, c): - return _semver_operator(u, c, lambda u, c: VersionInfo.parse(u).compare(c) == 0) - - -def _semver_less_than(u, c): - return _semver_operator(u, c, lambda u, c: VersionInfo.parse(u).compare(c) < 0) - - -def _semver_greater_than(u, c): - return _semver_operator(u, c, lambda u, c: VersionInfo.parse(u).compare(c) > 0) - - -_ZERO = timedelta(0) -_HOUR = timedelta(hours=1) - -# A UTC class. 
- -class _UTC(tzinfo): - """UTC""" - - def utcoffset(self, dt): - return _ZERO - - def tzname(self, dt): - return "UTC" - - def dst(self, dt): - return _ZERO - -epoch = datetime.utcfromtimestamp(0).replace(tzinfo=_UTC()) - -ops = { - "in": _in, - "endsWith": _ends_with, - "startsWith": _starts_with, - "matches": _matches, - "contains": _contains, - "lessThan": _less_than, - "lessThanOrEqual": _less_than_or_equal, - "greaterThan": _greater_than, - "greaterThanOrEqual": _greater_than_or_equal, - "before": _before, - "after": _after, - "semVerEqual": _semver_equal, - "semVerLessThan": _semver_less_than, - "semVerGreaterThan": _semver_greater_than -} - -ops = defaultdict(lambda: lambda l, r: False, ops) diff --git a/ldclient/repeating_timer.py b/ldclient/repeating_timer.py deleted file mode 100644 index 1f160c63..00000000 --- a/ldclient/repeating_timer.py +++ /dev/null @@ -1,16 +0,0 @@ -""" -Internal helper class for repeating tasks. -""" -# currently excluded from documentation - see docs/README.md - -from ldclient.impl.repeating_task import RepeatingTask - -class RepeatingTimer(RepeatingTask): - """ - Deprecated internal class, retained until the next major version in case any application code was - referencing it. This was used in situations where we did not want the callback to execute - immediately, but to always wait for the interval first, so we are setting both the interval - parameter and the initial_delay parameter of RepeatingTask to the same value. 
- """ - def __init__(self, interval, callable): - super().init(self, interval, interval, callable) diff --git a/ldclient/sse_client.py b/ldclient/sse_client.py deleted file mode 100644 index 80dea242..00000000 --- a/ldclient/sse_client.py +++ /dev/null @@ -1,213 +0,0 @@ -# -# This deprecated implementation was based on: -# https://bitbucket.org/btubbs/sseclient/src/a47a380a3d7182a205c0f1d5eb470013ce796b4d/sseclient.py?at=default&fileviewer=file-view-default -# -# It has the following known issues: -# - It does not properly handle line terminators other than \n. -# - It does not properly handle multi-line data that starts with a blank line. -# - It fails if a multi-byte character is split across chunks of the stream. -# -# It is replaced by the ldclient.impl.sse module. -# -import re -import time - -import urllib3 - -from ldclient.config import HTTPConfig -from ldclient.impl.http import HTTPFactory -from ldclient.util import log -from ldclient.util import throw_if_unsuccessful_response - -# Technically, we should support streams that mix line endings. This regex, -# however, assumes that a system will provide consistent line endings. -end_of_field = re.compile(r'\r\n\r\n|\r\r|\n\n') - - -class SSEClient: - """ - This class is deprecated and no longer used in the SDK. It is retained here for backward compatibility in case - any external code was referencing it, but it will be removed in a future major version. 
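The SSE wire format this deprecated client consumed is built from ``field: value`` lines, where a single space after the colon is optional and a line beginning with ``:`` is a comment. A minimal stdlib sketch of that per-line grammar, independent of the client class:

```python
import re

# One SSE line is "field: value"; a lone leading space in the value is
# stripped, and a line starting with ":" parses as an empty field name
# (i.e. a comment, which consumers ignore).
SSE_LINE = re.compile(r'(?P<name>[^:]*):?( ?(?P<value>.*))?')

def parse_sse_line(line: str):
    """Return (field_name, value) for one SSE line, or None if it cannot parse."""
    m = SSE_LINE.match(line)
    return (m.group('name'), m.group('value')) if m else None
```

This is the same line grammar the ``Event.parse`` method further down applies to each line of a message.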
- """ - def __init__(self, url, last_id=None, retry=3000, connect_timeout=10, read_timeout=300, chunk_size=10000, - verify_ssl=False, http=None, http_proxy=None, http_factory=None, **kwargs): - self.url = url - self.last_id = last_id - self.retry = retry - self._chunk_size = chunk_size - - if http_factory: - self._timeout = http_factory.timeout - base_headers = http_factory.base_headers - else: - # for backward compatibility in case anyone else is using this class - self._timeout = urllib3.Timeout(connect=connect_timeout, read=read_timeout) - base_headers = {} - - # Optional support for passing in an HTTP client - if http: - self.http = http - else: - hf = http_factory - if hf is None: # build from individual parameters which we're only retaining for backward compatibility - hc = HTTPConfig( - connect_timeout=connect_timeout, - read_timeout=read_timeout, - disable_ssl_verification=not verify_ssl, - http_proxy=http_proxy - ) - hf = HTTPFactory({}, hc) - self.http = hf.create_pool_manager(1, url) - - # Any extra kwargs will be fed into the request call later. - self.requests_kwargs = kwargs - - # The SSE spec requires making requests with Cache-Control: nocache - if 'headers' not in self.requests_kwargs: - self.requests_kwargs['headers'] = {} - - self.requests_kwargs['headers'].update(base_headers) - - self.requests_kwargs['headers']['Cache-Control'] = 'no-cache' - - # The 'Accept' header is not required, but explicit > implicit - self.requests_kwargs['headers']['Accept'] = 'text/event-stream' - - # Keep data here as it streams in - self.buf = u'' - - self._connect() - - def _connect(self): - if self.last_id: - self.requests_kwargs['headers']['Last-Event-ID'] = self.last_id - - # Use session if set. Otherwise fall back to requests module. - self.resp = self.http.request( - 'GET', - self.url, - timeout=self._timeout, - preload_content=False, - retries=0, # caller is responsible for implementing appropriate retry semantics, e.g. 
backoff - **self.requests_kwargs) - - # Raw readlines doesn't work because we may be missing newline characters until the next chunk - # For some reason, we also need to specify a chunk size because stream=True doesn't seem to guarantee - # that we get the newlines in a timeline manner - self.resp_file = self.resp.stream(amt=self._chunk_size) - - # TODO: Ensure we're handling redirects. Might also stick the 'origin' - # attribute on Events like the Javascript spec requires. - throw_if_unsuccessful_response(self.resp) - - def _event_complete(self): - return re.search(end_of_field, self.buf[len(self.buf)-self._chunk_size-10:]) is not None # Just search the last chunk plus a bit - - def __iter__(self): - return self - - def __next__(self): - while not self._event_complete(): - try: - nextline = next(self.resp_file) - # There are some bad cases where we don't always get a line: https://github.com/requests/requests/pull/2431 - if not nextline: - raise EOFError() - self.buf += nextline.decode("utf-8") - except (StopIteration, EOFError) as e: - if self.retry: - # This retry logic is not what we want in the SDK. It's retained here for backward compatibility in case - # anyone else is using SSEClient. - time.sleep(self.retry / 1000.0) - self._connect() - - # The SSE spec only supports resuming from a whole message, so - # if we have half a message we should throw it out. - head, sep, tail = self.buf.rpartition('\n') - self.buf = head + sep - continue - else: - raise - - split = re.split(end_of_field, self.buf) - head = split[0] - tail = "".join(split[1:]) - - self.buf = tail - msg = Event.parse(head) - - # If the server requests a specific retry delay, we need to honor it. - if msg.retry: - self.retry = msg.retry - - # last_id should only be set if included in the message. It's not - # forgotten if a message omits it. 
- if msg.id: - self.last_id = msg.id - - return msg - - -class Event: - - sse_line_pattern = re.compile('(?P<name>[^:]*):?( ?(?P<value>.*))?') - - def __init__(self, data='', event='message', id=None, retry=None): - self.data = data - self.event = event - self.id = id - self.retry = retry - - def dump(self): - lines = [] - if self.id: - lines.append('id: %s' % self.id) - - # Only include an event line if it's not the default already. - if self.event != 'message': - lines.append('event: %s' % self.event) - - if self.retry: - lines.append('retry: %s' % self.retry) - - lines.extend('data: %s' % d for d in self.data.split('\n')) - return '\n'.join(lines) + '\n\n' - - @classmethod - def parse(cls, raw): - """ - Given a possibly-multiline string representing an SSE message, parse it - and return a Event object. - """ - msg = cls() - for line in raw.split('\n'): - m = cls.sse_line_pattern.match(line) - if m is None: - # Malformed line. Discard but warn. - log.warning('Invalid SSE line: "%s"' % line) - continue - - name = m.groupdict()['name'] - value = m.groupdict()['value'] - if name == '': - # line began with a ":", so is a comment. Ignore - continue - - if name == 'data': - # If we already have some data, then join to it with a newline. - # Else this is it. - if msg.data: - msg.data = '%s\n%s' % (msg.data, value) - else: - msg.data = value - elif name == 'event': - msg.event = value - elif name == 'id': - msg.id = value - elif name == 'retry': - msg.retry = int(value) - - return msg - - def __str__(self): - return self.data diff --git a/ldclient/user_filter.py deleted file mode 100644 index acca254f..00000000 --- a/ldclient/user_filter.py +++ /dev/null @@ -1,41 +0,0 @@ -""" -Internal helper class for filtering out private attributes.
-""" -# currently excluded from documentation - see docs/README.md - -class UserFilter: - IGNORE_ATTRS = frozenset(['key', 'custom', 'anonymous']) - ALLOWED_TOP_LEVEL_ATTRS = frozenset(['key', 'secondary', 'ip', 'country', 'email', - 'firstName', 'lastName', 'avatar', 'name', 'anonymous', 'custom']) - - def __init__(self, config): - self._private_attribute_names = config.private_attribute_names - self._all_attributes_private = config.all_attributes_private - - def _is_private_attr(self, name, user_private_attrs): - if name in UserFilter.IGNORE_ATTRS: - return False - elif self._all_attributes_private: - return True - else: - return (name in self._private_attribute_names) or (name in user_private_attrs) - - def filter_user_props(self, user_props): - all_private_attrs = set() - user_private_attrs = user_props.get('privateAttributeNames', []) - - def filter_private_attrs(attrs, allowed_attrs = frozenset()): - for key, value in attrs.items(): - if (not allowed_attrs) or (key in allowed_attrs): - if self._is_private_attr(key, user_private_attrs): - all_private_attrs.add(key) - else: - yield key, value - - ret = dict(filter_private_attrs(user_props, UserFilter.ALLOWED_TOP_LEVEL_ATTRS)) - if 'custom' in user_props: - ret['custom'] = dict(filter_private_attrs(user_props['custom'])) - - if all_private_attrs: - ret['privateAttrs'] = sorted(list(all_private_attrs)) # note, only sorting to make tests reliable - return ret diff --git a/ldclient/versioned_data_kind.py b/ldclient/versioned_data_kind.py index 910618e6..93d3ca83 100644 --- a/ldclient/versioned_data_kind.py +++ b/ldclient/versioned_data_kind.py @@ -9,16 +9,20 @@ for features or segments. 
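The deleted ``UserFilter`` above redacted private attributes from event payloads and recorded their names under ``privateAttrs``. A condensed stdlib sketch of the same redaction idea, using a hypothetical ``redact_private`` helper simplified from the code above:

```python
def redact_private(props: dict, private_names: set) -> dict:
    """Copy props, dropping private attributes (never 'key') and recording
    which names were removed, as the old filter did with 'privateAttrs'."""
    redacted = []
    out = {}
    for name, value in props.items():
        # 'key' stands in for the IGNORE_ATTRS set that is never redacted.
        if name != 'key' and name in private_names:
            redacted.append(name)
        else:
            out[name] = value
    if redacted:
        out['privateAttrs'] = sorted(redacted)  # sorted only for determinism
    return out
```

As in the original, sorting the redacted names exists only to make test assertions reliable, not for any wire-format requirement.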
""" +from ldclient.impl.model import FeatureFlag, ModelEntity, Segment + from collections import namedtuple -from typing import Callable, Iterable, Optional +from typing import Any, Callable, Iterable, Optional # Note that VersionedDataKind without the extra attributes is no longer used in the SDK, # but it's preserved here for backward compatibility just in case someone else used it class VersionedDataKind: - def __init__(self, namespace: str, request_api_path: str, stream_api_path: str): + def __init__(self, namespace: str, request_api_path: str, stream_api_path: str, + decoder: Optional[Callable[[dict], Any]] = None): self._namespace = namespace self._request_api_path = request_api_path self._stream_api_path = stream_api_path + self._decoder = decoder @property def namespace(self) -> str: @@ -31,11 +35,20 @@ def request_api_path(self) -> str: @property def stream_api_path(self) -> str: return self._stream_api_path + + def decode(self, data: Any) -> Any: + if self._decoder is None or isinstance(data, ModelEntity): + return data + return self._decoder(data) + + def encode(self, item: Any) -> dict: + return item.to_json_dict() if isinstance(item, ModelEntity) else item class VersionedDataKindWithOrdering(VersionedDataKind): def __init__(self, namespace: str, request_api_path: str, stream_api_path: str, + decoder: Optional[Callable[[dict], Any]], priority: int, get_dependency_keys: Optional[Callable[[dict], Iterable[str]]]): - super().__init__(namespace, request_api_path, stream_api_path) + super().__init__(namespace, request_api_path, stream_api_path, decoder) self._priority = priority self._get_dependency_keys = get_dependency_keys @@ -50,11 +63,13 @@ def get_dependency_keys(self) -> Optional[Callable[[dict], Iterable[str]]]: FEATURES = VersionedDataKindWithOrdering(namespace = "features", request_api_path = "/sdk/latest-flags", stream_api_path = "/flags/", + decoder = FeatureFlag, priority = 1, get_dependency_keys = lambda flag: (p.get('key') for p in 
flag.get('prerequisites', []))) SEGMENTS = VersionedDataKindWithOrdering(namespace = "segments", request_api_path = "/sdk/latest-segments", stream_api_path = "/segments/", + decoder = Segment, priority = 0, get_dependency_keys = None) diff --git a/setup.py b/setup.py index 6c2cbe43..c96ef429 100644 --- a/setup.py +++ b/setup.py @@ -64,12 +64,11 @@ def run(self): 'License :: OSI Approved :: Apache Software License', 'Operating System :: OS Independent', 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.5', - 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 'Programming Language :: Python :: 3.10', + 'Programming Language :: Python :: 3.11', 'Topic :: Software Development', 'Topic :: Software Development :: Libraries', ], diff --git a/testing/builders.py b/testing/builders.py new file mode 100644 index 00000000..202627bb --- /dev/null +++ b/testing/builders.py @@ -0,0 +1,212 @@ +from __future__ import annotations +from typing import Any, List ,Optional + +from ldclient.context import Context +from ldclient.impl.model import * + + +class BaseBuilder: + def __init__(self, data): + self.data = data + + def _set(self, key: str, value: Any): + self.data[key] = value + return self + + def _append(self, key: str, item: dict): + self.data[key].append(item) + return self + + def _append_all(self, key: str, items: List[Any]): + self.data[key].extend(items) + return self + + def build(self): + return self.data.copy() + + +class FlagBuilder(BaseBuilder): + def __init__(self, key): + super().__init__({ + 'key': key, + 'version': 1, + 'on': False, + 'variations': [], + 'offVariation': None, + 'fallthrough': {}, + 'prerequisites': [], + 'targets': [], + 'contextTargets': [], + 'rules': [], + 'salt': '' + }) + + def build(self): + return FeatureFlag(self.data.copy()) + + def key(self, key: str) -> FlagBuilder: + return self._set('key', key) + 
+ def version(self, version: int) -> FlagBuilder: + return self._set('version', version) + + def on(self, on: bool) -> FlagBuilder: + return self._set('on', on) + + def variations(self, *variations: Any) -> FlagBuilder: + return self._set('variations', list(variations)) + + def off_variation(self, value: Optional[int]) -> FlagBuilder: + return self._set('offVariation', value) + + def fallthrough_variation(self, index: int) -> FlagBuilder: + return self._set('fallthrough', {'variation': index}) + + def fallthrough_rollout(self, rollout: dict) -> FlagBuilder: + return self._set('fallthrough', {'rollout': rollout}) + + def prerequisite(self, key: str, variation: int) -> FlagBuilder: + return self._append('prerequisites', {'key': key, 'variation': variation}) + + def target(self, variation: int, *keys: str) -> FlagBuilder: + return self._append('targets', {'variation': variation, 'values': list(keys)}) + + def context_target(self, context_kind: str, variation: int, *keys: str) -> FlagBuilder: + return self._append('contextTargets', + {'contextKind': context_kind, 'variation': variation, 'values': list(keys)}) + + def rules(self, *rules: dict) -> FlagBuilder: + return self._append_all('rules', list(rules)) + + def salt(self, value: str) -> FlagBuilder: + return self._set('salt', value) + + def track_events(self, value: bool) -> FlagBuilder: + return self._set('trackEvents', value) + + def track_events_fallthrough(self, value: bool) -> FlagBuilder: + return self._set('trackEventsFallthrough', value) + + def debug_events_until_date(self, value: Optional[int]) -> FlagBuilder: + return self._set('debugEventsUntilDate', value) + + +class FlagRuleBuilder(BaseBuilder): + def __init__(self): + super().__init__({'clauses': []}) + + def clauses(self, *clauses: dict) -> FlagRuleBuilder: + return self._append_all('clauses', list(clauses)) + + def id(self, value: str) -> FlagRuleBuilder: + return self._set('id', value) + + def rollout(self, rollout: Optional[dict]) -> 
FlagRuleBuilder: + return self._set('rollout', rollout) + + def track_events(self, value: bool) -> FlagRuleBuilder: + return self._set('trackEvents', value) + + def variation(self, variation: int) -> FlagRuleBuilder: + return self._set('variation', variation) + + +class SegmentBuilder(BaseBuilder): + def __init__(self, key): + super().__init__({ + 'key': key, + 'version': 1, + 'included': [], + 'excluded': [], + 'includedContexts': [], + 'excludedContexts': [], + 'rules': [], + 'unbounded': False, + 'salt': '' + }) + + def build(self): + return Segment(self.data.copy()) + + def key(self, key: str) -> SegmentBuilder: + return self._set('key', key) + + def version(self, version: int) -> SegmentBuilder: + return self._set('version', version) + + def excluded(self, *keys: str) -> SegmentBuilder: + return self._append_all('excluded', list(keys)) + + def excluded_contexts(self, context_kind: str, *keys: str) -> SegmentBuilder: + return self._append('excludedContexts', {'contextKind': context_kind, 'values': list(keys)}) + + def included(self, *keys: str) -> SegmentBuilder: + return self._append_all('included', list(keys)) + + def included_contexts(self, context_kind: str, *keys: str) -> SegmentBuilder: + return self._append('includedContexts', {'contextKind': context_kind, 'values': list(keys)}) + + def salt(self, salt: str) -> SegmentBuilder: + return self._set('salt', salt) + + def rules(self, *rules: dict) -> SegmentBuilder: + return self._append_all('rules', list(rules)) + + def unbounded(self, value: bool) -> SegmentBuilder: + return self._set('unbounded', value) + + def unbounded_context_kind(self, value: Optional[str]) -> SegmentBuilder: + return self._set('unboundedContextKind', value) + + def generation(self, value: Optional[int]) -> SegmentBuilder: + return self._set('generation', value) + + +class SegmentRuleBuilder(BaseBuilder): + def __init__(self): + super().__init__({'clauses': []}) + + def bucket_by(self, value: Optional[str]) -> SegmentRuleBuilder: + 
return self._set('bucketBy', value) + + def clauses(self, *clauses: dict) -> SegmentRuleBuilder: + return self._append_all('clauses', list(clauses)) + + def rollout_context_kind(self, value: Optional[str]) -> SegmentRuleBuilder: + return self._set('rolloutContextKind', value) + + def weight(self, value: Optional[int]) -> SegmentRuleBuilder: + return self._set('weight', value) + + +def build_off_flag_with_value(key: str, value: Any) -> FlagBuilder: + return FlagBuilder(key).version(100).on(False).variations(value).off_variation(0) + +def make_boolean_flag_matching_segment(segment: Segment) -> FeatureFlag: + return make_boolean_flag_with_clauses(make_clause_matching_segment_key(segment.key)) + +def make_boolean_flag_with_clauses(*clauses: dict) -> FeatureFlag: + return make_boolean_flag_with_rules(FlagRuleBuilder().clauses(*clauses).variation(0).build()) + +def make_boolean_flag_with_rules(*rules: dict) -> FeatureFlag: + return FlagBuilder('flagkey').on(True).variations(True, False).fallthrough_variation(1).rules(*rules).build() + +def make_clause(context_kind: Optional[str], attr: str, op: str, *values: Any) -> dict: + ret = {'attribute': attr, 'op': op, 'values': list(values)} + if context_kind is not None: + ret['contextKind'] = context_kind + return ret + +def make_clause_matching_context(context: Context) -> dict: + return {'contextKind': context.kind, 'attribute': 'key', 'op': 'in', 'values': [context.key]} + +def make_clause_matching_segment_key(*segment_keys: str) -> dict: + return {'attribute': '', 'op': 'segmentMatch', 'values': list(segment_keys)} + +def make_segment_rule_matching_context(context: Context) -> dict: + return SegmentRuleBuilder().clauses(make_clause_matching_context(context)).build() + +def negate_clause(clause: dict) -> dict: + c = clause.copy() + c['negate'] = not c.get('negate') + return c diff --git a/testing/feature_store_test_base.py b/testing/feature_store_test_base.py index e622b62d..ab24f5a2 100644 --- 
a/testing/feature_store_test_base.py +++ b/testing/feature_store_test_base.py @@ -1,6 +1,8 @@ from ldclient.interfaces import FeatureStore from ldclient.versioned_data_kind import FEATURES +from testing.builders import * + from abc import abstractmethod import pytest @@ -52,32 +54,15 @@ def inited_store(self, tester): scope = StoreTestScope(tester.create_feature_store()) scope.store.init({ FEATURES: { - 'foo': self.make_feature('foo', 10), - 'bar': self.make_feature('bar', 10), + 'foo': self.make_feature('foo', 10).to_json_dict(), + 'bar': self.make_feature('bar', 10).to_json_dict(), } }) return scope @staticmethod def make_feature(key, ver): - return { - u'key': key, - u'version': ver, - u'salt': u'abc', - u'on': True, - u'variations': [ - { - u'value': True, - u'weight': 100, - u'targets': [] - }, - { - u'value': False, - u'weight': 0, - u'targets': [] - } - ] - } + return FlagBuilder(key).version(ver).on(True).variations(True, False).salt('abc').build() def test_not_initialized_before_init(self, tester): with self.store(tester) as store: @@ -90,7 +75,8 @@ def test_initialized(self, tester): def test_get_existing_feature(self, tester): with self.inited_store(tester) as store: expected = self.make_feature('foo', 10) - assert store.get(FEATURES, 'foo', lambda x: x) == expected + flag = store.get(FEATURES, 'foo', lambda x: x) + assert flag == expected def test_get_nonexisting_feature(self, tester): with self.inited_store(tester) as store: diff --git a/testing/impl/datasource/__init__.py b/testing/impl/datasource/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/testing/test_feature_requester.py b/testing/impl/datasource/test_feature_requester.py similarity index 97% rename from testing/test_feature_requester.py rename to testing/impl/datasource/test_feature_requester.py index db18f555..b6eacc1e 100644 --- a/testing/test_feature_requester.py +++ b/testing/impl/datasource/test_feature_requester.py @@ -1,8 +1,5 @@ -import pytest - from 
ldclient.config import Config -from ldclient.feature_requester import FeatureRequesterImpl -from ldclient.util import UnsuccessfulResponseException +from ldclient.impl.datasource.feature_requester import FeatureRequesterImpl from ldclient.version import VERSION from ldclient.versioned_data_kind import FEATURES, SEGMENTS from testing.http_util import start_server, BasicResponse, JsonResponse diff --git a/testing/test_polling_processor.py b/testing/impl/datasource/test_polling_processor.py similarity index 89% rename from testing/test_polling_processor.py rename to testing/impl/datasource/test_polling_processor.py index e4a4dcd0..068d1684 100644 --- a/testing/test_polling_processor.py +++ b/testing/impl/datasource/test_polling_processor.py @@ -1,14 +1,14 @@ -import pytest import threading import time import mock from ldclient.config import Config from ldclient.feature_store import InMemoryFeatureStore -from ldclient.interfaces import FeatureRequester -from ldclient.polling import PollingUpdateProcessor -from ldclient.util import UnsuccessfulResponseException +from ldclient.impl.datasource.polling import PollingUpdateProcessor +from ldclient.impl.util import UnsuccessfulResponseException from ldclient.versioned_data_kind import FEATURES, SEGMENTS + +from testing.builders import * from testing.stub_util import MockFeatureRequester, MockResponse pp = None @@ -33,18 +33,14 @@ def setup_processor(config): pp.start() def test_successful_request_puts_feature_data_in_store(): - flag = { - "key": "flagkey" - } - segment = { - "key": "segkey" - } + flag = FlagBuilder('flagkey').build() + segment = SegmentBuilder('segkey').build() mock_requester.all_data = { FEATURES: { - "flagkey": flag + "flagkey": flag.to_json_dict() }, SEGMENTS: { - "segkey": segment + "segkey": segment.to_json_dict() } } setup_processor(Config("SDK_KEY")) diff --git a/testing/test_streaming.py b/testing/impl/datasource/test_streaming.py similarity index 94% rename from testing/test_streaming.py rename to 
testing/impl/datasource/test_streaming.py index 1838e500..5bf3ba38 100644 --- a/testing/test_streaming.py +++ b/testing/impl/datasource/test_streaming.py @@ -4,11 +4,13 @@ import time from ldclient.config import Config -from ldclient.diagnostics import _DiagnosticAccumulator from ldclient.feature_store import InMemoryFeatureStore -from ldclient.streaming import StreamingUpdateProcessor +from ldclient.impl.datasource.streaming import StreamingUpdateProcessor +from ldclient.impl.events.diagnostics import _DiagnosticAccumulator from ldclient.version import VERSION from ldclient.versioned_data_kind import FEATURES, SEGMENTS + +from testing.builders import * from testing.http_util import start_server, BasicResponse, CauseNetworkError, SequentialHandler from testing.proxy_test_util import do_proxy_tests from testing.stub_util import make_delete_event, make_patch_event, make_put_event, stream_content @@ -72,8 +74,8 @@ def test_sends_wrapper_header_without_version(): def test_receives_put_event(): store = InMemoryFeatureStore() ready = Event() - flag = { 'key': 'flagkey', 'version': 1 } - segment = { 'key': 'segkey', 'version': 1 } + flag = FlagBuilder('flagkey').version(1).build() + segment = SegmentBuilder('segkey').version(1).build() with start_server() as server: with stream_content(make_put_event([ flag ], [ segment ])) as stream: @@ -90,10 +92,10 @@ def test_receives_put_event(): def test_receives_patch_events(): store = InMemoryFeatureStore() ready = Event() - flagv1 = { 'key': 'flagkey', 'version': 1 } - flagv2 = { 'key': 'flagkey', 'version': 2 } - segmentv1 = { 'key': 'segkey', 'version': 1 } - segmentv2 = { 'key': 'segkey', 'version': 1 } + flagv1 = FlagBuilder('flagkey').version(1).build() + flagv2 = FlagBuilder('flagkey').version(2).build() + segmentv1 = SegmentBuilder('segkey').version(1).build() + segmentv2 = SegmentBuilder('segkey').version(2).build() with start_server() as server: with stream_content(make_put_event([ flagv1 ], [ segmentv1 ])) as stream: @@ 
-116,8 +118,8 @@ def test_receives_patch_events(): def test_receives_delete_events(): store = InMemoryFeatureStore() ready = Event() - flagv1 = { 'key': 'flagkey', 'version': 1 } - segmentv1 = { 'key': 'segkey', 'version': 1 } + flagv1 = FlagBuilder('flagkey').version(1).build() + segmentv1 = SegmentBuilder('segkey').version(1).build() with start_server() as server: with stream_content(make_put_event([ flagv1 ], [ segmentv1 ])) as stream: @@ -140,8 +142,8 @@ def test_receives_delete_events(): def test_reconnects_if_stream_is_broken(): store = InMemoryFeatureStore() ready = Event() - flagv1 = { 'key': 'flagkey', 'version': 1 } - flagv2 = { 'key': 'flagkey', 'version': 2 } + flagv1 = FlagBuilder('flagkey').version(1).build() + flagv2 = FlagBuilder('flagkey').version(2).build() with start_server() as server: with stream_content(make_put_event([ flagv1 ])) as stream1: diff --git a/testing/impl/evaluator_util.py b/testing/impl/evaluator_util.py index 9bae2dc1..deb7fe25 100644 --- a/testing/impl/evaluator_util.py +++ b/testing/impl/evaluator_util.py @@ -1,11 +1,16 @@ +from ldclient import Context from ldclient.evaluation import BigSegmentsStatus from ldclient.impl.evaluator import Evaluator, _make_big_segment_ref -from ldclient.impl.event_factory import _EventFactory +from ldclient.impl.events.types import EventFactory +from ldclient.impl.model import * +from testing.builders import * -from typing import Optional, Tuple +from typing import Any, Optional, Tuple, Union + +basic_user = Context.create('user-key') +fake_timestamp = 0 +event_factory = EventFactory(False, lambda: fake_timestamp) -basic_user = { "key": "user-key" } -event_factory = _EventFactory(False) class EvaluatorBuilder: def __init__(self): @@ -21,79 +26,64 @@ def build(self) -> Evaluator: self._get_big_segments_membership ) - def with_flag(self, flag: dict) -> 'EvaluatorBuilder': - self.__flags[flag['key']] = flag + def with_flag(self, flag: FeatureFlag) -> 'EvaluatorBuilder': + self.__flags[flag.key] = 
flag return self def with_unknown_flag(self, key) -> 'EvaluatorBuilder': self.__flags[key] = None return self - def with_segment(self, segment: dict) -> 'EvaluatorBuilder': - self.__segments[segment['key']] = segment + def with_segment(self, segment: Segment) -> 'EvaluatorBuilder': + self.__segments[segment.key] = segment return self def with_unknown_segment(self, key) -> 'EvaluatorBuilder': self.__segments[key] = None return self - def with_big_segment_for_user(self, user: dict, segment: dict, included: bool) -> 'EvaluatorBuilder': - user_key = user['key'] - if user_key not in self.__big_segments: - self.__big_segments[user_key] = {} - self.__big_segments[user_key][_make_big_segment_ref(segment)] = included + def with_big_segment_for_key(self, key: str, segment: Segment, included: bool) -> 'EvaluatorBuilder': + if key not in self.__big_segments: + self.__big_segments[key] = {} + self.__big_segments[key][_make_big_segment_ref(segment)] = included return self - - def with_no_big_segments_for_user(self, user: dict) -> 'EvaluatorBuilder': - self.__big_segments[user['key']] = {} + + def with_no_big_segments_for_key(self, key: str) -> 'EvaluatorBuilder': + self.__big_segments[key] = {} return self def with_big_segments_status(self, status: str) -> 'EvaluatorBuilder': self.__big_segments_status = status return self - def _get_flag(self, key: str) -> Optional[dict]: + def _get_flag(self, key: str) -> Optional[FeatureFlag]: if key not in self.__flags: raise Exception("test made unexpected request for flag '%s'" % key) return self.__flags[key] - def _get_segment(self, key: str) -> Optional[dict]: + def _get_segment(self, key: str) -> Optional[Segment]: if key not in self.__segments: raise Exception("test made unexpected request for segment '%s'" % key) return self.__segments[key] def _get_big_segments_membership(self, key: str) -> Tuple[Optional[dict], str]: if key not in self.__big_segments: - raise Exception("test made unexpected request for big segments for user key 
'%s'" % key) + raise Exception("test made unexpected request for big segments for context key '%s'" % key) return (self.__big_segments[key], self.__big_segments_status) basic_evaluator = EvaluatorBuilder().build() -def make_boolean_flag_with_rules(rules) -> dict: - return { - 'key': 'feature', - 'on': True, - 'rules': rules, - 'fallthrough': { 'variation': 0 }, - 'variations': [ False, True ], - 'salt': '' - } +def assert_eval_result(result, expected_detail, expected_events): + assert result.detail == expected_detail + assert result.events == expected_events + -def make_boolean_flag_with_clause(clause: dict) -> dict: - return make_boolean_flag_with_rules([ - { - 'clauses': [ clause ], - 'variation': 1 - } - ]) +def assert_match(evaluator: Evaluator, flag: FeatureFlag, context: Context, expect_value: Any): + result = evaluator.evaluate(flag, context, event_factory) + assert result.detail.value == expect_value -def make_boolean_flag_matching_segment(segment: dict) -> dict: - return make_boolean_flag_with_clause({ - 'attribute': '', - 'op': 'segmentMatch', - 'values': [ segment['key'] ] - }) -def make_clause_matching_user(user: dict) -> dict: - return { 'attribute': 'key', 'op': 'in', 'values': [ user['key'] ] } +def make_clause_matching_user(user: Union[Context, dict]) -> dict: + key = user.key if isinstance(user, Context) else user['key'] + return { 'attribute': 'key', 'op': 'in', 'values': [ key ] } diff --git a/testing/impl/events/__init__.py b/testing/impl/events/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/testing/test_diagnostics.py b/testing/impl/events/test_diagnostics.py similarity index 91% rename from testing/test_diagnostics.py rename to testing/impl/events/test_diagnostics.py index c725e8d9..1de0566f 100644 --- a/testing/test_diagnostics.py +++ b/testing/impl/events/test_diagnostics.py @@ -2,9 +2,9 @@ import uuid from ldclient.config import Config, HTTPConfig -from ldclient.diagnostics import create_diagnostic_id, 
create_diagnostic_init, _DiagnosticAccumulator, _create_diagnostic_config_object from ldclient.feature_store import CacheConfig from ldclient.feature_store_helpers import CachingStoreWrapper +from ldclient.impl.events.diagnostics import create_diagnostic_id, create_diagnostic_init, _DiagnosticAccumulator, _create_diagnostic_config_object def test_create_diagnostic_id(): test_config = Config(sdk_key = "SDK_KEY", http=HTTPConfig()) @@ -42,7 +42,7 @@ def test_create_diagnostic_config_defaults(): test_config = Config("SDK_KEY") diag_config = _create_diagnostic_config_object(test_config) - assert len(diag_config) == 17 + assert len(diag_config) == 16 assert diag_config['customBaseURI'] is False assert diag_config['customEventsURI'] is False assert diag_config['customStreamURI'] is False @@ -57,7 +57,6 @@ def test_create_diagnostic_config_defaults(): assert diag_config['pollingIntervalMillis'] == 30000 assert diag_config['userKeysCapacity'] == 1000 assert diag_config['userKeysFlushIntervalMillis'] == 300000 - assert diag_config['inlineUsersInEvents'] is False assert diag_config['diagnosticRecordingIntervalMillis'] == 900000 assert diag_config['dataStoreType'] == 'memory' @@ -66,11 +65,11 @@ def test_create_diagnostic_config_custom(): test_config = Config("SDK_KEY", base_uri='https://test.com', events_uri='https://test.com', events_max_pending=10, flush_interval=1, stream_uri='https://test.com', stream=False, poll_interval=60, use_ldd=True, feature_store=test_store, - all_attributes_private=True, user_keys_capacity=10, user_keys_flush_interval=60, - inline_users_in_events=True, http=HTTPConfig(http_proxy = 'proxy', read_timeout=1, connect_timeout=1), diagnostic_recording_interval=60) + all_attributes_private=True, context_keys_capacity=10, context_keys_flush_interval=60, + http=HTTPConfig(http_proxy = 'proxy', read_timeout=1, connect_timeout=1), diagnostic_recording_interval=60) diag_config = _create_diagnostic_config_object(test_config) - assert len(diag_config) == 17 + 
assert len(diag_config) == 16 assert diag_config['customBaseURI'] is True assert diag_config['customEventsURI'] is True assert diag_config['customStreamURI'] is True @@ -85,7 +84,6 @@ def test_create_diagnostic_config_custom(): assert diag_config['pollingIntervalMillis'] == 60000 assert diag_config['userKeysCapacity'] == 10 assert diag_config['userKeysFlushIntervalMillis'] == 60000 - assert diag_config['inlineUsersInEvents'] is True assert diag_config['diagnosticRecordingIntervalMillis'] == 60000 assert diag_config['dataStoreType'] == 'MyFavoriteStore' diff --git a/testing/impl/events/test_event_context_formatter.py b/testing/impl/events/test_event_context_formatter.py new file mode 100644 index 00000000..06662ab3 --- /dev/null +++ b/testing/impl/events/test_event_context_formatter.py @@ -0,0 +1,82 @@ +from ldclient.context import Context +from ldclient.impl.events.event_context_formatter import EventContextFormatter + +def test_simple_context(): + f = EventContextFormatter(False, []) + c = Context.create('a') + assert f.format_context(c) == {'kind': 'user', 'key': 'a'} + +def test_context_with_more_attributes(): + f = EventContextFormatter(False, []) + c = Context.builder('a').name('b').anonymous(True).set('c', True).set('d', 2).build() + assert f.format_context(c) == { + 'kind': 'user', + 'key': 'a', + 'name': 'b', + 'anonymous': True, + 'c': True, + 'd': 2 + } + +def test_multi_context(): + f = EventContextFormatter(False, []) + c = Context.create_multi( + Context.create('a'), + Context.builder('b').kind('c').name('d').build() + ) + assert f.format_context(c) == { + 'kind': 'multi', + 'user': { + 'key': 'a' + }, + 'c': { + 'key': 'b', + 'name': 'd' + } + } + +def test_all_private(): + f = EventContextFormatter(True, []) + c = Context.builder('a').name('b').anonymous(True).set('c', True).set('d', 2).build() + assert f.format_context(c) == { + 'kind': 'user', + 'key': 'a', + 'anonymous': True, + '_meta': {'redactedAttributes': ['name', 'c', 'd']} + } + +def 
test_some_private_global(): + f = EventContextFormatter(False, ['name', 'd']) + c = Context.builder('a').name('b').anonymous(True).set('c', True).set('d', 2).build() + assert f.format_context(c) == { + 'kind': 'user', + 'key': 'a', + 'anonymous': True, + 'c': True, + '_meta': {'redactedAttributes': ['name', 'd']} + } + +def test_some_private_per_context(): + f = EventContextFormatter(False, ['name']) + c = Context.builder('a').name('b').anonymous(True).set('c', True).set('d', 2).private('d').build() + assert f.format_context(c) == { + 'kind': 'user', + 'key': 'a', + 'anonymous': True, + 'c': True, + '_meta': {'redactedAttributes': ['name', 'd']} + } + +def test_private_property_in_object(): + f = EventContextFormatter(False, ['/b/prop1', '/c/prop2/sub1']) + c = Context.builder('a') \ + .set('b', {'prop1': True, 'prop2': 3}) \ + .set('c', {'prop1': {'sub1': True}, 'prop2': {'sub1': 4, 'sub2': 5}}) \ + .build() + assert f.format_context(c) == { + 'kind': 'user', + 'key': 'a', + 'b': {'prop2': 3}, + 'c': {'prop1': {'sub1': True}, 'prop2': {'sub2': 5}}, + '_meta': {'redactedAttributes': ['/b/prop1', '/c/prop2/sub1']} + } diff --git a/testing/test_event_factory.py b/testing/impl/events/test_event_factory.py similarity index 65% rename from testing/test_event_factory.py rename to testing/impl/events/test_event_factory.py index e039c6c7..f1e40b2d 100644 --- a/testing/test_event_factory.py +++ b/testing/impl/events/test_event_factory.py @@ -1,72 +1,66 @@ -import pytest +from ldclient.context import Context from ldclient.evaluation import EvaluationDetail -from ldclient.impl.event_factory import _EventFactory +from ldclient.impl.events.types import EventFactory -_event_factory_default = _EventFactory(False) -_user = { 'key': 'x' } +from testing.builders import * + +_event_factory_default = EventFactory(False) +_user = Context.create('x') def make_basic_flag_with_rules(kind, should_track_events): - rule = { - 'rollout': { - 'variations': [ - { 'variation': 0, 'weight': 50000 
}, - { 'variation': 1, 'weight': 50000 } - ] - } - } + rule_builder = FlagRuleBuilder().rollout({ + 'variations': [ + { 'variation': 0, 'weight': 50000 }, + { 'variation': 1, 'weight': 50000 } + ] + }) if kind == 'rulematch': - rule.update({'trackEvents': should_track_events}) - - flag = { - 'key': 'feature', - 'on': True, - 'rules': [rule], - 'fallthrough': { 'variation': 0 }, - 'variations': [ False, True ], - 'salt': '' - } + rule_builder.track_events(should_track_events) + + flag_builder = FlagBuilder('feature').on(True).fallthrough_variation(0).variations(False, True) \ + .rules(rule_builder.build()) if kind == 'fallthrough': - flag.update({'trackEventsFallthrough': should_track_events}) - return flag + flag_builder.track_events_fallthrough(should_track_events) + return flag_builder.build() def test_fallthrough_track_event_false(): flag = make_basic_flag_with_rules('fallthrough', False) detail = EvaluationDetail('b', 1, {'kind': 'FALLTHROUGH'}) eval = _event_factory_default.new_eval_event(flag, _user, detail, 'b', None) - assert eval.get('trackEvents') is None + assert eval.track_events is False def test_fallthrough_track_event_true(): flag = make_basic_flag_with_rules('fallthrough', True) detail = EvaluationDetail('b', 1, {'kind': 'FALLTHROUGH'}) eval = _event_factory_default.new_eval_event(flag, _user, detail, 'b', None) - assert eval['trackEvents'] == True + assert eval.track_events is True def test_fallthrough_track_event_false_with_experiment(): flag = make_basic_flag_with_rules('fallthrough', False) detail = EvaluationDetail('b', 1, {'kind': 'FALLTHROUGH', 'inExperiment': True}) eval = _event_factory_default.new_eval_event(flag, _user, detail, 'b', None) - assert eval['trackEvents'] == True + assert eval.track_events is True def test_rulematch_track_event_false(): flag = make_basic_flag_with_rules('rulematch', False) detail = EvaluationDetail('b', 1, {'kind': 'RULE_MATCH', 'ruleIndex': 0}) eval = _event_factory_default.new_eval_event(flag, _user, detail, 
'b', None) - assert eval.get('trackEvents') is None + assert eval.track_events is False def test_rulematch_track_event_true(): flag = make_basic_flag_with_rules('rulematch', True) detail = EvaluationDetail('b', 1, {'kind': 'RULE_MATCH', 'ruleIndex': 0}) eval = _event_factory_default.new_eval_event(flag, _user, detail, 'b', None) - assert eval['trackEvents'] == True + assert eval.track_events is True def test_rulematch_track_event_false_with_experiment(): flag = make_basic_flag_with_rules('rulematch', False) detail = EvaluationDetail('b', 1, {'kind': 'RULE_MATCH', 'ruleIndex': 0, 'inExperiment': True}) eval = _event_factory_default.new_eval_event(flag, _user, detail, 'b', None) - assert eval['trackEvents'] == True + assert eval.track_events is True diff --git a/testing/impl/events/test_event_processor.py b/testing/impl/events/test_event_processor.py new file mode 100644 index 00000000..15e95bba --- /dev/null +++ b/testing/impl/events/test_event_processor.py @@ -0,0 +1,517 @@ +import json +from threading import Thread +import time +import uuid + +from ldclient.config import Config +from ldclient.context import Context +from ldclient.impl.events.diagnostics import create_diagnostic_id, _DiagnosticAccumulator +from ldclient.impl.events.event_processor import DefaultEventProcessor +from ldclient.impl.events.types import EventInput, EventInputCustom, EventInputEvaluation, EventInputIdentify + +from testing.builders import * +from testing.proxy_test_util import do_proxy_tests +from testing.stub_util import MockHttp + + +default_config = Config("fake_sdk_key") +context = Context.builder('userkey').name('Red').build() +filtered_context = context.to_dict() # TODO: implement attribute redaction +filtered_context = { + 'kind': 'user', + 'key': 'userkey', + '_meta': {'redactedAttributes': ['name']} +} +flag = FlagBuilder('flagkey').version(2).build() +timestamp = 10000 + +ep = None +mock_http = None + + +def setup_function(): + global mock_http + mock_http = MockHttp() + +def 
teardown_function(): + if ep is not None: + ep.stop() + +def make_context_keys(context: Context) -> dict: + ret = {} # type: dict[str, str] + for i in range(context.individual_context_count): + c = context.get_individual_context(i) + if c is not None: + ret[c.kind] = c.key + return ret + + +class DefaultTestProcessor(DefaultEventProcessor): + def __init__(self, **kwargs): + if not 'diagnostic_opt_out' in kwargs: + kwargs['diagnostic_opt_out'] = True + if not 'sdk_key' in kwargs: + kwargs['sdk_key'] = 'SDK_KEY' + config = Config(**kwargs) + diagnostic_accumulator = _DiagnosticAccumulator(create_diagnostic_id(config)) + DefaultEventProcessor.__init__(self, config, mock_http, diagnostic_accumulator = diagnostic_accumulator) + +def test_identify_event_is_queued(): + with DefaultTestProcessor() as ep: + e = EventInputIdentify(timestamp, context) + ep.send_event(e) + + output = flush_and_get_events(ep) + assert len(output) == 1 + check_identify_event(output[0], e) + +def test_context_is_filtered_in_identify_event(): + with DefaultTestProcessor(all_attributes_private = True) as ep: + e = EventInputIdentify(timestamp, context) + ep.send_event(e) + + output = flush_and_get_events(ep) + assert len(output) == 1 + check_identify_event(output[0], e, filtered_context) + +def test_individual_feature_event_is_queued_with_index_event(): + with DefaultTestProcessor() as ep: + e = EventInputEvaluation(timestamp, context, flag.key, flag, 1, 'value', None, 'default', None, True) + ep.send_event(e) + + output = flush_and_get_events(ep) + assert len(output) == 3 + check_index_event(output[0], e) + check_feature_event(output[1], e) + check_summary_event(output[2]) + +def test_context_is_filtered_in_index_event(): + with DefaultTestProcessor(all_attributes_private = True) as ep: + e = EventInputEvaluation(timestamp, context, flag.key, flag, 1, 'value', None, 'default', None, True) + ep.send_event(e) + + output = flush_and_get_events(ep) + assert len(output) == 3 + 
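The `make_context_keys` helper above builds the event payload's `contextKeys` map by walking each individual context in a (possibly multi-kind) context. A standalone sketch of the same idea, using a stand-in tuple type since the real test uses `ldclient.Context`:

```python
from typing import List, NamedTuple

class FakeContext(NamedTuple):
    """Stand-in for ldclient.Context's (kind, key) pair; illustrative only."""
    kind: str
    key: str

def make_context_keys(contexts: List[FakeContext]) -> dict:
    # Mirrors the helper in the patch: one entry per individual context,
    # mapping context kind -> context key.
    return {c.kind: c.key for c in contexts}

keys = make_context_keys([FakeContext('user', 'u1'), FakeContext('org', 'o1')])
# keys == {'user': 'u1', 'org': 'o1'}
```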
check_index_event(output[0], e, filtered_context) + check_feature_event(output[1], e) + check_summary_event(output[2]) + +def test_two_events_for_same_context_only_produce_one_index_event(): + with DefaultTestProcessor(context_keys_flush_interval = 300) as ep: + e0 = EventInputEvaluation(timestamp, context, flag.key, flag, 1, 'value1', None, 'default', None, True) + e1 = EventInputEvaluation(timestamp, context, flag.key, flag, 2, 'value2', None, 'default', None, True) + ep.send_event(e0) + ep.send_event(e1) + + output = flush_and_get_events(ep) + assert len(output) == 4 + check_index_event(output[0], e0) + check_feature_event(output[1], e0) + check_feature_event(output[2], e1) + check_summary_event(output[3]) + +def test_new_index_event_is_added_if_context_cache_has_been_cleared(): + with DefaultTestProcessor(context_keys_flush_interval = 0.1) as ep: + e0 = EventInputEvaluation(timestamp, context, flag.key, flag, 1, 'value1', None, 'default', None, True) + e1 = EventInputEvaluation(timestamp, context, flag.key, flag, 2, 'value2', None, 'default', None, True) + ep.send_event(e0) + time.sleep(0.2) + ep.send_event(e1) + + output = flush_and_get_events(ep) + assert len(output) == 5 + check_index_event(output[0], e0) + check_feature_event(output[1], e0) + check_index_event(output[2], e1) + check_feature_event(output[3], e1) + check_summary_event(output[4]) + +def test_event_kind_is_debug_if_flag_is_temporarily_in_debug_mode(): + with DefaultTestProcessor() as ep: + future_time = now() + 100000 + debugged_flag = FlagBuilder(flag.key).version(flag.version).debug_events_until_date(future_time).build() + e = EventInputEvaluation(timestamp, context, debugged_flag.key, debugged_flag, 1, 'value', None, 'default', None, False) + ep.send_event(e) + + output = flush_and_get_events(ep) + assert len(output) == 3 + check_index_event(output[0], e) + check_debug_event(output[1], e) + check_summary_event(output[2]) + +def test_event_can_be_both_tracked_and_debugged(): + with 
DefaultTestProcessor() as ep: + future_time = now() + 100000 + debugged_flag = FlagBuilder(flag.key).version(flag.version).debug_events_until_date(future_time).build() + e = EventInputEvaluation(timestamp, context, debugged_flag.key, debugged_flag, 1, 'value', None, 'default', None, True) + ep.send_event(e) + + output = flush_and_get_events(ep) + assert len(output) == 4 + check_index_event(output[0], e) + check_feature_event(output[1], e) + check_debug_event(output[2], e) + check_summary_event(output[3]) + +def test_debug_mode_does_not_expire_if_both_client_time_and_server_time_are_before_expiration_time(): + with DefaultTestProcessor() as ep: + # Pick a server time that slightly different from client time + server_time = now() + 1000 + + # Send and flush an event we don't care about, just to set the last server time + mock_http.set_server_time(server_time) + ep.send_event(EventInputIdentify(timestamp, Context.create('otherUser'))) + flush_and_get_events(ep) + + # Now send an event with debug mode on, with a "debug until" time that is further in + # the future than both the client time and the server time + debug_until = server_time + 10000 + debugged_flag = FlagBuilder(flag.key).version(flag.version).debug_events_until_date(debug_until).build() + e = EventInputEvaluation(timestamp, context, debugged_flag.key, debugged_flag, 1, 'value', None, 'default', None, False) + ep.send_event(e) + + # Should get a summary event only, not a full feature event + output = flush_and_get_events(ep) + assert len(output) == 3 + check_index_event(output[0], e) + check_debug_event(output[1], e) + check_summary_event(output[2]) + +def test_debug_mode_expires_based_on_client_time_if_client_time_is_later_than_server_time(): + with DefaultTestProcessor() as ep: + # Pick a server time that is somewhat behind the client time + server_time = now() - 20000 + + # Send and flush an event we don't care about, just to set the last server time + mock_http.set_server_time(server_time) + 
ep.send_event(EventInputIdentify(timestamp, Context.create('otherUser'))) + flush_and_get_events(ep) + + # Now send an event with debug mode on, with a "debug until" time that is further in + # the future than the server time, but in the past compared to the client. + debug_until = server_time + 1000 + debugged_flag = FlagBuilder(flag.key).version(flag.version).debug_events_until_date(debug_until).build() + e = EventInputEvaluation(timestamp, context, debugged_flag.key, debugged_flag, 1, 'value', None, 'default', None, False) + ep.send_event(e) + + # Should get a summary event only, not a full feature event + output = flush_and_get_events(ep) + assert len(output) == 2 + check_index_event(output[0], e) + check_summary_event(output[1]) + +def test_debug_mode_expires_based_on_server_time_if_server_time_is_later_than_client_time(): + with DefaultTestProcessor() as ep: + # Pick a server time that is somewhat ahead of the client time + server_time = now() + 20000 + + # Send and flush an event we don't care about, just to set the last server time + mock_http.set_server_time(server_time) + ep.send_event(EventInputIdentify(timestamp, Context.create('otherUser'))) + flush_and_get_events(ep) + + # Now send an event with debug mode on, with a "debug until" time that is further in + # the future than the client time, but in the past compared to the server. 
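Taken together, the three debug-expiration tests encode one rule: a flag's `debugEventsUntilDate` cutoff must be in the future relative to *both* the client clock and the last-seen server clock for debug events to be emitted. A sketch of that predicate, inferred from the test expectations (the real logic lives inside the event processor):

```python
def debug_mode_active(debug_until_ms: int, client_time_ms: int,
                      last_server_time_ms: int) -> bool:
    # Debug events are emitted only while the cutoff is still in the
    # future from both the client's and the server's point of view
    # (assumption derived from the three tests above).
    return (debug_until_ms > client_time_ms
            and debug_until_ms > last_server_time_ms)

now, server = 1_000_000, 1_001_000
assert debug_mode_active(server + 10_000, now, server)            # both clocks before cutoff
assert not debug_mode_active(server + 1_000, server + 20_000, server)  # client past cutoff
assert not debug_mode_active(server - 2_000, now - 20_000, server)     # server past cutoff
```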
+ debug_until = server_time - 1000 + debugged_flag = FlagBuilder(flag.key).version(flag.version).debug_events_until_date(debug_until).build() + e = EventInputEvaluation(timestamp, context, debugged_flag.key, debugged_flag, 1, 'value', None, 'default', None, False) + ep.send_event(e) + + # Should get a summary event only, not a full feature event + output = flush_and_get_events(ep) + assert len(output) == 2 + check_index_event(output[0], e) + check_summary_event(output[1]) + +def test_nontracked_events_are_summarized(): + with DefaultTestProcessor() as ep: + flag1 = FlagBuilder('flagkey1').version(11).build() + flag2 = FlagBuilder('flagkey2').version(22).build() + earlier_time, later_time = 1111111, 2222222 + e1 = EventInputEvaluation(later_time, context, flag1.key, flag1, 1, 'value1', None, 'default1', None, False) + e2 = EventInputEvaluation(earlier_time, context, flag2.key, flag2, 2, 'value2', None, 'default2', None, False) + ep.send_event(e1) + ep.send_event(e2) + + output = flush_and_get_events(ep) + assert len(output) == 2 + check_index_event(output[0], e1) + se = output[1] + assert se['kind'] == 'summary' + assert se['startDate'] == earlier_time + assert se['endDate'] == later_time + assert se['features'] == { + 'flagkey1': { + 'contextKinds': ['user'], + 'default': 'default1', + 'counters': [ { 'version': 11, 'variation': 1, 'value': 'value1', 'count': 1 } ] + }, + 'flagkey2': { + 'contextKinds': ['user'], + 'default': 'default2', + 'counters': [ { 'version': 22, 'variation': 2, 'value': 'value2', 'count': 1 } ] + } + } + +def test_custom_event_is_queued_with_user(): + with DefaultTestProcessor() as ep: + e = EventInputCustom(timestamp, context, 'eventkey', { 'thing': 'stuff '}, 1.5) + ep.send_event(e) + + output = flush_and_get_events(ep) + assert len(output) == 2 + check_index_event(output[0], e) + check_custom_event(output[1], e) + +def test_nothing_is_sent_if_there_are_no_events(): + with DefaultTestProcessor() as ep: + ep.flush() + 
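`test_nontracked_events_are_summarized` above asserts a specific summary shape: one `features` entry per flag key, each holding de-duplicated `(version, variation, value)` counters, plus the min/max timestamps as `startDate`/`endDate`. A simplified aggregation sketch matching that shape (it omits `contextKinds` and other real-payload fields):

```python
def summarize(events):
    """events: iterables of (timestamp, flag_key, version, variation, value, default).
    Simplified sketch of summary aggregation; not the SDK implementation."""
    summary = {'kind': 'summary', 'startDate': None, 'endDate': None, 'features': {}}
    for ts, key, version, variation, value, default in events:
        if summary['startDate'] is None or ts < summary['startDate']:
            summary['startDate'] = ts
        if summary['endDate'] is None or ts > summary['endDate']:
            summary['endDate'] = ts
        feature = summary['features'].setdefault(key, {'default': default, 'counters': []})
        for c in feature['counters']:
            if (c['version'], c['variation'], c['value']) == (version, variation, value):
                c['count'] += 1  # same result seen again: bump the counter
                break
        else:
            feature['counters'].append(
                {'version': version, 'variation': variation, 'value': value, 'count': 1})
    return summary

s = summarize([(2222222, 'flagkey1', 11, 1, 'value1', 'default1'),
               (1111111, 'flagkey2', 22, 2, 'value2', 'default2')])
# s['startDate'] == 1111111, s['endDate'] == 2222222
```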
ep._wait_until_inactive() + assert mock_http.request_data is None + +def test_sdk_key_is_sent(): + with DefaultTestProcessor(sdk_key = 'SDK_KEY') as ep: + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + + assert mock_http.request_headers.get('Authorization') == 'SDK_KEY' + +def test_wrapper_header_not_sent_when_not_set(): + with DefaultTestProcessor() as ep: + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + + assert mock_http.request_headers.get('X-LaunchDarkly-Wrapper') is None + +def test_wrapper_header_sent_when_set(): + with DefaultTestProcessor(wrapper_name = "Flask", wrapper_version = "0.0.1") as ep: + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + + assert mock_http.request_headers.get('X-LaunchDarkly-Wrapper') == "Flask/0.0.1" + +def test_wrapper_header_sent_without_version(): + with DefaultTestProcessor(wrapper_name = "Flask") as ep: + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + + assert mock_http.request_headers.get('X-LaunchDarkly-Wrapper') == "Flask" + +def test_event_schema_set_on_event_send(): + with DefaultTestProcessor() as ep: + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + + assert mock_http.request_headers.get('X-LaunchDarkly-Event-Schema') == "4" + +def test_sdk_key_is_sent_on_diagnostic_request(): + with DefaultTestProcessor(sdk_key = 'SDK_KEY', diagnostic_opt_out=False) as ep: + ep._wait_until_inactive() + assert mock_http.request_headers.get('Authorization') == 'SDK_KEY' + +def test_event_schema_not_set_on_diagnostic_send(): + with DefaultTestProcessor(diagnostic_opt_out=False) as ep: + ep._wait_until_inactive() + assert mock_http.request_headers.get('X-LaunchDarkly-Event-Schema') is None + +def test_init_diagnostic_event_sent(): + with DefaultTestProcessor(diagnostic_opt_out=False) as ep: + 
diag_init = flush_and_get_events(ep) + # Fields are tested in test_diagnostics.py + assert len(diag_init) == 6 + assert diag_init['kind'] == 'diagnostic-init' + +def test_periodic_diagnostic_includes_events_in_batch(): + with DefaultTestProcessor(diagnostic_opt_out=False) as ep: + # Ignore init event + flush_and_get_events(ep) + # Send a payload with a single event + ep.send_event(EventInputIdentify(timestamp, context)) + flush_and_get_events(ep) + + ep._send_diagnostic() + diag_event = flush_and_get_events(ep) + assert len(diag_event) == 8 + assert diag_event['kind'] == 'diagnostic' + assert diag_event['eventsInLastBatch'] == 1 + assert diag_event['deduplicatedUsers'] == 0 + +def test_periodic_diagnostic_includes_deduplicated_users(): + with DefaultTestProcessor(diagnostic_opt_out=False) as ep: + # Ignore init event + flush_and_get_events(ep) + # Send two custom events with the same user to cause a user deduplication + e0 = EventInputCustom(timestamp, context, 'event1', None, None) + e1 = EventInputCustom(timestamp, context, 'event2', None, None) + ep.send_event(e0) + ep.send_event(e1) + flush_and_get_events(ep) + + ep._send_diagnostic() + diag_event = flush_and_get_events(ep) + assert len(diag_event) == 8 + assert diag_event['kind'] == 'diagnostic' + assert diag_event['eventsInLastBatch'] == 3 + assert diag_event['deduplicatedUsers'] == 1 + +def test_no_more_payloads_are_sent_after_401_error(): + verify_unrecoverable_http_error(401) + +def test_no_more_payloads_are_sent_after_403_error(): + verify_unrecoverable_http_error(403) + +def test_will_still_send_after_408_error(): + verify_recoverable_http_error(408) + +def test_will_still_send_after_429_error(): + verify_recoverable_http_error(429) + +def test_will_still_send_after_500_error(): + verify_recoverable_http_error(500) + +def test_does_not_block_on_full_inbox(): + config = Config("fake_sdk_key", events_max_pending=1) # this sets the size of both the inbox and the outbox to 1 + ep_inbox_holder = [ None ] + 
ep_inbox = None + + def dispatcher_factory(inbox, config, http, diag): + ep_inbox_holder[0] = inbox # it's an array because otherwise it's hard for a closure to modify a variable + return None # the dispatcher object itself doesn't matter, we only manipulate the inbox + def event_consumer(): + while True: + message = ep_inbox.get(block=True) + if message.type == 'stop': + message.param.set() + return + def start_consuming_events(): + Thread(target=event_consumer).start() + + with DefaultEventProcessor(config, mock_http, dispatcher_factory) as ep: + ep_inbox = ep_inbox_holder[0] + event1 = EventInputCustom(timestamp, context, 'event1') + event2 = EventInputCustom(timestamp, context, 'event2') + ep.send_event(event1) + ep.send_event(event2) # this event should be dropped - inbox is full + message1 = ep_inbox.get(block=False) + had_no_more = ep_inbox.empty() + start_consuming_events() + assert message1.param == event1 + assert had_no_more + +def test_http_proxy(monkeypatch): + def _event_processor_proxy_test(server, config, secure): + with DefaultEventProcessor(config) as ep: + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + do_proxy_tests(_event_processor_proxy_test, 'POST', monkeypatch) + +def verify_unrecoverable_http_error(status): + with DefaultTestProcessor(sdk_key = 'SDK_KEY') as ep: + mock_http.set_response_status(status) + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + mock_http.reset() + + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + assert mock_http.request_data is None + +def verify_recoverable_http_error(status): + with DefaultTestProcessor(sdk_key = 'SDK_KEY') as ep: + mock_http.set_response_status(status) + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + mock_http.reset() + + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + 
ep._wait_until_inactive() + assert mock_http.request_data is not None + +def test_event_payload_id_is_sent(): + with DefaultEventProcessor(Config(sdk_key = 'SDK_KEY'), mock_http) as ep: + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + + headerVal = mock_http.request_headers.get('X-LaunchDarkly-Payload-ID') + assert headerVal is not None + # Throws on invalid UUID + uuid.UUID(headerVal) + +def test_event_payload_id_changes_between_requests(): + with DefaultEventProcessor(Config(sdk_key = 'SDK_KEY'), mock_http) as ep: + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + + ep.send_event(EventInputIdentify(timestamp, context)) + ep.flush() + ep._wait_until_inactive() + + firstPayloadId = mock_http.recorded_requests[0][0].get('X-LaunchDarkly-Payload-ID') + secondPayloadId = mock_http.recorded_requests[1][0].get('X-LaunchDarkly-Payload-ID') + assert firstPayloadId != secondPayloadId + +def flush_and_get_events(ep): + ep.flush() + ep._wait_until_inactive() + if mock_http.request_data is None: + raise AssertionError('Expected to get an HTTP request but did not get one') + else: + return json.loads(mock_http.request_data) + +def check_identify_event(data, source: EventInput, context_json: Optional[dict] = None): + assert data['kind'] == 'identify' + assert data['creationDate'] == source.timestamp + assert data['context'] == (source.context.to_dict() if context_json is None else context_json) + +def check_index_event(data, source: EventInput, context_json: Optional[dict] = None): + assert data['kind'] == 'index' + assert data['creationDate'] == source.timestamp + assert data['context'] == (source.context.to_dict() if context_json is None else context_json) + +def check_feature_event(data, source: EventInputEvaluation): + assert data['kind'] == 'feature' + assert data['creationDate'] == source.timestamp + assert data['key'] == source.key + # Note: the conditional expressions below must be parenthesized; Python's == binds + # more tightly than "x if c else y", so without parentheses the assertion would + # reduce to a truthy version number and pass vacuously. + assert data.get('version') == (None if
source.flag is None else source.flag.version) + assert data.get('variation') == source.variation + assert data.get('value') == source.value + assert data.get('default') == source.default_value + assert data['contextKeys'] == make_context_keys(source.context) + assert data.get('prereq_of') == (None if source.prereq_of is None else source.prereq_of.key) + +def check_debug_event(data, source: EventInputEvaluation, context_json: Optional[dict] = None): + assert data['kind'] == 'debug' + assert data['creationDate'] == source.timestamp + assert data['key'] == source.key + assert data.get('version') == (None if source.flag is None else source.flag.version) + assert data.get('variation') == source.variation + assert data.get('value') == source.value + assert data.get('default') == source.default_value + assert data['context'] == (source.context.to_dict() if context_json is None else context_json) + assert data.get('prereq_of') == (None if source.prereq_of is None else source.prereq_of.key) + +def check_custom_event(data, source: EventInputCustom): + assert data['kind'] == 'custom' + assert data['creationDate'] == source.timestamp + assert data['key'] == source.key + assert data['data'] == source.data + assert data['contextKeys'] == make_context_keys(source.context) + assert data.get('metricValue') == source.metric_value + +def check_summary_event(data): + assert data['kind'] == 'summary' + +def now(): + return int(time.time() * 1000) diff --git a/testing/impl/events/test_event_summarizer.py b/testing/impl/events/test_event_summarizer.py new file mode 100644 index 00000000..5f4a76ac --- /dev/null +++ b/testing/impl/events/test_event_summarizer.py @@ -0,0 +1,52 @@ +from ldclient.context import Context +from ldclient.impl.events.event_summarizer import EventSummarizer, EventSummaryCounter, EventSummaryFlag +from ldclient.impl.events.types import * + +from testing.builders import * + + +user = Context.create('user1') +flag1 = FlagBuilder('flag1').version(11).build() +flag2 =
FlagBuilder('flag2').version(22).build() + + +def test_summarize_event_sets_start_and_end_dates(): + es = EventSummarizer() + event1 = EventInputEvaluation(2000, user, flag1.key, flag1, 0, '', None, None) + event2 = EventInputEvaluation(1000, user, flag1.key, flag1, 0, '', None, None) + event3 = EventInputEvaluation(1500, user, flag1.key, flag1, 0, '', None, None) + es.summarize_event(event1) + es.summarize_event(event2) + es.summarize_event(event3) + data = es.snapshot() + + assert data.start_date == 1000 + assert data.end_date == 2000 + +def test_summarize_event_increments_counters(): + es = EventSummarizer() + event1 = EventInputEvaluation(1000, user, flag1.key, flag1, 1, 'value1', None, 'default1') + event2 = EventInputEvaluation(1000, user, flag1.key, flag1, 2, 'value2', None, 'default1') + event3 = EventInputEvaluation(1000, user, flag2.key, flag2, 1, 'value99', None, 'default2') + event4 = EventInputEvaluation(1000, user, flag1.key, flag1, 1, 'value1', None, 'default1') + event5 = EventInputEvaluation(1000, user, 'badkey', None, None, 'default3', None, 'default3') + es.summarize_event(event1) + es.summarize_event(event2) + es.summarize_event(event3) + es.summarize_event(event4) + es.summarize_event(event5) + data = es.snapshot() + + expected = { + 'flag1': EventSummaryFlag({'user'}, 'default1', { + (1, flag1.version): EventSummaryCounter(2, 'value1'), + (2, flag1.version): EventSummaryCounter(1, 'value2') + }), + 'flag2': EventSummaryFlag({'user'}, 'default2', { + (1, flag2.version): EventSummaryCounter(1, 'value99') + }), + 'badkey': EventSummaryFlag({'user'}, 'default3', { + (None, None): EventSummaryCounter(1, 'default3') + }) + } + assert data.flags == expected diff --git a/testing/impl/test_attribute_ref.py b/testing/impl/test_attribute_ref.py new file mode 100644 index 00000000..4370e98f --- /dev/null +++ b/testing/impl/test_attribute_ref.py @@ -0,0 +1,54 @@ +from ldclient.impl.model.attribute_ref import * + +import pytest + + +class TestAttributeRef: 
+ @pytest.mark.parametrize("input", ["", "/"]) + def test_invalid_attr_ref_from_path(self, input: str): + a = AttributeRef.from_path(input) + assert a.valid is False + assert a.error is not None + assert a.depth == 0 + + @pytest.mark.parametrize("input", [""]) + def test_invalid_attr_ref_from_literal(self, input: str): + a = AttributeRef.from_literal(input) + assert a.valid is False + assert a.error is not None + assert a.depth == 0 + + @pytest.mark.parametrize("input", ["name", "name/with/slashes", "name~0~1with-what-looks-like-escape-sequences"]) + def test_ref_with_no_leading_slash(self, input: str): + a = AttributeRef.from_path(input) + assert a.valid is True + assert a.error is None + assert a.depth == 1 + assert a[0] == input + + @pytest.mark.parametrize("input,unescaped", [ + ("/name", "name"), + ("/0", "0"), + ("/name~1with~1slashes~0and~0tildes", "name/with/slashes~and~tildes") + ]) + def test_ref_simple_with_leading_slash(self, input: str, unescaped: str): + a = AttributeRef.from_path(input) + assert a.valid is True + assert a.error is None + assert a.depth == 1 + assert a[0] == unescaped + + @pytest.mark.parametrize("input", []) + def test_literal(self, input: str): + a = AttributeRef.from_literal(input) + assert a.valid is True + assert a.error is None + assert a.depth == 1 + assert a[0] == input + + def test_get_component(self): + a = AttributeRef.from_path("/first/sec~1ond/third") + assert a.depth == 3 + assert a[0] == "first" + assert a[1] == "sec/ond" + assert a[2] == "third" diff --git a/testing/impl/test_big_segments.py b/testing/impl/test_big_segments.py index 9cb8e3fd..2b880e4c 100644 --- a/testing/impl/test_big_segments.py +++ b/testing/impl/test_big_segments.py @@ -55,7 +55,7 @@ def test_membership_query_cache_can_expire(): store = MockBigSegmentStore() store.setup_metadata_always_up_to_date() store.setup_membership(user_hash, expected_membership) - manager = BigSegmentStoreManager(BigSegmentsConfig(store=store, user_cache_time=0.005)) + 
manager = BigSegmentStoreManager(BigSegmentsConfig(store=store, context_cache_time=0.005)) try: expected_result = (expected_membership, BigSegmentsStatus.HEALTHY) assert manager.get_user_membership(user_key) == expected_result @@ -89,7 +89,7 @@ def test_membership_query_stale_status_no_store_metadata(): finally: manager.stop() -def test_membership_query_least_recent_user_evicted_from_cache(): +def test_membership_query_least_recent_context_evicted_from_cache(): user_key_1, user_key_2, user_key_3 = 'userkey1', 'userkey2', 'userkey3' user_hash_1, user_hash_2, user_hash_3 = _hash_for_user_key(user_key_1), \ _hash_for_user_key(user_key_2), _hash_for_user_key(user_key_3) @@ -100,7 +100,7 @@ def test_membership_query_least_recent_user_evicted_from_cache(): store.setup_membership(user_hash_2, membership_2) store.setup_membership(user_hash_3, membership_3) - manager = BigSegmentStoreManager(BigSegmentsConfig(store=store, user_cache_size=2)) + manager = BigSegmentStoreManager(BigSegmentsConfig(store=store, context_cache_size=2)) try: result1 = manager.get_user_membership(user_key_1) diff --git a/testing/impl/test_evaluator.py b/testing/impl/test_evaluator.py index e48353ab..5e0e8044 100644 --- a/testing/impl/test_evaluator.py +++ b/testing/impl/test_evaluator.py @@ -1,467 +1,104 @@ -import math -import pytest +from ldclient.client import Context from ldclient.evaluation import EvaluationDetail -from ldclient.impl.evaluator import _bucket_user, _variation_index_for_user +from testing.builders import * from testing.impl.evaluator_util import * -def assert_eval_result(result, expected_detail, expected_events): - assert result.detail == expected_detail - assert result.events == expected_events - - def test_flag_returns_off_variation_if_flag_is_off(): - flag = { - 'key': 'feature', - 'on': False, - 'offVariation': 1, - 'variations': ['a', 'b', 'c'] - } - user = { 'key': 'x' } + flag = FlagBuilder('feature').on(False).off_variation(1).variations('a', 'b', 'c').build() + user = 
Context.create('x') detail = EvaluationDetail('b', 1, {'kind': 'OFF'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) def test_flag_returns_none_if_flag_is_off_and_off_variation_is_unspecified(): - flag = { - 'key': 'feature', - 'on': False, - 'variations': ['a', 'b', 'c'] - } - user = { 'key': 'x' } + flag = FlagBuilder('feature').on(False).variations('a', 'b', 'c').build() + user = Context.create('x') detail = EvaluationDetail(None, None, {'kind': 'OFF'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) def test_flag_returns_error_if_off_variation_is_too_high(): - flag = { - 'key': 'feature', - 'on': False, - 'offVariation': 999, - 'variations': ['a', 'b', 'c'] - } - user = { 'key': 'x' } + flag = FlagBuilder('feature').on(False).off_variation(999).variations('a', 'b', 'c').build() + user = Context.create('x') detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': 'MALFORMED_FLAG'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) def test_flag_returns_error_if_off_variation_is_negative(): - flag = { - 'key': 'feature', - 'on': False, - 'offVariation': -1, - 'variations': ['a', 'b', 'c'] - } - user = { 'key': 'x' } + flag = FlagBuilder('feature').on(False).off_variation(-1).variations('a', 'b', 'c').build() + user = Context.create('x') detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': 'MALFORMED_FLAG'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) -def test_flag_returns_off_variation_if_prerequisite_not_found(): - flag = { - 'key': 'feature0', - 'on': True, - 'prerequisites': [{'key': 'badfeature', 'variation': 1}], - 'fallthrough': { 'variation': 0 }, - 'offVariation': 1, - 'variations': ['a', 'b', 'c'] - } - evaluator = EvaluatorBuilder().with_unknown_flag('badfeature').build() - user = { 'key': 'x' } - detail = EvaluationDetail('b', 1, {'kind': 'PREREQUISITE_FAILED', 
'prerequisiteKey': 'badfeature'}) - assert_eval_result(evaluator.evaluate(flag, user, event_factory), detail, None) - -def test_flag_returns_off_variation_and_event_if_prerequisite_is_off(): - flag = { - 'key': 'feature0', - 'on': True, - 'prerequisites': [{'key': 'feature1', 'variation': 1}], - 'fallthrough': { 'variation': 0 }, - 'offVariation': 1, - 'variations': ['a', 'b', 'c'], - 'version': 1 - } - flag1 = { - 'key': 'feature1', - 'off': False, - 'offVariation': 1, - # note that even though it returns the desired variation, it is still off and therefore not a match - 'fallthrough': { 'variation': 0 }, - 'variations': ['d', 'e'], - 'version': 2, - 'trackEvents': False - } - evaluator = EvaluatorBuilder().with_flag(flag1).build() - user = { 'key': 'x' } - detail = EvaluationDetail('b', 1, {'kind': 'PREREQUISITE_FAILED', 'prerequisiteKey': 'feature1'}) - events_should_be = [{'kind': 'feature', 'key': 'feature1', 'variation': 1, 'value': 'e', 'default': None, - 'version': 2, 'user': user, 'prereqOf': 'feature0'}] - assert_eval_result(evaluator.evaluate(flag, user, event_factory), detail, events_should_be) - -def test_flag_returns_off_variation_and_event_if_prerequisite_is_not_met(): - flag = { - 'key': 'feature0', - 'on': True, - 'prerequisites': [{'key': 'feature1', 'variation': 1}], - 'fallthrough': { 'variation': 0 }, - 'offVariation': 1, - 'variations': ['a', 'b', 'c'], - 'version': 1 - } - flag1 = { - 'key': 'feature1', - 'on': True, - 'fallthrough': { 'variation': 0 }, - 'variations': ['d', 'e'], - 'version': 2, - 'trackEvents': False - } - evaluator = EvaluatorBuilder().with_flag(flag1).build() - user = { 'key': 'x' } - detail = EvaluationDetail('b', 1, {'kind': 'PREREQUISITE_FAILED', 'prerequisiteKey': 'feature1'}) - events_should_be = [{'kind': 'feature', 'key': 'feature1', 'variation': 0, 'value': 'd', 'default': None, - 'version': 2, 'user': user, 'prereqOf': 'feature0'}] - assert_eval_result(evaluator.evaluate(flag, user, event_factory), detail, 
events_should_be) - -def test_flag_returns_fallthrough_and_event_if_prereq_is_met_and_there_are_no_rules(): - flag = { - 'key': 'feature0', - 'on': True, - 'prerequisites': [{ 'key': 'feature1', 'variation': 1 }], - 'fallthrough': { 'variation': 0 }, - 'offVariation': 1, - 'variations': ['a', 'b', 'c'], - 'version': 1 - } - flag1 = { - 'key': 'feature1', - 'on': True, - 'fallthrough': { 'variation': 1 }, - 'variations': ['d', 'e'], - 'version': 2, - 'trackEvents': False - } - evaluator = EvaluatorBuilder().with_flag(flag1).build() - user = { 'key': 'x' } - detail = EvaluationDetail('a', 0, {'kind': 'FALLTHROUGH'}) - events_should_be = [{'kind': 'feature', 'key': 'feature1', 'variation': 1, 'value': 'e', 'default': None, - 'version': 2, 'user': user, 'prereqOf': 'feature0'}] - assert_eval_result(evaluator.evaluate(flag, user, event_factory), detail, events_should_be) - def test_flag_returns_error_if_fallthrough_variation_is_too_high(): - flag = { - 'key': 'feature', - 'on': True, - 'fallthrough': {'variation': 999}, - 'variations': ['a', 'b', 'c'] - } - user = { 'key': 'x' } + flag = FlagBuilder('feature').on(True).variations('a', 'b', 'c').fallthrough_variation(999).build() + user = Context.create('x') detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': 'MALFORMED_FLAG'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) def test_flag_returns_error_if_fallthrough_variation_is_negative(): - flag = { - 'key': 'feature', - 'on': True, - 'fallthrough': {'variation': -1}, - 'variations': ['a', 'b', 'c'] - } - user = { 'key': 'x' } + flag = FlagBuilder('feature').on(True).variations('a', 'b', 'c').fallthrough_variation(-1).build() + user = Context.create('x') detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': 'MALFORMED_FLAG'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) def test_flag_returns_error_if_fallthrough_has_no_variation_or_rollout(): - flag = { - 
'key': 'feature', - 'on': True, - 'fallthrough': {}, - 'variations': ['a', 'b', 'c'] - } - user = { 'key': 'x' } + flag = FlagBuilder('feature').on(True).variations('a', 'b', 'c').build() + user = Context.create('x') detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': 'MALFORMED_FLAG'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) def test_flag_returns_error_if_fallthrough_has_rollout_with_no_variations(): - flag = { - 'key': 'feature', - 'on': True, - 'fallthrough': {'rollout': {'variations': []}}, - 'variations': ['a', 'b', 'c'], - 'salt': '' - } - user = { 'key': 'x' } + flag = FlagBuilder('feature').on(True).variations('a', 'b', 'c').fallthrough_rollout({'variations': []}).build() + user = Context.create('x') detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': 'MALFORMED_FLAG'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) -def test_flag_matches_user_from_targets(): - flag = { - 'key': 'feature0', - 'on': True, - 'targets': [{ 'values': ['whoever', 'userkey'], 'variation': 2 }], - 'fallthrough': { 'variation': 0 }, - 'offVariation': 1, - 'variations': ['a', 'b', 'c'] - } - user = { 'key': 'userkey' } - detail = EvaluationDetail('c', 2, {'kind': 'TARGET_MATCH'}) - assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) - def test_flag_matches_user_from_rules(): - rule = { 'id': 'id', 'clauses': [{'attribute': 'key', 'op': 'in', 'values': ['userkey']}], 'variation': 1} - flag = make_boolean_flag_with_rules([rule]) - user = { 'key': 'userkey' } - detail = EvaluationDetail(True, 1, {'kind': 'RULE_MATCH', 'ruleIndex': 0, 'ruleId': 'id'}) + rule = { 'id': 'id', 'clauses': [{'attribute': 'key', 'op': 'in', 'values': ['userkey']}], 'variation': 0} + flag = make_boolean_flag_with_rules(rule) + user = Context.create('userkey') + detail = EvaluationDetail(True, 0, {'kind': 'RULE_MATCH', 'ruleIndex': 0, 'ruleId': 'id'}) 
assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) def test_flag_returns_error_if_rule_variation_is_too_high(): rule = { 'id': 'id', 'clauses': [{'attribute': 'key', 'op': 'in', 'values': ['userkey']}], 'variation': 999} - flag = make_boolean_flag_with_rules([rule]) - user = { 'key': 'userkey' } + flag = make_boolean_flag_with_rules(rule) + user = Context.create('userkey') detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': 'MALFORMED_FLAG'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) def test_flag_returns_error_if_rule_variation_is_negative(): rule = { 'id': 'id', 'clauses': [{'attribute': 'key', 'op': 'in', 'values': ['userkey']}], 'variation': -1} - flag = make_boolean_flag_with_rules([rule]) - user = { 'key': 'userkey' } + flag = make_boolean_flag_with_rules(rule) + user = Context.create('userkey') detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': 'MALFORMED_FLAG'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) def test_flag_returns_error_if_rule_has_no_variation_or_rollout(): rule = { 'id': 'id', 'clauses': [{'attribute': 'key', 'op': 'in', 'values': ['userkey']}]} - flag = make_boolean_flag_with_rules([rule]) - user = { 'key': 'userkey' } + flag = make_boolean_flag_with_rules(rule) + user = Context.create('userkey') detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': 'MALFORMED_FLAG'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) def test_flag_returns_error_if_rule_has_rollout_with_no_variations(): rule = { 'id': 'id', 'clauses': [{'attribute': 'key', 'op': 'in', 'values': ['userkey']}], 'rollout': {'variations': []} } - flag = make_boolean_flag_with_rules([rule]) - user = { 'key': 'userkey' } + flag = make_boolean_flag_with_rules(rule) + user = Context.create('userkey') detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': 
'MALFORMED_FLAG'}) assert_eval_result(basic_evaluator.evaluate(flag, user, event_factory), detail, None) -def test_user_key_is_coerced_to_string_for_evaluation(): - clause = { 'attribute': 'key', 'op': 'in', 'values': [ '999' ] } - flag = make_boolean_flag_with_clause(clause) - user = { 'key': 999 } - assert basic_evaluator.evaluate(flag, user, event_factory).detail.value == True - -def test_secondary_key_is_coerced_to_string_for_evaluation(): - # We can't really verify that the rollout calculation works correctly, but we can at least - # make sure it doesn't error out if there's a non-string secondary value (ch35189) - rule = { - 'id': 'ruleid', - 'clauses': [ - { 'attribute': 'key', 'op': 'in', 'values': [ 'userkey' ] } - ], - 'rollout': { - 'salt': '', - 'variations': [ { 'weight': 100000, 'variation': 1 } ] - } - } - flag = make_boolean_flag_with_rules([rule]) - user = { 'key': 'userkey', 'secondary': 999 } - assert basic_evaluator.evaluate(flag, user, event_factory).detail.value == True - def test_segment_match_clause_retrieves_segment_from_store(): - segment = { - "key": "segkey", - "included": [ "foo" ], - "version": 1 - } + segment = SegmentBuilder('segkey').included('foo').build() evaluator = EvaluatorBuilder().with_segment(segment).build() - - user = { "key": "foo" } - flag = { - "key": "test", - "variations": [ False, True ], - "fallthrough": { "variation": 0 }, - "on": True, - "rules": [ - { - "clauses": [ - { - "attribute": "", - "op": "segmentMatch", - "values": [ "segkey" ] - } - ], - "variation": 1 - } - ] - } + user = Context.create('foo') + flag = make_boolean_flag_matching_segment(segment) assert evaluator.evaluate(flag, user, event_factory).detail.value == True def test_segment_match_clause_falls_through_with_no_errors_if_segment_not_found(): - user = { "key": "foo" } - flag = { - "key": "test", - "variations": [ False, True ], - "fallthrough": { "variation": 0 }, - "on": True, - "rules": [ - { - "clauses": [ - { - "attribute": "", - "op": 
"segmentMatch", - "values": [ "segkey" ] - } - ], - "variation": 1 - } - ] - } + user = Context.create('foo') + flag = make_boolean_flag_with_clauses(make_clause_matching_segment_key('segkey')) evaluator = EvaluatorBuilder().with_unknown_segment('segkey').build() assert evaluator.evaluate(flag, user, event_factory).detail.value == False - -def test_clause_matches_builtin_attribute(): - clause = { - 'attribute': 'name', - 'op': 'in', - 'values': [ 'Bob' ] - } - user = { 'key': 'x', 'name': 'Bob' } - flag = make_boolean_flag_with_clause(clause) - assert basic_evaluator.evaluate(flag, user, event_factory).detail.value == True - -def test_clause_matches_custom_attribute(): - clause = { - 'attribute': 'legs', - 'op': 'in', - 'values': [ 4 ] - } - user = { 'key': 'x', 'name': 'Bob', 'custom': { 'legs': 4 } } - flag = make_boolean_flag_with_clause(clause) - assert basic_evaluator.evaluate(flag, user, event_factory).detail.value == True - -def test_clause_returns_false_for_missing_attribute(): - clause = { - 'attribute': 'legs', - 'op': 'in', - 'values': [ 4 ] - } - user = { 'key': 'x', 'name': 'Bob' } - flag = make_boolean_flag_with_clause(clause) - assert basic_evaluator.evaluate(flag, user, event_factory).detail.value == False - -def test_clause_can_be_negated(): - clause = { - 'attribute': 'name', - 'op': 'in', - 'values': [ 'Bob' ], - 'negate': True - } - user = { 'key': 'x', 'name': 'Bob' } - flag = make_boolean_flag_with_clause(clause) - assert basic_evaluator.evaluate(flag, user, event_factory).detail.value == False - -def test_variation_index_is_returned_for_bucket(): - user = { 'key': 'userkey' } - flag = { 'key': 'flagkey', 'salt': 'salt' } - - # First verify that with our test inputs, the bucket value will be greater than zero and less than 100000, - # so we can construct a rollout whose second bucket just barely contains that value - bucket_value = math.trunc(_bucket_user(None, user, flag['key'], flag['salt'], 'key') * 100000) - assert bucket_value > 0 and 
bucket_value < 100000 - - bad_variation_a = 0 - matched_variation = 1 - bad_variation_b = 2 - rule = { - 'rollout': { - 'variations': [ - { 'variation': bad_variation_a, 'weight': bucket_value }, # end of bucket range is not inclusive, so it will *not* match the target value - { 'variation': matched_variation, 'weight': 1 }, # size of this bucket is 1, so it only matches that specific value - { 'variation': bad_variation_b, 'weight': 100000 - (bucket_value + 1) } - ] - } - } - result_variation = _variation_index_for_user(flag, rule, user) - assert result_variation == (matched_variation, False) - -def test_last_bucket_is_used_if_bucket_value_equals_total_weight(): - user = { 'key': 'userkey' } - flag = { 'key': 'flagkey', 'salt': 'salt' } - - # We'll construct a list of variations that stops right at the target bucket value - bucket_value = math.trunc(_bucket_user(None, user, flag['key'], flag['salt'], 'key') * 100000) - - rule = { - 'rollout': { - 'variations': [ - { 'variation': 0, 'weight': bucket_value } - ] - } - } - result_variation = _variation_index_for_user(flag, rule, user) - assert result_variation == (0, False) - -def test_bucket_by_user_key(): - user = { u'key': u'userKeyA' } - bucket = _bucket_user(None, user, 'hashKey', 'saltyA', 'key') - assert bucket == pytest.approx(0.42157587) - - user = { u'key': u'userKeyB' } - bucket = _bucket_user(None, user, 'hashKey', 'saltyA', 'key') - assert bucket == pytest.approx(0.6708485) - - user = { u'key': u'userKeyC' } - bucket = _bucket_user(None, user, 'hashKey', 'saltyA', 'key') - assert bucket == pytest.approx(0.10343106) - -def test_bucket_by_user_key_with_seed(): - seed = 61 - user = { u'key': u'userKeyA' } - point = _bucket_user(seed, user, 'hashKey', 'saltyA', 'key') - assert point == pytest.approx(0.09801207) - - user = { u'key': u'userKeyB' } - point = _bucket_user(seed, user, 'hashKey', 'saltyA', 'key') - assert point == pytest.approx(0.14483777) - - user = { u'key': u'userKeyC' } - point = 
_bucket_user(seed, user, 'hashKey', 'saltyA', 'key')
-    assert point == pytest.approx(0.9242641)
-
-def test_bucket_by_int_attr():
-    user = {
-        u'key': u'userKey',
-        u'custom': {
-            u'intAttr': 33333,
-            u'stringAttr': u'33333'
-        }
-    }
-    bucket = _bucket_user(None, user, 'hashKey', 'saltyA', 'intAttr')
-    assert bucket == pytest.approx(0.54771423)
-    bucket2 = _bucket_user(None, user, 'hashKey', 'saltyA', 'stringAttr')
-    assert bucket2 == bucket
-
-def test_bucket_by_float_attr_not_allowed():
-    user = {
-        u'key': u'userKey',
-        u'custom': {
-            u'floatAttr': 33.5
-        }
-    }
-    bucket = _bucket_user(None, user, 'hashKey', 'saltyA', 'floatAttr')
-    assert bucket == 0.0
-
-def test_seed_independent_of_salt_and_hashKey():
-    seed = 61
-    user = { u'key': u'userKeyA' }
-    point1 = _bucket_user(seed, user, 'hashKey', 'saltyA', 'key')
-    point2 = _bucket_user(seed, user, 'hashKey', 'saltyB', 'key')
-    point3 = _bucket_user(seed, user, 'hashKey2', 'saltyA', 'key')
-
-    assert point1 == point2
-    assert point2 == point3
-
-def test_seed_changes_hash_evaluation():
-    seed1 = 61
-    user = { u'key': u'userKeyA' }
-    point1 = _bucket_user(seed1, user, 'hashKey', 'saltyA', 'key')
-    seed2 = 62
-    point2 = _bucket_user(seed2, user, 'hashKey', 'saltyB', 'key')
-
-    assert point1 != point2
diff --git a/testing/impl/test_evaluator_big_segment.py b/testing/impl/test_evaluator_big_segment.py
index 1c60bd2d..959728d6 100644
--- a/testing/impl/test_evaluator_big_segment.py
+++ b/testing/impl/test_evaluator_big_segment.py
@@ -1,75 +1,82 @@
 import pytest
 
 from ldclient.evaluation import BigSegmentsStatus
+from testing.builders import *
 from testing.impl.evaluator_util import *
 
 def test_big_segment_with_no_generation_is_not_matched():
-    segment = {
-        'key': 'test',
-        'included': [ basic_user['key'] ],  # included should be ignored for a big segment
-        'version': 1,
-        'unbounded': True
-    }
+    segment = SegmentBuilder('key').version(1) \
+        .included(basic_user.key) \
+        .unbounded(True) \
+        .build()
+    # included should be ignored for a big segment
     evaluator = EvaluatorBuilder().with_segment(segment).build()
     flag = make_boolean_flag_matching_segment(segment)
     result = evaluator.evaluate(flag, basic_user, event_factory)
     assert result.detail.value == False
     assert result.detail.reason['bigSegmentsStatus'] == BigSegmentsStatus.NOT_CONFIGURED
 
-def test_big_segment_matched_with_include():
-    segment = {
-        'key': 'test',
-        'version': 1,
-        'unbounded': True,
-        'generation': 2
-    }
-    evaluator = EvaluatorBuilder().with_segment(segment).with_big_segment_for_user(basic_user, segment, True).build()
+def test_big_segment_matched_with_include_for_default_kind():
+    _test_matched_with_include(False, False)
+    _test_matched_with_include(False, True)
+
+def test_big_segment_matched_with_include_for_non_default_kind():
+    _test_matched_with_include(True, False)
+    _test_matched_with_include(True, True)
+
+def _test_matched_with_include(non_default_kind: bool, multi_kind_context: bool):
+    target_key = 'contextkey'
+    single_kind_context = Context.create(target_key, 'kind1') if non_default_kind else Context.create(target_key)
+    eval_context = Context.create_multi(single_kind_context, Context.create('key2', 'kind2')) if multi_kind_context \
+        else single_kind_context
+
+    segment = SegmentBuilder('key').version(1) \
+        .unbounded(True) \
+        .unbounded_context_kind('kind1' if non_default_kind else None) \
+        .generation(2) \
+        .build()
     flag = make_boolean_flag_matching_segment(segment)
-    result = evaluator.evaluate(flag, basic_user, event_factory)
+    evaluator = EvaluatorBuilder().with_segment(segment).with_big_segment_for_key(target_key, segment, True).build()
+
+    result = evaluator.evaluate(flag, eval_context, event_factory)
     assert result.detail.value == True
     assert result.detail.reason['bigSegmentsStatus'] == BigSegmentsStatus.HEALTHY
 
 def test_big_segment_matched_with_rule():
-    segment = {
-        'key': 'test',
-        'version': 1,
-        'unbounded': True,
-        'generation': 2,
-        'rules': [
-            { 'clauses': [ make_clause_matching_user(basic_user) ] }
-        ]
-    }
-    evaluator = EvaluatorBuilder().with_segment(segment).with_no_big_segments_for_user(basic_user).build()
+    segment = SegmentBuilder('key').version(1) \
+        .unbounded(True) \
+        .generation(2) \
+        .rules(
+            make_segment_rule_matching_context(basic_user)
+        ) \
+        .build()
+    evaluator = EvaluatorBuilder().with_segment(segment).with_no_big_segments_for_key(basic_user.key).build()
     flag = make_boolean_flag_matching_segment(segment)
     result = evaluator.evaluate(flag, basic_user, event_factory)
     assert result.detail.value == True
     assert result.detail.reason['bigSegmentsStatus'] == BigSegmentsStatus.HEALTHY
 
 def test_big_segment_unmatched_by_exclude_regardless_of_rule():
-    segment = {
-        'key': 'test',
-        'version': 1,
-        'unbounded': True,
-        'generation': 2,
-        'rules': [
-            { 'clauses': make_clause_matching_user(basic_user) }
-        ]
-    }
-    evaluator = EvaluatorBuilder().with_segment(segment).with_big_segment_for_user(basic_user, segment, False).build()
+    segment = SegmentBuilder('key').version(1) \
+        .unbounded(True) \
+        .generation(2) \
+        .rules(
+            make_segment_rule_matching_context(basic_user)
+        ) \
+        .build()
+    evaluator = EvaluatorBuilder().with_segment(segment).with_big_segment_for_key(basic_user.key, segment, False).build()
     flag = make_boolean_flag_matching_segment(segment)
     result = evaluator.evaluate(flag, basic_user, event_factory)
     assert result.detail.value == False
     assert result.detail.reason['bigSegmentsStatus'] == BigSegmentsStatus.HEALTHY
 
 def test_big_segment_status_is_returned_by_provider():
-    segment = {
-        'key': 'test',
-        'version': 1,
-        'unbounded': True,
-        'generation': 1
-    }
-    evaluator = EvaluatorBuilder().with_segment(segment).with_no_big_segments_for_user(basic_user). \
+    segment = SegmentBuilder('key').version(1) \
+        .unbounded(True) \
+        .generation(1) \
+        .build()
+    evaluator = EvaluatorBuilder().with_segment(segment).with_no_big_segments_for_key(basic_user.key). \
         with_big_segments_status(BigSegmentsStatus.NOT_CONFIGURED).build()
     flag = make_boolean_flag_matching_segment(segment)
     result = evaluator.evaluate(flag, basic_user, event_factory)
diff --git a/testing/impl/test_evaluator_bucketing.py b/testing/impl/test_evaluator_bucketing.py
new file mode 100644
index 00000000..767668b5
--- /dev/null
+++ b/testing/impl/test_evaluator_bucketing.py
@@ -0,0 +1,131 @@
+from ldclient.client import Context
+from ldclient.impl.evaluator import _bucket_context, _variation_index_for_context
+from ldclient.impl.model import *
+
+from testing.builders import *
+from testing.impl.evaluator_util import *
+
+import math
+import pytest
+
+
+def assert_match_clause(clause: dict, context: Context, should_match: bool):
+    assert_match(basic_evaluator, make_boolean_flag_with_clauses(clause), context, should_match)
+
+
+class TestEvaluatorBucketing:
+    def test_variation_index_is_returned_for_bucket(self):
+        user = Context.create('userkey')
+        flag = FlagBuilder('key').salt('salt').build()
+
+        # First verify that with our test inputs, the bucket value will be greater than zero and less than 100000,
+        # so we can construct a rollout whose second bucket just barely contains that value
+        bucket_value = math.trunc(_bucket_context(None, user, None, flag.key, flag.salt, None) * 100000)
+        assert bucket_value > 0 and bucket_value < 100000
+
+        bad_variation_a = 0
+        matched_variation = 1
+        bad_variation_b = 2
+        rule = VariationOrRollout({
+            'rollout': {
+                'variations': [
+                    { 'variation': bad_variation_a, 'weight': bucket_value },  # end of bucket range is not inclusive, so it will *not* match the target value
+                    { 'variation': matched_variation, 'weight': 1 },  # size of this bucket is 1, so it only matches that specific value
+                    { 'variation': bad_variation_b, 'weight': 100000 - (bucket_value + 1) }
+                ]
+            }
+        })
+        result_variation = _variation_index_for_context(flag, rule, user)
+        assert result_variation == (matched_variation, False)
+
+    def test_last_bucket_is_used_if_bucket_value_equals_total_weight(self):
+        user = Context.create('userkey')
+        flag = FlagBuilder('key').salt('salt').build()
+
+        # We'll construct a list of variations that stops right at the target bucket value
+        bucket_value = math.trunc(_bucket_context(None, user, None, flag.key, flag.salt, None) * 100000)
+
+        rule = VariationOrRollout({
+            'rollout': {
+                'variations': [
+                    { 'variation': 0, 'weight': bucket_value }
+                ]
+            }
+        })
+        result_variation = _variation_index_for_context(flag, rule, user)
+        assert result_variation == (0, False)
+
+    def test_bucket_by_user_key(self):
+        user = Context.create('userKeyA')
+        bucket = _bucket_context(None, user, None, 'hashKey', 'saltyA', None)
+        assert bucket == pytest.approx(0.42157587)
+
+        user = Context.create('userKeyB')
+        bucket = _bucket_context(None, user, None, 'hashKey', 'saltyA', None)
+        assert bucket == pytest.approx(0.6708485)
+
+        user = Context.create('userKeyC')
+        bucket = _bucket_context(None, user, None, 'hashKey', 'saltyA', None)
+        assert bucket == pytest.approx(0.10343106)
+
+    def test_bucket_by_user_key_with_seed(self):
+        seed = 61
+        user = Context.create('userKeyA')
+        point = _bucket_context(seed, user, None, 'hashKey', 'saltyA', None)
+        assert point == pytest.approx(0.09801207)
+
+        user = Context.create('userKeyB')
+        point = _bucket_context(seed, user, None, 'hashKey', 'saltyA', None)
+        assert point == pytest.approx(0.14483777)
+
+        user = Context.create('userKeyC')
+        point = _bucket_context(seed, user, None, 'hashKey', 'saltyA', None)
+        assert point == pytest.approx(0.9242641)
+
+    def test_bucket_by_int_attr(self):
+        user = Context.builder('userKey').set('intAttr', 33333).set('stringAttr', '33333').build()
+        bucket = _bucket_context(None, user, None, 'hashKey', 'saltyA', AttributeRef.from_literal('intAttr'))
+        assert bucket == pytest.approx(0.54771423)
+        bucket2 = _bucket_context(None, user, None, 'hashKey', 'saltyA', AttributeRef.from_literal('stringAttr'))
+        assert bucket2 == bucket
+
+    def test_bucket_by_float_attr_not_allowed(self):
+        user = Context.builder('userKey').set('floatAttr', 33.5).build()
+        bucket = _bucket_context(None, user, None, 'hashKey', 'saltyA', AttributeRef.from_literal('floatAttr'))
+        assert bucket == 0.0
+
+    def test_seed_independent_of_salt_and_hashKey(self):
+        seed = 61
+        user = Context.create('userKeyA')
+        point1 = _bucket_context(seed, user, None, 'hashKey', 'saltyA', None)
+        point2 = _bucket_context(seed, user, None, 'hashKey', 'saltyB', None)
+        point3 = _bucket_context(seed, user, None, 'hashKey2', 'saltyA', None)
+
+        assert point1 == point2
+        assert point2 == point3
+
+    def test_seed_changes_hash_evaluation(self):
+        seed1 = 61
+        user = Context.create('userKeyA')
+        point1 = _bucket_context(seed1, user, None, 'hashKey', 'saltyA', None)
+        seed2 = 62
+        point2 = _bucket_context(seed2, user, None, 'hashKey', 'saltyB', None)
+
+        assert point1 != point2
+
+    def test_context_kind_selects_context(self):
+        seed = 357
+        context1 = Context.create('key1')
+        context2 = Context.create('key2', 'kind2')
+        multi = Context.create_multi(context1, context2)
+        key = 'flag-key'
+        salt = 'testing123'
+
+        assert _bucket_context(seed, context1, None, key, salt, None) == \
+            _bucket_context(seed, context1, 'user', key, salt, None)
+        assert _bucket_context(seed, context1, None, key, salt, None) == \
+            _bucket_context(seed, multi, 'user', key, salt, None)
+        assert _bucket_context(seed, context2, 'kind2', key, salt, None) == \
+            _bucket_context(seed, multi, 'kind2', key, salt, None)
+        assert _bucket_context(seed, multi, 'user', key, salt, None) != \
+            _bucket_context(seed, multi, 'kind2', key, salt, None)
diff --git a/testing/impl/test_evaluator_clause.py b/testing/impl/test_evaluator_clause.py
new file mode 100644
index 00000000..3a74d68e
--- /dev/null
+++ b/testing/impl/test_evaluator_clause.py
@@ -0,0 +1,72 @@
+from ldclient.client import Context
+from testing.builders import *
+from testing.impl.evaluator_util import *
+
+
+def assert_match_clause(clause: dict, context: Context, should_match: bool):
+    assert_match(basic_evaluator, make_boolean_flag_with_clauses(clause), context, should_match)
+
+
+class TestEvaluatorClause:
+    def test_match_built_in_attribute(self):
+        clause = make_clause(None, 'name', 'in', 'Bob')
+        context = Context.builder('key').name('Bob').build()
+        assert_match_clause(clause, context, True)
+
+    def test_match_custom_attribute(self):
+        clause = make_clause(None, 'legs', 'in', 4)
+        context = Context.builder('key').set('legs', 4).build()
+        assert_match_clause(clause, context, True)
+
+    def test_missing_attribute(self):
+        clause = make_clause(None, 'legs', 'in', '4')
+        context = Context.create('key')
+        assert_match_clause(clause, context, False)
+
+    def test_match_context_value_to_any_of_multiple_values(self):
+        clause = make_clause(None, 'name', 'in', 'Bob', 'Carol')
+        context = Context.builder('key').name('Carol').build()
+        assert_match_clause(clause, context, True)
+
+    def test_match_array_of_context_values_to_clause_value(self):
+        clause = make_clause(None, 'alias', 'in', 'Maurice')
+        context = Context.builder('key').set('alias', ['Space Cowboy', 'Maurice']).build()
+        assert_match_clause(clause, context, True)
+
+    def test_no_match_in_array_of_context_values(self):
+        clause = make_clause(None, 'alias', 'in', 'Ma')
+        context = Context.builder('key').set('alias', ['Mary', 'May']).build()
+        assert_match_clause(clause, context, False)
+
+    def test_negated_to_return_false(self):
+        clause = negate_clause(make_clause(None, 'name', 'in', 'Bob'))
+        context = Context.builder('key').name('Bob').build()
+        assert_match_clause(clause, context, False)
+
+    def test_negated_to_return_true(self):
+        clause = negate_clause(make_clause(None, 'name', 'in', 'Bobby'))
+        context = Context.builder('key').name('Bob').build()
+        assert_match_clause(clause, context, True)
+
+    def test_unknown_operator_does_not_match(self):
+        clause = make_clause(None, 'name', 'doesSomethingUnsupported', 'Bob')
+        context = Context.builder('key').name('Bob').build()
+        assert_match_clause(clause, context, False)
+
+    def test_clause_match_uses_context_kind(self):
+        clause = make_clause('company', 'name', 'in', 'Catco')
+        context1 = Context.builder('cc').kind('company').name('Catco').build()
+        context2 = Context.builder('l').name('Lucy').build()
+        context3 = Context.create_multi(context1, context2)
+        assert_match_clause(clause, context1, True)
+        assert_match_clause(clause, context2, False)
+        assert_match_clause(clause, context3, True)
+
+    def test_clause_match_by_kind_attribute(self):
+        clause = make_clause(None, 'kind', 'startsWith', 'a')
+        context1 = Context.create('key')
+        context2 = Context.create('key', 'ab')
+        context3 = Context.create_multi(Context.create('key', 'cd'), context2)
+        assert_match_clause(clause, context1, False)
+        assert_match_clause(clause, context2, True)
+        assert_match_clause(clause, context3, True)
diff --git a/testing/impl/test_evaluator_prerequisites.py b/testing/impl/test_evaluator_prerequisites.py
new file mode 100644
index 00000000..9f29e538
--- /dev/null
+++ b/testing/impl/test_evaluator_prerequisites.py
@@ -0,0 +1,74 @@
+import pytest
+
+from ldclient.client import Context
+from ldclient.evaluation import EvaluationDetail
+from ldclient.impl.events.types import EventInputEvaluation
+
+from testing.builders import *
+from testing.impl.evaluator_util import *
+
+
+def test_flag_returns_off_variation_if_prerequisite_not_found():
+    flag = FlagBuilder('feature').on(True).off_variation(1).variations('a', 'b', 'c').fallthrough_variation(1) \
+        .prerequisite('badfeature', 1).build()
+    evaluator = EvaluatorBuilder().with_unknown_flag('badfeature').build()
+    user = Context.create('x')
+    detail = EvaluationDetail('b', 1, {'kind': 'PREREQUISITE_FAILED', 'prerequisiteKey': 'badfeature'})
+    assert_eval_result(evaluator.evaluate(flag, user, event_factory), detail, None)
+
+def test_flag_returns_off_variation_and_event_if_prerequisite_is_off():
+    flag = FlagBuilder('feature0').on(True).off_variation(1).variations('a', 'b', 'c').fallthrough_variation(1) \
+        .prerequisite('feature1', 1).build()
+    flag1 = FlagBuilder('feature1').version(2).on(False).off_variation(1).variations('d', 'e').fallthrough_variation(1) \
+        .build()
+    # note that even though flag1 returns the desired variation, it is still off and therefore not a match
+    evaluator = EvaluatorBuilder().with_flag(flag1).build()
+    user = Context.create('x')
+    detail = EvaluationDetail('b', 1, {'kind': 'PREREQUISITE_FAILED', 'prerequisiteKey': 'feature1'})
+    events_should_be = [
+        EventInputEvaluation(0, user, flag1.key, flag1, 1, 'e', None, None, flag, False)
+    ]
+    assert_eval_result(evaluator.evaluate(flag, user, event_factory), detail, events_should_be)
+
+def test_flag_returns_off_variation_and_event_if_prerequisite_is_not_met():
+    flag = FlagBuilder('feature0').on(True).off_variation(1).variations('a', 'b', 'c').fallthrough_variation(1) \
+        .prerequisite('feature1', 1).build()
+    flag1 = FlagBuilder('feature1').version(2).on(True).off_variation(1).variations('d', 'e').fallthrough_variation(0) \
+        .build()
+    evaluator = EvaluatorBuilder().with_flag(flag1).build()
+    user = Context.create('x')
+    detail = EvaluationDetail('b', 1, {'kind': 'PREREQUISITE_FAILED', 'prerequisiteKey': 'feature1'})
+    events_should_be = [
+        EventInputEvaluation(0, user, flag1.key, flag1, 0, 'd', None, None, flag, False)
+    ]
+    assert_eval_result(evaluator.evaluate(flag, user, event_factory), detail, events_should_be)
+
+def test_flag_returns_fallthrough_and_event_if_prereq_is_met_and_there_are_no_rules():
+    flag = FlagBuilder('feature0').on(True).off_variation(1).variations('a', 'b', 'c').fallthrough_variation(0) \
+        .prerequisite('feature1', 1).build()
+    flag1 = FlagBuilder('feature1').version(2).on(True).off_variation(1).variations('d', 'e').fallthrough_variation(1) \
+        .build()
+    evaluator = EvaluatorBuilder().with_flag(flag1).build()
+    user = Context.create('x')
+    detail = EvaluationDetail('a', 0, {'kind': 'FALLTHROUGH'})
+    events_should_be = [
+        EventInputEvaluation(0, user, flag1.key, flag1, 1, 'e', None, None, flag, False)
+    ]
+    assert_eval_result(evaluator.evaluate(flag, user, event_factory), detail, events_should_be)
+
+@pytest.mark.parametrize("depth", [1, 2, 3, 4])
+def test_prerequisite_cycle_detection(depth: int):
+    flag_keys = list("flagkey%d" % i for i in range(depth))
+    flags = []
+    for i in range(depth):
+        flags.append(
+            FlagBuilder(flag_keys[i]).on(True).variations(False, True).off_variation(0) \
+                .prerequisite(flag_keys[(i + 1) % depth], 0) \
+                .build())
+    evaluator_builder = EvaluatorBuilder()
+    for f in flags:
+        evaluator_builder.with_flag(f)
+    evaluator = evaluator_builder.build()
+    context = Context.create('x')
+    detail = EvaluationDetail(None, None, {'kind': 'ERROR', 'errorKind': 'MALFORMED_FLAG'})
+    assert_eval_result(evaluator.evaluate(flags[0], context, event_factory), detail, None)
diff --git a/testing/impl/test_evaluator_segment.py b/testing/impl/test_evaluator_segment.py
index 901aef1f..2dddc34d 100644
--- a/testing/impl/test_evaluator_segment.py
+++ b/testing/impl/test_evaluator_segment.py
@@ -1,162 +1,184 @@
 import pytest
 
+from ldclient import Context
+from ldclient.impl.evaluator import _bucket_context
+from testing.builders import *
 from testing.impl.evaluator_util import *
 
-def _segment_matches_user(segment: dict, user: dict) -> bool:
+def _segment_matches_context(segment: Segment, context: Context) -> bool:
     e = EvaluatorBuilder().with_segment(segment).build()
     flag = make_boolean_flag_matching_segment(segment)
-    result = e.evaluate(flag, user, event_factory)
+    result = e.evaluate(flag, context, event_factory)
     return result.detail.value
 
+def verify_rollout(
+    eval_context: Context,
+    match_context: Context,
+    expected_bucket_value: int,
+    segment_key: str,
+    salt: str,
+    bucket_by: Optional[str],
+    rollout_context_kind: Optional[str]
+):
+    segment_should_match = SegmentBuilder(segment_key) \
+        .salt(salt) \
+        .rules(
+            SegmentRuleBuilder() \
+                .clauses(make_clause_matching_context(match_context)) \
+                .weight(expected_bucket_value + 1) \
+                .bucket_by(bucket_by) \
+                .rollout_context_kind(rollout_context_kind) \
+                .build()
+        ) \
+        .build()
+    segment_should_not_match = SegmentBuilder(segment_key) \
+        .salt(salt) \
+        .rules(
+            SegmentRuleBuilder() \
+                .clauses(make_clause_matching_context(match_context)) \
+                .weight(expected_bucket_value) \
+                .bucket_by(bucket_by) \
+                .rollout_context_kind(rollout_context_kind) \
+                .build()
+        ) \
+        .build()
+    assert _segment_matches_context(segment_should_match, eval_context) is True
+    assert _segment_matches_context(segment_should_not_match, eval_context) is False
+
 def test_explicit_include_user():
-    s = {
-        "key": "test",
-        "included": [ "foo" ],
-        "version": 1
-    }
-    u = { "key": "foo" }
-    assert _segment_matches_user(s, u) is True
+    user = Context.create('foo')
+    segment = SegmentBuilder('test').included(user.key).build()
+    assert _segment_matches_context(segment, user) is True
 
 def test_explicit_exclude_user():
-    s = {
-        "key": "test",
-        "excluded": [ "foo" ],
-        "version": 1
-    }
-    u = { "key": "foo" }
-    assert _segment_matches_user(s, u) is False
+    user = Context.create('foo')
+    segment = SegmentBuilder('test').excluded(user.key) \
+        .rules(make_segment_rule_matching_context(user)) \
+        .build()
+    assert _segment_matches_context(segment, user) is False
 
 def test_explicit_include_has_precedence():
-    s = {
-        "key": "test",
-        "included": [ "foo" ],
-        "excluded": [ "foo" ],
-        "version": 1
-    }
-    u = { "key": "foo" }
-    assert _segment_matches_user(s, u) is True
+    user = Context.create('foo')
+    segment = SegmentBuilder('test').included(user.key).excluded(user.key).build()
+    assert _segment_matches_context(segment, user) is True
+
+def test_included_key_for_context_kind():
+    c1 = Context.create('key1', 'kind1')
+    c2 = Context.create('key2', 'kind2')
+    multi = Context.create_multi(c1, c2)
+    segment = SegmentBuilder('test').included_contexts('kind1', 'key1').build()
+    assert _segment_matches_context(segment, c1) is True
+    assert _segment_matches_context(segment, c2) is False
+    assert _segment_matches_context(segment, multi) is True
+
+def test_excluded_key_for_context_kind():
+    c1 = Context.create('key1', 'kind1')
+    c2 = Context.create('key2', 'kind2')
+    multi = Context.create_multi(c1, c2)
+    segment = SegmentBuilder('test') \
+        .excluded_contexts('kind1', 'key1') \
+        .rules(
+            make_segment_rule_matching_context(c1),
+            make_segment_rule_matching_context(c2)
+        ) \
+        .build()
+    assert _segment_matches_context(segment, c1) is False
+    assert _segment_matches_context(segment, c2) is True
+    assert _segment_matches_context(segment, multi) is False
 
 def test_matching_rule_with_no_weight():
-    s = {
-        "key": "test",
-        "rules": [
-            {
-                "clauses": [
-                    {
-                        "attribute": "email",
-                        "op": "in",
-                        "values": [ "test@example.com" ]
-                    }
-                ]
-            }
-        ]
-    }
-    u = { "key": "foo", "email": "test@example.com" }
-    assert _segment_matches_user(s, u) is True
+    context = Context.create('foo')
+    segment = SegmentBuilder('test') \
+        .rules(
+            SegmentRuleBuilder().clauses(make_clause_matching_context(context)).build()
+        ) \
+        .build()
+    assert _segment_matches_context(segment, context) is True
 
 def test_matching_rule_with_none_weight():
-    s = {
-        "key": "test",
-        "rules": [
-            {
-                "clauses": [
-                    {
-                        "attribute": "email",
-                        "op": "in",
-                        "values": [ "test@example.com" ]
-                    }
-                ],
-                "weight": None
-            }
-        ]
-    }
-    u = { "key": "foo", "email": "test@example.com" }
-    assert _segment_matches_user(s, u) is True
+    context = Context.create('foo')
+    segment = SegmentBuilder('test') \
+        .rules(
+            SegmentRuleBuilder().weight(None).clauses(make_clause_matching_context(context)).build()
+        ) \
+        .build()
+    assert _segment_matches_context(segment, context) is True
 
 def test_matching_rule_with_full_rollout():
-    s = {
-        "key": "test",
-        "rules": [
-            {
-                "clauses": [
-                    {
-                        "attribute": "email",
-                        "op": "in",
-                        "values": [ "test@example.com" ]
-                    }
-                ],
-                "weight": 100000
-            }
-        ]
-    }
-    u = { "key": "foo", "email": "test@example.com" }
-    assert _segment_matches_user(s, u) is True
+    context = Context.create('foo')
+    segment = SegmentBuilder('test') \
+        .rules(
+            SegmentRuleBuilder().weight(100000).clauses(make_clause_matching_context(context)).build()
+        ) \
+        .build()
+    assert _segment_matches_context(segment, context) is True
 
 def test_matching_rule_with_zero_rollout():
-    s = {
-        "key": "test",
-        "rules": [
-            {
-                "clauses": [
-                    {
-                        "attribute": "email",
-                        "op": "in",
-                        "values": [ "test@example.com" ]
-                    }
-                ],
-                "weight": 0
-            }
-        ]
-    }
-    u = { "key": "foo", "email": "test@example.com" }
-    assert _segment_matches_user(s, u) is False
+    context = Context.create('foo')
+    segment = SegmentBuilder('test') \
+        .rules(
+            SegmentRuleBuilder().weight(0).clauses(make_clause_matching_context(context)).build()
+        ) \
+        .build()
+    assert _segment_matches_context(segment, context) is False
+
+def test_rollout_calculation_can_bucket_by_key():
+    context = Context.builder('userkey').name('Bob').build()
+    verify_rollout(context, context, 12551, 'test', 'salt', None, None)
+
+def test_rollout_uses_context_kind():
+    context1 = Context.create('key1', 'kind1')
+    context2 = Context.create('key2', 'kind2')
+    multi = Context.create_multi(context1, context2)
+    expected_bucket_value = int(100000 * _bucket_context(None, context2, 'kind2', 'test', 'salt', None))
+    verify_rollout(multi, context2, expected_bucket_value, 'test', 'salt', None, 'kind2')
 
 def test_matching_rule_with_multiple_clauses():
-    s = {
-        "key": "test",
-        "rules": [
-            {
-                "clauses": [
-                    {
-                        "attribute": "email",
-                        "op": "in",
-                        "values": [ "test@example.com" ]
-                    },
-                    {
-                        "attribute": "name",
-                        "op": "in",
-                        "values": [ "bob" ]
-                    }
-                ],
-                "weight": 100000
-            }
-        ]
-    }
-    u = { "key": "foo", "email": "test@example.com", "name": "bob" }
-    assert _segment_matches_user(s, u) is True
+    context = Context.builder('foo').name('bob').set('email', 'test@example.com').build()
+    segment = SegmentBuilder('test') \
+        .rules(
+            SegmentRuleBuilder().clauses(
+                make_clause(None, 'email', 'in', 'test@example.com'),
+                make_clause(None, 'name', 'in', 'bob')
+            ).build()
+        ) \
+        .build()
+    assert _segment_matches_context(segment, context) is True
 
 def test_non_matching_rule_with_multiple_clauses():
-    s = {
-        "key": "test",
-        "rules": [
-            {
-                "clauses": [
-                    {
-                        "attribute": "email",
-                        "op": "in",
-                        "values": [ "test@example.com" ]
-                    },
-                    {
-                        "attribute": "name",
-                        "op": "in",
-                        "values": [ "bill" ]
-                    }
-                ],
-                "weight": 100000
-            }
-        ]
-    }
-    u = { "key": "foo", "email": "test@example.com", "name": "bob" }
-    assert _segment_matches_user(s, u) is False
+    context = Context.builder('foo').name('bob').set('email', 'test@example.com').build()
+    segment = SegmentBuilder('test') \
+        .rules(
+            SegmentRuleBuilder().clauses(
+                make_clause(None, 'email', 'in', 'test@example.com'),
+                make_clause(None, 'name', 'in', 'bill')
+            ).build()
+        ) \
+        .build()
+    assert _segment_matches_context(segment, context) is False
+
+@pytest.mark.parametrize("depth", [1, 2, 3, 4])
+def test_segment_cycle_detection(depth: int):
+    segment_keys = list("segmentkey%d" % i for i in range(depth))
+    segments = []
+    for i in range(depth):
+        segments.append(
+            SegmentBuilder(segment_keys[i]) \
+                .rules(
+                    SegmentRuleBuilder().clauses(
                        make_clause_matching_segment_key(segment_keys[(i + 1) % depth])
+                    )
+                    .build()
+                )
+                .build())
+    evaluator_builder = EvaluatorBuilder()
+    for s in segments:
+        evaluator_builder.with_segment(s)
+    evaluator = evaluator_builder.build()
+    flag = make_boolean_flag_matching_segment(segments[0])
+    context = Context.create('x')
+    result = evaluator.evaluate(flag, context, event_factory)
+    assert result.detail.value is None
+    assert result.detail.reason == {'kind': 'ERROR', 'errorKind': 'MALFORMED_FLAG'}
diff --git a/testing/impl/test_evaluator_target.py b/testing/impl/test_evaluator_target.py
new file mode 100644
index 00000000..c31f7420
--- /dev/null
+++ b/testing/impl/test_evaluator_target.py
@@ -0,0 +1,80 @@
+from ldclient.client import Context
+from testing.builders import *
+from testing.impl.evaluator_util import *
+
+
+FALLTHROUGH_VAR = 0
+MATCH_VAR_1 = 1
+MATCH_VAR_2 = 2
+VARIATIONS = ['fallthrough', 'match1', 'match2']
+
+def assert_match_clause(clause: dict, context: Context, should_match: bool):
+    assert_match(basic_evaluator, make_boolean_flag_with_clauses(clause), context, should_match)
+
+def base_flag_builder() -> FlagBuilder:
+    return FlagBuilder('feature').on(True).variations(*VARIATIONS) \
+        .fallthrough_variation(FALLTHROUGH_VAR).off_variation(FALLTHROUGH_VAR)
+
+def expect_match(flag: FeatureFlag, context: Context, variation: int):
+    result = basic_evaluator.evaluate(flag, context, event_factory)
+    assert result.detail.variation_index == variation
+    assert result.detail.value == VARIATIONS[variation]
+    assert result.detail.reason == {'kind': 'TARGET_MATCH'}
+
+def expect_fallthrough(flag: FeatureFlag, context: Context):
+    result = basic_evaluator.evaluate(flag, context, event_factory)
+    assert result.detail.variation_index == FALLTHROUGH_VAR
+    assert result.detail.value == VARIATIONS[FALLTHROUGH_VAR]
+    assert result.detail.reason == {'kind': 'FALLTHROUGH'}
+
+
+class TestEvaluatorTarget:
+    def test_user_targets_only(self):
+        flag = base_flag_builder() \
+            .target(MATCH_VAR_1, 'c') \
+            .target(MATCH_VAR_2, 'b', 'a') \
+            .build()
+
+        expect_match(flag, Context.create('a'), MATCH_VAR_2)
+        expect_match(flag, Context.create('b'), MATCH_VAR_2)
+        expect_match(flag, Context.create('c'), MATCH_VAR_1)
+        expect_fallthrough(flag, Context.create('z'))
+
+        # in a multi-kind context, these targets match only the key for the user kind
+        expect_match(flag,
+                     Context.create_multi(Context.create('b', 'dog'), Context.create('a')),
+                     MATCH_VAR_2)
+        expect_match(flag,
+                     Context.create_multi(Context.create('a', 'dog'), Context.create('c')),
+                     MATCH_VAR_1)
+        expect_fallthrough(flag,
+                           Context.create_multi(Context.create('b', 'dog'), Context.create('z')))
+        expect_fallthrough(flag,
+                           Context.create_multi(Context.create('a', 'dog'), Context.create('b', 'cat')))
+
+    def test_user_targets_and_context_targets(self):
+        flag = base_flag_builder() \
+            .target(MATCH_VAR_1, 'c') \
+            .target(MATCH_VAR_2, 'b', 'a') \
+            .context_target('dog', MATCH_VAR_1, 'a', 'b') \
+            .context_target('dog', MATCH_VAR_2, 'c') \
+            .context_target(Context.DEFAULT_KIND, MATCH_VAR_1) \
+            .context_target(Context.DEFAULT_KIND, MATCH_VAR_2) \
+            .build()
+
+        expect_match(flag, Context.create('a'), MATCH_VAR_2)
+        expect_match(flag, Context.create('b'), MATCH_VAR_2)
+        expect_match(flag, Context.create('c'), MATCH_VAR_1)
+        expect_fallthrough(flag, Context.create('z'))
+
+        expect_match(flag,
+                     Context.create_multi(Context.create('b', 'dog'), Context.create('a')),
+                     MATCH_VAR_1)  # the "dog" target takes precedence due to ordering
+        expect_match(flag,
+                     Context.create_multi(Context.create('z', 'dog'), Context.create('a')),
+                     MATCH_VAR_2)  # "dog" targets don't match, continue to "user" targets
+        expect_fallthrough(flag,
+                           Context.create_multi(Context.create('x', 'dog'), Context.create('z')))  # nothing matches
+        expect_match(flag,
+                     Context.create_multi(Context.create('a', 'dog'), Context.create('b', 'cat')),
+                     MATCH_VAR_1)
diff --git a/testing/test_lru_cache.py b/testing/impl/test_lru_cache.py
similarity index 94%
rename from testing/test_lru_cache.py
rename to testing/impl/test_lru_cache.py
index a8de97fd..2faba036 100644
--- a/testing/test_lru_cache.py
+++ b/testing/impl/test_lru_cache.py
@@ -1,6 +1,4 @@
-import pytest
-
-from ldclient.lru_cache import SimpleLRUCache
+from ldclient.impl.lru_cache import SimpleLRUCache
 
 def test_retains_values_up_to_capacity():
     lru = SimpleLRUCache(3)
diff --git a/testing/impl/test_model_decode.py b/testing/impl/test_model_decode.py
new file mode 100644
index 00000000..a5e0d44a
--- /dev/null
+++ b/testing/impl/test_model_decode.py
@@ -0,0 +1,47 @@
+import pytest
+import re
+from semver import VersionInfo
+
+from ldclient.impl.model import *
+
+from testing.builders import *
+
+
+def test_flag_targets_are_stored_as_sets():
+    flag = FlagBuilder("key") \
+        .target(0, "a", "b") \
+        .context_target("kind1", 0, "c", "d") \
+        .build()
+    assert flag.targets[0].values == {"a", "b"}
+    assert flag.context_targets[0].values == {"c", "d"}
+
+def test_segment_targets_are_stored_as_sets():
+    segment = SegmentBuilder("key") \
+        .included("a", "b") \
+        .excluded("c", "d") \
+        .included_contexts("kind1", "e", "f") \
+        .excluded_contexts("kind2", "g", "h") \
+        .build()
+    assert segment.included == {"a", "b"}
+    assert segment.excluded == {"c", "d"}
+    assert segment.included_contexts[0].values == {"e", "f"}
+    assert segment.excluded_contexts[0].values == {"g", "h"}
+
+def test_clause_values_preprocessed_with_regex_operator():
+    pattern_str = "^[a-z]*$"
+    pattern = re.compile(pattern_str)
+    flag = make_boolean_flag_with_clauses(make_clause(None, "attr", "matches", pattern_str, "?", True))
+    assert flag.rules[0].clauses[0]._values == [pattern_str, "?", True]
+    assert list(x.as_regex for x in flag.rules[0].clauses[0]._values_preprocessed) == [pattern, None, None]
+
+@pytest.mark.parametrize('op', ['semVerEqual', 'semVerGreaterThan', 'semVerLessThan'])
+def test_clause_values_preprocessed_with_semver_operator(op):
+    flag = make_boolean_flag_with_clauses(make_clause(None, "attr", op, "1.2.3", 1, True))
+    assert flag.rules[0].clauses[0]._values == ["1.2.3", 1, True]
+    assert list(x.as_semver for x in flag.rules[0].clauses[0]._values_preprocessed) == [VersionInfo(1, 2, 3), None, None]
+
+@pytest.mark.parametrize('op', ['before', 'after'])
+def test_clause_values_preprocessed_with_time_operator(op):
+    flag = make_boolean_flag_with_clauses(make_clause(None, "attr", op, 1000, "1970-01-01T00:00:02Z", True))
+    assert flag.rules[0].clauses[0]._values == [1000, "1970-01-01T00:00:02Z", True]
+    assert list(x.as_time for x in flag.rules[0].clauses[0]._values_preprocessed) == [1000, 2000, None]
diff --git a/testing/impl/test_model_encoder.py b/testing/impl/test_model_encoder.py
new file mode 100644
index 00000000..c1d164f5
--- /dev/null
+++ b/testing/impl/test_model_encoder.py
@@ -0,0 +1,17 @@
+from ldclient.impl.model import *
+
+import json
+
+
+class MyTestEntity(ModelEntity):
+    def __init__(self, value):
+        self._value = value
+
+    def to_json_dict(self) -> dict:
+        return {'magicValue': self._value}
+
+
+def test_model_encoder():
+    data = [MyTestEntity(1), MyTestEntity('x')]
+    output = ModelEncoder().encode(data)
+    assert output == '[{"magicValue":1},{"magicValue":"x"}]'
diff --git a/testing/test_operators.py b/testing/impl/test_operators.py
similarity index 88%
rename from testing/test_operators.py
rename to testing/impl/test_operators.py
index bfd72162..4ed56e79 100644
--- a/testing/test_operators.py
+++ b/testing/impl/test_operators.py
@@ -1,9 +1,11 @@
 import pytest
 
-from ldclient import operators
+from ldclient.impl import operators
+from testing.builders import *
 
-@pytest.mark.parametrize("op,value1,value2,expected", [
+
+@pytest.mark.parametrize("op,context_value,clause_value,expected", [
     # numeric comparisons
     [ "in", 99, 99, True ],
     [ "in", 99.0001, 99.0001, True ],
@@ -86,5 +88,9 @@
     [ "semVerLessThan", "2.0.1", "xbad%ver", False ],
     [ "semVerGreaterThan", "2.0.1", "xbad%ver", False ]
 ])
-def test_operator(op, value1, value2, expected):
-    assert operators.ops.get(op)(value1, value2) == expected
+
+def test_operator(op, context_value, clause_value, expected):
+    flag = make_boolean_flag_with_clauses(make_clause(None, 'attr', op, clause_value))
+    preprocessed = flag.rules[0].clauses[0].values_preprocessed
+    result = operators.ops.get(op)(context_value, clause_value, None if preprocessed is None else preprocessed[0])
+    assert result == expected
diff --git a/testing/test_retry_delay.py b/testing/impl/test_retry_delay.py
similarity index 100%
rename from testing/test_retry_delay.py
rename to testing/impl/test_retry_delay.py
diff --git a/testing/integrations/persistent_feature_store_test_base.py b/testing/integrations/persistent_feature_store_test_base.py
index be473e3d..bb02bda0 100644
--- a/testing/integrations/persistent_feature_store_test_base.py
+++ b/testing/integrations/persistent_feature_store_test_base.py
@@ -91,15 +91,15 @@ def test_stores_with_different_prefixes_are_independent(self):
         store_b.upsert(FEATURES, flag_b2)
 
         item = store_a.get(FEATURES, 'flagA1', lambda x: x)
-        assert item == flag_a1
+        assert item == FEATURES.decode(flag_a1)
         item = store_a.get(FEATURES, 'flagB1', lambda x: x)
         assert item is None
         items = store_a.all(FEATURES, lambda x: x)
-        assert items == { 'flagA1': flag_a1, 'flagA2': flag_a2 }
+        assert items == { 'flagA1': FEATURES.decode(flag_a1), 'flagA2': FEATURES.decode(flag_a2) }
 
         item = store_b.get(FEATURES, 'flagB1', lambda x: x)
-        assert item == flag_b1
+        assert item == FEATURES.decode(flag_b1)
         item = store_b.get(FEATURES, 'flagA1', lambda x: x)
         assert item is None
         items = store_b.all(FEATURES, lambda x: x)
-        assert items == { 'flagB1': flag_b1, 'flagB2': flag_b2 }
+        assert items == { 'flagB1': FEATURES.decode(flag_b1), 'flagB2': FEATURES.decode(flag_b2) }
diff --git a/testing/integrations/test_test_data_source.py b/testing/integrations/test_test_data_source.py
index 47f0d025..38708b8e 100644
--- a/testing/integrations/test_test_data_source.py
+++ b/testing/integrations/test_test_data_source.py
@@ -1,12 +1,12 @@
 import pytest
-import warnings
+from typing import Callable
 
 from ldclient.client import LDClient
 from ldclient.config import Config
 from ldclient.feature_store import InMemoryFeatureStore
-from ldclient.versioned_data_kind import FEATURES, SEGMENTS
+from ldclient.versioned_data_kind import FEATURES
 
-from ldclient.integrations.test_data import TestData
+from ldclient.integrations.test_data import TestData, FlagBuilder
 
 ## Test Data + Data Source
 
@@ -20,15 +20,264 @@ def test_makes_valid_datasource():
     assert
store.all(FEATURES, lambda x: x) == {} -def test_makes_valid_datasource_with_flag(): - td = TestData.data_source() - flag = td.flag(key='test-flag') - assert flag is not None +def verify_flag_builder(desc: str, expected_props: dict, builder_actions: Callable[[FlagBuilder], FlagBuilder]): + all_expected_props = { + 'key': 'test-flag', + 'version': 1, + 'on': True, + 'prerequisites': [], + 'targets': [], + 'contextTargets': [], + 'rules': [], + 'salt': '', + 'variations': [True, False], + 'offVariation': 1, + 'fallthrough': {'variation': 0} + } + all_expected_props.update(expected_props) - builtFlag = flag._build(0) - assert builtFlag['key'] is 'test-flag' - assert builtFlag['on'] is True - assert builtFlag['variations'] == [True, False] + td = TestData.data_source() + flag_builder = builder_actions(td.flag(key='test-flag')) + built_flag = flag_builder._build(1) + assert built_flag == all_expected_props, "did not get expected flag properties for '%s' test" % desc + + +@pytest.mark.parametrize('expected_props,builder_actions', [ + pytest.param( + {}, + lambda f: f, + id='defaults' + ), + pytest.param( + {}, + lambda f: f.boolean_flag(), + id='changing default flag to boolean flag has no effect' + ), + pytest.param( + {}, + lambda f: f.variations('a', 'b').boolean_flag(), + id='non-boolean flag can be changed to boolean flag', + ), + pytest.param( + {'on': False}, + lambda f: f.on(False), + id='flag can be turned off' + ), + pytest.param( + {}, + lambda f: f.on(False).on(True), + id='flag can be turned on', + ), + pytest.param( + {'fallthrough': {'variation': 1}}, + lambda f: f.variation_for_all(False), + id='set false variation for all' + ), + pytest.param( + {'fallthrough': {'variation': 0}}, + lambda f: f.variation_for_all(True), + id='set true variation for all' + ), + pytest.param( + {'variations': ['a', 'b', 'c'], 'fallthrough': {'variation': 2}}, + lambda f: f.variations('a', 'b', 'c').variation_for_all(2), + id='set variation index for all' + ), + pytest.param( 
+ {'offVariation': 0}, + lambda f: f.off_variation(True), + id='set off variation boolean' + ), + pytest.param( + {'variations': ['a', 'b', 'c'], 'offVariation': 2}, + lambda f: f.variations('a', 'b', 'c').off_variation(2), + id='set off variation index' + ), + pytest.param( + { + 'targets': [ + {'variation': 0, 'values': ['key1', 'key2']}, + ], + 'contextTargets': [ + {'contextKind': 'user', 'variation': 0, 'values': []}, + {'contextKind': 'kind1', 'variation': 0, 'values': ['key3', 'key4']}, + {'contextKind': 'kind1', 'variation': 1, 'values': ['key5', 'key6']}, + ] + }, + lambda f: f.variation_for_key('user', 'key1', True) \ + .variation_for_key('user', 'key2', True) \ + .variation_for_key('kind1', 'key3', True) \ + .variation_for_key('kind1', 'key5', False) \ + .variation_for_key('kind1', 'key4', True) \ + .variation_for_key('kind1', 'key6', False), + id='set context targets as boolean' + ), + pytest.param( + { + 'variations': ['a', 'b'], + 'targets': [ + {'variation': 0, 'values': ['key1', 'key2']}, + ], + 'contextTargets': [ + {'contextKind': 'user', 'variation': 0, 'values': []}, + {'contextKind': 'kind1', 'variation': 0, 'values': ['key3', 'key4']}, + {'contextKind': 'kind1', 'variation': 1, 'values': ['key5', 'key6']}, + ] + }, + lambda f: f.variations('a', 'b') \ + .variation_for_key('user', 'key1', 0) \ + .variation_for_key('user', 'key2', 0) \ + .variation_for_key('kind1', 'key3', 0) \ + .variation_for_key('kind1', 'key5', 1) \ + .variation_for_key('kind1', 'key4', 0) \ + .variation_for_key('kind1', 'key6', 1), + id='set context targets as variation index' + ), + pytest.param( + { + 'contextTargets': [ + {'contextKind': 'kind1', 'variation': 0, 'values': ['key1', 'key2']}, + {'contextKind': 'kind1', 'variation': 1, 'values': ['key3']} + ] + }, + lambda f: f.variation_for_key('kind1', 'key1', 0) \ + .variation_for_key('kind1', 'key2', 1) \ + .variation_for_key('kind1', 'key3', 1) \ + .variation_for_key('kind1', 'key2', 0), + id='replace existing context 
target key' + ), + pytest.param( + { + 'variations': ['a', 'b'], + 'contextTargets': [ + {'contextKind': 'kind1', 'variation': 1, 'values': ['key1']}, + ] + }, + lambda f: f.variations('a', 'b') \ + .variation_for_key('kind1', 'key1', 1) \ + .variation_for_key('kind1', 'key2', 3), + id='ignore target for nonexistent variation' + ), + pytest.param( + { + 'targets': [ + {'variation': 0, 'values': ['key1']} + ], + 'contextTargets': [ + {'contextKind': 'user', 'variation': 0, 'values': []} + ] + }, + lambda f: f.variation_for_user('key1', True), + id='variation_for_user is shortcut for variation_for_key' + ), + pytest.param( + {}, + lambda f: f.variation_for_key('kind1', 'key1', 0) \ + .clear_targets(), + id='clear targets' + ), + pytest.param( + { + 'rules': [ + { + 'variation': 1, + 'id': 'rule0', + 'clauses': [ + {'contextKind': 'kind1', 'attribute': 'attr1', 'op': 'in', 'values': ['a', 'b'], 'negate': False} + ] + } + ] + }, + lambda f: f.if_match_context('kind1', 'attr1', 'a', 'b').then_return(1), + id='if_match_context' + ), + pytest.param( + { + 'rules': [ + { + 'variation': 1, + 'id': 'rule0', + 'clauses': [ + {'contextKind': 'kind1', 'attribute': 'attr1', 'op': 'in', 'values': ['a', 'b'], 'negate': True} + ] + } + ] + }, + lambda f: f.if_not_match_context('kind1', 'attr1', 'a', 'b').then_return(1), + id='if_not_match_context' + ), + pytest.param( + { + 'rules': [ + { + 'variation': 1, + 'id': 'rule0', + 'clauses': [ + {'contextKind': 'user', 'attribute': 'attr1', 'op': 'in', 'values': ['a', 'b'], 'negate': False} + ] + } + ] + }, + lambda f: f.if_match('attr1', 'a', 'b').then_return(1), + id='if_match is shortcut for if_match_context' + ), + pytest.param( + { + 'rules': [ + { + 'variation': 1, + 'id': 'rule0', + 'clauses': [ + {'contextKind': 'user', 'attribute': 'attr1', 'op': 'in', 'values': ['a', 'b'], 'negate': True} + ] + } + ] + }, + lambda f: f.if_not_match('attr1', 'a', 'b').then_return(1), + id='if_not_match is shortcut for if_not_match_context' + ), 
+ pytest.param( + { + 'rules': [ + { + 'variation': 1, + 'id': 'rule0', + 'clauses': [ + {'contextKind': 'kind1', 'attribute': 'attr1', 'op': 'in', 'values': ['a', 'b'], 'negate': False}, + {'contextKind': 'kind1', 'attribute': 'attr2', 'op': 'in', 'values': ['c', 'd'], 'negate': False} + ] + } + ] + }, + lambda f: f.if_match_context('kind1', 'attr1', 'a', 'b') \ + .and_match_context('kind1', 'attr2', 'c', 'd').then_return(1), + id='and_match_context' + ), + pytest.param( + { + 'rules': [ + { + 'variation': 1, + 'id': 'rule0', + 'clauses': [ + {'contextKind': 'kind1', 'attribute': 'attr1', 'op': 'in', 'values': ['a', 'b'], 'negate': False}, + {'contextKind': 'kind1', 'attribute': 'attr2', 'op': 'in', 'values': ['c', 'd'], 'negate': True} + ] + } + ] + }, + lambda f: f.if_match_context('kind1', 'attr1', 'a', 'b') \ + .and_not_match_context('kind1', 'attr2', 'c', 'd').then_return(1), + id='and_not_match_context' + ), + pytest.param( + {}, + lambda f: f.if_match_context('kind1', 'attr1', 'a').then_return(1).clear_rules(), + id='clear rules' + ) +]) +def test_flag_configs_parameterized(expected_props: dict, builder_actions: Callable[[FlagBuilder], FlagBuilder]): + verify_flag_builder('x', expected_props, builder_actions) def test_can_retrieve_flag_from_store(): @@ -39,7 +288,7 @@ def test_can_retrieve_flag_from_store(): client = LDClient(config=Config('SDK_KEY', update_processor_class = td, send_events = False, offline = True, feature_store = store)) - assert store.get(FEATURES, 'some-flag') == td.flag('some-flag')._build(1) + assert store.get(FEATURES, 'some-flag') == FEATURES.decode(td.flag('some-flag')._build(1)) client.close() @@ -52,7 +301,7 @@ def test_updates_to_flags_are_reflected_in_store(): td.update(td.flag('some-flag')) - assert store.get(FEATURES, 'some-flag') == td.flag('some-flag')._build(1) + assert store.get(FEATURES, 'some-flag') == FEATURES.decode(td.flag('some-flag')._build(1)) client.close() @@ -71,7 +320,9 @@ def 
test_updates_after_client_close_have_no_affect(): def test_can_handle_multiple_clients(): td = TestData.data_source() - td.update(td.flag('flag')) + flag_builder = td.flag('flag') + built_flag = flag_builder._build(1) + td.update(flag_builder) store = InMemoryFeatureStore() store2 = InMemoryFeatureStore() @@ -82,223 +333,23 @@ def test_can_handle_multiple_clients(): config2 = Config('SDK_KEY', update_processor_class = td, send_events = False, offline = True, feature_store = store2) client2 = LDClient(config=config2) - assert store.get(FEATURES, 'flag') == { - 'fallthrough': { - 'variation': 0, - }, - 'key': 'flag', - 'offVariation': 1, - 'on': True, - 'rules': [], - 'targets': [], - 'variations': [True, False], - 'version': 1 - } - - assert store2.get(FEATURES, 'flag') == { - 'fallthrough': { - 'variation': 0, - }, - 'key': 'flag', - 'offVariation': 1, - 'on': True, - 'rules': [], - 'targets': [], - 'variations': [True, False], - 'version': 1 - } - - td.update(td.flag('flag').variation_for_all_users(False)) - - assert store.get(FEATURES, 'flag') == { - 'fallthrough': { - 'variation': 1, - }, - 'key': 'flag', - 'offVariation': 1, - 'on': True, - 'rules': [], - 'targets': [], - 'variations': [True, False], - 'version': 2 - } - - assert store2.get(FEATURES, 'flag') == { - 'fallthrough': { - 'variation': 1, - }, - 'key': 'flag', - 'offVariation': 1, - 'on': True, - 'rules': [], - 'targets': [], - 'variations': [True, False], - 'version': 2 - } - - client.close() - client2.close() - - -## FlagBuilder - -def test_flagbuilder_defaults_to_boolean_flag(): - td = TestData.data_source() - flag = td.flag('empty-flag') - assert flag._build(0)['variations'] == [True, False] - assert flag._build(0)['fallthrough'] == {'variation': 0} - assert flag._build(0)['offVariation'] == 1 - -def test_flagbuilder_can_turn_flag_off(): - td = TestData.data_source() - flag = td.flag('test-flag') - flag.on(False) + assert store.get(FEATURES, 'flag') == FEATURES.decode(built_flag) - assert 
flag._build(0)['on'] is False + assert store2.get(FEATURES, 'flag') == FEATURES.decode(built_flag) -def test_flagbuilder_can_set_fallthrough_variation(): - td = TestData.data_source() - flag = td.flag('test-flag') - flag.fallthrough_variation(2) + flag_builder_v2 = td.flag('flag').variation_for_all_users(False) + td.update(flag_builder_v2) + built_flag_v2 = flag_builder_v2._build(2) - assert flag._build(0)['fallthrough'] == {'variation': 2} + assert store.get(FEATURES, 'flag') == FEATURES.decode(built_flag_v2) - flag.fallthrough_variation(True) + assert store2.get(FEATURES, 'flag') == FEATURES.decode(built_flag_v2) - assert flag._build(0)['fallthrough'] == {'variation': 0} - -def test_flagbuilder_can_set_off_variation(): - td = TestData.data_source() - flag = td.flag('test-flag') - flag.off_variation(2) - - assert flag._build(0)['offVariation'] == 2 - - flag.off_variation(True) - - assert flag._build(0)['offVariation'] == 0 - -def test_flagbuilder_can_make_boolean_flag(): - td = TestData.data_source() - flag = td.flag('boolean-flag').boolean_flag() - - builtFlag = flag._build(0) - assert builtFlag['fallthrough'] == {'variation': 0} - assert builtFlag['offVariation'] == 1 - -def test_flagbuilder_can_set_variation_when_targeting_is_off(): - td = TestData.data_source() - flag = td.flag('test-flag') \ - .on(False) - assert flag._build(0)['on'] == False - assert flag._build(0)['variations'] == [True,False] - flag.variations('dog', 'cat') - assert flag._build(0)['variations'] == ['dog','cat'] - -def test_flagbuilder_can_set_variation_for_all_users(): - td = TestData.data_source() - flag = td.flag('test-flag') - flag.variation_for_all_users(True) - assert flag._build(0)['fallthrough'] == {'variation': 0} - -def test_flagbuilder_clears_existing_rules_and_targets_when_setting_variation_for_all_users(): - td = TestData.data_source() - - flag = td.flag('test-flag').if_match('name', 'christian').then_return(False).variation_for_user('christian', 
False).variation_for_all_users(True)._build(0) - - assert flag['rules'] == [] - assert flag['targets'] == [] - -def test_flagbuilder_can_set_variations(): - td = TestData.data_source() - flag = td.flag('test-flag') - flag.variations(2,3,4,5) - assert flag._build(0)['variations'] == [2,3,4,5] - -def test_flagbuilder_can_make_an_immutable_copy(): - td = TestData.data_source() - flag = td.flag('test-flag') - flag.variations(1,2) - copy_of_flag = flag._copy() - flag.variations(3,4) - assert copy_of_flag._build(0)['variations'] == [1,2] - - copy_of_flag.variations(5,6) - assert flag._build(0)['variations'] == [3,4] - -def test_flagbuilder_can_set_boolean_variation_for_user(): - td = TestData.data_source() - flag = td.flag('user-variation-flag') - flag.variation_for_user('christian', False) - expected_targets = [ - { - 'variation': 1, - 'values': ['christian'] - } - ] - assert flag._build(0)['targets'] == expected_targets - -def test_flagbuilder_can_set_numerical_variation_for_user(): - td = TestData.data_source() - flag = td.flag('user-variation-flag') - flag.variations('a','b','c') - flag.variation_for_user('christian', 2) - expected_targets = [ - { - 'variation': 2, - 'values': ['christian'] - } - ] - assert flag._build(1)['targets'] == expected_targets - -def test_flagbuilder_can_set_value_for_all_users(): - td = TestData.data_source() - flag = td.flag('user-value-flag') - flag.variation_for_user('john', 1) - - built_flag = flag._build(0) - assert built_flag['targets'] == [{'values': ['john'], 'variation': 1}] - assert built_flag['variations'] == [True, False] - - flag.value_for_all_users('yes') - - built_flag2 = flag._build(0) - assert built_flag2['targets'] == [] - assert built_flag2['variations'] == ['yes'] - - -def test_flagbuilder_can_build(): - td = TestData.data_source() - flag = td.flag('some-flag') - flag.if_match('country', 'fr').then_return(True) - expected_result = { - 'fallthrough': { - 'variation': 0, - }, - 'key': 'some-flag', - 'offVariation': 1, - 
'on': True, - 'targets': [], - 'variations': [True, False], - 'rules': [ - { - 'clauses': [ - {'attribute': 'country', - 'negate': False, - 'op': 'in', - 'values': ['fr'] - } - ], - 'id': 'rule0', - 'variation': 0 - } - ], - 'version': 1, - } + client.close() + client2.close() - assert flag._build(1) == expected_result -def test_flag_can_evaluate_rules(): +def test_flag_evaluation_with_client(): td = TestData.data_source() store = InMemoryFeatureStore() diff --git a/testing/stub_util.py b/testing/stub_util.py index 5a7e99ad..834f0e71 100644 --- a/testing/stub_util.py +++ b/testing/stub_util.py @@ -1,14 +1,19 @@ from email.utils import formatdate import json -from testing.http_util import ChunkedResponse, JsonResponse +from ldclient.impl.model import ModelEntity from ldclient.interfaces import EventProcessor, FeatureRequester, FeatureStore, UpdateProcessor +from testing.http_util import ChunkedResponse, JsonResponse + + +def item_as_json(item): + return item.to_json_dict() if isinstance(item, ModelEntity) else item def make_items_map(items = []): ret = {} for item in items: - ret[item['key']] = item + ret[item['key']] = item_as_json(item) return ret def make_put_event(flags = [], segments = []): @@ -17,7 +22,7 @@ def make_put_event(flags = [], segments = []): def make_patch_event(kind, item): path = '%s%s' % (kind.stream_api_path, item['key']) - data = { "path": path, "data": item } + data = { "path": path, "data": item_as_json(item) } return 'event:patch\ndata: %s\n\n' % json.dumps(data) def make_delete_event(kind, key, version): diff --git a/testing/test_context.py b/testing/test_context.py new file mode 100644 index 00000000..065ce41a --- /dev/null +++ b/testing/test_context.py @@ -0,0 +1,301 @@ +from ldclient.context import Context + +import json +import pytest + + +def assert_context_valid(c): + assert c.valid is True + assert c.error is None + +def assert_context_invalid(c): + assert c.valid is False + assert c.error is not None + + +class TestContext: + def 
test_create_default_kind(self): + c = Context.create('a') + assert_context_valid(c) + assert c.multiple is False + assert c.key == 'a' + assert c.kind == 'user' + assert c.name is None + assert c.anonymous is False + assert list(c.custom_attributes) == [] + + def test_create_non_default_kind(self): + c = Context.create('a', 'b') + assert_context_valid(c) + assert c.multiple is False + assert c.key == 'a' + assert c.kind == 'b' + assert c.name is None + assert c.anonymous is False + assert list(c.custom_attributes) == [] + + def test_builder_default_kind(self): + c = Context.builder('a').build() + assert_context_valid(c) + assert c.multiple is False + assert c.key == 'a' + assert c.kind == 'user' + assert c.name is None + assert c.anonymous is False + assert list(c.custom_attributes) == [] + + def test_builder_non_default_kind(self): + c = Context.builder('a').kind('b').build() + assert_context_valid(c) + assert c.multiple is False + assert c.key == 'a' + assert c.kind == 'b' + assert c.name is None + assert c.anonymous is False + assert list(c.custom_attributes) == [] + + def test_name(self): + c = Context.builder('a').name('b').build() + assert_context_valid(c) + assert c.key == 'a' + assert c.name == 'b' + assert list(c.custom_attributes) == [] + + def test_anonymous(self): + c = Context.builder('a').anonymous(True).build() + assert_context_valid(c) + assert c.key == 'a' + assert c.anonymous + assert list(c.custom_attributes) == [] + + def test_custom_attributes(self): + c = Context.builder('a').set('b', True).set('c', 'd').build() + assert_context_valid(c) + assert c.key == 'a' + assert c.get('b') is True + assert c.get('c') == 'd' + assert c['b'] is True + assert c['c'] == 'd' + assert sorted(list(c.custom_attributes)) == ['b', 'c'] + + def test_set_built_in_attribute_by_name(self): + c = Context.builder('').set('key', 'a').set('kind', 'b').set('name', 'c').set('anonymous', True).build() + assert_context_valid(c) + assert c.key == 'a' + assert c.kind == 'b' + 
assert c.name == 'c' + assert c.anonymous + + def test_set_built_in_attribute_by_name_type_checking(self): + b = Context.builder('a').kind('b').name('c').anonymous(True) + + assert b.try_set('key', None) is False + assert b.try_set('key', 3) is False + assert b.build().key == 'a' + + assert b.try_set('kind', None) is False + assert b.try_set('kind', 3) is False + assert b.build().kind == 'b' + + assert b.try_set('name', 3) is False + assert b.build().name == 'c' + + assert b.try_set('anonymous', None) is False + assert b.try_set('anonymous', 3) is False + assert b.build().anonymous is True + + def test_get_built_in_attribute_by_name(self): + c = Context.builder('a').kind('b').name('c').anonymous(True).build() + assert c.get('key') == 'a' + assert c.get('kind') == 'b' + assert c.get('name') == 'c' + assert c.get('anonymous') is True + + def test_get_unknown_attribute(self): + c = Context.create('a') + assert c.get('b') is None + + def test_private_attributes(self): + assert list(Context.create('a').private_attributes) == [] + + c = Context.builder('a').private('b', '/c/d').private('e').build() + assert list(c.private_attributes) == ['b', '/c/d', 'e'] + + def test_fully_qualified_key(self): + assert Context.create('key1').fully_qualified_key == 'key1' + assert Context.create('key1', 'kind1').fully_qualified_key == 'kind1:key1' + assert Context.create('key%with:things', 'kind1').fully_qualified_key == 'kind1:key%25with%3Athings' + + def test_builder_from_context(self): + c1 = Context.builder('a').kind('kind1').name('b').set('c', True).private('d').build() + b = Context.builder_from_context(c1) + assert b.build() == c1 + b.set('c', False) + c2 = b.build() + assert c2 != c1 + assert c1.get('c') is True + assert c2.get('c') is False + + def test_equality(self): + def _assert_contexts_from_factory_equal(fn): + c1, c2 = fn(), fn() + assert c1 == c2 + _assert_contexts_from_factory_equal(lambda: Context.create('a')) + _assert_contexts_from_factory_equal(lambda: 
Context.create('a', 'kind1')) + _assert_contexts_from_factory_equal(lambda: Context.builder('a').name('b').build()) + _assert_contexts_from_factory_equal(lambda: Context.builder('a').anonymous(True).build()) + _assert_contexts_from_factory_equal(lambda: Context.builder('a').set('b', True).set('c', 3).build()) + assert Context.builder('a').set('b', True).set('c', 3).build() == \ + Context.builder('a').set('c', 3).set('b', True).build() # order doesn't matter + + assert Context.create('a', 'kind1') != Context.create('b', 'kind1') + assert Context.create('a', 'kind1') != Context.create('a', 'kind2') + assert Context.builder('a').name('b').build() != Context.builder('a').name('c').build() + assert Context.builder('a').anonymous(True).build() != Context.builder('a').build() + assert Context.builder('a').set('b', True).build() != Context.builder('a').set('b', False).build() + assert Context.builder('a').set('b', True).build() != \ + Context.builder('a').set('b', True).set('c', False).build() + + _assert_contexts_from_factory_equal(lambda: \ + Context.create_multi(Context.create('a', 'kind1'), Context.create('b', 'kind2'))) + assert Context.create_multi(Context.create('a', 'kind1'), Context.create('b', 'kind2')) == \ + Context.create_multi(Context.create('b', 'kind2'), Context.create('a', 'kind1')) # order doesn't matter + + assert Context.create_multi(Context.create('a', 'kind1'), Context.create('b', 'kind2')) != \ + Context.create_multi(Context.create('a', 'kind1'), Context.create('c', 'kind2')) + assert Context.create_multi(Context.create('a', 'kind1'), Context.create('b', 'kind2'), Context.create('c', 'kind3')) != \ + Context.create_multi(Context.create('a', 'kind1'), Context.create('b', 'kind2')) + assert Context.create_multi(Context.create('a', 'kind1'), Context.create('b', 'kind2')) != \ + Context.create('a', 'kind1') + + _assert_contexts_from_factory_equal(lambda: Context.create('invalid', 'kind')) + assert Context.create('invalid', 'kind') != 
Context.create_multi() # different errors + + def test_json_encoding(self): + assert Context.create('a', 'kind1').to_dict() == {'kind': 'kind1', 'key': 'a'} + assert Context.builder('a').kind('kind1').name('b').build().to_dict() == \ + {'kind': 'kind1', 'key': 'a', 'name': 'b'} + assert Context.builder('a').kind('kind1').anonymous(True).build().to_dict() == \ + {'kind': 'kind1', 'key': 'a', 'anonymous': True} + assert Context.builder('a').kind('kind1').set('b', True).set('c', 3).build().to_dict() == \ + {'kind': 'kind1', 'key': 'a', 'b': True, 'c': 3} + assert Context.builder('a').kind('kind1').private('b').build().to_dict() == \ + {'kind': 'kind1', 'key': 'a', '_meta': {'privateAttributes': ['b']}} + + assert Context.create_multi(Context.create('key1', 'kind1'), Context.create('key2', 'kind2')).to_dict() == \ + {'kind': 'multi', 'kind1': {'key': 'key1'}, 'kind2': {'key': 'key2'}} + + assert json.loads(Context.create('a', 'kind1').to_json_string()) == {'kind': 'kind1', 'key': 'a'} + + def test_json_decoding(self): + assert Context.from_dict({'kind': 'kind1', 'key': 'key1'}) == Context.create('key1', 'kind1') + assert Context.from_dict({'kind': 'kind1', 'key': 'key1', 'name': 'a'}) == \ + Context.builder('key1').kind('kind1').name('a').build() + assert Context.from_dict({'kind': 'kind1', 'key': 'key1', 'anonymous': True}) == \ + Context.builder('key1').kind('kind1').anonymous(True).build() + assert Context.from_dict({'kind': 'kind1', 'key': 'key1', '_meta': {'privateAttributes': ['b']}}) == \ + Context.builder('key1').kind('kind1').private('b').build() + + assert Context.from_dict({'kind': 'multi', 'kind1': {'key': 'key1'}, 'kind2': {'key': 'key2'}}) == \ + Context.create_multi(Context.create('key1', 'kind1'), Context.create('key2', 'kind2')) + + assert_context_invalid(Context.from_dict({'kind': 'kind1'})) + assert_context_invalid(Context.from_dict({'kind': 'kind1', 'key': 3})) + assert_context_invalid(Context.from_dict({'kind': 'multi'})) + 
assert_context_invalid(Context.from_dict({'kind': 'multi', 'kind1': 'x'})) + + def test_json_decoding_old_user(self): + assert Context.from_dict({'key': 'key1'}) == Context.create('key1', 'user') + assert Context.from_dict({'key': 'key1', 'name': 'b'}) == Context.builder('key1').name('b').build() + assert Context.from_dict({'key': 'key1', 'custom': {'b': True}}) == \ + Context.builder('key1').set('b', True).build() + + assert_context_valid(Context.from_dict({'key': ''})) + assert_context_invalid(Context.from_dict({})) + assert_context_invalid(Context.from_dict({'key': None})) + assert_context_invalid(Context.from_dict({'key': 3})) + assert_context_invalid(Context.from_dict({'key': 'a', 'name': 3})) + + +class TestContextMulti: + def test_create_multi(self): + c1 = Context.create('a', 'kind1') + c2 = Context.create('b', 'kind2') + mc = Context.create_multi(c1, c2) + + assert mc.valid + assert mc.multiple + assert mc.kind == 'multi' + assert mc.key == '' + assert mc.name is None + assert mc.anonymous is False + assert mc.individual_context_count == 2 + assert mc.get_individual_context(0) is c1 + assert mc.get_individual_context(1) is c2 + assert mc.get_individual_context(-1) is None + assert mc.get_individual_context(2) is None + + def test_create_multi_flattens_nested_multi_context(self): + c1 = Context.create('a', 'kind1') + c2 = Context.create('b', 'kind2') + c3 = Context.create('c', 'kind3') + c2plus3 = Context.create_multi(c2, c3) + mc = Context.create_multi(c1, c2plus3) + assert mc == Context.create_multi(c1, c2, c3) + + def test_multi_builder(self): + c1 = Context.create('a', 'kind1') + c2 = Context.create('b', 'kind2') + mc = Context.multi_builder().add(c1).add(c2).build() + assert mc == Context.create_multi(c1, c2) + + def test_multi_builder_flattens_nested_multi_context(self): + c1 = Context.create('a', 'kind1') + c2 = Context.create('b', 'kind2') + c3 = Context.create('c', 'kind3') + c2plus3 = Context.create_multi(c2, c3) + mc = 
Context.multi_builder().add(c1).add(c2plus3).build() + assert mc == Context.create_multi(c1, c2, c3) + + def test_multi_fully_qualified_key(self): + c1 = Context.create('a', 'kind1') + c2 = Context.create('b', 'kind2') + mc = Context.create_multi(c2, c1) # deliberately in reverse order of kind - they should come out sorted + assert mc.fully_qualified_key == 'kind1:a:kind2:b' + + +class TestContextErrors: + def test_key_empty_string(self): + assert_context_invalid(Context.create('')) + assert_context_invalid(Context.builder('').build()) + + @pytest.mark.parametrize('kind', ['kind', 'multi', 'b$c', '']) + def test_kind_invalid_strings(self, kind): + assert_context_invalid(Context.create('a', kind)) + assert_context_invalid(Context.builder('a').kind(kind).build()) + + def test_create_multi_with_no_contexts(self): + assert_context_invalid(Context.create_multi()) + + def test_multi_builder_with_no_contexts(self): + assert_context_invalid(Context.multi_builder().build()) + + def test_create_multi_with_duplicate_kind(self): + c1 = Context.create('a', 'kind1') + c2 = Context.create('b', 'kind1') + assert_context_invalid(Context.create_multi(c1, c2)) + + def test_multi_builder_with_duplicate_kind(self): + c1 = Context.create('a', 'kind1') + c2 = Context.create('b', 'kind1') + assert_context_invalid(Context.multi_builder().add(c1).add(c2).build()) + + def test_create_multi_with_invalid_context(self): + c1 = Context.create('a', 'kind1') + c2 = Context.create('') + assert_context_invalid(Context.create_multi(c1, c2)) + + def test_multi_builder_with_invalid_context(self): + c1 = Context.create('a', 'kind1') + c2 = Context.create('') + assert_context_invalid(Context.multi_builder().add(c1).add(c2).build()) diff --git a/testing/test_event_processor.py b/testing/test_event_processor.py deleted file mode 100644 index 363d980e..00000000 --- a/testing/test_event_processor.py +++ /dev/null @@ -1,696 +0,0 @@ -import json -import pytest -from threading import Thread -import time -import 
uuid - -from ldclient.config import Config, HTTPConfig -from ldclient.diagnostics import create_diagnostic_id, _DiagnosticAccumulator -from ldclient.event_processor import DefaultEventProcessor -from ldclient.util import log -from testing.http_util import start_server, BasicResponse -from testing.proxy_test_util import do_proxy_tests -from testing.stub_util import MockResponse, MockHttp - - -default_config = Config("fake_sdk_key") -user = { - 'key': 'userkey', - 'name': 'Red' -} -filtered_user = { - 'key': 'userkey', - 'privateAttrs': [ 'name' ] -} -numeric_user = { - 'key': 1, - 'secondary': 2, - 'ip': 3, - 'country': 4, - 'email': 5, - 'firstName': 6, - 'lastName': 7, - 'avatar': 8, - 'name': 9, - 'anonymous': False, - 'custom': { - 'age': 99 - } -} -stringified_numeric_user = { - 'key': '1', - 'secondary': '2', - 'ip': '3', - 'country': '4', - 'email': '5', - 'firstName': '6', - 'lastName': '7', - 'avatar': '8', - 'name': '9', - 'anonymous': False, - 'custom': { - 'age': 99 - } -} - -ep = None -mock_http = None - - -def setup_function(): - global mock_http - mock_http = MockHttp() - -def teardown_function(): - if ep is not None: - ep.stop() - -class DefaultTestProcessor(DefaultEventProcessor): - def __init__(self, **kwargs): - if not 'diagnostic_opt_out' in kwargs: - kwargs['diagnostic_opt_out'] = True - if not 'sdk_key' in kwargs: - kwargs['sdk_key'] = 'SDK_KEY' - config = Config(**kwargs) - diagnostic_accumulator = _DiagnosticAccumulator(create_diagnostic_id(config)) - DefaultEventProcessor.__init__(self, config, mock_http, diagnostic_accumulator = diagnostic_accumulator) - -def test_identify_event_is_queued(): - with DefaultTestProcessor() as ep: - e = { 'kind': 'identify', 'user': user } - ep.send_event(e) - - output = flush_and_get_events(ep) - assert len(output) == 1 - assert output == [{ - 'kind': 'identify', - 'creationDate': e['creationDate'], - 'key': user['key'], - 'user': user - }] - -def test_user_is_filtered_in_identify_event(): - with 
DefaultTestProcessor(all_attributes_private = True) as ep:
-        e = { 'kind': 'identify', 'user': user }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 1
-        assert output == [{
-            'kind': 'identify',
-            'creationDate': e['creationDate'],
-            'key': user['key'],
-            'user': filtered_user
-        }]
-
-def test_user_attrs_are_stringified_in_identify_event():
-    with DefaultTestProcessor() as ep:
-        e = { 'kind': 'identify', 'user': numeric_user }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 1
-        assert output == [{
-            'kind': 'identify',
-            'creationDate': e['creationDate'],
-            'key': stringified_numeric_user['key'],
-            'user': stringified_numeric_user
-        }]
-
-def test_individual_feature_event_is_queued_with_index_event():
-    with DefaultTestProcessor() as ep:
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default', 'trackEvents': True
-        }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 3
-        check_index_event(output[0], e, user)
-        check_feature_event(output[1], e, False, None, None)
-        check_summary_event(output[2])
-
-def test_user_is_filtered_in_index_event():
-    with DefaultTestProcessor(all_attributes_private = True) as ep:
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default', 'trackEvents': True,
-            'prereqOf': 'prereqFlagKey'
-        }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 3
-        check_index_event(output[0], e, filtered_user)
-        check_feature_event(output[1], e, False, None, 'prereqFlagKey')
-        check_summary_event(output[2])
-
-def test_user_attrs_are_stringified_in_index_event():
-    with DefaultTestProcessor() as ep:
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': numeric_user,
-            'variation': 1, 'value': 'value', 'default': 'default', 'trackEvents': True
-        }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 3
-        check_index_event(output[0], e, stringified_numeric_user)
-        check_feature_event(output[1], e, False, None, None)
-        check_summary_event(output[2])
-
-def test_feature_event_can_contain_inline_user():
-    with DefaultTestProcessor(inline_users_in_events = True) as ep:
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default', 'trackEvents': True
-        }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 2
-        check_feature_event(output[0], e, False, user, None)
-        check_summary_event(output[1])
-
-def test_user_is_filtered_in_feature_event():
-    with DefaultTestProcessor(inline_users_in_events = True, all_attributes_private = True) as ep:
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default', 'trackEvents': True
-        }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 2
-        check_feature_event(output[0], e, False, filtered_user, None)
-        check_summary_event(output[1])
-
-def test_user_attrs_are_stringified_in_feature_event():
-    with DefaultTestProcessor(inline_users_in_events = True) as ep:
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': numeric_user,
-            'variation': 1, 'value': 'value', 'default': 'default', 'trackEvents': True
-        }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 2
-        check_feature_event(output[0], e, False, stringified_numeric_user, None)
-        check_summary_event(output[1])
-
-def test_index_event_is_still_generated_if_inline_users_is_true_but_feature_event_is_not_tracked():
-    with DefaultTestProcessor(inline_users_in_events = True) as ep:
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default', 'trackEvents': False
-        }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 2
-        check_index_event(output[0], e, user)
-        check_summary_event(output[1])
-
-def test_two_events_for_same_user_only_produce_one_index_event():
-    with DefaultTestProcessor(user_keys_flush_interval = 300) as ep:
-        e0 = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default', 'trackEvents': True
-        }
-        e1 = e0.copy()
-        ep.send_event(e0)
-        ep.send_event(e1)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 4
-        check_index_event(output[0], e0, user)
-        check_feature_event(output[1], e0, False, None, None)
-        check_feature_event(output[2], e1, False, None, None)
-        check_summary_event(output[3])
-
-def test_new_index_event_is_added_if_user_cache_has_been_cleared():
-    with DefaultTestProcessor(user_keys_flush_interval = 0.1) as ep:
-        e0 = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default', 'trackEvents': True
-        }
-        e1 = e0.copy()
-        ep.send_event(e0)
-        time.sleep(0.2)
-        ep.send_event(e1)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 5
-        check_index_event(output[0], e0, user)
-        check_feature_event(output[1], e0, False, None, None)
-        check_index_event(output[2], e1, user)
-        check_feature_event(output[3], e1, False, None, None)
-        check_summary_event(output[4])
-
-def test_event_kind_is_debug_if_flag_is_temporarily_in_debug_mode():
-    with DefaultTestProcessor() as ep:
-        future_time = now() + 100000
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default',
-            'trackEvents': False, 'debugEventsUntilDate': future_time
-        }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 3
-        check_index_event(output[0], e, user)
-        check_feature_event(output[1], e, True, user, None)
-        check_summary_event(output[2])
-
-def test_event_can_be_both_tracked_and_debugged():
-    with DefaultTestProcessor() as ep:
-        future_time = now() + 100000
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default',
-            'trackEvents': True, 'debugEventsUntilDate': future_time
-        }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 4
-        check_index_event(output[0], e, user)
-        check_feature_event(output[1], e, False, None, None)
-        check_feature_event(output[2], e, True, user, None)
-        check_summary_event(output[3])
-
-def test_debug_mode_does_not_expire_if_both_client_time_and_server_time_are_before_expiration_time():
-    with DefaultTestProcessor() as ep:
-        # Pick a server time that slightly different from client time
-        server_time = now() + 1000
-
-        # Send and flush an event we don't care about, just to set the last server time
-        mock_http.set_server_time(server_time)
-        ep.send_event({ 'kind': 'identify', 'user': { 'key': 'otherUser' }})
-        flush_and_get_events(ep)
-
-        # Now send an event with debug mode on, with a "debug until" time that is further in
-        # the future than both the client time and the server time
-        debug_until = server_time + 10000
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default',
-            'trackEvents': False, 'debugEventsUntilDate': debug_until
-        }
-        ep.send_event(e)
-
-        # Should get a summary event only, not a full feature event
-        output = flush_and_get_events(ep)
-        assert len(output) == 3
-        check_index_event(output[0], e, user)
-        check_feature_event(output[1], e, True, user, None)  # debug event
-        check_summary_event(output[2])
-
-def test_debug_mode_expires_based_on_client_time_if_client_time_is_later_than_server_time():
-    with DefaultTestProcessor() as ep:
-        # Pick a server time that is somewhat behind the client time
-        server_time = now() - 20000
-
-        # Send and flush an event we don't care about, just to set the last server time
-        mock_http.set_server_time(server_time)
-        ep.send_event({ 'kind': 'identify', 'user': { 'key': 'otherUser' }})
-        flush_and_get_events(ep)
-
-        # Now send an event with debug mode on, with a "debug until" time that is further in
-        # the future than the server time, but in the past compared to the client.
-        debug_until = server_time + 1000
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default',
-            'trackEvents': False, 'debugEventsUntilDate': debug_until
-        }
-        ep.send_event(e)
-
-        # Should get a summary event only, not a full feature event
-        output = flush_and_get_events(ep)
-        assert len(output) == 2
-        check_index_event(output[0], e, user)
-        check_summary_event(output[1])
-
-def test_debug_mode_expires_based_on_server_time_if_server_time_is_later_than_client_time():
-    with DefaultTestProcessor() as ep:
-        # Pick a server time that is somewhat ahead of the client time
-        server_time = now() + 20000
-
-        # Send and flush an event we don't care about, just to set the last server time
-        mock_http.set_server_time(server_time)
-        ep.send_event({ 'kind': 'identify', 'user': { 'key': 'otherUser' }})
-        flush_and_get_events(ep)
-
-        # Now send an event with debug mode on, with a "debug until" time that is further in
-        # the future than the client time, but in the past compared to the server.
-        debug_until = server_time - 1000
-        e = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default',
-            'trackEvents': False, 'debugEventsUntilDate': debug_until
-        }
-        ep.send_event(e)
-
-        # Should get a summary event only, not a full feature event
-        output = flush_and_get_events(ep)
-        assert len(output) == 2
-        check_index_event(output[0], e, user)
-        check_summary_event(output[1])
-
-def test_two_feature_events_for_same_user_generate_only_one_index_event():
-    with DefaultTestProcessor() as ep:
-        e1 = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value1', 'default': 'default', 'trackEvents': False
-        }
-        e2 = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 2, 'value': 'value2', 'default': 'default', 'trackEvents': False
-        }
-        ep.send_event(e1)
-        ep.send_event(e2)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 2
-        check_index_event(output[0], e1, user)
-        check_summary_event(output[1])
-
-def test_nontracked_events_are_summarized():
-    with DefaultTestProcessor() as ep:
-        e1 = {
-            'kind': 'feature', 'key': 'flagkey1', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value1', 'default': 'default1', 'trackEvents': False
-        }
-        e2 = {
-            'kind': 'feature', 'key': 'flagkey2', 'version': 22, 'user': user,
-            'variation': 2, 'value': 'value2', 'default': 'default2', 'trackEvents': False
-        }
-        ep.send_event(e1)
-        ep.send_event(e2)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 2
-        check_index_event(output[0], e1, user)
-        se = output[1]
-        assert se['kind'] == 'summary'
-        assert se['startDate'] == e1['creationDate']
-        assert se['endDate'] == e2['creationDate']
-        assert se['features'] == {
-            'flagkey1': {
-                'default': 'default1',
-                'counters': [ { 'version': 11, 'variation': 1, 'value': 'value1', 'count': 1 } ]
-            },
-            'flagkey2': {
-                'default': 'default2',
-                'counters': [ { 'version': 22, 'variation': 2, 'value': 'value2', 'count': 1 } ]
-            }
-        }
-
-def test_custom_event_is_queued_with_user():
-    with DefaultTestProcessor() as ep:
-        e = { 'kind': 'custom', 'key': 'eventkey', 'user': user, 'data': { 'thing': 'stuff '}, 'metricValue': 1.5 }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 2
-        check_index_event(output[0], e, user)
-        check_custom_event(output[1], e, None)
-
-def test_custom_event_can_contain_inline_user():
-    with DefaultTestProcessor(inline_users_in_events = True) as ep:
-        e = { 'kind': 'custom', 'key': 'eventkey', 'user': user, 'data': { 'thing': 'stuff '} }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 1
-        check_custom_event(output[0], e, user)
-
-def test_user_is_filtered_in_custom_event():
-    with DefaultTestProcessor(inline_users_in_events = True, all_attributes_private = True) as ep:
-        e = { 'kind': 'custom', 'key': 'eventkey', 'user': user, 'data': { 'thing': 'stuff '} }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 1
-        check_custom_event(output[0], e, filtered_user)
-
-def test_user_attrs_are_stringified_in_custom_event():
-    with DefaultTestProcessor(inline_users_in_events = True) as ep:
-        e = { 'kind': 'custom', 'key': 'eventkey', 'user': numeric_user, 'data': { 'thing': 'stuff '} }
-        ep.send_event(e)
-
-        output = flush_and_get_events(ep)
-        assert len(output) == 1
-        check_custom_event(output[0], e, stringified_numeric_user)
-
-def test_nothing_is_sent_if_there_are_no_events():
-    with DefaultTestProcessor() as ep:
-        ep.flush()
-        ep._wait_until_inactive()
-        assert mock_http.request_data is None
-
-def test_sdk_key_is_sent():
-    with DefaultTestProcessor(sdk_key = 'SDK_KEY') as ep:
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-
-        assert mock_http.request_headers.get('Authorization') == 'SDK_KEY'
-
-def test_wrapper_header_not_sent_when_not_set():
-    with DefaultTestProcessor() as ep:
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-
-        assert mock_http.request_headers.get('X-LaunchDarkly-Wrapper') is None
-
-def test_wrapper_header_sent_when_set():
-    with DefaultTestProcessor(wrapper_name = "Flask", wrapper_version = "0.0.1") as ep:
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-
-        assert mock_http.request_headers.get('X-LaunchDarkly-Wrapper') == "Flask/0.0.1"
-
-def test_wrapper_header_sent_without_version():
-    with DefaultTestProcessor(wrapper_name = "Flask") as ep:
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-
-        assert mock_http.request_headers.get('X-LaunchDarkly-Wrapper') == "Flask"
-
-def test_event_schema_set_on_event_send():
-    with DefaultTestProcessor() as ep:
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-
-        assert mock_http.request_headers.get('X-LaunchDarkly-Event-Schema') == "3"
-
-def test_sdk_key_is_sent_on_diagnostic_request():
-    with DefaultTestProcessor(sdk_key = 'SDK_KEY', diagnostic_opt_out=False) as ep:
-        ep._wait_until_inactive()
-        assert mock_http.request_headers.get('Authorization') == 'SDK_KEY'
-
-def test_event_schema_not_set_on_diagnostic_send():
-    with DefaultTestProcessor(diagnostic_opt_out=False) as ep:
-        ep._wait_until_inactive()
-        assert mock_http.request_headers.get('X-LaunchDarkly-Event-Schema') is None
-
-def test_init_diagnostic_event_sent():
-    with DefaultTestProcessor(diagnostic_opt_out=False) as ep:
-        diag_init = flush_and_get_events(ep)
-        # Fields are tested in test_diagnostics.py
-        assert len(diag_init) == 6
-        assert diag_init['kind'] == 'diagnostic-init'
-
-def test_periodic_diagnostic_includes_events_in_batch():
-    with DefaultTestProcessor(diagnostic_opt_out=False) as ep:
-        # Ignore init event
-        flush_and_get_events(ep)
-        # Send a payload with a single event
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        flush_and_get_events(ep)
-
-        ep._send_diagnostic()
-        diag_event = flush_and_get_events(ep)
-        assert len(diag_event) == 8
-        assert diag_event['kind'] == 'diagnostic'
-        assert diag_event['eventsInLastBatch'] == 1
-        assert diag_event['deduplicatedUsers'] == 0
-
-def test_periodic_diagnostic_includes_deduplicated_users():
-    with DefaultTestProcessor(diagnostic_opt_out=False) as ep:
-        # Ignore init event
-        flush_and_get_events(ep)
-        # Send two eval events with the same user to cause a user deduplication
-        e0 = {
-            'kind': 'feature', 'key': 'flagkey', 'version': 11, 'user': user,
-            'variation': 1, 'value': 'value', 'default': 'default', 'trackEvents': True
-        }
-        e1 = e0.copy();
-        ep.send_event(e0)
-        ep.send_event(e1)
-        flush_and_get_events(ep)
-
-        ep._send_diagnostic()
-        diag_event = flush_and_get_events(ep)
-        assert len(diag_event) == 8
-        assert diag_event['kind'] == 'diagnostic'
-        assert diag_event['eventsInLastBatch'] == 3
-        assert diag_event['deduplicatedUsers'] == 1
-
-def test_no_more_payloads_are_sent_after_401_error():
-    verify_unrecoverable_http_error(401)
-
-def test_no_more_payloads_are_sent_after_403_error():
-    verify_unrecoverable_http_error(403)
-
-def test_will_still_send_after_408_error():
-    verify_recoverable_http_error(408)
-
-def test_will_still_send_after_429_error():
-    verify_recoverable_http_error(429)
-
-def test_will_still_send_after_500_error():
-    verify_recoverable_http_error(500)
-
-def test_does_not_block_on_full_inbox():
-    config = Config("fake_sdk_key", events_max_pending=1)  # this sets the size of both the inbox and the outbox to 1
-    ep_inbox_holder = [ None ]
-    ep_inbox = None
-
-    def dispatcher_factory(inbox, config, http, diag):
-        ep_inbox_holder[0] = inbox  # it's an array because otherwise it's hard for a closure to modify a variable
-        return None  # the dispatcher object itself doesn't matter, we only manipulate the inbox
-    def event_consumer():
-        while True:
-            message = ep_inbox.get(block=True)
-            if message.type == 'stop':
-                message.param.set()
-                return
-    def start_consuming_events():
-        Thread(target=event_consumer).start()
-
-    with DefaultEventProcessor(config, mock_http, dispatcher_factory) as ep:
-        ep_inbox = ep_inbox_holder[0]
-        event1 = { 'kind': 'custom', 'key': 'event1', 'user': user }
-        event2 = { 'kind': 'custom', 'key': 'event2', 'user': user }
-        ep.send_event(event1)
-        ep.send_event(event2)  # this event should be dropped - inbox is full
-        message1 = ep_inbox.get(block=False)
-        had_no_more = ep_inbox.empty()
-        start_consuming_events()
-        assert message1.param == event1
-        assert had_no_more
-
-def test_http_proxy(monkeypatch):
-    def _event_processor_proxy_test(server, config, secure):
-        with DefaultEventProcessor(config) as ep:
-            ep.send_event({ 'kind': 'identify', 'user': user })
-            ep.flush()
-            ep._wait_until_inactive()
-    do_proxy_tests(_event_processor_proxy_test, 'POST', monkeypatch)
-
-def verify_unrecoverable_http_error(status):
-    with DefaultTestProcessor(sdk_key = 'SDK_KEY') as ep:
-        mock_http.set_response_status(status)
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-        mock_http.reset()
-
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-        assert mock_http.request_data is None
-
-def verify_recoverable_http_error(status):
-    with DefaultTestProcessor(sdk_key = 'SDK_KEY') as ep:
-        mock_http.set_response_status(status)
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-        mock_http.reset()
-
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-        assert mock_http.request_data is not None
-
-def test_event_payload_id_is_sent():
-    with DefaultEventProcessor(Config(sdk_key = 'SDK_KEY'), mock_http) as ep:
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-
-        headerVal = mock_http.request_headers.get('X-LaunchDarkly-Payload-ID')
-        assert headerVal is not None
-        # Throws on invalid UUID
-        uuid.UUID(headerVal)
-
-def test_event_payload_id_changes_between_requests():
-    with DefaultEventProcessor(Config(sdk_key = 'SDK_KEY'), mock_http) as ep:
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-
-        ep.send_event({ 'kind': 'identify', 'user': user })
-        ep.flush()
-        ep._wait_until_inactive()
-
-        firstPayloadId = mock_http.recorded_requests[0][0].get('X-LaunchDarkly-Payload-ID')
-        secondPayloadId = mock_http.recorded_requests[1][0].get('X-LaunchDarkly-Payload-ID')
-        assert firstPayloadId != secondPayloadId
-
-def flush_and_get_events(ep):
-    ep.flush()
-    ep._wait_until_inactive()
-    if mock_http.request_data is None:
-        raise AssertionError('Expected to get an HTTP request but did not get one')
-    else:
-        return json.loads(mock_http.request_data)
-
-def check_index_event(data, source, user):
-    assert data['kind'] == 'index'
-    assert data['creationDate'] == source['creationDate']
-    assert data['user'] == user
-
-def check_feature_event(data, source, debug, inline_user, prereq_of):
-    assert data['kind'] == ('debug' if debug else 'feature')
-    assert data['creationDate'] == source['creationDate']
-    assert data['key'] == source['key']
-    assert data.get('version') == source.get('version')
-    assert data.get('variation') == source.get('variation')
-    assert data.get('value') == source.get('value')
-    assert data.get('default') == source.get('default')
-    if inline_user is None:
-        assert data['userKey'] == str(source['user']['key'])
-    else:
-        assert data['user'] == inline_user
-    if prereq_of is None:
-        assert "prereqOf" not in data
-    else:
-        assert data['prereqOf'] == prereq_of
-
-def check_custom_event(data, source, inline_user):
-    assert data['kind'] == 'custom'
-    assert data['creationDate'] == source['creationDate']
-    assert data['key'] == source['key']
-    assert data['data'] == source['data']
-    if inline_user is None:
-        assert data['userKey'] == source['user']['key']
-    else:
-        assert data['user'] == inline_user
-    assert data.get('metricValue') == source.get('metricValue')
-
-def check_summary_event(data):
-    assert data['kind'] == 'summary'
-
-def now():
-    return int(time.time() * 1000)
diff --git a/testing/test_event_summarizer.py b/testing/test_event_summarizer.py
deleted file mode 100644
index ae411aaf..00000000
--- a/testing/test_event_summarizer.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import pytest
-
-from ldclient.event_summarizer import EventSummarizer
-
-
-user = { 'key': 'user1' }
-
-def test_summarize_event_does_nothing_for_identify_event():
-    es = EventSummarizer()
-    snapshot = es.snapshot()
-    es.summarize_event({ 'kind': 'identify', 'creationDate': 1000, 'user': user })
-
-    assert es.snapshot() == snapshot
-
-def test_summarize_event_does_nothing_for_custom_event():
-    es = EventSummarizer()
-    snapshot = es.snapshot()
-    es.summarize_event({ 'kind': 'custom', 'creationDate': 1000, 'key': 'eventkey', 'user': user })
-
-    assert es.snapshot() == snapshot
-
-def test_summarize_event_sets_start_and_end_dates():
-    es = EventSummarizer()
-    event1 = { 'kind': 'feature', 'creationDate': 2000, 'key': 'flag', 'user': user,
-        'version': 1, 'variation': 0, 'value': '', 'default': None }
-    event2 = { 'kind': 'feature', 'creationDate': 1000, 'key': 'flag', 'user': user,
-        'version': 1, 'variation': 0, 'value': '', 'default': None }
-    event3 = { 'kind': 'feature', 'creationDate': 1500, 'key': 'flag', 'user': user,
-        'version': 1, 'variation': 0, 'value': '', 'default': None }
-    es.summarize_event(event1)
-    es.summarize_event(event2)
-    es.summarize_event(event3)
-    data = es.snapshot()
-
-    assert data.start_date == 1000
-    assert data.end_date == 2000
-
-def test_summarize_event_increments_counters():
-    es = EventSummarizer()
-    event1 = { 'kind': 'feature', 'creationDate': 1000, 'key': 'flag1', 'user': user,
-        'version': 11, 'variation': 1, 'value': 'value1', 'default': 'default1' }
-    event2 = { 'kind': 'feature', 'creationDate': 1000, 'key': 'flag1', 'user': user,
-        'version': 11, 'variation': 2, 'value': 'value2', 'default': 'default1' }
-    event3 = { 'kind': 'feature', 'creationDate': 1000, 'key': 'flag2', 'user': user,
-        'version': 22, 'variation': 1, 'value': 'value99', 'default': 'default2' }
-    event4 = { 'kind': 'feature', 'creationDate': 1000, 'key': 'flag1', 'user': user,
-        'version': 11, 'variation': 1, 'value': 'value1', 'default': 'default1' }
-    event5 = { 'kind': 'feature', 'creationDate': 1000, 'key': 'badkey', 'user': user,
-        'version': None, 'variation': None, 'value': 'default3', 'default': 'default3' }
-    es.summarize_event(event1)
-    es.summarize_event(event2)
-    es.summarize_event(event3)
-    es.summarize_event(event4)
-    es.summarize_event(event5)
-    data = es.snapshot()
-
-    expected = {
-        ('flag1', 1, 11): { 'count': 2, 'value': 'value1', 'default': 'default1' },
-        ('flag1', 2, 11): { 'count': 1, 'value': 'value2', 'default': 'default1' },
-        ('flag2', 1, 22): { 'count': 1, 'value': 'value99', 'default': 'default2' },
-        ('badkey', None, None): { 'count': 1, 'value': 'default3', 'default': 'default3' }
-    }
-    assert data.counters == expected
diff --git a/testing/test_flags_state.py b/testing/test_flags_state.py
index 1acdbaf8..5a9f43b2 100644
--- a/testing/test_flags_state.py
+++ b/testing/test_flags_state.py
@@ -1,7 +1,7 @@
 import pytest
 import json
 import jsonpickle
-from ldclient.flags_state import FeatureFlagsState
+from ldclient.evaluation import FeatureFlagsState
 
 def test_can_get_flag_value():
     state = FeatureFlagsState(True)
diff --git a/testing/test_ldclient.py b/testing/test_ldclient.py
index e9a19c9a..506773b1 100644
--- a/testing/test_ldclient.py
+++ b/testing/test_ldclient.py
@@ -1,22 +1,20 @@
-from ldclient.client import LDClient, Config
-from ldclient.event_processor import DefaultEventProcessor
+from ldclient.client import LDClient, Config, Context
 from ldclient.feature_store import InMemoryFeatureStore
-from ldclient.impl.stubs import NullEventProcessor, NullUpdateProcessor
+from ldclient.impl.datasource.polling import PollingUpdateProcessor
+from ldclient.impl.datasource.streaming import StreamingUpdateProcessor
+from ldclient.impl.stubs import NullUpdateProcessor
 from ldclient.interfaces import UpdateProcessor
-from ldclient.polling import PollingUpdateProcessor
-from ldclient.streaming import StreamingUpdateProcessor
 from ldclient.versioned_data_kind import FEATURES, SEGMENTS
-import logging
 import pytest
+from testing.builders import *
 from testing.stub_util import CapturingFeatureStore, MockEventProcessor, MockUpdateProcessor
-from testing.sync_util import wait_until
-import queue
 
 unreachable_uri="http://fake"
+context = Context.builder('xyz').set('bizzle', 'def').build()
 user = {
     u'key': u'xyz',
     u'custom': {
@@ -55,17 +53,6 @@ def make_ldd_client():
         stream_uri=unreachable_uri))
 
 
-def make_off_flag_with_value(key, value):
-    return {
-        u'key': key,
-        u'version': 100,
-        u'salt': u'',
-        u'on': False,
-        u'variations': [value],
-        u'offVariation': 0
-    }
-
-
 def get_first_event(c):
     e = c._event_processor._events.pop(0)
     c._event_processor._events = []
@@ -78,23 +65,6 @@ def count_events(c):
     return n
 
 
-def test_client_has_null_event_processor_if_offline():
-    with make_offline_client() as client:
-        assert isinstance(client._event_processor, NullEventProcessor)
-
-
-def test_client_has_null_event_processor_if_send_events_off():
-    config = Config(sdk_key="secret", base_uri=unreachable_uri,
-        update_processor_class = MockUpdateProcessor, send_events=False)
-    with LDClient(config=config) as client:
-        assert isinstance(client._event_processor, NullEventProcessor)
-
-
-def test_client_has_normal_event_processor_in_ldd_mode():
-    with make_ldd_client() as client:
-        assert isinstance(client._event_processor, DefaultEventProcessor)
-
-
 def test_client_has_null_update_processor_in_offline_mode():
     with make_offline_client() as client:
         assert isinstance(client._update_processor, NullUpdateProcessor)
@@ -124,84 +94,6 @@ def test_toggle_offline():
     assert client.variation('feature.key', user, default=None) is None
 
 
-def test_identify():
-    with make_client() as client:
-        client.identify(user)
-        e = get_first_event(client)
-        assert e['kind'] == 'identify' and e['key'] == u'xyz' and e['user'] == user
-
-
-def test_identify_no_user():
-    with make_client() as client:
-        client.identify(None)
-        assert count_events(client) == 0
-
-
-def test_identify_no_user_key():
-    with make_client() as client:
-        client.identify({ 'name': 'nokey' })
-        assert count_events(client) == 0
-
-
-def test_identify_blank_user_key():
-    with make_client() as client:
-        client.identify({ 'key': '' })
-        assert count_events(client) == 0
-
-
-def test_track():
-    with make_client() as client:
-        client.track('my_event', user)
-        e = get_first_event(client)
-        assert e['kind'] == 'custom' and e['key'] == 'my_event' and e['user'] == user and e.get('data') is None and e.get('metricValue') is None
-
-
-def test_track_with_data():
-    with make_client() as client:
-        client.track('my_event', user, 42)
-        e = get_first_event(client)
-        assert e['kind'] == 'custom' and e['key'] == 'my_event' and e['user'] == user and e['data'] == 42 and e.get('metricValue') is None
-
-
-def test_track_with_metric_value():
-    with make_client() as client:
-        client.track('my_event', user, 42, 1.5)
-        e = get_first_event(client)
-        assert e['kind'] == 'custom' and e['key'] == 'my_event' and e['user'] == user and e['data'] == 42 and e.get('metricValue') == 1.5
-
-
-def test_track_no_user():
-    with make_client() as client:
-        client.track('my_event', None)
-        assert count_events(client) == 0
-
-
-def test_track_no_user_key():
-    with make_client() as client:
-        client.track('my_event', { 'name': 'nokey' })
-        assert count_events(client) == 0
-
-
-def test_track_anonymous_user():
-    with make_client() as client:
-        client.track('my_event', anonymous_user)
-        e = get_first_event(client)
-        assert e['kind'] == 'custom' and e['key'] == 'my_event' and e['user'] == anonymous_user and e.get('data') is None and e.get('metricValue') is None and e.get('contextKind') == 'anonymousUser'
-
-
-def test_alias():
-    with make_client() as client:
-        client.alias(user, anonymous_user)
-        e = get_first_event(client)
-        assert e['kind'] == 'alias' and e['key'] == 'xyz' and e['contextKind'] == 'user' and e['previousKey'] == 'abc' and e['previousContextKind'] == 'anonymousUser'
-
-
-def test_alias_no_user():
-    with make_client() as client:
-        client.alias(None, None)
-        assert count_events(client) == 0
-
-
 def test_defaults():
     config=Config("SDK_KEY", base_uri="http://localhost:3000", defaults={"foo": "bar"}, offline=True)
     with LDClient(config=config) as client:
@@ -218,8 +110,6 @@ def test_defaults_and_online():
                                 feature_store=InMemoryFeatureStore()))
     actual = my_client.variation('foo', user, default="originalDefault")
     assert actual == expected
-    e = get_first_event(my_client)
-    assert e['kind'] == 'feature' and e['key'] == u'foo' and e['user'] == user
 
 
 def test_defaults_and_online_no_default():
@@ -229,8 +119,6 @@ def test_defaults_and_online_no_default():
                                 event_processor_class=MockEventProcessor, update_processor_class=MockUpdateProcessor))
     assert "jim" == my_client.variation('baz', user, default="jim")
-    e = get_first_event(my_client)
-    assert e['kind'] == 'feature' and e['key'] == u'baz' and e['user'] == user
 
 
 def test_no_defaults():
@@ -238,256 +126,13 @@ def test_no_defaults():
     assert "bar" == client.variation('foo', user, default="bar")
 
 
-def test_event_for_existing_feature():
-    feature = make_off_flag_with_value('feature.key', 'value')
-    feature['trackEvents'] = True
-    feature['debugEventsUntilDate'] = 1000
-    store = InMemoryFeatureStore()
-    store.init({FEATURES: {'feature.key': feature}})
-    with make_client(store) as client:
-        assert 'value' == client.variation('feature.key', user, default='default')
-        e = get_first_event(client)
-        assert (e['kind'] == 'feature' and
-            e['key'] == 'feature.key' and
-            e['user'] == user and
-            e['version'] == feature['version'] and
-            e['value'] == 'value' and
-            e['variation'] == 0 and
-            e.get('reason') is None and
-            e['default'] == 'default' and
-            e['trackEvents'] == True and
-            e['debugEventsUntilDate'] == 1000 and
-            e.get('contextKind') is None)
-
-
-def test_event_for_existing_feature_anonymous_user():
-    feature = make_off_flag_with_value('feature.key', 'value')
-    feature['trackEvents'] = True
-    feature['debugEventsUntilDate'] = 1000
-    store = InMemoryFeatureStore()
-    store.init({FEATURES: {'feature.key': feature}})
-    with make_client(store) as client:
-        assert 'value' == client.variation('feature.key', anonymous_user, default='default')
-        e = get_first_event(client)
-        assert (e['kind'] == 'feature' and
-            e['key'] == 'feature.key' and
-            e['user'] == anonymous_user and
-            e['version'] == feature['version'] and
-            e['value'] == 'value' and
-            e['variation'] == 0 and
-            e.get('reason') is None and
-            e['default'] == 'default' and
-            e['trackEvents'] == True and
-            e['debugEventsUntilDate'] == 1000 and
-            e['contextKind'] == 'anonymousUser')
-
-
-def test_event_for_existing_feature_with_reason():
-    feature = make_off_flag_with_value('feature.key', 'value')
-    feature['trackEvents'] = True
-    feature['debugEventsUntilDate'] = 1000
-    store = InMemoryFeatureStore()
-    store.init({FEATURES: {'feature.key': feature}})
-    with make_client(store) as client:
-        assert 'value' == client.variation_detail('feature.key', user, default='default').value
-        e = get_first_event(client)
-        assert (e['kind'] == 'feature' and
-            e['key'] == 'feature.key' and
-            e['user'] == user and
-            e['version'] == feature['version'] and
-            e['value'] == 'value' and
-            e['variation'] == 0 and
-            e['reason'] == {'kind': 'OFF'} and
-            e['default'] == 'default' and
-            e['trackEvents'] == True and
-            e['debugEventsUntilDate'] == 1000)
-
-
-def test_event_for_existing_feature_with_tracked_rule():
-    feature = {
-        'key': 'feature.key',
-        'version': 100,
-        'salt': u'',
-        'on': True,
-        'rules': [
-            {
-                'clauses': [
-                    { 'attribute': 'key', 'op': 'in', 'values': [ user['key'] ] }
-                ],
-                'variation': 0,
-                'trackEvents': True,
-                'id': 'rule_id'
-            }
-        ],
-        'variations': [ 'value' ]
-    }
-    store = InMemoryFeatureStore()
-    store.init({FEATURES: {feature['key']: feature}})
-    client = make_client(store)
-    assert 'value' == client.variation(feature['key'], user, default='default')
-    e = get_first_event(client)
-    assert (e['kind'] == 'feature' and
-        e['key'] == feature['key'] and
-        e['user'] == user and
-        e['version'] == feature['version'] and
-        e['value'] == 'value' and
-        e['variation'] == 0 and
-        e['reason'] == { 'kind': 'RULE_MATCH', 'ruleIndex': 0, 'ruleId': 'rule_id' } and
-        e['default'] == 'default' and
-        e['trackEvents'] == True and
-        e.get('debugEventsUntilDate') is None)
-
-
-def test_event_for_existing_feature_with_untracked_rule():
-    feature = {
-        'key': 'feature.key',
-        'version': 100,
-        'salt': u'',
-        'on': True,
-        'rules': [
-            {
-                'clauses': [
-                    { 'attribute': 'key', 'op': 'in', 'values': [ user['key'] ] }
-                ],
-                'variation': 0,
-                'trackEvents': False,
-                'id': 'rule_id'
-            }
-        ],
-        'variations': [ 'value' ]
-    }
-    store = InMemoryFeatureStore()
-    store.init({FEATURES: {feature['key']: feature}})
-    client = make_client(store)
-    assert 'value' == client.variation(feature['key'], user, default='default')
-    e = get_first_event(client)
-    assert (e['kind'] == 'feature' and
-        e['key'] == feature['key'] and
-        e['user'] == user and
-        e['version'] == feature['version'] and
-        e['value'] == 'value' and
-        e['variation'] == 0 and
-        e.get('reason') is None and
-        e['default'] == 'default' and
-        e.get('trackEvents', False) == False and
-        e.get('debugEventsUntilDate') is None)
-
-
-def test_event_for_existing_feature_with_tracked_fallthrough():
-    feature = {
-        'key': 'feature.key',
-        'version': 100,
-        'salt': u'',
-        'on': True,
-        'rules': [],
-        'fallthrough': { 'variation': 0 },
-        'variations': [ 'value' ],
-        'trackEventsFallthrough': True
-    }
-    store = InMemoryFeatureStore()
-    store.init({FEATURES: {feature['key']: feature}})
-    client = make_client(store)
-    assert 'value' == client.variation(feature['key'], user, default='default')
-    e = get_first_event(client)
-    assert (e['kind'] == 'feature' and
-        e['key'] == feature['key'] and
-        e['user'] == user and
-        e['version'] == feature['version'] and
-        e['value'] == 'value' and
-        e['variation'] == 0 and
-        e['reason'] == { 'kind': 'FALLTHROUGH' } and
-        e['default'] == 'default' and
-        e['trackEvents'] == True and
-        e.get('debugEventsUntilDate') is None)
-
-
-def test_event_for_existing_feature_with_untracked_fallthrough():
-    feature = {
-        'key': 'feature.key',
-        'version': 100,
-        'salt': u'',
-        'on': True,
-        'rules': [],
-        'fallthrough': { 'variation': 0 },
-        'variations': [ 'value' ],
-        'trackEventsFallthrough': False
-    }
-    store = InMemoryFeatureStore()
-    store.init({FEATURES: {feature['key']: feature}})
-    client = make_client(store)
-    assert 'value' == client.variation(feature['key'], user, default='default')
-    e = get_first_event(client)
-    assert (e['kind'] == 'feature' and
-        e['key'] == feature['key'] and
-        e['user'] == user and
-        e['version'] == feature['version'] and
-        e['value'] == 'value' and
-        e['variation'] == 0 and
-        e.get('reason') is None and
-        e['default'] == 'default' and
-        e.get('trackEvents', False) == False and
-        e.get('debugEventsUntilDate') is None)
-
-
-def test_event_for_unknown_feature():
-    store = InMemoryFeatureStore()
-    store.init({FEATURES: {}})
-    with make_client(store) as client:
-        assert 'default' == client.variation('feature.key', user, default='default')
-        e = get_first_event(client)
-        assert (e['kind'] == 'feature' and
-            e['key'] == 'feature.key' and
-            e['user'] == user and
-            e['value'] == 'default' and
-            e.get('variation') is None and
-            e['default'] == 'default')
-
-
-def test_event_for_existing_feature_with_no_user():
-    feature = make_off_flag_with_value('feature.key', 'value')
-    feature['trackEvents'] = True
-    feature['debugEventsUntilDate'] = 1000
-    store = InMemoryFeatureStore()
-    store.init({FEATURES: {'feature.key': feature}})
-    with make_client(store) as client:
-        assert 'default' == client.variation('feature.key', None, default='default')
-        e = get_first_event(client)
-        assert (e['kind'] == 'feature' and
-            e['key'] == 'feature.key' and
-            e.get('user') is None and
-            e['version'] == feature['version'] and
-            e['value'] == 'default' and
-            e.get('variation') is None and
-            e['default'] == 'default' and
-            e['trackEvents'] == True and
-            e['debugEventsUntilDate'] == 1000)
-
-
-def test_event_for_existing_feature_with_no_user_key():
-    feature = make_off_flag_with_value('feature.key', 'value')
-    feature['trackEvents'] = True
-    feature['debugEventsUntilDate'] = 1000
-    store = InMemoryFeatureStore()
-    store.init({FEATURES: {'feature.key': feature}})
-    with make_client(store) as client:
-        bad_user = { u'name': u'Bob' }
-        assert 'default' == client.variation('feature.key', bad_user, default='default')
-        e = get_first_event(client)
-        assert (e['kind'] == 'feature' and
-            e['key'] == 'feature.key' and
-            e['user'] == bad_user and
-            e['version'] == feature['version'] and
-            e['value'] == 'default' and
-            e.get('variation') is None and
-            e['default'] == 'default' and
-            e['trackEvents'] == True and
-            e['debugEventsUntilDate'] == 1000)
-
-
 def test_secure_mode_hash():
-    user = {'key': 'Message'}
+    context_to_hash = Context.create('Message')
+    equivalent_user_to_hash = {'key': 'Message'}
+    expected_hash = "aa747c502a898200f9e4fa21bac68136f886a0e27aec70ba06daf2e2a5cb5597"
     with make_offline_client() as client:
-        assert client.secure_mode_hash(user) == "aa747c502a898200f9e4fa21bac68136f886a0e27aec70ba06daf2e2a5cb5597"
+        assert client.secure_mode_hash(context_to_hash) == expected_hash
+        assert client.secure_mode_hash(equivalent_user_to_hash) == expected_hash
 
 
 dependency_ordering_test_data = {
diff --git a/testing/test_ldclient_end_to_end.py b/testing/test_ldclient_end_to_end.py
index 7003805a..3f550d0f 100644
--- a/testing/test_ldclient_end_to_end.py
+++ b/testing/test_ldclient_end_to_end.py
@@ -102,12 +102,12 @@ def
test_client_sends_diagnostics(): data = json.loads(r.body) assert data['kind'] == 'diagnostic-init' -# The TLS tests are skipped in Python 3.3 because the embedded HTTPS server does not work correctly, causing +# The TLS tests are skipped in Python 3.7 because the embedded HTTPS server does not work correctly, causing # a TLS handshake failure on the client side. It's unclear whether this is a problem with the self-signed # certificate we are using or with some other server settings, but it does not appear to be a client-side -# problem. +# problem since we know that the SDK is able to connect to secure LD endpoints. -@pytest.mark.skipif(sys.version_info.major == 3 and sys.version_info.minor == 3, reason = "test is skipped in Python 3.3") +@pytest.mark.skipif(sys.version_info.major == 3 and sys.version_info.minor == 7, reason = "test is skipped in Python 3.7") def test_cannot_connect_with_selfsigned_cert_by_default(): with start_secure_server() as server: server.for_path('/sdk/latest-all', poll_content()) @@ -120,7 +120,7 @@ def test_cannot_connect_with_selfsigned_cert_by_default(): with LDClient(config = config, start_wait = 1.5) as client: assert not client.is_initialized() -@pytest.mark.skipif(sys.version_info.major == 3 and sys.version_info.minor == 3, reason = "test is skipped in Python 3.3") +@pytest.mark.skipif(sys.version_info.major == 3 and sys.version_info.minor == 7, reason = "test is skipped in Python 3.7") def test_can_connect_with_selfsigned_cert_if_ssl_verify_is_false(): with start_secure_server() as server: server.for_path('/sdk/latest-all', poll_content()) @@ -134,7 +134,7 @@ def test_can_connect_with_selfsigned_cert_if_ssl_verify_is_false(): with LDClient(config = config) as client: assert client.is_initialized() -@pytest.mark.skipif(sys.version_info.major == 3 and sys.version_info.minor == 3, reason = "test is skipped in Python 3.3") +@pytest.mark.skipif(sys.version_info.major == 3 and sys.version_info.minor == 7, reason = "test is skipped in 
Python 3.7") def test_can_connect_with_selfsigned_cert_if_disable_ssl_verification_is_true(): with start_secure_server() as server: server.for_path('/sdk/latest-all', poll_content()) @@ -148,7 +148,7 @@ def test_can_connect_with_selfsigned_cert_if_disable_ssl_verification_is_true(): with LDClient(config = config) as client: assert client.is_initialized() -@pytest.mark.skipif(sys.version_info.major == 3 and sys.version_info.minor == 3, reason = "test is skipped in Python 3.3") +@pytest.mark.skipif(sys.version_info.major == 3 and sys.version_info.minor == 7, reason = "test is skipped in Python 3.7") def test_can_connect_with_selfsigned_cert_by_setting_ca_certs(): with start_secure_server() as server: server.for_path('/sdk/latest-all', poll_content()) diff --git a/testing/test_ldclient_evaluation.py b/testing/test_ldclient_evaluation.py index faa3f5b6..ff3753fb 100644 --- a/testing/test_ldclient_evaluation.py +++ b/testing/test_ldclient_evaluation.py @@ -1,22 +1,19 @@ -import pytest -import json import time -from ldclient.client import LDClient, Config +from ldclient.client import LDClient, Config, Context from ldclient.config import BigSegmentsConfig -from ldclient.evaluation import BigSegmentsStatus +from ldclient.evaluation import BigSegmentsStatus, EvaluationDetail from ldclient.feature_store import InMemoryFeatureStore -from ldclient.flag import EvaluationDetail from ldclient.impl.big_segments import _hash_for_user_key from ldclient.impl.evaluator import _make_big_segment_ref from ldclient.interfaces import FeatureStore from ldclient.versioned_data_kind import FEATURES, SEGMENTS -from testing.impl.evaluator_util import make_boolean_flag_matching_segment + +from testing.builders import * from testing.mock_components import MockBigSegmentStore from testing.stub_util import MockEventProcessor, MockUpdateProcessor -from testing.test_ldclient import make_off_flag_with_value +from testing.test_ldclient import make_client, user -user = { 'key': 'userkey' } flag1 = { 
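As background for the `test_secure_mode_hash` change above: LaunchDarkly's secure mode hash is an HMAC-SHA256 of the context key, keyed with the SDK key and hex-encoded, which is why a `Context` and an equivalent user dict with the same key must hash identically. A minimal stdlib sketch (assuming the offline test client is configured with the SDK key `'secret'`, which reproduces the fixed hash in the test):

```python
import hashlib
import hmac


def secure_mode_hash(sdk_key: str, context_key: str) -> str:
    # HMAC-SHA256 of the context key, keyed with the SDK key, hex-encoded.
    return hmac.new(sdk_key.encode('utf-8'),
                    context_key.encode('utf-8'),
                    hashlib.sha256).hexdigest()


# Reproduces the expected_hash constant asserted in test_secure_mode_hash.
print(secure_mode_hash('secret', 'Message'))
# aa747c502a898200f9e4fa21bac68136f886a0e27aec70ba06daf2e2a5cb5597
```

Because the hash depends only on the context key, any attribute changes on the context leave the hash unchanged.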
    'key': 'key1',
    'version': 100,
@@ -55,13 +52,6 @@ def init(self, data):
     def initialized(self):
         return True
 
-def make_client(store):
-    return LDClient(config=Config(sdk_key='SDK_KEY',
-                                  base_uri='http://test',
-                                  event_processor_class=MockEventProcessor,
-                                  update_processor_class=MockUpdateProcessor,
-                                  feature_store=store))
-
 def get_log_lines(caplog, level):
     loglines = caplog.records
     if callable(loglines):
@@ -71,44 +61,64 @@ def get_log_lines(caplog, level):
 
 def test_variation_for_existing_feature():
-    feature = make_off_flag_with_value('feature.key', 'value')
+    feature = build_off_flag_with_value('feature.key', 'value').build()
     store = InMemoryFeatureStore()
     store.init({FEATURES: {'feature.key': feature}})
     client = make_client(store)
     assert 'value' == client.variation('feature.key', user, default='default')
 
+def test_variation_passes_user_to_evaluator():
+    u = {'key': 'userkey'}
+    feature = FlagBuilder('feature.key').on(True).variations('wrong', 'right').target(1, 'userkey').build()
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {'feature.key': feature}})
+    client = make_client(store)
+    assert 'right' == client.variation('feature.key', u, default='default')
+
+def test_variation_passes_context_to_evaluator():
+    c = Context.create('userkey')
+    feature = FlagBuilder('feature.key').on(True).variations('wrong', 'right').target(1, 'userkey').build()
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {'feature.key': feature}})
+    client = make_client(store)
+    assert 'right' == client.variation('feature.key', c, default='default')
+
 def test_variation_for_unknown_feature():
     store = InMemoryFeatureStore()
     client = make_client(store)
     assert 'default' == client.variation('feature.key', user, default='default')
 
 def test_variation_when_user_is_none():
-    feature = make_off_flag_with_value('feature.key', 'value')
+    feature = build_off_flag_with_value('feature.key', 'value').build()
     store = InMemoryFeatureStore()
     store.init({FEATURES: {'feature.key': feature}})
     client = make_client(store)
     assert 'default' == client.variation('feature.key', None, default='default')
 
 def test_variation_when_user_has_no_key():
-    feature = make_off_flag_with_value('feature.key', 'value')
+    feature = build_off_flag_with_value('feature.key', 'value').build()
     store = InMemoryFeatureStore()
     store.init({FEATURES: {'feature.key': feature}})
     client = make_client(store)
     assert 'default' == client.variation('feature.key', { }, default='default')
 
+def test_variation_for_invalid_context():
+    c = Context.create('')
+    feature = build_off_flag_with_value('feature.key', 'value').build()
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {'feature.key': feature}})
+    client = make_client(store)
+    assert 'default' == client.variation('feature.key', c, default='default')
+
 def test_variation_for_flag_that_evaluates_to_none():
-    empty_flag = {
-        'key': 'feature.key',
-        'on': False,
-        'offVariation': None
-    }
+    empty_flag = FlagBuilder('feature.key').on(False).build()
     store = InMemoryFeatureStore()
     store.init({FEATURES: {'feature.key': empty_flag}})
     client = make_client(store)
     assert 'default' == client.variation('feature.key', user, default='default')
 
 def test_variation_detail_for_existing_feature():
-    feature = make_off_flag_with_value('feature.key', 'value')
+    feature = build_off_flag_with_value('feature.key', 'value').build()
     store = InMemoryFeatureStore()
     store.init({FEATURES: {'feature.key': feature}})
     client = make_client(store)
@@ -122,7 +132,7 @@ def test_variation_detail_for_unknown_feature():
     assert expected == client.variation_detail('feature.key', user, default='default')
 
 def test_variation_detail_when_user_is_none():
-    feature = make_off_flag_with_value('feature.key', 'value')
+    feature = build_off_flag_with_value('feature.key', 'value').build()
     store = InMemoryFeatureStore()
     store.init({FEATURES: {'feature.key': feature}})
     client = make_client(store)
@@ -130,7 +140,7 @@ def test_variation_detail_when_user_is_none():
     assert expected == client.variation_detail('feature.key', None, default='default')
 
 def test_variation_detail_when_user_has_no_key():
-    feature = make_off_flag_with_value('feature.key', 'value')
+    feature = build_off_flag_with_value('feature.key', 'value').build()
     store = InMemoryFeatureStore()
     store.init({FEATURES: {'feature.key': feature}})
     client = make_client(store)
@@ -138,11 +148,7 @@ def test_variation_detail_when_user_has_no_key():
     assert expected == client.variation_detail('feature.key', { }, default='default')
 
 def test_variation_detail_for_flag_that_evaluates_to_none():
-    empty_flag = {
-        'key': 'feature.key',
-        'on': False,
-        'offVariation': None
-    }
+    empty_flag = FlagBuilder('feature.key').on(False).build()
     store = InMemoryFeatureStore()
     store.init({FEATURES: {'feature.key': empty_flag}})
     client = make_client(store)
@@ -169,12 +175,7 @@ def test_variation_detail_when_feature_store_throws_error(caplog):
     assert errlog == [ 'Unexpected error while retrieving feature flag "feature.key": NotImplementedError()' ]
 
 def test_flag_using_big_segment():
-    segment = {
-        'key': 'segkey',
-        'version': 1,
-        'generation': 1,
-        'unbounded': True
-    }
+    segment = SegmentBuilder('segkey').unbounded(True).generation(1).build()
     flag = make_boolean_flag_matching_segment(segment)
     store = InMemoryFeatureStore()
     store.init({ FEATURES: { flag['key']: flag }, SEGMENTS: { segment['key']: segment } })
diff --git a/testing/test_ldclient_events.py b/testing/test_ldclient_events.py
new file mode 100644
index 00000000..aa5ff1b6
--- /dev/null
+++ b/testing/test_ldclient_events.py
@@ -0,0 +1,290 @@
+from ldclient.client import LDClient, Config, Context
+from ldclient.impl.events.event_processor import DefaultEventProcessor
+from ldclient.feature_store import InMemoryFeatureStore
+from ldclient.impl.events.types import EventInputCustom, EventInputEvaluation, EventInputIdentify
+from ldclient.impl.stubs import NullEventProcessor
+from ldclient.versioned_data_kind import FEATURES
+
+from testing.builders import *
+from testing.stub_util import MockUpdateProcessor
+from testing.test_ldclient import context, make_client, make_ldd_client, make_offline_client, unreachable_uri, user
+
+
+def get_first_event(c):
+    e = c._event_processor._events.pop(0)
+    c._event_processor._events = []
+    return e
+
+
+def count_events(c):
+    n = len(c._event_processor._events)
+    c._event_processor._events = []
+    return n
+
+
+def test_client_has_null_event_processor_if_offline():
+    with make_offline_client() as client:
+        assert isinstance(client._event_processor, NullEventProcessor)
+
+
+def test_client_has_null_event_processor_if_send_events_off():
+    config = Config(sdk_key="secret", base_uri=unreachable_uri,
+                    update_processor_class = MockUpdateProcessor, send_events=False)
+    with LDClient(config=config) as client:
+        assert isinstance(client._event_processor, NullEventProcessor)
+
+
+def test_client_has_normal_event_processor_in_ldd_mode():
+    with make_ldd_client() as client:
+        assert isinstance(client._event_processor, DefaultEventProcessor)
+
+
+def test_identify():
+    with make_client() as client:
+        client.identify(context)
+        e = get_first_event(client)
+        assert isinstance(e, EventInputIdentify)
+        assert e.context == context
+
+
+def test_identify_with_user_dict():
+    with make_client() as client:
+        client.identify(user)
+        e = get_first_event(client)
+        assert isinstance(e, EventInputIdentify)
+        assert e.context == context
+
+
+def test_identify_no_user():
+    with make_client() as client:
+        client.identify(None)
+        assert count_events(client) == 0
+
+
+def test_identify_no_user_key():
+    with make_client() as client:
+        client.identify({ 'name': 'nokey' })
+        assert count_events(client) == 0
+
+
+def test_identify_invalid_context():
+    with make_client() as client:
+        client.identify(Context.create(''))
+        assert count_events(client) == 0
+
+
+def test_track():
+    with make_client() as client:
+        client.track('my_event', context)
+        e = get_first_event(client)
+        assert isinstance(e, EventInputCustom)
+        assert e.key == 'my_event'
+        assert e.context == context
+        assert e.data is None
+        assert e.metric_value is None
+
+
+def test_track_with_user_dict():
+    with make_client() as client:
+        client.track('my_event', user)
+        e = get_first_event(client)
+        assert isinstance(e, EventInputCustom)
+        assert e.key == 'my_event'
+        assert e.context == context
+        assert e.data is None
+        assert e.metric_value is None
+
+
+def test_track_with_data():
+    with make_client() as client:
+        client.track('my_event', context, 42)
+        e = get_first_event(client)
+        assert isinstance(e, EventInputCustom)
+        assert e.key == 'my_event'
+        assert e.context == context
+        assert e.data == 42
+        assert e.metric_value is None
+
+
+def test_track_with_metric_value():
+    with make_client() as client:
+        client.track('my_event', context, 42, 1.5)
+        e = get_first_event(client)
+        assert isinstance(e, EventInputCustom)
+        assert e.key == 'my_event'
+        assert e.context == context
+        assert e.data == 42
+        assert e.metric_value == 1.5
+
+
+def test_track_no_context():
+    with make_client() as client:
+        client.track('my_event', None)
+        assert count_events(client) == 0
+
+
+def test_track_invalid_context():
+    with make_client() as client:
+        client.track('my_event', Context.create(''))
+        assert count_events(client) == 0
+
+
+def test_event_for_existing_feature():
+    feature = build_off_flag_with_value('feature.key', 'value').track_events(True).build()
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {feature.key: feature.to_json_dict()}})
+    with make_client(store) as client:
+        assert 'value' == client.variation(feature.key, context, default='default')
+        e = get_first_event(client)
+        assert isinstance(e, EventInputEvaluation)
+        assert (e.key == feature.key and
+            e.flag == feature and
+            e.context == context and
+            e.value == 'value' and
+            e.variation == 0 and
+            e.reason is None and
+            e.default_value == 'default' and
+            e.track_events is True)
+
+
+def test_event_for_existing_feature_with_reason():
+    feature = build_off_flag_with_value('feature.key', 'value').track_events(True).build()
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {feature.key: feature.to_json_dict()}})
+    with make_client(store) as client:
+        assert 'value' == client.variation_detail(feature.key, context, default='default').value
+        e = get_first_event(client)
+        assert isinstance(e, EventInputEvaluation)
+        assert (e.key == feature.key and
+            e.flag == feature and
+            e.context == context and
+            e.value == 'value' and
+            e.variation == 0 and
+            e.reason == {'kind': 'OFF'} and
+            e.default_value == 'default' and
+            e.track_events is True)
+
+
+def test_event_for_existing_feature_with_tracked_rule():
+    feature = FlagBuilder('feature.key').version(100).on(True).variations('value') \
+        .rules(
+            FlagRuleBuilder().variation(0).id('rule_id').track_events(True) \
+                .clauses(make_clause(None, 'key', 'in', user['key'])) \
+                .build()
+        ) \
+        .build()
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {feature.key: feature.to_json_dict()}})
+    client = make_client(store)
+    assert 'value' == client.variation(feature.key, context, default='default')
+    e = get_first_event(client)
+    assert isinstance(e, EventInputEvaluation)
+    assert (e.key == feature.key and
+        e.flag == feature and
+        e.context == context and
+        e.value == 'value' and
+        e.variation == 0 and
+        e.reason == { 'kind': 'RULE_MATCH', 'ruleIndex': 0, 'ruleId': 'rule_id' } and
+        e.default_value == 'default' and
+        e.track_events is True)
+
+
+def test_event_for_existing_feature_with_untracked_rule():
+    feature = FlagBuilder('feature.key').version(100).on(True).variations('value') \
+        .rules(
+            FlagRuleBuilder().variation(0).id('rule_id') \
+                .clauses(make_clause(None, 'key', 'in', user['key'])) \
+                .build()
+        ) \
+        .build()
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {feature.key: feature.to_json_dict()}})
+    client = make_client(store)
+    assert 'value' == client.variation(feature.key, context, default='default')
+    e = get_first_event(client)
+    assert isinstance(e, EventInputEvaluation)
+    assert (e.key == feature.key and
+        e.flag == feature and
+        e.context == context and
+        e.value == 'value' and
+        e.variation == 0 and
+        e.reason is None and
+        e.default_value == 'default' and
+        e.track_events is False)
+
+
+def test_event_for_existing_feature_with_tracked_fallthrough():
+    feature = FlagBuilder('feature.key').version(100).on(True).variations('value') \
+        .fallthrough_variation(0).track_events_fallthrough(True) \
+        .build()
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {feature.key: feature.to_json_dict()}})
+    client = make_client(store)
+    assert 'value' == client.variation(feature.key, context, default='default')
+    e = get_first_event(client)
+    assert isinstance(e, EventInputEvaluation)
+    assert (e.key == feature.key and
+        e.flag == feature and
+        e.context == context and
+        e.value == 'value' and
+        e.variation == 0 and
+        e.reason == { 'kind': 'FALLTHROUGH' } and
+        e.default_value == 'default' and
+        e.track_events is True)
+
+
+def test_event_for_existing_feature_with_untracked_fallthrough():
+    feature = FlagBuilder('feature.key').version(100).on(True).variations('value') \
+        .fallthrough_variation(0) \
+        .build()
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {feature.key: feature.to_json_dict()}})
+    client = make_client(store)
+    detail = client.variation_detail(feature.key, context, default='default')
+    assert 'value' == detail.value
+    e = get_first_event(client)
+    assert isinstance(e, EventInputEvaluation)
+    assert (e.key == feature.key and
+        e.flag == feature and
+        e.context == context and
+        e.value == 'value' and
+        e.variation == 0 and
+        e.reason == { 'kind': 'FALLTHROUGH' } and
+        e.default_value == 'default' and
+        e.track_events is False)
+
+
+def test_event_for_unknown_feature():
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {}})
+    with make_client(store) as client:
+        assert 'default' == client.variation('feature.key', context, default='default')
+        e = get_first_event(client)
+        assert isinstance(e, EventInputEvaluation)
+        assert (e.key == 'feature.key' and
+            e.flag is None and
+            e.context == context and
+            e.value == 'default' and
+            e.variation is None and
+            e.reason is None and
+            e.default_value == 'default' and
+            e.track_events is False)
+
+
+def test_no_event_for_existing_feature_with_no_context():
+    feature = build_off_flag_with_value('feature.key', 'value').track_events(True).build()
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {feature.key: feature.to_json_dict()}})
+    with make_client(store) as client:
+        assert 'default' == client.variation(feature.key, None, default='default')
+        assert count_events(client) == 0
+
+
+def test_no_event_for_existing_feature_with_invalid_context():
+    feature = build_off_flag_with_value('feature.key', 'value').track_events(True).build()
+    store = InMemoryFeatureStore()
+    store.init({FEATURES: {feature.key: feature.to_json_dict()}})
+    with make_client(store) as client:
+        bad_context = Context.create('')
+        assert 'default' == client.variation('feature.key', bad_context, default='default')
+        assert count_events(client) == 0
diff --git a/testing/test_user_filter.py b/testing/test_user_filter.py
deleted file mode 100644
index aa53bbad..00000000
--- a/testing/test_user_filter.py
+++ /dev/null
@@ -1,117 +0,0 @@
-import json
-from ldclient.client import Config
-from ldclient.user_filter import UserFilter
-
-
-base_config = Config("fake_sdk_key")
-config_with_all_attrs_private = Config("fake_sdk_key", all_attributes_private = True)
-config_with_some_attrs_private = Config("fake_sdk_key", private_attribute_names=set([u'firstName', u'bizzle']))
-
-# users to serialize
-
-user = {
-    u'key': u'abc',
-    u'firstName': u'Sue',
-    u'custom': {
-        u'bizzle': u'def',
-        u'dizzle': u'ghi'
-    }
-}
-
-user_specifying_own_private_attr = {
-    u'key': u'abc',
-    u'firstName': u'Sue',
-    u'custom': {
-        u'bizzle': u'def',
-        u'dizzle': u'ghi'
-    },
-    u'privateAttributeNames': [ u'dizzle', u'unused' ]
-}
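The new tests in `testing/test_ldclient_events.py` replace hand-written flag dicts with a fluent `FlagBuilder` from `testing.builders`. As an illustration of that builder pattern only (a hypothetical standalone sketch, not the repo's actual `FlagBuilder` implementation), a minimal fluent builder producing the same shape of flag data could look like:

```python
class MiniFlagBuilder:
    """Hypothetical sketch of a fluent builder for flag test data.

    Each setter mutates internal state and returns self, so calls chain;
    build() returns a plain dict snapshot like the old hand-written fixtures.
    """

    def __init__(self, key: str):
        self._data = {'key': key, 'version': 1, 'on': False, 'variations': []}

    def on(self, is_on: bool) -> 'MiniFlagBuilder':
        self._data['on'] = is_on
        return self

    def version(self, version: int) -> 'MiniFlagBuilder':
        self._data['version'] = version
        return self

    def variations(self, *values) -> 'MiniFlagBuilder':
        self._data['variations'] = list(values)
        return self

    def fallthrough_variation(self, index: int) -> 'MiniFlagBuilder':
        self._data['fallthrough'] = {'variation': index}
        return self

    def build(self) -> dict:
        return dict(self._data)  # copy, so one builder can emit several flags


flag = MiniFlagBuilder('feature.key').version(100).on(True) \
    .variations('value').fallthrough_variation(0).build()
```

Compared with inline dict literals, the builder keeps each test focused on the one or two properties it exercises while defaults cover the rest.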
-
-user_with_unknown_top_level_attrs = {
-    u'key': u'abc',
-    u'firstName': u'Sue',
-    u'species': u'human',
-    u'hatSize': 6,
-    u'custom': {
-        u'bizzle': u'def',
-        u'dizzle': u'ghi'
-    }
-}
-
-anon_user = {
-    u'key': u'abc',
-    u'anonymous': True,
-    u'custom': {
-        u'bizzle': u'def',
-        u'dizzle': u'ghi'
-    }
-}
-
-# expected results from serializing user
-
-user_with_all_attrs_hidden = {
-    u'key': u'abc',
-    u'custom': { },
-    u'privateAttrs': [ u'bizzle', u'dizzle', u'firstName' ]
-}
-
-user_with_some_attrs_hidden = {
-    u'key': u'abc',
-    u'custom': {
-        u'dizzle': u'ghi'
-    },
-    u'privateAttrs': [ u'bizzle', u'firstName' ]
-}
-
-user_with_own_specified_attr_hidden = {
-    u'key': u'abc',
-    u'firstName': u'Sue',
-    u'custom': {
-        u'bizzle': u'def'
-    },
-    u'privateAttrs': [ u'dizzle' ]
-}
-
-anon_user_with_all_attrs_hidden = {
-    u'key': u'abc',
-    u'anonymous': True,
-    u'custom': { },
-    u'privateAttrs': [ u'bizzle', u'dizzle' ]
-}
-
-
-def test_all_user_attrs_serialized():
-    uf = UserFilter(base_config)
-    j = uf.filter_user_props(user)
-    assert j == user
-
-def test_all_user_attrs_private():
-    uf = UserFilter(config_with_all_attrs_private)
-    j = uf.filter_user_props(user)
-    assert j == user_with_all_attrs_hidden
-
-def test_some_user_attrs_private():
-    uf = UserFilter(config_with_some_attrs_private)
-    j = uf.filter_user_props(user)
-    assert j == user_with_some_attrs_hidden
-
-def test_per_user_private_attr():
-    uf = UserFilter(base_config)
-    j = uf.filter_user_props(user_specifying_own_private_attr)
-    assert j == user_with_own_specified_attr_hidden
-
-def test_per_user_private_attr_plus_global_private_attrs():
-    uf = UserFilter(config_with_some_attrs_private)
-    j = uf.filter_user_props(user_specifying_own_private_attr)
-    assert j == user_with_all_attrs_hidden
-
-def test_unknown_top_level_attrs_stripped():
-    uf = UserFilter(base_config)
-    j = uf.filter_user_props(user_with_unknown_top_level_attrs)
-    assert j == user
-
-def test_leave_anonymous_attr_as_is():
-    uf = UserFilter(config_with_all_attrs_private)
-    j = uf.filter_user_props(anon_user)
-    assert j == anon_user_with_all_attrs_hidden
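The deleted `test_user_filter.py` fixtures document the old user-serialization behavior: with `all_attributes_private`, every attribute except `key` and `anonymous` is redacted, `custom` is emptied, and the redacted names are listed (sorted) under `privateAttrs`. A stdlib sketch of just that `all_attributes_private` case, reproducing the `user_with_all_attrs_hidden` and `anon_user_with_all_attrs_hidden` fixtures above (hypothetical; the real logic lived in `ldclient.user_filter.UserFilter`):

```python
def filter_all_private(user: dict) -> dict:
    """Redact every attribute except 'key' and 'anonymous', recording the
    redacted names in a sorted 'privateAttrs' list, as in the old fixtures."""
    never_redacted = {'key', 'anonymous'}
    out = {k: v for k, v in user.items() if k in never_redacted}
    # Top-level attrs (other than 'custom') plus all custom attr names get listed.
    redacted = [k for k in user if k not in never_redacted and k != 'custom']
    redacted += list(user.get('custom', {}))
    out['custom'] = {}
    out['privateAttrs'] = sorted(redacted)
    return out


user = {'key': 'abc', 'firstName': 'Sue',
        'custom': {'bizzle': 'def', 'dizzle': 'ghi'}}
# Matches the user_with_all_attrs_hidden fixture above.
assert filter_all_private(user) == {
    'key': 'abc', 'custom': {},
    'privateAttrs': ['bizzle', 'dizzle', 'firstName']}
```

In 8.0.0 this user-dict filtering is superseded by context serialization, which is why the test file is deleted rather than migrated.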