
Commit

Merge remote-tracking branch 'origin/master' into add-clang-cl-build
Signed-off-by: Sunjay Bhatia <[email protected]>
sunjayBhatia committed Nov 30, 2020
2 parents 5a058de + 5e2a0e6 commit 779a753
Showing 110 changed files with 1,309 additions and 817 deletions.
18 changes: 7 additions & 11 deletions .bazelrc
@@ -277,22 +277,18 @@ build:remote-ci --remote_cache=grpcs://remotebuildexecution.googleapis.com
build:remote-ci --remote_executor=grpcs://remotebuildexecution.googleapis.com

# Fuzz builds
# -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION is passed in the bazel build target
# rules for fuzz tests. Passing it on the CLI would cause dependencies to be built
# with the macro, causing issues in RouteMatcherTest.TestRoutes, which expects prod
# behavior from the RE2 library.
build:asan-fuzzer --config=asan
build:asan-fuzzer --define=FUZZING_ENGINE=libfuzzer
build:asan-fuzzer --copt=-fsanitize=fuzzer-no-link
build:asan-fuzzer --copt=-fno-omit-frame-pointer
# Remove UBSAN halt_on_error to avoid crashing on protobuf errors.
build:asan-fuzzer --test_env=UBSAN_OPTIONS=print_stacktrace=1

# Fuzzing without ASAN. This is useful for profiling fuzzers without any ASAN artifacts.
build:plain-fuzzer --define=FUZZING_ENGINE=libfuzzer
build:plain-fuzzer --define ENVOY_CONFIG_ASAN=1
build:plain-fuzzer --copt=-fsanitize=fuzzer-no-link
build:plain-fuzzer --linkopt=-fsanitize=fuzzer-no-link
build:plain-fuzzer --copt=-DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION

build:asan-fuzzer --config=plain-fuzzer
build:asan-fuzzer --config=asan
build:asan-fuzzer --copt=-fno-omit-frame-pointer
# Remove UBSAN halt_on_error to avoid crashing on protobuf errors.
build:asan-fuzzer --test_env=UBSAN_OPTIONS=print_stacktrace=1
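# For example (illustrative target; any *_fuzz_test can be substituted), a fuzz test
# can be built with this config and then run against a local corpus:
#   bazel build --config=asan-fuzzer //test/common/common:base64_fuzz_test
#   bazel-bin/test/common/common/base64_fuzz_test <path/to/corpus>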

# Compile database generation config
build:compdb --build_tag_filters=-nocompdb
19 changes: 12 additions & 7 deletions api/envoy/api/v2/route/route_components.proto
@@ -1133,13 +1133,18 @@ message HedgePolicy {
// [#not-implemented-hide:]
type.FractionalPercent additional_request_chance = 2;

// Indicates that a hedged request should be sent when the per-try timeout
// is hit. This will only occur if the retry policy also indicates that a
// timed out request should be retried.
// Once a timed out request is retried due to per try timeout, the router
// filter will ensure that it is not retried again even if the returned
// response headers would otherwise be retried according to the specified
// :ref:`RetryPolicy <envoy_api_msg_route.RetryPolicy>`.
// Indicates that a hedged request should be sent when the per-try timeout is hit.
// This means that a retry will be issued without resetting the original request, leaving multiple upstream requests in flight.
// The first request to complete successfully will be the one returned to the caller.
//
// * At any time, a successful response (i.e. not triggering any of the retry-on conditions) would be returned to the client.
// * Before per-try timeout, an error response (per retry-on conditions) would be retried immediately or returned to the client
// if there are no more retries left.
// * After per-try timeout, an error response would be discarded, as a retry in the form of a hedged request is already in progress.
//
// Note: For this to have effect, you must have a :ref:`RetryPolicy <envoy_api_msg_route.RetryPolicy>` that retries at least
// one error code and specifies a maximum number of retries.
//
// Defaults to false.
bool hedge_on_per_try_timeout = 3;
}
19 changes: 12 additions & 7 deletions api/envoy/config/route/v3/route_components.proto
@@ -1310,13 +1310,18 @@ message HedgePolicy {
// [#not-implemented-hide:]
type.v3.FractionalPercent additional_request_chance = 2;

// Indicates that a hedged request should be sent when the per-try timeout
// is hit. This will only occur if the retry policy also indicates that a
// timed out request should be retried.
// Once a timed out request is retried due to per try timeout, the router
// filter will ensure that it is not retried again even if the returned
// response headers would otherwise be retried according to the specified
// :ref:`RetryPolicy <envoy_api_msg_config.route.v3.RetryPolicy>`.
// Indicates that a hedged request should be sent when the per-try timeout is hit.
// This means that a retry will be issued without resetting the original request, leaving multiple upstream requests in flight.
// The first request to complete successfully will be the one returned to the caller.
//
// * At any time, a successful response (i.e. not triggering any of the retry-on conditions) would be returned to the client.
// * Before per-try timeout, an error response (per retry-on conditions) would be retried immediately or returned to the client
// if there are no more retries left.
// * After per-try timeout, an error response would be discarded, as a retry in the form of a hedged request is already in progress.
//
// Note: For this to have effect, you must have a :ref:`RetryPolicy <envoy_api_msg_config.route.v3.RetryPolicy>` that retries at least
// one error code and specifies a maximum number of retries.
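//
// A minimal sketch (the cluster name, retry conditions, and timeout values are
// illustrative assumptions) of enabling this behavior on a route:
//
// .. code-block:: yaml
//
//   route:
//     cluster: some_cluster
//     retry_policy:
//       retry_on: "5xx"
//       num_retries: 3
//       per_try_timeout: 0.5s
//     hedge_policy:
//       hedge_on_per_try_timeout: true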
//
// Defaults to false.
bool hedge_on_per_try_timeout = 3;
}
19 changes: 12 additions & 7 deletions api/envoy/config/route/v4alpha/route_components.proto

Some generated files are not rendered by default.

11 changes: 6 additions & 5 deletions bazel/PPROF.md
@@ -1,7 +1,8 @@
# CPU or memory consumption testing with `pprof`

To use `pprof` to analyze performance and memory consumption in Envoy, you can
use the built-in statically linked profiler, or dynamically link it in to a
use the built-in statically linked profiler provided by
[gperftools](https://github.com/gperftools/gperftools), or dynamically link it into a
specific place yourself.

## Collecting CPU or heap profile for a full execution of envoy
@@ -14,7 +15,7 @@ inside

Build the static binary using bazel:

$ bazel build //source/exe:envoy-static
$ bazel build --define tcmalloc=gperftools //source/exe:envoy-static

### Collecting the profile

@@ -41,15 +42,15 @@ The profiler library is automatically linked into envoy_cc_test targets.

Run a test with heap profiling enabled, like so:

$ bazel test --test_env=HEAPPROFILE=/tmp/heapprof <test target>
$ bazel test --test_env=HEAPPROFILE=/tmp/heapprof --define tcmalloc=gperftools <test target>

Run a test with CPU profiling enabled, like so:

$ bazel test --test_env=CPUPROFILE=/tmp/cpuprof <test target>
$ bazel test --test_env=CPUPROFILE=/tmp/cpuprof --define tcmalloc=gperftools <test target>

Note that heap checks and heap profile collection in tests have noticeable performance implications. Use the following command to collect a CPU profile from a test target with heap check and heap profile collection disabled:

$ bazel test --test_env=CPUPROFILE=/tmp/cpuprof --test_env=HEAPPROFILE= --test_env=HEAPCHECK= <test target>
$ bazel test --test_env=CPUPROFILE=/tmp/cpuprof --test_env=HEAPPROFILE= --test_env=HEAPCHECK= --define tcmalloc=gperftools <test target>
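
The resulting CPU profile can then be inspected with gperftools' `pprof` (a minimal sketch; the test binary path is a placeholder and `pprof` is assumed to be on the PATH):

    $ pprof --text bazel-bin/<test binary> /tmp/cpuprof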

## Starting and stopping profile programmatically

4 changes: 2 additions & 2 deletions bazel/foreign_cc/BUILD
@@ -91,8 +91,8 @@ envoy_cmake_external(
"//conditions:default": [],
}),
postfix_script = select({
"//bazel:windows_x86_64": "cp -L $EXT_BUILD_ROOT/external/com_github_c_ares_c_ares/src/lib/nameser.h $INSTALLDIR/include/nameser.h && cp -L $EXT_BUILD_ROOT/external/com_github_c_ares_c_ares/src/lib/ares_dns.h $INSTALLDIR/include/ares_dns.h",
"//conditions:default": "cp -L $EXT_BUILD_ROOT/external/com_github_c_ares_c_ares/src/lib/ares_dns.h $INSTALLDIR/include/ares_dns.h",
"//bazel:windows_x86_64": "cp -L $EXT_BUILD_ROOT/external/com_github_c_ares_c_ares/src/lib/nameser.h $INSTALLDIR/include/nameser.h && cp -L $EXT_BUILD_ROOT/external/com_github_c_ares_c_ares/include/ares_dns.h $INSTALLDIR/include/ares_dns.h",
"//conditions:default": "cp -L $EXT_BUILD_ROOT/external/com_github_c_ares_c_ares/include/ares_dns.h $INSTALLDIR/include/ares_dns.h",
}),
static_libraries = select({
"//bazel:windows_x86_64": ["cares.lib"],
12 changes: 6 additions & 6 deletions bazel/repository_locations.bzl
@@ -89,22 +89,22 @@ REPOSITORY_LOCATIONS_SPEC = dict(
project_name = "Abseil",
project_desc = "Open source collection of C++ libraries drawn from the most fundamental pieces of Google’s internal codebase",
project_url = "https://abseil.io/",
version = "8f1c34a77a2ba04512b7f9cbc6013d405e6a0b31",
sha256 = "635367c5cac4bbab95d0485ba9e68fa422546b06ce050190c99be7e23aba3ce3",
version = "5d8fc9192245f0ea67094af57399d7931d6bd53f",
sha256 = "e3812f256dd7347a33bf9d93a950cf356c61c0596842ff07d8154cd415145d83",
strip_prefix = "abseil-cpp-{version}",
urls = ["https://github.com/abseil/abseil-cpp/archive/{version}.tar.gz"],
use_category = ["dataplane_core", "controlplane"],
release_date = "2020-10-17",
release_date = "2020-11-24",
cpe = "N/A",
),
com_github_c_ares_c_ares = dict(
project_name = "c-ares",
project_desc = "C library for asynchronous DNS requests",
project_url = "https://c-ares.haxx.se/",
version = "c15f403875ababb1149215d49683d720b3d035c7",
sha256 = "0ad4f9748752909b00a0ca8d2b6a075e0a7a06ee922d9dcf9625d2562d7c894a",
version = "1.17.1",
sha256 = "d73dd0f6de824afd407ce10750ea081af47eba52b8a6cb307d220131ad93fc40",
strip_prefix = "c-ares-{version}",
urls = ["https://github.com/c-ares/c-ares/archive/{version}.tar.gz"],
urls = ["https://github.com/c-ares/c-ares/releases/download/cares-{underscore_version}/c-ares-{version}.tar.gz"],
use_category = ["dataplane_core", "controlplane"],
release_date = "2020-10-21",
cpe = "cpe:2.3:a:c-ares_project:c-ares:*",
53 changes: 39 additions & 14 deletions ci/repokitteh/modules/ownerscheck.star
@@ -8,7 +8,8 @@
# "path": "api/",
# "label": "api",
# "allow_global_approval": True,
# "github_status_label" = "any API change",
# "github_status_label": "any API change",
# "auto_assign": True,
# },
# ],
# )
@@ -27,8 +28,13 @@
#
# 'label' refers to a GitHub label applied to any matching PR. The GitHub check status
# can be customized with `github_status_label`.
#
# If 'auto_assign' is set True, a randomly chosen member of the owner team will be
# assigned as a reviewer on the PR, unless a member of the owner team is already set
# as a reviewer or assignee for the PR.

load("text", "match")
load("time", "now")
load("github.com/repokitteh/modules/lib/utils.star", "react")

def _store_partial_approval(who, files):
@@ -64,7 +70,8 @@ def _get_relevant_specs(specs, changed_files):
label=spec.get("label", None),
path_match=path_match,
allow_global_approval=allow_global_approval,
status_label=status_label))
status_label=status_label,
auto_assign=spec.get("auto_assign", False)))

print("specs: %s" % relevant)

@@ -152,20 +159,19 @@ def _reconcile(config, specs=None):
return results


def _comment(config, results, force=False):
def _comment(config, results, assignees, sender, force=False):
lines = []

for spec, approved in results:
if approved:
continue

mention = spec.owner
owner = spec.owner

if mention[0] != '@':
mention = '@' + mention
if owner[-1] == '!':
owner = owner[:-1]

if mention[-1] == '!':
mention = mention[:-1]
mention = '@' + owner

match_description = spec.path_match
if match_description:
@@ -185,21 +191,40 @@
elif mode == 'fyi':
lines.append('CC %s: FYI only%s.' % (mention, match_description))

if mode != 'skip' and spec.auto_assign:
api_assignee = None
# Find owners via github.team_get_by_name, github.team_list_members
team_name = owner.split('/')[1]
team = github.team_get_by_name(team_name)
# Exclude author from assignment.
members = [m['login'] for m in github.team_list_members(team['id']) if m['login'] != sender]
# Is a team member already assigned? The first assigned team member is picked. This is
# O(n^2), as Starlark doesn't have sets, but n is small.
for assignee in assignees:
if assignee in members:
api_assignee = assignee
break
# Otherwise, pick at "random" (we just use timestamp).
if not api_assignee:
api_assignee = members[now().second % len(members)]
lines.append('API shepherd assignee is @%s' % api_assignee)
github.issue_assign(api_assignee)

if lines:
github.issue_create_comment('\n'.join(lines))


def _reconcile_and_comment(config):
_comment(config, _reconcile(config))
def _reconcile_and_comment(config, assignees, sender):
_comment(config, _reconcile(config), assignees, sender)


def _force_reconcile_and_comment(config):
_comment(config, _reconcile(config), force=True)
def _force_reconcile_and_comment(config, assignees, sender):
_comment(config, _reconcile(config), assignees, sender, force=True)


def _pr(action, config):
def _pr(action, config, assignees, sender):
if action in ['synchronize', 'opened']:
_reconcile_and_comment(config)
_reconcile_and_comment(config, assignees, sender)


def _pr_review(action, review_state, config):
12 changes: 7 additions & 5 deletions docs/root/faq/extensions/contract.rst
@@ -5,12 +5,13 @@ Is there a contract my HTTP filter must adhere to?

* Headers encoding/decoding

* During encoding/decoding of headers if a filter returns ``FilterHeadersStatus::StopIteration``,
the processing can be resumed if ``encodeData()``/``decodeData()`` return
* During encoding/decoding of headers if a local reply wasn't sent and a filter
returns ``FilterHeadersStatus::StopIteration``, the processing can be resumed
if ``encodeData()``/``decodeData()`` return
``FilterDataStatus::Continue`` or by explicitly calling
``continueEncoding()``/``continueDecoding()``.

* During encoding/decoding of headers if a filter returns
* During encoding/decoding of headers if a local reply wasn't sent and a filter returns
``FilterHeadersStatus::StopAllIterationAndBuffer`` or
``FilterHeadersStatus::StopAllIterationAndWatermark``, the processing can be resumed by calling
``continueEncoding()``/``continueDecoding()``.
@@ -24,15 +25,16 @@ Is there a contract my HTTP filter must adhere to?

* Data encoding/decoding

* During encoding/decoding of data if a filter returns
* During encoding/decoding of data if a local reply wasn't sent and a filter returns
``FilterDataStatus::StopIterationAndBuffer``, ``FilterDataStatus::StopIterationAndWatermark``,
or ``FilterDataStatus::StopIterationNoBuffer``, the processing can be resumed if
``encodeData()``/``decodeData()`` return ``FilterDataStatus::Continue`` or by explicitly
calling ``continueEncoding()``/``continueDecoding()``.

* Trailers encoding/decoding

* During encoding/decoding of trailers if a filter returns ``FilterTrailersStatus::StopIteration``,
* During encoding/decoding of trailers if a local reply wasn't sent and a filter
returns ``FilterTrailersStatus::StopIteration``,
the processing can be resumed by explicitly calling ``continueEncoding()``/``continueDecoding()``.

Are there well-known headers that will appear in the given headers map of ``decodeHeaders()``?
3 changes: 0 additions & 3 deletions docs/root/operations/admin.rst
@@ -511,9 +511,6 @@ modify different aspects of the server:
but in response to user requests on high core-count machines, this
can cause performance issues due to mutex contention.

This admin endpoint requires Envoy to be started with option
`--use-fake-symbol-table 0`.

See :repo:`source/docs/stats.md` for more details.

Note also that actual mutex contention can be tracked via :http:get:`/contention`.
1 change: 1 addition & 0 deletions docs/root/version_history/current.rst
@@ -85,3 +85,4 @@ Deprecated
* gzip: :ref:`HTTP Gzip filter <config_http_filters_gzip>` is now rejected unless explicitly allowed with :ref:`runtime override <config_runtime_deprecation>` `envoy.deprecated_features.allow_deprecated_gzip_http_filter` set to `true`.
* logging: the `--log-format-prefix-with-location` option is removed.
* ratelimit: the :ref:`dynamic metadata <envoy_v3_api_field_config.route.v3.RateLimit.Action.dynamic_metadata>` action is deprecated in favor of the more generic :ref:`metadata <envoy_v3_api_field_config.route.v3.RateLimit.Action.metadata>` action.
* stats: the `--use-fake-symbol-table` option is removed.
19 changes: 12 additions & 7 deletions generated_api_shadow/envoy/api/v2/route/route_components.proto

Some generated files are not rendered by default.
