
[BUG] - OpenSearch Dashboard V2.15.0 - JSON.parse: bad escaped character #7367

Closed
kksaha opened this issue Jul 19, 2024 · 40 comments · Fixed by #8603 · May be fixed by opensearch-project/opensearch-js#879
Labels: bug (Something isn't working), discover for discover reinvent, needs more info (Requires more information from poster)

Comments

kksaha commented Jul 19, 2024

Describe the bug

After upgrading from 2.13 to 2.15, Discover in Dashboards is barely usable due to the following error.

SyntaxError: Bad escaped character in JSON at position 911476 (line 1 column 911477) at fetch_Fetch.fetchResponse (https://dashboards.observability-opensearch.backend.ppe.cloud/7749/bundles/core/core.entry.js:15:243032) at async interceptResponse (https://dashboards.observability-opensearch.backend.ppe.cloud/7749/bundles/core/core.entry.js:15:237932) at async https://dashboards.observability-opensearch.backend.ppe.cloud/7749/bundles/core/core.entry.js:15:240899

Related component

Search

To Reproduce

Upgrade to version 2.15.0 and the error will appear in Discover.

Expected behavior

See the following error in Discover:

SyntaxError: Bad escaped character in JSON at position 911476 (line 1 column 911477)

Additional Details

Someone already reported this issue in the OpenSearch forum: https://forum.opensearch.org/t/json-parse-bad-escaped-character/20211

kksaha added the bug and untriaged labels Jul 19, 2024
dblock (Member) commented Jul 19, 2024

@kksaha Can you see what API is being called (maybe in dev tools) that causes this? It's likely a server error that's not being parsed properly.

kksaha (Author) commented Jul 22, 2024

@dblock Other than launching the Discover tab with the default index, no specific API is being called. It appears that this problem has also been encountered by other users: https://forum.opensearch.org/t/json-parse-bad-escaped-character/20211/6

dblock (Member) commented Jul 22, 2024

I am just trying to route this somewhere; I will move it to Dashboards for now. It would help if you could narrow down where the error is coming from (it's caused by parsing something - what is it that's being parsed?).

dblock transferred this issue from opensearch-project/OpenSearch Jul 22, 2024
ashwin-pc (Member) commented Jul 22, 2024

The error is likely in this section of the code, where the long numeral JSON parser cannot parse your JSON object. It does, however, have a fallback mechanism, and the fact that the error message does not indicate the parser (JSON11) makes this issue harder to debug. To reproduce this issue on our side, can you help find the document causing it? You can:

  1. Narrow the time range down to find the offending document causing this issue and share it (sanitized).
  2. Share the response for route /internal/search/opensearch-with-long-numerals or any internal/search call from the network tab when this issue occurs. (Sanitized response)
    cc: @AMoo-Miki any other debugging steps to help reproduce this?

P.S. I can't see the same issue on the playground (which is running 2.15) using the sample datasets, which is why having that response or document is needed to root-cause the issue here: Ref

agoerl commented Jul 23, 2024

I have the same issue. I can narrow it down to an index but not to a single document (yet); furthermore, I don't think this is possible. When the error appears, no documents are displayed anymore. After several refreshes the error eventually disappears and the documents reappear.

Let me visualize. It looks like this after e.g. three refresh operations (so one error message per refresh):

[screenshot]

....and normal again after some more refreshes...

[screenshot]

If I now immediately increase the window from 15 minutes to 30 minutes, I would expect to see that error again, but this is not the case. So how would one ever find the document responsible?

GSue53 commented Jul 23, 2024

same here

ananzh added the needs more info label and removed the untriaged label Jul 23, 2024
ashwin-pc (Member) commented:

@GSue53 @agoerl are there any other details you can provide that would help us reproduce this on our end? If you faced this issue for a whole index, it's likely that all documents in your index have the same issue. If you can provide a sample document from that index, that would be really useful.

ashwin-pc added the discover for discover reinvent label Jul 23, 2024
ashwin-pc (Member) commented:

Also, what shows up in your network tab when this request occurs? I'm looking particularly for the request and response payloads and the URL of the request.

ananzh (Member) commented Jul 23, 2024

Issue

SyntaxError: Bad escaped character in JSON at position 911476 (line 1 column 911477) at fetch_Fetch.fetchResponse (https://dashboards.observability-opensearch.backend.ppe.cloud/7749/bundles/core/core.entry.js:15:243032)

Analysis

What we know now:

  • The error is not silently caught, as we see it on the browser screen.

  • It's not from the innermost try-catch block that falls back to body = text; when JSON parsing fails.

  • The error message mentions fetch_Fetch.fetchResponse, which indicates it's happening within this fetchResponse method.

  • Given these points, the error is likely coming from one of these two lines:

    • The first is unlikely to cause a "Bad escaped character in JSON" error because it's not parsing JSON; it's just getting the raw blob data:
      body = await response.blob();
    • The second place is possible. Here, there are two possible paths: a) if withLongNumeralsSupport or withLongNumerals is true, it uses parse(await response.text()); b) otherwise, it uses await response.json().
      fetchOptions.withLongNumeralsSupport || fetchOptions.withLongNumerals
        ? parse(await response.text())
        : await response.json();
  • This method was not updated between 2.13 and 2.15, so it is not the method causing the issue.
  • One more callout: the reported positions (911476 here, and 2006464 and 1618143 in other reports) are quite large. This could indicate that the issue is related to how large responses are being handled. But OSD has a payload limit and we don't see complaints about the payload being too large, so it should not be truncation causing an invalid escaped character. We need more info on this.

Reproduce

Given the limited information (the escaped-character error, the Discover screenshot, and the location where the error is caught), I tried to reproduce the issue with a script that creates 500 documents with 11 fixed fields (10 specific fields plus the timestamp).
The content of each field is generated using functions that create various types of escape sequences, nested objects, arrays, number formats, etc.:

"standard_escapes": Covers all standard JSON escape characters
"unicode_escapes": Includes various Unicode escape sequences (control characters, surrogate pairs, emojis)
"escaped_unicode": Incorporates escaped Unicode sequences
"double_escaped": Uses double-escaped sequences
"invalid_escapes": Includes invalid escape sequences
"long_string": Generates very long strings with mixed escapes
"nested_object": Creates nested objects with various escapes
"array_with_escapes": Produces arrays with different types of escaped content
"number_fields": Includes number fields with very large integers, high-precision decimals, and special number formats
"mixed_field": Generates mixed fields combining text, numbers, and various escapes
"potential_problem_string": Creates a string that combines many challenging elements

generate.py

import json
import random
import string
import datetime

def generate_random_string(length):
    return ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(length))

def generate_long_number_as_string():
    return str(random.randint(10**50, 10**100))

def generate_random_float():
    return random.uniform(-1e100, 1e100)

def generate_all_standard_escapes():
    return json.dumps({
        "quotes": "\"Hello, World!\"",
        "backslash": "This is a backslash: \\",
        "forward_slash": "Forward slash: \/",
        "backspace": "Backspace:\b",
        "form_feed": "Form feed:\f",
        "newline": "Newline:\n",
        "carriage_return": "Carriage return:\r",
        "tab": "Tab:\t"
    })[1:-1]

def generate_unicode_escapes():
    control_chars = ''.join([f"\\u{i:04x}" for i in range(32)])
    surrogate_pairs = "\\uD83D\\uDE00"  # Smiling face emoji
    emojis = "\\u{1F600}\\u{1F64F}\\u{1F680}"  # Various emojis
    return f"{control_chars}{surrogate_pairs}{emojis}"

def generate_escaped_unicode():
    return "\\\\u00A9\\\\u00AE\\\\u2122"  # Copyright, Registered, Trademark symbols

def generate_double_escaped():
    return "\\\\\\\\u00A9\\\\\\\\n\\\\\\\\t"

def generate_invalid_escapes():
    return "\\a\\v\\0\\x\\u\\u123g"

def generate_very_long_string_with_escapes():
    base = "This is a long string with various escapes: "
    escapes = generate_all_standard_escapes() + generate_unicode_escapes()
    return base + escapes * 10

def generate_nested_object():
    return {
        "nested_escapes": generate_all_standard_escapes(),
        "nested_unicode": generate_unicode_escapes(),
        "nested_invalid": generate_invalid_escapes()
    }

def generate_array_with_escapes():
    return [
        generate_all_standard_escapes(),
        generate_unicode_escapes(),
        generate_escaped_unicode(),
        generate_invalid_escapes()
    ]

def generate_number_fields():
    return {
        "large_integer": generate_long_number_as_string(),
        "high_precision_decimal": f"{generate_random_float():.50f}",
        "special_format": f"{random.uniform(-1e100, 1e100):e}"
    }

def generate_mixed_field():
    return f"Text with number {generate_long_number_as_string()} and escapes: {generate_all_standard_escapes()}"

def generate_potential_problem_string():
    return (f"Problem string: {generate_all_standard_escapes()}{generate_unicode_escapes()}"
            f"{generate_escaped_unicode()}{generate_double_escaped()}{generate_invalid_escapes()}"
            f"{generate_long_number_as_string()}{generate_mixed_field()}")

def generate_document():
    return {
        "timestamp": datetime.datetime.now().isoformat(),
        "standard_escapes": generate_all_standard_escapes(),
        "unicode_escapes": generate_unicode_escapes(),
        "escaped_unicode": generate_escaped_unicode(),
        "double_escaped": generate_double_escaped(),
        "invalid_escapes": generate_invalid_escapes(),
        "long_string": generate_very_long_string_with_escapes(),
        "nested_object": generate_nested_object(),
        "array_with_escapes": generate_array_with_escapes(),
        "number_fields": generate_number_fields(),
        "mixed_field": generate_mixed_field(),
        "potential_problem_string": generate_potential_problem_string()
    }

# Generate and save 500 documents
documents = [generate_document() for _ in range(500)]

with open('test_documents.json', 'w') as f:
    for doc in documents:
        f.write(json.dumps({"index": {}}) + '\n')
        f.write(json.dumps(doc) + '\n')

print(f"Generated file: test_documents.json with 500 documents")
print("Use the following curl command to index these documents:")
print("curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/test_escape_index/_bulk' --data-binary '@test_documents.json'")

Then index it into a 2.15 OpenSearch cluster:

curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/test_escape_index/_bulk' --data-binary '@test_documents.json'

Create an index pattern using a plain 2.15 OSD, then open it in Discover; everything looks good:
[screenshot]

Fix / Discussion

Since I can't reproduce it, there are two ways to proceed:

  1. Add a try-catch block specifically around the JSON parsing part. If parsing fails, it logs the error and falls back to returning the raw text of the response. This approach prevents the parsing error from breaking the application, but it might also hide these errors and keep us from finding the true cause of the issue. As I mentioned earlier, this fetchResponse method did not change between 2.13 and 2.15, and we don't know what causes either parse or response.json to throw the error. If we can't tolerate the error breaking the UI, the fix is to change
 fetchOptions.withLongNumeralsSupport || fetchOptions.withLongNumerals
    ? parse(await response.text())
    : await response.json();

to

try {
  body = fetchOptions.withLongNumeralsSupport || fetchOptions.withLongNumerals
    ? parse(await response.text())
    : await response.json();
} catch (parseError) {
  console.error('Error parsing JSON response:', parseError);
  // Fallback to raw text
  body = await response.text();
}
  2. Keep this issue open to collect more information until we have a way to reproduce it.

ashwin-pc (Member) commented:

Thanks for the deep dive, Anan. Let's keep this open since we have more than one report of this issue. It looks like we will need at least one offending document to reproduce the error and understand what the issue is.

ananzh (Member) commented Jul 23, 2024

If we can't find the document, as in @agoerl's case, then sharing the index would also be very helpful. There are some weird things here:

  • large positions (911476, 2006464, and 1618143): there is no parsing issue until that position
  • we limit the payload size and don't truncate the response, so why would a large position have an issue?
  • or it's not due to the large position at all, but to a very specific, low-probability escaped character

We definitely need more info and help here. We really appreciate the info from @GSue53 and @kksaha and the details from @agoerl.

@agoerl if you see a persistent issue with one index, could you help us get more info?

agoerl commented Jul 24, 2024

@GSue53 @agoerl are there any other things you can provide that will help reproduce this on our end. If you faced this issue for a whole index, its likely that all documents in your index have the same issue. If you can provide a sample document from that index, that would be really useful

@ashwin-pc In my case the index collects logs from all Kubernetes pods in a specific environment. Those pods are quite different, i.e. they produce different logs in varying formats, so I feel that if I post a sample it will be misleading. Nonetheless, here is one document (with some parts redacted due to privacy/security concerns):

{ "_index": "app-au-containerlogs-2024.07.24", "_id": "BWBc45ABsQBQr2YguTRv", "_version": 1, "_score": null, "_source": { "container": { "image": { "name": "docker.io/opensearchproject/opensearch:2.15.0" }, "runtime": "containerd", "id": "d2e70983e2c09244a730f4a2824562d30f2e0ec405bc08ba4ffe77ef9b1a2299" }, "kubernetes": { "container": { "name": "opensearch" }, "node": { "uid": "21ebc5a1-216f-4879-b5fa-8bd74c9e1c74", "hostname": "aks-workers-19932621-vmss000002", "name": "aks-workers-19932621-vmss000002", "labels": { "kubernetes_azure_com/nodenetwork-vnetguid": "4438710a-3e5d-423a-83e6-70e3048f7ab3", "node-role_kubernetes_io/agent": "", "kubernetes_io/hostname": "aks-workers-19932621-vmss000002", "kubernetes_azure_com/node-image-version": "AKSUbuntu-2204gen2containerd-202405.03.0", "topology_kubernetes_io/region": "germanywestcentral", "kubernetes_azure_com/nodepool-type": "VirtualMachineScaleSets", "topology_disk_csi_azure_com/zone": "germanywestcentral-1", "agentpool": "workers", "kubernetes_io/arch": "amd64", "kubernetes_azure_com/podnetwork-subnet": "pods", "kubernetes_azure_com/cluster": "MC_app_app-au_germanywestcentral", "kubernetes_azure_com/mode": "system", "beta_kubernetes_io/instance-type": "standard_D4ads_v5", "failure-domain_beta_kubernetes_io/zone": "germanywestcentral-1", "kubernetes_azure_com/network-subnet": "aks", "beta_kubernetes_io/os": "linux", "kubernetes_azure_com/kubelet-identity-client-id": "dcbfacc7-8fde-548c-b922-163ec47462b1", "kubernetes_azure_com/podnetwork-subscription": "REDACTED", "beta_kubernetes_io/arch": "amd64", "kubernetes_azure_com/role": "agent", "kubernetes_azure_com/podnetwork-delegationguid": "4438710a-3e5d-423a-83e6-70e3048f7ab3", "kubernetes_azure_com/podnetwork-resourcegroup": "app", "topology_kubernetes_io/zone": "germanywestcentral-1", "kubernetes_azure_com/network-subscription": "REDACTED", "kubernetes_azure_com/os-sku": "Ubuntu", "failure-domain_beta_kubernetes_io/region": "germanywestcentral", "kubernetes_azure_com/network-resourcegroup": "app", "kubernetes_azure_com/podnetwork-name": "app-au", "kubernetes_io/role": "agent", "kubernetes_azure_com/network-name": "app-au", "node_kubernetes_io/instance-type": "standard_D4ads_v5", "kubernetes_azure_com/consolidated-additional-properties": "ec5bb2b9-2991-11ef-865b-669c675eec4a", "kubernetes_io/os": "linux", "kubernetes_azure_com/agentpool": "workers", "kubernetes_azure_com/podnetwork-type": "vnet" } }, "pod": { "uid": "2c109ff0-1679-4a87-93c8-4ee673d4ef86", "ip": "192.0.2.33", "name": "opensearch-nodes-0" }, "statefulset": { "name": "opensearch-nodes" }, "namespace": "opensearch", "namespace_uid": "9d1a5ddb-7d5a-41af-b7b3-f3f1a02a5494", "namespace_labels": { "kubernetes_io/metadata_name": "opensearch" }, "labels": { "controller-revision-hash": "opensearch-nodes-6d575878df", "opster_io/opensearch-nodepool": "nodes", "opster_io/opensearch-cluster": "opensearch", "opensearch_role": "cluster_manager", "statefulset_kubernetes_io/pod-name": "opensearch-nodes-0" } }, "agent": { "node": { "name": "aks-workers-19932621-vmss000002" }, "name": "filebeat-containerlogs-hlfbn", "id": "b054d4cc-2922-4b23-bc50-b47a996420a9", "ephemeral_id": "59e914b1-f9c4-4d76-b664-74c12aa91d83", "type": "filebeat", "version": "8.12.2" }, "log": { "file": { "path": "/var/log/containers/opensearch-nodes-0_opensearch_opensearch-d2e70983e2c09244a730f4a2824562d30f2e0ec405bc08ba4ffe77ef9b1a2299.log" }, "offset": 13622725 }, "message": " Executing attempt_transition_step for security-auditlog-2024.06.26", "tags": [ "containerlogs", 
"beats_input_codec_plain_applied" ], "input": { "type": "container" }, "@timestamp": "2024-07-24T06:10:20.574Z", "ecs": { "version": "8.0.0" }, "stream": "stdout", "ti": { "environment": "timp-au" }, "@version": "1", "host": { "name": "filebeat-containerlogs-hlfbn" }, "event": { "pipeline": "timp-containerlogs-opensearch", "node": "opensearch-nodes-0", "original": "[2024-07-24T06:10:20,574][INFO ][o.o.i.i.ManagedIndexRunner] [opensearch-nodes-0] Executing attempt_transition_step for security-auditlog-2024.06.26", "test": "2024-07-24T", "level": "INFO ", "module": "o.o.i.i.ManagedIndexRunner", "pipeline_info": "Provided Grok expressions do not match field value: [ Executing attempt_transition_step for security-auditlog-2024.06.26]" }, "message2": " Executing attempt_transition_step for security-auditlog-2024.06.26" }, "fields": { "@timestamp": [ "2024-07-24T06:10:20.574Z" ] }, "sort": [ 1721801420574 ] }

agoerl commented Jul 24, 2024

Also what shows up on your network tab when this request occurs? Im looking particularly for the request and response payload and url for the request

Request Headers:

POST /internal/search/opensearch-with-long-numerals HTTP/2 Host: opensearch.env.dom.ain User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) Gecko/20100101 Firefox/126.0 Accept: */* Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate, br, zstd Referer: https://opensearch.env.dom.ain/app/data-explorer/discover Content-Type: application/json osd-version: 2.15.0 osd-xsrf: osd-fetch Content-Length: 1608 Origin: https://opensearch.env.dom.ain DNT: 1 Sec-GPC: 1 Connection: keep-alive Cookie: security_authentication_oidc1=REMOVED; security_authentication=REMOVED Sec-Fetch-Dest: empty Sec-Fetch-Mode: cors Sec-Fetch-Site: same-origin Priority: u=4 TE: trailers
Request:

{"params":{"index":"app-*-containerlogs-*","body":{"sort":[{"@timestamp":{"order":"desc","unmapped_type":"boolean"}}],"size":500,"version":true,"aggs":{"2":{"date_histogram":{"field":"@timestamp","fixed_interval":"30s","time_zone":"Europe/Berlin","min_doc_count":1}}},"stored_fields":["*"],"script_fields":{},"docvalue_fields":[{"field":"@timestamp","format":"date_time"},{"field":"event.@timestamp","format":"date_time"},{"field":"event.date","format":"date_time"},{"field":"falco.time","format":"date_time"},{"field":"filebeat.@timestamp","format":"date_time"},{"field":"filebeat.system_info.build.time","format":"date_time"},{"field":"filebeat.system_info.host.boot_time","format":"date_time"},{"field":"filebeat.system_info.process.start_time","format":"date_time"},{"field":"postgres.build.Date","format":"date_time"},{"field":"postgres.endTime","format":"date_time"},{"field":"postgres.startTime","format":"date_time"},{"field":"postgres.ts","format":"date_time"},{"field":"timestamp","format":"date_time"},{"field":"trivy.@timestamp","format":"date_time"},{"field":"trivy.created_at","format":"date_time"},{"field":"trivy.last_match","format":"date_time"}],"_source":{"excludes":[]},"query":{"bool":{"must":[],"filter":[{"match_all":{}},{"range":{"@timestamp":{"gte":"2024-07-24T06:03:09.149Z","lte":"2024-07-24T06:18:09.149Z","format":"strict_date_optional_time"}}}],"should":[],"must_not":[]}},"highlight":{"pre_tags":["@opensearch-dashboards-highlighted-field@"],"post_tags":["@/opensearch-dashboards-highlighted-field@"],"fields":{"*":{}},"fragment_size":2147483647}},"preference":1721801869806}

Response Headers:

HTTP/2 200 date: Wed, 24 Jul 2024 06:18:10 GMT content-type: application/json; charset=utf-8 osd-name: opensearch-dashboards cache-control: private, no-cache, no-store, must-revalidate set-cookie: security_authentication=REMOVED; Secure; HttpOnly; Path=/ vary: accept-encoding content-encoding: gzip strict-transport-security: max-age=31536000; includeSubDomains X-Firefox-Spdy: h2

I am unsure about the response. It is very long and has been truncated already in the browser to 1MB.

ashwin-pc (Member) commented:

Thanks @agoerl! Let me see if I can reproduce this with 2.15.

GSue53 commented Jul 25, 2024

Hi @ashwin-pc, the snippet from @agoerl looks quite similar to my error. I can also provide more data if necessary.

ananzh (Member) commented Jul 25, 2024

I don't see any problematic escape characters in the provided sample document; all the JSON appears to be well-formed.

From the response, it seems we do have truncated data. Let's use the sample data to explore possible truncation points and make up some test data:

1. Truncate after a backslash in a string:

const mockResponseBody = '{ "kubernetes": { "node": { "name": "aks-workers-19932621-vmss000002\\';

2. Truncate in the middle of a Unicode escape sequence:

const mockResponseBody = '{ "kubernetes": { "node": { "labels": { "kubernetes_azure_com/nodenetwork-vnetguid": "4438710a-3e5d-423a-83e6-70e3048f7ab3\\u';

3. Create an invalid Unicode escape sequence:

const mockResponseBody = '{ "kubernetes": { "node": { "name": "aks-workers-19932621-vmss000002\\u00ZZ" } }';

4. Truncate after an escape character in a string:

const mockResponseBody = '{ "message": "Executing attempt_transition_step for security-auditlog-2024.06.26\\';

5. Create an invalid escape sequence:

const mockResponseBody = '{ "kubernetes": { "node": { "name": "aks-workers-19932621-vmss000002\\x" } }';

6. Truncate in the middle of a hex escape sequence:

const mockResponseBody = '{ "kubernetes": { "node": { "name": "aks-workers-19932621-vmss000002\\x2';

7. Create an incomplete escaped quote:

const mockResponseBody = '{ "kubernetes": { "node": { "name": "aks-workers-19932621-vmss000002\\"';

8. Introduce an unescaped control character:

const mockResponseBody = '{ "kubernetes": { "node": { "name": "aks-workers-19932621-vmss000002\n" } }';

9. Create an invalid escape in a key:

const mockResponseBody = '{ "kubernetes": { "no\\de": { "name": "aks-workers-19932621-vmss000002" } }';

10. Introduce a lone surrogate:

const mockResponseBody = '{ "kubernetes": { "node": { "name": "aks-workers-19932621-vmss000002\\uD800" } }';

Results:
1. SyntaxError: Unexpected end of JSON input
2. SyntaxError: Bad Unicode escape in JSON at position 124 (line 1 column 125)
3. SyntaxError: Bad Unicode escape in JSON at position 124 (line 1 column 125)
4. SyntaxError: Unexpected end of JSON input
5. SyntaxError: Bad escaped character in JSON at position 69 (line 1 column 70)
6. SyntaxError: Bad escaped character in JSON at position 69 (line 1 column 70)
7. SyntaxError: Unterminated string in JSON at position 70 (line 1 column 71)
8. SyntaxError: Bad control character in string literal in JSON at position 68 (line 1 column 69)
9. SyntaxError: Bad escaped character in JSON at position 22 (line 1 column 23)
10. SyntaxError: Expected ',' or '}' after property value in JSON at position 79 (line 1 column 80)

We see that cases 5, 6, and 9 report SyntaxError: Bad escaped character in JSON. The key difference between these cases and those reporting "Unexpected end of JSON input" is that they contain invalid escape sequences rather than just being truncated.

In case 5, \x is an incomplete hexadecimal escape sequence.
In case 6, \x2 is also an incomplete hexadecimal escape sequence.
In case 9, \d is not a valid escape sequence in JSON.

The JSON parser is encountering these invalid escape sequences and reporting them as "Bad escaped character" errors before it reaches the end of the input. This is different from a simple truncation where the parser reaches the end of the input unexpectedly.

Again, the current response has status 200, so we don't get much info from it. These are just possible guesses based on a brief look at the sample data.

ananzh (Member) commented Jul 25, 2024

@GSue53 Thank you for your assistance so far. We've made some progress in analyzing potential causes, but we're currently at a point where we need more specific information to pinpoint the exact issue. If possible, could you help us identify the specific document that's triggering the error?

While we've explored various scenarios that could potentially cause this problem, without seeing the actual problematic data we can't be certain that our proposed solutions will address the root cause. Any additional details you can provide about the exact document or context where the error occurs would be very helpful in ensuring we implement an effective fix.

matthias-prog commented:

@ananzh I have managed to find a very small excerpt of data that is causing issues. I have extracted the log message from the events, as that seems the most likely data to cause problems. Let me know if you can find anything in it. If not, I will look into redacting the internal data from the raw JSON taken directly from the API.
extract-redacted.json

JannikBrand commented Jul 26, 2024

Same issue occurred on our side:
Version: v2.15.0

SyntaxError: Bad escaped character in JSON at position 843603 (line 1 column 843604)
    at fetch_Fetch.fetchResponse (https://<endpoint_placeholder>/7749/bundles/core/core.entry.js:15:243032)
    at async interceptResponse (https://<endpoint_placeholder>/7749/bundles/core/core.entry.js:15:237932)
    at async https://<endpoint_placeholder>/7749/bundles/core/core.entry.js:15:240899

=> For me this happens depending on the time window that is used for the search:

When using absolute dates: e.g. if the end timestamp is Jul 25, 2024 @ 15:54:00.000 then the error occurs. When I change (increase!) it to e.g. Jul 25, 2024 @ 15:54:01.000 it works again.


Edit: This was the case because the problematic document was no longer within the first 500 results.

matthias-prog commented:

I have encountered another occurrence of this error and have been able to narrow down the cause. The commonality between the two instances seems to be escape sequences for colored console output.
extract.json

JannikBrand commented:

@christiand93 and I can confirm the last comment:
On our side, we narrowed the problem down to a single document. It contains the following field:
"msg": "\x1b[0mPOST /oauth/authorize \x1b[32m200\x1b[0m 95.542 ms - 1499\x1b[0m"

When passing this string (surrounded by curly braces) through a JavaScript JSON parser, it results in the following error:
SyntaxError: Bad escaped character in JSON at position 9 (line 1 column 10)
This indicates a parsing error for the "\x1b[0m" color code.
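For reference, a minimal standalone Node.js reproduction of that parser behaviour (the exact error text and position vary by JS engine, and this is not the OSD code path):

// JSON defines no \x escape, so a strict parser rejects the value as soon as it reaches it.
const raw = '{"msg": "\\x1b[0mPOST /oauth/authorize \\x1b[32m200\\x1b[0m 95.542 ms - 1499\\x1b[0m"}';
try {
  JSON.parse(raw);
} catch (e) {
  console.error(`${e.name}: ${e.message}`);
  // e.g. SyntaxError: Bad escaped character in JSON at position 10
}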

agoerl commented Jul 30, 2024

I can confirm that we have the same kind of logs. I will try to isolate those as well.

ananzh (Member) commented Aug 6, 2024

Hi @agoerl, @JannikBrand and @Christian93, both @LDrago27 and I tried several ways of indexing this. For example, I tried using the Bulk API:

#!/bin/bash

# Create a temp file for bulk data
cat << EOF > bulk_data.ndjson
{"index":{"_index":"test_index","_id":"1"}}
{"msg": "\x1b[0mPOST /oauth/authorize \x1b[32m200\x1b[0m 95.542 ms - 1499\x1b[0m"}
{"index":{"_index":"test_index","_id":"2"}}
{"msg": "This is a normal message without escape sequences"}
EOF

# Use curl to send the bulk request to OS
curl -H "Content-Type: application/x-ndjson" -XPOST "http://localhost:9200/_bulk" --data-binary "@bulk_data.ndjson"

I got this error

./test_bulk_index.sh
{"took":8,"errors":true,"items":[{"index":{"_index":"test_index","_id":"1","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse field [msg] of type [text] in document with id '1'. Preview of field's value: ''","caused_by":{"type":"json_parse_exception","reason":"Unrecognized character escape 'x' (code 120)\n at [Source: REDACTED (`StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION` disabled); line: 1, column: 11]"}}}},{"index":{"_index":"test_index","_id":"2","_version":2,"result":"updated","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":1,"_primary_term":1,"status":200}}]}

This result indicates that even the Bulk API is not permissive enough to allow the \x escape sequences in JSON.
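(For reference, a standalone check rather than part of the report above: the rejection is about the literal two-character sequence \x in the request body; the ESC control character itself is legal in a JSON string when the serializer emits the standard \u001b escape.)

// A standard JSON serializer escapes the raw ESC character (U+001B) as \u001b, which is valid JSON.
const body = JSON.stringify({
  msg: '\u001b[0mPOST /oauth/authorize \u001b[32m200\u001b[0m 95.542 ms - 1499\u001b[0m',
});
console.log(body);
// {"msg":"\u001b[0mPOST /oauth/authorize \u001b[32m200\u001b[0m 95.542 ms - 1499\u001b[0m"}
console.log(JSON.parse(body).msg.charCodeAt(0)); // 27 - the ESC character survives the round trip

A bulk body built this way should index without the mapper_parsing_exception above.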

LDrago27 (Collaborator) commented Aug 6, 2024

Hi @agoerl, @JannikBrand and @Christian93. Adding a bit more detail about our testing of the above bug.
We tried indexing the document you provided across the 2.11, 2.13, and 2.15 OpenSearch releases, using OSD's Dev Tools as well as hitting the OpenSearch endpoints directly.

However, we were unable to index the document. The API calls we tried are listed below for reference.

curl --location --request PUT 'http://localhost:9200/sample-index-2.11/_doc/1' \
  --header 'Content-Type: application/json' \
  --data '{ "msg": "\x1b[0mPOST /oauth/authorize \x1b[32m200\x1b[0m 95.542 ms - 1499\x1b[0m" }'

Response:
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"json_parse_exception","reason":"Unrecognized character escape 'x' (code 120)\n at [Source: (byte[])\"{\n \"msg\": \"\\x1b[0mPOST /oauth/authorize \\x1b[32m200\\x1b[0m 95.542 ms - 1499\\x1b[0m\"\n}\"; line: 2, column: 15]"}},"status":400}

Bulk API call:
curl --location 'http://localhost:9200/_bulk' \
  --header 'Content-Type: application/json' \
  --data '{"index":{"_index":"test_index","_id":"1"}}
{"msg": "\x1b[0mPOST /oauth/authorize \x1b[32m200\x1b[0m 95.542 ms - 1499\x1b[0m"}
{"index":{"_index":"test_index","_id":"2"}}
{"msg": "This is a normal message without escape sequences"}
'

Response:
{"took":126,"errors":true,"items":[{"index":{"_index":"test_index","_id":"1","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"json_parse_exception","reason":"Unrecognized character escape 'x' (code 120)\n at [Source: (byte[])\"{\"msg\": \"\\x1b[0mPOST /oauth/authorize \\x1b[32m200\\x1b[0m 95.542 ms - 1499\\x1b[0m\"}\"; line: 1, column: 12]"}}}},{"index":{"_index":"test_index","_id":"2","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":0,"_primary_term":1,"status":201}}]}

Overall, we were unable to index the document you provided. To investigate this issue we will need your help to understand how you indexed this document on 2.13 or an older version, and it would also be great if you could share the index mapping for that index on 2.13. You can use the following link to get the command for obtaining the index mapping: https://opensearch.org/docs/latest/field-types/

Regards,
cc. @ashwin-pc @ananzh

JannikBrand commented:

@LDrago27 & @ananzh: Unfortunately, we just observed such data already being inside the OpenSearch clusters and could not reproduce it so far. Maybe @matthias-prog knows more? If not, I would still try to find out how such data got indexed in our clusters.

We tried escaping the string as follows during indexing. However, this just resulted in the document being indexed and searched successfully in the Discover view:
"\\x1b[0mPOST /oauth/authorize \\x1b[32m200\\x1b[0m 95.542 ms - 1499\\x1b[0m"

Regarding the field mapping, this is the one for the field in our case:

"msg": {
    "type": "text"
},

matthias-prog commented:

@JannikBrand I have also been unsuccessful in indexing a sample of our problematic data manually.
Maybe the behaviour of the JSON parser changes with bigger amounts of data? We are using Vector 0.39.0 to index our log data; Vector indexes about 10000 documents at a time using the Bulk API with the create action.
We have been able to work around this issue using Vector's strip_ansi_escape_codes function (at least for new logs).
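For what it's worth, a minimal sketch of that stripping approach outside of Vector (the regex and function name are illustrative, not Vector's API):

// Remove common ANSI color sequences (ESC '[' ... 'm') from a message before it is shipped.
function stripAnsiColors(message) {
  return message.replace(/\u001b\[[0-9;]*m/g, '');
}

const colored = '\u001b[0mPOST /oauth/authorize \u001b[32m200\u001b[0m 95.542 ms - 1499\u001b[0m';
console.log(stripAnsiColors(colored));
// -> POST /oauth/authorize 200 95.542 ms - 1499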

JannikBrand commented Aug 9, 2024

There is a huge ingest volume for our OS cluster as well. Unfortunately, we still could not figure it out either.

Not solving the real problem, but a possible workaround might be to use Unicode escape sequences on the sender side instead, which results in the documents being indexed and searched successfully:

\u001b[0mPOST /oauth/authorize \u001b[32m200\u001b[0m 95.542 ms - 1499\u001b[0m

fcuello-fudo commented:

Same here with version 2.16.0. We can help debug as well.

taltsafrirpx commented:

We also have this bug with version 2.16

maciejr-gain commented Aug 30, 2024

We have the same issue.
Any chance of fixing that in 2.17?

ashwin-pc (Member) commented:

@taltsafrirpx yes, we could use your help debugging the issue. We don't have a way of reproducing the issue on our end, and without that it's going to be hard for us to debug. Any help with reproduction would be appreciated.

wjordan commented Sep 26, 2024

I believe #6915 introduced this bug. Specifically, the new dependency on the json11 node module and the call to JSON11.stringify produces JSON5 documents that are not entirely compatible with JSON.parse used elsewhere in this project.

Here's a minimal demo of the underlying issue, using a "hello world" message with the second word wrapped in an ANSI escape code marking the color green.

Using standard JSON.stringify round-trips the original text without issue:

$ printf 'hello \u001b[38;5;2mworld\u001b[0m' |
  node -e "console.log(JSON.parse(JSON.stringify(require('fs').readFileSync(0, 'utf-8'))))"
hello world

However, using JSON11.stringify converts the ASCII escape character to \x1b (a valid JSON5 hexadecimal escape, but not valid JSON, which only supports the \u001b Unicode escape), producing an error in JSON.parse:

$ printf 'hello \u001b[38;5;2mworld\u001b[0m' |
  node -e "console.log(JSON.parse(require('json11').stringify(require('fs').readFileSync(0, 'utf-8'), {quote: '\"'})))"
undefined:1
"hello \x1b[38;5;2mworld\x1b[0m"
        ^

SyntaxError: Bad escaped character in JSON at position 8 (line 1 column 9)
    at JSON.parse (<anonymous>)

[...]
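For illustration, a minimal sketch of the conversion idea behind the commits referenced below - rewriting the JSON5-only \xNN escapes into JSON-compatible \u00NN before JSON.parse sees the text. This is a naive regex pass for demonstration, not the actual patch (a real fix must avoid rewriting a literal backslash followed by x, i.e. "\\x" inside an already-escaped string):

// Demonstration only: map \xNN (JSON5 hex escape) to \u00NN (valid JSON).
function hexEscapesToUnicode(text) {
  return text.replace(/\\x([0-9A-Fa-f]{2})/g, '\\u00$1');
}

const json5ish = '"hello \\x1b[38;5;2mworld\\x1b[0m"'; // the output JSON11.stringify produced above
console.log(JSON.parse(hexEscapesToUnicode(json5ish)));
// prints "hello world" with the ANSI color escape characters restored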

wjordan added a commit to wjordan/OpenSearch-Dashboards that referenced this issue Sep 26, 2024
JSON11.stringify produces JSON5 documents with hex escape codes (`\x1b`),
which aren't standard JSON and cause `JSON.parse` to error.
When using JSON11, replace all `\xXX` escape codes with the JSON-compatible
equivalent Unicode escape codes (`\u00XX`).

Fixes opensearch-project#7367.
wjordan added a commit to wjordan/opensearch-js that referenced this issue Sep 26, 2024
JSON11.stringify produces JSON5 documents with hex escape codes (`\x1b`),
which aren't standard JSON and cause `JSON.parse` to error.
When using JSON11, replace all `\xXX` escape codes with the JSON-compatible
equivalent Unicode escape codes (`\u00XX`).

Fixes opensearch-project/OpenSearch-Dashboards#7367.
wjordan added a commit to wjordan/opensearch-js that referenced this issue Sep 26, 2024
JSON11.stringify produces JSON5 documents with hex escape codes (`\x1b`),
which aren't standard JSON and cause `JSON.parse` to error.
When using JSON11, replace all `\xXX` escape codes with the JSON-compatible
equivalent Unicode escape codes (`\u00XX`).

Fixes opensearch-project/OpenSearch-Dashboards#7367.

Signed-off-by: Will Jordan <[email protected]>
damslo commented Sep 27, 2024

Hello,
We ran into the same issue with our cluster after upgrading from 2.14 to 2.15.

Workaround:

  1. Find which message is causing issues:
    • upgrade dashboards to 2.15
    • find a timestamp where you can display data, for example:
      [screenshot]
    • increase the time range until you get the error again; this way you will be able to find the exact timestamp, in my case:
      [screenshot]
    • downgrade dashboards
    • analyze events at the timestamp you found earlier - this way there should be fewer messages to go through.
    • example message that breaks my cluster:

[screenshot]
At first glance it seems like a normal message field, but if you go to the JSON view:

[screenshot]

  2. Delete those docs:
    Example query:

    GET /index_name/_search
    {
      "query": {
        "wildcard": {
          "@message": "*\u001b*"
        }
      }
    }

    Make sure it finds only the messages you want to delete.
    You can reindex those docs to a different index in a different index pattern so you don't lose them completely, and then use
    delete by query on your main indices.
    Making a query with those special characters can sometimes be tricky; it may depend on your analyzer/tokenizer
    settings etc.

  3. Create a Logstash filter to prevent such messages from being ingested in the future; you can drop the entire message or just
    remove the escape characters.
    This is how it might work, however I didn't test it yet:
    filter {
      if [@metadata][_index] =~ /^index-name.*/ {
        mutate {
          gsub => [
            "@message", "\\u001b", ""  # This removes only the escape character
          ]
        }
      }
    }

wjordan commented Sep 27, 2024

I can confirm that a dev build that includes #8355 and opensearch-project/opensearch-js#879 fixes this bug in my deployment.

wjordan added a commit to wjordan/OpenSearch-Dashboards that referenced this issue Sep 27, 2024
JSON11.stringify produces JSON5 documents with hex escape codes (`\x1b`),
which aren't standard JSON and cause `JSON.parse` to error.
When using JSON11, replace all `\xXX` escape codes with the JSON-compatible
equivalent Unicode escape codes (`\u00XX`).

Partially addresses opensearch-project#7367.

Signed-off-by: Will Jordan <[email protected]>
AMoo-Miki (Collaborator) commented:

@wjordan can you try bumping JSON11 to 2, without the changes you had in mind, to see if it solves it for you? To have the JS client get json11@2.0.0, you can add a resolution to OSD and then do a bootstrap.
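For anyone who wants to try that locally, a rough sketch of such a resolution in OSD's package.json (the glob key and version below are assumptions based on this thread, not copied from the repo); after adding it, re-run the bootstrap (e.g. yarn osd bootstrap) so the bundled JS client picks up json11 2.x:

"resolutions": {
  "**/json11": "^2.0.0"
}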

entrop-tankos commented:

I can confirm that bumping json11 to 2.0.0 fixes the issue. I rebuilt 2.16.0 and the error is gone.

AMoo-Miki (Collaborator) commented Oct 3, 2024

Thanks @entrop-tankos; I will PR it on the JS client and OSD.

ananzh added and removed the discover for discover reinvent label Oct 11, 2024
ananzh removed their assignment Oct 11, 2024
donnergid commented:

Are there any timelines for a solution to this issue?

ashwin-pc (Member) commented:

@donnergid this should be available in the next release, 2.18.

taltsafrirpx commented:

We have seen this bug in a lot of versions of opensearch-dashboards, so we are staying on version 2.8.
Is this going to be fixed in versions other than 2.18?
