[BUG] - OpenSearch Dashboard V2.15.0 - JSON.parse: bad escaped character #7367
Comments
@kksaha Can you check (maybe in dev tools) which API call causes this? It's likely a server error that's not being parsed properly. |
@dblock Other than launching the Discover tab with the default index, no specific API is being called. It appears that this problem has also been encountered by other users https://forum.opensearch.org/t/json-parse-bad-escaped-character/20211/6 |
I am just trying to route this somewhere. I will move this to Dashboards for now. If you can, help narrow down where the error is coming from (it's caused by parsing something; what is it trying to parse?). |
The error is likely in this section of the code, where the long numeral JSON parser cannot parse your JSON object. However, it has a fallback mechanism, and the fact that the error message does not name the parser (JSON11) makes this issue harder to debug. To reproduce this issue on our side, can you help find the document causing it? You can:
P.S. I can't see the same issue on the playground, which is running 2.15, using the sample datasets, which is why having that response or document is needed to root-cause the issue here: Ref |
I have the same issue. I can narrow it down to an index but not to a single document (yet). Furthermore, I don't think this is possible: when the error appears, no documents are displayed anymore. After several refreshes the error eventually disappears and the documents reappear. Let me visualize. It looks like this after e.g. three refresh operations (so one error message per refresh): ...and normal again after some more refreshes... If I now immediately increase the window from 15 minutes to 30 minutes, I would expect to see that error again, but this is not the case. So how would one ever find the document responsible? |
same here |
Also, what shows up in your network tab when this request occurs? I'm looking particularly for the request and response payloads and the URL of the request |
Thanks for the deep dive, Anan. Let's keep this open since we have more than one report of this being an issue. Looks like we will need at least one offending document to reproduce the error and understand what the issue is. |
If we can't find the document, as in @agoerl's case, then sharing the index would also be very helpful. There are some weird things here:
Definitely need more info and help here. We really appreciate the current info from @GSue53 and @kksaha and the details from @agoerl. @agoerl, if you can see a persistent issue from one index, could you help us get more info? |
@ashwin-pc In my case the index collects logs from all Kubernetes pods in a specific environment. Those pods are quite different, i.e. they produce different logs in varying formats. I feel that if I post a sample it will be misleading. Nonetheless, here is one document (with some parts redacted due to privacy/security concerns):
|
Request Headers:
Response Headers:
I am unsure about the response. It is very long and has been truncated already in the browser to 1MB. |
Thanks @agoerl! Let me see if I can reproduce this with 2.15 |
Hi @ashwin-pc, the snippet from @agoerl looks quite similar to my error. I can also provide more data if necessary. |
From the sample document, I don't see any problematic escape characters; all the JSON appears to be well-formed. From the response, it seems we do have truncated data. Let's use the sample data, explore possible truncation points, and make up some data: 1. Truncate after a backslash in a string:
2. Truncate in the middle of a Unicode escape sequence:
3. Create an invalid Unicode escape sequence:
4. Truncate after an escape character in a string:
5. Create an invalid escape sequence:
6. Truncate in the middle of a hex escape sequence:
7. Create an incomplete escaped quote:
8. Introduce an unescaped control character:
9. Create an invalid escape in a key:
10. Introduce a lone surrogate:
Results: We see cases 5, 6, and 9 report errors. In case 5, \x is an incomplete hexadecimal escape sequence. The JSON parser encounters these invalid escape sequences and reports them as "Bad escaped character" errors before it reaches the end of the input. This is different from a simple truncation, where the parser reaches the end of the input unexpectedly. Also, the current response is a status 200, so we don't get much info from it. These are just possible guesses based on a brief look at the sample data. |
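The failing cases above can be reproduced in a few lines of JavaScript (a sketch; the sample strings are made up, not taken from the affected index):

```javascript
// Illustration of cases 5 and 6 above: an invalid or truncated escape
// sequence makes JSON.parse throw a SyntaxError ("Bad escaped character"
// or similar) before it reaches the end of the input.
const samples = {
  invalidHexEscape: '{"log":"line1\\x1b[0m"}', // case 5: \x is not legal JSON
  truncatedUnicode: '{"log":"line1\\u00',      // case 6: cut mid-escape
};

for (const [name, text] of Object.entries(samples)) {
  try {
    JSON.parse(text);
    console.log(name, 'parsed OK');
  } catch (e) {
    console.log(name, '->', e.name); // SyntaxError in both cases
  }
}
```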
@GSue53 Thank you for your assistance so far. We've made some progress in analyzing potential causes, but we're currently at a point where we need more specific information to pinpoint the exact issue. If possible, could you help us identify the specific document that's triggering the error? While we've explored various scenarios that could potentially cause this problem, without seeing the actual problematic data we can't be certain that our proposed solutions will address the root cause. Any additional details you can provide about the exact document or context where the error occurs would be very helpful in ensuring we implement an effective fix. |
@ananzh I have managed to find a very small excerpt of data that is causing issues. I have extracted the log message from the events, as that seems the most likely data to cause issues. Let me know if you can find anything in that data. If not I will look into redacting the internal data from the raw json directly from the api. |
Same issue occurred on our side:
=> For me this happens depending on the time window used for the search. When using absolute dates: e.g. if the end timestamp is Edit: This was the case because the problematic document was no longer within the first 500 results. |
I have encountered another occurrence of this error and have been able to narrow down the cause. The commonality between the two instances seems to be escape sequences for colored console output. |
@christiand93 and I can confirm the last comment: when passing this string (surrounded by curly brackets) through a JavaScript JSON parser, it results in the following error: |
I can confirm that we have the same kind of logs. I will try to isolate those as well. |
Hi @agoerl, @JannikBrand and @Christian93, both @LDrago27 and I tried several ways to index the document. For example, I tried using the bulk API
I got this error
This result indicates that even the Bulk API is not permissive enough to allow the \x escape sequences in JSON. |
Hi @agoerl, @JannikBrand and @Christian93. Adding a bit more detail about our testing of the bug above. We were unable to index the document. Adding the API calls that we tried for reference below.
Response: Bulk API Call: Response: Overall, we were unable to index the document you had provided. To investigate this issue we will need your help to understand how you indexed this document in version 2.13 or older, and it would also be great if you could share the index mapping for the index in version 2.13. You can use the following link https://opensearch.org/docs/latest/field-types/ to get the command for obtaining the index mapping. Regards,
@LDrago27 & @ananzh: Unfortunately, we just observed such data already being inside the OpenSearch clusters and could not reproduce it so far. Maybe @matthias-prog knows more? If not, I would still try to find out how such data got indexed in our clusters. We tried to escape the string like the following during indexing. However, this just resulted in successfully indexing and searching the document in Discover view: Regarding the field mapping, this is the one for the field in our case:
|
@JannikBrand I also have been unsuccessful in indexing a sample of our problematic data manually. |
There is a huge ingest volume for our OS cluster as well, and unfortunately we still could not figure it out either. This doesn't solve the real problem, but a possible workaround might be to use Unicode escape sequences on the sender side instead, which results in successfully indexing and searching the documents.
|
Same here with version 2.16.0. We can help debug as well. |
We also have this bug with version 2.16 |
We have the same issue. |
@taltsafrirpx Yes, we could use your help debugging the issue. We don't have a way of reproducing the issue on our end, and without that it's going to be hard for us to debug. Any help with reproduction would be appreciated |
I believe #6915 introduced this bug, specifically the new dependency on the JSON11 library. Here's a minimal demo of the underlying issue, using a "hello world" message with the second word wrapped in an ANSI escape code marking the color green. Using standard JSON.stringify:
However, using
|
JSON11.stringify produces JSON5 documents with hex escape codes (`\x1b`), which aren't standard JSON and cause `JSON.parse` to error. When using JSON11, replace all `\xXX` escape codes with the JSON-compatible equivalent Unicode escape codes (`\u00XX`). Fixes opensearch-project#7367.
I can confirm that a dev build that includes #8355 and opensearch-project/opensearch-js#879 fixes this bug in my deployment. |
JSON11.stringify produces JSON5 documents with hex escape codes (`\x1b`), which aren't standard JSON and cause `JSON.parse` to error. When using JSON11, replace all `\xXX` escape codes with the JSON-compatible equivalent Unicode escape codes (`\u00XX`). Partially addresses opensearch-project#7367. Signed-off-by: Will Jordan <[email protected]>
@wjordan Can you try bumping JSON11 to 2, without the changes you had in mind, to see if it solves it for you? To have the JS client get [email protected], you can add a resolution to OSD and then do a bootstrap. |
I can confirm that bumping json11 to 2.0.0 fixes the issue. I made a rebuild of 2.16.0, and the error is gone. |
Thanks @entrop-tankos; I will PR it on the JS client and OSD. |
Are there any timelines for a solution for this issue? |
@donnergid this should be available in the next release. 2.18 |
We have seen this bug in a lot of versions of opensearch-dashboards, so we are staying on version 2.8 |
Describe the bug
After upgrading from 2.13 to 2.15, Discover in Dashboards is barely usable due to the following error.
SyntaxError: Bad escaped character in JSON at position 911476 (line 1 column 911477)
    at fetch_Fetch.fetchResponse (https://dashboards.observability-opensearch.backend.ppe.cloud/7749/bundles/core/core.entry.js:15:243032)
    at async interceptResponse (https://dashboards.observability-opensearch.backend.ppe.cloud/7749/bundles/core/core.entry.js:15:237932)
    at async https://dashboards.observability-opensearch.backend.ppe.cloud/7749/bundles/core/core.entry.js:15:240899
Related component
Search
To Reproduce
Upgrade to version 2.15.0 and the error will appear in Discover.
Expected behavior
See the following error in discover
SyntaxError: Bad escaped character in JSON at position 911476 (line 1 column 911477)
Additional Details
Someone already reported this issue in OpenSearch forum https://forum.opensearch.org/t/json-parse-bad-escaped-character/20211