
Negative longs unsupported, use writeLong or writeZLong for negative numbers #62087

Closed
srikwit opened this issue Sep 8, 2020 · 5 comments

Labels: >bug, :Data Management/Ingest Node (Execution or management of Ingest Pipelines including GeoIP), Team:Data Management (Meta label for data/management team)

Comments

@srikwit

srikwit commented Sep 8, 2020

Elasticsearch version (bin/elasticsearch --version): 7.9.1

Plugins installed: [Custom]

JVM version (java -version):
openjdk version "14.0.1" 2020-04-14
OpenJDK Runtime Environment AdoptOpenJDK (build 14.0.1+7)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 14.0.1+7, mixed mode, sharing)

OS version (uname -a if on a Unix-like system): Linux 4.15.0-76-generic

Description of the problem including expected versus actual behavior:

Steps to reproduce:

I have been observing this error since upgrading the cluster to 7.9.1.

Provide logs (if relevant):

java.lang.IllegalStateException: Negative longs unsupported, use writeLong or writeZLong for negative numbers [-1000]
        at org.elasticsearch.common.io.stream.StreamOutput.writeVLong(StreamOutput.java:301) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.ingest.IngestStats$Stats.writeTo(IngestStats.java:197) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.ingest.IngestStats.writeTo(IngestStats.java:103) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.common.io.stream.StreamOutput.writeOptionalWriteable(StreamOutput.java:952) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.action.admin.cluster.node.stats.NodeStats.writeTo(NodeStats.java:290) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.transport.OutboundMessage.writeMessage(OutboundMessage.java:87) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.transport.OutboundMessage.serialize(OutboundMessage.java:64) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.transport.OutboundHandler$MessageSerializer.get(OutboundHandler.java:159) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.transport.OutboundHandler$MessageSerializer.get(OutboundHandler.java:144) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.transport.OutboundHandler$SendContext.get(OutboundHandler.java:197) ~[elasticsearch-7.9.1.jar:7.9.1]
        at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler$WriteOperation.buffer(Netty4MessageChannelHandler.java:213) ~[transport-netty4-client-7.9.1.jar:7.9.1]
        at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.doFlush(Netty4MessageChannelHandler.java:147) ~[transport-netty4-client-7.9.1.jar:7.9.1]
        at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.flush(Netty4MessageChannelHandler.java:117) ~[transport-netty4-client-7.9.1.jar:7.9.1]
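For context on the error message: writeVLong uses a variable-length encoding that assumes a non-negative value, so a negative ingest-stats counter (here -1000) is rejected outright, while writeZLong applies a zig-zag transform that keeps small negative values compact. The sketch below is a simplified, hypothetical illustration of the two behaviors, not the actual Elasticsearch StreamOutput code:

```java
// Hypothetical sketch of the two encodings named in the error message.
// NOT the Elasticsearch implementation; just illustrates why a negative
// stats counter breaks writeVLong but would survive writeZLong.
public class VLongSketch {

    // writeVLong-style precondition: the variable-length encoding assumes
    // a non-negative value, so negatives are rejected up front.
    static long requireNonNegative(long v) {
        if (v < 0) {
            throw new IllegalStateException(
                "Negative longs unsupported, use writeLong or writeZLong for negative numbers [" + v + "]");
        }
        return v;
    }

    // writeZLong-style zig-zag transform: interleaves signed values onto the
    // non-negative range (0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...), so small
    // negatives still encode in few bytes.
    static long zigZag(long v) {
        return (v << 1) ^ (v >> 63);
    }

    public static void main(String[] args) {
        System.out.println(zigZag(-1000L)); // maps -1000 to a small non-negative value
        try {
            requireNonNegative(-1000L);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The real fix is not to change the encoding but to find out why the ingest stats counter went negative in the first place, which is what the discussion below is about.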
@srikwit added the >bug and needs:triage labels on Sep 8, 2020
@cbuescher
Member

This looks like a duplicate of #52339. Tagging the ingest team; please close if you think #52339 covers this case.

@cbuescher added the :Data Management/Ingest Node label on Sep 8, 2020
@elasticmachine
Collaborator

Pinging @elastic/es-core-features (:Core/Features/Ingest)

@elasticmachine added the Team:Data Management label on Sep 8, 2020
@cbuescher removed the needs:triage label on Sep 8, 2020
@danhermann
Contributor

This does look like a duplicate of #52339, but there appear to be multiple issues with the serialization of ingest stats: some bugs have already been fixed (#52543), yet we are still seeing errors such as this one. @srikwit, if you are able to share your ingest pipelines (or at least a list of the processors and whether they use conditionals), that would help in tracking down the cause of this bug.

@srikwit
Author

srikwit commented Sep 9, 2020

@danhermann we use around 30 pipelines. Looking at the previous bug reports, I am sharing one pipeline that might be relevant:


Pipeline 1:
Main:
    Script (script contains painless if)
    Script (script contains painless if)
    Script (script contains painless if)
    Remove (Unconditional)
    Script (Unconditional)
    Enrich
    Enrich
    Enrich
    Enrich
    Script (script contains painless if)
    Script (Unconditional)
    Remove (Unconditional)
    Enrich
    Enrich
    Enrich
    Enrich
    Enrich
    Enrich
    Script (script contains painless if)
    Drop (Conditional)
    Remove (Unconditional)
    Enrich
    Enrich
    GeoIP
    GeoIP
    GeoIP
    GeoIP
    Rename
    Rename
    Rename
    Rename
On failure:
    Set

Please let me know if this is helpful.

@danhermann
Contributor

danhermann commented Sep 9, 2020

@srikwit, thank you. That will help narrow down the possible sources of the bug. I'm going to close this issue as a duplicate of #52339 but will leave a comment there about the additional information you've provided here.
