
[Bug]: When running in standalone + streaming node mode, even after the standalone pod kill chaos removed, the standalone pod still continues to crash. #36555

Closed
1 task done
zhuwenxing opened this issue Sep 26, 2024 · 5 comments
Labels
feature/streaming node streaming node feature kind/bug Issues or changes related a bug priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. severity/critical Critical, lead to crash, data missing, wrong result, function totally doesn't work. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@zhuwenxing
Contributor

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version:master-20240925-aee046e5-amd64
- Deployment mode(standalone or cluster):standalone
- MQ type(rocksmq, pulsar or kafka):    
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS): 
- CPU/Memory: 
- GPU: 
- Others:

Current Behavior


[2024-09-25T19:29:46.642Z] + kubectl get pods -o wide
[2024-09-25T19:29:46.644Z] + grep standalone-pod-kill-99
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-etcd-0                                     1/1     Running       0                42m     10.104.18.176   4am-node25   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-etcd-1                                     1/1     Running       0                42m     10.104.30.195   4am-node38   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-etcd-2                                     1/1     Running       0                42m     10.104.17.9     4am-node23   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-milvus-standalone-765c89cc55-hw7w2         0/1     Running       7 (5m23s ago)    15m     10.104.32.197   4am-node39   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-minio-5c7bbf4c79-tzjp5                     1/1     Running       0                42m     10.104.32.143   4am-node39   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-pulsar-bookie-0                            1/1     Running       0                42m     10.104.25.80    4am-node30   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-pulsar-bookie-1                            1/1     Running       0                42m     10.104.18.178   4am-node25   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-pulsar-bookie-2                            1/1     Running       0                42m     10.104.30.196   4am-node38   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-pulsar-bookie-init-h8k6k                   0/1     Completed     0                42m     10.104.6.221    4am-node13   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-pulsar-broker-0                            1/1     Running       0                42m     10.104.5.173    4am-node12   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-pulsar-proxy-0                             1/1     Running       0                42m     10.104.6.222    4am-node13   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-pulsar-pulsar-init-9pn8p                   0/1     Completed     0                42m     10.104.5.172    4am-node12   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-pulsar-recovery-0                          1/1     Running       0                42m     10.104.6.223    4am-node13   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-pulsar-zookeeper-0                         1/1     Running       0                42m     10.104.18.175   4am-node25   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-pulsar-zookeeper-1                         1/1     Running       0                41m     10.104.16.26    4am-node21   <none>           <none>
[2024-09-25T19:29:46.899Z] standalone-pod-kill-99-pulsar-zookeeper-2                         1/1     Running       0                40m     10.104.19.162   4am-node28   <none>           <none>
[2024/09/25 19:30:06.774 +00:00] [INFO] [writebuffer/write_buffer.go:299] ["write buffer get segments to sync"] [segmentIDs="[452798738239470727]"]
[2024/09/25 19:30:06.774 +00:00] [WARN] [syncmgr/storage_serializer.go:129] ["failed to serialize merged stats log"] [segmentID=452798738239470727] [collectionID=452798738239265485] [channel=by-dev-rootcoord-dml_1_452798738239265485v0] [error="service internal error: shall not serialize zero length statslog list"]
[2024/09/25 19:30:06.774 +00:00] [FATAL] [writebuffer/write_buffer.go:340] ["failed to get sync task"] [segmentID=452798738239470727] [error="service internal error: shall not serialize zero length statslog list"] [stack="github.com/milvus-io/milvus/internal/flushcommon/writebuffer.(*writeBufferBase).syncSegments\n\t/workspace/source/internal/flushcommon/writebuffer/write_buffer.go:340\ngithub.com/milvus-io/milvus/internal/flushcommon/writebuffer.(*writeBufferBase).triggerSync\n\t/workspace/source/internal/flushcommon/writebuffer/write_buffer.go:301\ngithub.com/milvus-io/milvus/internal/flushcommon/writebuffer.(*l0WriteBuffer).BufferData\n\t/workspace/source/internal/flushcommon/writebuffer/l0_write_buffer.go:193\ngithub.com/milvus-io/milvus/internal/flushcommon/writebuffer.(*bufferManager).BufferData\n\t/workspace/source/internal/flushcommon/writebuffer/manager.go:202\ngithub.com/milvus-io/milvus/internal/flushcommon/pipeline.(*writeNode).Operate\n\t/workspace/source/internal/flushcommon/pipeline/flow_graph_write_node.go:94\ngithub.com/milvus-io/milvus/internal/util/flowgraph.(*nodeCtxManager).workNodeStart\n\t/workspace/source/internal/util/flowgraph/node.go:122"]
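
For context, a minimal self-contained Go sketch of the failure mode the stack trace points at (illustrative only; the statsLog type and serializeMergedStats function are stand-ins I made up for this sketch, not the actual Milvus code): an empty stats-log list is rejected by the serializer, the write buffer escalates the error to FATAL, the process exits, and Kubernetes restarts the pod into the same state, so the crash repeats.

```go
package main

import (
	"errors"
	"log"
)

// statsLog stands in for a per-segment primary-key stats entry
// (hypothetical type; the real structure lives in Milvus' storage code).
type statsLog struct {
	SegmentID int64
	PKCount   int64
}

// serializeMergedStats mirrors the check the error message points at:
// serializing an empty stats-log list is treated as an internal error.
func serializeMergedStats(logs []statsLog) ([]byte, error) {
	if len(logs) == 0 {
		return nil, errors.New("service internal error: shall not serialize zero length statslog list")
	}
	// ... real code would merge and marshal the stats here ...
	return []byte("merged-stats"), nil
}

func main() {
	// With no pk stats collected, the list arrives empty, the serializer
	// returns the error above, and the caller escalates it to a fatal log
	// that kills the process. Kubernetes restarts the pod, the same empty
	// stats are produced again, and the pod crash-loops.
	if _, err := serializeMergedStats(nil); err != nil {
		log.Fatalf("failed to get sync task: %v", err) // log.Fatalf exits the process, like zap's Fatal
	}
}
```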

Expected Behavior

No response

Steps To Reproduce

No response

Milvus Log

failed job: https://qa-jenkins.milvus.io/blue/organizations/jenkins/chaos-test-straming-node-cron/detail/chaos-test-straming-node-cron/99/pipeline
log:
artifacts-standalone-pod-kill-99-server-logs.tar.gz

Anything else?

No response

@zhuwenxing zhuwenxing added kind/bug Issues or changes related a bug needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Sep 26, 2024
@zhuwenxing zhuwenxing added priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. severity/critical Critical, lead to crash, data missing, wrong result, function totally doesn't work. feature/streaming node streaming node feature labels Sep 26, 2024
@zhuwenxing
Contributor Author

/assign @chyezh
PTAL

@zhuwenxing zhuwenxing changed the title [Bug]: When running in standalone + streaming node mode, even after the standalone pod is killed to eliminate the fault, the standalone node still continues to crash. [Bug]: When running in standalone + streaming node mode, even after the standalone pod kill chaos removed, the standalone node still continues to crash. Sep 26, 2024
@zhuwenxing zhuwenxing changed the title [Bug]: When running in standalone + streaming node mode, even after the standalone pod kill chaos removed, the standalone node still continues to crash. [Bug]: When running in standalone + streaming node mode, even after the standalone pod kill chaos removed, the standalone pod still continues to crash. Sep 26, 2024
@yanliang567 yanliang567 added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Sep 27, 2024
@yanliang567
Contributor

/unassign

@chyezh
Contributor

chyezh commented Sep 27, 2024

Because pkbf is removed from the flusher of the streaming node and the datanode by default, the merged statslog list ends up empty, which triggers the serialization failure above.

related PR: #36367

Working on a fix.
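
For illustration only, a minimal sketch of one possible defensive guard (an assumption on my part, not necessarily what the referenced PR or the follow-up fix actually does): treat an empty stats-log list as "nothing to sync" instead of escalating it to a fatal error, so a restart without pk stats does not crash-loop.

```go
package main

import "fmt"

// Hypothetical stand-in type for this sketch, not the real Milvus type.
type statsLog struct {
	SegmentID int64
	PKCount   int64
}

// serializeMergedStats returns (nil, nil) when there is nothing to merge,
// letting the caller skip the merged-stats blob instead of failing the sync task.
func serializeMergedStats(logs []statsLog) ([]byte, error) {
	if len(logs) == 0 {
		// No pk stats collected (e.g. pkbf disabled in the flusher): not an error.
		return nil, nil
	}
	// ... merge and marshal the stats here ...
	return []byte("merged-stats"), nil
}

func main() {
	blob, err := serializeMergedStats(nil)
	fmt.Println(blob, err) // [] <nil> -> sync proceeds without a stats blob
}
```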

@chyezh
Contributor

chyezh commented Sep 30, 2024

@zhuwenxing This should be fixed; please verify at a47abb2f2be49f195500ec3c7da94b0053516a8d.

@zhuwenxing
Contributor Author

Verified and passed with tag master-20240930-ecb2b242-amd64.
