[Bug] The Cassandra backend may leak memory after running for a while #2244
Comments
See #1626: use jmap to check the live-object stats on your production instance and paste the result here.
Is this #instances count growing continuously?
Yes.
Which Cassandra version are you using? I'll try to reproduce it locally.
cassandra-3.11.10
Here is the stack trace:
Have you located the root cause yet?
Still analyzing; it would help if you could share more details. I have been running continuous CRUD on my local machine since Sunday. `BackendSession session = this.threadLocalSession.get();` is bound to the calling thread; with the default configuration, the session count grows to a certain number and then stabilizes.
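The per-thread binding mentioned above can be reduced to a small `ThreadLocal` sketch. This is only an illustration; `BackendSession` and the holder class here are simplified stand-ins, not HugeGraph's actual classes:

```java
// Minimal sketch of a thread-bound session, assuming a ThreadLocal-backed
// holder. BackendSession here is a stand-in, not HugeGraph's real class.
public class ThreadBoundSessions {
    static class BackendSession {
        final long ownerThreadId = Thread.currentThread().getId();
    }

    // Each thread that calls get() lazily creates and then keeps reusing
    // its own session; different threads never share one.
    private final ThreadLocal<BackendSession> threadLocalSession =
            ThreadLocal.withInitial(BackendSession::new);

    public BackendSession get() {
        return threadLocalSession.get();
    }
}
```

With a fixed set of worker threads, the total number of sessions therefore stabilizes at the thread count, which matches the behavior observed with the default configuration.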
On my side the session count keeps growing. Configuration below.
rest-server.properties:
restserver.url=http://0.0.0.0:58080
server.id=server-40-25-m
batch.max_write_ratio=100
gremlin-server.yaml:
host: 127.0.0.1
scriptEvaluationTimeout: 300000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WsAndHttpChannelizer
Any progress on this issue?
Still looking into it; I'll post updates here.
One observation: the problem only appeared after I put reads and writes on the same node. Previously reads and writes were separated, and there was no such problem.
Have you found the cause yet?
Is the multi-node deployment using Raft mode?
I dumped the heap and found two schemas, keeper and keeper_metadata. Each has three CassandraSessionPool instances, named keeper/s, keeper/g, and keeper/m (keeper_metadata is similar). The sessionCount of each CassandraSessionPool is over 2k.
Root cause located: CassandraSessionPool creates a session per thread, and if a thread is already bound to a session, no new session is created. The problem is that I was using sessions inside parallelStream, which runs on a ForkJoinPool. Threads in that pool are not permanent: idle workers are reclaimed after a few minutes, and new threads are created when new tasks are submitted. So new sessions keep being created as the ForkJoinPool spawns new threads, but when a thread dies its session is never reclaimed, because CassandraSessionPool's `Map<Long, BackendSession> sessions` field keeps a reference to every session ever created, and the session never gets a chance to be closed.
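The leak mechanism described above can be boiled down to a short sketch (class names are illustrative, not HugeGraph's actual API): the pool caches one session per thread id and has no eviction path, so every new pool worker adds an entry that outlives its thread.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative reduction of the leak: one session cached per thread id,
// with no removal path. When pooled worker threads die and are replaced,
// stale entries accumulate in `sessions` forever.
public class LeakySessionPool {
    static class BackendSession { /* would hold Cassandra connections, etc. */ }

    private final Map<Long, BackendSession> sessions = new ConcurrentHashMap<>();

    public BackendSession getOrCreate() {
        long tid = Thread.currentThread().getId();
        // A session already bound to this thread id is reused; otherwise a
        // new one is created. Nothing ever removes entries for dead threads.
        return sessions.computeIfAbsent(tid, id -> new BackendSession());
    }

    public int sessionCount() {
        return sessions.size();
    }
}
```

Running `getOrCreate()` from short-lived threads shows the count climbing even after those threads have died, which is exactly the pattern jmap revealed in production.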
How can the session be closed proactively?
It's solved now, using a custom thread pool: `CompletableFuture.runAsync(() -> { /* graph operations */ }, pool);` where the pool's maxSize equals its coreSize, which guarantees the threads are never destroyed.
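The workaround above can be sketched as follows. `Executors.newFixedThreadPool` configures corePoolSize equal to maximumPoolSize, matching the fix described; the pool size of 8 is an arbitrary example, and the class name is illustrative:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the fix: run graph operations on a fixed-size executor instead
// of parallelStream's ForkJoinPool. Fixed-pool worker threads live for the
// lifetime of the pool, so each one keeps reusing its single bound session.
public class FixedPoolRunner {
    // newFixedThreadPool sets corePoolSize == maximumPoolSize (8 here is
    // an arbitrary example), so idle workers are never reclaimed.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(8);

    public static CompletableFuture<Void> submit(Runnable graphOp) {
        return CompletableFuture.runAsync(graphOp, POOL);
    }
}
```

Because the worker set is stable, the per-thread session map reaches at most one entry per pool thread and then stops growing.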
Thanks for following up on this issue. The community is currently busy with the next major version merge, so Cassandra is indeed short on attention. If possible, would you mind submitting a PR to fix and improve it? Also, support for the Cassandra 4 client has already been submitted; feel free to try it out and give feedback at any time.
@sanri1993 You could try Cassandra 4.x for this (refer to #2260). Better to use hstore; we will focus on it in the future (#2265). @dosu, you should advise new users to try it.
I found a similar issue related to memory leaks when using Cassandra as a backend in HugeGraph. The issue was discussed in a closed thread, where a memory leak was caused by sessions being created for short-lived ForkJoinPool threads. The solution involved using a custom thread pool with a fixed size to ensure that threads are not destroyed, thus preventing the unnecessary creation of new sessions. This was achieved by setting the pool's maxSize equal to its coreSize. Additionally, there has been an update to support Cassandra 4.0.10, which might address some of the issues related to memory management. You might want to consider upgrading to this version if you are experiencing similar issues [2].
Bug Type (问题类型)
other exception / error (其他异常报错)
Before submit
Environment (环境信息)
Expected & Actual behavior (期望与实际表现)
Two new interfaces were added on top of 0.11.2:
1.
2.
The client only calls these two interfaces. After running for about 2 days, it reports `com.datastax.driver.core.exceptions.BusyPoolException: [/10.157.40.45] Pool is busy (no available connection and timed out after 5000 MILLISECONDS)`.
Checking the session count with jmap shows it growing continuously. How should I troubleshoot this?
Vertex/Edge example (问题点 / 边数据举例)
No response
Schema [VertexLabel, EdgeLabel, IndexLabel] (元数据结构)
No response