[Bug] PD GraphCache range attribute has a missing coverage range of 0-65535 #2634
The missing coverage range in the `range` attribute points to how `GraphCache` maintains its `TreeRangeMap`, which is not updated atomically with the partition map. Here is the relevant part of the `GraphCache` class:

```java
@Data
public class GraphCache {
private Graph graph;
private AtomicBoolean initialized = new AtomicBoolean(false);
private AtomicBoolean writing = new AtomicBoolean(false);
private ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
private Map<Integer, AtomicBoolean> state = new ConcurrentHashMap<>();
private Map<Integer, Partition> partitions = new ConcurrentHashMap<>();
private RangeMap<Long, Integer> range = TreeRangeMap.create();
public GraphCache(Graph graph) {
this.graph = graph;
}
public GraphCache() {
}
public Partition getPartition(Integer id) {
return partitions.get(id);
}
public Partition addPartition(Integer id, Partition p) {
return partitions.put(id, p);
}
public Partition removePartition(Integer id) {
return partitions.remove(id);
}
}
```

To address this issue, you need to ensure that every update to the `range` map happens together with the corresponding `partitions` update, under the `ReentrantReadWriteLock` shown above. Additionally, the `TreeRangeMap` itself is not thread-safe, so unsynchronized concurrent `remove`/`put` calls can leave part of the key space uncovered.
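For readers unfamiliar with Guava's `RangeMap`, here is a minimal standalone sketch (not part of the PD code) showing how `put`, `get`, and `remove` affect key coverage; the 0-65535 range mirrors the one missing in this report:

```java
import com.google.common.collect.Range;
import com.google.common.collect.RangeMap;
import com.google.common.collect.TreeRangeMap;

public class RangeMapDemo {
    public static void main(String[] args) {
        RangeMap<Long, Integer> range = TreeRangeMap.create();
        // Map key range [0, 65536) to partition 0 and [65536, 131072) to partition 1.
        range.put(Range.closedOpen(0L, 65536L), 0);
        range.put(Range.closedOpen(65536L, 131072L), 1);

        System.out.println(range.get(100L));   // 0
        System.out.println(range.get(65536L)); // 1

        // Removing a sub-range leaves a gap: keys in it now map to null.
        range.remove(Range.closedOpen(0L, 65536L));
        System.out.println(range.get(100L));   // null -> a "missing coverage range"
    }
}
```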
@dosu
Yes, there is a potential thread safety issue with the `TreeRangeMap`, since it is not thread-safe on its own. To address this, you should synchronize the operations on the `range` map with the `ReentrantReadWriteLock` held in `GraphCache`, for example:

```java
public boolean update(String graphName, int partId, Partition partition) {
GraphCache graph = getGraphCache(graphName);
graph.lock.writeLock().lock();
try {
Partition p = graph.getPartition(partId);
if (p != null && p.equals(partition)) {
return false;
}
RangeMap<Long, Integer> range = graph.getRange();
graph.addPartition(partId, partition);
if (p != null) {
if (Objects.equals(partition.getId(), range.get(partition.getStartKey())) &&
Objects.equals(partition.getId(), range.get(partition.getEndKey() - 1))) {
range.remove(range.getEntry(partition.getStartKey()).getKey());
}
}
range.put(Range.closedOpen(partition.getStartKey(), partition.getEndKey()), partId);
} catch (Exception e) {
throw new RuntimeException(e);
} finally {
graph.lock.writeLock().unlock();
}
return true;
}
```

By using a write lock around the whole read-modify-write sequence, the `remove` and `put` on the `range` map become atomic with respect to other threads, so no reader can observe the intermediate state in which part of the key space is uncovered.
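As a complement, here is a sketch of the matching read path (the method name `getPartitionByKey` is illustrative, not the actual PD API), assuming it lives in the same class as `update` above and uses Lombok's generated accessors:

```java
// Hypothetical read-side counterpart: take the shared read lock so a lookup
// can never observe the window between range.remove() and range.put().
public Partition getPartitionByKey(String graphName, long key) {
    GraphCache graph = getGraphCache(graphName);
    graph.getLock().readLock().lock();
    try {
        Integer partId = graph.getRange().get(key);
        return partId == null ? null : graph.getPartition(partId);
    } finally {
        graph.getLock().readLock().unlock();
    }
}
```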
@dosu
Yes, replacing the unsynchronized `TreeRangeMap` usage is worthwhile, but Guava offers no drop-in concurrent `RangeMap`. To ensure thread safety, you can use a `ReentrantReadWriteLock` to guard every access to the `TreeRangeMap`. The following example wraps the maps behind synchronized accessors:

```java
import com.google.common.collect.Range;
import com.google.common.collect.RangeMap;
import com.google.common.collect.TreeRangeMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;
public class GraphCache {
private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
private final ConcurrentHashMap<String, RangeMap<Long, Integer>> keyToPartIdCache = new ConcurrentHashMap<>();
public void putRange(String graphName, Range<Long> range, Integer value) {
lock.writeLock().lock();
try {
keyToPartIdCache.computeIfAbsent(graphName, k -> TreeRangeMap.create()).put(range, value);
} finally {
lock.writeLock().unlock();
}
}
public Integer getRange(String graphName, Long key) {
lock.readLock().lock();
try {
RangeMap<Long, Integer> rangeMap = keyToPartIdCache.get(graphName);
return rangeMap != null ? rangeMap.get(key) : null;
} finally {
lock.readLock().unlock();
}
}
public void removeRange(String graphName, Range<Long> range) {
lock.writeLock().lock();
try {
RangeMap<Long, Integer> rangeMap = keyToPartIdCache.get(graphName);
if (rangeMap != null) {
rangeMap.remove(range);
}
} finally {
lock.writeLock().unlock();
}
}
// Other methods to manipulate the range map
}
```

In this example, a single `ReentrantReadWriteLock` guards every `TreeRangeMap` stored in `keyToPartIdCache`: writers take the exclusive write lock for `put`/`remove`, while readers share the read lock, so lookups never observe a partially applied update.
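For clarity, a short usage sketch of the wrapper above (graph name and values are illustrative):

```java
GraphCache cache = new GraphCache();
cache.putRange("hugegraph", Range.closedOpen(0L, 65536L), 1);

Integer partId = cache.getRange("hugegraph", 1024L);    // -> 1
cache.removeRange("hugegraph", Range.closedOpen(0L, 65536L));
partId = cache.getRange("hugegraph", 1024L);            // -> null
```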
@dosu
There is no concern about thread safety issues in the `waitGraphLock`/`getOrCreateGraphLock` pair itself, because lock creation is guarded. Here is the relevant code:

```java
public void waitGraphLock(String graphName) {
var lock = getOrCreateGraphLock(graphName);
while (lock.get()) {
Thread.onSpinWait();
}
}
private AtomicBoolean getOrCreateGraphLock(String graphName) {
var lock = this.locks.get(graphName);
if (lock == null) {
try {
writeLock.lock();
if ((lock = this.locks.get(graphName)) == null) {
lock = new AtomicBoolean();
locks.put(graphName, lock);
}
} finally {
writeLock.unlock();
}
}
return lock;
}
```

The `getOrCreateGraphLock` method uses double-checked locking under `writeLock`, so at most one `AtomicBoolean` is ever created per graph name, and every caller of `waitGraphLock` spins on that same shared flag.
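As a side note, the same guarantee can be expressed more compactly if `locks` is a `ConcurrentHashMap` (a sketch, not the existing PD code):

```java
private AtomicBoolean getOrCreateGraphLock(String graphName) {
    // computeIfAbsent is atomic on ConcurrentHashMap, so at most one
    // AtomicBoolean is ever created per graph name, without an explicit lock.
    return locks.computeIfAbsent(graphName, k -> new AtomicBoolean());
}
```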
@dosu
Yes, it is possible for another thread to make modifications in the window between `waitGraphLock` returning and the subsequent reads, because waiting does not acquire the lock. To ensure thread safety in this scenario, you should use the `lockGraph`/`unlockGraph` pair around the whole lookup, for example:

```java
public KVPair<Metapb.Partition, Metapb.Shard> getPartitionById(String graphName, int partId) {
try {
lockGraph(graphName); // Acquire the lock
GraphCache graph = initGraph(graphName);
Partition partition = graph.getPartition(partId);
if (partition == null) {
return null;
}
KVPair<ShardGroup, Shard> group = groups.get(partId);
if (group == null) {
return null;
}
Shard shard = group.getValue();
if (shard == null) {
return null;
}
return new KVPair<>(partition, shard);
} finally {
unlockGraph(graphName); // Release the lock
}
}
```

By acquiring the lock before accessing the cache and releasing it only after all reads complete, no writer can change the partition data mid-lookup.
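To avoid repeating the lock/unlock boilerplate at every call site, the pattern could be factored into a helper (`withGraphLock` is hypothetical, not part of the codebase; requires `java.util.function.Supplier`):

```java
// Hypothetical helper: runs the given action while holding the graph lock,
// guaranteeing the lock is released even if the action throws.
private <T> T withGraphLock(String graphName, Supplier<T> action) {
    lockGraph(graphName);
    try {
        return action.get();
    } finally {
        unlockGraph(graphName);
    }
}
```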
@dosu
The `waitGraphLock` method only waits until any in-flight writer has finished; it does not acquire the lock itself. Here is the relevant code snippet:

```java
public void waitGraphLock(String graphName) {
var lock = getOrCreateGraphLock(graphName);
while (lock.get()) {
Thread.onSpinWait();
}
}
```

The method works by checking the state of an `AtomicBoolean` in a spin loop: while the flag is `true` (a writer holds the lock), the caller busy-waits with `Thread.onSpinWait()`. Although this lets callers wait out a writer, it provides no mutual exclusion by itself. Here is how `lockGraph` and `unlockGraph` manage the same flag:

```java
public void lockGraph(String graphName) {
var lock = getOrCreateGraphLock(graphName);
while (!lock.compareAndSet(false, true)) {
Thread.onSpinWait();
}
}
public void unlockGraph(String graphName) {
var lock = getOrCreateGraphLock(graphName);
lock.set(false);
}
```

In summary, `waitGraphLock` is a wait-only primitive, while `lockGraph` provides real mutual exclusion by atomically flipping the flag with `compareAndSet`.
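To make the distinction concrete, here is a standalone sketch (illustrative class, not the PD code) of both primitives operating on one `AtomicBoolean`:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class GraphLockDemo {
    private final AtomicBoolean lock = new AtomicBoolean();

    // Waits until the flag is clear but never sets it: two callers can both
    // return at once, so this only means "no writer right now", not exclusion.
    void waitGraphLock() {
        while (lock.get()) {
            Thread.onSpinWait();
        }
    }

    // Atomically flips false -> true: exactly one caller wins, which is what
    // gives lockGraph real mutual exclusion.
    void lockGraph() {
        while (!lock.compareAndSet(false, true)) {
            Thread.onSpinWait();
        }
    }

    void unlockGraph() {
        lock.set(false);
    }
}
```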
Bug Type
performance (performance degradation)
Environment
Expected & Actual behavior
The speed of vertex writing suddenly decreases while P99 latency increases; investigation found a missing coverage range in the `GraphCache` `range` map.
Actual range:
Expected range:
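A minimal standalone sketch (using Guava's `TreeRangeMap` directly; names and timing are illustrative, not the PD code) of how an unsynchronized remove-then-put during a partition update can expose a transient uncovered range such as 0-65535 to a concurrent reader:

```java
import com.google.common.collect.Range;
import com.google.common.collect.RangeMap;
import com.google.common.collect.TreeRangeMap;

public class MissingRangeRepro {
    public static void main(String[] args) throws InterruptedException {
        RangeMap<Long, Integer> range = TreeRangeMap.create();
        range.put(Range.closedOpen(0L, 65536L), 1);

        Thread writer = new Thread(() -> {
            // Unsynchronized update: between remove() and put(), every key in
            // [0, 65536) maps to nothing -- the "missing coverage range".
            range.remove(Range.closedOpen(0L, 65536L));
            Thread.yield(); // widen the race window for demonstration
            range.put(Range.closedOpen(0L, 65536L), 1);
        });
        writer.start();

        Integer partId = range.get(1024L); // may observe null mid-update
        writer.join();
        System.out.println("observed partition: " + partId);
    }
}
```

(`TreeRangeMap` is not thread-safe, so this only illustrates the gap; it is not a reliable reproducer.)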
Vertex/Edge example
No response
Schema [VertexLabel, EdgeLabel, IndexLabel]
No response