I am running massively parallel UAST extraction from PGA, using 64 goroutines to parse.
I noticed that some of my machines got stuck within a few days (observed on two machines as of July 27, 9am), and I investigated why.
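For context, the parsing loop is essentially a worker pool. This is a minimal sketch of the pattern, not the actual pga2uast code; `parseFile` is a hypothetical stand-in for the bblfsh client call:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// parseFile is a hypothetical stand-in for the bblfsh client call that
// extracts a UAST from one file; the real code lives in pga2uast.
func parseFile(ctx context.Context, path string) error {
	_ = ctx
	_ = path
	return nil
}

func main() {
	jobs := make(chan string)
	var wg sync.WaitGroup

	// 64 workers, matching the goroutine count mentioned above.
	for i := 0; i < 64; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for path := range jobs {
				// Each request carries its own deadline; per this issue, when it
				// expires the client gives up but the native driver keeps running.
				ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
				if err := parseFile(ctx, path); err != nil {
					fmt.Println(path, err)
				}
				cancel()
			}
		}()
	}

	for _, f := range []string{"dataset.py"} { // files extracted from the siva heads
		jobs <- f
	}
	close(jobs)
	wg.Wait()
}
```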
As a concrete example, take 45538c91acd1c5fa1f1e84246faf9ff58f31679f/0169ed02-f85d-f946-0448-fdbf6ec30038/dataset.py. The siva file name is 45538c91acd1c5fa1f1e84246faf9ff58f31679f.siva, the head is 0169ed02-f85d-f946-0448-fdbf6ec30038, and the file name is dataset.py.
My extractor is https://github.com/src-d/datasets/tree/master/PublicGitArchive/pga2uast
```bash
# /45/ is the first two characters of the siva file name
wget pga.sourced.tech/siva/v2/45/45538c91acd1c5fa1f1e84246faf9ff58f31679f.siva
pga2uast -lbash,cpp,csharp,go,java,javascript,php,python,ruby -m -o . .
```
When I kill my extractor process, the bblfsh driver processes keep hanging at 100% CPU.
When I restart bblfshd and reinitialize it, everything returns to normal.
My hypothesis is that some drivers time out or crash, and bblfshd then keeps running only one instance of the driver. So my "hang" is really a single-threaded extraction, according to the logs.
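For reference, this is how I checked each machine (a rough sketch; it assumes the container is named bblfshd):

```bash
# Host side: look for leftover native driver processes pinned at 100% CPU.
ps aux | grep -i bblfsh

# bblfshd's own logs; after the timeouts they suggest a single driver instance.
docker logs bblfshd | tail
```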
vmarkovtsev changed the title from "Timed out requests do not really end" to "Timed out requests lead to single driver instance running" on Jul 27, 2019.
This is a known issue: the request cancellation can only be propagated to the Go part of the driver, and the native part will continue parsing anyway.
It can be fixed by killing the native driver when the request is canceled. That will slow down the next request, because we'll need to restart the native driver, but it's definitely better than hanging forever.
We will also examine the files you mentioned; they may trigger some infinite recursion in the driver, which causes the parsing to get stuck.
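A minimal sketch of that fix, assuming the supervisor can spawn the native process through exec.CommandContext (the binary name below is a placeholder, not the real driver path): once the request context is canceled, the process is killed instead of being left spinning.

```go
package main

import (
	"context"
	"os/exec"
	"time"
)

// runNative ties the native driver's lifetime to the request context:
// exec.CommandContext sends SIGKILL to the process once ctx is canceled.
func runNative(ctx context.Context) error {
	cmd := exec.CommandContext(ctx, "native-driver") // placeholder binary name
	return cmd.Run()                                 // returns when the process exits or ctx kills it
}

func main() {
	// Simulate a request deadline; on expiry the native process dies with it.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	_ = runNative(ctx)
}
```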