
Gitea keeping taking 100% CPU usage, and runs slowly in git pushing #10661

Closed · 2 of 7 tasks
duchenpaul opened this issue Mar 7, 2020 · 53 comments

Comments

@duchenpaul

duchenpaul commented Mar 7, 2020

  • Gitea version (or commit ref): 1.12.0+dev-440-gf5a20250a

  • Git version: git version 2.20.1

  • Operating system: Raspbian GNU/Linux 10 (buster) on Raspberry Pi zero w

  • Database (use [x]):

    • PostgreSQL
    • MySQL
    • MSSQL
    • SQLite
  • Can you reproduce the bug at https://try.gitea.io:

    • Yes (provide example URL)
    • No
    • Not relevant
  • Log gist:

  • Architecture: linux-arm-6

Description

After upgrading Gitea to this version, I found that it runs very slowly.
Checking the CPU usage, it stays at 100%.
I checked ps and it seems the gitea hook processes (2395, 964) are taking up a lot of resources. I would like to know what they are, how I can track them, and how to stop them.
As far as I know, no git hook is configured in any of my repos on this server, so I think this could be Gitea's system hook.
I also see PID 2366 showing that duchenpaul/homebridge-docker.git is executing a hook, but there is no hook script set in that repo.
As for PID 961, I don't know what is going on there.

I checked the log; it only contains the SQL queries, no errors.

  928 ?        Ss     0:00  \_ sshd: git [priv]
  937 ?        S      0:00  |   \_ sshd: git@notty
  939 ?        Rsl   14:49  |       \_ /home/git/gitea/gitea serv key-3 --config=/home/git/gitea/custom/conf/app.ini
  987 ?        Ss     0:00  \_ sshd: git [priv]
  996 ?        S      0:00  |   \_ sshd: git@notty
  998 ?        Rsl   12:43  |       \_ /home/git/gitea/gitea serv key-3 --config=/home/git/gitea/custom/conf/app.ini
 1310 ?        Ss     0:00  \_ sshd: git [priv]
 1319 ?        S      0:00  |   \_ sshd: git@notty
 1321 ?        Rsl   10:04  |       \_ /home/git/gitea/gitea serv key-3 --config=/home/git/gitea/custom/conf/app.ini
 2062 ?        Ss     0:00  \_ sshd: pi [priv]
 2071 ?        S      0:02  |   \_ sshd: pi@pts/1
 2074 pts/1    Ss     0:00  |       \_ -bash
 2482 pts/1    S+     0:00  |           \_ vi gitea.log
 2349 ?        Ss     0:00  \_ sshd: git [priv]
 2358 ?        S      0:00      \_ sshd: git@notty
 2360 ?        Ssl    0:03          \_ /home/git/gitea/gitea serv key-3 --config=/home/git/gitea/custom/conf/app.ini
 2366 ?        Sl     0:00              \_ git-receive-pack duchenpaul/homebridge-docker.git
 2390 ?        S      0:00                  \_ bash hooks/post-receive
 2394 ?        S      0:00                      \_ bash ./hooks/post-receive.d/gitea
 2395 ?        Rl     4:55                          \_ /home/git/gitea/gitea hook --config=/home/git/gitea/custom/conf/app.ini post-receive
  454 tty1     Ss+    0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
  525 ?        Ss     0:00 php-fpm: master process (/etc/php5/fpm/php-fpm.conf)
  619 ?        S      0:00  \_ php-fpm: pool www
  620 ?        S      0:00  \_ php-fpm: pool www
  586 ?        Ss     0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
  590 ?        S      0:00  \_ nginx: worker process
  654 ?        Ss     0:00 /lib/systemd/systemd --user
  680 ?        S      0:00  \_ (sd-pam)
  700 ?        Ss     0:00 /lib/systemd/systemd --user
  723 ?        S      0:00  \_ (sd-pam)
  753 ?        Ss     0:00 /usr/sbin/exim4 -bd -q30m
  961 ?        S      0:00 bash hooks/update refs/heads/master 2b1fb9ae3617e7fc0b9b496897b542e1e3e24375 9a610f1bb0fdd659a4fa5ef7576ff3dc27690f22
  963 ?        S      0:00  \_ bash ./hooks/update.d/gitea refs/heads/master 2b1fb9ae3617e7fc0b9b496897b542e1e3e24375 9a610f1bb0fdd659a4fa5ef7576ff3dc27690f22
  964 ?        Rl    14:45      \_ /home/git/gitea/gitea hook --config=/home/git/gitea/custom/conf/app.ini update refs/heads/master 2b1fb9ae3617e7fc0b9b496897b542e1e3e24375 9a610f1bb0fdd659a4fa5ef7576ff3dc27690f22
 2175 ?        Ssl    1:49 /home/git/gitea/gitea web

[Update] Summary

  • The hanging of gitea serv key-3 --config=/home/git/gitea/custom/conf/app.ini is triggered by requests from the git client; for some reason, some git requests do not finish properly and instead take up all the CPU for hours and slow the system down.
  • These processes are not tied to the Gitea main service: even if I stop the Gitea service, they keep hanging there.
  • It looks like the bug started with v1.11.2 (0768823) in the releases. I am running v1.11.1 (ff24f81) and it is fine, but as soon as I switch to v1.11.2, the above starts happening within a short time.

Screenshots

(screenshot)

@duchenpaul changed the title from "Gitea keeping taking 100% CPU usage, and runs slowly in pushing" to "Gitea keeping taking 100% CPU usage, and runs slowly in git pushing" Mar 7, 2020
@duchenpaul
Author

Also, I observed that there are 3 processes called gitea serv key-3 --config=/home/git/gitea/custom/conf/app.ini which take up most of the CPU. What are they? How can they be stopped?

@guillep2k
Member

Those three processes (939, 998 and 1321) sound like the server side of git clients. The fact that all of them have key-3 means that they all used the same credentials (public key). It's probably you trying to push, then cancelling, but for some reason the server side hasn't realized that yet.

But the culprit seems to be:

  964 ?        Rl    14:45      \_ /home/git/gitea/gitea hook --config=/home/git/gitea/custom/conf/app.ini update refs/heads/master 2b1fb9ae3617e7fc0b9b496897b542e1e3e24375 9a610f1bb0fdd659a4fa5ef7576ff3dc27690f22

which is Gitea's update hook on the repository. The 14:45 value is the accumulated CPU time used, which is not normal unless you're pushing the Linux kernel source code on an old Pentium computer.

And yes, those hooks are Gitea's git plumbing that makes things work behind the scenes.

Some ideas:

  • Try rebooting. Seriously. If for some fortuitous reason some process hung on you but is still locking some files, this will kill it.
  • Try doing a git fsck and a git gc on the repo via the admin menu. It's possible that there's something wrong with it.

Also, a Raspberry Pi Zero W doesn't have much RAM in it. Try checking whether your system is paging too much. Use the top program and hit the m key until the memory graph shows up:

(screenshot)

Or maybe vmstat 3 999 (you cancel that with CTRL+C):

(screenshot)

The si and so columns should be 0 most of the time. Paging in and out is something that really slows down the system.

(Note: don't take the values in my pictures as expected for your system: mine is a CentOS 7 VM with 12GB of RAM assigned to it).
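A couple of other quick checks for swap pressure (standard procps/util-linux commands, nothing Gitea-specific; just a suggestion):

free -h          # current RAM and swap usage at a glance
swapon --show    # active swap devices/files and how much of each is used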

@duchenpaul
Author

Thank you for your reply.
I did a reboot again; it looks OK for now.
I didn't cancel any transactions in flight; from what I can see on the client side, all transactions finished normally.
I believe the PID 964 I'm referring to is blocked due to the high CPU usage of the ssh transactions.

My questions are:

  • Is there a timeout setting that cancels in-flight ssh transactions on the server side when they take too much time?
  • And is there a way to trace back what is going on with those ssh transactions?

Anyway, I will keep monitoring for 3 days; if it is OK, I will close this issue.

This Gitea server only serves me, so I think it is still able to handle the job well. 😄
(screenshot)

Output of vmstat 3 999:

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0   4036  68112  28512 141908    0    5   523    45  576 1210 17  9 72  2  0
 0  0   4036  68112  28512 141908    0    0     0     0  434  877  4  3 93  0  0
 0  0   4036  68112  28512 141908    0    0     0     0  405  856  6  1 93  0  0
 1  0   4036  68112  28512 141908    0    0     0     0  454  917  4  2 94  0  0
 3  0   4036  68112  28512 141908    0    0     0     0  532 1086  7  2 91  0  0

@duchenpaul
Author

[Update] I found these 2 gitea serv processes popping up again, while I didn't even do anything.
Can anyone help me find out what caused this?

(screenshot)

pi@Venus ~ $ vmstat 3 999
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 4  0  16172  84076  11680 106668    0    1    42     5  548  996 84  3 13  0  0
 3  0  16172  84076  11680 106668    0    0     0     0  554  958 99  1  0  0  0
 3  0  16172  84076  11680 106668    0    0     0     0  582 1027 96  4  0  0  0
 4  0  16172  84076  11680 106668    0    0     0     0  704 1296 98  2  0  0  0
 3  0  16172  84076  11680 106668    0    0     0     0  561 1015 97  3  0  0  0
 4  0  16172  84076  11680 106668    0    0     0     0  648 1170 97  3  0  0  0

@duchenpaul
Author

Closing for now; I found it could be GitHub Desktop causing this. I changed to GitExtensions and it never hangs.

@lunny
Member

lunny commented Mar 8, 2020

Some git tools may check the repo status in the background.

@duchenpaul
Author

Yes, you can tell from the GitHub Desktop log. But how come the process stalls on the server side and takes up all the CPU, even after the client has quit and the client PC is powered off?

@zeripath
Contributor

zeripath commented Mar 8, 2020

Not sure - we would need to see some logs from gitea as to what it's doing.

For example, the update hook should be basically empty - I've suggested its removal entirely, as it can only slow things down.

@duchenpaul duchenpaul reopened this Mar 9, 2020
@duchenpaul
Author

Yeah, I will reopen this issue and set my Gitea log level to debug to collect more info.
I don't use the update hook in any repo, only the post-receive one in a few repos just to deploy the code on a remote server.
Now I find that even gitea dump can stall sometimes, just hanging there and putting the CPU at full throttle.

@duchenpaul
Author

This is my log conf for debugging:

[log]
MODE      = file
LEVEL     = Debug
ROOT_PATH = /home/git/gitea/log
REDIRECT_MACARON_LOG = true
STACKTRACE_LEVEL = Debug
ENABLE_ACCESS_LOG = true
ENABLE_XORM_LOG = true
XORM = ,

Is there anything to add that would help locate the problem?

@duchenpaul
Author

duchenpaul commented Mar 9, 2020

It got stalled again.
PID 8480 takes 100% CPU, even though I have stopped the Gitea service.

 8469 ?        Ss     0:00  \_ sshd: git [priv]
 8478 ?        S      0:00  |   \_ sshd: git@notty
 8480 ?        Rsl    5:04  |       \_ /home/git/gitea/gitea --config=/home/git/gitea/custom/conf/app.ini serv key-6
 8486 ?        Ss     0:00  \_ sshd: pi [priv]
 8495 ?        S      0:00      \_ sshd: pi@pts/3
 8498 pts/3    Ss     0:00          \_ -bash
 8637 pts/3    R+     0:00              \_ ps xfa
  452 tty1     Ss+    0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
  568 ?        Ss     0:08 php-fpm: master process (/etc/php5/fpm/php-fpm.conf)
  653 ?        S      0:00  \_ php-fpm: pool www
  656 ?        S      0:00  \_ php-fpm: pool www
  633 ?        Ss     0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
  637 ?        S      0:00  \_ nginx: worker process
  725 ?        Ss     0:00 /usr/sbin/exim4 -bd -q30m
 7762 ?        Ss     0:00 /lib/systemd/systemd --user
 7766 ?        S      0:00  \_ (sd-pam)
 8334 ?        Ss     0:00 /lib/systemd/systemd --user
 8337 ?        S      0:00  \_ (sd-pam)
[ 09:52 - 192.168.31.150  ]
pi@Venus ~ $ ps -o lstart -o etime -p 8480
                 STARTED     ELAPSED
Mon Mar  9 09:46:14 2020       05:59
[ 09:52 - 192.168.31.150  ]
pi@Venus ~ $ 

This process started at 09:46:14. I checked each log around that time, and all I find is Started Session c187 of user git., which is my git client trying to update the repo in the background.

Attached below are the logs I could find.

syslog.log
access.log
auth.log
daemon.log
gitea.log

@guillep2k
Member

Have you tried running the fsck and gc? Those are important, given that you're using an SD card to store the repos. If you are unable to do it via Gitea (e.g. the system gets locked up before you get the chance), you can shut down Gitea and run the commands yourself:

cd /home/git/gitea-repositories
( for user in * ; do cd "$user" ; echo "--- USER: $user ---"
    for repo in * ; do cd "$repo" ; echo "--- REPO: $repo ---"
        git fsck && git gc || exit
    cd .. ; done ; cd .. ; done )

(Change the initial path accordingly if required)

@guillep2k
Member

Note: STACKTRACE_LEVEL = Debug is excessive. You should change it to Error so important messages aren't buried too deep in the logs.
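For example, a minimal sketch of the adjusted [log] section (same values as quoted above, only STACKTRACE_LEVEL changed; adapt to your setup):

[log]
MODE      = file
LEVEL     = Debug
ROOT_PATH = /home/git/gitea/log
REDIRECT_MACARON_LOG = true
STACKTRACE_LEVEL = Error
ENABLE_ACCESS_LOG = true
ENABLE_XORM_LOG = true
XORM = ,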

@duchenpaul
Author

Yes, I did try fsck and gc. Thank you for your bash snippet, it works like a charm and finishes cleanly.

I guess some bug was introduced between 1.11.0+dev-481-g1df701fd1 (good) and 1.10.5+18-gd6657644a (bad) on arch linux-arm-6.

I just formatted the SD card to rule out a file system issue, and restored the image I backed up days ago, with Gitea version 1.11.0+dev-481-g1df701fd1.
It kept running for hours, with some push/pull actions performed, and it was OK.
Then I upgraded Gitea to the master version, which is 1.10.5+18-gd6657644a, and the problem reproduced. Now I have rolled back to the previous version to see if my assumption is correct.

@guillep2k
Member

Update

It seems that we had some issues when tagging our latest master image; it's 1.12.0+dev but the version string says 1.10.5. 😳


Original message (just in case it's useful)

But... those are not actual releases, and you are in fact downgrading. 🤔 Releases have only three digits in their version number, with nothing else after that.

Our latest release is 1.11.2. If you want the cutting edge (i.e. the version we're currently working on, which is not yet polished), you can go for master. If you want to build from source, you need to check out exactly v1.11.2 (or master).

The master version is 1.12.0+dev-something, e.g. 1.12.0+dev-435-g6a57364dc, which means "435 commits after the 1.12.0+dev tag was established", i.e. 435 commits after we started working on 1.12.

1.10.5 is our latest patch release for the 1.10 branch, which we're still maintaining for severe bugs.

@duchenpaul
Author

Yes, I noticed the issue with the version number. But I am sure there is an issue in the new version.
I downloaded Gitea from the master folder here: https://dl.gitea.io/gitea/master/gitea-master-linux-arm-6; this is the newest version and where this thing occurs.

@duchenpaul
Author

duchenpaul commented Mar 9, 2020

Upgraded to Gitea version 1.11.2 (0768823) from the GitHub releases; the problem persists.
I will downgrade version by version to narrow down the problem.

@duchenpaul
Author

Looks like the bug is between these 2 commits: ff24f81 - 0768823.
I am running v1.11.1 (ff24f81); so far so good.

@zeripath
Contributor

zeripath commented Mar 9, 2020

Are you definitely running MySQL and not SQLite?

@duchenpaul
Author

MySQL: version: mysql Ver 15.1 Distrib 10.3.22-MariaDB, for debian-linux-gnueabihf (armv8l) using readline 5.2

I don't think this is related to the database.
The Gitea server cannot handle some requests from the client side; these are processes like the one below, just hanging there for hours, taking up all the CPU.
I wonder if there is a log to track this behavior? It is interesting that this only happens on v1.11.2 among the GitHub v1.11.x releases. There must be an issue somewhere.

gitea serv key-3 --config=/home/git/gitea/custom/conf/app.ini

@zeripath
Contributor

zeripath commented Mar 9, 2020

The stacktraces are not helpful - they'd be more useful for figuring out why something is causing trouble once we have identified an issue. That being said they're not too difficult to remove from the logs for quick review.

I've looked at your daemon.log and I can't see any request to Gitea that is opened without being closed, and these all take at most around a second.

I'm looking at the others right now.

@zeripath
Contributor

zeripath commented Mar 9, 2020

Do you have any logs from when there is a hang?

@duchenpaul
Author

These are the logs I caught when there was a hang.

@duchenpaul
Author

I don't know if this is appropriate, but if you like, I could give you ssh access to my server, just to help investigate the problem.
My email: [email protected]

@zeripath
Contributor

zeripath commented Mar 9, 2020

Running grep git auth.log | grep sshd:session reveals only a few sessions greater than 2 minutes:

 "sshd[1969]:"    23.0
 "sshd[2114]:"     7.0
 "sshd[5028]:"    83.0
 "sshd[5363]:"    25.0
 "sshd[5574]:"     8.0
 "sshd[6406]:"    12.0

sshd[5028] is:

Mar  8 20:57:53 venus sshd[5028]: Accepted publickey for git from 61.155.3.10 port 6063 ssh2: RSA SHA256:
Mar  8 20:57:53 venus sshd[5028]: pam_unix(sshd:session): session opened for user git by (uid=0)
...
Mar  8 22:21:26 venus sshd[5028]: pam_unix(sshd:session): session closed for user git
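(For reference, that isn't the exact pipeline I used, but a rough equivalent for computing per-PID session durations could look like the sketch below; it assumes a Debian-style auth.log and GNU awk for mktime, with the year hardcoded because syslog timestamps omit it.)

grep 'sshd:session' auth.log | grep 'user git' | gawk '
BEGIN { split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " "); for (i = 1; i <= 12; i++) mon[m[i]] = i }
{
    split($3, t, ":")                                  # HH:MM:SS
    ts  = mktime("2020 " mon[$1] " " $2 " " t[1] " " t[2] " " t[3])
    pid = $5                                           # e.g. "sshd[5028]:"
    if (/session opened/) start[pid] = ts
    else if (/session closed/ && pid in start)
        printf "%-14s %6.1f\n", pid, (ts - start[pid]) / 60   # duration in minutes
}'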

@zeripath
Contributor

zeripath commented Mar 9, 2020

But I can't see what was happening at that point because there are no logs.

@duchenpaul
Author

Let me look at it from another angle. I pasted the hanging process details above. That process started at Mar 9 09:46:14, and below is what I extracted from the auth log. I believe this is the hanging session; it has no close entry because I killed it manually or just rebooted the server.

Mar  9 09:46:14 venus sshd[8469]: Accepted publickey for git from 192.168.2.1 port 50681 ssh2: RSA SHA256:rV+R5HyRUMVvl4taCsKmkY0IMhq7Rg0437wWScz6U1o
Mar  9 09:46:14 venus sshd[8469]: pam_unix(sshd:session): session opened for user git by (uid=0)
Mar  9 09:46:14 venus systemd-logind[253]: New session c187 of user git.

@duchenpaul
Author

Of course you can't find it. Let me summarize the problem:
The hanging process is triggered by git requests from the client side.
My git client is GitHub Desktop; it does git fetch regularly, and some of its requests just hang there.
Maybe I should post the GitHub Desktop log.

@duchenpaul
Author

I have updated the issue description (the first comment) with some info I collected over the past few days.

@jedy

jedy commented Mar 10, 2020

I have the same problem after I updated Gitea to 1.11.2. My system is Linux amd64 and I use SQLite.

@lunny
Member

lunny commented Mar 10, 2020

According to the issue information, one of the processes is calling /home/git/gitea/gitea hook --config=/home/git/gitea/custom/conf/app.ini post-receive.

@jedy If you exit all git clients and then upgrade to v1.11.2, does it still occur?

@guillep2k
Member

I can only suggest creating a core dump from one of those processes. We may at least be able to see what it's doing. The problem is that there's no way of sanitizing a core dump, so you should not share its raw contents.

If you've got delve or gdb installed, maybe you could open those cores and tell us what they are doing.
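A sketch of one way to do that, assuming gdb's gcore is available and using PID 8480 and the paths from the ps output above:

# Dump the hung process without killing it; writes /tmp/gitea-hang.8480
sudo gcore -o /tmp/gitea-hang 8480

# Inspect it locally with gdb (don't share the raw core, it can contain secrets):
gdb -ex "set pagination 0" -ex "thread apply all bt" --batch \
    /home/git/gitea/gitea /tmp/gitea-hang.8480

# Or with delve, which understands goroutines (on platforms delve supports):
dlv core /home/git/gitea/gitea /tmp/gitea-hang.8480
# then at the (dlv) prompt: goroutines, bt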

@duchenpaul
Author

OK, I will get back to you.

@jedy

jedy commented Mar 10, 2020

I compiled 1.11.2 with go 1.13.8. It seems OK now.

@duchenpaul
Author

@jedy Could you compile a linux-arm-6 version for me if possible?

@jedy

jedy commented Mar 10, 2020

@duchenpaul Sorry, I don't have an ARM environment. I got some errors trying to cross-compile on Linux.

@jedy

jedy commented Mar 10, 2020

gdb backtrace of the hanging process:

[root@s60 git]# gdb -ex "set pagination 0" -ex "thread apply all bt" --batch -p 24006
0x000000000045b4e8 in runtime.step (p=..., pc=0xc000009828, val=0xc000009824, first=false, newp=..., ok=true) at /usr/local/go/src/runtime/symtab.go:867
867	/usr/local/go/src/runtime/symtab.go: No such file or directory.
warning: Missing auto-load scripts referenced in section .debug_gdb_scripts
of file /var/git/gitea
Use `info auto-load python [REGEXP]' to list them.

Thread 1 (process 24006):
#0  0x000000000045b4e8 in runtime.step (p=..., pc=0xc000009828, val=0xc000009824, first=false, newp=..., ok=true) at /usr/local/go/src/runtime/symtab.go:867
#1  0x000000000045a785 in runtime.pcvalue (f=..., off=11692510, targetpc=22398159, cache=0x0, strict=true, ~r5=<optimized out>) at /usr/local/go/src/runtime/symtab.go:687
#2  0x000000000045b276 in runtime.pcdatavalue (f=..., table=0, targetpc=22398159, cache=0x0, ~r4=<optimized out>) at /usr/local/go/src/runtime/symtab.go:822
#3  0x000000000043d17f in runtime.isAsyncSafePoint (gp=0xc000000180, pc=22398159, sp=824645152416, lr=<optimized out>, ~r4=<optimized out>) at /usr/local/go/src/runtime/preempt.go:396
#4  0x0000000000451858 in runtime.doSigPreempt (gp=0xc000000180, ctxt=0xc0000099d0) at /usr/local/go/src/runtime/signal_unix.go:329
#5  0x0000000000452429 in runtime.sighandler (sig=23, info=0xc000009bf0, ctxt=0xc000009ac0, gp=0xc000000180) at /usr/local/go/src/runtime/signal_unix.go:536
#6  0x0000000000451a49 in runtime.sigtrampgo (sig=23, info=0xc000009bf0, ctx=0xc000009ac0) at /usr/local/go/src/runtime/signal_unix.go:444
#7  0x00000000004707f3 in runtime.sigtramp () at /usr/local/go/src/runtime/sys_linux_amd64.s:389
#8  <signal handler called>
#9  0x000000000155c4cf in code.gitea.io/gitea/modules/public.glob..func1 (~r0=...) at /go/src/code.gitea.io/gitea/modules/public/bindata.go:14786
#10 0x0000000001565b52 in code.gitea.io/gitea/modules/public.init () at /go/src/code.gitea.io/gitea/modules/public/bindata.go:14787
#11 0x000000000044b0ea in runtime.doInit (t=0x42de880 <code.gitea.io/gitea/modules/public..inittask>) at /usr/local/go/src/runtime/stubs.go:12
#12 0x000000000044b0b7 in runtime.doInit (t=0x42ebb40 <code.gitea.io/gitea/routers/routes..inittask>) at /usr/local/go/src/runtime/proc.go:5409
#13 0x000000000044b0b7 in runtime.doInit (t=0x42edd00 <code.gitea.io/gitea/cmd..inittask>) at /usr/local/go/src/runtime/proc.go:5409
#14 0x000000000044b0b7 in runtime.doInit (t=0x42dd460 <main..inittask>) at /usr/local/go/src/runtime/proc.go:5409
#15 0x000000000043e4ce in runtime.main () at /usr/local/go/src/runtime/proc.go:190
#16 0x000000000046e8f1 in runtime.goexit () at /usr/local/go/src/runtime/asm_amd64.s:1373
#17 0x0000000000000000 in ?? ()

@jedy

jedy commented Mar 10, 2020

I ran strace on the hanging process. A lot of SIGURG signals popped up. I thought it might be related to the new preemptible runtime of Go, which uses SIGURG, so I tried Go 1.13.8. After an hour of running well, I think that's the problem.

@guillep2k
Member

I ran strace on the hanging process. A lot of SIGURG signals popped up. I thought it might be related to the new preemptible runtime of Go, which uses SIGURG, so I tried Go 1.13.8. After an hour of running well, I think that's the problem.

Nice catch! We need to look closely into this.
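One low-effort way to test that hypothesis without recompiling, if the binary was built with Go 1.14: the new signal-based preemption can be switched off at runtime via GODEBUG (sketch only; the binary path is taken from the ps output above):

GODEBUG=asyncpreemptoff=1 /home/git/gitea/gitea web

# The hook/serv processes are spawned via sshd rather than by the web process,
# so for those the variable would need to be set wherever they inherit their
# environment (e.g. the git user's environment), not just in an interactive shell.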

@guillep2k
Member

The backtrace shows only the first thread, which is probably not the most useful. 🙁

@lunny
Member

lunny commented Mar 10, 2020

#10684 may fix this; it will be released in v1.11.3.

@duchenpaul
Author

My Go version is Go 1.13.8, is that accurate?

(screenshot)

@duchenpaul
Author

Here is my gdb dump; any thoughts on this?

git@Venus ~ $ gdb -ex "set pagination 0" -ex "thread apply all bt" --batch -p 2327
[New LWP 2328]
[New LWP 2329]
[New LWP 2330]
[New LWP 2331]
0x00065868 in runtime.step (p=..., pc=0x4409b50, val=0x4409b4c, first=false, newp=..., ok=true) at /usr/local/go/src/runtime/symtab.go:867
867	/usr/local/go/src/runtime/symtab.go: No such file or directory.
warning: Missing auto-load script at offset 0 in section .debug_gdb_scripts
of file /home/git/gitea/gitea.
Use `info auto-load python-scripts [REGEXP]' to list them.

Thread 5 (LWP 2331):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_arm.s:415
#1  0x00042568 in runtime.futexsleep (addr=0x411b8c4 <runtime.newmHandoff+12>, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x0001a404 in runtime.notesleep (n=0x411b8c4 <runtime.newmHandoff+12>) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0004c298 in runtime.templateThread () at /usr/local/go/src/runtime/proc.go:1806
#4  0x0004ad6c in runtime.mstart1 () at /usr/local/go/src/runtime/proc.go:1097
#5  0x0004aca8 in runtime.mstart () at /usr/local/go/src/runtime/proc.go:1062
#6  0x014ca07c in crosscall_arm1 () at gcc_arm.S:30
#7  0x014c9dc4 in threadentry ()
#8  0x014cb5a8 in start_thread (arg=0xa51ff2f0) at pthread_create.c:463
#9  0x014ff4c8 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

Thread 4 (LWP 2330):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_arm.s:415
#1  0x00042568 in runtime.futexsleep (addr=0x445066c, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x0001a404 in runtime.notesleep (n=0x445066c) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0004c378 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1828
#4  0x0004cb88 in runtime.startlockedm (gp=0x44000e0) at /usr/local/go/src/runtime/proc.go:2001
#5  0x0004e290 in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2557
#6  0x0004ee14 in runtime.goschedImpl (gp=0x4400700) at /usr/local/go/src/runtime/proc.go:2705
#7  0x0004ee9c in runtime.gosched_m (gp=0x4400700) at /usr/local/go/src/runtime/proc.go:2713
#8  0x0007671c in runtime.mcall () at /usr/local/go/src/runtime/asm_arm.s:285
#9  0x0444dfc8 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

Thread 3 (LWP 2329):
#0  runtime.futex () at /usr/local/go/src/runtime/sys_linux_arm.s:415
#1  0x00042568 in runtime.futexsleep (addr=0x445048c, val=0, ns=-1) at /usr/local/go/src/runtime/os_linux.go:44
#2  0x0001a404 in runtime.notesleep (n=0x445048c) at /usr/local/go/src/runtime/lock_futex.go:151
#3  0x0004c378 in runtime.stopm () at /usr/local/go/src/runtime/proc.go:1828
#4  0x0004cb88 in runtime.startlockedm (gp=0x44000e0) at /usr/local/go/src/runtime/proc.go:2001
#5  0x0004e290 in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2557
#6  0x0004ebf8 in runtime.park_m (gp=0x44007e0) at /usr/local/go/src/runtime/proc.go:2690
#7  0x0007671c in runtime.mcall () at /usr/local/go/src/runtime/asm_arm.s:285
#8  0x0444de00 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

Thread 2 (LWP 2328):
#0  runtime.usleep () at /usr/local/go/src/runtime/sys_linux_arm.s:575
#1  0x000531f8 in runtime.sysmon () at /usr/local/go/src/runtime/proc.go:4473
#2  0x0004ad6c in runtime.mstart1 () at /usr/local/go/src/runtime/proc.go:1097
#3  0x0004aca8 in runtime.mstart () at /usr/local/go/src/runtime/proc.go:1062
#4  0x014ca07c in crosscall_arm1 () at gcc_arm.S:30
#5  0x014c9dc4 in threadentry ()
#6  0x014cb5a8 in start_thread (arg=0xa6d2c2f0) at pthread_create.c:463
#7  0x014ff4c8 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

Thread 1 (LWP 2327):
#0  0x00065868 in runtime.step (p=..., pc=0x4409b50, val=0x4409b4c, first=false, newp=..., ok=true) at /usr/local/go/src/runtime/symtab.go:867
#1  0x00064d68 in runtime.pcvalue (f=..., off=7823431, targetpc=12885908, cache=0x0, strict=true, ~r5=<optimized out>) at /usr/local/go/src/runtime/symtab.go:687
#2  0x00065704 in runtime.pcdatavalue (f=..., table=0, targetpc=12885908, cache=0x0, ~r4=<optimized out>) at /usr/local/go/src/runtime/symtab.go:822
#3  0x000471e4 in runtime.isAsyncSafePoint (gp=0x44000e0, pc=12885908, sp=86869636, lr=<optimized out>, ~r4=<optimized out>) at /usr/local/go/src/runtime/preempt.go:396
#4  0x0005c9e0 in runtime.doSigPreempt (gp=0x44000e0, ctxt=0x4409c1c) at /usr/local/go/src/runtime/signal_unix.go:329
#5  0x0005d5d0 in runtime.sighandler (sig=23, info=0x4409c90, ctxt=0x4409d10, gp=0x44000e0) at /usr/local/go/src/runtime/signal_unix.go:536
#6  0x0005cc6c in runtime.sigtrampgo (sig=23, info=0x4409c90, ctx=0x4409d10) at /usr/local/go/src/runtime/signal_unix.go:444
#7  0x00079728 in runtime.sigtramp () at /usr/local/go/src/runtime/sys_linux_arm.s:534
#8  0x00000000 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
[Inferior 1 (process 2327) detached]
[ 19:07 - 192.168.31.150  ]
git@Venus ~ $ 

@joaodforce

I'm having this same issue after updating to 1.11.2.
I'm using SQLite and running under Fedora Server 30 x64.

If someone needs more info regarding this, I'm available.

@guillep2k
Member

1.11.3 is out in a few minutes. Please try with that and let us know your results. (Make sure no Gitea process is still active before upgrading).
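For example, something like this could be used to verify before upgrading (a sketch, assuming the systemd unit is named gitea and the repositories are owned by the git user; adjust to your setup):

sudo systemctl stop gitea
pgrep -a -u git gitea && echo "gitea processes still running" || echo "all clear"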

@duchenpaul
Author

Looking good so far, but did we lose the code language bar?
There used to be an indicator showing the percentage of each language in this project.
(screenshot)

@jolheiser
Member

jolheiser commented Mar 11, 2020

Language statistics are only in versions 1.12+ (1.12 is the currently unreleased master branch)

@duchenpaul
Author

duchenpaul commented Mar 11, 2020

OK. As for the issue found in 1.11.2: it looks like it is fixed! Kudos!!

@lunny lunny closed this as completed Mar 11, 2020
@mrsdizzie
Member

I'm not sure this is actually fixed -- if compiling with Go 1.13 fixes it, then master and new versions should still have the same problem using Go 1.14 going forward.

@zeripath
Contributor

Yes @mrsdizzie, we're going to need to figure out what is causing this. The Go release documentation is not very clear as to how this is supposed to be solved.

@duchenpaul
Author

@zeripath @mrsdizzie let me know if you need any assistance from me

mrsdizzie added a commit to mrsdizzie/gitea that referenced this issue Apr 28, 2020
…14 in git env

As seen while troubleshooting go-gitea#11032, the new feature of Go 1.14 is causing several-second delays in startup in certain situations. Debugging shows it spending several seconds handling SIGURG signals during init:

```
6922:04:51.984234 trace init() ./modules/queue/unique_queue_wrapped.go
remote: ) = 69 <0.000012>
remote: [pid 15984] 22:04:51 write(1, "\ttime taken: 236.761\302\265s\n\n", 25    time taken: 236.761µs
remote:
remote: ) = 25 <0.000011>
remote: [pid 15984] 22:04:51 --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=15984, si_uid=0} ---
remote: [pid 15984] 22:04:52 --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=15984, si_uid=0} ---
remote: [pid 15984] 22:04:52 --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=15984, si_uid=0} ---
```

This causes up to 20 seconds added to a push in some cases, as it happens for each call of the gitea hook command. This is likely the cause of go-gitea#10661 as well, and would start to affect users once we release 1.12, which would be the first release compiled with Go 1.14. I suspect this is just a slight issue with the upstream implementation, as there have been a few very similar bugs fixed and reported:

 golang/go#37741
 golang/go#37942

We should revisit this in the future and see if a newer version of Go has solved it, but for now disable this option in the environment that gitea hook runs in to avoid it.
techknowlogick pushed a commit that referenced this issue Apr 28, 2020
…14 in git env (#11237)
ydelafollye pushed a commit to ydelafollye/gitea that referenced this issue Jul 31, 2020
…14 in git env (go-gitea#11237)
@go-gitea go-gitea locked and limited conversation to collaborators Nov 24, 2020