
basically the application is broken. #3497

Open
jucajuca opened this issue Jan 24, 2024 · 44 comments

@jucajuca

jucajuca commented Jan 24, 2024

I just wasted more than one hour trying to update my proxies.

I kept getting the same error again and again: "Could not delete file." This is absolutely basic functionality that should be offered by the nginx proxy manager, right? I would expect a proxy manager to be able to edit proxies.

But it basically can't do this basic thing. This, added to the even worse handling of SSL certificates, led me to the conclusion that I need a better solution.

So to anyone reading this: look for other solutions: caddy, traefik, whatever. Do not waste time here.

And to the developers, please do the world a favor and archive this project.

[1/24/2024] [5:07:50 PM] [Nginx    ] › ⬤  debug     Deleting file: /data/nginx/proxy_host/14.conf
[1/24/2024] [5:07:50 PM] [Global   ] › ⬤  debug     CMD: /usr/sbin/nginx -t -g "error_log off;"
[1/24/2024] [5:07:50 PM] [Nginx    ] › ⬤  debug     Deleting file: /data/nginx/proxy_host/14.conf
[1/24/2024] [5:07:50 PM] [Nginx    ] › ⬤  debug     Could not delete file: {
  "errno": -2,
  "code": "ENOENT",
  "syscall": "unlink",
  "path": "/data/nginx/proxy_host/14.conf"
}
[1/24/2024] [5:07:50 PM] [Nginx    ] › ⬤  debug     Deleting file: /data/nginx/proxy_host/14.conf.err
[1/24/2024] [5:07:50 PM] [Nginx    ] › ⬤  debug     Could not delete file: {
  "errno": -2,
  "code": "ENOENT",
  "syscall": "unlink",
  "path": "/data/nginx/proxy_host/14.conf.err"
}
[1/24/2024] [5:07:50 PM] [Global   ] › ⬤  debug     CMD: /usr/sbin/nginx -t -g "error_log off;"
[1/24/2024] [5:07:51 PM] [Nginx    ] › ℹ  info      Reloading Nginx
[1/24/2024] [5:07:51 PM] [Global   ] › ⬤  debug     CMD: /usr/sbin/nginx -s reload
[1/24/2024] [5:07:57 PM] [Global   ] › ⬤  debug     CMD: /usr/sbin/nginx -t -g "error_log off;"
[1/24/2024] [5:07:57 PM] [Nginx    ] › ⬤  debug     Deleting file: /data/nginx/proxy_host/14.conf
[1/24/2024] [5:07:57 PM] [Nginx    ] › ⬤  debug     Could not delete file: {
  "errno": -2,
  "code": "ENOENT",
  "syscall": "unlink",
  "path": "/data/nginx/proxy_host/14.conf"
}

Checklist

  • Have you pulled and found the error with jc21/nginx-proxy-manager:latest docker image?
    • Yes : image: jc21/nginx-proxy-manager:2.11.1
  • Are you sure you're not using someone else's docker image?
    • Yes
  • Have you searched for similar issues (both open and closed)?
    • Yes. There are open ones, and it seems that the team does not care.

Describe the bug

Try to edit a proxy. It is impossible. Check the logs; see the result above.

Nginx Proxy Manager Version

image: jc21/nginx-proxy-manager:2.11.1

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior

Maybe it should work?


@jucajuca jucajuca added the bug label Jan 24, 2024
@samtoxie

Same experience, downgraded to 2.9.22. Seems to work better but I'm considering migrating away as NPM requires more maintenance than plain NGINX :/

@jorgepsmatos

I wish I could say I only wasted one hour on this. Even on 2.9.22 I have the same error. Maybe it's because I'm using the ARM version.

@blackoutland

I can't even add new hosts now. It started when I mistakenly requested a certificate before changing the DNS record, which led to an error. From then on, whatever I do, even when using completely different domains, adding them always fails with "Internal error" and the message shown above.

@pathia

pathia commented Jan 28, 2024

2.10.4 doesn't seem to have this issue. Perhaps try reverting to that version in the meantime.

@blackoutland

I looked at the code: the error is logged (I would simply add a missing "if" to check whether the file to be deleted actually exists), but it does not cause any other issue; the process isn't aborting because of it, it just logs the error.
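For illustration, a minimal sketch of the kind of guard I mean, assuming a Node.js helper built on fs.promises (the deleteIfExists name is just illustrative, not the project's actual code):

```js
// Sketch only: tolerate a missing file instead of logging "Could not delete file".
const fs = require('fs').promises;

async function deleteIfExists(filePath) {
  try {
    await fs.unlink(filePath);
  } catch (err) {
    if (err.code === 'ENOENT') {
      // The file is already gone; nothing to clean up, so this is not an error.
      return;
    }
    // Real failures (permissions, I/O) should still surface.
    throw err;
  }
}
```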

The problem I faced is that this tool is extremely optimistic, in that it does not actually look at any errors returned by Let's Encrypt and doesn't give any more details on them. For me the real cause was "Some challenges have failed.", which came after this error.
It also logged "Saving debug log to /tmp/letsencrypt-log/letsencrypt.log". Looking at that log, I found out that for whatever reason Let's Encrypt used the old server IP and not the new one, even though the NS records had not changed. I had changed the IP in both nameservers, querying them with "dig" returned the correct new IP, and in WHOIS those NS entries had been there for years.
But that's another issue.

So for me the main problem is that this tool does this:

  • it always creates the host entry
  • when Let's Encrypt fails, it only shows "Internal error", which is actually wrong because it's an external error caused by Let's Encrypt; the tool needs to parse the Let's Encrypt result and show the corresponding message to the user, but it appears this was never implemented (that's what I call extremely optimistic; a sketch of that kind of parsing follows this list)
  • when you try too often, Let's Encrypt itself locks you out for one hour for the given host, something you'll also only see in the log
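Here is that sketch, assuming the backend shells out to certbot via Node's child_process; the requestCertificate name and error messages are hypothetical, not the project's actual code:

```js
// Sketch only: map a known certbot failure to a useful message instead of a generic "Internal error".
const { execFile } = require('child_process');

function requestCertificate(certbotArgs, callback) {
  execFile('certbot', certbotArgs, (err, stdout, stderr) => {
    if (!err) return callback(null, stdout);
    const output = `${stdout}\n${stderr}`;
    if (output.includes('Some challenges have failed')) {
      // Surface the external cause and point at the debug log certbot mentions.
      return callback(new Error(
        "Let's Encrypt challenge validation failed; see /tmp/letsencrypt-log/letsencrypt.log"
      ));
    }
    return callback(err);
  });
}
```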

@jucajuca Do you also get the error "Some challenges have failed" after this error? If you do, you should look at the letsencrypt.log.

@schneekluth

I feel your pain. Lately I did some research for a replacement and I think this one looks promising. I'll just leave the link in case it might be an option for you guys.

Website: https://zoraxy.arozos.com/

GitHub: https://github.com/tobychui/zoraxy

@coeki

coeki commented Jan 29, 2024

@blackoutland

I somewhat agree with your conclusions; I got sidetracked by the error too, although at least something popped up.
The logs for almost everything are empty; docker compose logs gave me small hints.
In my case I made a configuration error myself, but it still let me request certs and set up the host. Looking at the cert itself, DNS had failed. There's nothing in the logs anywhere, and you don't get a warning. My bad: I set the NPM docker container port to 8080 on the host, and that's a no-no ;).
Anyway it works now, but yeah, logging and letting the user know could obviously be better.

@manuelmanharta1

Having the same issue, but my Let's Encrypt renewal went through smoothly. The error is as uninformative as it can get, because it tries to delete a file that doesn't even exist in my container (/data/nginx/proxy_host/3.conf in my case).

Although I do like the UI, I am also thinking of switching back to either plain NGINX or jwilder-nginx-proxy, as I am running everything dockerized.

@manuelmanharta1

Update: I could solve it by deleting the problematic entry and recreating it. From my (user's) point of view nothing changed, but it seems that something got corrupted under the hood.

Still, it is very sad, because when debugging is almost impossible you can only hope never to encounter bugs.

@saman-taghavi

Though it has worked well for me, I too face the same problem. I'd hate to migrate, though.

@Reasonable-Human

Hello, I have the same error as the original post, but it seems to only affect hosts that have custom locations in use. After deleting the custom locations, the proxy host works fine again.

@drzwi-cal

I cannot work without custom locations. For me it is a key feature: I need to proxy the /uploads folder to another service. Is there any workaround to make this work without custom locations?

@pathia

pathia commented Mar 11, 2024

I cannot work without custom locations. For me it is a key feature: I need to proxy the /uploads folder to another service. Is there any workaround to make this work without custom locations?

Put uploads on a different subdomain?
Or otherwise use plain nginx (https://hub.docker.com/_/nginx) and create your own nginx proxy manager. You can re-use some of the configs inside the nginx-proxy-manager container. There's no magic involved here; it's just a nice graphical UI over vanilla nginx.

@bkilinc

bkilinc commented Mar 11, 2024

As a workaround, I downgraded to version 2.10.4 with docker-compose. I hope there is no corruption in the backend data?

@pathia

pathia commented Mar 11, 2024

As a workaround, I downgraded to version 2.10.4 with docker-compose. I hope there is no corruption in the backend data?

Remember this is usually a directly internet-facing service. I wouldn't do that (at least not for too long) for security reasons.
But that's just personal preference, I guess.

@bkilinc

bkilinc commented Mar 11, 2024

Remember this is usually a directly internet-facing service. I wouldn't do that (at least not for too long) for security reasons.
But that's just personal preference, I guess.

Thx. Currently, I have no other choice. Just knowing that backend data is not corrupted is OK. I will move to Caddy in the meantime.

@pathia

pathia commented Mar 11, 2024

Remember this is usually a directly internet-facing service. I wouldn't do that (at least not for too long) for security reasons.
But that's just personal preference, I guess.

Thx. Currently, I have no other choice. Just knowing that backend data is not corrupted is OK. I will move to Caddy in the meantime.

In that case I can understand. Regarding your question about the backend data, I really can't answer as I'm not a dev on this project, but I wouldn't expect any. I did the exact same rollback to 2.10.4 by only changing the latest tag to 2.10.4 and then running docker compose up -d. It worked fine for the week I used it like that. After that I migrated to plain nginx.

@jucajuca
Author

@jc21
can you archive the application? it is not maintained and is not working anymore.

@bkilinc

bkilinc commented Mar 12, 2024

can you archive the application? it is not maintained and is not working anymore.

Just out of curiosity: why do you say it is unmaintained? The last commits were two weeks ago, and it is quite popular software. There are 100M+ pulls on Docker Hub, and the last update was 2 days ago.

@jucajuca
Author

@bkilinc have you noticed the last 50+ issues? Many point to the same problem. One issue says that basically the last x versions are not working. The application is simply not working, and evidently the new commits are not fixing the issues. So yes, you can commit and commit; that does not mean it is a working or quality application. It just means that someone is writing some sort of code. It could be an update to the README.

Pulls... I can easily pull 10 images a day. A k8s cluster will pull hundreds if not thousands a day...

I strongly recommend looking for other solutions. I have also worked with Traefik and never experienced such horrible issues.

@bkilinc

bkilinc commented Mar 12, 2024

I strongly recommend looking for other solutions. I have also worked with Traefik and never experienced such horrible issues.

Thx. I was just asking. I will place my bet on Caddy. I don't trust anything with a GUI, especially for basic services.

@Intevel

Intevel commented Mar 23, 2024

Facing the same issue.. Any fixes?

@repier37

Rolling back to 2.10.4 worked for me

@Intevel

Intevel commented Mar 24, 2024

Same for me, the error is shown but it's working on 2.10.4

@RedWings-R

@bkilinc have you noticed the last 50+ issues? Many point to the same problem. [...] I strongly recommend looking for other solutions.

+1

@Freekers

@jc21 can you archive the application? it is not maintained and is not working anymore.

I get where you're coming from, but the latest release is just 2 months old and the latest commits date back 4 weeks. I think this project is just a victim of its own success, which is not manageable by a single person.

@CorneliusCornbread

CorneliusCornbread commented Apr 14, 2024

@jc21 can you archive the application? it is not maintained and is not working anymore.

I get where you're coming from, but the latest release is just 2 months old and the latest commits date back 4 weeks. I think this project is just a victim of its own success, which is not manageable by a single person.

Despite that, it quite literally does not work on a single machine that any possible target user wants to put it on. Moreover, this is still the case 3 whole months after this application-breaking issue was opened. Worse yet, despite the developers 'maintaining' the project, they haven't responded to any of these issues. A project without support for the outstanding issues that make it unusable is dead. A project that is unusable and has no plans to fix itself in the near future is dead.

@hermitguo

I'm migrating from an earlier version, currently 2.11.1, and I have the same problem as the issue author: I can't delete files when creating new records.
Maybe the data volume has some argument logic wrong, or there is a problem with a check somewhere.

@hermitguo

I'm migrating from an earlier version, currently 2.11.1, and I have the same problem as the issue author: I can't delete files when creating new records.

Don't be angry. I encountered the same problem as you and downgraded to 2.10.4, which works normally. NPM is a very good product, and it's open source; running into some bugs is normal for software. If there is a problem somewhere, we can find it together and fix it.

@jc21
Member

jc21 commented May 4, 2024

Well this is awkward. This isn't my first open source package and it won't be my last, but it always baffles me how the public loves to criticise a project that is given to them for free and out of the goodness of their hearts. And before anyone talks about the donations, you can see from the donations page they are few and very far between.

Many thanks to those coming to my defense though, I really appreciate that :)

@jucajuca No I'm not going to archive this because you and a small percentage of users are having an issue. Yes I do maintain as much as I can given I too have a life to live. I've been overseas and without a computer for April.

@Freekers you are absolutely correct. This is only maintained by me. I rely on pull requests from the community for things I cannot test, mainly DNS providers. I had help for a while from someone I've never met, but they too have their own life to live. Sadly no-one has offered since.

@CorneliusCornbread No I'm not going to respond to all of the issues. I receive a LOT of GitHub emails every day; I am not put on this earth to fix everyone's problems all on my own. As for the project being unusable, I cannot disagree enough. I deploy this project in 4 different homes and on multiple architectures. I eat my own dog food.

But hey this project and the limited developer effort you're getting isn't for everyone.

As for the initial deletion issue itself, can someone tell me if they are deploying the project with PUID/PGID set or running as root? Also, some steps to reproduce would be nice.

@Intevel

Intevel commented May 4, 2024

As for the initial deletion issue itself, can someone tell me if they are deploying the project with PUID/PGID set or running as root?

I've deployed as root.

@jucajuca
Author

jucajuca commented May 5, 2024

You can draw your own conclusions from the nature of the errors, from the way the issue has been handled and from the response of the developers.

If anyone starts an open source project, it comes with a very big responsibility; a minimum would be to test the code before publishing it. Obviously in this project the code was never tested, the issue was neglected, and it is unclear whether it has been addressed. Unfortunately it also seems that the project lacks a proper community of maintainers, and there is only one developer paying attention to it.

Given that this is a web-facing service, I would yet again strongly recommend looking for other solutions. Given the points mentioned above, this could pose a major security risk.

https://traefik.io/traefik/
https://caddyserver.com/

I do not mind hurting the pride that the developers may have in the project. If it is not properly maintained, for whatever reason, the project poses a risk.

@Freekers

Freekers commented May 5, 2024

I do not mind hurting the pride that the developers may have in the project.

This toxicity is uncalled for. You made your point.

@WilliDieEnte

WilliDieEnte commented May 11, 2024

Yikes, I'm having the same issues with it, but wow, some people can be very vocal about the free and open-source software they use... If it pains you so much, fix it yourself, and don't talk down to someone just because a project they made and allowed everyone to use freely doesn't work perfectly 100% of the time in your deployment / use case.
Am I annoyed and switching away from it? Yep, but I can also totally understand the developer, and in the end it's my fault that I relied entirely on their hard work and had no fallback.

@Merrit

Merrit commented May 15, 2024

Wow, the toxic attitude from several here is astounding.

I hear that you've experienced frustration and disappointment - I've had no small amount of frustration myself. However directing that frustration and those toxic words towards developers and contributors is simply not acceptable.

If anyone feels the need to vent their frustrations like this, I strongly suggest you take a step back and do some self-reflection about this free, open source project that you feel so entitled to. It is fine to voice frustration in a constructive way, but if you have nothing constructive to say and simply wish to vent, please find somewhere else to do so. The open source community does not accept that behaviour, and I have to speak up having seen it.

@jc21 I am sorry you've had to deal with this, and know that as a frustrated user doing troubleshooting, I and the majority are 100% behind you and extremely appreciative of the project that you've shared with us all. 💙

@boehamian

Same issues here. Tried versions 2.9.22 and 2.10.4 as people have suggested, with little result.

The interesting thing is: installed once, it didn't work.
Installed a second time, it didn't work.
Installed a third time and it worked.
Then, while mucking around with setting the time to my local time zone, I accidentally deleted the stack in Portainer and had to reinstall. Now I'm back to the same errors.

@marcmacmac

@jc21 first of all: thank you for NPM! Nothing is perfect, no solution is one size fits all, haters gonna hate, etc. NPM brought us all closer to nginx-config peace than any other tool before.

What I found out: if the upstream host in a custom location is not resolvable, saving works, but the proxy host switches to offline. If you reload nginx you usually get an error for this. That error is missing, and instead we get this "ENOENT" message.
DNS errors are not the only cause; any error in the upstream host leads to this behaviour. But DNS errors might be the reason why most people end up here, since the problem only occurs when they update a host or the whole application. Once it's started, an upstream can go offline and nginx keeps running.
Perhaps you could output the nginx error to help debug these situations better? A rough sketch of what that might look like is below.
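This is only a sketch, assuming the backend invokes nginx via Node's child_process; the testNginxConfig name is made up for illustration:

```js
// Sketch only: run "nginx -t" and surface its stderr when the config test fails,
// instead of the unrelated ENOENT message users currently see.
const { execFile } = require('child_process');

function testNginxConfig(callback) {
  execFile('/usr/sbin/nginx', ['-t', '-g', 'error_log off;'], (err, stdout, stderr) => {
    if (err) {
      // nginx reports the offending file/line (e.g. an unresolvable upstream host) on stderr.
      return callback(new Error('nginx config test failed:\n' + stderr.trim()));
    }
    callback(null);
  });
}
```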

Thank you again and keep on with this beautiful project!

@W00glin

W00glin commented Jul 18, 2024

As for the initial deletion issue itself, can someone tell me if they are deploying the project with PUID/PGID set or running as root? Also, some steps to reproduce would be nice.

So if someone wants to sanity check my methods, I think I am running this as root and running into this issue. For some context, I am running this on Portainer on an LXC host inside Proxmox. I am using Portainer Version CE 2.19.5. I am using (I think) the latest image of NGINX Proxy Manager (v2.11.3) but my docker-compose.yml file is configured to pull down the latest image.

To check, I ran docker exec -it your_container_id whoami

And I got root in response.

Next I checked the logs to see what the IDs were:


 _   _ ____  __  __
| \ | |  _ \|  \/  |
|  \| | |_) | |\/| |
| |\  |  __/| |  | |
|_| \_|_|   |_|  |_|
-------------------------------------
User:  npm PUID:0 ID:0 GROUP:0
Group: npm PGID:0 ID:0
-------------------------------------

As far as steps to reproduce: it seems like whenever I add or create a new proxy in the web UI, I get this issue. It appears to occur even with a fresh deployment I had on another test machine. I did try to manually delete the file inside the docker container itself, but that has not fixed the issue.

@flupkede

flupkede commented Nov 1, 2024

Hi @jc21, I just started using your package a week ago and it works perfectly, just what I needed. Thank you for the great work and all the effort you have put into it!

@jmarmstrong1207

I was able to fix this issue on my side by deleting the container and recreating it

@bkilinc

bkilinc commented Nov 4, 2024

I switched to Caddy on three servers. They also became noticeably faster.

What I learned from this is that top-level proxy configuration should be simple: just a domain-to-hostname/IP relation table. It is just a router and a central place for certificates. If you need a more complicated setup for a service, that should be handled by wrapping a tiny proxy server around the service. With this simple setup, you should not need a UI.

I also don't trust services that use docker compose labels; they access the docker socket. The proxy is the service facing the internet, and it should be simple and secure.

@mirfani340

Have you guys tried fixing it using this? #2892

@W00glin

W00glin commented Dec 10, 2024

Have you guys tried fixing it using this? #2892

I just tried the outlined fix:

Check the existence and permissions of the directory:

```
ls -la /data/letsencrypt-acme-challenge/.well-known/acme-challenge/
```

If the directory does not exist, create it and set the correct permissions:

```
mkdir -p /data/letsencrypt-acme-challenge/.well-known/acme-challenge/
chmod -R 755 /data/letsencrypt-acme-challenge/.well-known/acme-challenge/
```

Create a test file and check its accessibility:

```
echo "test" > /data/letsencrypt-acme-challenge/.well-known/acme-challenge/test
curl -I http://%NPM_HOSTNAME.TLD%/.well-known/acme-challenge/test
```

Here is the response I got from the curl:

```
curl -I http://REMOVEDt/.well-known/acme-challenge/test

HTTP/1.1 200 OK
Server: openresty
Date: Tue, 10 Dec 2024 14:06:56 GMT
Content-Type: text/plain
Content-Length: 5
Last-Modified: Tue, 10 Dec 2024 13:50:23 GMT
Connection: keep-alive
ETag: "6758471f-5"
Accept-Ranges: bytes
```

Tried creating new certs and still getting errors:

```
2024-12-10T14:03:18.539665000Z [12/10/2024] [2:03:18 PM] [Nginx    ] › ⬤  debug     Could not delete file: {
2024-12-10T14:03:18.539779000Z   "errno": -2,
2024-12-10T14:03:18.539854000Z   "code": "ENOENT",
2024-12-10T14:03:18.539924000Z   "syscall": "unlink",
2024-12-10T14:03:18.539988000Z   "path": "/data/nginx/proxy_host/14.conf"
2024-12-10T14:03:18.540116000Z }
2024-12-10T14:03:18.576055000Z [12/10/2024] [2:03:18 PM] [Global   ] › ⬤  debug     CMD: /usr/sbin/nginx -t -g "error_log off;"
2024-12-10T14:03:18.717124000Z [12/10/2024] [2:03:18 PM] [Global   ] › ⬤  debug     CMD: /usr/sbin/nginx -t -g "error_log off;"
2024-12-10T14:03:18.833269000Z [12/10/2024] [2:03:18 PM] [Nginx    ] › ℹ  info      Reloading Nginx
2024-12-10T14:03:18.833454000Z [12/10/2024] [2:03:18 PM] [Global   ] › ⬤  debug     CMD: /usr/sbin/nginx -s reload
```

@mirfani340

I just tried the outlined fix: [...] Tried creating new certs and still getting errors.

In my case, I was using a custom SSL certificate; my fix was simply to delete the custom SSL certificate and upload it again. BTW, I'm planning to change from Nginx Proxy Manager to something else like Caddy; this thing has many bugs in my experience.
