This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

ipfs-http-client extremely odd behavior with nginx / remote node #2926

Closed

obo20 opened this issue Mar 16, 2020 · 12 comments

@obo20
Contributor

obo20 commented Mar 16, 2020

The ipfs-http-client is currently doing some extremely strange things with a particular setup we've been testing out. I've tested versions of the client from the current master all the way back to v34 and I'm still getting this issue. The bug can be recreated as follows:

Essentially what's happening is that I add a directory to the remote node and it works. Then I try to do the same thing a second time and it locks up. The only way I can get around this is to restart my Node.js Express API, which seems strange, as the http-client should be stateless: we're creating a new instance of it each time our API endpoint is called.

The steps to recreate this are as follows.

Step 1: create a temporary IPFS client for adding to the remote node:

 const tempIPFS = new ipfsClient({
        host: node.host, // this is an IP address for us
        port: node.port, // this is port 6001 for us
        protocol: node.protocol, // this is http for us
        headers: { Authorization: `Basic ${process.env.PINATA_NODE_BASIC_AUTH}` }
 });

The remote node is on a server and has the default port 5001 setting for the API, but is sitting behind an NGINX config that's reverse proxying port 6001 to port 5001 with basic auth restricting access.

The NGINX config looks like this:

server {
        listen 6001;
        server_name _;
        underscores_in_headers on;
        location / {
                auth_basic "testing node";
                auth_basic_user_file /etc/apache2/.htpasswd;
                proxy_pass              http://127.0.0.1:5001;
                proxy_set_header        Host $host;
                proxy_set_header        X-Real-IP $remote_addr;
                proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header        X-Forwarded-Proto $scheme;
                client_max_body_size 50000M;
                proxy_read_timeout 600s;
        }
}

Strangely, we didn't encounter this bug when we connected directly to the remote node by exposing port 5001 on it (bypassing NGINX / basic auth). But we don't want to do this outside of a testing environment, for security reasons.
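As an aside, the Basic credential that nginx's auth_basic expects in the Authorization header is just base64("username:password"). A minimal Node sketch of building that header value (the username and password here are placeholders, not real credentials):

```javascript
// HTTP Basic auth: the Authorization header carries "Basic " followed by
// base64("username:password"). The credentials below are placeholders.
function basicAuthToken (username, password) {
  return Buffer.from(`${username}:${password}`).toString('base64')
}

// e.g. the value to pass in the client's headers option:
console.log(`Basic ${basicAuthToken('ipfs', 'secret')}`)
```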

@achingbrain
Member

I'm trying to replicate this locally. I had some problems with node crashing with ERR_HTTP_TRAILER_INVALID, which I fixed by adding proxy_http_version 1.1; to the application block in the nginx config.
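For reference, that one-line change applied to the server block from the config posted above would look like this (everything else unchanged):

```
server {
        listen 6001;
        # Proxy upstream over HTTP/1.1 so chunked responses with
        # trailers from the go-ipfs API survive the proxy hop.
        proxy_http_version 1.1;
        location / {
                proxy_pass http://127.0.0.1:5001;
        }
}
```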

Here's my test script:

'use strict'

const ipfsClient = require('ipfs-http-client')
const { globSource } = ipfsClient
const all = require('it-all')
const map = require('it-map')

const node = {
  host: '127.0.0.1',
  port: 6001,
  protocol: 'http'
}

async function main () {
  const ipfs = new ipfsClient({
    host: node.host, //this is an ip address for us
    port: node.port, //this is port 6001 for us
    protocol: node.protocol, // this is http for us
    headers: {
      Authorization: `Basic an-auth-token`
    }
  })

  console.info(await all(map(ipfs.add(globSource('path/to/dir', { recursive: true })), (thing) => ({
    ...thing,
    cid: thing.cid.toString()
  }))))

  console.info(await all(map(ipfs.add(globSource('path/to/dir', { recursive: true })), (thing) => ({
    ...thing,
    cid: thing.cid.toString()
  }))))
}

main()

It seems to work, once I made the change to the nginx config.

Could you post a more complete example of how you trigger the problem?

@SahidMiller

There's only one place where IPFS requests a lock and that's ~/.jsipfs/repo.lock. If you were starting up a new daemon on each request against the same repo, I'd expect something like this.

@achingbrain
Member

No, if a second daemon starts up and encounters a lock file in the repo it'll exit with an error.

You can see the steps you have to take to start two nodes on the same machine in the running-multiple-nodes example, it involves configuring them with different repos.

@obo20
Contributor Author

obo20 commented Mar 17, 2020

There's only one place where IPFS requests a lock and that's ~/.jsipfs/repo.lock. If you were starting up a new daemon on each request against the same repo, I'd expect something like this.

I should clarify here, that we're hitting a go-ipfs node (v0.4.23) that is online 24/7. We aren't spinning up / spinning down a node.

@obo20
Contributor Author

obo20 commented Mar 17, 2020

From further investigations, I tried your code @achingbrain and it worked correctly.

This seems to be somehow isolated to my Node.js / Express application. I've once again confirmed that:
With NGINX: broken (even with the proxy_http_version 1.1; addition you mentioned)
Without NGINX: just fine

I'm using multer for file uploads, so maybe that's somehow related, but I question the correlation because the multer upload operation finishes before I add to IPFS.

@obo20
Contributor Author

obo20 commented Mar 17, 2020

Update: I can confirm that basic auth plays no role here. Even after removing the basic auth requirement, it appears that the reverse proxy is still causing the issue.

@achingbrain
Member

@obo20 are you still having problems with this or did you get to the bottom of it?

@obo20
Contributor Author

obo20 commented May 15, 2020

@achingbrain I wasn't ever able to figure this out unfortunately. We implemented a workaround to avoid using NGINX entirely for adding content to nodes (however we'd like to move back to NGINX for this if possible).

I'm fairly sure this problem still exists.

@achingbrain
Member

Would you be able to put a small demo repo together that shows the problem? Hopefully it'll be a fairly straightforward fix with a reproducible case.

@autonome
Contributor

@obo20 We discussed in triage today and it sounds like this is not a problem in the IPFS code. Without a clear reproduction scenario, it is impossible to know.

@obo20
Contributor Author

obo20 commented May 28, 2020

@autonome @achingbrain apologies for the delay in getting back to you here. We've been so swamped that I haven't had a lot of time to put together a demo repo for you.

Things are working alright for us now as is with our workaround, but I'll try to find some time here in the near future to put together something reproducible for you.

For now I wouldn't spend any resources on this.

@autonome
Contributor

👍🏼 Closing for now. Reopen if there's a narrower test case showing the problem is in IPFS rather than in the nginx config, etc.
