
CIDs are not broadcast in a 2 node private network #8346

Open
vdamle opened this issue Aug 16, 2021 · 4 comments
Labels
kind/bug: A bug in existing code (including security flaws)
need/analysis: Needs further analysis before proceeding

Comments


vdamle commented Aug 16, 2021

Checklist

Installation method

third-party binary

Version

go-ipfs version: 0.9.1-dc2715a
Repo version: 11
System version: amd64/linux
Golang version: go1.15.2

Config

{
  "API": {
    "HTTPHeaders": {
      "Access-Control-Allow-Origin": [
        "*"
      ],
      "Server": [
        "go-ipfs/0.5.0"
      ]
    }
  },
  "Addresses": {
    "API": "/ip4/0.0.0.0/tcp/5001",
    "Announce": [],
    "Gateway": "/ip4/0.0.0.0/tcp/8080",
    "NoAnnounce": [],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/quic",
      "/ip6/::/udp/4001/quic"
    ]
  },
  "AutoNAT": {},
  "Bootstrap": [
    "/ip4/10.2.97.84/tcp/4001/p2p/QmPaCd2ckD9ffRwTzmQJNrGQS8J6JoAHvyP1MsWSCdcaqn"
  ],
  "DNS": {
    "Resolvers": null
  },
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "10GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": false,
      "Interval": 10
    }
  },
  "Experimental": {
    "AcceleratedDHTClient": false,
    "FilestoreEnabled": false,
    "GraphsyncEnabled": false,
    "Libp2pStreamMounting": false,
    "P2pHttpProxy": false,
    "ShardingEnabled": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "APICommands": null,
    "HTTPHeaders": {
      "Access-Control-Allow-Headers": [
        "X-Requested-With",
        "Range",
        "User-Agent"
      ],
      "Access-Control-Allow-Methods": [
        "GET"
      ],
      "Access-Control-Allow-Origin": [
        "*"
      ]
    },
    "NoDNSLink": false,
    "NoFetch": false,
    "PathPrefixes": [],
    "PublicGateways": null,
    "RootRedirect": "",
    "Writable": false
  },
  "Identity": {
    "PeerID": "QmWwaegHzNhBewhj97fbZdSrcT6YXuq2wEwAorrhnWi4T9"
  },
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Migration": {
    "DownloadSources": null,
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": null
  },
  "Pinning": {
    "RemoteServices": {}
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": ""
  },
  "Reprovider": {
    "Interval": "12h",
    "Strategy": "all"
  },
  "Routing": {
    "Type": "dht"
  },
  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {
      "GracePeriod": "20s",
      "HighWater": 900,
      "LowWater": 600,
      "Type": "basic"
    },
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": false,
    "EnableAutoRelay": false,
    "EnableRelayHop": false,
    "Transports": {
      "Multiplexers": {},
      "Network": {},
      "Security": {}
    }
  }
}

{
  "API": {
    "HTTPHeaders": {
      "Access-Control-Allow-Origin": [
        "*"
      ],
      "Server": [
        "go-ipfs/0.5.0"
      ]
    }
  },
  "Addresses": {
    "API": "/ip4/0.0.0.0/tcp/5001",
    "Announce": [],
    "Gateway": "/ip4/0.0.0.0/tcp/8080",
    "NoAnnounce": [],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/quic",
      "/ip6/::/udp/4001/quic"
    ]
  },
  "AutoNAT": {},
  "Bootstrap": [
    "/ip4/10.2.104.238/tcp/4001/p2p/QmWwaegHzNhBewhj97fbZdSrcT6YXuq2wEwAorrhnWi4T9"
  ],
  "DNS": {
    "Resolvers": null
  },
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "10GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": false,
      "Interval": 10
    }
  },
  "Experimental": {
    "AcceleratedDHTClient": false,
    "FilestoreEnabled": false,
    "GraphsyncEnabled": false,
    "Libp2pStreamMounting": false,
    "P2pHttpProxy": false,
    "ShardingEnabled": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "APICommands": null,
    "HTTPHeaders": {
      "Access-Control-Allow-Headers": [
        "X-Requested-With",
        "Range",
        "User-Agent"
      ],
      "Access-Control-Allow-Methods": [
        "GET"
      ],
      "Access-Control-Allow-Origin": [
        "*"
      ]
    },
    "NoDNSLink": false,
    "NoFetch": false,
    "PathPrefixes": [],
    "PublicGateways": null,
    "RootRedirect": "",
    "Writable": false
  },
  "Identity": {
    "PeerID": "QmPaCd2ckD9ffRwTzmQJNrGQS8J6JoAHvyP1MsWSCdcaqn"
  },
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Migration": {
    "DownloadSources": null,
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": null
  },
  "Pinning": {
    "RemoteServices": {}
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": ""
  },
  "Reprovider": {
    "Interval": "12h",
    "Strategy": "all"
  },
  "Routing": {
    "Type": "dht"
  },
  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {
      "GracePeriod": "20s",
      "HighWater": 900,
      "LowWater": 600,
      "Type": "basic"
    },
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": false,
    "EnableAutoRelay": false,
    "EnableRelayHop": false,
    "Transports": {
      "Multiplexers": {},
      "Network": {},
      "Security": {}
    }
  }
}

Description

I've got a 2-node private network with Node 1 designated as the bootstrap node. After the nodes are up and running, I upload a text file using the web UI on Node 1. I can then ipfs cat QmcD4SAMXhr4Upko9eC5XKDuGYyP2Q4GRYindJEE3qmesH on Node 1, but the same command hangs on Node 2.
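Roughly, the setup and reproduction look like this (a sketch; paths and the Node 1 multiaddr are placeholders, and the file was actually added through the web UI rather than ipfs add):

# on both nodes: share the same swarm.key so they form a private network
cp swarm.key ~/.ipfs/swarm.key

# on Node 2: bootstrap only against Node 1
ipfs bootstrap rm --all
ipfs bootstrap add /ip4/<node1-ip>/tcp/4001/p2p/<node1-peer-id>

# on Node 1: add content (web UI in our case; ipfs add is equivalent)
ipfs add hello.txt

# works on Node 1, hangs on Node 2
ipfs cat QmcD4SAMXhr4Upko9eC5XKDuGYyP2Q4GRYindJEE3qmesH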

When checking the DHT and bitswap stats on both nodes, we observed that:

  • DHT queries appear to be working on both nodes and reflect responses from the other node:
## Node 2

ipfs stats dht 

DHT wan (0 peers):
DHT lan (1 peers):
  Bucket  0 (0 peers) - refreshed 7s ago:
    Peer                                            last useful  last queried  Agent Version

  Bucket  1 (1 peers) - refreshed 2m40s ago:
    Peer                                            last useful  last queried  Agent Version
  @ QmWwaegHzNhBewhj97fbZdSrcT6YXuq2wEwAorrhnWi4T9  7s ago       7s ago        go-ipfs/0.9.1/dc2715a

## Node 1

ipfs stats dht

DHT wan (0 peers):
DHT lan (1 peers):
  Bucket  0 (0 peers) - refreshed 3m3s ago:
    Peer                                            last useful  last queried  Agent Version

  Bucket  1 (1 peers) - refreshed 3m3s ago:
    Peer                                            last useful  last queried  Agent Version
  @ QmPaCd2ckD9ffRwTzmQJNrGQS8J6JoAHvyP1MsWSCdcaqn  1m10s ago    24s ago       go-ipfs/0.9.1/dc2715a
  • Bitswap stats show that Node 2 lists 0 partners and Node 1 lists 1 partner
### Node 1

bitswap status
	provides buffer: 0 / 256
	blocks received: 0
	blocks sent: 0
	data received: 0
	data sent: 0
	dup blocks received: 0
	dup data received: 0
	wantlist [0 keys]
	partners [1]

### Node 2

bitswap status
	provides buffer: 0 / 256
	blocks received: 0
	blocks sent: 0
	data received: 0
	data sent: 0
	dup blocks received: 0
	dup data received: 0
	wantlist [0 keys]
	partners [0]
  • We tried to execute ipfs bitswap reprovide on Node 1 but that did not help.
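Some follow-up diagnostics that could help narrow down whether the provider record and wantlists are in place (a sketch; <cid> and the peer IDs are placeholders):

# on Node 2: ask the DHT who provides the CID (Node 1 is expected to show up)
ipfs dht findprovs <cid>

# on Node 1: check whether Node 2's wants ever arrive
ipfs bitswap wantlist --peer=<node2-peer-id>

# on Node 2: check the local wantlist and the connection to Node 1
ipfs bitswap wantlist
ipfs swarm peers

# on Node 1: explicitly re-announce the CID to the DHT
ipfs dht provide <cid>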
vdamle added kind/bug (A bug in existing code, including security flaws) and need/triage (Needs initial labeling and prioritization) labels on Aug 16, 2021

welcome bot commented Aug 16, 2021

Thank you for submitting your first issue to this repository! A maintainer will be here shortly to triage and review.
In the meantime, please double-check that you have provided all the necessary information to make this process easy! Any information that can help save additional round trips is useful! We currently aim to give initial feedback within two business days. If this does not happen, feel free to leave a comment.
Please keep an eye on how this issue will be labeled, as labels give an overview of priorities, assignments and additional actions requested by the maintainers:

  • "Priority" labels will show how urgent this is for the team.
  • "Status" labels will show if this is ready to be worked on, blocked, or in progress.
  • "Need" labels will indicate if additional input or analysis is required.

Finally, remember to use https://discuss.ipfs.io if you just need general support.

vdamle (Author) commented Aug 18, 2021

Hello @aschmahmann, we were looking at a couple of other issues:

Could you please take a look and advise if we can help gather any further information from our setup that may help troubleshoot this? Thanks!

vdamle (Author) commented Aug 19, 2021

I turned on DEBUG-level logs on both nodes in a separate re-creation of this issue (a sketch of how the log level was raised follows the list below). It appears that the following happens:

  • Node2 sends a broadcast (broadcastWantHaves) and tries to findProviders for the CID.
  • Node1 receives the type 3 (Message_GET_PROVIDERS) message from Node2 and responds.
  • Node2 receives Node1's response and recognizes that a peer has the content it is looking for.
  • However, Node2 never generates a WANT_HAVE message to Node1.
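For reference, the debug output below was captured after raising the log level at runtime, roughly as sketched here (using all is an assumption; individual subsystems can be targeted instead):

# raise all go-log subsystems to debug on the running daemon
ipfs log level all debug

# or target only the subsystems seen in the logs below
ipfs log level bitswap debug
ipfs log level dht debug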

Node2 - Not working

2021-08-19T16:28:05.983Z    DEBUG    bitswap providerquerymanager/providerquerymanager.go:323    New Provider Query on cid: Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd
2021-08-19T16:28:05.983Z    DEBUG    bs:sess session/session.go:466  broadcastWantHaves  {"session": 1, "cids": [{"/":"Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd"}]}
2021-08-19T16:28:05.983Z    DEBUG    bitswap providerquerymanager/providerquerymanager.go:230    Beginning Find Provider Request for cid: Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd
2021-08-19T16:28:05.983Z    DEBUG    dht [email protected]/routing.go:502    finding providers   {"cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "mh": "bciqmp4cuzucgqyukt2fkmke3dyhgcmsco4sau76miv7mlkqsrrdgnga"}
2021-08-19T16:28:05.983Z    DEBUG    dht [email protected]/routing.go:502    finding providers   {"cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "mh": "bciqmp4cuzucgqyukt2fkmke3dyhgcmsco4sau76miv7mlkqsrrdgnga"}
2021-08-19T16:28:05.984Z    DEBUG    bitswap providerquerymanager/providerquerymanager.go:323    Received provider (QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN) for cid (Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd)
2021-08-19T16:28:05.985Z    DEBUG    dht [email protected]/routing.go:550    1 provider entries
2021-08-19T16:28:05.985Z    DEBUG    dht [email protected]/routing.go:555    got provider: {QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN: [/ip4/10.2.73.192/tcp/4001 /ip4/127.0.0.1/tcp/4001]}
2021-08-19T16:28:05.985Z    DEBUG    dht [email protected]/routing.go:572    got closer peers: 0 []
2021-08-19T16:28:05.985Z    DEBUG    dht [email protected]/dht.go:628    peer found  {"peer": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN"}
2021-08-19T16:28:05.985Z    DEBUG    addrutil    [email protected]/addr.go:64  adding resolved addr:/ip4/0.0.0.0/tcp/4001 /ip4/10.2.95.245/tcp/4001 [/ip4/10.2.95.245/tcp/4001]
2021-08-19T16:28:05.985Z    DEBUG    addrutil    [email protected]/addr.go:64  adding resolved addr:/ip4/0.0.0.0/tcp/4001 /ip4/127.0.0.1/tcp/4001 [/ip4/10.2.95.245/tcp/4001 /ip4/127.0.0.1/tcp/4001]
2021-08-19T16:28:05.985Z    DEBUG    addrutil    [email protected]/addr.go:109 ResolveUnspecifiedAddresses:[/p2p-circuit /ip4/0.0.0.0/tcp/4001 /ip6/::/tcp/4001] [/ip4/10.2.95.245 /ip4/127.0.0.1] [/p2p-circuit /ip4/10.2.95.245/tcp/4001 /ip4/127.0.0.1/tcp/4001]
2021-08-19T16:28:05.985Z    DEBUG    bitswap providerquerymanager/providerquerymanager.go:323    Finished Provider Query on cid: Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd

Node1 - Not working

2021-08-19T16:28:05.984Z    DEBUG    dht [email protected]/dht.go:628    peer found  {"peer": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm"}
2021-08-19T16:28:05.984Z    DEBUG    addrutil    [email protected]/addr.go:64  adding resolved addr:/ip4/0.0.0.0/tcp/4001 /ip4/10.2.73.192/tcp/4001 [/ip4/10.2.73.192/tcp/4001]
2021-08-19T16:28:05.984Z    DEBUG    addrutil    [email protected]/addr.go:64  adding resolved addr:/ip4/0.0.0.0/tcp/4001 /ip4/127.0.0.1/tcp/4001 [/ip4/10.2.73.192/tcp/4001 /ip4/127.0.0.1/tcp/4001]
2021-08-19T16:28:05.984Z    DEBUG    addrutil    [email protected]/addr.go:109 ResolveUnspecifiedAddresses:[/p2p-circuit /ip4/0.0.0.0/tcp/4001 /ip6/::/tcp/4001] [/ip4/10.2.73.192 /ip4/127.0.0.1] [/p2p-circuit /ip4/10.2.73.192/tcp/4001 /ip4/127.0.0.1/tcp/4001]
2021-08-19T16:28:05.984Z    DEBUG    dht [email protected]/dht_net.go:116    handling message    {"from": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "type": 3, "key": "EiDH8FTNBGhiip6KpiibHg5hMkJ3JAp/zEV+xaoSjEZmmA=="}
2021-08-19T16:28:05.984Z    DEBUG    dht [email protected]/dht_net.go:133    handled message {"from": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "type": 3, "key": "EiDH8FTNBGhiip6KpiibHg5hMkJ3JAp/zEV+xaoSjEZmmA==", "time": 0.000397602}
2021-08-19T16:28:05.984Z    DEBUG    dht [email protected]/dht_net.go:159    responded to message    {"from": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "type": 3, "key": "EiDH8FTNBGhiip6KpiibHg5hMkJ3JAp/zEV+xaoSjEZmmA==", "time": 0.000424566}
  • In a normal working scenario (a restart of Node1 fixes the issue; a workaround sketch follows the logs below), I see that Node2 generates a WANT_HAVE message with the CID, as a result of which Node1 searches for the block locally, finds it, and streams it to Node2.

Node2 - Working after restart of Node1

2021-08-19T18:08:11.422Z    DEBUG    blockservice    [email protected]/blockservice.go:238  Blockservice: Searching bitswap
2021-08-19T18:08:11.422Z    DEBUG    bitswap getter/getter.go:80 Bitswap.GetBlockRequest.Start   {"cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd"}
2021-08-19T18:08:11.422Z    INFO bs:sess session/session.go:459  No peers - broadcasting {"session": 4, "want-count": 1}
2021-08-19T18:08:11.479Z    DEBUG    bs:sess session/session.go:466  broadcastWantHaves  {"session": 4, "cids": [{"/":"Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd"}]}
2021-08-19T18:08:11.481Z    DEBUG    swarm2  [email protected]/swarm.go:344 [QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm] opening stream to peer [QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN]
2021-08-19T18:08:11.481Z    DEBUG    bitswap messagequeue/messagequeue.go:637    sent message    {"type": "WANT_HAVE", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "local": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "to": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN"}
2021-08-19T18:08:11.492Z    DEBUG    basichost   basic/basic_host.go:419 protocol negotiation took 195.567µs
2021-08-19T18:08:11.492Z    DEBUG    bitswap_network network/ipfs_impl.go:404    bitswap net handleNewStream from QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN
2021-08-19T18:08:11.492Z    DEBUG    bs:sess session/session.go:223  Bitswap <- HAVE {"local": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "from": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "session": 4}
2021-08-19T18:08:11.493Z    DEBUG    bs:sprmgr   sessionpeermanager/sessionpeermanager.go:68 Bitswap: Added peer to session  {"session": 4, "peer": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "peerCount": 1}
2021-08-19T18:08:11.494Z    DEBUG    bitswap messagequeue/messagequeue.go:644    sent message    {"type": "WANT_BLOCK", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "local": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "to": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN"}
2021-08-19T18:08:11.500Z    DEBUG    basichost   basic/basic_host.go:419 protocol negotiation took 67.01µs
2021-08-19T18:08:11.500Z    DEBUG    bitswap_network network/ipfs_impl.go:404    bitswap net handleNewStream from QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN
2021-08-19T18:08:11.500Z    DEBUG    bitswap [email protected]/bitswap.go:457    [recv] block; cid=Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd, peer=QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN
2021-08-19T18:08:11.518Z    INFO bitswap messagequeue/messagequeue.go:210    Bitswap: timeout waiting for blocks {"cids": [{"/":"Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd"}], "peer": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN"}
2021-08-19T18:08:11.518Z    DEBUG    bs:sess session/session.go:226  Bitswap <- DONT_HAVE    {"local": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "from": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "session": 4}
2021-08-19T18:08:11.618Z    DEBUG    bs:sess session/session.go:220  Bitswap <- block    {"local": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "from": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "session": 4}
2021-08-19T18:08:11.618Z    DEBUG    engine  decision/engine.go:577  Bitswap engine <- block {"local": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "from": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "size": 1381}
2021-08-19T18:08:11.618Z    DEBUG    bitswap [email protected]/bitswap.go:428    Bitswap.GetBlockRequest.End {"cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd"}
2021-08-19T18:08:11.618Z    DEBUG    bitswap [email protected]/workers.go:173    Bitswap.ProvideWorker.Loop

Node1 - Working after restart of Node1

2021-08-19T18:08:11.482Z    DEBUG    bitswap_network network/ipfs_impl.go:404    bitswap net handleNewStream from QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm
2021-08-19T18:08:11.482Z    DEBUG    engine  decision/engine.go:424  Bitswap engine <- msg   {"local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "from": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "entryCount": 1}
2021-08-19T18:08:11.482Z    DEBUG    engine  decision/engine.go:428  Bitswap engine <- want-have {"local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "from": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd"}
2021-08-19T18:08:11.492Z    DEBUG    engine  decision/ledger.go:32   peer QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm wants Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd
2021-08-19T18:08:11.492Z    DEBUG    engine  decision/engine.go:517  Bitswap engine: block found {"local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "from": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "isWantBlock": false}
2021-08-19T18:08:11.492Z    DEBUG    engine  decision/engine.go:330  Bitswap process tasks   {"local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "taskCount": 1}
2021-08-19T18:08:11.492Z    DEBUG    engine  decision/engine.go:383  Bitswap engine -> msg   {"local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "to": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "blockCount": 0, "presenceCount": 1, "size": 36}
2021-08-19T18:08:11.492Z    DEBUG    swarm2  [email protected]/swarm.go:344 [QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN] opening stream to peer [QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm]
2021-08-19T18:08:11.492Z    DEBUG    bitswap [email protected]/workers.go:83 sent message    {"type": "HAVE", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "to": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm"}
2021-08-19T18:08:11.492Z    DEBUG    bitswap [email protected]/workers.go:137    sent message    {"peer": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm"}
2021-08-19T18:08:11.492Z    DEBUG    bitswap [email protected]/workers.go:46 Bitswap.TaskWorker.Loop {"ID": 4}
2021-08-19T18:08:11.495Z    DEBUG    bitswap_network network/ipfs_impl.go:404    bitswap net handleNewStream from QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm
2021-08-19T18:08:11.495Z    DEBUG    engine  decision/engine.go:424  Bitswap engine <- msg   {"local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "from": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "entryCount": 1}
2021-08-19T18:08:11.495Z    DEBUG    engine  decision/engine.go:430  Bitswap engine <- want-block    {"local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "from": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd"}
2021-08-19T18:08:11.495Z    DEBUG    engine  decision/ledger.go:32   peer QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm wants Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd
2021-08-19T18:08:11.495Z    DEBUG    engine  decision/engine.go:517  Bitswap engine: block found {"local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "from": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "isWantBlock": true}
2021-08-19T18:08:11.495Z    DEBUG    engine  decision/engine.go:330  Bitswap process tasks   {"local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "taskCount": 1}
2021-08-19T18:08:11.500Z    DEBUG    engine  decision/engine.go:383  Bitswap engine -> msg   {"local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "to": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm", "blockCount": 1, "presenceCount": 0, "size": 1381}
2021-08-19T18:08:11.500Z    DEBUG    swarm2  [email protected]/swarm.go:344 [QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN] opening stream to peer [QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm]
2021-08-19T18:08:11.500Z    DEBUG    bitswap [email protected]/workers.go:102    sent message    {"type": "BLOCK", "cid": "Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd", "local": "QmUAE47723v3bXjYurnQUbH6EpzhfN38mGTATu6VgrygGN", "to": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm"}
2021-08-19T18:08:11.500Z    DEBUG    bitswap [email protected]/workers.go:137    sent message    {"peer": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm"}
2021-08-19T18:08:11.500Z    DEBUG    bitswap [email protected]/workers.go:46 Bitswap.TaskWorker.Loop {"ID": 6}
2021-08-19T18:08:11.620Z    DEBUG    dht [email protected]/dht.go:628    peer found  {"peer": "QmZfrjDiCE3CwGhVHWWSadChbGuMDWcpXTjGbdgM29qpYm"}
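As noted above, restarting Node 1 clears the condition. For comparison, forcing a fresh connection from Node 2 may also be worth trying (a sketch; the Node 1 multiaddr is a placeholder and this workaround has not been verified):

# on Node 1: restart the daemon
ipfs shutdown
ipfs daemon &

# on Node 2: force a fresh connection to Node 1 and retry the fetch
ipfs swarm connect /ip4/<node1-ip>/tcp/4001/p2p/<node1-peer-id>
ipfs cat Qmbo8PfVs4WWanjFPMboP8Gx5hunN1Tn5kLMmhvB3L1LVd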

BigLep (Contributor) commented Sep 24, 2021

To evaluate this we need someone who has high-level awareness of whether this is a content-routing or bitswap issue. go-ipfs maintainers are currently strapped, and I don't have a commitment on when we can investigate this further.

BigLep added the need/analysis (Needs further analysis before proceeding) label and removed the need/triage (Needs initial labeling and prioritization) label on Sep 24, 2021