IPFS private network video load break #4756

Open
llgoer opened this issue Mar 2, 2018 · 8 comments

llgoer commented Mar 2, 2018

Version information:

go-ipfs version: 0.4.13-
Repo version: 6
System version: amd64/darwin
Golang version: go1.9.2

Type:

Bug

Description:

When I use an IPFS private network and add an mp4 file, the video cannot play to completion. The error log looks like this:

16:28:17.183 DEBUG       mdns: mdns query complete mdns.go:143
16:28:22.175 DEBUG    bitswap: 164 keys in bitswap wantlist workers.go:183
16:28:22.181 DEBUG       mdns: starting mdns query mdns.go:130
16:28:22.183 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4013 QmWDUqqGwHGhWWi538P68Tr5nGNHUHNDUM6pvt4vvk9Kk3 mdns.go:152
16:28:22.183 DEBUG       mdns: got our own mdns entry, skipping mdns.go:160
16:28:22.183 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4011 QmVQjdsdyyNiNgRusUBgzYURbdUXQjHGocQ6yAq3Wd6FST mdns.go:152
16:28:22.183 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4012 QmYFUMf9RDiLz8JADXFspoz4WsEWGyh3GKMHyKvdnSJPoF mdns.go:152
16:28:22.183 WARNI       core: trying peer info: {<peer.ID VQjdsd> [/ip4/192.168.254.1/tcp/4011]} core.go:417
16:28:22.183 WARNI       core: trying peer info: {<peer.ID YFUMf9> [/ip4/192.168.254.1/tcp/4012]} core.go:417
16:28:22.478 DEBUG       mdns: Handling MDNS entry: 10.0.1.59:4001 QmXibDf1TMr5obHigdALwy7iq3bTr2JY2RskW6C3aFcZHu mdns.go:152
16:28:22.478 WARNI       core: trying peer info: {<peer.ID XibDf1> [/ip4/10.0.1.59/tcp/4001]} core.go:417
16:28:22.478 DEBUG  basichost: host %!s(func() peer.ID=0x14d24f0) dialing <peer.ID XibDf1> basic_host.go:479
16:28:22.478 DEBUG     swarm2: [<peer.ID WDUqqG>] network dialing peer [<peer.ID XibDf1>] swarm_net.go:42
16:28:22.479 DEBUG   addrutil: InterfaceAddresses: from manet:[/ip4/127.0.0.1 /ip6/::1 /ip6/fe80::1 /ip6/fe80::1cbc:ff22:b0b3:db18 /ip4/10.0.1.225 /ip6/fe80::52b0:8658:c5b9:325b /ip4/192.168.100.1 /ip4/192.168.254.1] addr.go:220
16:28:22.479 DEBUG   addrutil: InterfaceAddresses: usable:[/ip4/127.0.0.1 /ip6/::1 /ip4/10.0.1.225 /ip4/192.168.100.1 /ip4/192.168.254.1] addr.go:232
16:28:22.479 DEBUG   addrutil: adding resolved addr:/ip4/0.0.0.0/tcp/4013 /ip4/127.0.0.1/tcp/4013 [/ip4/127.0.0.1/tcp/4013] addr.go:163
16:28:22.479 DEBUG   addrutil: adding resolved addr:/ip4/0.0.0.0/tcp/4013 /ip4/10.0.1.225/tcp/4013 [/ip4/127.0.0.1/tcp/4013 /ip4/10.0.1.225/tcp/4013] addr.go:163
16:28:22.479 DEBUG   addrutil: adding resolved addr:/ip4/0.0.0.0/tcp/4013 /ip4/192.168.100.1/tcp/4013 [/ip4/127.0.0.1/tcp/4013 /ip4/10.0.1.225/tcp/4013 /ip4/192.168.100.1/tcp/4013] addr.go:163
16:28:22.479 DEBUG   addrutil: adding resolved addr:/ip4/0.0.0.0/tcp/4013 /ip4/192.168.254.1/tcp/4013 [/ip4/127.0.0.1/tcp/4013 /ip4/10.0.1.225/tcp/4013 /ip4/192.168.100.1/tcp/4013 /ip4/192.168.254.1/tcp/4013] addr.go:163
16:28:22.479 DEBUG   addrutil: adding resolved addr:/ip6/::/tcp/4013 /ip6/::1/tcp/4013 [/ip6/::1/tcp/4013] addr.go:163
16:28:22.480 DEBUG   addrutil: ResolveUnspecifiedAddresses:[/p2p-circuit/ipfs/QmWDUqqGwHGhWWi538P68Tr5nGNHUHNDUM6pvt4vvk9Kk3 /ip4/0.0.0.0/tcp/4013 /ip6/::/tcp/4013] [/ip4/127.0.0.1 /ip6/::1 /ip4/10.0.1.225 /ip4/192.168.100.1 /ip4/192.168.254.1] [/p2p-circuit/ipfs/QmWDUqqGwHGhWWi538P68Tr5nGNHUHNDUM6pvt4vvk9Kk3 /ip4/127.0.0.1/tcp/4013 /ip4/10.0.1.225/tcp/4013 /ip4/192.168.100.1/tcp/4013 /ip4/192.168.254.1/tcp/4013 /ip6/::1/tcp/4013] addr.go:208
16:28:22.480 DEBUG     swarm2: <peer.ID WDUqqG> swarm dialing <peer.ID XibDf1> swarm_dial.go:290
16:28:22.480 DEBUG     swarm2: <peer.ID WDUqqG> swarm dialing <peer.ID XibDf1> /ip4/10.0.1.59/tcp/4001 swarm_dial.go:350
16:28:27.183 DEBUG       mdns: mdns query complete mdns.go:143
16:28:32.176 DEBUG    bitswap: 164 keys in bitswap wantlist workers.go:183
16:28:32.181 DEBUG       mdns: starting mdns query mdns.go:130
16:28:32.182 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4013 QmWDUqqGwHGhWWi538P68Tr5nGNHUHNDUM6pvt4vvk9Kk3 mdns.go:152
16:28:32.182 DEBUG       mdns: got our own mdns entry, skipping mdns.go:160
16:28:32.183 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4012 QmYFUMf9RDiLz8JADXFspoz4WsEWGyh3GKMHyKvdnSJPoF mdns.go:152
16:28:32.183 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4011 QmVQjdsdyyNiNgRusUBgzYURbdUXQjHGocQ6yAq3Wd6FST mdns.go:152
16:28:32.183 WARNI       core: trying peer info: {<peer.ID YFUMf9> [/ip4/192.168.254.1/tcp/4012]} core.go:417
16:28:32.183 WARNI       core: trying peer info: {<peer.ID VQjdsd> [/ip4/192.168.254.1/tcp/4011]} core.go:417
16:28:32.235 DEBUG        nat: Attempting port map: tcp/4013 nat.go:168
16:28:32.247 DEBUG        nat: NAT Mapping: /ip4/0.0.0.0/tcp/4013 --> /ip4/114.84.243.237/tcp/12179 nat.go:207
16:28:32.309 DEBUG       mdns: Handling MDNS entry: 10.0.1.59:4001 QmXibDf1TMr5obHigdALwy7iq3bTr2JY2RskW6C3aFcZHu mdns.go:152
16:28:32.309 WARNI       core: trying peer info: {<peer.ID XibDf1> [/ip4/10.0.1.59/tcp/4001]} core.go:417
16:28:32.309 DEBUG  basichost: host %!s(func() peer.ID=0x14d24f0) dialing <peer.ID XibDf1> basic_host.go:479
16:28:32.309 DEBUG     swarm2: [<peer.ID WDUqqG>] network dialing peer [<peer.ID XibDf1>] swarm_net.go:42
16:28:32.479 DEBUG     swarm2: dial end <nil> swarm_dial.go:204
16:28:32.479 WARNI       core: Failed to connect to peer found by discovery: dial attempt failed: context deadline exceeded core.go:421
16:28:32.479 WARNI       core: Failed to connect to peer found by discovery: dial attempt failed: context deadline exceeded core.go:421
16:28:37.182 DEBUG       mdns: mdns query complete mdns.go:143
16:28:38.247 DEBUG        dht: <peer.ID WDUqqG> handleGetProviders(<peer.ID VQjdsd>, QmSZocgTjWQLbtHfvsmUJRBFRgkHdotvmcgMRiKp45aCKR):  begin handlers.go:228
16:28:38.248 DEBUG        dht: <peer.ID WDUqqG> handleGetProviders(<peer.ID VQjdsd>, QmSZocgTjWQLbtHfvsmUJRBFRgkHdotvmcgMRiKp45aCKR):  have 1 closer peers: [{<peer.ID YFUMf9> [/ip4/127.0.0.1/tcp/4012 /ip4/10.0.1.225/tcp/4012 /ip4/192.168.100.1/tcp/4012 /ip4/192.168.254.1/tcp/4012 /ip6/::1/tcp/4012 /ip4/114.84.243.237/tcp/39881]}] handlers.go:256
16:28:38.248 DEBUG        dht: <peer.ID WDUqqG> handleGetProviders(<peer.ID VQjdsd>, QmSZocgTjWQLbtHfvsmUJRBFRgkHdotvmcgMRiKp45aCKR):  end handlers.go:259
16:28:42.175 DEBUG    bitswap: 164 keys in bitswap wantlist workers.go:183
16:28:42.181 DEBUG       mdns: starting mdns query mdns.go:130
16:28:42.181 DEBUG       core: <peer.ID WDUqqG> no more bootstrap peers to create 2 connections bootstrap.go:141
16:28:42.181 DEBUG       core: <peer.ID WDUqqG> bootstrap error: not enough bootstrap peers to bootstrap bootstrap.go:87
16:28:42.182 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4013 QmWDUqqGwHGhWWi538P68Tr5nGNHUHNDUM6pvt4vvk9Kk3 mdns.go:152
16:28:42.182 DEBUG       mdns: got our own mdns entry, skipping mdns.go:160
16:28:42.182 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4011 QmVQjdsdyyNiNgRusUBgzYURbdUXQjHGocQ6yAq3Wd6FST mdns.go:152
16:28:42.182 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4012 QmYFUMf9RDiLz8JADXFspoz4WsEWGyh3GKMHyKvdnSJPoF mdns.go:152
16:28:42.183 WARNI       core: trying peer info: {<peer.ID YFUMf9> [/ip4/192.168.254.1/tcp/4012]} core.go:417
16:28:42.183 WARNI       core: trying peer info: {<peer.ID VQjdsd> [/ip4/192.168.254.1/tcp/4011]} core.go:417
16:28:42.509 DEBUG       mdns: Handling MDNS entry: 10.0.1.59:4001 QmXibDf1TMr5obHigdALwy7iq3bTr2JY2RskW6C3aFcZHu mdns.go:152
16:28:42.509 WARNI       core: trying peer info: {<peer.ID XibDf1> [/ip4/10.0.1.59/tcp/4001]} core.go:417
16:28:42.510 DEBUG  basichost: host %!s(func() peer.ID=0x14d24f0) dialing <peer.ID XibDf1> basic_host.go:479
16:28:42.510 DEBUG     swarm2: [<peer.ID WDUqqG>] network dialing peer [<peer.ID XibDf1>] swarm_net.go:42
16:28:42.510 WARNI       core: Failed to connect to peer found by discovery: dial backoff core.go:421
16:28:47.182 DEBUG       mdns: mdns query complete mdns.go:143
16:28:52.176 DEBUG    bitswap: 164 keys in bitswap wantlist workers.go:183
16:28:52.180 DEBUG       mdns: starting mdns query mdns.go:130
16:28:52.182 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4013 QmWDUqqGwHGhWWi538P68Tr5nGNHUHNDUM6pvt4vvk9Kk3 mdns.go:152
16:28:52.182 DEBUG       mdns: got our own mdns entry, skipping mdns.go:160
16:28:52.182 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4012 QmYFUMf9RDiLz8JADXFspoz4WsEWGyh3GKMHyKvdnSJPoF mdns.go:152
16:28:52.182 DEBUG       mdns: Handling MDNS entry: 192.168.254.1:4011 QmVQjdsdyyNiNgRusUBgzYURbdUXQjHGocQ6yAq3Wd6FST mdns.go:152
16:28:52.182 WARNI       core: trying peer info: {<peer.ID YFUMf9> [/ip4/192.168.254.1/tcp/4012]} core.go:417
16:28:52.182 WARNI       core: trying peer info: {<peer.ID VQjdsd> [/ip4/192.168.254.1/tcp/4011]} core.go:417
16:28:52.248 DEBUG        nat: Attempting port map: tcp/4013 nat.go:168
16:28:52.269 DEBUG        nat: NAT Mapping: /ip4/0.0.0.0/tcp/4013 --> /ip4/114.84.243.237/tcp/12179 nat.go:207
16:28:52.429 DEBUG       mdns: Handling MDNS entry: 10.0.1.59:4001 QmXibDf1TMr5obHigdALwy7iq3bTr2JY2RskW6C3aFcZHu mdns.go:152
16:28:52.429 WARNI       core: trying peer info: {<peer.ID XibDf1> [/ip4/10.0.1.59/tcp/4001]} core.go:417
16:28:52.429 DEBUG  basichost: host %!s(func() peer.ID=0x14d24f0) dialing <peer.ID XibDf1> basic_host.go:479
16:28:52.429 DEBUG     swarm2: [<peer.ID WDUqqG>] network dialing peer [<peer.ID XibDf1>] swarm_net.go:42
16:28:52.430 DEBUG   addrutil: InterfaceAddresses: from manet:[/ip4/127.0.0.1 /ip6/::1 /ip6/fe80::1 /ip6/fe80::1cbc:ff22:b0b3:db18 /ip4/10.0.1.225 /ip6/fe80::52b0:8658:c5b9:325b /ip4/192.168.100.1 /ip4/192.168.254.1] addr.go:220
16:28:52.430 DEBUG   addrutil: InterfaceAddresses: usable:[/ip4/127.0.0.1 /ip6/::1 /ip4/10.0.1.225 /ip4/192.168.100.1 /ip4/192.168.254.1] addr.go:232
16:28:52.430 DEBUG   addrutil: adding resolved addr:/ip6/::/tcp/4013 /ip6/::1/tcp/4013 [/ip6/::1/tcp/4013] addr.go:163
16:28:52.430 DEBUG   addrutil: adding resolved addr:/ip4/0.0.0.0/tcp/4013 /ip4/127.0.0.1/tcp/4013 [/ip4/127.0.0.1/tcp/4013] addr.go:163
16:28:52.430 DEBUG   addrutil: adding resolved addr:/ip4/0.0.0.0/tcp/4013 /ip4/10.0.1.225/tcp/4013 [/ip4/127.0.0.1/tcp/4013 /ip4/10.0.1.225/tcp/4013] addr.go:163
16:28:52.430 DEBUG   addrutil: adding resolved addr:/ip4/0.0.0.0/tcp/4013 /ip4/192.168.100.1/tcp/4013 [/ip4/127.0.0.1/tcp/4013 /ip4/10.0.1.225/tcp/4013 /ip4/192.168.100.1/tcp/4013] addr.go:163
16:28:52.430 DEBUG   addrutil: adding resolved addr:/ip4/0.0.0.0/tcp/4013 /ip4/192.168.254.1/tcp/4013 [/ip4/127.0.0.1/tcp/4013 /ip4/10.0.1.225/tcp/4013 /ip4/192.168.100.1/tcp/4013 /ip4/192.168.254.1/tcp/4013] addr.go:163
16:28:52.431 DEBUG   addrutil: ResolveUnspecifiedAddresses:[/ip6/::/tcp/4013 /p2p-circuit/ipfs/QmWDUqqGwHGhWWi538P68Tr5nGNHUHNDUM6pvt4vvk9Kk3 /ip4/0.0.0.0/tcp/4013] [/ip4/127.0.0.1 /ip6/::1 /ip4/10.0.1.225 /ip4/192.168.100.1 /ip4/192.168.254.1] [/ip6/::1/tcp/4013 /p2p-circuit/ipfs/QmWDUqqGwHGhWWi538P68Tr5nGNHUHNDUM6pvt4vvk9Kk3 /ip4/127.0.0.1/tcp/4013 /ip4/10.0.1.225/tcp/4013 /ip4/192.168.100.1/tcp/4013 /ip4/192.168.254.1/tcp/4013] addr.go:208
16:28:52.431 DEBUG     swarm2: <peer.ID WDUqqG> swarm dialing <peer.ID XibDf1> swarm_dial.go:290
16:28:52.431 DEBUG     swarm2: <peer.ID WDUqqG> swarm dialing <peer.ID XibDf1> /ip4/10.0.1.59/tcp/4001 swarm_dial.go:350
16:28:57.182 DEBUG       mdns: mdns query complete mdns.go:143

Kubuxu (Member) commented Mar 2, 2018

After you start the daemon (even without any extra logging), it should print out the Private Network fingerprint. Could you compare it between the nodes and check that it is the same?
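
Equivalently (a quick sketch with placeholder paths; the fingerprint is derived from the swarm key, so identical keys mean identical fingerprints), you can compare the swarm.key files directly:

# substitute each node's IPFS_PATH
diff /path/to/node1/swarm.key /path/to/node2/swarm.key && echo "swarm keys match"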

llgoer (Author) commented Mar 2, 2018

@Kubuxu I run the same daemon binary, and all the nodes are on the same machine.
I run the three nodes like this.
base env:

CUR=$(shell pwd)
IPFSCLI=$(CUR)/bin/mac/ipfs
export LIBP2P_FORCE_PNET=1

server node:

nodeserver:export IPFS_PATH=$(CUR)/node1
nodeserver:
	rm -rf $(IPFS_PATH)
	$(IPFSCLI) init
	ipfs-swarm-key-gen > $(IPFS_PATH)/swarm.key
	$(IPFSCLI) bootstrap rm --all
	$(IPFSCLI) config Addresses.Gateway /ip4/127.0.0.1/tcp/8181
	$(IPFSCLI) config Addresses.API /ip4/127.0.0.1/tcp/5011
	$(IPFSCLI) config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4011","/ip6/::/tcp/4011"]'
	$(IPFSCLI) daemon

client node:

nodeclient:export IPFS_PATH=$(CUR)/node2
nodeclient:
	rm -rf $(IPFS_PATH)
	$(IPFSCLI) init
	cp $(CUR)/node1/swarm.key $(CUR)/node2/swarm.key 
	$(IPFSCLI) bootstrap rm --all
	$(IPFSCLI) config Addresses.Gateway /ip4/127.0.0.1/tcp/8182
	$(IPFSCLI) config Addresses.API /ip4/127.0.0.1/tcp/5012
	$(IPFSCLI) config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4012","/ip6/::/tcp/4012"]'
	$(IPFSCLI) bootstrap add /ip4/127.0.0.1/tcp/4011/ipfs/QmVQjdsdyyNiNgRusUBgzYURbdUXQjHGocQ6yAq3Wd6FST
	$(IPFSCLI) daemon

Then I add the file to the server node. Getting the video from 127.0.0.1:8181 (the server gateway) succeeds, but getting it from the client node's gateway breaks.
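
As a check (a rough sketch using the node2 path above; <video-cid> is a placeholder for the hash returned by ipfs add on the server), I can ask the client node to walk the whole DAG of the video:

# walk every block of the DAG through the client node; it stalls on whichever block cannot be fetched
IPFS_PATH=./node2 ./bin/mac/ipfs refs -r <video-cid>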

Kubuxu (Member) commented Mar 2, 2018

ipfs init generates a new keypair, so the address in the bootstrap list (on the client node) is no longer valid unless you update it manually. You can use ipfs id to get the ID of the node (the Qm... part of the bootstrap line).
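
For example, in plain shell (a sketch based on the Makefile layout above; ipfs id --format is assumed here to print just the peer ID):

# read the server node's actual peer ID, then use it in the client's bootstrap entry
SERVER_ID=$(IPFS_PATH=./node1 ./bin/mac/ipfs id --format="<id>")
IPFS_PATH=./node2 ./bin/mac/ipfs bootstrap add /ip4/127.0.0.1/tcp/4011/ipfs/$SERVER_ID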

You say that you are running 3 nodes, but you've sent configs for just two. Can you show the third?

Also, just to clear up: are you trying to add a file on the client and fetch it from the server?

llgoer (Author) commented Mar 2, 2018

Node 3 is:

nodeclient2:export IPFS_PATH=$(CUR)/node3
nodeclient2:
	rm -rf $(IPFS_PATH)
	$(IPFSCLI) init
	cp $(CUR)/node1/swarm.key $(CUR)/node3/swarm.key 
	$(IPFSCLI) bootstrap rm --all
	$(IPFSCLI) config Addresses.Gateway /ip4/127.0.0.1/tcp/8183
	$(IPFSCLI) config Addresses.API /ip4/127.0.0.1/tcp/5013
	$(IPFSCLI) config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4013","/ip6/::/tcp/4013"]'
	$(IPFSCLI) bootstrap add /ip4/127.0.0.1/tcp/4011/ipfs/QmVQjdsdyyNiNgRusUBgzYURbdUXQjHGocQ6yAq3Wd6FST
	$(IPFSCLI) daemon --debug

The ID QmVQjdsdyyNiNgRusUBgzYURbdUXQjHGocQ6yAq3Wd6FST is the server node's ID.

When I use ipfs swarm peers I can see the other two peers, so I think the connection succeeds. But when I get the video, it loads about 1% and then breaks.
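
One thing I can check while the load stalls (a sketch using the node3 paths above) is what the gateway node is still waiting for:

# blocks the node still wants but has not received yet
IPFS_PATH=./node3 ./bin/mac/ipfs bitswap wantlist
# confirm the server node is still connected
IPFS_PATH=./node3 ./bin/mac/ipfs swarm peers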

kvm2116 (Contributor) commented Aug 23, 2018

Were you able to find a solution to this?
I think I am running into a similar issue #5328

Stebalien (Member) commented

@kvm2116 are you using private networks?

kvm2116 (Contributor) commented Aug 23, 2018

I'm not using private networks; I changed the bootstrap nodes list.
I have only two nodes in the network, and the bootstrap list has only one entry: the ID of the other node.
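
Roughly like this (a sketch with placeholder values, not the actual addresses):

ipfs bootstrap rm --all
ipfs bootstrap add /ip4/<other-node-ip>/tcp/4001/ipfs/<other-node-peer-id>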

asdg-asdf commented

5 nodes is good, private networks.
