
Echo example with Routed host #275

Closed
jvsteiner opened this issue Feb 15, 2018 · 9 comments

jvsteiner commented Feb 15, 2018

Hi, I'm working on an example using a routed host, and am roughly doing the following, which is based off advice from @Stebalien here.

import (
	"context"

	rhost "github.com/libp2p/go-libp2p/p2p/host/routed"
	libp2p "github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	dsync "github.com/ipfs/go-datastore/sync"
	ds "github.com/ipfs/go-datastore"
)

// ....

ctx, shutdown := context.WithCancel(context.Background())

/* ... your libp2p setup code ... */

// Finish by constructing the basic, non-routed host
basicHost, err := libp2p.New(ctx, opts...)
if err != nil {
	return nil, err
}

// Construct a datastore (needed by the DHT). This is just a simple, in-memory thread-safe datastore.
dstore := dsync.MutexWrap(ds.NewMapDatastore())

// Make the DHT. NOTE: using the client constructor
dht := dht.NewDHTClient(ctx, basicHost, dstore)

// Make the routed host by wrapping the basic host
routedHost := rhost.Wrap(basicHost, dht)

// I think I need to Bootstrap it...
err = dht.Bootstrap(ctx)
if err != nil {
	return nil, err
}

The only real things I did differently (and I have tried a bunch of different combinations) are using the NewDHTClient constructor instead of NewDHT, and trying to bootstrap the DHT.

Currently I'm stuck at this error at runtime - although I have experienced some others:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x13a696a]

goroutine 22 [running]:
gx/ipfs/QmSxBdRmouj5T3c8UV4Y1zvShCHe7APV9RU3tEn5voLQjb/go-peerstream.(*Swarm).NewStreamWithConn(0xc4200c43c0, 0xc4204fe080, 0x1, 0x1, 0x169f601)
	/Users/jamie/Code/go/src/gx/ipfs/QmSxBdRmouj5T3c8UV4Y1zvShCHe7APV9RU3tEn5voLQjb/go-peerstream/swarm.go:328 +0xba
gx/ipfs/QmSxBdRmouj5T3c8UV4Y1zvShCHe7APV9RU3tEn5voLQjb/go-peerstream.(*Swarm).newStreamSelectConn(0xc4200c43c0, 0x169e6d0, 0xc42049e058, 0x1, 0x1, 0x1, 0x11a3cea, 0xc4200779e0)
	/Users/jamie/Code/go/src/gx/ipfs/QmSxBdRmouj5T3c8UV4Y1zvShCHe7APV9RU3tEn5voLQjb/go-peerstream/swarm.go:284 +0xd6
gx/ipfs/QmSxBdRmouj5T3c8UV4Y1zvShCHe7APV9RU3tEn5voLQjb/go-peerstream.(*Swarm).NewStreamWithGroup(0xc4200c43c0, 0x14f93e0, 0xc4204e4c40, 0x1, 0xc4204e4c50, 0x11a3d7e)
	/Users/jamie/Code/go/src/gx/ipfs/QmSxBdRmouj5T3c8UV4Y1zvShCHe7APV9RU3tEn5voLQjb/go-peerstream/swarm.go:306 +0x9b
gx/ipfs/QmSKrS9EF4V8FpD1d5FUGQiwYLNkXcxKabWgT2aWNVnQie/go-libp2p-swarm.(*Swarm).NewStreamWithPeer(0xc4200ac900, 0x1914100, 0xc42001c0d8, 0xc420106150, 0x22, 0x2, 0x1, 0x0)
	/Users/jamie/Code/go/src/gx/ipfs/QmSKrS9EF4V8FpD1d5FUGQiwYLNkXcxKabWgT2aWNVnQie/go-libp2p-swarm/swarm.go:285 +0x104
gx/ipfs/QmSKrS9EF4V8FpD1d5FUGQiwYLNkXcxKabWgT2aWNVnQie/go-libp2p-swarm.(*Network).NewStream(0xc4200ac900, 0x1914100, 0xc42001c0d8, 0xc420106150, 0x22, 0xc4204c2180, 0x0, 0x1, 0x0)
	/Users/jamie/Code/go/src/gx/ipfs/QmSKrS9EF4V8FpD1d5FUGQiwYLNkXcxKabWgT2aWNVnQie/go-libp2p-swarm/swarm_net.go:142 +0x160
gx/ipfs/QmPd5qhppUqewTQMfStvNNCFtcxiWGsnE6Vs3va6788gsX/go-libp2p/p2p/host/basic.(*BasicHost).NewStream(0xc42043d100, 0x1914100, 0xc42001c0d8, 0xc420106150, 0x22, 0xc4204c2150, 0x1, 0x1, 0x0, 0x0, ...)
	/Users/jamie/Code/go/src/gx/ipfs/QmPd5qhppUqewTQMfStvNNCFtcxiWGsnE6Vs3va6788gsX/go-libp2p/p2p/host/basic/basic_host.go:355 +0x23a
gx/ipfs/QmPd5qhppUqewTQMfStvNNCFtcxiWGsnE6Vs3va6788gsX/go-libp2p/p2p/host/routed.(*RoutedHost).NewStream(0xc4204b6f40, 0x1914100, 0xc42001c0d8, 0xc420106150, 0x22, 0xc4204c2150, 0x1, 0x1, 0x0, 0x0, ...)
	/Users/jamie/Code/go/src/gx/ipfs/QmPd5qhppUqewTQMfStvNNCFtcxiWGsnE6Vs3va6788gsX/go-libp2p/p2p/host/routed/routed.go:139 +0x186
gx/ipfs/QmSjoxpBJV71bpSojnUY1K382Ly3Up55EspnDx6EKAmQX4/go-libp2p-floodsub.(*PubSubNotif).Connected.func1(0xc42008cd10, 0x19171c0, 0xc4204fe080)
	/Users/jamie/Code/go/src/gx/ipfs/QmSjoxpBJV71bpSojnUY1K382Ly3Up55EspnDx6EKAmQX4/go-libp2p-floodsub/notify.go:20 +0xfc
created by gx/ipfs/QmSjoxpBJV71bpSojnUY1K382Ly3Up55EspnDx6EKAmQX4/go-libp2p-floodsub.(*PubSubNotif).Connected
	/Users/jamie/Code/go/src/gx/ipfs/QmSjoxpBJV71bpSojnUY1K382Ly3Up55EspnDx6EKAmQX4/go-libp2p-floodsub/notify.go:19 +0x53
exit status 2

which traces back to this (go-peerstream/swarm.go:328):

if conn.smuxConn.IsClosed() {  // <- this line
	go conn.Close()
	return nil, errors.New("conn is closed")
}

so it seems like there is supposed to be a stream muxer in there somewhere, and it isn't being set...


nuxibyte commented Feb 15, 2018

Not sure if this is the same issue but I saw something similar when trying to get the echo example to work.

I fixed my problem by setting the default mux in the opts as shown by @dmbreaker in issue #263

Worth a try?

opts := []libp2p.Option{
	libp2p.ListenAddrStrings(fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", listenPort)),
	libp2p.Identity(priv),
	libp2p.Muxer(libp2p.DefaultMuxer()), // <<
}

@jvsteiner
Contributor Author

That did the trick, actually!


jvsteiner commented Feb 15, 2018

So I have it running, but the peers don't actually find each other at the moment. Even though I've called Bootstrap, I don't see any requests going out of the local machine. My mental model of this might be wrong, though: I'm thinking that these peers should be bootstrapping onto the IPFS DHT and then finding each other that way. Is that right?

Update - looks like the problem was I unwisely chose to use dht.NewDHTClient against @Stebalien 's advice...

@jvsteiner jvsteiner reopened this Feb 15, 2018
@Stebalien
Member

@jvsteiner actually, looking at my comment, I made two conflicting suggestions (the text said to use the client...).

Regardless, either should have worked (assuming you were able to bootstrap off the IPFS network). What bootstrap nodes did you use?

@jvsteiner
Contributor Author

at least in my current code, it works if I construct it as dht.NewDHT and not if I call dht.NewDHTClient. Bootstrap receives only a context, so I didn't specify any nodes, but reading through the code, it looks like the algorithm "builds up list of peers by requesting random peer IDs" so I didn't question why I didn't need to provide a node to start.

@jvsteiner
Contributor Author

Unfortunately, re-reading the routed host's Connect code, I have realized that I haven't actually succeeded yet: the host caches addresses it has seen before, and that is what's happening because of the way my tests are constructed. How long should it take for Bootstrap to succeed? I am still not seeing any external requests being made, but if I need to provide a node to bootstrap from, I don't see how...

@jvsteiner jvsteiner reopened this Feb 15, 2018
@Stebalien
Member

So, I thought this logic lived in libp2p but apparently not... Bootstrap just tries to increase "connectedness" but requires at least one connection. In go-ipfs, we have a periodic check that sees if the number of active connections drops below some number and, if so, connects to predefined "bootstrap" nodes (that we run) and then calls Bootstrap to discover new nodes.

Code: https://github.com/ipfs/go-ipfs/blob/master/core/bootstrap.go

We should probably break this out of go-ipfs (at least the core logic) but it may be a while until we do that. For now, you can probably replicate a simplified version in your code.

Current bootstrap nodes:

/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ
/ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM
/ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu
/ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64
/ip4/178.62.158.247/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd
/ip6/2604:a880:1:20::203:d001/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM
/ip6/2400:6180:0:d0::151:6001/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu
/ip6/2604:a880:800:10::4a:5001/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64
/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd


jvsteiner commented Feb 16, 2018

Progress report: I managed to get my nodes bootstrapped and finding each other via the IPFS DHT. However, I'm running into some strange issues. For example, I need a local ipfs daemon (v0.4.13) running with pubsub enabled for it to work. And it seems like when I start the daemon via the command line it still doesn't work, yet when I start it via brew services (same version, with pubsub also enabled) it does. So I'm going to bang on it some more this weekend and see if I can get confident enough to PR something.

@jvsteiner
Contributor Author

PR: #278
