
Leverage running IPFS daemon instead of spawning a new one. #677

Gozala opened this issue Jul 11, 2019 · 3 comments


Gozala commented Jul 11, 2019

Radicle ends up operating its own IPFS node / daemon even if one is already running on the system. This is not ideal, as IPFS is already resource-intensive and running multiple daemons multiplies the strain.
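
A minimal sketch of the idea, assuming the existing daemon exposes its HTTP API on the default 127.0.0.1:5001 (the names here are illustrative, not radicle's actual startup code):

```go
// Sketch: prefer an already-running IPFS daemon, spawn one only as a fallback.
// Assumes the default API address 127.0.0.1:5001; names here are illustrative.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// daemonRunning probes the local IPFS HTTP API. Newer go-ipfs versions expect
// POST for API calls, so we POST to /api/v0/version.
func daemonRunning() bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Post("http://127.0.0.1:5001/api/v0/version", "", nil)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	if daemonRunning() {
		fmt.Println("reusing the IPFS daemon already listening on :5001")
		return
	}
	// Fallback: today's behaviour, a dedicated daemon owned by the application.
	fmt.Println("no daemon found, spawning a dedicated one")
	if err := exec.Command("ipfs", "daemon").Start(); err != nil {
		fmt.Println("failed to start ipfs daemon:", err)
	}
}
```

If the probe succeeds, radicle could talk to the existing daemon over its HTTP API; otherwise it falls back to spawning its own daemon as it does today.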


jkarni commented Jul 12, 2019

I agree it's not ideal, but our experiments indicated that joining the main network (which, as I understand it, is currently still the only way to have a single daemon) substantially decreases performance.

I haven't profiled IPFS to know where the baseline resource usage comes from, but if it's from maintaining connections and discovering peers, I'm also a little skeptical that a unified IPFS daemon (which is what I presume the go-ipfs-daemon comment in the linked thread is about) wouldn't either decrease performance or end up with roughly the same resource usage as independent IPFS daemons, since the peers/connections the radicle IPFS node wants are ultimately a largely disjoint set from the rest of IPFS. (E.g., radicle doesn't need to know about DHT items for anything that isn't radicle-related, and for the most part won't retrieve IPFS data from anything that wasn't put there by another radicle instance.)
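
For illustration, one way a dedicated daemon can be kept off the main network is to give it its own repo and replace the default bootstrap list with radicle-only seed peers; a sketch using the standard ipfs CLI (the IPFS_PATH and the seed multiaddr below are placeholders, not real radicle values):

```go
// Illustration only: keeping a dedicated daemon off the main IPFS network by
// pointing it at its own repo and a radicle-only bootstrap list.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("ipfs", args...)
	// A separate repo keeps this configuration away from the user's main daemon.
	cmd.Env = append(os.Environ(), "IPFS_PATH="+os.Getenv("HOME")+"/.radicle/ipfs")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("ipfs %v: %v", args, err)
	}
}

func main() {
	// Drop the default (main network) bootstrap peers...
	run("bootstrap", "rm", "--all")
	// ...and add only a radicle seed node (placeholder address).
	run("bootstrap", "add", "/ip4/198.51.100.1/tcp/4001/ipfs/QmSeedPeerPlaceholder")
}
```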


Gozala commented Jul 12, 2019

> I agree it's not ideal, but our experiments indicated that joining the main network (which, as I understand it, is currently still the only way to have a single daemon) substantially decreases performance.

That is interesting & worth discussing & investigating in that other thread. I presume you mean that radicle becomes slower at propagating changes because the node is busy doing other stuff. If so, it seems like IPFS needs to get better at resource management and prioritize active tasks submitted by clients over whatever else it is doing while conceptually idle.

> I haven't profiled IPFS to know where the baseline resource usage comes from, but if it's from maintaining connections and discovering peers, I'm also a little skeptical that a unified IPFS daemon (which is what I presume the go-ipfs-daemon comment in the linked thread is about) wouldn't either decrease performance or end up with roughly the same resource usage as independent IPFS daemons, since the peers/connections the radicle IPFS node wants are ultimately a largely disjoint set from the rest of IPFS. (E.g., radicle doesn't need to know about DHT items for anything that isn't radicle-related, and for the most part won't retrieve IPFS data from anything that wasn't put there by another radicle instance.)

If a unified daemon performs worse than separate peers, it implies sub-optimal resource management & is worth figuring out. The truth is, if I have multiple IPFS nodes, each comes with its own overhead, so collectively they make my machine perform more poorly.

I think your remark on disjoint peer sets is a good point, however as (and if) IPFS gains more adoption that may not stay true. Chances are that the peers I'm collaborating with are also the peers I'm interacting with through my Textile app and, in the future, drafting roadmaps with on https://anytype.io/. Furthermore, it would be far more convenient if all of that was mirrored by the same server peer, which seems more straightforward with a single node than on a per-app basis.

Anyway, I started all these issues to start a conversation across groups in the hope of identifying goals & constraints and attempting to converge on some reasonable plan to improve things.


kim commented Jul 15, 2019

I might be mistaken, but the performance issues seemed mainly around IPNS and PubSub, not data replication.
