
Figure out HTTP RPC API access with Manifest V3 #1036

Closed
lidel opened this issue Dec 8, 2021 · 11 comments
Labels
exp/expert Having worked on the specific codebase is important need/analysis Needs further analysis before proceeding P1 High: Likely tackled by core team if no one steps up

Comments

@lidel
Member

lidel commented Dec 8, 2021

This is part of #666

Background

IPFS Companion talks to the local go-ipfs node over the HTTP RPC API.
The API comes with CORS protections in the browser context (ipfs/ipfs-docs#959), and to avoid the need for CORS safelisting, Companion adjusts the Origin header.

Problem

It seems we can no longer modify headers with onBeforeSendHeaders, because the blocking variant of that API is gone in Manifest V3.
This means Companion is not able to talk to the RPC API, which is needed for file uploads, content-path resolution, DNSLink resolution, and so on.
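
For reference, the MV2-era approach that stops working looks roughly like the sketch below (the RPC address and the replacement value are illustrative; Companion's real listener differs in details):

```js
// MV2-only sketch (no longer possible in MV3): rewrite the Origin header on
// requests to the local Kubo RPC API so the daemon accepts them without a CORS
// safelist entry. The address and replacement value below are assumptions.
const RPC_API = 'http://127.0.0.1:5001'

browser.webRequest.onBeforeSendHeaders.addListener(
  (details) => {
    if (!details.url.startsWith(`${RPC_API}/api/v0/`)) return {}
    const requestHeaders = details.requestHeaders.map((header) =>
      header.name.toLowerCase() === 'origin'
        ? { name: header.name, value: RPC_API } // replace the extension's own origin
        : header
    )
    return { requestHeaders }
  },
  // match patterns carry no port, so match the host and filter by URL above
  { urls: ['http://127.0.0.1/*', 'http://localhost/*'] },
  ['blocking', 'requestHeaders'] // 'blocking' is exactly what MV3 takes away
)
```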

Research solutions

  • see if the extension Origin is stable enough to be implicitly safelisted in go-ipfs by default
    • .. if that does not work, figure out some onboarding flow for adding the Origin to the CORS safelist in go-ipfs (the manual version of that config is sketched after this list)
  • close eyes and wait until the Manifest V3 problem disappears 🙈 (jk, unlikely)
  • ? (suggestions welcome)
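
For context, the manual safelisting that such an onboarding flow would have to automate looks roughly like this (the extension origin is a placeholder; the daemon needs a restart afterwards):

```sh
# manual CORS safelisting in go-ipfs; <extension-id> is a placeholder
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["chrome-extension://<extension-id>"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["GET", "POST", "PUT"]'
ipfs shutdown   # restart the daemon for the new headers to take effect
```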
@lidel lidel added exp/expert Having worked on the specific codebase is important P1 High: Likely tackled by core team if no one steps up need/analysis Needs further analysis before proceeding labels Dec 8, 2021
@meandavejustice
Collaborator

It looks like adding the extension UUID for Chrome is straightforward enough:

Once uploaded to the Chrome Web Store, your extension ID is fixed and cannot be changed any more.
The ID is derived from the .pem file that was created the first time you (or the Chrome Web Store) packed the extension in a .crx file.

🔗 https://stackoverflow.com/questions/21497781/how-to-change-chrome-packaged-app-id-or-why-do-we-need-key-field-in-the-manifest

Firefox is a different story 😭

Yes inside the extension you can do whatever you want with the dynamic identifier, unfortunately the issue isn’t using the URI inside of the extension. This design makes it impossible to whitelist an authentication callback url inside the extension. The callback url would necessarily need to be hosted somewhere outside of the extension, with a well know address to be added to a whitelist.

🔗 https://discourse.mozilla.org/t/constant-or-well-known-mox-extension-uuid-for-webextension/9701/28246

Bugzilla ticket

@meandavejustice
Collaborator

Firefox not having a consistent UUID isn't actually a blocker here, as we can still use blocking webRequest in Firefox to set the access control header for access to the IPFS RPC API. We will continue to track it so that one day we can have a single code path for HTTP API access.

@whizzzkid
Contributor

AdGuard seems to have released their implementation of MV3-based ad blocking:

But since they are solving a different problem than this one, I am wondering if it would be possible to implement an API-authentication PSK/token in go-ipfs. We could then dynamically add CORS headers safelisting any referrer/origin that provides a valid token in the request. That would unblock both Chrome and Firefox and simplify handling of extension origins.
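
For illustration only, a token-carrying RPC call from the extension could look like the sketch below; the pairing flow and the header are hypothetical and do not exist in go-ipfs today:

```js
// Hypothetical sketch: go-ipfs has no PSK/token mechanism today. The idea is that
// the daemon would accept any request carrying a valid pre-shared token,
// regardless of which Origin header the browser attaches to it.
const RPC_API = 'http://127.0.0.1:5001'

async function rpcCall (path, pairingToken) {
  // pairingToken would come from a one-time pairing step with the daemon (hypothetical)
  return fetch(`${RPC_API}${path}`, {
    method: 'POST', // Kubo RPC endpoints expect POST
    headers: { Authorization: `Bearer ${pairingToken}` } // header name is an assumption
  })
}

// usage: const res = await rpcCall('/api/v0/version', storedToken)
```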

@lidel
Member Author

lidel commented Oct 18, 2022

@whizzzkid the main problem is the need for a manual config change in go-ipfs. Companion users should not be asked to manually change the config of the IPFS daemon; that is UX self-sabotage.

Whatever we come up with needs to provide an end-to-end user onboarding flow that feels seamless.

@whizzzkid
Contributor

@lidel I think we need to work on:

  • Daemon discovery and capability detection (see the probe sketch below)
  • Configuring Companion to work with the daemon.
  • A fallback flow for advanced users.
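
A rough sketch of what the discovery step could look like, assuming the default local ports (5001 for RPC, 8080 for the gateway); the capability model itself is still to be designed:

```js
// Probe the default local endpoints and record what Companion can rely on.
// Ports and the shape of "capabilities" are assumptions for illustration.
async function discoverLocalNode () {
  const capabilities = { rpc: false, gateway: false }
  try {
    // RPC API answers only if the daemon runs and its CORS/Origin checks pass
    const res = await fetch('http://127.0.0.1:5001/api/v0/version', { method: 'POST' })
    capabilities.rpc = res.ok
  } catch (_) { /* no daemon, or request blocked */ }
  try {
    // Gateway check: HEAD request for the well-known empty UnixFS directory CID
    const res = await fetch('http://127.0.0.1:8080/ipfs/QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn/', { method: 'HEAD' })
    capabilities.gateway = res.ok
  } catch (_) { /* gateway not reachable */ }
  return capabilities
}
```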

@lidel
Member Author

lidel commented Oct 19, 2022

Indeed, some notes below @whizzzkid

Current landscape

In case this is useful, here is how things look in the context of our MV3 migration, which needs to happen at the beginning of 2023:

So we are good.. for now.

What could go wrong in the future?

In both cases, Companion users are cut off and (assuming RPC access is required) have to set CORS headers manually, which is a terrible, terrible UX, and we should avoid it at all cost.

How can we prepare for the worst?

Have a plan in place. Ideally something that leverages our Gateway work, which aims to decrease the need for /api/v0 for data retrieval and (in the near future) data ingestion.

So far I see two approaches we could explore (fine to do so in parallel, just limit the time spent on RPC hacks):

  1. Keep things as they are, but find a way to minimize pain around Kubo RPC port access
    • find creative ways to safelist extension Origin via CORS
    • worst case, we could have an opt-in or onboarding screen for advanced users who want to manage the daemon via RPC, with instructions for setting up CORS and/or an access token – or decide to remove things that can't be done with the gateway alone.
  2. Slowly remove the need for Kubo RPC port access and focus on Companion using the Gateway instead (a gateway-only sketch follows after this list)
    • Does Companion need to talk to the RPC API (/api/v0) at all? The main features are around gateway redirect, and gateways are becoming more useful over time, removing the need for RPC for basic things.
    • If we end up in a pickle, the default post-install state could be implementation-agnostic and Gateway-only.
    • Gateways have far more liberal CORS than the RPC API in Kubo.
    • Some things like Quick Upload may need to be parked until we have writable/ingestion gateways.
    • this applies to everything else that is missing; this is a good opportunity to dogfood the spec process: identify gaps and propose IPIPs with gateway improvements.
    • the end game is Companion being independent of Kubo, only requiring a spec-compliant gateway (could be Kubo, Iroh, or something else)
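
To make approach 2 concrete, a gateway-only probe could look like the sketch below (the local gateway address is an assumption; the real logic would live in Companion's redirect code):

```js
// Gateway-only sketch: instead of asking the RPC API to resolve or fetch a
// content path, ask the local gateway directly and redirect there if it answers.
// The gateway address is an assumption (Kubo's default is port 8080).
const LOCAL_GATEWAY = 'http://localhost:8080'

async function localGatewayUrlFor (ipfsPath) {
  // ipfsPath is e.g. '/ipfs/<cid>/' or '/ipns/en.wikipedia-on-ipfs.org/'
  try {
    const res = await fetch(`${LOCAL_GATEWAY}${ipfsPath}`, { method: 'HEAD' })
    if (res.ok) return `${LOCAL_GATEWAY}${ipfsPath}` // no RPC involved
  } catch (_) { /* local gateway not running */ }
  return null // fall back to a public gateway or leave the request untouched
}
```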

Ideas welcome

Open for ideas / suggestions, especially radical ones.

@lidel
Member Author

lidel commented Jan 19, 2023

Update: @ikreymer is experimenting with an Origin header override via declarative_net_request and ModifyHeaderInfo in Chromium 101+, to safelist access to the Kubo RPC port without asking the user to set custom CORS headers in their config.

The caveat is to ensure the override is applied only to requests made by the extension (override Origin only when it matches the Origin of the extension itself), so we still block third-party websites from having admin access to the RPC API.

I think we can use initiatorDomains for this, and put Companion's Origin there. This way it works the same way it did in MV2.
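
A minimal sketch of what such a rule could look like (the rule id, RPC address, and Origin value are assumptions; @ikreymer's experiment may differ in details):

```js
// MV3 sketch: rewrite the Origin header only for requests initiated by this
// extension, via a declarativeNetRequest dynamic rule. Needs the
// 'declarativeNetRequest' permission, host permissions for the RPC origin,
// and Chromium 101+ for initiatorDomains. Values below are assumptions.
const RPC_ORIGIN = 'http://127.0.0.1:5001'

chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [{
    id: 1,
    priority: 1,
    action: {
      type: 'modifyHeaders',
      requestHeaders: [
        // ModifyHeaderInfo: replace chrome-extension://<id> with the API's own origin
        { header: 'Origin', operation: 'set', value: RPC_ORIGIN }
      ]
    },
    condition: {
      urlFilter: `${RPC_ORIGIN}/api/v0/`,
      // the extension's ID acts as the "domain" of its own chrome-extension:// initiator
      initiatorDomains: [chrome.runtime.id],
      resourceTypes: ['xmlhttprequest']
    }
  }]
})
```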

@whizzzkid
Contributor

The caveat is to ensure the override is applied only to requests made by the extension (override Origin only when it matches the Origin of the extension itself), so we still block third-party websites from having admin access to the RPC API.

I think we can use initiatorDomains for this, and put Companion's Origin there.

@lidel I am not sure I understand how this works. Is there more info available on it? Why would a third-party website have access if the RPC endpoint does not respond to requests that don't originate from the extension URL? Also, if the RPC port allows access from an extension which modifies the request to set the Origin header, then what's stopping a malicious extension from doing the same? My concern is that an extension could be created which sets the Origin header to our extension URL and then exploits the RPC port for malicious purposes. Hence I was always in favour of:

  1. Clamping down on access over RPC; basic metrics and simple tasks should be refactored to be publicly accessible.
  2. For more intrusive changes like uploads, an auth mechanism that pairs the node/gateway with the extension, with pairing info unique to that pair so it cannot be exploited.

Also, this won't work on Firefox.

@ikreymer

@lidel I am not sure I understand how this works. Is there more info available on it? Why would a third-party website have access if the RPC endpoint does not respond to requests that don't originate from the extension URL? Also, if the RPC port allows access from an extension which modifies the request to set the Origin header, then what's stopping a malicious extension from doing the same? My concern is that an extension could be created which sets the Origin header to our extension URL and then exploits the RPC port for malicious purposes. Hence I was always in favour of:

  1. Clamping down on access over RPC; basic metrics and simple tasks should be refactored to be publicly accessible.
  2. For more intrusive changes like uploads, an auth mechanism that pairs the node/gateway with the extension, with pairing info unique to that pair so it cannot be exploited.

I think @lidel is saying that declarativeNetRequest rules actually apply globally (I was surprised by this too!) and not just to requests originating from the extension. In practice this can't be abused by websites in this case, as access to localhost is further restricted for any third-party website (a fetch to localhost currently fails). But the risk is from any third-party extension, not from any website, as you point out. This is not quite as bad, but still kind of bad, because any Chromium extension out there could access the local node by spoofing the Origin header, in both MV2 and MV3.
Some other auth mechanism is needed to make this work, both with a stock Kubo and with the Kubo that's bundled with Brave, where there is perhaps more flexibility for custom configuration.

@whizzzkid
Contributor

Thanks @ikreymer 🙏🏽, that clears it up. I was wondering if I had missed something about this way of implementing it. We'll need to work on a pairing mechanism between the extension and a node (maybe multiple nodes).

And this should be automatic, in a way that we can trigger by default in implementations like Brave.

@whizzzkid
Contributor

@lidel I'm marking this as done. Fixed in #1250

@github-project-automation github-project-automation bot moved this from Needs Grooming to Done in IPFS-GUI (PL EngRes) Sep 17, 2023