This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Make chain && state RPCs async #3480

Merged

merged 8 commits into master from async_state_rpc on Sep 1, 2019

Conversation

Contributor

@svyatonik svyatonik commented Aug 26, 2019

The problem is that currently there's no difference between the full && light RPC implementations - both are calling Client methods. And if the light Client fails to find the required data in the local db, it tries to fetch it from a remote node synchronously. This leads to the following problems:

  1. calling an RPC from the in-browser light client currently panics, because there's only a single thread there && we can never block it;
  2. I also found (while testing this fix) that calling light Client methods that request data from remote nodes sometimes panics on a local non-browser node as well. For example, if we have failed to find authorities in the cache && try to fetch them from a remote node (from within the import queue worker), it panics with an error like `cannot execute LocalPool executor from within another executor`. I believe this has been the case since we removed the dedicated import queue threads.

So the general idea is to:

  1. make separate full && light RPC implementations for RPCs that potentially fetch data from a remote node (that is what this PR is about);
  2. update the light Client backend to work only with the local database && fail with NotAvailableOnLightClient if it requires some data from a remote node. Initially I thought about a slightly different approach (which is what is implemented now), where the Client itself would fall back to fetch-from-remote if anything isn't available locally. But this won't work from within executor threads. This will be fixed in follow-up PRs - it is going to be a quite large change, though mostly removing lines/code dependencies, etc.
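The eventual light-backend behaviour described in point 2 could look roughly like this (a hedged sketch with hypothetical, heavily simplified types - `LightBackend`, a `u64` storage key, etc. are illustration-only and not the actual Substrate code): the backend consults only the local database and fails fast with a `NotAvailableOnLightClient`-style error instead of blocking on a remote fetch:

```rust
use std::collections::HashMap;

// Hypothetical error type; the real one would live alongside the other
// client errors. The point is that the backend never blocks on the network.
#[derive(Debug, PartialEq)]
enum Error {
    NotAvailableOnLightClient, // caller must go through the fetcher instead
}

// Hypothetical simplified light backend: local db only.
struct LightBackend {
    local_db: HashMap<u64, Vec<u8>>,
}

impl LightBackend {
    // Synchronous and non-blocking: either the data is in the local db,
    // or we fail fast instead of dispatching a remote request.
    fn storage(&self, key: u64) -> Result<Vec<u8>, Error> {
        self.local_db
            .get(&key)
            .cloned()
            .ok_or(Error::NotAvailableOnLightClient)
    }
}
```

Callers that actually need remote data would then go through the fetcher explicitly rather than relying on an implicit, blocking fallback inside the Client.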

As for this PR, here are some details:

  1. I've introduced the RemoteBlockchain trait, which either reads data from the local DB or prepares a request to fetch it from a remote node. This is different from the current light blockchain impl, which dispatches the request itself;
  2. I've introduced ChainBackend and StateBackend traits inside the substrate-rpc crate. They have implementations for both light and full nodes. The full implementation should behave the same as before. The light implementation uses RemoteBlockchain and Fetcher to retrieve the required data either from the local db or from a remote node;
  3. most chain and state RPC methods are now async, though on full nodes they just return future::ready().
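A rough std-only sketch of the idea behind points 2 and 3 (hypothetical names and simplified types - `ChainBackend`, `FullChain`, `poll_once` are illustration-only, not the actual substrate-rpc code): the backend trait is async, and the full-node implementation just wraps the locally available result in an already-resolved future, mirroring the `future::ready()` pattern mentioned above:

```rust
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical simplified types standing in for the generic Block/Header types.
type Hash = u64;
#[derive(Clone, Debug, PartialEq)]
struct Header { number: u64 }

type HeaderFuture = Pin<Box<dyn Future<Output = Result<Option<Header>, String>>>>;

// Sketch of the async interface: every accessor returns a future, so the
// light implementation is free to dispatch a remote request behind it.
trait ChainBackend {
    fn header(&self, hash: Hash) -> HeaderFuture;
}

// Full-node implementation: the data is always local, so the returned
// future is already resolved (`future::ready()` in the PR; here we use
// `std::future::ready` to keep the sketch dependency-free).
struct FullChain { db: HashMap<Hash, Header> }

impl ChainBackend for FullChain {
    fn header(&self, hash: Hash) -> HeaderFuture {
        Box::pin(std::future::ready(Ok(self.db.get(&hash).cloned())))
    }
}

// Demo-only helper: a full-node future is ready on the very first poll,
// so one poll with a no-op waker is enough to extract the result.
fn poll_once<T>(mut fut: Pin<Box<dyn Future<Output = T>>>) -> Option<T> {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    match fut.as_mut().poll(&mut Context::from_waker(&waker)) {
        Poll::Ready(v) => Some(v),
        Poll::Pending => None,
    }
}
```

The light-node implementation would return the same `HeaderFuture` type but resolve it via RemoteBlockchain/Fetcher, so RPC callers never need to know which kind of node they are talking to.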

This PR needs:

  • some cleanup;
  • it should be updated once Service factory refactor #3382 is in (the instantiation of the light/full RPC backends is currently just draft code for testing);
  • I'm still not sure whether we want storage/version subscriptions on light nodes - they're quite inefficient. For now I'll probably leave these working the same way on full+light, but a separate light implementation may be required in the future.

Contributor

@tomusdrw tomusdrw left a comment

Looks reasonable. I think @tomaka would be more in favour of removing any blocking code - so that both full and light clients have the same asynchronous API - that would also simplify things on the RPC level, since we would not need two separate paths, but I guess it's more practical to do the following.

As an alternative we can consider @tomaka's idea of having separate asynchronous service that handles dispatching and keep RPC fully async - as in system API currently. That would move a large part of the code there and also keep RPCs unpolluted.

@tomaka
Contributor

tomaka commented Aug 26, 2019

> As an alternative we can consider @tomaka's idea of having separate asynchronous service that handles dispatching and keep RPC fully async - as in system API currently. That would move a large part of the code there and also keep RPCs unpolluted.

That's indeed kind of the direction I was aiming for. To me, the RPC crate should handle only the request and response mechanisms, not how we actually build the response. The objective would be to remove any dependency on substrate-client (or anything, really) from substrate-rpc.
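The decoupling described here could be sketched like this (a hypothetical std-only illustration - `Request`, `spawn_service`, `rpc_best_number` are made-up names, and a real design would use the async service infrastructure rather than a raw thread): the RPC layer only forwards a request over a channel and waits for the reply, while all dispatch logic lives in a separate service task:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical request enum: each variant carries a reply channel.
enum Request {
    BestNumber(mpsc::Sender<u64>),
}

// The "separate asynchronous service": owns the data and answers requests.
fn spawn_service(best: u64) -> mpsc::Sender<Request> {
    let (tx, rx) = mpsc::channel::<Request>();
    thread::spawn(move || {
        for req in rx {
            match req {
                Request::BestNumber(reply) => {
                    let _ = reply.send(best);
                }
            }
        }
    });
    tx
}

// RPC-side handler: it knows nothing about how the answer is produced,
// so the RPC crate needs no dependency on the client at all.
fn rpc_best_number(service: &mpsc::Sender<Request>) -> u64 {
    let (reply_tx, reply_rx) = mpsc::channel();
    service.send(Request::BestNumber(reply_tx)).expect("service alive");
    reply_rx.recv().expect("service replied")
}
```

With this shape, substrate-rpc would depend only on the message types, keeping the request/response plumbing separate from how responses are built.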

@svyatonik
Contributor Author

@tomusdrw @tomaka I'm going to handle this today. So what's the verdict? Should I move both backends (the full/light separation will still be required) to the service?

If so, then I'll also move all the tests there, which will be a lot of code => the service will grow huge over time (as we move all RPC there). Is that what we want?

@tomusdrw
Contributor

@svyatonik @tomaka I'm not a fan of moving the code into the service either. Now with the traits being separated from the implementations (see #3502), perhaps we could still keep the logic within the substrate-rpc crate and just make sure that we use a single thread that processes all of the requests or sth?

I don't feel strongly about this though - I'm fine with this PR as-is.

@tomaka
Contributor

tomaka commented Aug 29, 2019

(I'm moving out of my role of Substrate refactoring, so feel free to do whatever you want.)

@svyatonik svyatonik marked this pull request as ready for review August 30, 2019 08:07
@svyatonik svyatonik added A0-please_review Pull request needs code review. and removed A1-onice labels Aug 30, 2019
@gavofyork gavofyork merged commit 0c08276 into master Sep 1, 2019
@gavofyork gavofyork deleted the async_state_rpc branch September 1, 2019 00:20
jimpo pushed a commit to jimpo/substrate that referenced this pull request Sep 3, 2019
* chain+state RPCs are async now

* wrapped too long lines

* create full/light RPC impls from service

* use ordering

* post-merge fix
Labels
A0-please_review Pull request needs code review.
4 participants