Conversation
Looks reasonable. I think @tomaka would be more in favour of removing any blocking code - so that both full and light clients have the same asynchronous API - that would also simplify things at the RPC level, since we would not need two separate paths, but I guess it's more practical to do the following.
As an alternative we can consider @tomaka's idea of having a separate asynchronous service that handles dispatching and keeps RPC fully async - as in the system API currently. That would move a large part of the code there and also keep the RPCs unpolluted.
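The "separate service that handles dispatching" idea can be sketched roughly as below. This is a minimal, hypothetical illustration (the `Request`/`Response` types and channel-based wiring are invented for the sketch, not Substrate's actual types): the RPC side only sends a request and awaits the reply, while the service side owns the backend and does all dispatching.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical request/response types; the real Substrate types differ.
enum Request {
    BestNumber,
}

enum Response {
    BestNumber(u64),
}

// The "service" side: owns the client/backend and handles dispatching.
// The RPC side only sends a request plus a reply channel and waits,
// so it stays free of any blocking client logic.
fn spawn_service() -> mpsc::Sender<(Request, mpsc::Sender<Response>)> {
    let (tx, rx) = mpsc::channel::<(Request, mpsc::Sender<Response>)>();
    thread::spawn(move || {
        for (req, reply) in rx {
            let resp = match req {
                // Stubbed backend call; a real service would query the client here.
                Request::BestNumber => Response::BestNumber(42),
            };
            let _ = reply.send(resp);
        }
    });
    tx
}

fn main() {
    let service = spawn_service();
    let (reply_tx, reply_rx) = mpsc::channel();
    service.send((Request::BestNumber, reply_tx)).unwrap();
    match reply_rx.recv().unwrap() {
        Response::BestNumber(n) => println!("best number: {}", n),
    }
}
```

With this shape, the RPC crate stays "unpolluted": it never touches the client directly and never blocks on network fetches itself.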
That's indeed roughly the direction I was aiming for. To me, the RPC crate should handle only the request and response mechanisms, but not how we actually build the response. The objective would be to remove any dependency on
@tomusdrw @tomaka I'm going to handle this today. So what's the verdict? Should I move both backends (the full/light separation will still be required) to the service? If so, then I'll also move all the tests there, which will be a lot of code => the service will grow huge over time (as we move all RPCs there). Is that what we want?
@svyatonik @tomaka I'm not a fan of moving the code into the service either. Now with traits being separated from the implementations (see #3502) perhaps we could still keep the logic within. I'm not strong on this though, I'm fine with this PR as-is.
(I'm moving out of my Substrate refactoring role, so feel free to do whatever you want.)
* chain+state RPCs are async now
* wrapped too long lines
* create full/light RPC impls from service
* use ordering
* post-merge fix
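The "chain+state RPCs are async now" commit implies that a full node, which can answer synchronously, wraps its already-computed response into an immediately-resolved future. As a std-only sketch (the no-op waker is hand-rolled here purely so the future can be polled once without a runtime), this is how a `future::ready()`-style value completes on its very first poll:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Build a no-op Waker so we can poll a future once without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    // A full-node RPC can compute its answer synchronously and wrap it.
    let mut response = std::future::ready(42u64);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // An already-resolved future completes on the very first poll,
    // so the async API costs the full client almost nothing.
    match Pin::new(&mut response).poll(&mut cx) {
        Poll::Ready(value) => println!("resolved immediately: {}", value),
        Poll::Pending => unreachable!("future::ready never returns Pending"),
    }
}
```

The light client, in contrast, may return a future that stays `Pending` until the remote fetch completes - both sides still expose the same asynchronous signature.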
The problem is that currently there's no difference between the full && light RPC implementations - both are calling `Client` methods. And if the light `Client` fails to find the required data in the local db, it tries to fetch it from a remote node synchronously. This leads to the following problems: `Client` methods that request data from remote nodes sometimes lead to a panic on a local non-browser node as well. For example, if we have failed to find authorities in the cache && are trying to fetch these from remote (from within the import queue worker), it panics with `cannot execute LocalPool executor from within another executor`. I believe this has been the case since the moment we removed the dedicated import queue threads.

So the general idea is to:
* make the `Client` backend work only with the local database && fail with `NotAvailableOnLightClient` if it requires some data from a remote node.

Initially I thought about a slightly different approach (which is implemented now), where `Client` itself would fall back to fetch-from-remote if anything isn't available locally. But this won't work from within executor threads. This will be fixed in follow-up PRs - it is going to be quite a large change, though mostly removing lines/code dependencies, etc.

As for this PR, here are some details:
* introduced the `RemoteBlockchain` trait, which either reads data from the local DB, or prepares a request to fetch it from a remote node. This is different from the current light blockchain impl, which dispatches the request itself;
* introduced the `ChainBackend` and `StateBackend` traits inside the `substrate-rpc` crate. They have implementations for both light and full nodes. The full implementation should be the same as before. The light implementation will use `RemoteBlockchain` and `Fetcher` to retrieve the required data either from the local db, or from a remote node;
* responses are now futures; the full implementation simply wraps its results in `future::ready()`.

This PR needs:
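The `RemoteBlockchain` idea described above - return local data when available, otherwise hand back a *prepared* request instead of dispatching it - can be sketched as follows. Names such as `LocalOrRemote` and `RemoteHeaderRequest` are illustrative placeholders here, not necessarily the exact types in the PR:

```rust
// A prepared request the caller (e.g. a Fetcher) can dispatch later.
#[derive(Debug, PartialEq)]
struct RemoteHeaderRequest {
    block_number: u64,
}

// Either the locally-available data, or the request needed to fetch it.
#[derive(Debug, PartialEq)]
enum LocalOrRemote<T> {
    Local(T),
    Remote(RemoteHeaderRequest),
}

// Sketch of the RemoteBlockchain idea: never dispatch network requests
// itself, so callers stay free to await the fetch asynchronously.
trait RemoteBlockchain {
    fn header(&self, number: u64) -> LocalOrRemote<String>;
}

struct LightBlockchain {
    // Block numbers whose headers are cached in the local db.
    local_headers: Vec<u64>,
}

impl RemoteBlockchain for LightBlockchain {
    fn header(&self, number: u64) -> LocalOrRemote<String> {
        if self.local_headers.contains(&number) {
            LocalOrRemote::Local(format!("header #{}", number))
        } else {
            // Prepare the request; do NOT fetch synchronously here.
            LocalOrRemote::Remote(RemoteHeaderRequest { block_number: number })
        }
    }
}

fn main() {
    let chain = LightBlockchain { local_headers: vec![0, 1] };
    match chain.header(1) {
        LocalOrRemote::Local(h) => println!("local: {}", h),
        LocalOrRemote::Remote(_) => println!("needs fetch"),
    }
    match chain.header(7) {
        LocalOrRemote::Local(h) => println!("local: {}", h),
        LocalOrRemote::Remote(req) => println!("fetch block {}", req.block_number),
    }
}
```

Because the trait only *prepares* remote requests, it can safely be called from within executor threads - the synchronous fetch that caused the `LocalPool` panic never happens inside it.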