Conversation
if( !trx.context_free_actions.empty()) {
   bsoncxx::builder::basic::array action_array;
   for( const auto& cfa : trx.context_free_actions ) {
      process_action( trx_id_str, cfa, action_array );
context_free_actions need to be marked with something to distinguish them from non-context-free actions (actions).
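A minimal sketch of one way to do that, using a hypothetical helper rather than the PR's actual process_action signature: stamp each action document with a context_free flag when it is built.

```cpp
#include <bsoncxx/builder/basic/array.hpp>
#include <bsoncxx/builder/basic/document.hpp>
#include <bsoncxx/builder/basic/kvp.hpp>
#include <string>

using bsoncxx::builder::basic::kvp;
using bsoncxx::builder::basic::make_document;

// Hypothetical helper: append an action document carrying a context_free flag
// so the two kinds of actions can be told apart in the actions collection.
void append_action_doc( const std::string& trx_id_str, const std::string& name,
                        bool context_free,
                        bsoncxx::builder::basic::array& action_array ) {
   action_array.append( make_document(
      kvp( "trx_id", trx_id_str ),
      kvp( "name", name ),
      kvp( "context_free", context_free ) ) );
}
```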
Great news!

Hi @heifner, have you built this successfully?
Dockerfile:
Build command:
Can you look into this, or am I missing something?

I'm seeing the same error @noprom
@noprom @liamcurry You need to upgrade your mongo drivers. See the eosio_build scripts (sketched below).
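A hedged sketch of the driver install those build scripts perform; the versions, tags, and cmake flags here are illustrative assumptions, so check scripts/eosio_build_ubuntu.sh (or your platform's script) for the exact ones.

```sh
# Illustrative versions; the real ones live in the eosio_build scripts.
git clone -b 1.10.2 https://github.com/mongodb/mongo-c-driver.git
cd mongo-c-driver
cmake -DCMAKE_BUILD_TYPE=Release .
make && sudo make install
cd ..
git clone -b r3.3.0 https://github.com/mongodb/mongo-cxx-driver.git
cd mongo-cxx-driver
cmake -DCMAKE_BUILD_TYPE=Release .
make && sudo make install
```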
Hi @heifner, will try that immediately.
I am able to compile and populate Mongo by modifying the Dockerfile to use the following:

FROM ubuntu:18.04
RUN apt-get update \
    && apt-get install -y git sudo
RUN git clone -b release/1.1 https://github.com/EOSIO/eos.git
WORKDIR /eos
RUN git fetch --all --tags --prune \
    && git merge -m "merge" --commit origin/gh#3030-enable-mongodb
RUN git submodule update --init --recursive
RUN echo 1 | ./eosio_build.sh

The MongoDB plugin is showing: …
@liamcurry Did you run without a …? Looks like I should remove that info log message, as I see it can be rather annoying. I need to think of how to load the abi if you want to start without processing everything from genesis.

UPDATE: I noticed your block numbers are very low. So what did you do?
@liamcurry I believe I was able to duplicate your issue. Please try again now.
Hi @heifner, I was able to build the builder/Dockerfile, so I made a PR: #4356.

Build my image with mongo: … And it builds successfully.

Start my nodeos with mongo_db_plugin: I use my docker-compose file with my own built image: docker-compose-eosblock-init.yml

Enable …
@noprom Looks like a connection issue. The plugin is unable to connect to the mongo database.
@heifner that fixed the log errors, thanks. I am running a local dev chain with …

@noprom do you see any log messages from MongoDB after you see that error from …?
@noprom Sorry, I don't use docker.

@liamcurry Thanks, that was my guess.

@heifner I'm not concerned about this, but FYI I see the error message again when setting contracts: …
Hi @liamcurry, I do set up a mongo instance before running nodeos; can you show me your commands for starting nodeos?
@noprom, using docker-compose: in the same directory as your docker-compose.yml, create a start_nodeosd.sh script:

#!/bin/sh
sleep 20
exec /opt/eosio/bin/nodeos --data-dir=/data --genesis-json=/etc/nodeos/genesis.json --config-dir=/etc/nodeos --delete-all-blocks --mongodb-wipe --mongodb-uri mongodb://mongo:27017/EOS

Make sure this file is executable by running chmod +x start_nodeosd.sh. Then in your docker-compose.yml:

nodeosd:
  image: noprom/eos:mongo
  command: /start_nodeosd.sh
  hostname: nodeosd
  links:
    - mongo
  ports:
    - 8891:8888
    - 9880:9876
  expose:
    - "8888"
  volumes:
    - /data/eos/nodeos-data-volume:/etc/nodeos
    - /data/eos/nodeos-data-volume/data:/data
    - ./start_nodeosd.sh:/start_nodeosd.sh:ro
@liamcurry, cool! Thanks! Already syncing the data to the mongo database: …
@heifner This error happens only on: …

However, this error does not happen on: …

After upgrading to Ubuntu 17.10, I can build EOS successfully.
@jafri Not sure of the use case for running with a database that already has blocks? You can wipe the database manually (a sketch follows) or via the command line …. There is an …. There does seem to be an issue with the ….
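A minimal sketch of the manual wipe, assuming the EOS database name used in the --mongodb-uri examples in this thread:

```sh
# Drops the plugin's entire database; equivalent in effect to starting
# nodeos with --mongodb-wipe (which the start script above also uses).
mongo EOS --eval 'db.dropDatabase()'
```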
I was thinking more of running through the database and finding any inconsistencies (a missed block somewhere, perhaps), in which case the indexer would upsert it. The exclusion of onblock would make action collection more manageable.
@heifner I want to ask you a question: when executing the git merge -m "merge" --commit origin/gh#3030-enable-mongodb command, …

@maixiaohe it's already merged into release/1.1

@jafri Does that mean I can skip this step and just execute ./eosio_build.sh?

@maixiaohe …
To all who may have been encountering issues on the branch …: never, ever use this buggy revision …. The stable revision that I've been using is: …
@heifner does this not pick up inline actions? For example, transfers from vpay; the query …

@jafri …

@heifner The plugin seems to insert a document twice using …. This was run with the following command: …
@Jeiwan genesis block 1 is not signaled to accepted_block. Is there something in the genesis block you need?
@jafri The insert instead of upsert was by design. However, I think I'll give upsert a try and see how it goes.
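For illustration, a minimal sketch of what an upsert looks like with the mongocxx driver the plugin uses; the database and collection names follow the examples in this thread, and the trx_id filter key and values are assumptions, not the plugin's actual code.

```cpp
#include <bsoncxx/builder/basic/document.hpp>
#include <bsoncxx/builder/basic/kvp.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/options/update.hpp>
#include <mongocxx/uri.hpp>

using bsoncxx::builder::basic::kvp;
using bsoncxx::builder::basic::make_document;

int main() {
   mongocxx::instance inst{};  // one driver instance per process
   mongocxx::client conn{ mongocxx::uri{ "mongodb://localhost:27017" } };
   auto trans = conn["EOS"]["transactions"];

   mongocxx::options::update opts;
   opts.upsert( true );  // insert when no document matches the filter

   // Keying on trx_id means a re-signaled transaction updates the existing
   // document instead of creating a duplicate (filter value illustrative).
   trans.update_one(
      make_document( kvp( "trx_id", "abc123" ) ),
      make_document( kvp( "$set", make_document( kvp( "irreversible", true ) ) ) ),
      opts );
}
```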
@heifner I'm building a blockchain explorer and want to have block 1 in the database. But I think it'll be ok to just put it there manually.

Another question: it feels like the node synchronizes much slower when the plugin is enabled. I mean a bare node, without blocks and other data. For example, I received this message several minutes ago: …

But there are only 956 blocks in Mongo, and it keeps saving transaction traces and actions. Is it true that the plugin blocks the synchronization until all the previous data is saved? And is saving (or parsing?) the data to Mongo computationally expensive? The node uses just one core on my laptop and the load is 100%.
I did some tests and, yes, the plugin slows down the synchronization: without it, the first 1000 blocks are synchronized much faster. Is there a way to improve this? Maybe by adjusting the queue size between the node and MongoDB? If I fully synchronize the node without the plugin and then replay all blocks with the plugin enabled, will it be faster?

So it seems like transaction trace and action parsing takes a lot of time and CPU. New blocks won't be saved until all previous traces are parsed and saved. It's likely that the node won't be able to process new blocks in less than 500 milliseconds, and it's already so for some blocks.
@Jeiwan The mongo plugin is cpu intensive. In future versions, we may add some additional threads or otherwise optimize it. As it is, there is a separate thread for serialization and pushing to mongo. However, on replay (and on a slow machine) it will push back on the main thread if the queue fills up. Giving it a larger queue size gives it more room before it pushes back on the main thread.
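A hedged sketch of the relevant config.ini entries; the option names match the 1.1-era plugin discussed in this thread, but the queue-size value is illustrative, so check nodeos --help for your build's exact names and defaults.

```ini
plugin = eosio::mongo_db_plugin
mongodb-uri = mongodb://localhost:27017/EOS
# A larger queue gives the mongo thread more room before it pushes
# back on the main thread during replay (value illustrative).
mongodb-queue-size = 1024
```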
@heifner Got it, thanks!

@Jeiwan Other than mongo or other public block explorers, no.

If you get an error something like: …

@heifner I am sometimes seeing duplicate creation of accounts after the node is fully synced.

@jafri I can see how that could happen. I will get a fix in.
EDIT: Solution: #4304 (comment)

@heifner You mention that … EOS/actions …

Is an accepted transaction irreversible? If not, how do we roll back an account's abi when the transaction fails?

When the node is fully synced, the same transaction emits the accepted_transaction signal twice, so the data is duplicated. applied_transaction is the same case. How do we solve it? @heifner
@qq1032246642 Can you create an issue and provide some examples?

At the time of writing, the current block collected is 6,535,852 on the machine I used; however, the latest block height is 9,664,262. Not sure if this is due to the machine used as a nodeos, or a mongodb plugin issue. Could anyone advise on this, or on how to speed up the sync? The machine has been running for around a week plus. The machine specs: …

Thanks.
I can't find block_num 1 in the mongo plugin's blocks collection after initializing with genesis.json:

{
  "initial_timestamp": "2018-06-08T08:08:08.888",
  "initial_key": "EOS7EarnUhcyYqmdnPon8rm7mBCTnBoot6o7fE2WzjvEX2TdggbL3",
  "initial_configuration": {
    "max_block_net_usage": 1048576,
    "target_block_net_usage_pct": 1000,
    "max_transaction_net_usage": 524288,
    "base_per_transaction_net_usage": 12,
    "net_usage_leeway": 500,
    "context_free_discount_net_usage_num": 20,
    "context_free_discount_net_usage_den": 100,
    "max_block_cpu_usage": 200000,
    "target_block_cpu_usage_pct": 1000,
    "max_transaction_cpu_usage": 150000,
    "min_transaction_cpu_usage": 100,
    "max_transaction_lifetime": 3600,
    "deferred_trx_expiration_window": 600,
    "max_transaction_delay": 3888000,
    "max_inline_action_size": 4096,
    "max_inline_action_depth": 4,
    "max_authority_depth": 6
  },
  "initial_chain_id": "0000000000000000000000000000000000000000000000000000000000000000"
}
Resolves #3030
The mongo db plugin has been updated to work with the 1.1 release.

It is recommended that a large --abi-serializer-max-time-ms value be passed to the nodeos running the mongo_db_plugin, as the default abi serializer time limit may not be large enough to serialize large blocks; an example invocation is sketched below.
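A sketch of such an invocation; the URI and timeout value are illustrative, not recommendations:

```sh
nodeos --plugin eosio::mongo_db_plugin \
       --mongodb-uri mongodb://localhost:27017/EOS \
       --abi-serializer-max-time-ms 5000
```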
It will create the following collections:
accounts - created on accepted transaction
-- currently limited to just the account name, plus the ABI if a contract is set on the account
actions - created on accepted transaction
-- action_num - the # of the action in its transaction
-- trx_id
-- account
-- name
-- authorization
--- actor
--- permission
-- data
-- hex_data
block_states - created on accepted block
-- block_num
-- block_id
-- block_header_state
-- validated
-- in_current_chain
blocks - created on accepted block
-- block_num
-- block_id
-- irreversible - updated to true on irreversible block
-- block
transaction_traces - created on applied transaction
-- transaction_trace
transactions - created on accepted transaction
-- trx_id
-- irreversible - updated to true on irreversible block
-- transaction_header
-- signing_keys
-- actions
-- context_free_actions
-- transaction_extensions
-- signatures
-- context_free_data
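For orientation, a few hedged example queries against these collections via the mongo shell CLI; the EOS database name follows the --mongodb-uri examples in this thread, and the field values are illustrative:

```sh
# Recent eosio.token transfers recorded in the actions collection:
mongo EOS --quiet --eval 'db.actions.find({ account: "eosio.token", name: "transfer" }).limit(5).pretty()'
# Highest block already marked irreversible:
mongo EOS --quiet --eval 'db.blocks.find({ irreversible: true }).sort({ block_num: -1 }).limit(1).pretty()'
# Look up a transaction by id (id value illustrative):
mongo EOS --quiet --eval 'db.transactions.findOne({ trx_id: "abc123" })'
```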