* Comments and todos
Use `snapshot_sync` as logging target
* fix compilation
* More todos, more logs
* Fix picking snapshot peer: prefer the one with the highest block number
More docs, comments, todos
* Adjust WAIT_PEERS_TIMEOUT to be a multiple of MAINTAIN_SYNC_TIMER to try to fix snapshot startup problems
Docs, todos, comments
* Tabs
* Formatting
* Don't build new rlp::EMPTY_LIST_RLP instances
* Dial down debug logging
* Don't warn about missing hashes in the manifest: it's normal
Log client version on peer connect
* Cleanup
* Do not skip snapshots further away than 30k blocks from the highest block seen
Currently we look for peers that seed snapshots close to the highest block seen on the network (where "close" means within 30k blocks). When a node starts up we wait for some time (5sec, increased here to 10sec) to let peers connect; if we have found a suitable peer to sync a snapshot from at the end of that delay, we start the download. If none is found and --warp-barrier is used we stall; otherwise we start a slow-sync.
When looking for a suitable snapshot, we use the highest block seen on the network to check if a peer has a snapshot that is within 30k blocks of that highest block number. This means that in a situation where all available snapshots are older than that, we will often fail to start a snapshot at all. What's worse, the longer we delay starting a snapshot sync (to let more peers connect, in the hope of finding a good snapshot), the more likely we are to have seen a high block and thus the less likely we become to accept any of the available snapshots.
This commit removes the comparison with the highest block number entirely and simply picks the best snapshot found within the 10sec delay.
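A minimal sketch of the new selection rule, assuming a hypothetical map from peer id to the block number of the snapshot that peer seeds (the real sync code tracks this as per-peer state):

```rust
use std::collections::HashMap;

/// Hypothetical peer id type, for illustration only.
type PeerId = usize;

/// Pick the peer seeding the snapshot with the highest block number,
/// with no comparison against the highest block seen on the network.
fn pick_snapshot_peer(snapshot_block_by_peer: &HashMap<PeerId, u64>) -> Option<PeerId> {
    snapshot_block_by_peer
        .iter()
        .max_by_key(|&(_, block)| *block)
        .map(|(&peer, _)| peer)
}
```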
* lockfile
* Add a `ChunkType::Dupe` variant so that we do not disconnect a peer if they happen to send us a duplicate chunk (just ignore the chunk and keep going)
Resolve some documentation todos, add more
* tweak log message
* Don't warp sync twice
Check if our own best block is beyond the given warp barrier (can happen after we've completed a warp sync but are not quite yet synced up to the tip) and if so, don't warp sync again (see the sketch below).
More docs, resolve todos.
Dial down some `sync` debug level logging to trace
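A rough sketch of that check, with made-up inputs (`our_best_block`, `warp_barrier`) standing in for the client's chain info and the CLI option:

```rust
/// Returns false when our own chain has already passed the warp barrier,
/// e.g. after a completed warp sync while still catching up to the tip.
fn should_warp_sync(our_best_block: u64, warp_barrier: Option<u64>) -> bool {
    match warp_barrier {
        Some(barrier) => our_best_block < barrier,
        // No --warp-barrier given; leave the decision to other heuristics.
        None => true,
    }
}
```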
* Avoid iterating over all snapshot block/state hashes to find the next work item
Use a HashSet instead of a Vec and remove items from the set as chunks are processed. Calculate and store the total number of chunks in the `Snapshot` struct instead of counting pending chunks each time.
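A simplified sketch of the bookkeeping described above; `ChunkHash` stands in for the real `H256` type so the sketch is self-contained:

```rust
use std::collections::HashSet;

/// Stand-in for the real chunk hash type.
type ChunkHash = [u8; 32];

struct Snapshot {
    pending_state_chunks: HashSet<ChunkHash>,
    pending_block_chunks: HashSet<ChunkHash>,
    /// Total chunk count from the manifest, stored once instead of being
    /// recomputed from the pending collections on every progress query.
    total_chunks: usize,
}

impl Snapshot {
    fn done_chunks(&self) -> usize {
        self.total_chunks
            - self.pending_state_chunks.len()
            - self.pending_block_chunks.len()
    }

    /// O(1) removal instead of scanning a Vec for the processed hash.
    fn note_chunk_done(&mut self, hash: &ChunkHash) {
        self.pending_state_chunks.remove(hash);
        self.pending_block_chunks.remove(hash);
    }
}
```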
* Address review grumbles
* Log correct number of bytes written to disk
* Revert ChunkType::Dup change
* whitespace grumble
* Cleanup debugging code
* Fix docs
* Fix import and a typo
* Fix test impl
* Use `indexmap::IndexSet` to ensure chunk hashes are accessed in order
* Revert increased SNAPSHOT_MANIFEST_TIMEOUT: 5sec should be enough
* WIP. Typos and logging.
* Format todos
* Pause pruning while a snapshot is under way
Logs, docs and todos
* Allocate memory for the full chunk
* Name snapshotting threads
* Ensure `taking_snapshot` is set to false whenever and however `take_snapshot` returns (sketched below)
Rename `take_at` to `request_snapshot_at`
Cleanup
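One way to guarantee that reset is a drop guard around an `AtomicBool`; this is only a sketch of the idea, not the actual snapshot service code:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Guard that clears the flag on drop, so the flag is reset on every
/// return path of `take_snapshot`, including early returns and panics.
struct SnapshotGuard<'a>(&'a AtomicBool);

impl Drop for SnapshotGuard<'_> {
    fn drop(&mut self) {
        self.0.store(false, Ordering::SeqCst);
    }
}

fn take_snapshot(taking_snapshot: &AtomicBool) {
    // Refuse to start a second snapshot while one is under way.
    if taking_snapshot.swap(true, Ordering::SeqCst) {
        return;
    }
    let _guard = SnapshotGuard(taking_snapshot);
    // ... do the snapshotting work here ...
}
```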
* Let "in_progress" deletion fail
Fix tests
* Just use an atomic
* Review grumbles
* Finish the sentence
* Resolve a few todos and clarify comments.
* Calculate progress rate since last update
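A sketch of the rate calculation, assuming the progress tracker records the byte count and timestamp of the previous update:

```rust
use std::time::Instant;

/// Illustrative only: the rate is computed over the interval since the last
/// update rather than as a cumulative average over the whole sync.
struct Progress {
    last_instant: Instant,
    last_bytes: u64,
}

impl Progress {
    /// Returns bytes/sec since the previous call.
    fn update(&mut self, total_bytes: u64) -> f64 {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_instant).as_secs_f64();
        let rate = if elapsed > 0.0 {
            total_bytes.saturating_sub(self.last_bytes) as f64 / elapsed
        } else {
            0.0
        };
        self.last_instant = now;
        self.last_bytes = total_bytes;
        rate
    }
}
```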
* Lockfile
* Fix tests
* typo
* Reinstate default snapshotting frequency
Cut down on the logging noise
* address grumble
* Log memory use with `journal_size()` and explain why.
* Upgrade to jsonrpc v14
Contains https://github.com/paritytech/jsonrpc/pull/495 with good bugfixes to resource usage.
* Bump tokio & futures.
* Bump even further.
* Upgrade tokio to 0.1.22
* Partially revert "Bump tokio & futures."
This reverts commit 100907eb91907aa124d856d52374637256118e86.
* Rename RegistryInfo -> RegistryInfoDeprecated
* Add BlockId parameter to Registrar::get_address and RegistrarClient::call_contract
* Remove RegistrarClient::Call (use async for now); add RegistrarClient::get_address
* Remove Registrar type in favour of naked trait
* Use CallContract trait bound instead of separate call_contract method
* Make RegistrarClient::get_address and URLHint::resolve synchronous
* RegistrarClient::get_address: check if the returned address is zero
* Modify RegistryInfo::registry_address to take &str
* return Result from RegistryInfo::registry_address
* Replace RegistryInfo with RegistrarClient
- Modified RegistrarClient::registrar_address to return Option
- Removed BlockChainClient::registrar_address
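Roughly, the trait ends up with a shape like the following; this is a heavily simplified sketch (the stand-in `Address`, `BlockId` and error types are not the real ethcore ones):

```rust
/// Stand-in types so the sketch is self-contained.
type Address = [u8; 20];
#[allow(dead_code)]
enum BlockId {
    Latest,
    Number(u64),
}

trait RegistrarClient {
    /// The registrar contract address, if the chain spec configures one.
    fn registrar_address(&self) -> Option<Address>;

    /// Synchronous name lookup at a given block. An all-zero address returned
    /// by the contract is treated as "not registered".
    fn get_address(&self, name: &str, block: BlockId) -> Result<Option<Address>, String>;
}
```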
* Fix other build configs
* Fix unit test builds
* Remove local RegistrarClient type from run::execute_impl
* Remove registrar.json from ethcore
* Formatting/line breaks
* Update RegistrarClient docs, remove explicit lifetime
* Weak ref to ethcore client from hash fetch client
* Fix unit tests
* Update a few dependencies
Updates two dependencies: `kvdb-rocksdb` and `h2`. Brings in `parking_lot 0.9` which is unintended but possibly fine.
* Bump parking_lot to 0.9
Bump kvdb-memorydb to 0.2 (from git atm)
* New kvdb-memorydb is not breaking
* Remove [patch]
* inject_batch && commit_batch are no longer a part of journaldb
* get rid of redundant KeyedHashDB trait
* journaldb edition 2018
* journaldb trait moved to the lib.rs file
* making journaldb more idiomatic
* fix parity_bytes reexport
* rename parity-util-mem package in Cargo.toml file
* Run cargo fix on `vm`
* Run cargo fix on ethcore-db
* Run cargo fix on evm
* Run cargo fix on ethcore-light
* Run cargo fix on journaldb
* Run cargo fix on wasm
* Missing docs
* Run cargo fix on ethcore-sync
* Stop breaking out of loop if a non-canonical hash is found
* include expected hash in log msg
* More logging
* Scope
* Syntax
* Log in blank RollingFinality
Escalate bad proposer to warning
* Check validator set size: warn if it is 1 or an even number
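Presumably the reasoning is that with majority-based finality an even-sized validator set tolerates no more faulty nodes than the next smaller odd-sized set, and a single validator tolerates none. A hypothetical version of the check (names and output are made up; the real code logs through the engine's `warn!` target):

```rust
/// Illustrative helper only.
fn check_validator_set_size(validator_count: usize) {
    if validator_count == 1 {
        eprintln!("WARN: only one validator configured; the chain cannot tolerate any fault");
    } else if validator_count % 2 == 0 {
        eprintln!(
            "WARN: validator set size {} is even; it tolerates no more faults than a set of {}",
            validator_count,
            validator_count - 1
        );
    }
}
```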
* More readable code
* Use SimpleList::new
* Extensive logging on unexpected non-canonical hash
* Wording
* wip
* Update ethcore/blockchain/src/blockchain.rs
Co-Authored-By: Tomasz Drwięga <tomusdrw@users.noreply.github.com>
* Improved logging, address grumbles
* Update ethcore/src/engines/validator_set/simple_list.rs
Co-Authored-By: Luke Schoen <ltfschoen@users.noreply.github.com>
* Report benign misbehaviour iff currently a validator
* Report malicious behaviour iff we're a validator
* Escalate to warning and fix wording
* Test reporting behaviour
Don't require node to be part of the validator set to report malicious behaviour
* Include missing parent hash in MissingParent error
* Update ethcore/src/engines/validator_set/simple_list.rs
Co-Authored-By: Luke Schoen <ltfschoen@users.noreply.github.com>
* docs
* remove unneeded into()
Move check for parent_step == step for clarity & efficiency
Remove dead code for Seal::Proposal
* typo
* Wording
* naming
* WIP
* cleanup
* cosmetics
* cosmetics and one less lvar
* spelling
* Better logging when a block is already in the chain
* More logging
* On second thought non-validators are allowed to report
* cleanup
* remove dead code
* Keep track of the hash of the last imported block
* Let it lock
* Serialize access to block sealing
* Take a lock while sealing a block
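A bare-bones sketch of the serialization idea; the real miner keeps considerably more state under its locks:

```rust
use std::sync::Mutex;

/// Illustrative miner-side state; only the last imported hash is modelled here.
struct SealingState {
    last_imported_hash: [u8; 32],
}

struct Miner {
    // Holding this mutex for the whole sealing operation serializes access,
    // so two threads cannot build and seal a block at the same time.
    sealing: Mutex<SealingState>,
}

impl Miner {
    fn seal_block(&self, parent_hash: [u8; 32]) {
        let mut state = self.sealing.lock().expect("sealing lock poisoned");
        // ... assemble and seal the block while the lock is held ...
        state.last_imported_hash = parent_hash;
    }
}
```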
* Cleanup
* whitespace
* Replace error chain for network error
* Fix usages and add manual From impls
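In outline, replacing error_chain means a hand-written enum with manual conversions; the variant names here are illustrative, not the actual ethcore-network error set:

```rust
use std::{fmt, io};

#[derive(Debug)]
enum NetworkError {
    Io(io::Error),
    Decoder(String),
    Expired,
}

impl fmt::Display for NetworkError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            NetworkError::Io(e) => write!(f, "I/O error: {}", e),
            NetworkError::Decoder(msg) => write!(f, "decoder error: {}", msg),
            NetworkError::Expired => write!(f, "request expired"),
        }
    }
}

impl std::error::Error for NetworkError {}

// Manual From impls take over the conversions error_chain used to generate.
impl From<io::Error> for NetworkError {
    fn from(e: io::Error) -> Self {
        NetworkError::Io(e)
    }
}
```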
* OnDemand Error and remove remaining dependencies
* Die error_chain, die.
* DIE
* Hasta la vista, baby
* get node IP address and UDP port from the socket if not included in the PING packet
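A sketch of that fallback with hypothetical names; the real discovery code works on its own endpoint/packet types:

```rust
use std::net::SocketAddr;

/// If the endpoint advertised in the PING packet is unspecified (0.0.0.0)
/// or has a zero UDP port, fall back to what the UDP socket actually saw.
fn effective_endpoint(advertised: SocketAddr, from_socket: SocketAddr) -> SocketAddr {
    let ip = if advertised.ip().is_unspecified() {
        from_socket.ip()
    } else {
        advertised.ip()
    };
    let port = if advertised.port() == 0 {
        from_socket.port()
    } else {
        advertised.port()
    };
    SocketAddr::new(ip, port)
}
```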
* prevent bootnodes from being added to host nodes
* code corrections
* code corrections
* code corrections
* code corrections
* docs
* code corrections
* code corrections
* Apply suggestions from code review
Co-Authored-By: David <dvdplm@gmail.com>
* [whisper] Move needed aes_gcm crypto in-crate
In the latest `parity-crypto` release (the upcoming 0.4), the AES-GCM features were removed (this was done to drop the dependency on `ring`).
This PR adds the bare minimum of crypto needed for Whisper directly to the crate itself; as those were the only features Whisper needed from `parity-crypto`, the dependency on that crate is removed altogether.
* Upgrade to parity-crypto 0.4
Reverts using NonZeroU32 (introduced [here](b347599cf7)).
* Check for 0 in `args.arg_keys_iteration`
* Use beta.4
* parity-crypto 0.4.0 is released
* Fix nasty typo in NodeTable::update (add ;)
* Add limiting for NodeTable
* Add cache for NodeFilter
* Use expect instead of unwrap
* In note_failure and note_success, move the node within ordered_ids only if it already exists there; fix expect msg
* Add comment
* Improve code style
* DRY in note_failure and note_success
* Fix nodes ordering
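A toy sketch of the ordering/limiting behaviour described in the commits above (real `NodeTable` entries carry far more than an id):

```rust
/// Toy model: node ids only, ordered from most to least useful.
struct NodeTable {
    ordered_ids: Vec<u64>,
    max_size: usize,
}

impl NodeTable {
    /// Shared by note_success and note_failure ("DRY" above): move the node
    /// within ordered_ids only if it is already tracked there.
    fn move_node(&mut self, id: u64, to_front: bool) {
        if let Some(pos) = self.ordered_ids.iter().position(|&n| n == id) {
            let id = self.ordered_ids.remove(pos);
            if to_front {
                self.ordered_ids.insert(0, id);
            } else {
                self.ordered_ids.push(id);
            }
        }
    }

    fn note_success(&mut self, id: u64) {
        self.move_node(id, true);
    }

    fn note_failure(&mut self, id: u64) {
        self.move_node(id, false);
    }

    /// Limiting: never let the table grow beyond max_size entries.
    fn add_node(&mut self, id: u64) {
        if self.ordered_ids.contains(&id) {
            return;
        }
        if self.ordered_ids.len() >= self.max_size {
            // Evict the least useful entry rather than grow unbounded.
            self.ordered_ids.pop();
        }
        self.ordered_ids.push(id);
    }
}
```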
* Simplify match expression
* Add tests for get_index_to_insert
* Remove get_mut method from NodeTable, add get method to NodeTable
* Fix table_last_contact_order for macos failing because of lost nanosecond precision