* Comments and todos
Use `snapshot_sync` as logging target
* fix compilation
* More todos, more logs
* Fix picking snapshot peer: prefer the one with the highest block number
More docs, comments, todos
* Adjust WAIT_PEERS_TIMEOUT to be a multiple of MAINTAIN_SYNC_TIMER to try to fix snapshot startup problems
Docs, todos, comments
* Tabs
* Formatting
* Don't build new rlp::EMPTY_LIST_RLP instances
* Dial down debug logging
* Don't warn about missing hashes in the manifest: it's normal
Log client version on peer connect
* Cleanup
* Do not skip snapshots further away than 30k blocks from the highest block seen
Currently we look for peers that seed snapshots close to the highest block seen on the network (where "close" means within 30k blocks). When a node starts up, we wait for some time (5sec, increased here to 10sec) to let peers connect; if we have found a suitable peer to sync a snapshot from by the end of that delay, we start the download. If none is found and --warp-barrier is used we stall; otherwise we start a slow-sync.
When looking for a suitable snapshot, we use the highest block seen on the network to check if a peer has a snapshot within 30k blocks of that highest block number. This means that when all available snapshots are older than that, we will often fail to start a snapshot sync at all. Worse, the longer we delay starting a snapshot sync (to let more peers connect, in the hope of finding a good snapshot), the more likely we are to have seen a high block, and thus the less likely we become to accept any snapshot.
This commit removes the highest-block-number criterion entirely and picks the best snapshot we find within 10sec.
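A hedged sketch of the selection after this change, with illustrative types rather than the actual sync code:

```rust
// Illustrative types; the real sync code tracks much more per peer.
struct PeerSnapshot {
    peer_id: usize,
    snapshot_block: u64, // block number the peer's snapshot was taken at
}

// After the wait period, prefer the snapshot with the highest block
// number (see "Fix picking snapshot peer" above); there is no longer a
// comparison against the highest block seen on the network.
fn pick_snapshot_peer(peers: &[PeerSnapshot]) -> Option<&PeerSnapshot> {
    peers.iter().max_by_key(|p| p.snapshot_block)
}
```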
* lockfile
* Add a `ChunkType::Dupe` variant so that we do not disconnect a peer if they happen to send us a duplicate chunk (just ignore the chunk and keep going)
Resolve some documentation todos, add more
* tweak log message
* Don't warp sync twice
Check if our own block is beyond the given warp barrier (can happen after we've completed a warp sync but are not quite yet synced up to the tip) and if so, don't sync.
More docs, resolve todos.
Dial down some `sync` debug level logging to trace
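The "don't warp sync twice" guard above reduces to a check like the following (a minimal sketch; the names are illustrative):

```rust
// Skip warp sync when our own best block is already past the requested
// barrier; this happens after a completed warp sync that has not yet
// caught up to the chain tip.
fn should_warp_sync(our_best_block: u64, warp_barrier: u64) -> bool {
    our_best_block < warp_barrier
}
```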
* Avoid iterating over all snapshot block/state hashes to find the next work item
Use a HashSet instead of a Vec and remove items from the set as chunks are processed. Calculate and store the total number of chunks in the `Snapshot` struct instead of counting pending chunks each time.
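A hedged sketch of that bookkeeping (field names are illustrative; a later commit swaps the `HashSet` for `indexmap::IndexSet` to preserve ordering). Note how a duplicate chunk shows up as a failed `remove` and can simply be ignored rather than triggering a disconnect:

```rust
use std::collections::HashSet;

struct Snapshot {
    pending_state_chunks: HashSet<[u8; 32]>,
    pending_block_chunks: HashSet<[u8; 32]>,
    total_chunks: usize, // computed once at init, not recounted per chunk
}

impl Snapshot {
    /// Returns false for an already-processed (duplicate) chunk, so the
    /// caller can skip it instead of disconnecting the peer.
    fn mark_chunk_done(&mut self, hash: &[u8; 32]) -> bool {
        self.pending_state_chunks.remove(hash) || self.pending_block_chunks.remove(hash)
    }

    fn done_chunks(&self) -> usize {
        self.total_chunks
            - self.pending_state_chunks.len()
            - self.pending_block_chunks.len()
    }
}
```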
* Address review grumbles
* Log correct number of bytes written to disk
* Revert ChunkType::Dup change
* whitespace grumble
* Cleanup debugging code
* Fix docs
* Fix import and a typo
* Fix test impl
* Use `indexmap::IndexSet` to ensure chunk hashes are accessed in order
* Revert increased SNAPSHOT_MANIFEST_TIMEOUT: 5sec should be enough
* Update a few dependencies
Updates two dependencies: `kvdb-rocksdb` and `h2`. Brings in `parking_lot 0.9` which is unintended but possibly fine.
* Bump parking_lot to 0.9
Bump kvdb-memorydb to 0.2 (from git atm)
* New kvdb-memorydb is not breaking
* Remove [patch]
* Replace error-chain for network errors
* Fix usages and add manual From impls
* Port the OnDemand error type and remove remaining error-chain dependencies
* Die error_chain, die.
* DIE
* Hasta la vista, baby
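The error-chain replacement boils down to plain enums with hand-written `From` impls, roughly like this sketch (the variants are illustrative, not the actual network error type):

```rust
use std::{fmt, io};

#[derive(Debug)]
enum NetworkError {
    Io(io::Error),
    Decoder(String),
    Disconnected,
}

impl fmt::Display for NetworkError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            NetworkError::Io(e) => write!(f, "i/o error: {}", e),
            NetworkError::Decoder(m) => write!(f, "decoder error: {}", m),
            NetworkError::Disconnected => write!(f, "peer disconnected"),
        }
    }
}

impl std::error::Error for NetworkError {}

// Manual From impls replace error-chain's generated conversions.
impl From<io::Error> for NetworkError {
    fn from(e: io::Error) -> Self {
        NetworkError::Io(e)
    }
}
```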
* Get the node IP address and UDP port from the socket if not included in the PING packet
* Prevent bootnodes from being added to host nodes
* code corrections
* docs
* code corrections
* Apply suggestions from code review
Co-Authored-By: David <dvdplm@gmail.com>
* [whisper] Move needed aes_gcm crypto in-crate
In the latest `parity-crypto` release (the upcoming 0.4), the AES-GCM features were removed (done to drop the dependency on `ring`).
This PR adds the bare minimum crypto needed for Whisper directly to the crate itself and as those were the only features needed from `parity-crypto`, removes the dependency on that crate altogether.
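For reference, the required functionality is plain AES-GCM authenticated encryption. A minimal sketch of the equivalent using today's `aes-gcm` crate (an assumption for illustration; the PR vendored the code instead of pulling in this crate):

```rust
// Assumes `aes-gcm = "0.10"` in Cargo.toml; not the code the PR added.
use aes_gcm::aead::{Aead, AeadCore, KeyInit, OsRng};
use aes_gcm::Aes256Gcm;

fn main() -> Result<(), aes_gcm::Error> {
    let key = Aes256Gcm::generate_key(OsRng);
    let cipher = Aes256Gcm::new(&key);
    // 96-bit nonce; must be unique per message under the same key.
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng);
    let ciphertext = cipher.encrypt(&nonce, b"whisper payload".as_ref())?;
    let plaintext = cipher.decrypt(&nonce, ciphertext.as_ref())?;
    assert_eq!(plaintext, b"whisper payload");
    Ok(())
}
```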
* Upgrade to parity-crypto 0.4
Reverts using NonZeroU32 (introduced [here](b347599cf7)).
* Check for 0 in `args.arg_keys_iteration`
* Use beta.4
* parity-crypto 0.4.0 is released
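With `NonZeroU32` reverted, the guard becomes an explicit zero check, roughly as follows (a sketch; only the `arg_keys_iteration` name comes from the commits):

```rust
// Reject a zero iteration count up front instead of encoding the
// invariant in the type, as the reverted NonZeroU32 change did.
fn keys_iterations(arg_keys_iteration: u32) -> Result<u32, String> {
    if arg_keys_iteration == 0 {
        return Err("--keys-iterations must be non-zero".into());
    }
    Ok(arg_keys_iteration)
}
```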
* Fix nasty typo in NodeTable::update (add ;)
* Add limiting for NodeTable
* Add cache for NodeFilter
* Use expect instead of unwrap
* In note_failure and note_success, move the node within ordered_ids if it already exists there; fix the expect message
* Add comment
* Improve code style
* DRY in note_failure and note_success
* Fix nodes ordering
* Simplify match expression
* Add tests for get_index_to_insert
* Remove the get_mut method from NodeTable; add a get method
* Fix table_last_contact_order failing on macOS because of lost nanosecond precision
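A hedged sketch of the NodeTable limiting and ordering described above; `ordered_ids` and `get_index_to_insert` come from the commit messages, everything else is illustrative:

```rust
use std::collections::HashMap;

const MAX_NODES: usize = 1024; // illustrative cap, not the PR's constant

struct Node { last_success: u64 }

struct NodeTable {
    nodes: HashMap<u64, Node>,
    // Most recently successful nodes first; failures sink to the back.
    ordered_ids: Vec<u64>,
}

impl NodeTable {
    // Binary search keeps ordered_ids sorted by descending last_success.
    fn get_index_to_insert(&self, last_success: u64) -> usize {
        self.ordered_ids
            .partition_point(|id| self.nodes[id].last_success >= last_success)
    }

    fn note_success(&mut self, id: u64, now: u64) {
        // Move the node within ordered_ids if it already exists there.
        self.ordered_ids.retain(|&n| n != id);
        self.nodes.entry(id).or_insert(Node { last_success: 0 }).last_success = now;
        let idx = self.get_index_to_insert(now);
        self.ordered_ids.insert(idx, id);
        // Enforce the cap by evicting the worst-ranked node.
        if self.ordered_ids.len() > MAX_NODES {
            let evicted = self.ordered_ids.pop().expect("len > MAX_NODES >= 1; qed");
            self.nodes.remove(&evicted);
        }
    }
}
```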
* Don't add discovery initiators to the node table
* Use enums for tracking state of the nodes in discovery
* Don't try to ping ourselves
* Fix minor nits
* Update timeouts when observing an outdated node
* Extracted update_bucket_record from update_node
* Fixed typo
* Fix two final nits from @todr
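The enum-based state tracking might look roughly like this (a hedged sketch; the variant names are inferred from the commit messages, not copied from the code):

```rust
// Where a node currently lives in the discovery bookkeeping.
enum NodeCategory {
    Bucket,   // already in a k-bucket
    Observed, // seen, but not yet confirmed via ping/pong
}

// Outcome of validating a node we heard from.
enum NodeValidity {
    Ourselves, // never ping ourselves
    ValidNode(NodeCategory),
    ExpiredNode(NodeCategory), // known, but its endpoint proof timed out
    UnknownNode,
}
```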
* Increase the number of block bodies requested during Sync.
* Check if our peer is an older parity client with the bug of not handling large requests properly
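A hedged sketch of the gating; the constants and the capability check are illustrative, not the PR's actual values:

```rust
// Older parity clients mishandle large block-bodies requests, so cap
// the batch size for them (values are illustrative).
fn max_bodies_to_request(peer_handles_large_requests: bool) -> usize {
    if peer_handles_large_requests { 128 } else { 32 }
}
```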
* Add a ClientVersion struct and a ClientCapabilities trait
* Make ClientVersion its own module
* Refactor and extend use of ClientVersion
* Replace strings with ClientVersion in PeerInfo
* Group further functionality in ClientCapabilities
* Move parity client version data from tuple to its own struct.
* Implement accessor methods for ParityClientData and remove them
from ClientVersion.
* Minor fixes
* Make functions specific to parity return types specific to parity.
* Test for shorter ID strings
* Fix formatting and remove unneeded dependencies.
* Roll back Cargo.lock
* Commit last Cargo.lock
* Convert from string to ClientVersion
* When checking if a peer accepts service transactions, just check that it's parity; remove the version check.
* Remove dependency on semver in ethcore-sync
* Remove unnecessary String instantiation
* Rename peer_info to peer_version
* Update RPC test helpers
* Simplify From<String>
* Parse static version string only once
* Update RPC tests to new ClientVersion struct
* Document public members
* More robust parsing of ID string
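The ID strings look like `Parity-Ethereum/v2.4.0/x86_64-linux-gnu/rustc1.33.0`. A hedged sketch of the parsing; the fields are illustrative, and the real parser is more tolerant of variant strings (per the "More robust parsing" commit):

```rust
#[derive(Debug)]
struct ParityClientData {
    name: String,
    semver: String,
    os: String,
    compiler: String,
}

fn parse_parity_id(id: &str) -> Option<ParityClientData> {
    let mut parts = id.split('/');
    let name = parts.next()?.to_string();
    // Strip the leading 'v' from e.g. "v2.4.0".
    let semver = parts.next()?.trim_start_matches('v').to_string();
    let os = parts.next()?.to_string();
    let compiler = parts.next()?.to_string();
    Some(ParityClientData { name, semver, os, compiler })
}
```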
* Minor changes.
* Update the version in which large block-bodies requests appear.
* Update ethcore/sync/src/block_sync.rs
Co-Authored-By: elferdo <elferdo@gmail.com>
* Update util/network/src/client_version.rs
Co-Authored-By: elferdo <elferdo@gmail.com>
* Update util/network/src/client_version.rs
Co-Authored-By: elferdo <elferdo@gmail.com>
* Update tests.
* Minor fixes.
* fix(ManageNetwork): replace Range -> RangeInclusive
Fixes `TODO: Range should be changed to RangeInclusive once stable (https://github.com/rust-lang/rust/pull/50758)`
* fix(tests)
* fix(grumbles): off-by-one error in debug_asserts
* `RangeInclusive::end()` is inclusive, which means that if start and end are equal, the `debug_assert!(range.end() > range.start())` will fail when it shouldn't
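Concretely, equal endpoints denote a valid one-element inclusive range, so the assertion must use `>=`:

```rust
fn check(range: std::ops::RangeInclusive<u64>) {
    // 5..=5 is a valid one-element range, so `>` would panic here.
    debug_assert!(range.end() >= range.start());
}

fn main() {
    check(5..=5); // fine with >=; would have failed the old assert
}
```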
* Replace ethcore-logger with env-logger.
* Fix logger initialization in WASM tests.
* uncomment logger initialization in secret store
* Don't use ethcore-logger in whisper.
* Move ethcore-logger within parity dir.
* Uncomment rest from secret-store.
* Use `let _ =` in private_contract for consistency
* `ok()` to `let _ =` fix in service
* Use `let _ = ` for state_db
* Replace `tokio_core` with `tokio`.
* Remove `tokio-core` and replace with `tokio` in
- `ethcore/stratum`
- `secret_store`
- `util/fetch`
- `util/reactor`
* Bump hyper to 0.12 in
- `miner`
- `util/fake-fetch`
- `util/fetch`
- `secret_store`
* Bump `jsonrpc-***` to 0.9 in
- `parity`
- `ethcore/stratum`
- `ipfs`
- `rpc`
- `rpc_client`
- `whisper`
* Bump `ring` to 0.13
* Use a more graceful shutdown process in `secret_store` tests.
* Convert some mutexes to rwlocks in `secret_store`.
* Consolidate Tokio Runtime use, remove `CpuPool`.
* Rename and move the `tokio_reactor` crate (`util/reactor`) to
`tokio_runtime` (`util/runtime`).
* Rename `EventLoop` to `Runtime`.
- Rename `EventLoop::spawn` to `Runtime::with_default_thread_count`.
- Add the `Runtime::with_thread_count` method.
- Rename `Remote` to `Executor`.
* Remove uses of `CpuPool` and spawn all tasks via the `Runtime` executor
instead.
* Other changes related to `CpuPool` removal:
- Remove `Reservations::with_pool`. `::new` now takes an `Executor` as an argument.
- Remove `SenderReservations::with_pool`. `::new` now takes an `Executor` as an argument.
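A minimal sketch of the resulting pattern with the tokio 0.1 API (the renamed `util/runtime` wrapper adds the thread-count configuration on top of this):

```rust
// Assumes futures 0.1 and tokio 0.1 in Cargo.toml.
use futures::{future, Future};
use tokio::runtime::Runtime;

fn main() {
    // One shared runtime replaces the scattered CpuPool instances.
    let runtime = Runtime::new().expect("failed to start tokio runtime");
    let executor = runtime.executor();

    // Work that previously went to a CpuPool is spawned here instead.
    executor.spawn(future::lazy(|| {
        println!("running on the shared runtime");
        Ok(())
    }));

    runtime.shutdown_on_idle().wait().expect("clean shutdown");
}
```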
* discovery: Only add nodes to routing table after receiving pong.
Previously the discovery algorithm would add nodes to the routing table
before confirming that the endpoint is participating in the protocol. This
now tracks in-flight pings and adds to the routing table only after receiving
a response.
* discovery: Refactor packet creation into its own function.
This function is useful inside unit tests.
* discovery: Additional testing for new add_node behavior.
* discovery: Track expiration of pings to not-yet-in-bucket nodes.
Now that we may ping nodes before adding to a k-bucket, the timeout tracking
must be separate from BucketEntry.
* discovery: Verify echo hash on pong packets.
Stores packet hash with in-flight requests and matches with pong response.
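Taken together, the ping/pong handling sketches out as below (illustrative types; the real code also tracks FIND_NODE timeouts and retry backoff, per the following commits):

```rust
use std::collections::HashMap;
use std::time::Instant;

type NodeId = [u8; 64]; // illustrative; the real code uses H512/H256 types
type Hash = [u8; 32];

struct PendingPing {
    sent_at: Instant, // lets the expiration pass time out stale pings
    echo_hash: Hash,  // hash of the PING packet we sent
}

struct Discovery {
    in_flight_pings: HashMap<NodeId, PendingPing>,
}

impl Discovery {
    /// A node is added to the routing table only when its PONG echoes
    /// the hash of the PING we sent it.
    fn on_pong(&mut self, from: NodeId, echo: Hash) -> bool {
        match self.in_flight_pings.get(&from) {
            Some(ping) if ping.echo_hash == echo => {
                self.in_flight_pings.remove(&from);
                true // caller may now insert `from` into its k-bucket
            }
            _ => false, // unsolicited or mismatched PONG: ignore
        }
    }
}
```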
* discovery: Track timeouts on FIND_NODE requests.
* discovery: Retry failed pings with exponential backoff.
UDP packets may get dropped, so instead of immediately booting nodes that fail
to respond to a ping, retry 4 times with exponential backoff.
* !fixup Use slice instead of Vec for request_backoff.
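The retry policy from the last two commits, as a sketch (the interval values are illustrative, not the PR's constants):

```rust
use std::time::Duration;

// A fixed slice rather than a Vec, per the fixup above.
const REQUEST_BACKOFF: [Duration; 4] = [
    Duration::from_secs(1),
    Duration::from_secs(4),
    Duration::from_secs(16),
    Duration::from_secs(64),
];

// Returns the delay before the next ping retry, or None once the node
// has failed all four attempts and should be dropped.
fn next_retry(failed_attempts: usize) -> Option<Duration> {
    REQUEST_BACKOFF.get(failed_attempts).copied()
}
```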