Snapshot restoration overhaul (#11219)

* Comments and todos
Use `snapshot_sync` as logging target

* fix compilation

* More todos, more logs

* Fix picking snapshot peer: prefer the one with the highest block number
More docs, comments, todos

* Adjust WAIT_PEERS_TIMEOUT to be a multiple of MAINTAIN_SYNC_TIMER to try to fix snapshot startup problems
Docs, todos, comments

* Tabs

* Formatting

* Don't build new rlp::EMPTY_LIST_RLP instances

* Dial down debug logging

* Don't warn about missing hashes in the manifest: it's normal
Log client version on peer connect

* Cleanup

* Do not skip snapshots further away than 30k blocks from the highest block seen

Currently we look for peers that seed snapshots close to the highest block seen on the network (where "close" means within 30k blocks). When a node starts up we wait for some time (5sec, increased here to 10sec) to let peers connect, and if we have found a suitable peer to sync a snapshot from by the end of that delay, we start the download; if none is found and `--warp-barrier` is used we stall, otherwise we start a slow sync.
When looking for a suitable snapshot, we use the highest block seen on the network to check whether a peer has a snapshot that is within 30k blocks of that highest block number. This means that in a situation where all available snapshots are older than that, we will often fail to start a snapshot sync at all. What's worse, the longer we delay starting a snapshot sync (to let more peers connect, in the hope of finding a good snapshot), the more likely we are to have seen a high block and thus the less likely we become to accept any snapshot.
This commit removes the comparison with the highest block number entirely and simply picks the best snapshot found within 10sec.
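
A minimal sketch of the resulting selection rule, for illustration only (the real `ChainSync` code additionally filters out unconfirmed peers, snapshots behind the fork block or warp barrier, and manifests already noted as bad): group candidate snapshots by manifest hash, prefer the hash seeded by the most peers, and break ties with the highest snapshot block number.

```rust
use std::collections::HashMap;

type PeerId = usize;
type H256 = [u8; 32]; // the real code uses ethereum_types::H256

/// Pick the snapshot seeded by the most peers; on a tie, prefer the one
/// taken at the highest block number. Returns the chosen manifest hash
/// together with the peers that can serve it.
fn pick_best_snapshot(
    candidates: &[(PeerId, u64, H256)], // (peer, snapshot block number, manifest hash)
) -> Option<(H256, Vec<PeerId>)> {
    let mut by_hash: HashMap<H256, (u64, Vec<PeerId>)> = HashMap::new();
    for &(peer, block, hash) in candidates {
        let entry = by_hash.entry(hash).or_insert((block, Vec::new()));
        entry.1.push(peer);
    }
    by_hash
        .into_iter()
        // Most peers first, then the highest snapshot block.
        .max_by_key(|(_, (block, peers))| (peers.len(), *block))
        .map(|(hash, (_, peers))| (hash, peers))
}
```

For example, a snapshot seeded by three peers at block 8,000,000 would be preferred over one seeded by a single peer at 8,020,000, regardless of how far either is from the highest block seen.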

* lockfile

* Add a `ChunkType::Dupe` variant so that we do not disconnect a peer if they happen to send us a duplicate chunk (just ignore the chunk and keep going)
Resolve some documentation todos, add more
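
A hedged sketch of the idea, with illustrative names (the real `ChunkType` lives in ethcore-sync, and this variant is reverted again further down):

```rust
type H256 = [u8; 32];

/// Result of validating a downloaded snapshot chunk (sketch only).
enum ChunkType {
    /// A block chunk we still need.
    Block(H256),
    /// A state chunk we still need.
    State(H256),
    /// A chunk we already have; not a protocol violation, so don't punish the peer.
    Dupe(H256),
}

/// What to do with a validated chunk: restore it, or silently skip duplicates.
fn handle_chunk(chunk: ChunkType) -> Option<H256> {
    match chunk {
        // Hand the hash to the restoration service.
        ChunkType::Block(hash) | ChunkType::State(hash) => Some(hash),
        // Ignore the duplicate and keep syncing with this peer.
        ChunkType::Dupe(_) => None,
    }
}
```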

* tweak log message

* Don't warp sync twice
Check if our own best block is already beyond the given warp barrier (this can happen after we've completed a warp sync but are not quite yet synced up to the tip) and if so, don't warp sync again.
More docs, resolve todos.
Dial down some `sync` debug level logging to trace
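
Roughly, with a simplified `WarpSync` enum for illustration (not the crate's exact definition):

```rust
/// Simplified warp-sync configuration (illustrative).
#[derive(Clone, Copy)]
enum WarpSync {
    /// Warp sync is enabled unconditionally.
    Enabled,
    /// Only warp sync, and only to a snapshot at or beyond this block (`--warp-barrier`).
    OnlyAndAfter(u64),
    /// Warp sync is disabled.
    Disabled,
}

/// Don't warp sync twice: if our best block is already at or beyond the barrier,
/// a snapshot restore would only take us backwards, so skip it and sync normally.
fn should_attempt_warp_sync(warp_sync: WarpSync, our_best_block: u64) -> bool {
    match warp_sync {
        WarpSync::Enabled => true,
        WarpSync::OnlyAndAfter(barrier) => our_best_block < barrier,
        WarpSync::Disabled => false,
    }
}
```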

* Avoid iterating over all snapshot block/state hashes to find the next work item

Use a HashSet instead of a Vec and remove items from the set as chunks are processed. Calculate and store the total number of chunks in the `Snapshot` struct instead of counting pending chunks each time.
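
A rough sketch of the bookkeeping, using `indexmap::IndexSet` as in the later bullet below; names and signatures are illustrative rather than the actual `Snapshot` API:

```rust
use indexmap::IndexSet;

type H256 = [u8; 32]; // the real code uses ethereum_types::H256

/// Pending chunk hashes live in ordered sets, so the next work item is cheap to
/// find and chunks are still handed out in manifest order.
struct Snapshot {
    pending_state_chunks: IndexSet<H256>,
    pending_block_chunks: IndexSet<H256>,
    total_chunks: usize, // computed once from the manifest
}

impl Snapshot {
    fn new(state_hashes: Vec<H256>, block_hashes: Vec<H256>) -> Self {
        let total_chunks = state_hashes.len() + block_hashes.len();
        Snapshot {
            pending_state_chunks: state_hashes.into_iter().collect(),
            pending_block_chunks: block_hashes.into_iter().collect(),
            total_chunks,
        }
    }

    /// Chunks processed so far: the stored total minus whatever is still pending.
    fn done_chunks(&self) -> usize {
        self.total_chunks - self.pending_state_chunks.len() - self.pending_block_chunks.len()
    }

    /// Mark a chunk as processed by removing it from the pending sets.
    fn note_chunk_done(&mut self, hash: &H256) {
        // `shift_remove` keeps the insertion order of the remaining hashes intact.
        self.pending_state_chunks.shift_remove(hash);
        self.pending_block_chunks.shift_remove(hash);
    }
}
```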

* Address review grumbles

* Log correct number of bytes written to disk

* Revert ChunkType::Dup change

* whitespace grumble

* Cleanup debugging code

* Fix docs

* Fix import and a typo

* Fix test impl

* Use `indexmap::IndexSet` to ensure chunk hashes are accessed in order

* Revert increased SNAPSHOT_MANIFEST_TIMEOUT: 5sec should be enough
Commit 8c2199dd2a (parent 6b17e321df) by David, 2019-10-31 16:07:21 +01:00, committed by GitHub
17 changed files with 423 additions and 292 deletions

Cargo.lock (generated)

@ -145,7 +145,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "ascii"
version = "0.7.1"
version = "0.9.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@ -189,7 +189,7 @@ dependencies = [
"ethjson 0.1.0",
"itertools 0.5.10 (registry+https://github.com/rust-lang/crates.io-index)",
"keccak-hash 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"lru-cache 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"machine 0.1.0",
@ -207,7 +207,7 @@ dependencies = [
[[package]]
name = "autocfg"
version = "0.1.4"
version = "0.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@ -280,7 +280,7 @@ name = "bincode"
version = "1.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"autocfg 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"autocfg 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)",
"byteorder 1.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.99 (registry+https://github.com/rust-lang/crates.io-index)",
]
@ -392,7 +392,7 @@ source = "git+https://github.com/paritytech/bn#6beba2ed6c9351622f9e948ccee406384
dependencies = [
"byteorder 1.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
"crunchy 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.5.5 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-hex 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
@ -402,7 +402,7 @@ name = "bstr"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-automata 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.99 (registry+https://github.com/rust-lang/crates.io-index)",
@ -442,7 +442,7 @@ name = "c2-chacha"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"ppv-lite86 0.2.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
@ -549,7 +549,7 @@ dependencies = [
"ethereum-types 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"ethjson 0.1.0",
"keccak-hash 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"lru-cache 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"machine 0.1.0",
@ -582,10 +582,10 @@ dependencies = [
[[package]]
name = "combine"
version = "3.6.1"
version = "3.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"ascii 0.7.1 (registry+https://github.com/rust-lang/crates.io-index)",
"ascii 0.9.3 (registry+https://github.com/rust-lang/crates.io-index)",
"byteorder 1.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
"either 1.5.0 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
@ -643,7 +643,7 @@ dependencies = [
"criterion-plot 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"csv 1.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"itertools 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_core 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_os 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
@ -691,7 +691,7 @@ dependencies = [
"arrayvec 0.4.11 (registry+https://github.com/rust-lang/crates.io-index)",
"cfg-if 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)",
"crossbeam-utils 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"memoffset 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"scopeguard 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
@ -715,7 +715,7 @@ version = "0.6.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@ -800,7 +800,7 @@ name = "derive_more"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"proc-macro2 0.4.20 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.8 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
@ -844,7 +844,7 @@ name = "docopt"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.99 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_derive 1.0.89 (registry+https://github.com/rust-lang/crates.io-index)",
@ -872,10 +872,10 @@ dependencies = [
"ethabi 9.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"ethereum-types 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"failure 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)",
"indexmap 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"indexmap 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"itertools 0.7.8 (registry+https://github.com/rust-lang/crates.io-index)",
"keccak-hash 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lunarity-lexer 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-hex 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
@ -1076,7 +1076,7 @@ dependencies = [
"kvdb 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"kvdb-memorydb 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"kvdb-rocksdb 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"machine 0.1.0",
"macros 0.1.0",
@ -1280,7 +1280,7 @@ dependencies = [
"arrayvec 0.4.11 (registry+https://github.com/rust-lang/crates.io-index)",
"atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"env_logger 0.5.13 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"parking_lot 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
@ -1331,7 +1331,7 @@ dependencies = [
"ethcore-io 1.12.0",
"ethereum-types 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"ipnetwork 0.12.8 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.62 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-crypto 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-snappy 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
@ -1448,7 +1448,7 @@ dependencies = [
"keccak-hash 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"kvdb 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"kvdb-rocksdb 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-bytes 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-crypto 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
@ -1524,6 +1524,7 @@ dependencies = [
"ethereum-types 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"fastmap 0.1.0",
"futures 0.1.29 (registry+https://github.com/rust-lang/crates.io-index)",
"indexmap 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"keccak-hash 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"kvdb-memorydb 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
@ -1649,7 +1650,7 @@ dependencies = [
"ethereum-types 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"hex-literal 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"keccak-hash 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"memory-cache 0.1.0",
"parity-bytes 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
@ -1814,7 +1815,7 @@ name = "fs-swap"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.62 (registry+https://github.com/rust-lang/crates.io-index)",
"libloading 0.5.0 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
@ -1922,7 +1923,7 @@ dependencies = [
"fnv 1.0.6 (registry+https://github.com/rust-lang/crates.io-index)",
"futures 0.1.29 (registry+https://github.com/rust-lang/crates.io-index)",
"http 0.1.17 (registry+https://github.com/rust-lang/crates.io-index)",
"indexmap 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"indexmap 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"slab 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
"string 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
@ -1953,7 +1954,7 @@ version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"ahash 0.2.16 (registry+https://github.com/rust-lang/crates.io-index)",
"autocfg 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"autocfg 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@ -2144,8 +2145,11 @@ dependencies = [
[[package]]
name = "indexmap"
version = "1.0.2"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"autocfg 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "instant-seal"
@ -2241,7 +2245,7 @@ version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cesu8 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"combine 3.6.1 (registry+https://github.com/rust-lang/crates.io-index)",
"combine 3.8.1 (registry+https://github.com/rust-lang/crates.io-index)",
"error-chain 0.12.0 (registry+https://github.com/rust-lang/crates.io-index)",
"jni-sys 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
@ -2342,7 +2346,7 @@ dependencies = [
"bytes 0.4.12 (registry+https://github.com/rust-lang/crates.io-index)",
"globset 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"jsonrpc-core 14.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"tokio 0.1.22 (registry+https://github.com/rust-lang/crates.io-index)",
"tokio-codec 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
@ -2448,7 +2452,7 @@ dependencies = [
[[package]]
name = "lazy_static"
version = "1.3.0"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@ -2962,7 +2966,7 @@ dependencies = [
"digest 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"ethereum-types 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"hmac 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-secp256k1 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
"pbkdf2 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.7.2 (registry+https://github.com/rust-lang/crates.io-index)",
@ -3318,7 +3322,7 @@ dependencies = [
"ethcore-sync 1.12.0",
"ethereum-types 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"keccak-hash 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"matches 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-bytes 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
@ -3371,7 +3375,7 @@ name = "parity-wordlist"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.6.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
@ -3891,7 +3895,7 @@ dependencies = [
"crossbeam-deque 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"crossbeam-queue 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"crossbeam-utils 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"num_cpus 1.10.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
@ -3966,7 +3970,7 @@ version = "0.14.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cc 1.0.46 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.62 (registry+https://github.com/rust-lang/crates.io-index)",
"spin 0.5.2 (registry+https://github.com/rust-lang/crates.io-index)",
"untrusted 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)",
@ -3996,7 +4000,7 @@ name = "rlp_compress"
version = "0.1.0"
dependencies = [
"elastic-array 0.10.2 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"rlp 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
@ -4273,7 +4277,7 @@ dependencies = [
"keccak-hasher 0.1.1",
"kvdb 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"kvdb-rocksdb 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"num_cpus 1.10.1 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-bytes 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
@ -4319,7 +4323,7 @@ dependencies = [
"keccak-hasher 0.1.1",
"kvdb 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"kvdb-rocksdb 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-bytes 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-crypto 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
@ -4573,7 +4577,7 @@ name = "thread_local"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@ -4713,7 +4717,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"crossbeam-utils 0.5.0 (registry+https://github.com/rust-lang/crates.io-index)",
"futures 0.1.29 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"mio 0.6.19 (registry+https://github.com/rust-lang/crates.io-index)",
"num_cpus 1.10.1 (registry+https://github.com/rust-lang/crates.io-index)",
@ -4773,7 +4777,7 @@ dependencies = [
"crossbeam-queue 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"crossbeam-utils 0.6.6 (registry+https://github.com/rust-lang/crates.io-index)",
"futures 0.1.29 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"num_cpus 1.10.1 (registry+https://github.com/rust-lang/crates.io-index)",
"slab 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
@ -5070,7 +5074,7 @@ version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"idna 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.99 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_derive 1.0.89 (registry+https://github.com/rust-lang/crates.io-index)",
@ -5097,7 +5101,7 @@ dependencies = [
"executive-state 0.1.0",
"keccak-hash 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"kvdb 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"machine 0.1.0",
"memory-cache 0.1.0",
@ -5119,7 +5123,7 @@ version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"if_chain 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"proc-macro2 0.4.20 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.6.8 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
@ -5389,11 +5393,11 @@ dependencies = [
"checksum arrayref 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "0d382e583f07208808f6b1249e60848879ba3543f57c32277bf52d69c2f0f0ee"
"checksum arrayvec 0.4.11 (registry+https://github.com/rust-lang/crates.io-index)" = "b8d73f9beda665eaa98ab9e4f7442bd4e7de6652587de55b2525e52e29c1b0ba"
"checksum arrayvec 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "cff77d8686867eceff3105329d4698d96c2391c176d5d03adc90c7389162b5b8"
"checksum ascii 0.7.1 (registry+https://github.com/rust-lang/crates.io-index)" = "3ae7d751998c189c1d4468cf0a39bb2eae052a9c58d50ebb3b9591ee3813ad50"
"checksum ascii 0.9.3 (registry+https://github.com/rust-lang/crates.io-index)" = "eab1c04a571841102f5345a8fc0f6bb3d31c315dec879b5c6e42e40ce7ffa34e"
"checksum assert_matches 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "7deb0a829ca7bcfaf5da70b073a8d128619259a7be8216a355e23f00763059e5"
"checksum attohttpc 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)" = "eaf0ec4b0e00f61ee75556ca027485b7b354f4a714d88cc03f4468abd9378c86"
"checksum atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "9a7d5b8723950951411ee34d271d99dddcc2035a16ab25310ea2c8cfd4369652"
"checksum autocfg 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "0e49efa51329a5fd37e7c79db4621af617cd4e3e5bc224939808d076077077bf"
"checksum autocfg 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)" = "1d49d90015b3c36167a20fe2810c5cd875ad504b39cff3d4eae7977e6b7c1cb2"
"checksum backtrace 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)" = "89a47830402e9981c5c41223151efcced65a0510c13097c769cede7efb34782a"
"checksum backtrace-sys 0.1.24 (registry+https://github.com/rust-lang/crates.io-index)" = "c66d56ac8dabd07f6aacdaf633f4b8262f5b3601a810a0dcddffd5c22c69daa0"
"checksum base-x 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "5cda5d0f5584d129112ad8bf4775b9fd2b9f1e30738c7b1a25314ba2244d6a51"
@ -5427,7 +5431,7 @@ dependencies = [
"checksum clap 2.33.0 (registry+https://github.com/rust-lang/crates.io-index)" = "5067f5bb2d80ef5d68b4c87db81601f0b75bca627bc2ef76b141d7b846a3c6d9"
"checksum cloudabi 0.0.3 (registry+https://github.com/rust-lang/crates.io-index)" = "ddfc5b9aa5d4507acaf872de71051dfd0e309860e88966e1051e462a077aac4f"
"checksum cmake 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)" = "6ec65ee4f9c9d16f335091d23693457ed4928657ba4982289d7fafee03bc614a"
"checksum combine 3.6.1 (registry+https://github.com/rust-lang/crates.io-index)" = "fc1d011beeed29187b8db2ac3925c8dd4d3e87db463dc9d2d2833985388fc5bc"
"checksum combine 3.8.1 (registry+https://github.com/rust-lang/crates.io-index)" = "da3da6baa321ec19e1cc41d31bf599f00c783d0517095cdaf0332e3fe8d20680"
"checksum const-random 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)" = "7b641a8c9867e341f3295564203b1c250eb8ce6cb6126e007941f78c4d2ed7fe"
"checksum const-random-macro 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c750ec12b83377637110d5a57f5ae08e895b06c4b16e2bdbf1a94ef717428c59"
"checksum criterion 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "938703e165481c8d612ea3479ac8342e5615185db37765162e762ec3523e2fc6"
@ -5509,7 +5513,7 @@ dependencies = [
"checksum impl-codec 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)" = "3fa0086251524c50fd53b32e7b05eb6d79e2f97221eaf0c53c0ca9c3096f21d3"
"checksum impl-rlp 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "f39b9963cf5f12fcc4ae4b30a6927ed67d6b4ea4cbe7d17a41131163b401303b"
"checksum impl-serde 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "bbb1ea6188aca47a0eaeeb330d8a82f16cd500f30b897062d23922568727333a"
"checksum indexmap 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "7e81a7c05f79578dbc15793d8b619db9ba32b4577003ef3af1a91c416798c58d"
"checksum indexmap 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "712d7b3ea5827fcb9d4fda14bf4da5f136f0db2ae9c8f4bd4e2d1c6fde4e6db2"
"checksum integer-encoding 1.0.5 (registry+https://github.com/rust-lang/crates.io-index)" = "26746cbc2e680af687e88d717f20ff90079bd10fc984ad57d277cd0e37309fa5"
"checksum interleaved-ordered 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "141340095b15ed7491bd3d4ced9d20cebfb826174b6bb03386381f62b01e3d77"
"checksum iovec 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "dbe6e417e7d0975db6512b90796e8ce223145ac4e33c377e4a42882a0e88bb08"
@ -5536,7 +5540,7 @@ dependencies = [
"checksum kvdb 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "72ae89206cea31c32014b39d5a454b96135894221610dbfd19cf4d2d044fa546"
"checksum kvdb-memorydb 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "296c12309ed36cb74d59206406adbf1971c3baa56d5410efdb508d8f1c60a351"
"checksum kvdb-rocksdb 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)" = "96eb0e0112bb66fe5401294ca0f43c9cb771456af9270443545026e55fd00912"
"checksum lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "bc5729f27f159ddd61f4df6228e827e86643d4d3e7c32183cb30a1c08f604a14"
"checksum lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
"checksum lazycell 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ddba4c30a78328befecec92fc94970e53b3ae385827d28620f0f5bb2493081e0"
"checksum libc 0.2.62 (registry+https://github.com/rust-lang/crates.io-index)" = "34fcd2c08d2f832f376f4173a231990fa5aef4e99fb569867318a227ef4c06ba"
"checksum libloading 0.5.0 (registry+https://github.com/rust-lang/crates.io-index)" = "9c3ad660d7cb8c5822cd83d10897b0f1f1526792737a179e73896152f85b88c2"


@ -452,7 +452,6 @@ impl StateRebuilder {
StateDB::commit_bloom(&mut batch, bloom_journal)?;
self.db.inject(&mut batch)?;
backing.write_buffered(batch);
trace!(target: "snapshot", "current state root: {:?}", self.state_root);
Ok(())
}


@ -161,6 +161,7 @@ impl Restoration {
if let Some(ref mut writer) = self.writer.as_mut() {
writer.write_state_chunk(hash, chunk)?;
trace!(target: "snapshot", "Wrote {}/{} bytes of state to db/disk. Current state root: {:?}", len, chunk.len(), self.state.state_root());
}
self.state_chunks_left.remove(&hash);
@ -676,7 +677,6 @@ impl<C> Service<C> where C: SnapshotClient + ChainInfo {
} else if manifest.state_hashes.contains(&hash) {
true
} else {
warn!(target: "snapshot", "Hash of the content of {:?} not present in the manifest block/state hashes.", path);
return Ok(false);
};
@ -788,7 +788,7 @@ impl<C> Service<C> where C: SnapshotClient + ChainInfo {
false => Ok(())
}
}
other => other.map(drop),
Err(e) => Err(e)
};
(res, db)
}


@ -52,8 +52,7 @@ pub trait SnapshotService : Sync + Send {
fn status(&self) -> RestorationStatus;
/// Begin snapshot restoration.
/// If restoration in-progress, this will reset it.
/// From this point on, any previous snapshot may become unavailable.
/// If a restoration is in progress, this will reset it and clear all data.
fn begin_restore(&self, manifest: ManifestData);
/// Abort an in-progress restoration if there is one.


@ -19,6 +19,7 @@ ethcore-private-tx = { path = "../private-tx" }
ethereum-types = "0.8.0"
fastmap = { path = "../../util/fastmap" }
futures = "0.1"
indexmap = "1.3.0"
keccak-hash = "0.4.0"
light = { package = "ethcore-light", path = "../light" }
log = "0.4"


@ -295,7 +295,7 @@ pub struct EthSync {
light_subprotocol_name: [u8; 3],
/// Priority tasks notification channel
priority_tasks: Mutex<mpsc::Sender<PriorityTask>>,
/// for state tracking
/// Track the sync state: are we importing or verifying blocks?
is_major_syncing: Arc<AtomicBool>
}


@ -309,7 +309,7 @@ impl BlockDownloader {
}
}
}
// Update the highest block number seen on the network from the header.
if let Some((number, _)) = last_header {
if self.highest_block.as_ref().map_or(true, |n| number > *n) {
self.highest_block = Some(number);


@ -43,7 +43,7 @@ use ethereum_types::{H256, U256};
use keccak_hash::keccak;
use network::PeerId;
use network::client_version::ClientVersion;
use log::{debug, trace, error};
use log::{debug, trace, error, warn};
use rlp::Rlp;
use common_types::{
BlockNumber,
@ -76,14 +76,14 @@ impl SyncHandler {
SignedPrivateTransactionPacket => SyncHandler::on_signed_private_transaction(sync, io, peer, &rlp),
PrivateStatePacket => SyncHandler::on_private_state_data(sync, io, peer, &rlp),
_ => {
debug!(target: "sync", "{}: Unknown packet {}", peer, packet_id.id());
trace!(target: "sync", "{}: Unknown packet {}", peer, packet_id.id());
Ok(())
}
};
match result {
Err(DownloaderImportError::Invalid) => {
debug!(target:"sync", "{} -> Invalid packet {}", peer, packet_id.id());
trace!(target:"sync", "{} -> Invalid packet {}", peer, packet_id.id());
io.disable_peer(peer);
sync.deactivate_peer(io, peer);
},
@ -96,7 +96,7 @@ impl SyncHandler {
},
}
} else {
debug!(target: "sync", "{}: Unknown packet {}", peer, packet_id);
trace!(target: "sync", "{}: Unknown packet {}", peer, packet_id);
}
}
@ -117,14 +117,14 @@ impl SyncHandler {
sync.active_peers.remove(&peer_id);
if sync.state == SyncState::SnapshotManifest {
// Check if we are asking other peers for
// the snapshot manifest as well.
// If not, return to initial state
let still_asking_manifest = sync.peers.iter()
// Check if we are asking other peers for a snapshot manifest as well. If not,
// set our state to initial state (`Idle` or `WaitingPeers`).
let still_seeking_manifest = sync.peers.iter()
.filter(|&(id, p)| sync.active_peers.contains(id) && p.asking == PeerAsking::SnapshotManifest)
.next().is_none();
.next().is_some();
if still_asking_manifest {
if !still_seeking_manifest {
warn!(target: "snapshot_sync", "The peer we were downloading a snapshot from ({}) went away. Retrying.", peer_id);
sync.state = ChainSync::get_init_state(sync.warp_sync, io.chain());
}
}
@ -371,18 +371,18 @@ impl SyncHandler {
let block_set = sync.peers.get(&peer_id).and_then(|p| p.block_set).unwrap_or(BlockSet::NewBlocks);
if !sync.reset_peer_asking(peer_id, PeerAsking::BlockHeaders) {
debug!(target: "sync", "{}: Ignored unexpected headers", peer_id);
trace!(target: "sync", "{}: Ignored unexpected headers", peer_id);
return Ok(());
}
let expected_hash = match expected_hash {
Some(hash) => hash,
None => {
debug!(target: "sync", "{}: Ignored unexpected headers (expected_hash is None)", peer_id);
trace!(target: "sync", "{}: Ignored unexpected headers (expected_hash is None)", peer_id);
return Ok(());
}
};
if !allowed {
debug!(target: "sync", "{}: Ignored unexpected headers (peer not allowed)", peer_id);
trace!(target: "sync", "{}: Ignored unexpected headers (peer not allowed)", peer_id);
return Ok(());
}
@ -466,12 +466,12 @@ impl SyncHandler {
/// Called when snapshot manifest is downloaded from a peer.
fn on_snapshot_manifest(sync: &mut ChainSync, io: &mut dyn SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), DownloaderImportError> {
if !sync.peers.get(&peer_id).map_or(false, |p| p.can_sync()) {
trace!(target: "sync", "Ignoring snapshot manifest from unconfirmed peer {}", peer_id);
trace!(target: "snapshot_sync", "Ignoring snapshot manifest from unconfirmed peer {}", peer_id);
return Ok(());
}
sync.clear_peer_download(peer_id);
if !sync.reset_peer_asking(peer_id, PeerAsking::SnapshotManifest) || sync.state != SyncState::SnapshotManifest {
trace!(target: "sync", "{}: Ignored unexpected/expired manifest", peer_id);
trace!(target: "snapshot_sync", "{}: Ignored unexpected/expired manifest", peer_id);
return Ok(());
}
@ -482,10 +482,12 @@ impl SyncHandler {
.map_or(false, |(l, h)| manifest.version >= l && manifest.version <= h);
if !is_supported_version {
trace!(target: "sync", "{}: Snapshot manifest version not supported: {}", peer_id, manifest.version);
warn!(target: "snapshot_sync", "{}: Snapshot manifest version not supported: {}", peer_id, manifest.version);
return Err(DownloaderImportError::Invalid);
}
sync.snapshot.reset_to(&manifest, &keccak(manifest_rlp.as_raw()));
debug!(target: "snapshot_sync", "{}: Peer sent a snapshot manifest we can use. Block number #{}, block chunks: {}, state chunks: {}",
peer_id, manifest.block_number, manifest.block_hashes.len(), manifest.state_hashes.len());
io.snapshot_service().begin_restore(manifest);
sync.state = SyncState::SnapshotData;
@ -495,12 +497,12 @@ impl SyncHandler {
/// Called when snapshot data is downloaded from a peer.
fn on_snapshot_data(sync: &mut ChainSync, io: &mut dyn SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), DownloaderImportError> {
if !sync.peers.get(&peer_id).map_or(false, |p| p.can_sync()) {
trace!(target: "sync", "Ignoring snapshot data from unconfirmed peer {}", peer_id);
trace!(target: "snapshot_sync", "Ignoring snapshot data from unconfirmed peer {}", peer_id);
return Ok(());
}
sync.clear_peer_download(peer_id);
if !sync.reset_peer_asking(peer_id, PeerAsking::SnapshotData) || (sync.state != SyncState::SnapshotData && sync.state != SyncState::SnapshotWaiting) {
trace!(target: "sync", "{}: Ignored unexpected snapshot data", peer_id);
trace!(target: "snapshot_sync", "{}: Ignored unexpected snapshot data", peer_id);
return Ok(());
}
@ -508,12 +510,12 @@ impl SyncHandler {
let status = io.snapshot_service().status();
match status {
RestorationStatus::Inactive | RestorationStatus::Failed => {
trace!(target: "sync", "{}: Snapshot restoration aborted", peer_id);
trace!(target: "snapshot_sync", "{}: Snapshot restoration status: {:?}", peer_id, status);
sync.state = SyncState::WaitingPeers;
// only note bad if restoration failed.
if let (Some(hash), RestorationStatus::Failed) = (sync.snapshot.snapshot_hash(), status) {
trace!(target: "sync", "Noting snapshot hash {} as bad", hash);
debug!(target: "snapshot_sync", "Marking snapshot manifest hash {} as bad", hash);
sync.snapshot.note_bad(hash);
}
@ -521,30 +523,30 @@ impl SyncHandler {
return Ok(());
},
RestorationStatus::Initializing { .. } => {
trace!(target: "warp", "{}: Snapshot restoration is initializing", peer_id);
trace!(target: "snapshot_sync", "{}: Snapshot restoration is initializing. Can't accept data right now.", peer_id);
return Ok(());
}
RestorationStatus::Finalizing => {
trace!(target: "warp", "{}: Snapshot finalizing restoration", peer_id);
trace!(target: "snapshot_sync", "{}: Snapshot finalizing restoration. Can't accept data right now.", peer_id);
return Ok(());
}
RestorationStatus::Ongoing { .. } => {
trace!(target: "sync", "{}: Snapshot restoration is ongoing", peer_id);
trace!(target: "snapshot_sync", "{}: Snapshot restoration is ongoing", peer_id);
},
}
let snapshot_data: Bytes = r.val_at(0)?;
match sync.snapshot.validate_chunk(&snapshot_data) {
Ok(ChunkType::Block(hash)) => {
trace!(target: "sync", "{}: Processing block chunk", peer_id);
trace!(target: "snapshot_sync", "{}: Processing block chunk", peer_id);
io.snapshot_service().restore_block_chunk(hash, snapshot_data);
}
Ok(ChunkType::State(hash)) => {
trace!(target: "sync", "{}: Processing state chunk", peer_id);
trace!(target: "snapshot_sync", "{}: Processing state chunk", peer_id);
io.snapshot_service().restore_state_chunk(hash, snapshot_data);
}
Err(()) => {
trace!(target: "sync", "{}: Got bad snapshot chunk", peer_id);
trace!(target: "snapshot_sync", "{}: Got bad snapshot chunk", peer_id);
io.disconnect_peer(peer_id);
return Ok(());
}
@ -566,7 +568,7 @@ impl SyncHandler {
let warp_protocol = warp_protocol_version != 0;
let private_tx_protocol = warp_protocol_version >= PAR_PROTOCOL_VERSION_3.0;
let peer = PeerInfo {
protocol_version: protocol_version,
protocol_version,
network_id: r.val_at(1)?,
difficulty: Some(r.val_at(2)?),
latest_hash: r.val_at(3)?,
@ -595,7 +597,8 @@ impl SyncHandler {
latest:{}, \
genesis:{}, \
snapshot:{:?}, \
private_tx_enabled:{})",
private_tx_enabled:{}, \
client_version: {})",
peer_id,
peer.protocol_version,
peer.network_id,
@ -603,7 +606,8 @@ impl SyncHandler {
peer.latest_hash,
peer.genesis,
peer.snapshot_number,
peer.private_tx_enabled
peer.private_tx_enabled,
peer.client_version,
);
if io.is_expired() {
trace!(target: "sync", "Status packet from expired session {}:{}", peer_id, io.peer_version(peer_id));


@ -115,7 +115,7 @@ use ethereum_types::{H256, U256};
use fastmap::{H256FastMap, H256FastSet};
use futures::sync::mpsc as futures_mpsc;
use keccak_hash::keccak;
use log::{error, trace, debug};
use log::{error, trace, debug, warn};
use network::client_version::ClientVersion;
use network::{self, PeerId, PacketId};
use parity_util_mem::{MallocSizeOfExt, malloc_size_of_is_0};
@ -172,18 +172,29 @@ const MAX_NEW_BLOCK_AGE: BlockNumber = 20;
// maximal packet size with transactions (cannot be greater than 16MB - protocol limitation).
// keep it under 8MB as well, cause it seems that it may result oversized after compression.
const MAX_TRANSACTION_PACKET_SIZE: usize = 5 * 1024 * 1024;
// Min number of blocks to be behind for a snapshot sync
// Min number of blocks to be behind the tip for a snapshot sync to be considered useful to us.
const SNAPSHOT_RESTORE_THRESHOLD: BlockNumber = 30000;
/// We prefer to sync snapshots that are available from this many peers. If we have not found a
/// snapshot available from `SNAPSHOT_MIN_PEERS` peers within `WAIT_PEERS_TIMEOUT`, then we make do
/// with a single peer to sync from.
const SNAPSHOT_MIN_PEERS: usize = 3;
/// To keep memory from growing uncontrollably we restore chunks as we download them and write them
/// to disk only after we have processed them; we also want to avoid pausing the chunk download too
/// often, so we allow a little bit of leeway here and let the downloading be
/// `MAX_SNAPSHOT_CHUNKS_DOWNLOAD_AHEAD` chunks ahead of the restoration.
const MAX_SNAPSHOT_CHUNKS_DOWNLOAD_AHEAD: usize = 5;
const MAX_SNAPSHOT_CHUNKS_DOWNLOAD_AHEAD: usize = 3;
const WAIT_PEERS_TIMEOUT: Duration = Duration::from_secs(5);
const STATUS_TIMEOUT: Duration = Duration::from_secs(5);
/// Time to wait for snapshotting peers to show up with a snapshot we want to use. Beyond this time,
/// a single peer is enough to start downloading.
const WAIT_PEERS_TIMEOUT: Duration = Duration::from_secs(10);
/// Time to wait for a peer to start being useful to us in some form. After this they are
/// disconnected.
const STATUS_TIMEOUT: Duration = Duration::from_secs(10);
const HEADERS_TIMEOUT: Duration = Duration::from_secs(15);
const BODIES_TIMEOUT: Duration = Duration::from_secs(20);
const RECEIPTS_TIMEOUT: Duration = Duration::from_secs(10);
const FORK_HEADER_TIMEOUT: Duration = Duration::from_secs(3);
/// Max time to wait for the Snapshot Manifest packet to arrive from a peer after it's being asked.
const SNAPSHOT_MANIFEST_TIMEOUT: Duration = Duration::from_secs(5);
const SNAPSHOT_DATA_TIMEOUT: Duration = Duration::from_secs(120);
const PRIVATE_STATE_TIMEOUT: Duration = Duration::from_secs(120);
@ -276,7 +287,7 @@ impl SyncStatus {
}
#[derive(PartialEq, Eq, Debug, Clone)]
/// Peer data type requested
/// Peer data type requested from a peer by us.
pub enum PeerAsking {
Nothing,
ForkHeader,
@ -296,7 +307,7 @@ pub enum BlockSet {
/// Missing old blocks
OldBlocks,
}
#[derive(Clone, Eq, PartialEq)]
#[derive(Clone, Eq, PartialEq, Debug)]
pub enum ForkConfirmation {
/// Fork block confirmation pending.
Unconfirmed,
@ -306,7 +317,7 @@ pub enum ForkConfirmation {
Confirmed,
}
#[derive(Clone)]
#[derive(Clone, Debug)]
/// Syncing peer information
pub struct PeerInfo {
/// eth protocol version
@ -319,7 +330,7 @@ pub struct PeerInfo {
latest_hash: H256,
/// Peer total difficulty if known
difficulty: Option<U256>,
/// Type of data currenty being requested from peer.
/// Type of data currently being requested by us from a peer.
asking: PeerAsking,
/// A set of block numbers being requested
asking_blocks: Vec<H256>,
@ -455,6 +466,7 @@ impl ChainSyncApi {
///
/// NOTE This method should only handle stuff that can be canceled and would reach other peers
/// by other means.
/// Called every `PRIORITY_TIMER` (0.25sec)
pub fn process_priority_queue(&self, io: &mut dyn SyncIo) {
fn check_deadline(deadline: Instant) -> Option<Duration> {
let now = Instant::now();
@ -589,12 +601,26 @@ impl ChainSync {
peers
}
/// Reset the client to its initial state:
/// - if warp sync is enabled, start looking for peers to sync a snapshot from
/// - if `--warp-barrier` is used, ensure we're not synced beyond the barrier and start
/// looking for peers to sync a snapshot from
/// - otherwise, go `Idle`.
fn get_init_state(warp_sync: WarpSync, chain: &dyn BlockChainClient) -> SyncState {
let best_block = chain.chain_info().best_block_number;
match warp_sync {
WarpSync::Enabled => SyncState::WaitingPeers,
WarpSync::OnlyAndAfter(block) if block > best_block => SyncState::WaitingPeers,
_ => SyncState::Idle,
WarpSync::Enabled => {
debug!(target: "sync", "Setting the initial state to `WaitingPeers`. Our best block: #{}; warp_sync: {:?}", best_block, warp_sync);
SyncState::WaitingPeers
},
WarpSync::OnlyAndAfter(block) if block > best_block => {
debug!(target: "sync", "Setting the initial state to `WaitingPeers`. Our best block: #{}; warp_sync: {:?}", best_block, warp_sync);
SyncState::WaitingPeers
},
_ => {
debug!(target: "sync", "Setting the initial state to `Idle`. Our best block: #{}", best_block);
SyncState::Idle
},
}
}
}
@ -615,7 +641,7 @@ pub struct ChainSync {
state: SyncState,
/// Last block number for the start of sync
starting_block: BlockNumber,
/// Highest block number seen
/// Highest block number seen on the network.
highest_block: Option<BlockNumber>,
/// All connected peers
peers: Peers,
@ -687,7 +713,7 @@ impl ChainSync {
sync
}
/// Returns synchonization status
/// Returns synchronization status
pub fn status(&self) -> SyncStatus {
let last_imported_number = self.new_blocks.last_imported_block_number();
SyncStatus {
@ -745,7 +771,7 @@ impl ChainSync {
receiver
}
/// notify all subscibers of a new SyncState
/// Notify all subscribers of a new SyncState
fn notify_sync_state(&mut self, state: SyncState) {
// remove any sender whose receiving end has been dropped
self.status_sinks.retain(|sender| {
@ -765,7 +791,7 @@ impl ChainSync {
fn reset(&mut self, io: &mut dyn SyncIo, state: Option<SyncState>) {
self.new_blocks.reset();
let chain_info = io.chain().chain_info();
for (_, ref mut p) in &mut self.peers {
for (_, mut p) in &mut self.peers {
if p.block_set != Some(BlockSet::OldBlocks) {
p.reset_asking();
if p.difficulty.is_none() {
@ -787,10 +813,12 @@ impl ChainSync {
pub fn reset_and_continue(&mut self, io: &mut dyn SyncIo) {
trace!(target: "sync", "Restarting");
if self.state == SyncState::SnapshotData {
debug!(target:"sync", "Aborting snapshot restore");
debug!(target:"snapshot_sync", "Aborting snapshot restore");
io.snapshot_service().abort_restore();
}
self.snapshot.clear();
// Passing `None` here means we'll end up in either `SnapshotWaiting` or `Idle` depending on
// the warp sync settings.
self.reset(io, None);
self.continue_sync(io);
}
@ -798,17 +826,17 @@ impl ChainSync {
/// Remove peer from active peer set. Peer will be reactivated on the next sync
/// round.
fn deactivate_peer(&mut self, _io: &mut dyn SyncIo, peer_id: PeerId) {
trace!(target: "sync", "Deactivating peer {}", peer_id);
debug!(target: "sync", "Deactivating peer {}", peer_id);
self.active_peers.remove(&peer_id);
}
/// Decide if we should start downloading a snapshot and from who. Called once per second.
fn maybe_start_snapshot_sync(&mut self, io: &mut dyn SyncIo) {
if !self.warp_sync.is_enabled() || io.snapshot_service().supported_versions().is_none() {
trace!(target: "sync", "Skipping warp sync. Disabled or not supported.");
return;
}
if self.state != SyncState::WaitingPeers && self.state != SyncState::Blocks && self.state != SyncState::Waiting {
trace!(target: "sync", "Skipping warp sync. State: {:?}", self.state);
use SyncState::*;
if self.state != WaitingPeers && self.state != Blocks && self.state != Waiting {
return;
}
// Make sure the snapshot block is not too far away from best block and network best block and
@ -816,71 +844,112 @@ impl ChainSync {
let our_best_block = io.chain().chain_info().best_block_number;
let fork_block = self.fork_block.map_or(0, |(n, _)| n);
let (best_hash, max_peers, snapshot_peers) = {
let expected_warp_block = match self.warp_sync {
WarpSync::OnlyAndAfter(block) => block,
_ => 0,
};
//collect snapshot infos from peers
let snapshots = self.peers.iter()
.filter(|&(_, p)| p.is_allowed() && p.snapshot_number.map_or(false, |sn|
// Snapshot must be old enough that it's usefull to sync with it
our_best_block < sn && (sn - our_best_block) > SNAPSHOT_RESTORE_THRESHOLD &&
// Snapshot must have been taken after the Fork
sn > fork_block &&
// Snapshot must be greater than the warp barrier if any
sn > expected_warp_block &&
// If we know a highest block, snapshot must be recent enough
self.highest_block.map_or(true, |highest| {
highest < sn || (highest - sn) <= SNAPSHOT_RESTORE_THRESHOLD
})
))
.filter_map(|(p, peer)| peer.snapshot_hash.map(|hash| (p, hash.clone())))
.filter(|&(_, ref hash)| !self.snapshot.is_known_bad(hash));
let expected_warp_block = match self.warp_sync {
WarpSync::OnlyAndAfter(warp_block) => {
if our_best_block >= warp_block {
trace!(target: "snapshot_sync",
"Our best block (#{}) is already beyond the warp barrier block (#{})",
our_best_block, warp_block);
return;
}
warp_block
},
_ => 0,
};
// Collect snapshot info from peers and check if we can use their snapshots to sync.
let (best_snapshot_block, best_hash, max_peers, snapshot_peers) = {
let mut snapshots = self.peers.iter()
.filter(|&(_, p)|
// filter out expired peers and peers from whom we do not have fork confirmation.
p.is_allowed() &&
p.snapshot_number.map_or(false, |sn|
// Snapshot must be sufficiently better than what we have that it's useful to
// sync with it: more than 30k blocks beyond our best block
our_best_block < sn && (sn - our_best_block) > SNAPSHOT_RESTORE_THRESHOLD &&
// Snapshot must have been taken after the fork block (if any is configured)
sn > fork_block &&
// Snapshot must be greater or equal to the warp barrier, if any
sn >= expected_warp_block
)
)
.filter_map(|(p, peer)| {
peer.snapshot_hash.map(|hash| (p, hash))
.filter(|(_, hash)| !self.snapshot.is_known_bad(&hash) )
.and_then(|(p, hash)| peer.snapshot_number.map(|n| (*p, n, hash) ) )
})
.collect::<Vec<(PeerId, BlockNumber, H256)>>();
// Sort collection of peers by highest block number.
snapshots.sort_by(|&(_, ref b1, _), &(_, ref b2, _)| b2.cmp(b1) );
let mut snapshot_peers = HashMap::new();
let mut max_peers: usize = 0;
let mut best_hash = None;
for (p, hash) in snapshots {
let mut best_snapshot_block = None;
// Of the available snapshots, find the one seeded by the most peers. On a tie, the
// snapshot closest to the tip will be used (unfortunately this is the common case).
for (p, snapshot_block, hash) in snapshots {
let peers = snapshot_peers.entry(hash).or_insert_with(Vec::new);
peers.push(*p);
peers.push(p);
if peers.len() > max_peers {
trace!(target: "snapshot_sync", "{} is the new best snapshotting peer, has snapshot at block #{}/{}", p, snapshot_block, hash);
max_peers = peers.len();
best_hash = Some(hash);
best_snapshot_block = Some(snapshot_block);
}
}
(best_hash, max_peers, snapshot_peers)
(best_snapshot_block, best_hash, max_peers, snapshot_peers)
};
// If we've waited long enough (10sec), a single peer will have to be enough for the snapshot sync to start.
let timeout = (self.state == WaitingPeers) &&
self.sync_start_time.map_or(false, |t| t.elapsed() > WAIT_PEERS_TIMEOUT);
let timeout = (self.state == SyncState::WaitingPeers) && self.sync_start_time.map_or(false, |t| t.elapsed() > WAIT_PEERS_TIMEOUT);
if let (Some(hash), Some(peers)) = (best_hash, best_hash.map_or(None, |h| snapshot_peers.get(&h))) {
if let (Some(block), Some(hash), Some(peers)) = (
best_snapshot_block,
best_hash,
best_hash.map_or(None, |h| snapshot_peers.get(&h))
) {
trace!(target: "snapshot_sync", "We can sync a snapshot at #{:?}/{:?} from {} peer(s): {:?}",
best_snapshot_block, best_hash, max_peers, snapshot_peers.values());
if max_peers >= SNAPSHOT_MIN_PEERS {
trace!(target: "sync", "Starting confirmed snapshot sync {:?} with {:?}", hash, peers);
debug!(target: "snapshot_sync", "Starting confirmed snapshot sync for a snapshot at #{}/{:?} with peer {:?}", block, hash, peers);
self.start_snapshot_sync(io, peers);
} else if timeout {
trace!(target: "sync", "Starting unconfirmed snapshot sync {:?} with {:?}", hash, peers);
debug!(target: "snapshot_sync", "Starting unconfirmed snapshot sync for a snapshot at #{}/{:?} with peer {:?}", block, hash, peers);
self.start_snapshot_sync(io, peers);
} else {
trace!(target: "snapshot_sync", "Waiting a little more to let more snapshot peers connect.")
}
} else if timeout {
if !self.warp_sync.is_warp_only() {
debug!(target: "snapshot_sync", "Not syncing snapshots (or none found), proceeding with normal sync.");
self.set_state(SyncState::Idle);
self.continue_sync(io);
} else {
warn!(target: "snapshot_sync", "No snapshots currently available at #{}. Try using a smaller value for --warp-barrier", expected_warp_block);
}
} else if timeout && !self.warp_sync.is_warp_only() {
trace!(target: "sync", "No snapshots found, starting full sync");
self.set_state(SyncState::Idle);
self.continue_sync(io);
}
}
/// Start a snapshot with all peers that we are not currently asking something else from. If
/// we're already snapshotting with a peer, set sync state to `SnapshotData` and continue
/// fetching the snapshot. Note that we only ever sync snapshots from one peer so here we send
/// out the request for a manifest to all the peers that have it and start syncing the snapshot
/// with the first that responds.
fn start_snapshot_sync(&mut self, io: &mut dyn SyncIo, peers: &[PeerId]) {
if !self.snapshot.have_manifest() {
for p in peers {
if self.peers.get(p).map_or(false, |p| p.asking == PeerAsking::Nothing) {
// When we get a response we call `SyncHandler::on_snapshot_manifest`
SyncRequester::request_snapshot_manifest(self, io, *p);
}
}
self.set_state(SyncState::SnapshotManifest);
trace!(target: "sync", "New snapshot sync with {:?}", peers);
trace!(target: "snapshot_sync", "New snapshot sync with {:?}", peers);
} else {
self.set_state(SyncState::SnapshotData);
trace!(target: "sync", "Resumed snapshot sync with {:?}", peers);
trace!(target: "snapshot_sync", "Resumed snapshot sync with {:?}", peers);
}
}
@ -910,7 +979,8 @@ impl ChainSync {
}
}
/// Resume downloading
/// Resume downloading.
/// Called every `CONTINUE_SYNC_TIMER` (2.5sec)
pub fn continue_sync(&mut self, io: &mut dyn SyncIo) {
if self.state == SyncState::Waiting {
trace!(target: "sync", "Waiting for the block queue");
@ -927,7 +997,7 @@ impl ChainSync {
).collect();
if peers.len() > 0 {
trace!(
debug!(
target: "sync",
"Syncing with peers: {} active, {} available, {} total",
self.active_peers.len(), peers.len(), self.peers.len()
@ -943,9 +1013,8 @@ impl ChainSync {
}
}
if
(self.state == SyncState::Blocks || self.state == SyncState::NewBlocks) &&
!self.peers.values().any(|p| p.asking != PeerAsking::Nothing && p.block_set != Some(BlockSet::OldBlocks) && p.can_sync())
if (self.state == SyncState::Blocks || self.state == SyncState::NewBlocks)
&& !self.peers.values().any(|p| p.asking != PeerAsking::Nothing && p.block_set != Some(BlockSet::OldBlocks) && p.can_sync())
{
self.complete_sync(io);
}
@ -987,13 +1056,14 @@ impl ChainSync {
let higher_difficulty = peer_difficulty.map_or(true, |pd| pd > syncing_difficulty);
if force || higher_difficulty || self.old_blocks.is_some() {
match self.state {
SyncState::WaitingPeers => {
SyncState::WaitingPeers if peer_snapshot_number > 0 => {
trace!(
target: "sync",
"Checking snapshot sync: {} vs {} (peer: {})",
target: "snapshot_sync",
"{}: Potential snapshot sync peer; their highest block: #{} vs our highest: #{} (peer: {})",
peer_id,
peer_snapshot_number,
chain_info.best_block_number,
peer_id
io.peer_enode(peer_id).unwrap_or_else(|| "enode://???".to_string()),
);
self.maybe_start_snapshot_sync(io);
},
@ -1038,17 +1108,18 @@ impl ChainSync {
},
SyncState::SnapshotData => {
match io.snapshot_service().status() {
RestorationStatus::Ongoing { state_chunks_done, block_chunks_done, .. } => {
RestorationStatus::Ongoing { state_chunks_done, block_chunks_done, state_chunks, block_chunks } => {
// Initialize the snapshot if not already done
self.snapshot.initialize(io.snapshot_service());
self.snapshot.initialize(io.snapshot_service(), block_chunks as usize + state_chunks as usize);
if self.snapshot.done_chunks() - (state_chunks_done + block_chunks_done) as usize > MAX_SNAPSHOT_CHUNKS_DOWNLOAD_AHEAD {
trace!(target: "sync", "Snapshot queue full, pausing sync");
trace!(target: "snapshot_sync", "Snapshot queue full, pausing sync");
self.set_state(SyncState::SnapshotWaiting);
return;
}
},
RestorationStatus::Initializing { .. } => {
trace!(target: "warp", "Snapshot is stil initializing.");
RestorationStatus::Initializing { state_chunks, block_chunks, chunks_done } => {
debug!(target: "snapshot_sync", "Snapshot is initializing: state chunks={}, block chunks={}, chunks done={}",
state_chunks, block_chunks, chunks_done);
return;
},
_ => {
@ -1063,16 +1134,17 @@ impl ChainSync {
},
SyncState::SnapshotManifest | //already downloading from other peer
SyncState::Waiting |
SyncState::SnapshotWaiting => ()
SyncState::SnapshotWaiting => (),
_ => ()
}
} else {
trace!(target: "sync", "Skipping peer {}, force={}, td={:?}, our td={}, state={:?}", peer_id, force, peer_difficulty, syncing_difficulty, self.state);
}
}
/// Clear all blocks/headers marked as being downloaded by a peer.
/// Clear all blocks/headers marked as being downloaded by us from a peer.
fn clear_peer_download(&mut self, peer_id: PeerId) {
if let Some(ref peer) = self.peers.get(&peer_id) {
if let Some(peer) = self.peers.get(&peer_id) {
match peer.asking {
PeerAsking::BlockHeaders => {
if let Some(ref hash) = peer.asking_hash {
@ -1150,7 +1222,7 @@ impl ChainSync {
peer.expired = false;
peer.block_set = None;
if peer.asking != asking {
trace!(target:"sync", "Asking {:?} while expected {:?}", peer.asking, asking);
trace!(target:"sync", "{}: Asking {:?} while expected {:?}", peer_id, peer.asking, asking);
peer.asking = PeerAsking::Nothing;
return false;
} else {
@ -1190,6 +1262,9 @@ impl ChainSync {
io.respond(StatusPacket.id(), packet.out())
}
/// Check if any of the tasks we have on-going with a peer are taking too long (if so, disconnect the peer).
/// Also checks handshaking peers.
/// Called every `PEERS_TIMER` (0.7sec).
pub fn maintain_peers(&mut self, io: &mut dyn SyncIo) {
let tick = Instant::now();
let mut aborting = Vec::new();
@ -1206,7 +1281,7 @@ impl ChainSync {
PeerAsking::PrivateState => elapsed > PRIVATE_STATE_TIMEOUT,
};
if timeout {
debug!(target:"sync", "Timeout {}", peer_id);
debug!(target:"sync", "Peer {} timeout while we were asking them for {:?}; disconnecting.", peer_id, peer.asking);
io.disconnect_peer(*peer_id);
aborting.push(*peer_id);
}
@ -1240,24 +1315,24 @@ impl ChainSync {
SyncState::SnapshotWaiting => {
match io.snapshot_service().status() {
RestorationStatus::Inactive => {
trace!(target:"sync", "Snapshot restoration is complete");
trace!(target:"snapshot_sync", "Snapshot restoration is complete");
self.restart(io);
},
RestorationStatus::Initializing { .. } => {
trace!(target:"sync", "Snapshot restoration is initializing");
trace!(target:"snapshot_sync", "Snapshot restoration is initializing");
},
RestorationStatus::Finalizing { .. } => {
trace!(target:"sync", "Snapshot finalizing restoration");
trace!(target:"snapshot_sync", "Snapshot finalizing restoration");
},
RestorationStatus::Ongoing { state_chunks_done, block_chunks_done, .. } => {
if !self.snapshot.is_complete() && self.snapshot.done_chunks() - (state_chunks_done + block_chunks_done) as usize <= MAX_SNAPSHOT_CHUNKS_DOWNLOAD_AHEAD {
trace!(target:"sync", "Resuming snapshot sync");
trace!(target:"snapshot_sync", "Resuming snapshot sync");
self.set_state(SyncState::SnapshotData);
self.continue_sync(io);
}
},
RestorationStatus::Failed => {
trace!(target: "sync", "Snapshot restoration aborted");
trace!(target: "snapshot_sync", "Snapshot restoration aborted");
self.set_state(SyncState::WaitingPeers);
self.snapshot.clear();
self.continue_sync(io);
@ -1322,7 +1397,8 @@ impl ChainSync {
).collect()
}
/// Maintain other peers. Send out any new blocks and transactions
/// Maintain other peers. Send out any new blocks and transactions. Called every
/// `MAINTAIN_SYNC_TIMER` (1.1sec).
pub fn maintain_sync(&mut self, io: &mut dyn SyncIo) {
self.maybe_start_snapshot_sync(io);
self.check_resume(io);
@ -1369,7 +1445,8 @@ impl ChainSync {
SyncHandler::on_peer_connected(self, io, peer);
}
/// propagates new transactions to all peers
/// Propagates new transactions to all peers.
/// Called every `TX_TIMER` (1.3sec).
pub fn propagate_new_transactions(&mut self, io: &mut dyn SyncIo) {
let deadline = Instant::now() + Duration::from_millis(500);
SyncPropagator::propagate_new_transactions(self, io, || {


@ -87,11 +87,11 @@ impl SyncRequester {
SyncRequester::send_request(sync, io, peer_id, PeerAsking::ForkHeader, GetBlockHeadersPacket, rlp.out());
}
/// Find some headers or blocks to download for a peer.
/// Find some headers or blocks to download from a peer.
pub fn request_snapshot_data(sync: &mut ChainSync, io: &mut dyn SyncIo, peer_id: PeerId) {
// find chunk data to download
if let Some(hash) = sync.snapshot.needed_chunk() {
if let Some(ref mut peer) = sync.peers.get_mut(&peer_id) {
if let Some(mut peer) = sync.peers.get_mut(&peer_id) {
peer.asking_snapshot_data = Some(hash.clone());
}
SyncRequester::request_snapshot_chunk(sync, io, peer_id, &hash);
@ -100,9 +100,8 @@ impl SyncRequester {
/// Request snapshot manifest from a peer.
pub fn request_snapshot_manifest(sync: &mut ChainSync, io: &mut dyn SyncIo, peer_id: PeerId) {
trace!(target: "sync", "{} <- GetSnapshotManifest", peer_id);
let rlp = RlpStream::new_list(0);
SyncRequester::send_request(sync, io, peer_id, PeerAsking::SnapshotManifest, GetSnapshotManifestPacket, rlp.out());
trace!(target: "sync", "{}: requesting a snapshot manifest", peer_id);
SyncRequester::send_request(sync, io, peer_id, PeerAsking::SnapshotManifest, GetSnapshotManifestPacket, rlp::EMPTY_LIST_RLP.to_vec());
}
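As a quick illustration of why the `rlp::EMPTY_LIST_RLP` constant can stand in for a freshly built zero-item stream, here is a minimal sketch (assuming the parity `rlp` crate, ~0.4, where `out()` returns `Vec<u8>`):

```rust
use rlp::RlpStream;

fn main() {
    // An RLP list with zero items encodes to the single byte 0xc0,
    // which is exactly what the EMPTY_LIST_RLP constant holds.
    let built = RlpStream::new_list(0).out();
    assert_eq!(built, vec![0xc0_u8]);
    assert_eq!(built, rlp::EMPTY_LIST_RLP.to_vec());
}
```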
pub fn request_private_state(sync: &mut ChainSync, io: &mut dyn SyncIo, peer_id: PeerId, hash: &H256) {


@ -116,7 +116,7 @@ impl SyncSupplier {
debug!(target:"sync", "Unexpected packet {} from unregistered peer: {}:{}", packet_id, peer, io.peer_version(peer));
return;
}
debug!(target: "sync", "{} -> Dispatching packet: {}", peer, packet_id);
trace!(target: "sync", "{} -> Dispatching packet: {}", peer, packet_id);
match id {
ConsensusDataPacket => {


@ -22,20 +22,43 @@ use keccak_hash::keccak;
use log::trace;
use snapshot::SnapshotService;
use common_types::snapshot::ManifestData;
use indexmap::IndexSet;
#[derive(PartialEq, Eq, Debug)]
/// The type of data contained in a chunk: state or block.
pub enum ChunkType {
/// The chunk contains state data (aka account data).
State(H256),
/// The chunk contains block data.
Block(H256),
}
#[derive(Default, MallocSizeOf)]
pub struct Snapshot {
pending_state_chunks: Vec<H256>,
pending_block_chunks: Vec<H256>,
/// List of hashes of the state chunks we need to complete the warp sync from this snapshot.
/// These hashes are contained in the Manifest we downloaded from the peer(s).
/// Note: this is an ordered set so that state restoration happens in order, which keeps
/// memory usage down.
// See https://github.com/paritytech/parity-common/issues/255
#[ignore_malloc_size_of = "no impl for IndexSet (yet)"]
pending_state_chunks: IndexSet<H256>,
/// List of hashes of the block chunks we need to complete the warp sync from this snapshot.
/// These hashes are contained in the Manifest we downloaded from the peer(s).
/// Note: this is an ordered set so that state restoration happens in order, which keeps
/// memory usage down.
// See https://github.com/paritytech/parity-common/issues/255
#[ignore_malloc_size_of = "no impl for IndexSet (yet)"]
pending_block_chunks: IndexSet<H256>,
/// Set of hashes of chunks we are currently downloading.
downloading_chunks: HashSet<H256>,
/// The set of chunks (block or state) that we have successfully downloaded.
completed_chunks: HashSet<H256>,
/// The hash of the `ManifestData` RLP that we're downloading.
snapshot_hash: Option<H256>,
/// Total number of chunks in the current snapshot.
total_chunks: Option<usize>,
/// Set of snapshot hashes we failed to import. We will not try to sync with
/// this snapshot again until restart.
bad_hashes: HashSet<H256>,
initialized: bool,
}
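A minimal sketch of the property the field comments above rely on — `IndexSet` iterates in insertion order while still offering hash-based lookup and removal (assuming the `indexmap` crate, ~1.x, with `u32` standing in for the `H256` chunk hashes):

```rust
use indexmap::IndexSet;

fn main() {
    let mut pending: IndexSet<u32> = IndexSet::new();
    for &h in &[30, 10, 20] {
        pending.insert(h);
    }
    // Unlike HashSet, iteration follows insertion order, so chunks are
    // handed out in the same order they appear in the manifest.
    let in_order: Vec<u32> = pending.iter().copied().collect();
    assert_eq!(in_order, vec![30, 10, 20]);
    // Membership checks and removal by value remain hash-based.
    assert!(pending.contains(&10));
    assert_eq!(pending.take(&10), Some(10));
}
```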
@ -47,7 +70,7 @@ impl Snapshot {
}
/// Sync the Snapshot completed chunks with the Snapshot Service
pub fn initialize(&mut self, snapshot_service: &dyn SnapshotService) {
pub fn initialize(&mut self, snapshot_service: &dyn SnapshotService, total_chunks: usize) {
if self.initialized {
return;
}
@ -57,111 +80,122 @@ impl Snapshot {
}
trace!(
target: "snapshot",
"Snapshot is now initialized with {} completed chunks.",
self.completed_chunks.len(),
target: "snapshot_sync",
"Snapshot initialized. {}/{} completed chunks.",
self.completed_chunks.len(), total_chunks
);
self.total_chunks = Some(total_chunks);
self.initialized = true;
}
/// Clear everything.
/// Clear everything and set `initialized` to false.
pub fn clear(&mut self) {
self.pending_state_chunks.clear();
self.pending_block_chunks.clear();
self.downloading_chunks.clear();
self.completed_chunks.clear();
self.snapshot_hash = None;
self.total_chunks = None;
self.initialized = false;
}
/// Check if currently downloading a snapshot.
/// Check if we're currently downloading a snapshot.
pub fn have_manifest(&self) -> bool {
self.snapshot_hash.is_some()
}
/// Reset collection for a manifest RLP
/// Clear the `Snapshot` and reset it with data from a `ManifestData` (i.e. the lists of
/// block&state chunk hashes contained in the `ManifestData`).
pub fn reset_to(&mut self, manifest: &ManifestData, hash: &H256) {
self.clear();
self.pending_state_chunks = manifest.state_hashes.clone();
self.pending_block_chunks = manifest.block_hashes.clone();
self.pending_state_chunks = IndexSet::from_iter(manifest.state_hashes.clone());
self.pending_block_chunks = IndexSet::from_iter(manifest.block_hashes.clone());
self.total_chunks = Some(self.pending_block_chunks.len() + self.pending_state_chunks.len());
self.snapshot_hash = Some(hash.clone());
}
/// Validate chunk and mark it as downloaded
/// Check if the chunk is known, i.e. downloaded already or currently downloading; if so, add
/// it to the `completed_chunks` set.
/// Returns a `ChunkType` with the hash of the chunk.
pub fn validate_chunk(&mut self, chunk: &[u8]) -> Result<ChunkType, ()> {
let hash = keccak(chunk);
if self.completed_chunks.contains(&hash) {
trace!(target: "sync", "Ignored proccessed chunk: {:x}", hash);
trace!(target: "snapshot_sync", "Already proccessed chunk {:x}. Ignoring.", hash);
return Err(());
}
self.downloading_chunks.remove(&hash);
if self.pending_block_chunks.iter().any(|h| h == &hash) {
self.completed_chunks.insert(hash.clone());
return Ok(ChunkType::Block(hash));
}
if self.pending_state_chunks.iter().any(|h| h == &hash) {
self.completed_chunks.insert(hash.clone());
return Ok(ChunkType::State(hash));
}
trace!(target: "sync", "Ignored unknown chunk: {:x}", hash);
Err(())
self.pending_block_chunks.take(&hash)
.and_then(|h| {
self.completed_chunks.insert(h);
Some(ChunkType::Block(hash))
})
.or(
self.pending_state_chunks.take(&hash)
.and_then(|h| {
self.completed_chunks.insert(h);
Some(ChunkType::State(hash))
})
).ok_or_else(|| {
trace!(target: "snapshot_sync", "Ignoring unknown chunk: {:x}", hash);
()
})
}
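To make the return value concrete, a hedged sketch of how a caller might branch on the result; `feed_block_chunk` and `feed_state_chunk` are hypothetical placeholders for whatever hands the chunk to the restoration service (the real dispatch lives elsewhere in the sync handler):

```rust
// Hedged sketch; `Snapshot` and `ChunkType` are the types above, while
// `feed_block_chunk` / `feed_state_chunk` are hypothetical placeholders.
fn handle_chunk(snapshot: &mut Snapshot, chunk_bytes: &[u8]) {
    match snapshot.validate_chunk(chunk_bytes) {
        Ok(ChunkType::Block(hash)) => feed_block_chunk(hash, chunk_bytes),
        Ok(ChunkType::State(hash)) => feed_state_chunk(hash, chunk_bytes),
        // Duplicate or unknown chunks are simply ignored.
        Err(()) => {}
    }
}
```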
/// Find a chunk to download
/// Pick a chunk to download.
/// Note: the order in which chunks are processed is somewhat important. The account state
/// sometimes spills over into more than one chunk, and the partially restored state is held in
/// memory while waiting for the next chunk(s) to show up. This means that
/// when chunks are processed out-of-order, memory usage goes up, sometimes significantly (see
/// e.g. https://github.com/paritytech/parity-ethereum/issues/8825).
pub fn needed_chunk(&mut self) -> Option<H256> {
// Find next needed chunk: first block, then state chunks
let chunk = {
let chunk_filter = |h| !self.downloading_chunks.contains(h) && !self.completed_chunks.contains(h);
let needed_block_chunk = self.pending_block_chunks.iter()
.filter(|&h| chunk_filter(h))
let filter = |h| !self.downloading_chunks.contains(h) && !self.completed_chunks.contains(h);
self.pending_block_chunks.iter()
.find(|&h| filter(h))
.or(self.pending_state_chunks.iter()
.find(|&h| filter(h))
)
.map(|h| *h)
.next();
// If no block chunks to download, get the state chunks
if needed_block_chunk.is_none() {
self.pending_state_chunks.iter()
.filter(|&h| chunk_filter(h))
.map(|h| *h)
.next()
} else {
needed_block_chunk
}
};
if let Some(hash) = chunk {
self.downloading_chunks.insert(hash.clone());
}
chunk
}
/// Remove a chunk from the set of chunks we're interested in downloading.
pub fn clear_chunk_download(&mut self, hash: &H256) {
self.downloading_chunks.remove(hash);
}
// note snapshot hash as bad.
/// Mark a snapshot hash as bad.
pub fn note_bad(&mut self, hash: H256) {
self.bad_hashes.insert(hash);
}
// whether snapshot hash is known to be bad.
/// Whether a snapshot hash is known to be bad.
pub fn is_known_bad(&self, hash: &H256) -> bool {
self.bad_hashes.contains(hash)
}
/// Hash of the snapshot we're currently downloading/importing.
pub fn snapshot_hash(&self) -> Option<H256> {
self.snapshot_hash
}
/// Total number of chunks in the snapshot we're currently working on (state + block chunks).
pub fn total_chunks(&self) -> usize {
self.pending_block_chunks.len() + self.pending_state_chunks.len()
self.total_chunks.unwrap_or_default()
}
/// Number of chunks we've processed so far (state and block chunks).
pub fn done_chunks(&self) -> usize {
self.completed_chunks.len()
}
/// Are we done downloading all chunks?
pub fn is_complete(&self) -> bool {
self.total_chunks() == self.completed_chunks.len()
}
@ -214,25 +248,30 @@ mod test {
let mut snapshot = Snapshot::new();
let (manifest, mhash, state_chunks, block_chunks) = test_manifest();
snapshot.reset_to(&manifest, &mhash);
assert_eq!(snapshot.done_chunks(), 0);
assert!(snapshot.validate_chunk(&H256::random().as_bytes().to_vec()).is_err());
assert_eq!(snapshot.done_chunks(), 0, "no chunks done at outset");
assert!(snapshot.validate_chunk(&H256::random().as_bytes().to_vec()).is_err(), "random chunk is invalid");
// request all 20 + 20 chunks
let requested: Vec<H256> = (0..40).map(|_| snapshot.needed_chunk().unwrap()).collect();
assert!(snapshot.needed_chunk().is_none());
assert!(snapshot.needed_chunk().is_none(), "no chunks left after all are drained");
let requested_all_block_chunks = manifest.block_hashes.iter()
.all(|h| requested.iter().any(|rh| rh == h));
assert!(requested_all_block_chunks);
assert!(requested_all_block_chunks, "all block chunks in the manifest accounted for");
let requested_all_state_chunks = manifest.state_hashes.iter()
.all(|h| requested.iter().any(|rh| rh == h));
assert!(requested_all_state_chunks);
assert!(requested_all_state_chunks, "all state chunks in the manifest accounted for");
assert_eq!(snapshot.downloading_chunks.len(), 40);
assert_eq!(snapshot.downloading_chunks.len(), 40, "all requested chunks are downloading");
assert_eq!(snapshot.validate_chunk(&state_chunks[4]), Ok(ChunkType::State(manifest.state_hashes[4].clone())));
assert_eq!(snapshot.completed_chunks.len(), 1);
assert_eq!(snapshot.downloading_chunks.len(), 39);
assert_eq!(
snapshot.validate_chunk(&state_chunks[4]),
Ok(ChunkType::State(manifest.state_hashes[4].clone())),
"4th state chunk hash validates as such"
);
assert_eq!(snapshot.completed_chunks.len(), 1, "after validating a chunk, it's in the completed set");
assert_eq!(snapshot.downloading_chunks.len(), 39, "after validating a chunk, there's one less in the downloading set");
assert_eq!(snapshot.validate_chunk(&block_chunks[10]), Ok(ChunkType::Block(manifest.block_hashes[10].clone())));
assert_eq!(snapshot.completed_chunks.len(), 2);
@ -250,7 +289,7 @@ mod test {
}
}
assert!(snapshot.is_complete());
assert!(snapshot.is_complete(), "when all chunks have been validated, we're done");
assert_eq!(snapshot.done_chunks(), 40);
assert_eq!(snapshot.done_chunks(), snapshot.total_chunks());
assert_eq!(snapshot.snapshot_hash(), Some(keccak(manifest.into_rlp())));


@ -50,6 +50,8 @@ pub trait SyncIo {
fn peer_version(&self, peer_id: PeerId) -> ClientVersion {
ClientVersion::from(peer_id.to_string())
}
/// Returns the peer enode string
fn peer_enode(&self, peer_id: PeerId) -> Option<String>;
/// Returns information on p2p session
fn peer_session_info(&self, peer_id: PeerId) -> Option<SessionInfo>;
/// Maximum mutually supported ETH protocol version
@ -115,10 +117,6 @@ impl<'s> SyncIo for NetSyncIo<'s> {
self.chain
}
fn chain_overlay(&self) -> &RwLock<HashMap<BlockNumber, Bytes>> {
self.chain_overlay
}
fn snapshot_service(&self) -> &dyn SnapshotService {
self.snapshot_service
}
@ -127,12 +125,20 @@ impl<'s> SyncIo for NetSyncIo<'s> {
self.private_state.clone()
}
fn peer_session_info(&self, peer_id: PeerId) -> Option<SessionInfo> {
self.network.session_info(peer_id)
fn peer_version(&self, peer_id: PeerId) -> ClientVersion {
self.network.peer_client_version(peer_id)
}
fn is_expired(&self) -> bool {
self.network.is_expired()
fn peer_enode(&self, peer_id: PeerId) -> Option<String> {
self.network.session_info(peer_id).and_then(|info| {
info.id.map(|node_id| {
format!("enode:://{}@{}", node_id, info.remote_address)
})
})
}
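For reference, the URL assembled above has the shape `enode://<node-id-hex>@<ip:port>`; a small sketch with placeholder values:

```rust
fn main() {
    // Hypothetical values for illustration only.
    let node_id = "deadbeef"; // real node ids are 64-byte hex strings
    let remote_address = "203.0.113.7:30303";
    let enode = format!("enode://{}@{}", node_id, remote_address);
    assert_eq!(enode, "enode://deadbeef@203.0.113.7:30303");
}
```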
fn peer_session_info(&self, peer_id: PeerId) -> Option<SessionInfo> {
self.network.session_info(peer_id)
}
fn eth_protocol_version(&self, peer_id: PeerId) -> u8 {
@ -143,8 +149,12 @@ impl<'s> SyncIo for NetSyncIo<'s> {
self.network.protocol_version(*protocol, peer_id).unwrap_or(0)
}
fn peer_version(&self, peer_id: PeerId) -> ClientVersion {
self.network.peer_client_version(peer_id)
fn is_expired(&self) -> bool {
self.network.is_expired()
}
fn chain_overlay(&self) -> &RwLock<HashMap<BlockNumber, Bytes>> {
self.chain_overlay
}
fn payload_soft_limit(&self) -> usize {


@ -114,25 +114,17 @@ impl<'p, C> SyncIo for TestIo<'p, C> where C: FlushingBlockChainClient, C: 'p {
self.to_disconnect.insert(peer_id);
}
fn is_expired(&self) -> bool {
false
}
fn respond(&mut self, packet_id: PacketId, data: Vec<u8>) -> Result<(), network::Error> {
self.packets.push(TestPacket {
data: data,
packet_id: packet_id,
recipient: self.sender.unwrap()
});
self.packets.push(
TestPacket { data, packet_id, recipient: self.sender.unwrap() }
);
Ok(())
}
fn send(&mut self,peer_id: PeerId, packet_id: SyncPacket, data: Vec<u8>) -> Result<(), network::Error> {
self.packets.push(TestPacket {
data,
packet_id: packet_id.id(),
recipient: peer_id,
});
self.packets.push(
TestPacket { data, packet_id: packet_id.id(), recipient: peer_id }
);
Ok(())
}
@ -140,6 +132,14 @@ impl<'p, C> SyncIo for TestIo<'p, C> where C: FlushingBlockChainClient, C: 'p {
&*self.chain
}
fn snapshot_service(&self) -> &dyn SnapshotService {
self.snapshot_service
}
fn private_state(&self) -> Option<Arc<PrivateStateDB>> {
self.private_state_db.clone()
}
fn peer_version(&self, peer_id: PeerId) -> ClientVersion {
self.peers_info.get(&peer_id)
.cloned()
@ -147,12 +147,8 @@ impl<'p, C> SyncIo for TestIo<'p, C> where C: FlushingBlockChainClient, C: 'p {
.into()
}
fn snapshot_service(&self) -> &dyn SnapshotService {
self.snapshot_service
}
fn private_state(&self) -> Option<Arc<PrivateStateDB>> {
self.private_state_db.clone()
fn peer_enode(&self, _peer_id: usize) -> Option<String> {
unimplemented!()
}
fn peer_session_info(&self, _peer_id: PeerId) -> Option<SessionInfo> {
@ -167,6 +163,10 @@ impl<'p, C> SyncIo for TestIo<'p, C> where C: FlushingBlockChainClient, C: 'p {
if protocol == &WARP_SYNC_PROTOCOL_ID { PAR_PROTOCOL_VERSION_4.0 } else { self.eth_protocol_version(peer_id) }
}
fn is_expired(&self) -> bool {
false
}
fn chain_overlay(&self) -> &RwLock<HashMap<BlockNumber, Bytes>> {
&self.overlay
}


@ -170,9 +170,9 @@ pub type ChunkSink<'a> = dyn FnMut(&[u8]) -> std::io::Result<()> + 'a;
/// Statuses for snapshot restoration.
#[derive(PartialEq, Eq, Clone, Copy, Debug)]
pub enum RestorationStatus {
/// No restoration.
/// No restoration activity currently.
Inactive,
/// Restoration is initializing
/// Restoration is initializing.
Initializing {
/// Total number of state chunks.
state_chunks: u32,
@ -192,7 +192,7 @@ pub enum RestorationStatus {
/// Number of block chunks completed.
block_chunks_done: u32,
},
/// Finalizing restoration
/// Finalizing restoration.
Finalizing,
/// Failed restoration.
Failed,


@ -37,7 +37,6 @@ use mio::{
use parity_path::restrict_permissions_owner;
use parking_lot::{Mutex, RwLock};
use rlp::{Encodable, RlpStream};
use rustc_hex::ToHex;
use ethcore_io::{IoContext, IoHandler, IoManager, StreamToken, TimerToken};
use parity_crypto::publickey::{Generator, KeyPair, Random, Secret};


@ -393,42 +393,42 @@ impl NonReservedPeerMode {
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct IpFilter {
pub predefined: AllowIP,
pub custom_allow: Vec<IpNetwork>,
pub custom_block: Vec<IpNetwork>,
pub predefined: AllowIP,
pub custom_allow: Vec<IpNetwork>,
pub custom_block: Vec<IpNetwork>,
}
impl Default for IpFilter {
fn default() -> Self {
IpFilter {
predefined: AllowIP::All,
custom_allow: vec![],
custom_block: vec![],
}
}
fn default() -> Self {
IpFilter {
predefined: AllowIP::All,
custom_allow: vec![],
custom_block: vec![],
}
}
}
impl IpFilter {
/// Attempt to parse the peer mode from a string.
pub fn parse(s: &str) -> Result<IpFilter, IpNetworkError> {
let mut filter = IpFilter::default();
for f in s.split_whitespace() {
match f {
"all" => filter.predefined = AllowIP::All,
"private" => filter.predefined = AllowIP::Private,
"public" => filter.predefined = AllowIP::Public,
"none" => filter.predefined = AllowIP::None,
custom => {
if custom.starts_with("-") {
filter.custom_block.push(IpNetwork::from_str(&custom.to_owned().split_off(1))?)
} else {
filter.custom_allow.push(IpNetwork::from_str(custom)?)
}
}
}
}
Ok(filter)
}
/// Attempt to parse the peer mode from a string.
pub fn parse(s: &str) -> Result<IpFilter, IpNetworkError> {
let mut filter = IpFilter::default();
for f in s.split_whitespace() {
match f {
"all" => filter.predefined = AllowIP::All,
"private" => filter.predefined = AllowIP::Private,
"public" => filter.predefined = AllowIP::Public,
"none" => filter.predefined = AllowIP::None,
custom => {
if custom.starts_with("-") {
filter.custom_block.push(IpNetwork::from_str(&custom.to_owned().split_off(1))?)
} else {
filter.custom_allow.push(IpNetwork::from_str(custom)?)
}
}
}
}
Ok(filter)
}
}
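A hypothetical usage sketch of the `parse` rules above, assuming the surrounding `IpFilter`/`AllowIP` definitions and the `ipnetwork` crate they rely on: bare tokens set the predefined mode or add allow rules, and entries prefixed with `-` become block rules.

```rust
// Hypothetical: exercises IpFilter::parse as defined above.
fn example() -> Result<(), IpNetworkError> {
    let filter = IpFilter::parse("private -10.0.0.0/8 192.168.0.0/16")?;
    // "private" selects the predefined mode.
    assert!(matches!(filter.predefined, AllowIP::Private));
    // "192.168.0.0/16" (no '-' prefix) becomes an explicit allow rule.
    assert_eq!(filter.custom_allow.len(), 1);
    // "-10.0.0.0/8" (leading '-') becomes a block rule.
    assert_eq!(filter.custom_block.len(), 1);
    Ok(())
}
```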
/// IP filter
@ -440,6 +440,6 @@ pub enum AllowIP {
Private,
/// Connect to public network only
Public,
/// Block all addresses
None,
/// Block all addresses
None,
}