* Use parity-crypto updated to use upstream rust-secp256k1
* Fetch dependency from git
* Missed a session ID
* Add free-standing inversion function that uses `libsecp256k1`
* fixed tests
* Update deps
* Use parity-crypto 0.5.0
Use libsecp256k1 0.3.5
* Review grumble
Co-authored-by: Svyatoslav Nikolsky <svyatonik@gmail.com>
* Only use kvdb "column families"
This PR contains the changes necessary to use the `kvdb-*` crates from https://github.com/paritytech/parity-common/pull/278 (so a synchronized merge is required), which drops support for the old-style rocksdb "default" column in favour of a smaller, less complex API.
As it stands, this PR works correctly except for secret-store, which we still need to migrate to a new column family.
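For context, the post-#278 API looks roughly like this; a minimal sketch against kvdb-rocksdb (the path and column count are made up, and exact signatures have shifted between releases):
```rust
use kvdb_rocksdb::{Database, DatabaseConfig};

fn main() -> std::io::Result<()> {
    // After parity-common#278 the column count is a plain u32; there is no
    // Option<u32> "default column" anymore, every column is a real rocksdb
    // column family addressed by index.
    let config = DatabaseConfig::with_columns(2);
    let db = Database::open(&config, "/tmp/example-db")?;

    // Writes go through a transaction and always name a column.
    let mut tx = db.transaction();
    tx.put(0, b"key", b"value");
    db.write(tx)?;

    assert!(db.get(0, b"key")?.is_some());
    Ok(())
}
```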
* Fix secretstore build
* Fix secretstore build: include ethkey when building with the "accounts" feature
* typos
* Restore state test commit
* Override all of parity-common from git
* Be precise about version requirement to migrate secretstore code
* Update ethcore/db/src/db.rs
Co-Authored-By: Niklas Adolfsson <niklasadolfsson1@gmail.com>
* Address review grumbles
* Review grumbles
* Cleanup
Co-authored-by: Niklas Adolfsson <niklasadolfsson1@gmail.com>
* Add randomness contract support to Authority Round.
Changes have been cherry-picked from poanetwork's aura-pos branch.
Most of the work has been done by @mbr.
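For readers unfamiliar with the scheme: the contract drives a commit-reveal protocol (see the revealNumber/commitment commits below). A minimal sketch of the idea; type and function names here are illustrative, not the engine's actual API:
```rust
use ethereum_types::H256;
use keccak_hash::keccak;

/// The two halves of a randomness round.
enum RandomnessPhase {
    Commit,
    Reveal,
}

/// What a validator submits in each phase: first a commitment to a fresh
/// random number (the real protocol also stores an encrypted copy), then
/// the number itself. The contract verifies keccak(number) == commitment.
fn randomness_message(phase: &RandomnessPhase, random: H256) -> (H256, Option<H256>) {
    let commitment = keccak(random);
    match phase {
        RandomnessPhase::Commit => (commitment, None),
        RandomnessPhase::Reveal => (commitment, Some(random)),
    }
}
```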
* Address review comments for randomness contract.
Co-Authored-By: David <dvdplm@gmail.com>
* Rename revealSecret to revealNumber
* Update Randomness contract bytecode
* Use H256, rename secret to random number.
* Use get_commit_and_cipher
* Clean up Miner::prepare_block.
* Remove is_reveal_phase call.
* Add more comments, require randomness contract map.
* Simplify run_randomness_phase
* Address review comments.
* Remove Client::transact_contract.
* Use upstream rocksdb
…by way of https://github.com/paritytech/parity-common/pull/257 by @ordian.
* Hint at how `parity db reset` works in the error message
* migration-rocksdb: fix build
* Cargo.toml: use git dependency instead of path
* update to latest kvdb-rocksdb
* fix tests
* saner default for light client
* rename open_db to open_db_light
* update to latest kvdb-rocksdb
* moar update to latest kvdb-rocksdb
* even moar update to latest kvdb-rocksdb
* use kvdb-rocksdb from crates.io
* Update parity/db/rocksdb/helpers.rs
* add docs to memory_budget division
* Add a benchmark for snapshot::account::to_fat_rlps()
`to_fat_rlps()` is a hot call during snapshots. I don't think it has a performance problem per se, but it's better to have a benchmark for it.
The data used is a piece of Ropsten data sized to roughly the 95th percentile of account size on that network.
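A sketch of the benchmark's shape (using criterion here for illustration; the stub stands in for the real `to_fat_rlps()` call, which takes the account, its backing `AccountDB` and a `used_code` set):
```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Stand-in for snapshot::account::to_fat_rlps(); the real call encodes an
// account plus its code and storage into fat-RLP chunks.
fn to_fat_rlps_stub(account_data: &[u8]) -> usize {
    account_data.len()
}

fn bench_to_fat_rlps(c: &mut Criterion) {
    // In the real benchmark this is the ~95th-percentile Ropsten account
    // mentioned above, loaded from a fixture.
    let account_data = vec![0u8; 4096];
    c.bench_function("to_fat_rlps", |b| {
        b.iter(|| to_fat_rlps_stub(black_box(&account_data)))
    });
}

criterion_group!(benches, bench_to_fat_rlps);
criterion_main!(benches);
```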
* Benchmark with more chunks, including mainnet data
* whitespace
* Move `used_code` inside the benchmark iteration
* Revert "Move `used_code` inside the benchmark iteration"
This reverts commit cff33ab30acbd1c009e745f646f1cc655ee01d8c.
* Comments and todos
Use `snapshot_sync` as logging target
* fix compilation
* More todos, more logs
* Fix picking snapshot peer: prefer the one with the highest block number
More docs, comments, todos
* Adjust WAIT_PEERS_TIMEOUT to be a multiple of MAINTAIN_SYNC_TIMER to try to fix snapshot startup problems
Docs, todos, comments
* Tabs
* Formatting
* Don't build new rlp::EMPTY_LIST_RLP instances
* Dial down debug logging
* Don't warn about missing hashes in the manifest: it's normal
Log client version on peer connect
* Cleanup
* Do not skip snapshots further away than 30k blocks from the highest block seen
Currently we look for peers that seed snapshots close to the highest block seen on the network (where "close" means within 30k blocks). When a node starts up, we wait for some time (5sec, increased here to 10sec) to let peers connect; if by the end of that delay we have found a suitable peer to sync a snapshot from, we start the download. If none is found and --warp-barrier is used we stall; otherwise we start a slow-sync.
When looking for a suitable snapshot, we use the highest block seen on the network to check whether a peer has a snapshot within 30k blocks of that number. This means that in a situation where all available snapshots are older than that, we will often fail to start a snapshot at all. What's worse, the longer we delay starting a snapshot sync (to let more peers connect, in the hope of finding a good snapshot), the more likely we are to have seen a high block and thus the less likely we become to accept any snapshot.
This commit removes the comparison with the highest block number entirely and simply picks the best snapshot found within the 10sec delay.
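The new selection rule boils down to something like this (a sketch; `SnapshotPeer` and its fields are made-up names):
```rust
/// Hypothetical view of a peer offering a snapshot.
struct SnapshotPeer {
    peer_id: u64,
    snapshot_block: u64,
}

/// No "within 30k of the highest seen block" filter anymore: after the
/// wait period, take the peer with the highest snapshot block, if any.
fn pick_snapshot_peer(peers: &[SnapshotPeer]) -> Option<&SnapshotPeer> {
    peers.iter().max_by_key(|p| p.snapshot_block)
}

fn main() {
    let peers = vec![
        SnapshotPeer { peer_id: 1, snapshot_block: 8_100_000 },
        SnapshotPeer { peer_id: 2, snapshot_block: 8_130_000 },
    ];
    assert_eq!(pick_snapshot_peer(&peers).map(|p| p.peer_id), Some(2));
}
```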
* lockfile
* Add a `ChunkType::Dupe` variant so that we do not disconnect a peer if they happen to send us a duplicate chunk (just ignore the chunk and keep going)
Resolve some documentation todos, add more
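The gist of the new variant, sketched with simplified payloads (the real enum lives in the sync code):
```rust
use ethereum_types::H256;

enum ChunkType {
    State(H256),
    Block(H256),
    /// A chunk we have already processed: harmless, not a protocol error.
    Dupe,
}

fn handle_chunk(chunk: ChunkType) {
    match chunk {
        ChunkType::State(hash) | ChunkType::Block(hash) => {
            // feed the chunk into the restoration machinery
            println!("processing chunk {:?}", hash);
        }
        // Previously an unexpected chunk could get the peer disconnected;
        // a duplicate is now just skipped and syncing continues.
        ChunkType::Dupe => println!("ignoring duplicate chunk"),
    }
}
```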
* tweak log message
* Don't warp sync twice
Check if our own block is beyond the given warp barrier (can happen after we've completed a warp sync but are not quite yet synced up to the tip) and if so, don't sync.
More docs, resolve todos.
Dial down some `sync` debug level logging to trace
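The check itself is essentially a one-liner (illustrative names):
```rust
/// If our own best block is already at or past the warp barrier, a
/// (second) warp sync is pointless; fall through to normal sync instead.
fn should_warp_sync(our_best_block: u64, warp_barrier: u64) -> bool {
    our_best_block < warp_barrier
}
```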
* Avoid iterating over all snapshot block/state hashes to find the next work item
Use a HashSet instead of a Vec and remove items from the set as chunks are processed. Calculate and store the total number of chunks in the `Snapshot` struct instead of counting pending chunks each time.
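Schematically (field names are illustrative, not the exact `Snapshot` struct):
```rust
use std::collections::HashSet;

struct Snapshot {
    /// Chunk hashes still to be processed; shrinks as chunks arrive.
    pending_chunks: HashSet<[u8; 32]>,
    /// Total number of chunks, computed once instead of on every query.
    total_chunks: usize,
}

impl Snapshot {
    fn new(chunks: impl IntoIterator<Item = [u8; 32]>) -> Self {
        let pending_chunks: HashSet<_> = chunks.into_iter().collect();
        let total_chunks = pending_chunks.len();
        Snapshot { pending_chunks, total_chunks }
    }

    /// O(1) removal instead of scanning a Vec for the finished hash.
    fn chunk_done(&mut self, hash: &[u8; 32]) {
        self.pending_chunks.remove(hash);
    }

    fn chunks_done(&self) -> usize {
        self.total_chunks - self.pending_chunks.len()
    }
}
```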
* Address review grumbles
* Log correct number of bytes written to disk
* Revert `ChunkType::Dupe` change
* whitespace grumble
* Cleanup debugging code
* Fix docs
* Fix import and a typo
* Fix test impl
* Use `indexmap::IndexSet` to ensure chunk hashes are accessed in order
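Why `IndexSet`: it keeps insertion order while retaining hash-based lookups, so chunks can be processed in manifest order. A small demonstration:
```rust
use indexmap::IndexSet;

fn main() {
    let mut pending: IndexSet<u64> = IndexSet::new();
    pending.insert(3);
    pending.insert(1);
    pending.insert(2);

    // shift_remove keeps the remaining order intact (swap_remove would not).
    pending.shift_remove(&1);

    // Iteration follows insertion order, unlike std's HashSet.
    let order: Vec<u64> = pending.iter().copied().collect();
    assert_eq!(order, vec![3, 2]);
}
```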
* Revert increased SNAPSHOT_MANIFEST_TIMEOUT: 5sec should be enough
* WIP. Typos and logging.
* Format todos
* Pause pruning while a snapshot is under way
Logs, docs and todos
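The pruning pause amounts to a flag the pruning path consults; a minimal sketch (names are illustrative):
```rust
use std::sync::atomic::{AtomicBool, Ordering};

static SNAPSHOT_IN_PROGRESS: AtomicBool = AtomicBool::new(false);

fn take_snapshot() {
    SNAPSHOT_IN_PROGRESS.store(true, Ordering::SeqCst);
    // ... write out state and block chunks ...
    SNAPSHOT_IN_PROGRESS.store(false, Ordering::SeqCst);
}

fn prune_ancient_state() {
    if SNAPSHOT_IN_PROGRESS.load(Ordering::SeqCst) {
        // Skip this round: the snapshot still needs the old state.
        return;
    }
    // ... prune ...
}
```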
* Allocate memory for the full chunk
* Name snapshotting threads
* Ensure `taking_snapshot` is set to false whenever and however `take_snapshot` returns
Rename `take_at` to `request_snapshot_at`
Cleanup
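One way to guarantee that reset on every return path, panics included, is a drop guard; a sketch (the error type and names are illustrative):
```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Resets the flag when it goes out of scope, however we leave the function.
struct ResetOnDrop<'a>(&'a AtomicBool);

impl Drop for ResetOnDrop<'_> {
    fn drop(&mut self) {
        self.0.store(false, Ordering::SeqCst);
    }
}

fn request_snapshot_at(taking_snapshot: &AtomicBool, block: u64) -> Result<(), String> {
    taking_snapshot.store(true, Ordering::SeqCst);
    let _guard = ResetOnDrop(taking_snapshot);
    if block == 0 {
        return Err("nothing to snapshot".into()); // flag still resets here
    }
    // ... produce the snapshot ...
    Ok(())
}
```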
* Let "in_progress" deletion fail
Fix tests
* Just use an atomic
* Review grumbles
* Finish the sentence
* Resolve a few todos and clarify comments.
* Calculate progress rate since last update
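That is, the rate is computed over the interval since the previous update rather than since the snapshot began; roughly (illustrative field names):
```rust
use std::time::Instant;

struct ProgressRate {
    last_instant: Instant,
    last_bytes: u64,
}

impl ProgressRate {
    /// Bytes/sec over the most recent interval only, so a long-running
    /// snapshot does not smear its early speed over the whole run.
    fn update(&mut self, total_bytes: u64) -> f64 {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_instant).as_secs_f64();
        let delta = total_bytes.saturating_sub(self.last_bytes);
        self.last_instant = now;
        self.last_bytes = total_bytes;
        if elapsed > 0.0 { delta as f64 / elapsed } else { 0.0 }
    }
}
```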
* Lockfile
* Fix tests
* typo
* Reinstate default snapshotting frequency
Cut down on the logging noise
* Use a lock instead of atomics for snapshot Progress
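The motivation, sketched: the progress fields have to change together, which several independent atomics cannot guarantee, while one lock around a plain struct can (std `Mutex` here for the sketch; field names are illustrative):
```rust
use std::sync::Mutex;
use std::time::Instant;

struct ProgressInner {
    chunks: u64,
    bytes: u64,
    last_update: Instant,
}

struct Progress(Mutex<ProgressInner>);

impl Progress {
    fn update(&self, chunks: u64, bytes: u64) {
        // All three fields change under one lock, so readers never see a
        // byte count from one update paired with a timestamp from another.
        let mut inner = self.0.lock().unwrap();
        inner.chunks += chunks;
        inner.bytes += bytes;
        inner.last_update = Instant::now();
    }
}
```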
* Update ethcore/types/src/snapshot.rs
Co-Authored-By: Andronik Ordian <write@reusable.software>
* Avoid truncating cast
Cleanup
* address grumble
* Log memory use with `journal_size()` and explain why.
* Update a few dependencies
Updates two dependencies: `kvdb-rocksdb` and `h2`. Brings in `parking_lot 0.9` which is unintended but possibly fine.
* Bump parking_lot to 0.9
Bump kvdb-memorydb to 0.2 (from git atm)
* New kvdb-memorydb is not breaking
* Remove [patch]
* Move snapshot to own crate
Sort out imports
* WIP cargo toml
* Make snapshotting generic over the client
Sort out tests
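"Generic over the client" means the snapshot code depends on a narrow trait rather than the concrete ethcore `Client`, so tests can substitute a mock; a sketch with made-up trait and method names:
```rust
trait SnapshotClient {
    fn best_block_number(&self) -> u64;
    fn take_snapshot_at(&self, block: u64) -> Result<(), String>;
}

struct Service<C: SnapshotClient> {
    client: C,
}

impl<C: SnapshotClient> Service<C> {
    fn snapshot_now(&self) -> Result<(), String> {
        let best = self.client.best_block_number();
        self.client.take_snapshot_at(best)
    }
}
```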
* Sort out types from blockchain and client
* Sort out sync
* Sort out imports and generics
* Sort out main binary
* Fix sync test-helpers
* Sort out import for secret-store
* Sort out more imports
* Fix easy todos
* cleanup
* Move SnapshotClient and SnapshotWriter to their proper places
Sort out the circular dependency between snapshot and ethcore by moving all snapshot tests to own crate, snapshot-tests
* cleanup
* Cleanup
* fix merge issues
* Update ethcore/snapshot/snapshot-tests/Cargo.toml
Co-Authored-By: Andronik Ordian <write@reusable.software>
* Sort out botched merge
* Ensure snapshot-tests run
* Docs
* Fix grumbles
* remove unneeded workspace member
* cleanup
* Sort out test-helpers dependency on account-db
* Update ethcore/client-traits/src/lib.rs
Co-Authored-By: Niklas Adolfsson <niklasadolfsson1@gmail.com>
* Update ethcore/snapshot/Cargo.toml