* only add transactions to signing-queue if it is enabled
* Update rpc/src/v1/helpers/errors.rs
Co-Authored-By: David <dvdplm@gmail.com>
* use errors::codes::ACCOUNT_LOCKED
* bail early if account isn't unlocked
* use errors::signing
* Update rpc/src/v1/helpers/errors.rs
* Update rpc/src/v1/helpers/errors.rs
* test
* adds cli flag to enable signing queue.
* use helper method `signing_queue_disabled` instead of accounts::SignError
* fix typo, use raw i64
* fixed tests
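A minimal sketch of the signing-queue behaviour described in the entries above; `Dispatcher`, `DispatchError` and the boolean flags are illustrative names only, not the actual rpc types:

```rust
// Sketch only: if the signing queue is disabled, bail with an "account
// locked" error instead of queueing a request nobody can confirm.
enum DispatchError { AccountLocked }

struct Dispatcher { signing_queue_enabled: bool }

impl Dispatcher {
    fn dispatch_transaction(&self, account_unlocked: bool) -> Result<(), DispatchError> {
        if account_unlocked {
            return Ok(()); // sign and submit immediately
        }
        if !self.signing_queue_enabled {
            // bail early: no queue to put the request on
            return Err(DispatchError::AccountLocked);
        }
        Ok(()) // otherwise, add the request to the signing queue
    }
}
```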
* Use upstream rocksdb
…by way of https://github.com/paritytech/parity-common/pull/257 by @ordian.
* Hint at how `parity db reset` works in the error message
* migration-rocksdb: fix build
* Cargo.toml: use git dependency instead of path
* update to latest kvdb-rocksdb
* fix tests
* saner default for light client
* rename open_db to open_db_light
* update to latest kvdb-rocksdb
* moar update to latest kvdb-rocksdb
* even moar update to latest kvdb-rocksdb
* use kvdb-rocksdb from crates.io
* Update parity/db/rocksdb/helpers.rs
* add docs to memory_budget division
* [ethcore]: apply filter when `from_queue`
In `ready_transactions_filtered` the filter was never applied when the option `PendingSet::AlwaysQueue` was configured, which this fixes.
It also adds two tests for it.
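A rough sketch of the fix under simplified types; only `ready_transactions_filtered` and `PendingSet::AlwaysQueue` come from the entry above, the rest is assumed for illustration:

```rust
// Sketch only: the point is that the filter is now applied in the
// `AlwaysQueue` branch as well, instead of returning the raw queue.
#[derive(Clone)]
struct PendingTx { gas_price: u64 }

enum PendingSet { AlwaysQueue, AlwaysSealing }

fn ready_transactions_filtered(
    pending_set: &PendingSet,
    queue: &[PendingTx],
    sealing: &[PendingTx],
    filter: &dyn Fn(&PendingTx) -> bool,
) -> Vec<PendingTx> {
    match pending_set {
        // Previously the filter was skipped here and the whole queue was returned.
        PendingSet::AlwaysQueue => queue.iter().filter(|tx| filter(tx)).cloned().collect(),
        PendingSet::AlwaysSealing => sealing.iter().filter(|tx| filter(tx)).cloned().collect(),
    }
}
```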
* [ethcore test-helpers]: stray printlns
* docs(ethcore filter options): more generic desc
* tests(ethcore miner): simplify filter tests
* [ethcore filter_options]: fix nits
* doc: nit
Co-Authored-By: David <dvdplm@gmail.com>
* doc: nit
Co-Authored-By: David <dvdplm@gmail.com>
* doc: nit
Co-Authored-By: David <dvdplm@gmail.com>
* doc: nit
Co-Authored-By: David <dvdplm@gmail.com>
* doc: nit
Co-Authored-By: David <dvdplm@gmail.com>
* doc(miner filter): simplify documentation
* [rpc]: make tests compile
* fixed verify_uncles error type
* cleanup and document fn verify_uncles bounds checking
* find_uncle_headers and find_uncle_hashes take u64 instead of u32 as an input param
* Update ethcore/verification/src/verification.rs
Co-Authored-By: David <dvdplm@gmail.com>
* Ensure jsonrpc threading settings are sane
Starting with `jsonrpc` v14, the "server threads" setting is more important than before and the current default of 1 means the https server is effectively single-threaded. This PR proposes a new default of 4 (and ensures that nonsensical settings such as `0` are bumped to at least `1`).
Also included: some docs, tests and cosmetics.
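A minimal sketch of the clamping described above; the constant name and option plumbing are assumptions:

```rust
// Illustrative only: a configured value of 0 cannot leave the server
// single-threaded, and the default is 4 threads when nothing is set.
const DEFAULT_SERVER_THREADS: usize = 4;

fn effective_server_threads(configured: Option<usize>) -> usize {
    std::cmp::max(1, configured.unwrap_or(DEFAULT_SERVER_THREADS))
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn zero_is_bumped_to_one() {
        assert_eq!(effective_server_threads(Some(0)), 1);
        assert_eq!(effective_server_threads(None), 4);
    }
}
```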
* Update parity/rpc.rs
Co-Authored-By: Tomasz Drwięga <tomusdrw@users.noreply.github.com>
* Update parity/rpc.rs
Co-Authored-By: Tomasz Drwięga <tomusdrw@users.noreply.github.com>
* Remove (i.e. deprecate) `--jsonrpc-threads` command line option
* Call numbers NUM
* Don't show a default for --jsonrpc-threads (deprecated)
* Show deprecation warning when using `--jsonrpc-threads` or `processing_threads`
* Update parity/deprecated.rs
Co-Authored-By: Niklas Adolfsson <niklasadolfsson1@gmail.com>
* Fix test
* Fix tests for real
* Add a benchmark for snapshot::account::to_fat_rlps()
`to_fat_rlps()` is a hot call during snapshots. I don't think it has a perf problem per se, but it's better to have a benchmark for it.
The data used is a piece of Ropsten data sized to the ~95th percentile of account size on that network.
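For reference, the general shape of such a benchmark; this sketch uses criterion with a stand-in function, while the real harness and the signature of `to_fat_rlps()` live in the snapshot crate:

```rust
// Sketch of a benchmark shape, assuming criterion; `encode_account` is a
// stand-in for snapshot::account::to_fat_rlps(), not the real call.
use criterion::{criterion_group, criterion_main, Criterion};

fn encode_account(data: &[u8]) -> Vec<u8> {
    data.to_vec() // placeholder for the real fat-rlp encoding
}

fn bench_to_fat_rlps(c: &mut Criterion) {
    // The commit uses Ropsten data at the ~95th percentile of account size;
    // here it is just an opaque blob of comparable size.
    let account_data = vec![0u8; 4096];
    c.bench_function("to_fat_rlps", |b| b.iter(|| encode_account(&account_data)));
}

criterion_group!(benches, bench_to_fat_rlps);
criterion_main!(benches);
```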
* Benchmark with more chunks, including mainnet data
* whitespace
* Move `used_code` inside the benchmark iteration
* Revert "Move `used_code` inside the benchmark iteration"
This reverts commit cff33ab30acbd1c009e745f646f1cc655ee01d8c.
* simplify verifier, remove NoopVerifier
* simplify verifier by removing Verifier trait and its only implementation
* remove unused imports
* fixed verification test failing to compile
* Make InstantSeal Instant again
* update_sealing if there are transactions in the pool after importing a block, some line formatting
* Apply suggestions from code review
Co-Authored-By: Tomasz Drwięga <tomusdrw@users.noreply.github.com>
* InstantSeal specific behaviour
* introduce engine.should_reseal_on_update, remove InstantSealService
* remove unused code
* add force param to update_sealing
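A rough sketch of how the pieces above fit together; trait, method and parameter names are assumptions based on the entries, not the exact ethcore API:

```rust
// Sketch only: after importing a block, reseal if the pool still has
// transactions; `should_reseal_on_update` lets InstantSeal force this.
trait Engine {
    fn should_reseal_on_update(&self) -> bool;
}

struct Miner;

impl Miner {
    fn update_sealing(&self, _force: bool) {
        // prepare and seal a new block
    }

    fn on_block_imported<E: Engine>(&self, engine: &E, pending_txs: usize) {
        if pending_txs > 0 {
            // `force` bypasses the usual "should we reseal?" heuristics.
            self.update_sealing(engine.should_reseal_on_update());
        }
    }
}
```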
* better docs
* even better docs
* revert code changes, doc corrections, sort dep
* code optimization
* fix test
* fix bench
* [builtin]: impl new builtin type
Add an enum to deserialize a builtin with either a single price or several prices
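Roughly, the idea looks like this sketch; field names are assumptions, and the actual definitions live in the json/builtin crates:

```rust
// Sketch of the deserialization idea: either one price, or several prices
// with activation points.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Price { base: u64, word: u64 }

#[derive(Debug, Deserialize)]
struct PriceWithActivationAt { activate_at: u64, price: Price }

#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum Pricing {
    Single(Price),
    Multi(Vec<PriceWithActivationAt>),
}
```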
* [builtin]: style cleanup
* [builtin]: fix tests
* [builtin]: replace boxing with wrapper enum
* cleanup
* fix: make it backward compatible with old builtin
* fix: update chain specs
* fix: revert use of `type alias` on enum
The CI doesn't use the latest Rust.
This commit reverts that change.
* fix: builtin tests
* fix: revert use of `type alias` on enum
* [basic-authority]: update test-chainspec
* fix failing tests
* [builtin]: multi-prices add `info field`
It might be hard to read chain specs with several activation points.
This commit introduces an `info` field which may be used to write some
information about the current activation, such as
`Istanbul hardfork EIP-1108` or something similar.
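For illustration, the `info` field could sit on each activation entry roughly like this; a sketch, not the exact json crate definition:

```rust
// Sketch only: a purely informational, optional field per activation entry,
// e.g. "Istanbul hardfork EIP-1108".
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct PricingAt {
    info: Option<String>,
    price: u64, // stand-in for the real pricing rule
}
```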
* fix: bad rebase
Co-Authored-By: David <dvdplm@gmail.com>
* fix(grumbles): make it backward compatible
* grumbles: resolve `NOTE`
* revert chain specs changes
* rename test
Co-Authored-By: David <dvdplm@gmail.com>
* [builtin docs]: price -> Fixed price
Co-Authored-By: Andronik Ordian <write@reusable.software>
* [json]: address naming grumbles
InnerPricing -> PricingInner
PriceWithActivationAt -> PricingAt
* docs: revert changes for `AltBn128ConstOperations`
* [json]: usize -> u64
Use explicit types to cope with platform dependent issues for `usize`
* grumble: simplify `spec_backward_compability.json`
* docs: add issue link to `TODO`
* [builtin]: replace `match` with `map`
* [builtin]: add deprecation message `eip1108` params
* nits
* [json spec tests]: fix json indentation
* [json docs]: fix typos
* [json]: `compatibility layer` + deser to BTreeMap
Previously we had to match `Pricing::Single` and `Pricing::Multi`, which this fixes.
It does so by introducing a compatibility layer and an `into()` implementation.
In addition, I switched the deserialization to `BTreeMap` instead of `Vec`.
That changes the format of the chain spec again.
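A sketch of that final shape, with assumed names; the real types are in ethcore/builtin and the json crate:

```rust
// Sketch only: deserialize into a compatibility enum, then fold everything
// into a BTreeMap keyed by activation block so the rules stay ordered.
use std::collections::BTreeMap;
use serde::Deserialize;

#[derive(Debug, Clone, Deserialize)]
struct Pricing { base: u64, word: u64 }

#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum PricingCompat {
    Single(Pricing),
    Multi(BTreeMap<u64, Pricing>),
}

impl From<PricingCompat> for BTreeMap<u64, Pricing> {
    fn from(p: PricingCompat) -> Self {
        match p {
            // Old single-price specs stay valid: treat them as active from block 0.
            PricingCompat::Single(price) => {
                let mut map = BTreeMap::new();
                map.insert(0, price);
                map
            }
            PricingCompat::Multi(map) => map,
        }
    }
}
```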
* [json]: rename `BuiltinCombat` -> `BuiltinCompat`
* Update ethcore/builtin/src/lib.rs
Co-Authored-By: David <dvdplm@gmail.com>
* [json builtin]: improve docs
Co-Authored-By: David <dvdplm@gmail.com>
* [json builtin]: improve docs
Co-Authored-By: David <dvdplm@gmail.com>
* chore(builtin): sort dependencies
* [json builtin]: deprecate `eip1108` params
* [machine]: add bench for calling builtin contract
* [machine]: reduce calls to `Builtin::is_active`
* [builtin]: fix nits
* [json]: revert breakage of chain specs
* [json builtin]: remove `eip1108` params
* [chain specs]: update to new format
* [machine]: revert changes
* [devp2p]: revert change
* [builtin]: doc nits
* Comments and todos
Use `snapshot_sync` as logging target
* fix compilation
* More todos, more logs
* Fix picking snapshot peer: prefer the one with the highest block number
More docs, comments, todos
* Adjust WAIT_PEERS_TIMEOUT to be a multiple of MAINTAIN_SYNC_TIMER to try to fix snapshot startup problems
Docs, todos, comments
* Tabs
* Formatting
* Don't build new rlp::EMPTY_LIST_RLP instances
* Dial down debug logging
* Don't warn about missing hashes in the manifest: it's normal
Log client version on peer connect
* Cleanup
* Do not skip snapshots further away than 30k block from the highest block seen
Currently we look for peers that seed snapshots close to the highest block seen on the network (where "close" means within 30k blocks). When a node starts up we wait for some time (5sec, increased here to 10sec) to let peers connect, and if we have found a suitable peer to sync a snapshot from at the end of that delay, we start the download; if none is found and --warp-barrier is used we stall, otherwise we start a slow-sync.
When looking for a suitable snapshot, we use the highest block seen on the network to check if a peer has a snapshot within 30k blocks of that highest block number. This means that in a situation where all available snapshots are older than that, we will often fail to start a snapshot at all. What's worse is that the longer we delay starting a snapshot sync (to let more peers connect, in the hope of finding a good snapshot), the more likely we are to have seen a high block and thus the less likely we become to accept a snapshot.
This commit removes the comparison with the highest block number entirely and picks the best snapshot we find within 10sec.
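In effect the selection rule becomes something like the following sketch; types and names are assumptions:

```rust
// Not the real sync code: once the wait expires, just take the best (highest)
// snapshot offered by any connected peer, with no 30k-block cutoff.
struct PeerSnapshot { peer_id: usize, snapshot_block: u64 }

fn pick_snapshot_peer(candidates: &[PeerSnapshot]) -> Option<&PeerSnapshot> {
    candidates.iter().max_by_key(|p| p.snapshot_block)
}
```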
* lockfile
* Add a `ChunkType::Dupe` variant so that we do not disconnect a peer if they happen to send us a duplicate chunk (just ignore the chunk and keep going)
Resolve some documentation todos, add more
* tweak log message
* Don't warp sync twice
Check if our own block is beyond the given warp barrier (can happen after we've completed a warp sync but are not quite yet synced up to the tip) and if so, don't sync.
More docs, resolve todos.
Dial down some `sync` debug level logging to trace
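The check amounts to something like this sketch; names are assumed and the real logic has more cases:

```rust
// Sketch only: if we are already past the requested warp barrier, don't start
// another warp sync.
fn beyond_warp_barrier(our_best_block: u64, warp_barrier: u64) -> bool {
    our_best_block >= warp_barrier
}
```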
* Avoid iterating over all snapshot block/state hashes to find the next work item
Use a HashSet instead of a Vec and remove items from the set as chunks are processed. Calculate and store the total number of chunks in the `Snapshot` struct instead of counting pending chunks each time.
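Schematically, with assumed field names (a later entry swaps the plain `HashSet` for `indexmap::IndexSet` to keep ordering):

```rust
// Sketch only: keep pending chunk hashes in a set and remove them as they are
// processed; the total is computed once instead of recounting every time.
use std::collections::HashSet;

struct Snapshot {
    pending_state_chunks: HashSet<[u8; 32]>,
    pending_block_chunks: HashSet<[u8; 32]>,
    total_chunks: usize,
}

impl Snapshot {
    fn note_chunk_done(&mut self, hash: &[u8; 32]) {
        self.pending_state_chunks.remove(hash);
        self.pending_block_chunks.remove(hash);
    }

    fn completed_chunks(&self) -> usize {
        self.total_chunks - self.pending_state_chunks.len() - self.pending_block_chunks.len()
    }
}
```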
* Address review grumbles
* Log correct number of bytes written to disk
* Revert ChunkType::Dup change
* whitespace grumble
* Cleanup debugging code
* Fix docs
* Fix import and a typo
* Fix test impl
* Use `indexmap::IndexSet` to ensure chunk hashes are accessed in order
* Revert increased SNAPSHOT_MANIFEST_TIMEOUT: 5sec should be enough
* Fix `invalid transaction price` error message
* Setup Calibrated GasPriceConfig when usd-per-eth is an endpoint
The change will try to check if the specified value is an endpoint.
If the value `auto` is specified, the default endpoint URL will be used;
otherwise, the user-provided value will be taken as-is as the endpoint.
* Use if-let and check for usd-per-eth arg:
1. auto = use etherscan
2. value = use fixed pricer
3. endpoint = use the provided endpoint as-is
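A sketch of that decision; the endpoint URL and type names are placeholders, not the actual parity CLI code:

```rust
// Illustrative only: "auto" -> default price endpoint, a number -> fixed
// price, anything else -> treat the value as a custom endpoint URL.
enum GasPricerConfig {
    Fixed(f64),
    Calibrated { endpoint: String },
}

fn gas_pricer_from_usd_per_eth(arg: &str) -> GasPricerConfig {
    // Stand-in for the real default price-feed URL.
    const DEFAULT_ENDPOINT: &str = "https://example.com/eth-price";

    if arg == "auto" {
        GasPricerConfig::Calibrated { endpoint: DEFAULT_ENDPOINT.to_string() }
    } else if let Ok(usd) = arg.parse::<f64>() {
        GasPricerConfig::Fixed(usd)
    } else {
        GasPricerConfig::Calibrated { endpoint: arg.to_string() }
    }
}
```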
* Fix typo in `to_price` error message
* Correct whitespace indentation
* Use arg_usd_per_eth directly
* WIP. Typos and logging.
* Format todos
* Pause pruning while a snapshot is under way
Logs, docs and todos
* Allocate memory for the full chunk
* Name snapshotting threads
* Ensure `taking_snapshot` is set to false whenever and however `take_snapshot` returns
Rename `take_at` to `request_snapshot_at`
Cleanup
* Let "in_progress" deletion fail
Fix tests
* Just use an atomic
* Review grumbles
* Finish the sentence
* Resolve a few todos and clarify comments.
* Calculate progress rate since last update
* Lockfile
* Fix tests
* typo
* Reinstate default snapshotting frequency
Cut down on the logging noise
* Use a lock instead of atomics for snapshot Progress
* Update ethcore/types/src/snapshot.rs
Co-Authored-By: Andronik Ordian <write@reusable.software>
* Avoid truncating cast
Cleanup
* [informant]: `MillisecondDuration` -> `as_millis()`
This commit removes the trait `MillisecondDuration` and
replaces it with `Duration::as_millis` instead
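The replacement in a nutshell, shown on a throwaway value using the std API:

```rust
use std::time::Duration;

fn main() {
    let elapsed = Duration::from_secs(2);
    // Previously a local `MillisecondDuration` helper trait; now just std.
    assert_eq!(elapsed.as_millis(), 2_000);
}
```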
* [grumble]: extract `elapsed()` to variable
Fixes #11202
The `Display` implementation for `SpecHardcodedSync` used the `Display` implementation of
`ethereum_types::H256`, which doesn't show the full hash; this fixes that.
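The gist of the fix as a sketch with assumed fields: `H256`'s `Display` abbreviates the value, so the full hash is printed via `{:?}`/`{:x}` instead:

```rust
// Sketch only: print the full hash rather than H256's abbreviated Display.
use ethereum_types::H256;
use std::fmt;

struct SpecHardcodedSync { chts: Vec<H256> }

impl fmt::Display for SpecHardcodedSync {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for cht in &self.chts {
            // `{}` would print something like `0x1234…cdef`; `{:?}` prints all 32 bytes.
            writeln!(f, "{:?}", cht)?;
        }
        Ok(())
    }
}
```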
* address grumble
* Log memory use with `journal_size()` and explain why.