* Replace ethcore-logger with env-logger.
* Fix logger initialization in WASM tests.
* uncomment logger initialization in secret store
* Don't use ethcore-logger in whisper.
* Move ethcore-logger within parity dir.
* Uncomment rest from secret-store.
* Use `let _ =` in private_contract for consistency
* `ok()` to `let _ =` fix in service
* Use `let _ = ` for state_db
* Add private tx enabled flag into status packet
* Error log added for the case with no peers available
* Add flag only for supported protocol versions
* Work with private handler refactored
* Log target changed
* Cargo.lock updated
* fix #10125
Fix service transaction version detection when `--identity` is enabled, and change the test to match how `--identity` actually works.
* fix wrong var
* get the index of v, not /
* idx, not idx.len()
* Update ethcore/sync/src/chain/propagator.rs
Co-Authored-By: joshua-mir <43032097+joshua-mir@users.noreply.github.com>
* Update ethcore/sync/src/chain/propagator.rs
Co-Authored-By: joshua-mir <43032097+joshua-mir@users.noreply.github.com>
* change version prefix to a const
* space
Co-Authored-By: joshua-mir <43032097+joshua-mir@users.noreply.github.com>
Fix: new blocks notifications sometimes missing in pubsub RPC
Implement a new struct to pass to `new_blocks()` with an extra parameter, `has_more_blocks_to_import`, which was previously used to decide whether the notification should be sent. Now it is up to each implementation to decide what to do.
Updated all implementations to behave as before, except `eth_pubsub`, which will send a notification even when the queue is not empty.
Update tests.
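For illustration, a minimal sketch of the shape described above: one struct handed to `new_blocks()` carrying a `has_more_blocks_to_import` flag, with each listener free to ignore it. The `imported` field, `ImportedHashes` alias, and `EthPubSub` listener below are simplified placeholders, not the actual parity types.

```rust
// Sketch only: the real struct carries more data (routes, sealed blocks, etc.);
// `ImportedHashes` is a hypothetical placeholder type.
type ImportedHashes = Vec<[u8; 32]>;

/// Bundle of data handed to `ChainNotify::new_blocks`.
pub struct NewBlocks {
    /// Hashes of blocks imported in this round (placeholder type).
    pub imported: ImportedHashes,
    /// True while the verification queue still holds blocks to import.
    /// Listeners decide themselves whether to skip notifications; the
    /// pubsub implementation now notifies even when this is true.
    pub has_more_blocks_to_import: bool,
}

trait ChainNotify {
    fn new_blocks(&self, new_blocks: NewBlocks);
}

struct EthPubSub;

impl ChainNotify for EthPubSub {
    fn new_blocks(&self, new_blocks: NewBlocks) {
        // eth_pubsub publishes headers regardless of the queue state.
        for _hash in &new_blocks.imported {
            // publish header to subscribers...
        }
        let _ = new_blocks.has_more_blocks_to_import;
    }
}

fn main() {
    let notify = EthPubSub;
    notify.new_blocks(NewBlocks { imported: vec![[0u8; 32]], has_more_blocks_to_import: true });
}
```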
* Add `is_idle` to LightSync to check importing status
* Use SyncStateWrapper to make sure is_idle gets updates
* Update is_major_import to use verified queue size as well
* Add comment for `is_idle`
* Add Debug to `SyncStateWrapper`
* `fn get` -> `fn into_inner`
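A rough sketch of the wrapper idea, assuming a simple enum state; the real `SyncStateWrapper` in light_sync differs, but the point is that every state change goes through `set`, so `is_idle` can never go stale, and `into_inner` (renamed from `get`) hands the raw state back out.

```rust
use std::sync::{Arc, RwLock};

/// Hypothetical sketch: a wrapper that updates an `is_idle` flag whenever
/// the sync state is written, so readers never observe a stale value.
#[derive(Debug, Clone, Copy, PartialEq)]
enum SyncState { Idle, Blocks }

#[derive(Debug)]
struct SyncStateWrapper {
    state: SyncState,
    is_idle: Arc<RwLock<bool>>,
}

impl SyncStateWrapper {
    fn idle(is_idle: Arc<RwLock<bool>>) -> Self {
        *is_idle.write().unwrap() = true;
        SyncStateWrapper { state: SyncState::Idle, is_idle }
    }

    fn set(&mut self, state: SyncState) {
        // Keep the shared flag in sync with every state transition.
        *self.is_idle.write().unwrap() = state == SyncState::Idle;
        self.state = state;
    }

    /// `fn get` was renamed to `fn into_inner` in review.
    fn into_inner(self) -> SyncState {
        self.state
    }
}

fn main() {
    let is_idle = Arc::new(RwLock::new(true));
    let mut wrapper = SyncStateWrapper::idle(is_idle.clone());
    wrapper.set(SyncState::Blocks);
    assert!(!*is_idle.read().unwrap());
    assert_eq!(wrapper.into_inner(), SyncState::Blocks);
}
```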
* PIP Table Cost relative to average peers instead of max peers
* Add tracing in PIP new_cost_table
* Update stat peer_count
* Use number of leeching peers for Light serve costs
* Fix test::light_params_load_share_depends_on_max_peers (wrong type)
* Remove (now) useless test
* Remove `load_share` from LightParams.Config
Prevent div. by 0
* Add LEECHER_COUNT_FACTOR
* PR Grumble: u64 to u32 for f64 casting
* Prevent u32 overflow for avg_peer_count
* Add tests for LightSync::Statistics
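A hedged sketch of the cost-scaling idea from the bullets above: scale the served-load share by the current number of leeching peers rather than the configured maximum, keep the count in `u32` so the `f64` cast is lossless, guard against division by zero, and saturate instead of overflowing. `LEECHER_COUNT_FACTOR`, `cost_scale`, and the formula are illustrative, not the crate's actual cost table.

```rust
/// Hypothetical sketch only: scale the light-server cost budget by the number
/// of peers actually leeching, instead of the configured maximum peer count.
const LEECHER_COUNT_FACTOR: f64 = 0.25;

fn cost_scale(leecher_count: u32) -> f64 {
    // Saturate instead of overflowing, and keep the denominator >= 1 so we
    // never divide by zero when there are no leechers yet.
    let leechers = leecher_count.saturating_add(1);
    // u32 -> f64 is a lossless conversion, hence the u64 -> u32 grumble above.
    1.0 / (f64::from(leechers) * LEECHER_COUNT_FACTOR).max(1.0)
}

fn main() {
    // Few leechers: each request may be generously priced; many leechers:
    // the per-peer share shrinks so the server is not oversubscribed.
    for peers in [0u32, 3, 100, u32::MAX] {
        println!("leeching peers = {:>10} -> cost scale = {:.6}", peers, cost_scale(peers));
    }
}
```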
* Rename db_restore => client
* First step: make it compile!
* Second step: working implementation!
* Refactoring
* Fix tests
* PR Grumbles
* PR Grumbles WIP
* Migrate ancient blocks iterating backward
* Early return in block migration if snapshot is aborted
* Remove RwLock getter (PR Grumble I)
* Remove dependency on `Client`: only used Traits
* Add test for recovering aborted snapshot recovery
* Add test for migrating old blocks
* Fix build
* PR Grumble I
* PR Grumble II
* PR Grumble III
* PR Grumble IV
* PR Grumble V
* PR Grumble VI
* Fix one test
* Fix test
* PR Grumble
* PR Grumbles
* PR Grumbles II
* Fix tests
* Release RwLock earlier
* Revert Cargo.lock
* Update _update ancient block_ logic: set local in `commit`
* Fix typo in ethcore/src/snapshot/service.rs
Co-Authored-By: ngotchac <ngotchac@gmail.com>
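A self-contained sketch (not the actual snapshot service code) of the backward migration with an early abort check described in the bullets above; the function name, closure parameters, and gap handling are assumptions.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Sketch of "migrate ancient blocks, iterating backward": walk from the
/// first block of the restored snapshot down towards genesis, and bail out
/// early if the snapshot restoration was aborted.
fn migrate_ancient_blocks(
    start: u64,
    read_block: impl Fn(u64) -> Option<Vec<u8>>,
    commit_block: impl Fn(u64, &[u8]),
    abort: &AtomicBool,
) -> u64 {
    let mut migrated = 0;
    // Backward iteration: newest ancient block first.
    for number in (0..=start).rev() {
        if abort.load(Ordering::SeqCst) {
            // Early return when the snapshot service is shutting down.
            break;
        }
        match read_block(number) {
            Some(body) => {
                commit_block(number, &body);
                migrated += 1;
            }
            // A gap means the old database has nothing more for us.
            None => break,
        }
    }
    migrated
}

fn main() {
    let abort = AtomicBool::new(false);
    let migrated = migrate_ancient_blocks(
        5,
        |n| if n >= 3 { Some(vec![n as u8]) } else { None },
        |n, _body| println!("migrated block {}", n),
        &abort,
    );
    assert_eq!(migrated, 3); // blocks 5, 4, 3
}
```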
* If no subchain heads then try a different peer
* Add log when useless chain head
* Restrict ChainHead useless peer to ancient blocks
* sync: replace `limit_reorg` with `block_set` condition
* Log block set in block_sync for easier debugging
* logging macros
* Match no args in sync logging macros
* Add QueueFull error
* Only allow importing headers if the first matches requested
* WIP
* Test for chain head gaps and log
* Calc distance even with 2 heads
* Revert previous commits, preparing simple fix
This reverts commit 5f38aa885b22ebb0e3a1d60120cea69f9f322628.
* Reject headers with no gaps when ChainHead
* Reset block sync download when queue full
* Simplify check for subchain heads
* Add comment to explain subchain heads filter
* Fix is_subchain_heads check and comment
* Prevent premature round completion after restart
This is a problem on mainnet where multiple stale peer requests will
force many rounds to complete quickly, forcing the retraction.
* Reset stale old blocks request after queue full
* Revert "Reject headers with no gaps when ChainHead"
This reverts commit 0eb865539e5dee37ab34f168f5fb643300de5ace.
* Add BlockSet to BlockDownloader logging
Currently it is difficult to debug this because there are two instances,
one for OldBlocks and one for NewBlocks. This adds the BlockSet to all
log messages for easy log filtering.
* Reset OldBlocks download from last enqueued
Previously when the ancient block queue was full it would restart the
download from the last imported block, so the ones still in the queue would be
redownloaded. Keeping the existing downloader instance and just
resetting it will start again from the last enqueued block.
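As a sketch of the difference, assuming a toy downloader with hypothetical field and method names:

```rust
/// Toy model, not the real BlockDownloader: when the ancient-block queue
/// reports it is full, keep the downloader instance and restart from the last
/// block that actually made it into the queue, so queued blocks are not
/// downloaded a second time.
struct OldBlocksDownload {
    last_imported: u64,
    last_enqueued: u64,
    next_request: u64,
}

impl OldBlocksDownload {
    fn on_enqueue_attempt(&mut self, number: u64, queue_full: bool) {
        if queue_full {
            // Old behaviour: reset to `self.last_imported + 1`, which
            // re-downloads everything still sitting in the queue.
            self.next_request = self.last_enqueued + 1;
        } else {
            self.last_enqueued = number;
            self.next_request = number + 1;
        }
    }
}

fn main() {
    let mut dl = OldBlocksDownload { last_imported: 100, last_enqueued: 150, next_request: 151 };
    dl.on_enqueue_attempt(151, true); // queue is full, block 151 was not accepted
    assert_eq!(dl.next_request, 151); // resume from last enqueued (150 + 1), not 101
    let _ = dl.last_imported;
}
```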
* Ignore expired Body and Receipt requests
* Log when ancient block download being restarted
* Only request old blocks from peers with >= difficulty
https://github.com/paritytech/parity-ethereum/pull/9226 might be too permissive and cause the retraction behaviour seen soon after the fork block. With this change the peer difficulty has to be greater than or equal to our syncing difficulty, so it should still fix
https://github.com/paritytech/parity-ethereum/issues/9225
* Some logging and clear stalled blocks head
* Revert "Some logging and clear stalled blocks head"
This reverts commit 757641d9b817ae8b63fec684759b0815af9c4d0e.
* Reset stalled header if useless more than once
* Store useless headers in HashSet
* Add sync target to logging macro
* Don't disable useless peer and fix log macro
* Clear useless headers on reset and comments
* Use custom error for collecting blocks
Previously we reused BlockImportError; however, only the Invalid case applied, and this made little sense with the QueueFull error.
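A minimal sketch of such a dedicated error type, with `QueueFull` carrying the queue length; the variant set and names are illustrative rather than the actual sync types.

```rust
/// Hypothetical sketch of a dedicated error type for collecting downloaded
/// blocks, instead of reusing the generic block import error.
#[derive(Debug)]
enum BlockDownloaderImportError {
    /// A header or body failed validation.
    Invalid,
    /// The verification queue is full; the caller should pause and later
    /// reset the download rather than treat this as a bad block.
    QueueFull(usize),
}

fn collect_block(valid: bool, queue_len: usize, queue_limit: usize)
    -> Result<(), BlockDownloaderImportError>
{
    if !valid {
        return Err(BlockDownloaderImportError::Invalid);
    }
    if queue_len >= queue_limit {
        return Err(BlockDownloaderImportError::QueueFull(queue_len));
    }
    Ok(())
}

fn main() {
    match collect_block(true, 512, 512) {
        Err(BlockDownloaderImportError::QueueFull(len)) => {
            println!("queue full at {} blocks; will retry from last enqueued", len)
        }
        other => println!("{:?}", other),
    }
}
```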
* Remove blank line
* Test for reset sync after consecutive useless headers
* Don't reset after consecutive headers when chain head
* Delete commented out imports
* Return DownloadAction from collect_blocks instead of error
* Don't reset after round complete, was causing test hangs
* Add comment explaining reset after useless
* Replace HashSet with counter for useless headers
* Refactor sync reset on bad block/queue full
* Add missing target for log message
* Fix compiler errors and test after merge
* ethcore: revert ethereum tests submodule update
* sync: Validate received BlockHeaders packets against stored request.
* sync: Validate received BlockBodies and BlockReceipts.
* sync: Fix broken tests.
* sync: Unit tests for BlockDownloader::import_headers.
* sync: Unit tests for import_{bodies,receipts}.
* tests: Add missing method doc.
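A small sketch of the validation idea, under the assumption that the stored request records the starting block and the maximum count; the types and names below are placeholders, not the sync module's own.

```rust
/// Sketch: validate a BlockHeaders response against the request we stored for
/// that peer; the first header must be the block we actually asked for.
#[derive(Clone, Copy)]
enum HashOrNumber { Number(u64) }

struct StoredRequest { start: HashOrNumber, max: usize }

struct Header { number: u64 }

fn validate_headers(request: &StoredRequest, headers: &[Header]) -> bool {
    if headers.len() > request.max {
        return false;
    }
    match (request.start, headers.first()) {
        (HashOrNumber::Number(n), Some(first)) => first.number == n,
        // An empty response is acceptable; the peer may simply not have them.
        (_, None) => true,
    }
}

fn main() {
    let request = StoredRequest { start: HashOrNumber::Number(100), max: 128 };
    let good = vec![Header { number: 100 }, Header { number: 101 }];
    let bad = vec![Header { number: 250 }];
    assert!(validate_headers(&request, &good));
    assert!(!validate_headers(&request, &bad));
}
```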
This PR fixes a deadlock for #8918.
It avoids some recursive calls in light_sync by making the state check optional for the Informant.
The current behavior is to display the information when the informant checks whether a major import is in progress.
This changes the informant behavior a bit, but not in most cases.
To remember where and how this kind of deadlock is likely to happen (it is not seen by the parking_lot deadlock detection because a std condvar is used), I am adding a description of the deadlock.
Also, for the reviewers, there may be a better solution than modifying the informant.
### Thread1
- ethcore/sync/light_sync/mod.rs
A call to the light handler through any IO handler (having a loop of RPC queries running against the client makes the deadlock far more likely).
At the end of those calls we systematically call the `maintain_sync` method.
Here `maintain_sync` takes a write lock on `state` (this lock is the cause of the deadlock).
`maintain_sync` -> `begin_search` with the state lock still held
`begin_search` -> light client `flush_queue` method
- ethcore/light/src/client/mod.rs
`flush_queue` -> `flush` on the queue (HeaderQueue, aka VerificationQueue of headers)
- ethcore/src/verification/queue/mod.rs
If there is some unverified or verifying content, `flush` waits on a condvar until the queue is empty. The only way to release the condvar is for a verification worker to empty the queue and notify it (so thread 2 is a verification worker).
### Thread2
A verification worker at the end of a verify loop (new block).
- ethcore/src/verification/queue/mod.rs
The thread loops on the `verify` method.
The end-of-loop condition `is_ready` -> import the block immediately:
it calls `set_sync` on QueueSignal, which sends a BlockVerified ClientIoMessage on the inner channel (an IoChannel of ClientIoMessage) using `send_sync`.
- util/io/src/service_mio.rs
The IoChannel `send_sync` method calls every handler's `message` method; one of the handlers is the ImportBlocks IoHandler (with a single inner Client service field).
- ethcore/light/src/client/service.rs
`message` triggers the inner method `import_verified`.
- ethcore/light/src/client/mod.rs
`import_verified` at the very end notifies the listeners of the new headers; one of the listeners is the Informant's listener.
- parity/informant.rs
`new_headers` runs up to a call to `is_major_importing` on its target (again the client).
- ethcore/sync/src/light_sync/mod.rs
Here `is_major_importing` tries to take the state lock (read only) but cannot because of the write lock held by thread 1, hence the deadlock.
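The pattern can be reduced to a toy std-only example (none of this is parity code): thread 1 waits on the queue condvar while holding the state write lock, and the worker that would drain the queue first needs a read lock on that same state. `try_read` and a wait timeout are used here so the example terminates instead of hanging like the real deadlock.

```rust
use std::sync::{Arc, Condvar, Mutex, RwLock};
use std::thread;
use std::time::Duration;

/// Toy model of the deadlock above: `state` plays the role of the light-sync
/// state lock, (`queue`, `cv`) the verification queue and its condvar.
fn main() {
    let state = Arc::new(RwLock::new(0u32));
    let queue = Arc::new((Mutex::new(1usize), Condvar::new()));

    let worker = {
        let (state, queue) = (state.clone(), queue.clone());
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(50));
            // The informant path: needs a read lock before notifying.
            // With a blocking `state.read()` this would never return; `try_read`
            // just demonstrates why.
            if state.try_read().is_err() {
                println!("worker: state is write-locked, cannot notify -> deadlock");
                return; // in the real bug the worker would block here forever
            }
            let (lock, cv) = &*queue;
            *lock.lock().unwrap() = 0;
            cv.notify_all();
        })
    };

    {
        let _sync_state = state.write().unwrap(); // maintain_sync: write lock held
        let (lock, cv) = &*queue;
        let guard = lock.lock().unwrap();
        // flush(): wait until the queue is empty, but only briefly here so the
        // example terminates; the real code waits unconditionally.
        let (_guard, timeout) = cv.wait_timeout(guard, Duration::from_millis(200)).unwrap();
        println!("flush gave up waiting: timed out = {}", timeout.timed_out());
    }

    worker.join().unwrap();
}
```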
* Verify private transaction before propagating
* Private transactions queue reworked with tx pool queue direct usage
* Styling fixed
* Prevent resending private packets to the sender
* Process signed private transaction packets via io queue
* Test fixed
* Build and test fixed after merge
* Comments after review fixed
* Signed transaction taken from verified
* Fix after merge
* Pool scoring generalized in order to use externally
* Lib refactored according to the review comments
* Ready state refactored
* Redundant bound and copying removed
* Fixed build after the merge
* Forgotten case reworked
* Review comments fixed
* Logging reworked, target added
* Fix after merge
* Add a `fastmap` crate that provides the H256FastMap specialized HashMap
* Use `fastmap` instead of `plain_hasher`
* Update submodules for Reasons™
* Submodule update
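The idea behind `fastmap` is that the keys are already uniformly distributed 256-bit hashes, so the map does not need a strong hasher; it can simply fold a few bytes of the key. A std-only sketch of that idea (the crate's real `H256FastMap` and hasher types differ):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

/// Stand-in for the "plain" hasher: XOR-fold the key bytes into a u64.
#[derive(Default)]
struct PlainHasher(u64);

impl Hasher for PlainHasher {
    fn finish(&self) -> u64 {
        self.0
    }

    fn write(&mut self, bytes: &[u8]) {
        // For 32-byte hash keys this is effectively "use some bytes of the
        // hash as the bucket" rather than hashing the hash again.
        for chunk in bytes.chunks(8) {
            let mut word = [0u8; 8];
            word[..chunk.len()].copy_from_slice(chunk);
            self.0 ^= u64::from_le_bytes(word);
        }
    }
}

type H256 = [u8; 32];
type H256FastMap<T> = HashMap<H256, T, BuildHasherDefault<PlainHasher>>;

fn main() {
    let mut map: H256FastMap<&str> = H256FastMap::default();
    let key: H256 = [0xab; 32];
    map.insert(key, "block body");
    assert_eq!(map.get(&key), Some(&"block body"));
}
```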
Closes #9255
This PR also removes the limit of at most 64 transactions per packet; currently we only attempt to prevent the packet size from going over 8MB. This will only matter for super-large transactions or high-block-gas-limit chains.
Patching this is important only for chains whose blocks can fit more than 4k transactions (over an 86M block gas limit).
For mainnet, we should actually see slightly faster propagation, since instead of computing a 4k pending set we only need `4 * 8M / 21k = 1523` transactions.
Running some tests on the `dekompile` node right now, to check how it performs in the wild.
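A rough sketch of size-bounded selection, using only the 8MB figure from the description above; the transaction sizes, the function name, and the selection loop are illustrative:

```rust
/// Sketch: keep adding transactions until the serialized packet would exceed
/// the byte budget, instead of stopping at a fixed count of 64.
const MAX_PACKET_SIZE: usize = 8 * 1024 * 1024;

fn select_for_packet(tx_sizes: &[usize]) -> usize {
    let mut used = 0usize;
    let mut count = 0usize;
    for &size in tx_sizes {
        if used + size > MAX_PACKET_SIZE {
            break;
        }
        used += size;
        count += 1;
    }
    count
}

fn main() {
    // A plain value-transfer transaction is on the order of ~110 bytes, so the
    // byte budget, not a fixed count, becomes the limiting factor.
    let pending: Vec<usize> = vec![110; 10_000];
    println!("transactions in packet: {}", select_for_packet(&pending));

    // The PR estimates the pending set worth preparing for mainnet as roughly
    // 4 * 8M gas / 21k gas per transaction:
    println!("approx pending set size: {}", 4 * 8_000_000 / 21_000);
}
```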
Previously we only allowed downloading of old blocks if the peer's difficulty was greater than our syncing difficulty. This change allows downloading of blocks from peers whose difficulty is greater than that of the last downloaded old block.
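In rough code terms, the condition changes as follows (field names are illustrative and `u128` stands in for the real difficulty type):

```rust
/// Sketch of the relaxed condition for requesting old (ancient) blocks.
struct PeerInfo { difficulty: u128 }
struct SyncInfo { syncing_difficulty: u128, last_old_block_difficulty: u128 }

fn can_request_old_blocks(peer: &PeerInfo, sync: &SyncInfo) -> bool {
    // Previously: peer.difficulty > sync.syncing_difficulty
    peer.difficulty > sync.last_old_block_difficulty
}

fn main() {
    let peer = PeerInfo { difficulty: 900 };
    let sync = SyncInfo { syncing_difficulty: 1_000, last_old_block_difficulty: 500 };
    // Such a peer was previously skipped but can now serve ancient blocks.
    assert!(can_request_old_blocks(&peer, &sync));
    let _ = sync.syncing_difficulty;
}
```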
* Store recently rejected transactions.
* Don't cache AlreadyImported rejections.
* Make the size of transaction verification queue dependent on pool size.
* Add a test for recently rejected.
* Fix logging for recently rejected.
* Make rejection cache smaller.
* obsolete test removed
* obsolete test removed
* Construct cache with_capacity.
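A hedged sketch tying those commits together: a rejection cache sized from the pool, constructed `with_capacity`, which deliberately skips `AlreadyImported` results. The names, sizing ratio, and eviction policy here are placeholders, not the pool's actual implementation.

```rust
use std::collections::HashSet;

type TxHash = [u8; 32];

#[derive(Debug, PartialEq)]
enum Rejection { Invalid, AlreadyImported }

/// Remembers recently rejected transactions so they are not re-verified.
struct RecentlyRejected {
    cache: HashSet<TxHash>,
    limit: usize,
}

impl RecentlyRejected {
    fn new(pool_size: usize) -> Self {
        // Size the cache (and, in the PR, the verification queue) relative
        // to the pool size instead of using a fixed constant.
        let limit = pool_size / 4;
        RecentlyRejected { cache: HashSet::with_capacity(limit), limit }
    }

    fn is_rejected(&self, hash: &TxHash) -> bool {
        self.cache.contains(hash)
    }

    fn note(&mut self, hash: TxHash, rejection: Rejection) {
        // `AlreadyImported` is not an intrinsic fault of the transaction,
        // so it must not poison the cache.
        if rejection == Rejection::AlreadyImported {
            return;
        }
        if self.cache.len() >= self.limit {
            self.cache.clear(); // crude eviction, for the sketch only
        }
        self.cache.insert(hash);
    }
}

fn main() {
    let mut rejected = RecentlyRejected::new(8_192);
    rejected.note([1u8; 32], Rejection::Invalid);
    rejected.note([2u8; 32], Rejection::AlreadyImported);
    assert!(rejected.is_rejected(&[1u8; 32]));
    assert!(!rejected.is_rejected(&[2u8; 32]));
}
```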
The `patricia_trie` crate is generic over the hasher (by way of HashDB) and node encoding scheme. Adds a new `patricia_trie_ethereum` crate with concrete impls for Keccak/RLP.
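A toy illustration of that split, with a fake hasher standing in for Keccak and a plain map standing in for HashDB; the real trait bounds and the node-codec parameter are richer than this:

```rust
use std::collections::HashMap;

/// The generic crate is written against a hasher abstraction like this...
trait Hasher {
    type Out: Eq + std::hash::Hash + Clone;
    fn hash(data: &[u8]) -> Self::Out;
}

/// ...and the Ethereum-flavoured crate plugs in concrete types.
struct Keccak256;

impl Hasher for Keccak256 {
    type Out = [u8; 32];
    fn hash(data: &[u8]) -> Self::Out {
        // Placeholder "hash" so the sketch is self-contained; not keccak.
        let mut out = [0u8; 32];
        for (i, b) in data.iter().enumerate() {
            out[i % 32] ^= *b;
        }
        out
    }
}

/// A toy content-addressed store keyed by whatever hash type the hasher produces.
struct MemoryDB<H: Hasher> {
    data: HashMap<H::Out, Vec<u8>>,
}

impl<H: Hasher> MemoryDB<H> {
    fn new() -> Self {
        MemoryDB { data: HashMap::new() }
    }
    fn insert(&mut self, value: &[u8]) -> H::Out {
        let key = H::hash(value);
        self.data.insert(key.clone(), value.to_vec());
        key
    }
    fn get(&self, key: &H::Out) -> Option<&Vec<u8>> {
        self.data.get(key)
    }
}

fn main() {
    // The "ethereum" crate is then little more than: type EthereumDB = MemoryDB<Keccak256>;
    let mut db: MemoryDB<Keccak256> = MemoryDB::new();
    let key = db.insert(b"node body");
    assert_eq!(db.get(&key).map(Vec::as_slice), Some(&b"node body"[..]));
}
```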
* new blooms database
* fixed conflict in Cargo.lock
* removed bloomchain
* cleanup in progress
* all tests passing in trace db with new blooms-db
* added trace_blooms to BlockChainDB interface, fixed db flushing
* BlockChainDB no longer exposes RwLock in the interface
* automatically flush blooms-db after every insert
* blooms-db uses io::BufReader to read files; wrap blooms-db in a Mutex, because fs::File is just a shared file handle
* fix json_tests
* blooms-db can filter multiple possibilities at the same time
* removed the CacheId enum from trace/db.rs
* lint fixes
* fixed tests
* kvdb-rocksdb uses fs-swap crate
* update Cargo.lock
* use fs::rename
* fixed failing test on linux
* fix tests
* use fs_swap
* fixed failing test on linux
* cleanup after swap
* fix tests
* fixed osx permissions
* simplify parity database opening functions
* added migration to blooms-db
* address @niklasad1 grumbles
* fix license and authors field of blooms-db Cargo.toml
* restore blooms-db after snapshot
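To make the Mutex/BufReader/flush bullets above concrete, a simplified stand-alone sketch (fixed-size blooms indexed by block number; the real blooms-db layout and API differ):

```rust
use std::fs::{File, OpenOptions};
use std::io::{BufReader, Read, Seek, SeekFrom, Write};
use std::sync::Mutex;

/// Why the database sits behind a Mutex: an `fs::File` is a single shared
/// handle with one cursor, so concurrent readers and writers must be serialized.
const BLOOM_SIZE: usize = 256;

struct BloomsDb {
    file: Mutex<File>,
}

impl BloomsDb {
    fn open(path: &str) -> std::io::Result<Self> {
        let file = OpenOptions::new().read(true).write(true).create(true).open(path)?;
        Ok(BloomsDb { file: Mutex::new(file) })
    }

    fn insert(&self, block: u64, bloom: &[u8; BLOOM_SIZE]) -> std::io::Result<()> {
        let mut file = self.file.lock().unwrap();
        file.seek(SeekFrom::Start(block * BLOOM_SIZE as u64))?;
        file.write_all(bloom)?;
        // Flush after every insert so readers never observe a partial write.
        file.flush()
    }

    fn read(&self, block: u64) -> std::io::Result<[u8; BLOOM_SIZE]> {
        let mut file = self.file.lock().unwrap();
        file.seek(SeekFrom::Start(block * BLOOM_SIZE as u64))?;
        // Buffer reads from the shared handle.
        let mut reader = BufReader::new(&mut *file);
        let mut bloom = [0u8; BLOOM_SIZE];
        reader.read_exact(&mut bloom)?;
        Ok(bloom)
    }
}

fn main() -> std::io::Result<()> {
    let db = BloomsDb::open("blooms.db")?;
    db.insert(3, &[0xff; BLOOM_SIZE])?;
    assert_eq!(db.read(3)?[0], 0xff);
    Ok(())
}
```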