* Log block set in block_sync for easier debugging
* logging macros
* Match no args in sync logging macros
* Add QueueFull error
* Only allow importing headers if the first matches requested
* WIP
* Test for chain head gaps and log
* Calc distance even with 2 heads
* Revert previous commits, preparing simple fix
This reverts commit 5f38aa885b22ebb0e3a1d60120cea69f9f322628.
* Reject headers with no gaps when ChainHead
* Reset block sync download when queue full
* Simplify check for subchain heads
* Add comment to explain subchain heads filter
* Fix is_subchain_heads check and comment
* Prevent premature round completion after restart
This is a problem on mainnet, where multiple stale peer requests force many rounds to complete quickly, triggering the retraction.
* Reset stale old blocks request after queue full
* Revert "Reject headers with no gaps when ChainHead"
This reverts commit 0eb865539e5dee37ab34f168f5fb643300de5ace.
* Add BlockSet to BlockDownloader logging
Currently it is difficult to debug this because there are two instances,
one for OldBlocks and one for NewBlocks. This adds the BlockSet to all
log messages for easy log filtering.
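For illustration, a minimal sketch of what tagging the downloader logs with its BlockSet could look like; the `reset` method, field layout and log format here are assumptions, not the actual parity-ethereum macros:
```rust
// Rough sketch only: tag every downloader log line with its BlockSet so that
// grepping for "OldBlocks" or "NewBlocks" isolates one of the two instances.
#[derive(Debug, Clone, Copy)]
enum BlockSet {
    NewBlocks,
    OldBlocks,
}

struct BlockDownloader {
    block_set: BlockSet,
}

impl BlockDownloader {
    fn reset(&mut self) {
        // The `{:?}` of the block set prefixes the message, making log
        // filtering by downloader instance trivial.
        log::trace!(target: "sync", "{:?}: resetting downloader", self.block_set);
    }
}
```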
* Reset OldBlocks download from last enqueued
Previously, when the ancient block queue was full the download would restart from the last imported block, so the blocks still sitting in the queue would be redownloaded. Keeping the existing downloader instance and just resetting it makes the download start again from the last enqueued block.
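A hedged sketch of the difference between the two strategies; the field and method names below are illustrative, not the actual downloader API:
```rust
// Hypothetical sketch of the two reset strategies for the OldBlocks downloader.
struct BlockDownloader {
    last_imported_block: u64,
    last_enqueued_block: u64,
}

impl BlockDownloader {
    /// Old behaviour: a brand new downloader starts from the last *imported*
    /// block, so everything still sitting in the ancient block queue gets
    /// downloaded again.
    fn restart_from_imported(&self) -> u64 {
        self.last_imported_block + 1
    }

    /// New behaviour: keep the existing instance and only reset its cursor,
    /// continuing from the last block that was *enqueued* for verification.
    fn reset_to_queue(&mut self) -> u64 {
        self.last_enqueued_block + 1
    }
}
```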
* Ignore expired Body and Receipt requests
* Log when ancient block download being restarted
* Only request old blocks from peers with >= difficulty
https://github.com/paritytech/parity-ethereum/pull/9226 might be too permissive and be causing the retraction seen soon after the fork block. With this change the peer difficulty has to be greater than or equal to our syncing difficulty, so it should still fix
https://github.com/paritytech/parity-ethereum/issues/9225
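A minimal sketch of the resulting peer check; the helper name is hypothetical, and the real sync code tracks difficulties as U256 with further conditions:
```rust
// Illustrative predicate only: a peer is asked for old blocks only when its
// advertised total difficulty is at least our current syncing difficulty.
fn can_serve_old_blocks(peer_difficulty: Option<u128>, syncing_difficulty: u128) -> bool {
    peer_difficulty.map_or(false, |d| d >= syncing_difficulty)
}
```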
* Some logging and clear stalled blocks head
* Revert "Some logging and clear stalled blocks head"
This reverts commit 757641d9b817ae8b63fec684759b0815af9c4d0e.
* Reset stalled header if useless more than once
* Store useless headers in HashSet
* Add sync target to logging macro
* Don't disable useless peer and fix log macro
* Clear useless headers on reset and comments
* Use custom error for collecting blocks
Previously we reused BlockImportError, however only the Invalid case applied, and
it made little sense alongside the QueueFull error.
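For illustration, a plausible shape for such a dedicated error type; the type name is an assumption based on the commit messages above, while the two variants (Invalid, QueueFull) are the cases mentioned here:
```rust
// Hypothetical shape of a dedicated error for collecting downloaded blocks.
#[derive(Debug)]
enum BlockDownloaderImportError {
    /// The header/body/receipt did not match what was requested.
    Invalid,
    /// The (ancient) block queue is full; the download should be reset
    /// rather than the peer being penalised.
    QueueFull,
}
```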
* Remove blank line
* Test for reset sync after consecutive useless headers
* Don't reset after consecutive headers when chain head
* Delete commented out imports
* Return DownloadAction from collect_blocks instead of error
* Don't reset after round complete, was causing test hangs
* Add comment explaining reset after useless
* Replace HashSet with counter for useless headers
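A rough sketch of the counter-based tracking; the names and threshold below are illustrative only:
```rust
// Instead of remembering each offending header in a HashSet, count
// consecutive useless responses and reset the sync when the count is exceeded.
const MAX_USELESS_HEADERS: usize = 2;

struct Downloader {
    useless_headers_count: usize,
}

impl Downloader {
    fn on_useless_header(&mut self) {
        self.useless_headers_count += 1;
        if self.useless_headers_count >= MAX_USELESS_HEADERS {
            self.reset();
        }
    }

    fn reset(&mut self) {
        self.useless_headers_count = 0;
        // ...restart the download round...
    }
}
```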
* Refactor sync reset on bad block/queue full
* Add missing target for log message
* Fix compiler errors and test after merge
* ethcore: revert ethereum tests submodule update
* sync: Validate received BlockHeaders packets against stored request.
* sync: Validate received BlockBodies and BlockReceipts.
* sync: Fix broken tests.
* sync: Unit tests for BlockDownloader::import_headers.
* sync: Unit tests for import_{bodies,receipts}.
* tests: Add missing method doc.
This PR fixes the deadlock for #8918.
It avoids some recursive calls in light_sync by making the state check optional for the Informant.
The current behaviour is to display the information when the informant checks whether a major import is in progress.
This changes the informant behaviour a bit, but not in most cases.
To record where and how this kind of deadlock is likely to happen (it is not seen by the parking_lot deadlock detection because a std condvar is involved), I am adding a description of the deadlock.
Also, for the reviewers, there may be a better solution than modifying the informant.
### Thread1
- ethcore/sync/light_sync/mod.rs
A call to the light handler through any IO (having a loop of RPC queries running against the light client makes the deadlock far more likely).
At the end of those calls we systematically call the `maintain_sync` method.
Here `maintain_sync` takes a write lock on `state` (this is the cause of the deadlock).
`maintain_sync` -> `begin_search`, with the state lock still held
`begin_search` -> light client `flush_queue` method
- ethcore/light/src/client/mod.rs
`flush_queue` -> `flush` on the queue (HeaderQueue, aka VerificationQueue of headers)
- ethcore/src/verification/queue/mod.rs
If there is some unverified or verifying content, `flush` waits on a condvar until the queue is empty. The only way to release the condvar is for the worker to empty the queue and notify it (so thread 2 is a verification worker).
### Thread2
A verification worker at the end of a verify loop (new block).
- ethcore/src/verification/queue/mod.rs
The thread loops on the `verify` method.
The end-of-loop condition `is_ready` -> import the block immediately:
this calls `set_sync` on QueueSignal, which sends a BlockVerified ClientIoMessage on the inner channel (an IoChannel of ClientIoMessage) using `send_sync`.
- util/io/src/service_mio.rs
The IoChannel `send_sync` method calls all handlers via their `message` method; one of the handlers is the ImportBlocks IoHandler (with a single inner Client service field).
- ethcore/light/src/client/service.rs
`message` triggers the inner method `import_verified`.
- ethcore/light/src/client/mod.rs
At the very end, `import_verified` notifies the listeners of new_headers; one of the listeners is the Informant `listener` method.
- parity/informant.rs
`new_headers` ends up calling `is_major_importing` on its target (again the client).
- ethcore/sync/src/light_sync/mod.rs
Here `is_major_importing` tries to take the state lock (for reading only) but cannot because of the previously held state lock, hence the deadlock.
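A minimal, self-contained reproduction of the same pattern using plain std primitives (not the parity code): thread 1 holds the write lock and waits for the queue to drain, while the worker that would drain it first needs a read lock on the same state:
```rust
use std::sync::{Arc, Condvar, Mutex, RwLock};
use std::thread;
use std::time::Duration;

fn main() {
    let state = Arc::new(RwLock::new(0u32));
    let queue = Arc::new((Mutex::new(1usize), Condvar::new()));

    // Thread 1 ("maintain_sync"): takes the write lock on `state`, then, like
    // `flush_queue` -> `flush`, waits on the condvar until the queue is empty.
    let sync_state = Arc::clone(&state);
    let sync_queue = Arc::clone(&queue);
    let syncer = thread::spawn(move || {
        let _write = sync_state.write().unwrap();
        let (len, cvar) = &*sync_queue;
        let mut pending = len.lock().unwrap();
        while *pending != 0 {
            pending = cvar.wait(pending).unwrap(); // never woken
        }
    });

    thread::sleep(Duration::from_millis(100)); // let thread 1 grab the write lock

    // Thread 2 (the verification worker): before it can mark the queue empty
    // it notifies a listener that needs to *read* `state`, so it blocks on the
    // write lock held above and the condvar is never signalled: deadlock.
    let _read = state.read().unwrap();
    let (len, cvar) = &*queue;
    *len.lock().unwrap() = 0;
    cvar.notify_all();

    let _ = syncer.join();
}
```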
* Verify private transaction before propagating
* Private transactions queue reworked to use the tx pool queue directly
* Styling fixed
* Prevent resending private packets to the sender
* Process signed private transaction packets via io queue
* Test fixed
* Build and test fixed after merge
* Comments after review fixed
* Signed transaction taken from verified
* Fix after merge
* Pool scoring generalized in order to be usable externally
* Lib refactored according to the review comments
* Ready state refactored
* Redundant bound and copying removed
* Fixed build after the merge
* Forgotten case reworked
* Review comments fixed
* Logging reworked, target added
* Fix after merge
* Add a `fastmap` crate that provides the H256FastMap specialized HashMap
* Use `fastmap` instead of `plain_hasher`
* Update submodules for Reasons™
* Submodule update
Closes #9255
This PR also removes the limit of max 64 transactions per packet; now we only attempt to prevent the packet size from going over 8MB. This will only matter for super-large transactions or high-block-gas-limit chains.
Patching this is important only for chains that have blocks that can fit more than 4k transactions (over 86M block gas limit).
For mainnet, we should actually see a tiny bit faster propagation since instead of computing the 4k pending set, we only need `4 * 8M / 21k = 1523` transactions.
Running some tests on `dekompile` node right now, to check how it performs in the wild.
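A sketch of size-based packet filling under these assumptions (raw RLP byte vectors stand in for the real typed transactions; the helper name is hypothetical):
```rust
// Illustrative only: fill the transactions packet by cumulative RLP size
// instead of a hard cap of 64 transactions, stopping near the ~8MB limit.
const MAX_TRANSACTIONS_PACKET_SIZE: usize = 8 * 1024 * 1024;

fn select_for_packet<'a>(pending_rlp: &'a [Vec<u8>]) -> Vec<&'a [u8]> {
    let mut packet_size = 0usize;
    let mut selected = Vec::new();
    for tx in pending_rlp {
        if packet_size + tx.len() > MAX_TRANSACTIONS_PACKET_SIZE {
            break;
        }
        packet_size += tx.len();
        selected.push(tx.as_slice());
    }
    // On mainnet this is roughly 4 * 8M / 21k ≈ 1523 minimal-gas transactions
    // per packet instead of the full ~4k pending set.
    selected
}
```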
Previously we only allowed downloading of old blocks if the peer
difficulty was greater than our syncing difficulty. This change allows
downloading of blocks from peers whose difficulty is greater than that of
the last downloaded old block.
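A minimal sketch of the relaxed predicate; the names are illustrative and real difficulties are U256, not u128:
```rust
// Illustrative only: the peer now only has to be above the total difficulty
// of the last ancient block we downloaded, not above our syncing difficulty.
fn can_request_old_blocks(peer_total_difficulty: Option<u128>, last_old_block_difficulty: u128) -> bool {
    peer_total_difficulty.map_or(false, |td| td > last_old_block_difficulty)
}
```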
* Store recently rejected transactions.
* Don't cache AlreadyImported rejections.
* Make the size of transaction verification queue dependent on pool size.
* Add a test for recently rejected.
* Fix logging for recently rejected.
* Make rejection cache smaller.
* obsolete test removed
* obsolete test removed
* Construct cache with_capacity.
The `patricia_trie` crate is now generic over the hasher (by way of HashDB) and the node encoding scheme. A new `patricia_trie_ethereum` crate provides concrete impls for Keccak/RLP.
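A minimal sketch of the generic-over-hasher idea; the trait and struct below are placeholders rather than the real `patricia_trie`/HashDB API:
```rust
use std::marker::PhantomData;

// Stand-in for the hasher abstraction the trie is parameterised over.
trait Hasher {
    type Out: Eq + std::hash::Hash;
    fn hash(data: &[u8]) -> Self::Out;
}

/// A trie parameterised by the hash function (and, in the real crate, the
/// node encoding scheme); a companion crate such as `patricia_trie_ethereum`
/// supplies the concrete Keccak/RLP instance.
struct TrieDB<H: Hasher> {
    root: H::Out,
    _hasher: PhantomData<H>,
}

impl<H: Hasher> TrieDB<H> {
    fn new(root_node: &[u8]) -> Self {
        TrieDB { root: H::hash(root_node), _hasher: PhantomData }
    }
}
```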
* new blooms database
* fixed conflict in Cargo.lock
* removed bloomchain
* cleanup in progress
* all tests passing in trace db with new blooms-db
* added trace_blooms to BlockChainDB interface, fixed db flushing
* BlockChainDB no longer exposes RwLock in the interface
* automatically flush blooms-db after every insert
* blooms-db uses io::BufReader to read files; wrap blooms-db in a Mutex, because fs::File is just a shared file handle
* fix json_tests
* blooms-db can filter multiple possibilities at the same time
* removed the CacheId enum from trace/db.rs
* lint fixes
* fixed tests
* kvdb-rocksdb uses fs-swap crate
* update Cargo.lock
* use fs::rename
* fixed failing test on linux
* fix tests
* use fs_swap
* fixed failing test on linux
* cleanup after swap
* fix tests
* fixed osx permissions
* simplify parity database opening functions
* added migration to blooms-db
* address @niklasad1 grumbles
* fix license and authors field of blooms-db Cargo.toml
* restore blooms-db after snapshot
* Unordered iterator.
* Use unordered and limited set if full not required.
* Split timeout work into smaller timers.
* Avoid collecting all pending transactions when mining
* Remove println.
* Use priority ordering in eth-filter.
* Fix ethcore-miner tests and tx propagation.
* Review grumbles addressed.
* Add test for unordered not populating the cache.
* Fix ethcore tests.
* Fix light tests.
* Fix ethcore-sync tests.
* Fix RPC tests.
* Update `add_license` script
* run script
* add `remove duplicate lines script` and run it
* Revert changes `English spaces`
* strip whitespaces
* Revert `GPL` in files with `apache/mit license`
* don't append `gpl license` in files with other lic
* Don't append `gpl header` in files with other lic.
* re-ran script
* include c and cpp files too
* remove duplicate header
* rebase nit
* rlp::decode returns Result
* Fix journaldb to handle rlp::decode Result
* Fix ethcore to work with rlp::decode returning Result
* Light client handles rlp::decode returning Result
* Fix tests in rlp_derive
* Fix tests
* Cleanup
* cleanup
* Allow panic rather than breaking out of iterator
* Let decoding failures when reading from disk blow up
* syntax
* Fix the trivial grumbles
* Fix failing tests
* Make Account::from_rlp return Result
* Syntax, sigh
* Temp-fix for decoding failures
* Header::decode returns Result
Handle new return type throughout the code base.
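A simplified sketch of the call-site pattern this implies; the types below are placeholders, not the actual parity `Header` or `DecoderError`:
```rust
// Before this change, decoding panicked on malformed input; now failures are
// surfaced as a Result that the caller has to handle.
#[derive(Debug)]
struct DecoderError;

struct Header {
    parent_hash: [u8; 32],
}

fn decode_header(bytes: &[u8]) -> Result<Header, DecoderError> {
    if bytes.len() < 32 {
        return Err(DecoderError);
    }
    let mut parent_hash = [0u8; 32];
    parent_hash.copy_from_slice(&bytes[..32]);
    Ok(Header { parent_hash })
}

fn parent_of(bytes: &[u8]) -> Result<[u8; 32], DecoderError> {
    // The `?` operator lets the decode error bubble up unchanged instead of
    // panicking deep inside sync or database code.
    Ok(decode_header(bytes)?.parent_hash)
}
```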
* Do not continue reading from the DB when a value could not be read
* Fix tests
* Handle header decoding in light_sync
* Handling header decoding errors
* Let the DecodeError bubble up unchanged
* Remove redundant error conversion