Compare commits
73 Commits
| SHA1 |
|---|
| 3fd58bdcbd |
| ecbafb2390 |
| adabd8198c |
| c2487cfe07 |
| e0141f8324 |
| b52ac20660 |
| 3c85f29f11 |
| d9673b0d6b |
| 89f828be1c |
| ec56b1f09d |
| 95236d25b2 |
| 7b2afdfc8c |
| 440e52f410 |
| 8840a293dd |
| 89d627769e |
| 4e2e88a620 |
| ebf51c0be0 |
| 04c6867660 |
| 0199acbece |
| e4c2fe9e72 |
| 407de5e8c4 |
| 7d26a82232 |
| 3b23817936 |
| aa8487c1d0 |
| 9cb8606103 |
| 6cf3ba7efd |
| 023e511f83 |
| 17042e9c32 |
| f2c34f7ca2 |
| 375a8daeb4 |
| b700ff3501 |
| 9519493e32 |
| 037fd1b309 |
| 78a534633d |
| effead9ba5 |
| a8ee3c97e6 |
| fb461659c7 |
| a574df3132 |
| d83143d0ba |
| f875175325 |
| c9db8ea21d |
| a16bad4175 |
| 595dac6c3f |
| 82a148a99b |
| 4320c9bc4f |
| 23d977ecce |
| ab27848dc4 |
| 742a6007fe |
| 91933d857d |
| 3e1d73126c |
| 7014642815 |
| 1bd4564216 |
| 97cb010df8 |
| ed18c7b54c |
| e71598d876 |
| 3d0ce10fa6 |
| cfc8df156b |
| 94cb3b6e0e |
| fefec000fb |
| c7ded6a785 |
| 2fbb952cdd |
| e2ab3e4f5b |
| 1871275ecd |
| afc1b72611 |
| c5c3fb6a75 |
| bceb883d99 |
| fcccbf3b75 |
| 9ad71b7baa |
| 4311d43497 |
| 0815cc3b83 |
| b21844b371 |
| f825048efa |
| 2cbffe36e2 |
294
CHANGELOG.md
@@ -1,162 +1,152 @@

## Parity-Ethereum [v2.4.3](https://github.com/paritytech/parity-ethereum/releases/tag/v2.4.3) (2019-03-22)

Parity-Ethereum 2.4.3-beta is a bugfix release that improves performance and stability. This patch release contains a critical bug fix where serving light clients previously led to client crashes. Upgrading is highly recommended.

The full list of included changes:

- 2.4.3 beta backports ([#10508](https://github.com/paritytech/parity-ethereum/pull/10508))
- Version: bump beta
- Add additional request tests ([#10503](https://github.com/paritytech/parity-ethereum/pull/10503))

## Parity-Ethereum [v2.3.0](https://github.com/paritytech/parity-ethereum/releases/tag/v2.3.0) (2019-01-16)

Parity-Ethereum 2.3.0-beta is a consensus-relevant security release that reverts Constantinople on the Ethereum network. Upgrading is mandatory for Ethereum, and strongly recommended for other networks.

- **Consensus** - Ethereum Network: Pull Constantinople protocol upgrade on Ethereum (#10189)
  - Read more: [Security Alert: Ethereum Constantinople Postponement](https://blog.ethereum.org/2019/01/15/security-alert-ethereum-constantinople-postponement/)
- **Networking** - All networks: Ping nodes from discovery (#10167)
- **Wasm** - Kovan Network: Update pwasm-utils to 0.6.1 (#10134)

Other notable changes:

- Existing blocks in the database are now kept when restoring a Snapshot. (#8643)
- Block and transaction propagation is improved significantly. (#9954)
- The ERC-191 Signed Data Standard is now supported by `personal_sign191`. (#9701)
- Add support for ERC-191/712 `eth_signTypedData` as a standard for machine-verifiable and human-readable typed data signing with Ethereum keys. (#9631)
- Add support for ERC-1186 `eth_getProof` (#9001)
- Add experimental RPCs flag to enable ERC-191, ERC-712, and ERC-1186 APIs via `--jsonrpc-experimental` (#9928)
- Make `CALLCODE` to trace value to be the code address. (#9881)

Configuration changes:

- The EIP-98 transition is now disabled by default. If your chain specification previously omitted `eip98transition`, you must now enable the transition manually on block `0x0`. (#9955)
- Also, unknown fields in chain specs are now rejected. (#9972)
- The Tendermint engine was removed from Parity Ethereum and is no longer available or maintained. (#9980)
- Ropsten testnet data and keys moved from the `test/` to the `ropsten/` subdirectory. To reuse your old keys and data, either copy or symlink them to the new location. (#10123)
- Strict empty steps validation (#10041)
  - If you have a chain with `empty_steps` already running, some blocks most likely contain non-strict entries (unordered or duplicated empty steps). In this release `strict_empty_steps_transition` is enabled by default at block `0x0` for any chain with `empty_steps`.
  - If your network uses `empty_steps` you **must** (A) plan a hard fork and change `strict_empty_steps_transition` to the desired fork block and (B) update the clients of the whole network to 2.2.7-stable / 2.3.0-beta. If for some reason you don't want to do this, set `strict_empty_steps_transition` to `0xfffffffff` to disable it.

_Note:_ This release marks Parity 2.3 as _beta_. All versions of Parity 2.2 are now considered _stable_.
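The `strict_empty_steps_transition` option described above is configured in the AuthorityRound engine section of the chain spec. A minimal sketch, assuming the camelCase JSON key `strictEmptyStepsTransition` (matching the naming convention of other AuRa parameters) and an illustrative `stepDuration`:

```json
{
  "engine": {
    "authorityRound": {
      "params": {
        "stepDuration": "5",
        "strictEmptyStepsTransition": "0x0"
      }
    }
  }
}
```

Setting the value to `0xfffffffff` instead effectively disables strict validation, as noted above.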
- Backports for 2.3.0 beta ([#10164](https://github.com/paritytech/parity-ethereum/pull/10164))
- Snap: fix path in script ([#10157](https://github.com/paritytech/parity-ethereum/pull/10157))
- Make sure parent block is not in importing queue when importing ancient blocks ([#10138](https://github.com/paritytech/parity-ethereum/pull/10138))
- Ci: re-enable snap publishing ([#10142](https://github.com/paritytech/parity-ethereum/pull/10142))
- Hf in POA Core (2019-01-18) - Constantinople ([#10155](https://github.com/paritytech/parity-ethereum/pull/10155))
- Update EWF's tobalaba chainspec ([#10152](https://github.com/paritytech/parity-ethereum/pull/10152))
- Replace ethcore-logger with env-logger. ([#10102](https://github.com/paritytech/parity-ethereum/pull/10102))
- Finality: dont require chain head to be in the chain ([#10054](https://github.com/paritytech/parity-ethereum/pull/10054))
- Remove caching for node connections ([#10143](https://github.com/paritytech/parity-ethereum/pull/10143))
- Blooms file iterator empty on out of range position. ([#10145](https://github.com/paritytech/parity-ethereum/pull/10145))
- Autogen docs for the "Configuring Parity Ethereum" wiki page. ([#10067](https://github.com/paritytech/parity-ethereum/pull/10067))
- Misc: bump license header to 2019 ([#10135](https://github.com/paritytech/parity-ethereum/pull/10135))
- Hide most of the logs from cpp example. ([#10139](https://github.com/paritytech/parity-ethereum/pull/10139))
- Don't try to send oversized packets ([#10042](https://github.com/paritytech/parity-ethereum/pull/10042))
- Private tx enabled flag added into STATUS packet ([#9999](https://github.com/paritytech/parity-ethereum/pull/9999))
- Update pwasm-utils to 0.6.1 ([#10134](https://github.com/paritytech/parity-ethereum/pull/10134))
- Extract blockchain from ethcore ([#10114](https://github.com/paritytech/parity-ethereum/pull/10114))
- Ethcore: update hardcoded headers ([#10123](https://github.com/paritytech/parity-ethereum/pull/10123))
- Identity fix ([#10128](https://github.com/paritytech/parity-ethereum/pull/10128))
- Use LenCachingMutex to optimize verification. ([#10117](https://github.com/paritytech/parity-ethereum/pull/10117))
- Pyethereum keystore support ([#9710](https://github.com/paritytech/parity-ethereum/pull/9710))
- Bump rocksdb-sys to 0.5.5 ([#10124](https://github.com/paritytech/parity-ethereum/pull/10124))
- Parity-clib: `async C bindings to RPC requests` + `subscribe/unsubscribe to websocket events` ([#9920](https://github.com/paritytech/parity-ethereum/pull/9920))
- Refactor (hardware wallet) : reduce the number of threads ([#9644](https://github.com/paritytech/parity-ethereum/pull/9644))
- Hf in POA Sokol (2019-01-04) ([#10077](https://github.com/paritytech/parity-ethereum/pull/10077))
- Fix broken links ([#10119](https://github.com/paritytech/parity-ethereum/pull/10119))
- Follow-up to [#10105](https://github.com/paritytech/parity-ethereum/issues/10105) ([#10107](https://github.com/paritytech/parity-ethereum/pull/10107))
- Move EIP-712 crate back to parity-ethereum ([#10106](https://github.com/paritytech/parity-ethereum/pull/10106))
- Move a bunch of stuff around ([#10101](https://github.com/paritytech/parity-ethereum/pull/10101))
- Revert "Add --frozen when running cargo ([#10081](https://github.com/paritytech/parity-ethereum/pull/10081))" ([#10105](https://github.com/paritytech/parity-ethereum/pull/10105))
- Fix left over small grumbles on whitespaces ([#10084](https://github.com/paritytech/parity-ethereum/pull/10084))
- Add --frozen when running cargo ([#10081](https://github.com/paritytech/parity-ethereum/pull/10081))
- Fix pubsub new_blocks notifications to include all blocks ([#9987](https://github.com/paritytech/parity-ethereum/pull/9987))
- Update some dependencies for compilation with pc-windows-gnu ([#10082](https://github.com/paritytech/parity-ethereum/pull/10082))
- Fill transaction hash on ethGetLog of light client. ([#9938](https://github.com/paritytech/parity-ethereum/pull/9938))
- Update changelog update for 2.2.5-beta and 2.1.10-stable ([#10064](https://github.com/paritytech/parity-ethereum/pull/10064))
- Implement len caching for parking_lot RwLock ([#10032](https://github.com/paritytech/parity-ethereum/pull/10032))
- Update parking_lot to 0.7 ([#10050](https://github.com/paritytech/parity-ethereum/pull/10050))
- Bump crossbeam. ([#10048](https://github.com/paritytech/parity-ethereum/pull/10048))
- Ethcore: enable constantinople on ethereum ([#10031](https://github.com/paritytech/parity-ethereum/pull/10031))
- Strict empty steps validation ([#10041](https://github.com/paritytech/parity-ethereum/pull/10041))
- Center the Subtitle, use some CAPS ([#10034](https://github.com/paritytech/parity-ethereum/pull/10034))
- Change test miner max memory to malloc reports. ([#10024](https://github.com/paritytech/parity-ethereum/pull/10024))
- Sort the storage for private state ([#10018](https://github.com/paritytech/parity-ethereum/pull/10018))
- Fix: test corpus_inaccessible panic ([#10019](https://github.com/paritytech/parity-ethereum/pull/10019))
- Ci: move future releases to ethereum subdir on s3 ([#10017](https://github.com/paritytech/parity-ethereum/pull/10017))
- Light(on_demand): decrease default time window to 10 secs ([#10016](https://github.com/paritytech/parity-ethereum/pull/10016))
- Light client : failsafe crate (circuit breaker) ([#9790](https://github.com/paritytech/parity-ethereum/pull/9790))
- Lencachingmutex ([#9988](https://github.com/paritytech/parity-ethereum/pull/9988))
- Version and notification for private contract wrapper added ([#9761](https://github.com/paritytech/parity-ethereum/pull/9761))
- Handle failing case for update account cache in require ([#9989](https://github.com/paritytech/parity-ethereum/pull/9989))
- Add tokio runtime to ethcore io worker ([#9979](https://github.com/paritytech/parity-ethereum/pull/9979))
- Move daemonize before creating account provider ([#10003](https://github.com/paritytech/parity-ethereum/pull/10003))
- Docs: update changelogs ([#9990](https://github.com/paritytech/parity-ethereum/pull/9990))
- Fix daemonize ([#10000](https://github.com/paritytech/parity-ethereum/pull/10000))
- Fix Bloom migration ([#9992](https://github.com/paritytech/parity-ethereum/pull/9992))
- Remove tendermint engine support ([#9980](https://github.com/paritytech/parity-ethereum/pull/9980))
- Calculate gas for deployment transaction ([#9840](https://github.com/paritytech/parity-ethereum/pull/9840))
- Fix unstable peers and slowness in sync ([#9967](https://github.com/paritytech/parity-ethereum/pull/9967))
- Adds parity_verifySignature RPC method ([#9507](https://github.com/paritytech/parity-ethereum/pull/9507))
- Improve block and transaction propagation ([#9954](https://github.com/paritytech/parity-ethereum/pull/9954))
- Deny unknown fields for chainspec ([#9972](https://github.com/paritytech/parity-ethereum/pull/9972))
- Fix docker build ([#9971](https://github.com/paritytech/parity-ethereum/pull/9971))
- Ci: rearrange pipeline by logic ([#9970](https://github.com/paritytech/parity-ethereum/pull/9970))
- Add changelogs for 2.0.9, 2.1.4, 2.1.6, and 2.2.1 ([#9963](https://github.com/paritytech/parity-ethereum/pull/9963))
- Add Error message when sync is still in progress. ([#9475](https://github.com/paritytech/parity-ethereum/pull/9475))
- Make CALLCODE to trace value to be the code address ([#9881](https://github.com/paritytech/parity-ethereum/pull/9881))
- Fix light client informant while syncing ([#9932](https://github.com/paritytech/parity-ethereum/pull/9932))
- Add a optional json dump state to evm-bin ([#9706](https://github.com/paritytech/parity-ethereum/pull/9706))
- Disable EIP-98 transition by default ([#9955](https://github.com/paritytech/parity-ethereum/pull/9955))
- Remove secret_store runtimes. ([#9888](https://github.com/paritytech/parity-ethereum/pull/9888))
- Fix a deadlock ([#9952](https://github.com/paritytech/parity-ethereum/pull/9952))
- Chore(eip712): remove unused `failure-derive` ([#9958](https://github.com/paritytech/parity-ethereum/pull/9958))
- Do not use the home directory as the working dir in docker ([#9834](https://github.com/paritytech/parity-ethereum/pull/9834))
- Prevent silent errors in daemon mode, closes [#9367](https://github.com/paritytech/parity-ethereum/issues/9367) ([#9946](https://github.com/paritytech/parity-ethereum/pull/9946))
- Fix empty steps ([#9939](https://github.com/paritytech/parity-ethereum/pull/9939))
- Adjust requests costs for light client ([#9925](https://github.com/paritytech/parity-ethereum/pull/9925))
- Eip-1186: add `eth_getProof` RPC-Method ([#9001](https://github.com/paritytech/parity-ethereum/pull/9001))
- Missing blocks in filter_changes RPC ([#9947](https://github.com/paritytech/parity-ethereum/pull/9947))
- Allow rust-nightly builds fail in nightly builds ([#9944](https://github.com/paritytech/parity-ethereum/pull/9944))
- Update eth-secp256k1 to include fix for BSDs ([#9935](https://github.com/paritytech/parity-ethereum/pull/9935))
- Unbreak build on rust -stable ([#9934](https://github.com/paritytech/parity-ethereum/pull/9934))
- Keep existing blocks when restoring a Snapshot ([#8643](https://github.com/paritytech/parity-ethereum/pull/8643))
- Add experimental RPCs flag ([#9928](https://github.com/paritytech/parity-ethereum/pull/9928))
- Clarify poll lifetime ([#9922](https://github.com/paritytech/parity-ethereum/pull/9922))
- Docs(require rust 1.30) ([#9923](https://github.com/paritytech/parity-ethereum/pull/9923))
- Use block header for building finality ([#9914](https://github.com/paritytech/parity-ethereum/pull/9914))
- Simplify cargo audit ([#9918](https://github.com/paritytech/parity-ethereum/pull/9918))
- Light-fetch: Differentiate between out-of-gas/manual throw and use required gas from response on failure ([#9824](https://github.com/paritytech/parity-ethereum/pull/9824))
- Eip 191 ([#9701](https://github.com/paritytech/parity-ethereum/pull/9701))
- Fix(logger): `reqwest` no longer a dependency ([#9908](https://github.com/paritytech/parity-ethereum/pull/9908))
- Remove rust-toolchain file ([#9906](https://github.com/paritytech/parity-ethereum/pull/9906))
- Foundation: 6692865, ropsten: 4417537, kovan: 9363457 ([#9907](https://github.com/paritytech/parity-ethereum/pull/9907))
- Ethcore: use Machine::verify_transaction on parent block ([#9900](https://github.com/paritytech/parity-ethereum/pull/9900))
- Chore(rpc-tests): remove unused rand ([#9896](https://github.com/paritytech/parity-ethereum/pull/9896))
- Fix: Intermittent failing CI due to addr in use ([#9885](https://github.com/paritytech/parity-ethereum/pull/9885))
- Chore(bump docopt): 0.8 -> 1.0 ([#9889](https://github.com/paritytech/parity-ethereum/pull/9889))
- Use expect ([#9883](https://github.com/paritytech/parity-ethereum/pull/9883))
- Use Weak reference in PubSubClient ([#9886](https://github.com/paritytech/parity-ethereum/pull/9886))
- Ci: nuke the gitlab caches ([#9855](https://github.com/paritytech/parity-ethereum/pull/9855))
- Remove unused code ([#9884](https://github.com/paritytech/parity-ethereum/pull/9884))
- Fix json tracer overflow ([#9873](https://github.com/paritytech/parity-ethereum/pull/9873))
- Allow to seal work on latest block ([#9876](https://github.com/paritytech/parity-ethereum/pull/9876))
- Fix docker script ([#9854](https://github.com/paritytech/parity-ethereum/pull/9854))
- Health endpoint ([#9847](https://github.com/paritytech/parity-ethereum/pull/9847))
- Gitlab-ci: make android release build succeed ([#9743](https://github.com/paritytech/parity-ethereum/pull/9743))
- Clean up existing benchmarks ([#9839](https://github.com/paritytech/parity-ethereum/pull/9839))
- Update Callisto block reward code to support HF1 ([#9811](https://github.com/paritytech/parity-ethereum/pull/9811))
- Option to disable keep alive for JSON-RPC http transport ([#9848](https://github.com/paritytech/parity-ethereum/pull/9848))
- Classic.json Bootnode Update ([#9828](https://github.com/paritytech/parity-ethereum/pull/9828))
- Support MIX. ([#9767](https://github.com/paritytech/parity-ethereum/pull/9767))
- Ci: remove failing tests for android, windows, and macos ([#9788](https://github.com/paritytech/parity-ethereum/pull/9788))
- Implement NoProof for json tests and update tests reference (replaces [#9744](https://github.com/paritytech/parity-ethereum/issues/9744)) ([#9814](https://github.com/paritytech/parity-ethereum/pull/9814))
- Chore(bump regex) ([#9842](https://github.com/paritytech/parity-ethereum/pull/9842))
- Ignore global cache for patched accounts ([#9752](https://github.com/paritytech/parity-ethereum/pull/9752))
- Move state root verification before gas used ([#9841](https://github.com/paritytech/parity-ethereum/pull/9841))
- Fix(docker-aarch64) : cross-compile config ([#9798](https://github.com/paritytech/parity-ethereum/pull/9798))
- Version: bump nightly to 2.3.0 ([#9819](https://github.com/paritytech/parity-ethereum/pull/9819))
- Tests modification for windows CI ([#9671](https://github.com/paritytech/parity-ethereum/pull/9671))
- Eip-712 implementation ([#9631](https://github.com/paritytech/parity-ethereum/pull/9631))
- Fix typo ([#9826](https://github.com/paritytech/parity-ethereum/pull/9826))
- Clean up serde rename and use rename_all = camelCase when possible ([#9823](https://github.com/paritytech/parity-ethereum/pull/9823))
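Among the changes above, [#9001](https://github.com/paritytech/parity-ethereum/pull/9001) adds the ERC-1186 `eth_getProof` method, which takes an account address, an array of storage keys, and a block identifier, and is gated behind the `--jsonrpc-experimental` flag ([#9928](https://github.com/paritytech/parity-ethereum/pull/9928)). A request sketch, with placeholder address and storage key:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_getProof",
  "params": [
    "0x0000000000000000000000000000000000000000",
    ["0x0000000000000000000000000000000000000000000000000000000000000000"],
    "latest"
  ]
}
```

Per ERC-1186, the response carries the account's balance, nonce, code hash, and storage hash, plus Merkle proofs for the account and each requested storage slot.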
## Parity-Ethereum [v2.4.2](https://github.com/paritytech/parity-ethereum/releases/tag/v2.4.2) (2019-03-20)

Parity-Ethereum 2.4.2-beta is a bugfix release that improves performance and stability.

The full list of included changes:

- 2.4.2 beta backports ([#10488](https://github.com/paritytech/parity-ethereum/pull/10488))
- Version: bump beta
- Caching through docker volume ([#10477](https://github.com/paritytech/parity-ethereum/pull/10477))
- fix win&mac build ([#10486](https://github.com/paritytech/parity-ethereum/pull/10486))
- fix(extract `timestamp_checked_add` as lib) ([#10383](https://github.com/paritytech/parity-ethereum/pull/10383))
## Parity-Ethereum [v2.4.1](https://github.com/paritytech/parity-ethereum/releases/tag/v2.4.1) (2019-03-19)

Parity-Ethereum 2.4.1-beta is a bugfix release that improves performance and stability.

The full list of included changes:

- 2.4.1 beta backports ([#10471](https://github.com/paritytech/parity-ethereum/pull/10471))
- Version: bump beta
- Implement parity_versionInfo & parity_setChain on LC; fix parity_setChain ([#10312](https://github.com/paritytech/parity-ethereum/pull/10312))
- CI publish to aws ([#10446](https://github.com/paritytech/parity-ethereum/pull/10446))
- CI aws git checkout ([#10451](https://github.com/paritytech/parity-ethereum/pull/10451))
- Revert "CI aws git checkout ([#10451](https://github.com/paritytech/parity-ethereum/pull/10451))" ([#10456](https://github.com/paritytech/parity-ethereum/pull/10456))
- Tests parallelized ([#10452](https://github.com/paritytech/parity-ethereum/pull/10452))
- Ensure static validator set changes are recognized ([#10467](https://github.com/paritytech/parity-ethereum/pull/10467))
## Parity-Ethereum [v2.4.0](https://github.com/paritytech/parity-ethereum/releases/tag/v2.4.0) (2019-02-25)

Parity-Ethereum 2.4.0-beta is our trifortnightly minor version release coming with a lot of new features as well as bugfixes and performance improvements.

Notable changes:

- Account management is now deprecated ([#10213](https://github.com/paritytech/parity-ethereum/pull/10213))
- Local accounts can now be specified via CLI ([#9960](https://github.com/paritytech/parity-ethereum/pull/9960))
- Chains can now be reset to a particular block via CLI ([#9782](https://github.com/paritytech/parity-ethereum/pull/9782))
- Ethash now additionally implements ProgPoW ([#9762](https://github.com/paritytech/parity-ethereum/pull/9762))
- The `eip1283DisableTransition` flag was added to revert EIP-1283 ([#10214](https://github.com/paritytech/parity-ethereum/pull/10214))
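Two of the notable changes above add new CLI surface: resetting the chain to an earlier block ([#9782](https://github.com/paritytech/parity-ethereum/pull/9782)) and declaring local accounts ([#9960](https://github.com/paritytech/parity-ethereum/pull/9960)). A usage sketch; the subcommand and flag names below are assumptions based on those pull requests, and the address is a placeholder:

```
# Revert the blockchain database by 100 blocks (PR #9782)
parity db reset 100

# Treat the given address as a local account in the transaction queue (PR #9960)
parity --tx-queue-locals 0x0000000000000000000000000000000000000000
```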
The full list of included changes:

- More Backports for Beta 2.4.0 ([#10431](https://github.com/paritytech/parity-ethereum/pull/10431))
- Revert some changes, could be buggy ([#10399](https://github.com/paritytech/parity-ethereum/pull/10399))
- Ci: clean up gitlab-ci.yml leftovers from previous merge ([#10429](https://github.com/paritytech/parity-ethereum/pull/10429))
- 10000 > 5000 ([#10422](https://github.com/paritytech/parity-ethereum/pull/10422))
- Fix underflow in pip, closes [#10419](https://github.com/paritytech/parity-ethereum/pull/10419) ([#10423](https://github.com/paritytech/parity-ethereum/pull/10423))
- Fix panic when logging directory does not exist, closes [#10420](https://github.com/paritytech/parity-ethereum/pull/10420) ([#10424](https://github.com/paritytech/parity-ethereum/pull/10424))
- Update hardcoded headers for Foundation, Ropsten, Kovan and Classic ([#10417](https://github.com/paritytech/parity-ethereum/pull/10417))
- Backports for Beta 2.4.0 ([#10416](https://github.com/paritytech/parity-ethereum/pull/10416))
- No-git for publish jobs, empty artifacts dir ([#10393](https://github.com/paritytech/parity-ethereum/pull/10393))
- Snap: reenable i386, arm64, armhf architecture publishing ([#10386](https://github.com/paritytech/parity-ethereum/pull/10386))
- Tx pool: always accept local transactions ([#10375](https://github.com/paritytech/parity-ethereum/pull/10375))
- Fix to_pod storage trie value decoding ([#10368](https://github.com/paritytech/parity-ethereum/pull/10368))
- Version: mark 2.4.0 beta
- Update to latest mem-db, hash-db and trie-db. ([#10314](https://github.com/paritytech/parity-ethereum/pull/10314))
- Tx pool: always accept local transactions ([#10375](https://github.com/paritytech/parity-ethereum/pull/10375))
- Fix(trace_main! macro): don't re-export ([#10384](https://github.com/paritytech/parity-ethereum/pull/10384))
- Exchanged old(azure) bootnodes with new(ovh) ones ([#10309](https://github.com/paritytech/parity-ethereum/pull/10309))
- Ethash: implement Progpow ([#9762](https://github.com/paritytech/parity-ethereum/pull/9762))
- Snap: add the removable-media plug ([#10377](https://github.com/paritytech/parity-ethereum/pull/10377))
- Add message to IO errors ([#10324](https://github.com/paritytech/parity-ethereum/pull/10324))
- Chore(bump parity-daemonize): require rust >= 1.31 ([#10359](https://github.com/paritytech/parity-ethereum/pull/10359))
- Secretstore: use in-memory transport in cluster tests ([#9850](https://github.com/paritytech/parity-ethereum/pull/9850))
- Add fields to `memzero`'s Cargo.toml ([#10362](https://github.com/paritytech/parity-ethereum/pull/10362))
- Snap: release untagged versions from branches to the candidate snap channel ([#10357](https://github.com/paritytech/parity-ethereum/pull/10357))
- Fix(compilation warns): `no-default-features` ([#10346](https://github.com/paritytech/parity-ethereum/pull/10346))
- No volumes are needed, just run -v volume:/path/in/the/container ([#10345](https://github.com/paritytech/parity-ethereum/pull/10345))
- Fixed misstype ([#10351](https://github.com/paritytech/parity-ethereum/pull/10351))
- Snap: prefix version and populate candidate channel ([#10343](https://github.com/paritytech/parity-ethereum/pull/10343))
- Bundle protocol and packet_id together in chain sync ([#10315](https://github.com/paritytech/parity-ethereum/pull/10315))
- Role back docker build image and docker deploy image to ubuntu:xenial… ([#10338](https://github.com/paritytech/parity-ethereum/pull/10338))
- Change docker image based on debian instead of ubuntu due to the chan… ([#10336](https://github.com/paritytech/parity-ethereum/pull/10336))
- Don't add discovery initiators to the node table ([#10305](https://github.com/paritytech/parity-ethereum/pull/10305))
- Fix(docker): fix not receives SIGINT ([#10059](https://github.com/paritytech/parity-ethereum/pull/10059))
- Snap: official image / test ([#10168](https://github.com/paritytech/parity-ethereum/pull/10168))
- Fix(add helper for timestamp overflows) ([#10330](https://github.com/paritytech/parity-ethereum/pull/10330))
- Additional error for invalid gas ([#10327](https://github.com/paritytech/parity-ethereum/pull/10327))
- Revive parity_setMinGasPrice RPC call ([#10294](https://github.com/paritytech/parity-ethereum/pull/10294))
- Add Statetest support for Constantinople Fix ([#10323](https://github.com/paritytech/parity-ethereum/pull/10323))
- Fix(parity-clib): grumbles that were not addressed in [#9920](https://github.com/paritytech/parity-ethereum/pull/9920) ([#10154](https://github.com/paritytech/parity-ethereum/pull/10154))
- Fix(light-rpc): Make `light_sync` generic ([#10238](https://github.com/paritytech/parity-ethereum/pull/10238))
- Fix publish job ([#10317](https://github.com/paritytech/parity-ethereum/pull/10317))
- Secure WS-RPC: grant access to all apis ([#10246](https://github.com/paritytech/parity-ethereum/pull/10246))
- Make specification of protocol in SyncRequester::send_request explicit ([#10295](https://github.com/paritytech/parity-ethereum/pull/10295))
- Fix: parity-clib/examples/cpp/CMakeLists.txt ([#10313](https://github.com/paritytech/parity-ethereum/pull/10313))
- Ci optimizations ([#10297](https://github.com/paritytech/parity-ethereum/pull/10297))
- Increase number of requested block bodies in chain sync ([#10247](https://github.com/paritytech/parity-ethereum/pull/10247))
- Deprecate account management ([#10213](https://github.com/paritytech/parity-ethereum/pull/10213))
- Properly handle check_epoch_end_signal errors ([#10015](https://github.com/paritytech/parity-ethereum/pull/10015))
- Fix(osx and windows builds): bump parity-daemonize ([#10291](https://github.com/paritytech/parity-ethereum/pull/10291))
- Add missing step for Using `systemd` service file ([#10175](https://github.com/paritytech/parity-ethereum/pull/10175))
- Call private contract methods from another private contract (read-onl… ([#10086](https://github.com/paritytech/parity-ethereum/pull/10086))
- Update ring to 0.14 ([#10262](https://github.com/paritytech/parity-ethereum/pull/10262))
- Fix(secret-store): deprecation warning ([#10301](https://github.com/paritytech/parity-ethereum/pull/10301))
- Update to jsonrpc-derive 10.0.2, fixes aliases bug ([#10300](https://github.com/paritytech/parity-ethereum/pull/10300))
- Convert to jsonrpc-derive, use jsonrpc-* from crates.io ([#10298](https://github.com/paritytech/parity-ethereum/pull/10298))
- Fix Windows build ([#10284](https://github.com/paritytech/parity-ethereum/pull/10284))
- Don't run the CPP example on CI ([#10285](https://github.com/paritytech/parity-ethereum/pull/10285))
- Additional tests for uint deserialization. ([#10279](https://github.com/paritytech/parity-ethereum/pull/10279))
- Prevent silent errors in daemon mode ([#10007](https://github.com/paritytech/parity-ethereum/pull/10007))
- Fix join-set test to be deterministic. ([#10263](https://github.com/paritytech/parity-ethereum/pull/10263))
- Update CHANGELOG-2.2.md ([#10254](https://github.com/paritytech/parity-ethereum/pull/10254))
- Macos heapsize force jemalloc ([#10234](https://github.com/paritytech/parity-ethereum/pull/10234))
- Allow specifying local accounts via CLI ([#9960](https://github.com/paritytech/parity-ethereum/pull/9960))
- Take in account zero gas price certification when doing transact_cont… ([#10232](https://github.com/paritytech/parity-ethereum/pull/10232))
- Update CHANGELOG.md ([#10249](https://github.com/paritytech/parity-ethereum/pull/10249))
- Fix typo: CHANGELOG-2.1 -> CHANGELOG-2.2 ([#10233](https://github.com/paritytech/parity-ethereum/pull/10233))
- Update copyright year to 2019. ([#10181](https://github.com/paritytech/parity-ethereum/pull/10181))
- Fixed: types::transaction::SignedTransaction; ([#10229](https://github.com/paritytech/parity-ethereum/pull/10229))
- Fix(ManageNetwork): replace Range with RangeInclusive ([#10209](https://github.com/paritytech/parity-ethereum/pull/10209))
- Import rpc transactions sequentially ([#10051](https://github.com/paritytech/parity-ethereum/pull/10051))
- Enable St-Peters-Fork ("Constantinople Fix") ([#10223](https://github.com/paritytech/parity-ethereum/pull/10223))
- Add EIP-1283 disable transition ([#10214](https://github.com/paritytech/parity-ethereum/pull/10214))
- Echo CORS request headers by default ([#10221](https://github.com/paritytech/parity-ethereum/pull/10221))
- Happy New Year! ([#10211](https://github.com/paritytech/parity-ethereum/pull/10211))
- Perform stripping during build ([#10208](https://github.com/paritytech/parity-ethereum/pull/10208))
- Remove CallContract and RegistryInfo re-exports from `ethcore/client` ([#10205](https://github.com/paritytech/parity-ethereum/pull/10205))
- Extract CallContract and RegistryInfo traits into their own crate ([#10178](https://github.com/paritytech/parity-ethereum/pull/10178))
- Update the changelogs for 2.1.11, 2.2.6, 2.2.7, and 2.3.0 ([#10197](https://github.com/paritytech/parity-ethereum/pull/10197))
- Cancel Constantinople HF on POA Core ([#10198](https://github.com/paritytech/parity-ethereum/pull/10198))
- Adds cli interface to allow reseting chain to a particular block ([#9782](https://github.com/paritytech/parity-ethereum/pull/9782))
- Run all `igd` methods in its own thread ([#10195](https://github.com/paritytech/parity-ethereum/pull/10195))
- Pull constantinople on ethereum network ([#10189](https://github.com/paritytech/parity-ethereum/pull/10189))
- Update for Android cross-compilation. ([#10180](https://github.com/paritytech/parity-ethereum/pull/10180))
- Version: bump fork blocks for kovan and foundation ([#10186](https://github.com/paritytech/parity-ethereum/pull/10186))
- Handle the case for contract creation on an empty but exist account w… ([#10065](https://github.com/paritytech/parity-ethereum/pull/10065))
- Align personal_unlockAccount behaviour when permanent unlock is disab… ([#10060](https://github.com/paritytech/parity-ethereum/pull/10060))
- Drop `runtime` after others (especially `ws_server`) ([#10179](https://github.com/paritytech/parity-ethereum/pull/10179))
- Version: bump nightly to 2.4 ([#10165](https://github.com/paritytech/parity-ethereum/pull/10165))
- Skip locking in statedb for non-canon blocks ([#10141](https://github.com/paritytech/parity-ethereum/pull/10141))
- Remove reference to ui-interface command-line option ([#10170](https://github.com/paritytech/parity-ethereum/pull/10170))
- Fix [#9822](https://github.com/paritytech/parity-ethereum/pull/9822): trace_filter does not return failed contract creation ([#10140](https://github.com/paritytech/parity-ethereum/pull/10140))
- Fix _cannot recursively call into `Core`_ issue ([#10144](https://github.com/paritytech/parity-ethereum/pull/10144))
- Fix(whisper): correct PoW calculation ([#10166](https://github.com/paritytech/parity-ethereum/pull/10166))
- Bump JSON-RPC ([#10151](https://github.com/paritytech/parity-ethereum/pull/10151))
- Ping nodes from discovery ([#10167](https://github.com/paritytech/parity-ethereum/pull/10167))
- Fix(android): remove dependency to libusb ([#10161](https://github.com/paritytech/parity-ethereum/pull/10161))
|
||||
- Refactor(trim_right_matches -> trim_end_matches) ([#10159](https://github.com/paritytech/parity-ethereum/pull/10159))
|
||||
- Merge Machine and WithRewards ([#10071](https://github.com/paritytech/parity-ethereum/pull/10071))
|
||||
|
||||
## Previous releases

- [CHANGELOG-2.3](docs/CHANGELOG-2.3.md) (_stable_)
- [CHANGELOG-2.2](docs/CHANGELOG-2.2.md) (EOL: 2019-02-25)
- [CHANGELOG-2.1](docs/CHANGELOG-2.1.md) (EOL: 2019-01-16)
- [CHANGELOG-2.0](docs/CHANGELOG-2.0.md) (EOL: 2018-11-15)
- [CHANGELOG-1.11](docs/CHANGELOG-1.11.md) (EOL: 2018-09-19)
Cargo.lock (generated, 51 changed lines)
```diff
@@ -306,7 +306,6 @@ dependencies = [
 "heapsize 0.4.2 (git+https://github.com/cheme/heapsize.git?branch=ec-macfix)",
 "keccak-hash 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
 "parity-bytes 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
-"parity-machine 0.1.0",
 "rlp 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "rlp_derive 0.1.0",
 "rustc-hex 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -742,7 +741,7 @@ dependencies = [
 "lazy_static 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "len-caching-lock 0.1.1",
 "log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)",
-"lru-cache 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
+"lru-cache 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
 "macros 0.1.0",
 "memory-cache 0.1.0",
 "memory-db 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -750,7 +749,6 @@ dependencies = [
 "num_cpus 1.10.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "parity-bytes 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "parity-crypto 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
-"parity-machine 0.1.0",
 "parity-runtime 0.1.0",
 "parity-snappy 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "parking_lot 0.7.1 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -995,7 +993,7 @@ dependencies = [
 "keccak-hash 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
 "libc 0.2.48 (registry+https://github.com/rust-lang/crates.io-index)",
 "log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)",
-"lru-cache 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
+"lru-cache 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
 "mio 0.6.16 (registry+https://github.com/rust-lang/crates.io-index)",
 "parity-bytes 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "parity-crypto 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -1801,7 +1799,7 @@ dependencies = [

 [[package]]
 name = "jni"
-version = "0.10.2"
+version = "0.11.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 dependencies = [
 "cesu8 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -2055,11 +2053,6 @@ dependencies = [
 "libc 0.2.48 (registry+https://github.com/rust-lang/crates.io-index)",
 ]

-[[package]]
-name = "linked-hash-map"
-version = "0.4.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-
 [[package]]
 name = "linked-hash-map"
 version = "0.5.1"
@@ -2102,10 +2095,10 @@ dependencies = [

 [[package]]
 name = "lru-cache"
-version = "0.1.1"
+version = "0.1.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 dependencies = [
-"linked-hash-map 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
+"linked-hash-map 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
 ]

 [[package]]
@@ -2154,7 +2147,7 @@ name = "memory-cache"
 version = "0.1.0"
 dependencies = [
 "heapsize 0.4.2 (git+https://github.com/cheme/heapsize.git?branch=ec-macfix)",
-"lru-cache 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
+"lru-cache 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
 ]

 [[package]]
@@ -2322,7 +2315,7 @@ dependencies = [
 "ethereum-types 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
 "kvdb-memorydb 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)",
-"lru-cache 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
+"lru-cache 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
 "parking_lot 0.7.1 (registry+https://github.com/rust-lang/crates.io-index)",
 "tempdir 0.3.7 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
@@ -2462,9 +2455,9 @@ name = "parity-clib"
 version = "1.12.0"
 dependencies = [
 "futures 0.1.25 (registry+https://github.com/rust-lang/crates.io-index)",
-"jni 0.10.2 (registry+https://github.com/rust-lang/crates.io-index)",
+"jni 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "panic_hook 0.1.0",
-"parity-ethereum 2.4.7",
+"parity-ethereum 2.5.3",
 "tokio 0.1.11 (registry+https://github.com/rust-lang/crates.io-index)",
 "tokio-current-thread 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
@@ -2494,7 +2487,7 @@ dependencies = [

 [[package]]
 name = "parity-ethereum"
-version = "2.4.7"
+version = "2.5.3"
 dependencies = [
 "ansi_term 0.10.2 (registry+https://github.com/rust-lang/crates.io-index)",
 "atty 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -2547,7 +2540,7 @@ dependencies = [
 "parity-rpc 1.12.0",
 "parity-runtime 0.1.0",
 "parity-updater 1.12.0",
-"parity-version 2.4.7",
+"parity-version 2.5.3",
 "parity-whisper 0.1.0",
 "parking_lot 0.7.1 (registry+https://github.com/rust-lang/crates.io-index)",
 "pretty_assertions 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -2622,13 +2615,6 @@ dependencies = [
 "serde_json 1.0.39 (registry+https://github.com/rust-lang/crates.io-index)",
 ]

-[[package]]
-name = "parity-machine"
-version = "0.1.0"
-dependencies = [
-"ethereum-types 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
-]
-
 [[package]]
 name = "parity-path"
 version = "0.1.1"
@@ -2689,7 +2675,6 @@ dependencies = [
 "jsonrpc-pubsub 10.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
 "jsonrpc-ws-server 10.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
 "keccak-hash 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
 "kvdb-memorydb 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)",
 "macros 0.1.0",
 "multihash 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -2698,7 +2683,7 @@ dependencies = [
 "parity-crypto 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "parity-runtime 0.1.0",
 "parity-updater 1.12.0",
-"parity-version 2.4.7",
+"parity-version 2.5.3",
 "parking_lot 0.7.1 (registry+https://github.com/rust-lang/crates.io-index)",
 "pretty_assertions 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
 "rand 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -2714,7 +2699,6 @@ dependencies = [
 "tokio-timer 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
 "transaction-pool 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "transient-hashmap 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
 "trie-db 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "vm 0.1.0",
 ]

@@ -2797,7 +2781,7 @@ dependencies = [
 "parity-bytes 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "parity-hash-fetch 1.12.0",
 "parity-path 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
-"parity-version 2.4.7",
+"parity-version 2.5.3",
 "parking_lot 0.7.1 (registry+https://github.com/rust-lang/crates.io-index)",
 "rand 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
 "semver 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -2807,7 +2791,7 @@ dependencies = [

 [[package]]
 name = "parity-version"
-version = "2.4.7"
+version = "2.5.3"
 dependencies = [
 "parity-bytes 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
 "rlp 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -4400,12 +4384,14 @@ dependencies = [
 "env_logger 0.5.13 (registry+https://github.com/rust-lang/crates.io-index)",
 "ethcore-network 1.12.0",
 "ethcore-network-devp2p 1.12.0",
 "ethkey 0.3.0",
 "jsonrpc-core 10.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
 "jsonrpc-http-server 10.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
 "jsonrpc-pubsub 10.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
 "log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)",
 "panic_hook 0.1.0",
 "parity-whisper 0.1.0",
 "rustc-hex 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
 "serde 1.0.89 (registry+https://github.com/rust-lang/crates.io-index)",
 "serde_derive 1.0.89 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
@@ -4607,7 +4593,7 @@ dependencies = [
 "checksum itoa 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)" = "1306f3464951f30e30d12373d31c79fbd52d236e5e896fd92f96ec7babbbe60b"
 "checksum jemalloc-sys 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)" = "bfc62c8e50e381768ce8ee0428ee53741929f7ebd73e4d83f669bcf7693e00ae"
 "checksum jemallocator 0.1.9 (registry+https://github.com/rust-lang/crates.io-index)" = "9f0cd42ac65f758063fea55126b0148b1ce0a6354ff78e07a4d6806bc65c4ab3"
-"checksum jni 0.10.2 (registry+https://github.com/rust-lang/crates.io-index)" = "1ecfa3b81afc64d9a6539c4eece96ac9a93c551c713a313800dade8e33d7b5c1"
+"checksum jni 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)" = "294eca097d1dc0bf59de5ab9f7eafa5f77129e9f6464c957ed3ddeb705fb4292"
 "checksum jni-sys 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "8eaf4bc02d17cbdd7ff4c7438cafcdf7fb9a4613313ad11b4f8fefe7d3fa0130"
 "checksum jsonrpc-core 10.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "7a5152c3fda235dfd68341b3edf4121bc4428642c93acbd6de88c26bf95fc5d7"
 "checksum jsonrpc-derive 10.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "c14be84e86c75935be83a34c6765bf31f97ed6c9163bb0b83007190e9703940a"
@@ -4629,13 +4615,12 @@ dependencies = [
 "checksum libloading 0.5.0 (registry+https://github.com/rust-lang/crates.io-index)" = "9c3ad660d7cb8c5822cd83d10897b0f1f1526792737a179e73896152f85b88c2"
 "checksum libusb 0.3.0 (git+https://github.com/paritytech/libusb-rs)" = "<none>"
 "checksum libusb-sys 0.2.4 (git+https://github.com/paritytech/libusb-sys)" = "<none>"
-"checksum linked-hash-map 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "7860ec297f7008ff7a1e3382d7f7e1dcd69efc94751a2284bafc3d013c2aa939"
 "checksum linked-hash-map 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "70fb39025bc7cdd76305867c4eccf2f2dcf6e9a57f5b21a93e1c2d86cd03ec9e"
 "checksum local-encoding 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "e1ceb20f39ff7ae42f3ff9795f3986b1daad821caaa1e1732a0944103a5a1a66"
 "checksum lock_api 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "775751a3e69bde4df9b38dd00a1b5d6ac13791e4223d4a0506577f0dd27cfb7a"
 "checksum log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)" = "e19e8d5c34a3e0e2223db8e060f9e8264aeeb5c5fc64a4ee9965c062211c024b"
 "checksum log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c84ec4b527950aa83a329754b01dbe3f58361d1c5efacd1f6d68c494d08a17c6"
-"checksum lru-cache 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "4d06ff7ff06f729ce5f4e227876cb88d10bc59cd4ae1e09fbb2bde15c850dc21"
+"checksum lru-cache 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "31e24f1ad8321ca0e8a1e0ac13f23cb668e6f5466c2c57319f6a5cf1cc8e3b1c"
 "checksum lunarity-lexer 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "8a1670671f305792567116d4660e6e5bd785d6fa973e817c3445c0a7a54cecb6"
 "checksum matches 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)" = "7ffc5c5338469d4d3ea17d269fa8ea3512ad247247c30bd2df69e68309ed0a08"
 "checksum memchr 2.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "4b3629fe9fdbff6daa6c33b90f7c08355c1aca05a3d01fa8063b822fcf185f3b"
```
Cargo.toml

```diff
@@ -2,7 +2,7 @@
 description = "Parity Ethereum client"
 name = "parity-ethereum"
 # NOTE Make sure to update util/version/Cargo.toml as well
-version = "2.4.7"
+version = "2.5.3"
 license = "GPL-3.0"
 authors = ["Parity Technologies <admin@parity.io>"]
```
docs/CHANGELOG-2.2.md

Note: Parity Ethereum 2.2 reached End-of-Life on 2019-02-25 (EOL).

## Parity-Ethereum [v2.2.11](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.11) (2019-02-21)

Parity-Ethereum 2.2.11-stable is a maintenance release that fixes snap and docker installations.

The full list of included changes:

- Stable: snap: release untagged versions from branches to the candidate ([#10357](https://github.com/paritytech/parity-ethereum/pull/10357)) ([#10372](https://github.com/paritytech/parity-ethereum/pull/10372))
- Snap: release untagged versions from branches to the candidate snap channel ([#10357](https://github.com/paritytech/parity-ethereum/pull/10357))
- Snap: add the removable-media plug ([#10377](https://github.com/paritytech/parity-ethereum/pull/10377))
- Exchanged old(azure) bootnodes with new(ovh) ones ([#10309](https://github.com/paritytech/parity-ethereum/pull/10309))
- Stable Backports ([#10353](https://github.com/paritytech/parity-ethereum/pull/10353))
- Version: bump stable to 2.2.11
- Snap: prefix version and populate candidate channel ([#10343](https://github.com/paritytech/parity-ethereum/pull/10343))
- Snap: populate candidate releases with beta snaps to avoid stale channel
- Snap: prefix version with v*
- No volumes are needed, just run `-v volume:/path/in/the/container` ([#10345](https://github.com/paritytech/parity-ethereum/pull/10345))
## Parity-Ethereum [v2.2.10](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.10) (2019-02-13)

Parity-Ethereum 2.2.10-stable is a security-relevant release. A bug in the JSONRPC-deserialization module can cause crashes of all versions of Parity Ethereum nodes if an attacker is able to submit a specially-crafted RPC to certain publicly available endpoints.

- https://www.parity.io/new-parity-ethereum-update-fixes-several-rpc-vulnerabilities/

The full list of included changes:

- Additional error for invalid gas ([#10327](https://github.com/paritytech/parity-ethereum/pull/10327)) ([#10329](https://github.com/paritytech/parity-ethereum/pull/10329))
- Backports for Stable 2.2.10 ([#10332](https://github.com/paritytech/parity-ethereum/pull/10332))
- fix(docker-aarch64) : cross-compile config ([#9798](https://github.com/paritytech/parity-ethereum/pull/9798))
- import rpc transactions sequentially ([#10051](https://github.com/paritytech/parity-ethereum/pull/10051))
- fix(docker): fix not receives SIGINT ([#10059](https://github.com/paritytech/parity-ethereum/pull/10059))
- snap: official image / test ([#10168](https://github.com/paritytech/parity-ethereum/pull/10168))
- perform stripping during build ([#10208](https://github.com/paritytech/parity-ethereum/pull/10208))
- Additional tests for uint/hash/bytes deserialization. ([#10279](https://github.com/paritytech/parity-ethereum/pull/10279))
- Don't run the CPP example on CI ([#10285](https://github.com/paritytech/parity-ethereum/pull/10285))
- CI optimizations ([#10297](https://github.com/paritytech/parity-ethereum/pull/10297))
- fix publish job ([#10317](https://github.com/paritytech/parity-ethereum/pull/10317))
- Add Statetest support for Constantinople Fix ([#10323](https://github.com/paritytech/parity-ethereum/pull/10323))
- Add helper for Timestamp overflows ([#10330](https://github.com/paritytech/parity-ethereum/pull/10330))
- Don't add discovery initiators to the node table ([#10305](https://github.com/paritytech/parity-ethereum/pull/10305))
- change docker image based on debian instead of ubuntu due to the chan ([#10336](https://github.com/paritytech/parity-ethereum/pull/10336))
- role back docker build image and docker deploy image to ubuntu:xenial based ([#10338](https://github.com/paritytech/parity-ethereum/pull/10338))
## Parity-Ethereum [v2.2.9](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.9) (2019-02-03)

Parity-Ethereum 2.2.9-stable is a security-relevant release. A bug in the JSONRPC-deserialization module can cause crashes of all versions of Parity Ethereum nodes if an attacker is able to submit a specially-crafted RPC to certain publicly available endpoints.

- https://www.parity.io/security-alert-parity-ethereum-03-02/

The full list of included changes:

- Additional tests for uint deserialization. ([#10279](https://github.com/paritytech/parity-ethereum/pull/10279)) ([#10281](https://github.com/paritytech/parity-ethereum/pull/10281))
- Version: bump stable to 2.2.9 ([#10282](https://github.com/paritytech/parity-ethereum/pull/10282))

## Parity-Ethereum [v2.2.8](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.8) (2019-02-01)

Parity-Ethereum 2.2.8-stable is a consensus-relevant release that enables _St. Petersfork_ on:

- Ethereum Block `7280000` (along with Constantinople)
- Kovan Block `10255201`
- Ropsten Block `4939394`
- POA Sokol Block `7026400`

In addition to this, Constantinople is cancelled for the POA Core network. Upgrading is mandatory for clients on any of these chains.

The full list of included changes:

- Backports for stable 2.2.8 ([#10224](https://github.com/paritytech/parity-ethereum/pull/10224))
- Update for Android cross-compilation. ([#10180](https://github.com/paritytech/parity-ethereum/pull/10180))
- Cancel Constantinople HF on POA Core ([#10198](https://github.com/paritytech/parity-ethereum/pull/10198))
- Add EIP-1283 disable transition ([#10214](https://github.com/paritytech/parity-ethereum/pull/10214))
- Enable St-Peters-Fork ("Constantinople Fix") ([#10223](https://github.com/paritytech/parity-ethereum/pull/10223))
- Stable: Macos heapsize force jemalloc ([#10234](https://github.com/paritytech/parity-ethereum/pull/10234)) ([#10258](https://github.com/paritytech/parity-ethereum/pull/10258))

## Parity-Ethereum [v2.2.7](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.7) (2019-01-15)

Parity-Ethereum 2.2.7-stable is a consensus-relevant security release that reverts Constantinople on the Ethereum network. Upgrading is mandatory for Ethereum, and strongly recommended for other networks.
docs/CHANGELOG-2.3.md (new file, 288 lines)
## Parity-Ethereum [v2.3.8](https://github.com/paritytech/parity-ethereum/releases/tag/v2.3.8) (2019-03-22)

Parity-Ethereum 2.3.8-stable is a bugfix release that improves performance and stability. This patch release contains a critical bug fix where serving light clients previously led to client crashes. Upgrading is highly recommended.

The full list of included changes:

- 2.3.8 stable backports ([#10507](https://github.com/paritytech/parity-ethereum/pull/10507))
- Version: bump stable
- Add additional request tests ([#10503](https://github.com/paritytech/parity-ethereum/pull/10503))

## Parity-Ethereum [v2.3.7](https://github.com/paritytech/parity-ethereum/releases/tag/v2.3.7) (2019-03-20)

Parity-Ethereum 2.3.7-stable is a bugfix release that improves performance and stability.

The full list of included changes:

- 2.3.7 stable backports ([#10487](https://github.com/paritytech/parity-ethereum/pull/10487))
- Version: bump stable
- Caching through docker volume ([#10477](https://github.com/paritytech/parity-ethereum/pull/10477))
- fix win&mac build ([#10486](https://github.com/paritytech/parity-ethereum/pull/10486))
- fix(extract `timestamp_checked_add` as lib) ([#10383](https://github.com/paritytech/parity-ethereum/pull/10383))

## Parity-Ethereum [v2.3.6](https://github.com/paritytech/parity-ethereum/releases/tag/v2.3.6) (2019-03-19)

Parity-Ethereum 2.3.6-stable is a bugfix release that improves performance and stability.

The full list of included changes:

- 2.3.6 stable backports ([#10470](https://github.com/paritytech/parity-ethereum/pull/10470))
- Version: bump stable
- CI publish to aws ([#10446](https://github.com/paritytech/parity-ethereum/pull/10446))
- Ensure static validator set changes are recognized ([#10467](https://github.com/paritytech/parity-ethereum/pull/10467))
- CI aws git checkout ([#10451](https://github.com/paritytech/parity-ethereum/pull/10451))
- Revert "CI aws git checkout ([#10451](https://github.com/paritytech/parity-ethereum/pull/10451))" ([#10456](https://github.com/paritytech/parity-ethereum/pull/10456))
- Tests parallelized ([#10452](https://github.com/paritytech/parity-ethereum/pull/10452))
## Parity-Ethereum [v2.3.5](https://github.com/paritytech/parity-ethereum/releases/tag/v2.3.5) (2019-02-25)

Parity-Ethereum 2.3.5-stable is a bugfix release that improves performance and stability.

Note, all 2.2 releases and older are now unsupported and upgrading is recommended.

The full list of included changes:

- More Backports for Stable 2.3.5 ([#10430](https://github.com/paritytech/parity-ethereum/pull/10430))
- Revert some changes, could be buggy ([#10399](https://github.com/paritytech/parity-ethereum/pull/10399))
- Ci: clean up gitlab-ci.yml leftovers from previous merge ([#10429](https://github.com/paritytech/parity-ethereum/pull/10429))
- 10000 > 5000 ([#10422](https://github.com/paritytech/parity-ethereum/pull/10422))
- Fix underflow in pip, closes [#10419](https://github.com/paritytech/parity-ethereum/pull/10419) ([#10423](https://github.com/paritytech/parity-ethereum/pull/10423))
- Fix panic when logging directory does not exist, closes [#10420](https://github.com/paritytech/parity-ethereum/pull/10420) ([#10424](https://github.com/paritytech/parity-ethereum/pull/10424))
- Update hardcoded headers for Foundation, Ropsten, Kovan and Classic ([#10417](https://github.com/paritytech/parity-ethereum/pull/10417))
- Backports for Stable 2.3.5 ([#10414](https://github.com/paritytech/parity-ethereum/pull/10414))
- No-git for publish jobs, empty artifacts dir ([#10393](https://github.com/paritytech/parity-ethereum/pull/10393))
- Snap: reenable i386, arm64, armhf architecture publishing ([#10386](https://github.com/paritytech/parity-ethereum/pull/10386))
- Tx pool: always accept local transactions ([#10375](https://github.com/paritytech/parity-ethereum/pull/10375))
- Fix to_pod storage trie value decoding ([#10368](https://github.com/paritytech/parity-ethereum/pull/10368))
- Version: mark 2.3.5 as stable
## Parity-Ethereum [v2.3.4](https://github.com/paritytech/parity-ethereum/releases/tag/v2.3.4) (2019-02-21)

Parity-Ethereum 2.3.4-beta is a maintenance release that fixes snap and docker installations.

The full list of included changes:

- Beta: snap: release untagged versions from branches to the candidate ([#10357](https://github.com/paritytech/parity-ethereum/pull/10357)) ([#10373](https://github.com/paritytech/parity-ethereum/pull/10373))
- Snap: release untagged versions from branches to the candidate snap channel ([#10357](https://github.com/paritytech/parity-ethereum/pull/10357))
- Snap: add the removable-media plug ([#10377](https://github.com/paritytech/parity-ethereum/pull/10377))
- Exchanged old(azure) bootnodes with new(ovh) ones ([#10309](https://github.com/paritytech/parity-ethereum/pull/10309))
- Beta Backports ([#10354](https://github.com/paritytech/parity-ethereum/pull/10354))
- Version: bump beta to 2.3.4
- Snap: prefix version and populate candidate channel ([#10343](https://github.com/paritytech/parity-ethereum/pull/10343))
- Snap: populate candidate releases with beta snaps to avoid stale channel
- Snap: prefix version with v*
- No volumes are needed, just run `-v volume:/path/in/the/container` ([#10345](https://github.com/paritytech/parity-ethereum/pull/10345))

## Parity-Ethereum [v2.3.3](https://github.com/paritytech/parity-ethereum/releases/tag/v2.3.3) (2019-02-13)

Parity-Ethereum 2.3.3-beta is a security-relevant release. A bug in the JSONRPC-deserialization module can cause crashes of all versions of Parity Ethereum nodes if an attacker is able to submit a specially-crafted RPC to certain publicly available endpoints.

- https://www.parity.io/new-parity-ethereum-update-fixes-several-rpc-vulnerabilities/

The full list of included changes:

- Additional error for invalid gas ([#10327](https://github.com/paritytech/parity-ethereum/pull/10327)) ([#10328](https://github.com/paritytech/parity-ethereum/pull/10328))
- Backports for Beta 2.3.3 ([#10333](https://github.com/paritytech/parity-ethereum/pull/10333))
- Properly handle check_epoch_end_signal errors ([#10015](https://github.com/paritytech/parity-ethereum/pull/10015))
- import rpc transactions sequentially ([#10051](https://github.com/paritytech/parity-ethereum/pull/10051))
- fix(docker): fix not receives SIGINT ([#10059](https://github.com/paritytech/parity-ethereum/pull/10059))
- snap: official image / test ([#10168](https://github.com/paritytech/parity-ethereum/pull/10168))
- Extract CallContract and RegistryInfo traits into their own crate ([#10178](https://github.com/paritytech/parity-ethereum/pull/10178))
- perform stripping during build ([#10208](https://github.com/paritytech/parity-ethereum/pull/10208))
- Remove CallContract and RegistryInfo re-exports from `ethcore/client` ([#10205](https://github.com/paritytech/parity-ethereum/pull/10205))
- fixed: types::transaction::SignedTransaction; ([#10229](https://github.com/paritytech/parity-ethereum/pull/10229))
- Additional tests for uint/hash/bytes deserialization. ([#10279](https://github.com/paritytech/parity-ethereum/pull/10279))
- Fix Windows build ([#10284](https://github.com/paritytech/parity-ethereum/pull/10284))
- Don't run the CPP example on CI ([#10285](https://github.com/paritytech/parity-ethereum/pull/10285))
- CI optimizations ([#10297](https://github.com/paritytech/parity-ethereum/pull/10297))
- fix publish job ([#10317](https://github.com/paritytech/parity-ethereum/pull/10317))
- Add Statetest support for Constantinople Fix ([#10323](https://github.com/paritytech/parity-ethereum/pull/10323))
- Add helper for Timestamp overflows ([#10330](https://github.com/paritytech/parity-ethereum/pull/10330))
- Don't add discovery initiators to the node table ([#10305](https://github.com/paritytech/parity-ethereum/pull/10305))
- change docker image based on debian instead of ubuntu due to the chan ([#10336](https://github.com/paritytech/parity-ethereum/pull/10336))
- role back docker build image and docker deploy image to ubuntu:xenial based ([#10338](https://github.com/paritytech/parity-ethereum/pull/10338))
## Parity-Ethereum [v2.3.2](https://github.com/paritytech/parity-ethereum/releases/tag/v2.3.2) (2019-02-03)

Parity-Ethereum 2.3.2-stable is a security-relevant release. A bug in the JSON-RPC deserialization module can crash all versions of Parity Ethereum nodes if an attacker is able to submit a specially crafted RPC to certain publicly available endpoints.

- https://www.parity.io/security-alert-parity-ethereum-03-02/

The full list of included changes:

- Version: bump beta to 2.3.2 ([#10283](https://github.com/paritytech/parity-ethereum/pull/10283))
- Additional tests for uint deserialization. ([#10279](https://github.com/paritytech/parity-ethereum/pull/10279)) ([#10280](https://github.com/paritytech/parity-ethereum/pull/10280))
- Backport [#10285](https://github.com/paritytech/parity-ethereum/pull/10285) to beta ([#10286](https://github.com/paritytech/parity-ethereum/pull/10286))

## Parity-Ethereum [v2.3.1](https://github.com/paritytech/parity-ethereum/releases/tag/v2.3.1) (2019-02-01)

Parity-Ethereum 2.3.1-beta is a consensus-relevant release that enables the _St. Petersburg_ fork on:

- Ethereum Block `7280000` (along with Constantinople)
- Kovan Block `10255201`
- Ropsten Block `4939394`
- POA Sokol Block `7026400`

In addition, Constantinople is cancelled for the POA Core network. Upgrading is mandatory for clients on any of these chains.

The full list of included changes:

- Backports for beta 2.3.1 ([#10225](https://github.com/paritytech/parity-ethereum/pull/10225))
- Fix _cannot recursively call into `Core`_ issue ([#10144](https://github.com/paritytech/parity-ethereum/pull/10144))
- Update for Android cross-compilation. ([#10180](https://github.com/paritytech/parity-ethereum/pull/10180))
- Fix _cannot recursively call into `Core`_ - Part 2 ([#10195](https://github.com/paritytech/parity-ethereum/pull/10195))
- Cancel Constantinople HF on POA Core ([#10198](https://github.com/paritytech/parity-ethereum/pull/10198))
- Add EIP-1283 disable transition ([#10214](https://github.com/paritytech/parity-ethereum/pull/10214))
- Enable St-Peters-Fork ("Constantinople Fix") ([#10223](https://github.com/paritytech/parity-ethereum/pull/10223))
- Beta: Macos heapsize force jemalloc ([#10234](https://github.com/paritytech/parity-ethereum/pull/10234)) ([#10259](https://github.com/paritytech/parity-ethereum/pull/10259))

## Parity-Ethereum [v2.3.0](https://github.com/paritytech/parity-ethereum/releases/tag/v2.3.0) (2019-01-16)

Parity-Ethereum 2.3.0-beta is a consensus-relevant security release that reverts Constantinople on the Ethereum network. Upgrading is mandatory for Ethereum, and strongly recommended for other networks.

- **Consensus** - Ethereum Network: Pull Constantinople protocol upgrade on Ethereum ([#10189](https://github.com/paritytech/parity-ethereum/pull/10189))
  - Read more: [Security Alert: Ethereum Constantinople Postponement](https://blog.ethereum.org/2019/01/15/security-alert-ethereum-constantinople-postponement/)
- **Networking** - All networks: Ping nodes from discovery ([#10167](https://github.com/paritytech/parity-ethereum/pull/10167))
- **Wasm** - Kovan Network: Update pwasm-utils to 0.6.1 ([#10134](https://github.com/paritytech/parity-ethereum/pull/10134))

Other notable changes:

- Existing blocks in the database are now kept when restoring a Snapshot. ([#8643](https://github.com/paritytech/parity-ethereum/pull/8643))
- Block and transaction propagation is improved significantly. ([#9954](https://github.com/paritytech/parity-ethereum/pull/9954))
- The ERC-191 Signed Data Standard is now supported by `personal_sign191`. ([#9701](https://github.com/paritytech/parity-ethereum/pull/9701))
- Add support for ERC-191/712 `eth_signTypedData` as a standard for machine-verifiable and human-readable typed data signing with Ethereum keys. ([#9631](https://github.com/paritytech/parity-ethereum/pull/9631))
- Add support for EIP-1186 `eth_getProof` ([#9001](https://github.com/paritytech/parity-ethereum/pull/9001))
- Add experimental RPCs flag to enable ERC-191, ERC-712, and EIP-1186 APIs via `--jsonrpc-experimental` ([#9928](https://github.com/paritytech/parity-ethereum/pull/9928))
- Make `CALLCODE` trace value to be the code address. ([#9881](https://github.com/paritytech/parity-ethereum/pull/9881))

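As a rough illustration of the `eth_getProof` request shape defined by EIP-1186 (the address and storage key below are placeholders, and in this release the method additionally requires the node to be started with `--jsonrpc-experimental`):

```json
{
  "jsonrpc": "2.0",
  "method": "eth_getProof",
  "params": [
    "0x7f0d15c7faae65896648c8273b6d7e43f58fa842",
    ["0x0000000000000000000000000000000000000000000000000000000000000000"],
    "latest"
  ],
  "id": 1
}
```

The response contains the account's balance, nonce, code hash, and storage hash, together with Merkle proofs for the account and each requested storage slot.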
Configuration changes:

- The EIP-98 transition is now disabled by default. If you previously had no `eip98Transition` specified in your chain specification, you now have to enable it manually on block `0x0`. ([#9955](https://github.com/paritytech/parity-ethereum/pull/9955))
- Also, unknown fields in chain specs are now rejected. ([#9972](https://github.com/paritytech/parity-ethereum/pull/9972))
- The Tendermint engine was removed from Parity Ethereum and is no longer available or maintained. ([#9980](https://github.com/paritytech/parity-ethereum/pull/9980))
- Ropsten testnet data and keys moved from the `test/` to the `ropsten/` subdir. To reuse your old keys and data, either copy or symlink them to the new location. ([#10123](https://github.com/paritytech/parity-ethereum/pull/10123))
- Strict empty steps validation ([#10041](https://github.com/paritytech/parity-ethereum/pull/10041))
  - If you have a chain with `empty_steps` already running, some blocks most likely contain non-strict entries (unordered or duplicated empty steps). In this release `strict_empty_steps_transition` is enabled by default at block `0x0` for any chain with `empty_steps`.
  - If your network uses `empty_steps` you **must** (a) plan a hard fork and change `strict_empty_steps_transition` to the desired fork block, and (b) update the clients of the whole network to 2.2.7-stable / 2.3.0-beta. If for some reason you don't want to do this, set `strict_empty_steps_transition` to `0xfffffffff` to disable it.

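A minimal sketch of the chain-spec fragments these two changes touch. This is an assumption about field placement, not a complete spec: `eip98Transition` lives under the top-level `params` object, while the strict-empty-steps setting belongs to the AuRa engine parameters (shown here with the camel-cased name used in spec JSON; consult your own chain spec for the exact layout, and replace the block numbers with your fork schedule):

```json
{
  "params": {
    "eip98Transition": "0x0"
  },
  "engine": {
    "authorityRound": {
      "params": {
        "strictEmptyStepsTransition": "0xfffffffff"
      }
    }
  }
}
```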
_Note:_ This release marks Parity 2.3 as _beta_. All versions of Parity 2.2 are now considered _stable_.

The full list of included changes:

- Backports for 2.3.0 beta ([#10164](https://github.com/paritytech/parity-ethereum/pull/10164))
- Snap: fix path in script ([#10157](https://github.com/paritytech/parity-ethereum/pull/10157))
- Make sure parent block is not in importing queue when importing ancient blocks ([#10138](https://github.com/paritytech/parity-ethereum/pull/10138))
- Ci: re-enable snap publishing ([#10142](https://github.com/paritytech/parity-ethereum/pull/10142))
- Hf in POA Core (2019-01-18) - Constantinople ([#10155](https://github.com/paritytech/parity-ethereum/pull/10155))
- Update EWF's tobalaba chainspec ([#10152](https://github.com/paritytech/parity-ethereum/pull/10152))
- Replace ethcore-logger with env-logger. ([#10102](https://github.com/paritytech/parity-ethereum/pull/10102))
- Finality: don't require chain head to be in the chain ([#10054](https://github.com/paritytech/parity-ethereum/pull/10054))
- Remove caching for node connections ([#10143](https://github.com/paritytech/parity-ethereum/pull/10143))
- Blooms file iterator empty on out of range position. ([#10145](https://github.com/paritytech/parity-ethereum/pull/10145))
- Autogen docs for the "Configuring Parity Ethereum" wiki page. ([#10067](https://github.com/paritytech/parity-ethereum/pull/10067))
- Misc: bump license header to 2019 ([#10135](https://github.com/paritytech/parity-ethereum/pull/10135))
- Hide most of the logs from cpp example. ([#10139](https://github.com/paritytech/parity-ethereum/pull/10139))
- Don't try to send oversized packets ([#10042](https://github.com/paritytech/parity-ethereum/pull/10042))
- Private tx enabled flag added into STATUS packet ([#9999](https://github.com/paritytech/parity-ethereum/pull/9999))
- Update pwasm-utils to 0.6.1 ([#10134](https://github.com/paritytech/parity-ethereum/pull/10134))
- Extract blockchain from ethcore ([#10114](https://github.com/paritytech/parity-ethereum/pull/10114))
- Ethcore: update hardcoded headers ([#10123](https://github.com/paritytech/parity-ethereum/pull/10123))
- Identity fix ([#10128](https://github.com/paritytech/parity-ethereum/pull/10128))
- Use LenCachingMutex to optimize verification. ([#10117](https://github.com/paritytech/parity-ethereum/pull/10117))
- Pyethereum keystore support ([#9710](https://github.com/paritytech/parity-ethereum/pull/9710))
- Bump rocksdb-sys to 0.5.5 ([#10124](https://github.com/paritytech/parity-ethereum/pull/10124))
- Parity-clib: `async C bindings to RPC requests` + `subscribe/unsubscribe to websocket events` ([#9920](https://github.com/paritytech/parity-ethereum/pull/9920))
- Refactor (hardware wallet): reduce the number of threads ([#9644](https://github.com/paritytech/parity-ethereum/pull/9644))
- Hf in POA Sokol (2019-01-04) ([#10077](https://github.com/paritytech/parity-ethereum/pull/10077))
- Fix broken links ([#10119](https://github.com/paritytech/parity-ethereum/pull/10119))
- Follow-up to [#10105](https://github.com/paritytech/parity-ethereum/issues/10105) ([#10107](https://github.com/paritytech/parity-ethereum/pull/10107))
- Move EIP-712 crate back to parity-ethereum ([#10106](https://github.com/paritytech/parity-ethereum/pull/10106))
- Move a bunch of stuff around ([#10101](https://github.com/paritytech/parity-ethereum/pull/10101))
- Revert "Add --frozen when running cargo ([#10081](https://github.com/paritytech/parity-ethereum/pull/10081))" ([#10105](https://github.com/paritytech/parity-ethereum/pull/10105))
- Fix leftover small grumbles on whitespaces ([#10084](https://github.com/paritytech/parity-ethereum/pull/10084))
- Add --frozen when running cargo ([#10081](https://github.com/paritytech/parity-ethereum/pull/10081))
- Fix pubsub new_blocks notifications to include all blocks ([#9987](https://github.com/paritytech/parity-ethereum/pull/9987))
- Update some dependencies for compilation with pc-windows-gnu ([#10082](https://github.com/paritytech/parity-ethereum/pull/10082))
- Fill transaction hash on ethGetLog of light client. ([#9938](https://github.com/paritytech/parity-ethereum/pull/9938))
- Changelog update for 2.2.5-beta and 2.1.10-stable ([#10064](https://github.com/paritytech/parity-ethereum/pull/10064))
- Implement len caching for parking_lot RwLock ([#10032](https://github.com/paritytech/parity-ethereum/pull/10032))
- Update parking_lot to 0.7 ([#10050](https://github.com/paritytech/parity-ethereum/pull/10050))
- Bump crossbeam. ([#10048](https://github.com/paritytech/parity-ethereum/pull/10048))
- Ethcore: enable constantinople on ethereum ([#10031](https://github.com/paritytech/parity-ethereum/pull/10031))
- Strict empty steps validation ([#10041](https://github.com/paritytech/parity-ethereum/pull/10041))
- Center the Subtitle, use some CAPS ([#10034](https://github.com/paritytech/parity-ethereum/pull/10034))
- Change test miner max memory to malloc reports. ([#10024](https://github.com/paritytech/parity-ethereum/pull/10024))
- Sort the storage for private state ([#10018](https://github.com/paritytech/parity-ethereum/pull/10018))
- Fix: test corpus_inaccessible panic ([#10019](https://github.com/paritytech/parity-ethereum/pull/10019))
- Ci: move future releases to ethereum subdir on s3 ([#10017](https://github.com/paritytech/parity-ethereum/pull/10017))
- Light(on_demand): decrease default time window to 10 secs ([#10016](https://github.com/paritytech/parity-ethereum/pull/10016))
- Light client: failsafe crate (circuit breaker) ([#9790](https://github.com/paritytech/parity-ethereum/pull/9790))
- Lencachingmutex ([#9988](https://github.com/paritytech/parity-ethereum/pull/9988))
- Version and notification for private contract wrapper added ([#9761](https://github.com/paritytech/parity-ethereum/pull/9761))
- Handle failing case for update account cache in require ([#9989](https://github.com/paritytech/parity-ethereum/pull/9989))
- Add tokio runtime to ethcore io worker ([#9979](https://github.com/paritytech/parity-ethereum/pull/9979))
- Move daemonize before creating account provider ([#10003](https://github.com/paritytech/parity-ethereum/pull/10003))
- Docs: update changelogs ([#9990](https://github.com/paritytech/parity-ethereum/pull/9990))
- Fix daemonize ([#10000](https://github.com/paritytech/parity-ethereum/pull/10000))
- Fix Bloom migration ([#9992](https://github.com/paritytech/parity-ethereum/pull/9992))
- Remove tendermint engine support ([#9980](https://github.com/paritytech/parity-ethereum/pull/9980))
- Calculate gas for deployment transaction ([#9840](https://github.com/paritytech/parity-ethereum/pull/9840))
- Fix unstable peers and slowness in sync ([#9967](https://github.com/paritytech/parity-ethereum/pull/9967))
- Add parity_verifySignature RPC method ([#9507](https://github.com/paritytech/parity-ethereum/pull/9507))
- Improve block and transaction propagation ([#9954](https://github.com/paritytech/parity-ethereum/pull/9954))
- Deny unknown fields for chainspec ([#9972](https://github.com/paritytech/parity-ethereum/pull/9972))
- Fix docker build ([#9971](https://github.com/paritytech/parity-ethereum/pull/9971))
- Ci: rearrange pipeline by logic ([#9970](https://github.com/paritytech/parity-ethereum/pull/9970))
- Add changelogs for 2.0.9, 2.1.4, 2.1.6, and 2.2.1 ([#9963](https://github.com/paritytech/parity-ethereum/pull/9963))
- Add error message when sync is still in progress. ([#9475](https://github.com/paritytech/parity-ethereum/pull/9475))
- Make CALLCODE trace value to be the code address ([#9881](https://github.com/paritytech/parity-ethereum/pull/9881))
- Fix light client informant while syncing ([#9932](https://github.com/paritytech/parity-ethereum/pull/9932))
- Add an optional JSON state dump to evm-bin ([#9706](https://github.com/paritytech/parity-ethereum/pull/9706))
- Disable EIP-98 transition by default ([#9955](https://github.com/paritytech/parity-ethereum/pull/9955))
- Remove secret_store runtimes. ([#9888](https://github.com/paritytech/parity-ethereum/pull/9888))
- Fix a deadlock ([#9952](https://github.com/paritytech/parity-ethereum/pull/9952))
- Chore(eip712): remove unused `failure-derive` ([#9958](https://github.com/paritytech/parity-ethereum/pull/9958))
- Do not use the home directory as the working dir in docker ([#9834](https://github.com/paritytech/parity-ethereum/pull/9834))
- Prevent silent errors in daemon mode, closes [#9367](https://github.com/paritytech/parity-ethereum/issues/9367) ([#9946](https://github.com/paritytech/parity-ethereum/pull/9946))
- Fix empty steps ([#9939](https://github.com/paritytech/parity-ethereum/pull/9939))
- Adjust request costs for light client ([#9925](https://github.com/paritytech/parity-ethereum/pull/9925))
- Eip-1186: add `eth_getProof` RPC method ([#9001](https://github.com/paritytech/parity-ethereum/pull/9001))
- Missing blocks in filter_changes RPC ([#9947](https://github.com/paritytech/parity-ethereum/pull/9947))
- Allow rust-nightly builds to fail in nightly builds ([#9944](https://github.com/paritytech/parity-ethereum/pull/9944))
- Update eth-secp256k1 to include fix for BSDs ([#9935](https://github.com/paritytech/parity-ethereum/pull/9935))
- Unbreak build on rust-stable ([#9934](https://github.com/paritytech/parity-ethereum/pull/9934))
- Keep existing blocks when restoring a Snapshot ([#8643](https://github.com/paritytech/parity-ethereum/pull/8643))
- Add experimental RPCs flag ([#9928](https://github.com/paritytech/parity-ethereum/pull/9928))
- Clarify poll lifetime ([#9922](https://github.com/paritytech/parity-ethereum/pull/9922))
- Docs(require rust 1.30) ([#9923](https://github.com/paritytech/parity-ethereum/pull/9923))
- Use block header for building finality ([#9914](https://github.com/paritytech/parity-ethereum/pull/9914))
- Simplify cargo audit ([#9918](https://github.com/paritytech/parity-ethereum/pull/9918))
- Light-fetch: Differentiate between out-of-gas/manual throw and use required gas from response on failure ([#9824](https://github.com/paritytech/parity-ethereum/pull/9824))
- Eip 191 ([#9701](https://github.com/paritytech/parity-ethereum/pull/9701))
- Fix(logger): `reqwest` no longer a dependency ([#9908](https://github.com/paritytech/parity-ethereum/pull/9908))
- Remove rust-toolchain file ([#9906](https://github.com/paritytech/parity-ethereum/pull/9906))
- Foundation: 6692865, ropsten: 4417537, kovan: 9363457 ([#9907](https://github.com/paritytech/parity-ethereum/pull/9907))
- Ethcore: use Machine::verify_transaction on parent block ([#9900](https://github.com/paritytech/parity-ethereum/pull/9900))
- Chore(rpc-tests): remove unused rand ([#9896](https://github.com/paritytech/parity-ethereum/pull/9896))
- Fix: Intermittent failing CI due to addr in use ([#9885](https://github.com/paritytech/parity-ethereum/pull/9885))
- Chore(bump docopt): 0.8 -> 1.0 ([#9889](https://github.com/paritytech/parity-ethereum/pull/9889))
- Use expect ([#9883](https://github.com/paritytech/parity-ethereum/pull/9883))
- Use Weak reference in PubSubClient ([#9886](https://github.com/paritytech/parity-ethereum/pull/9886))
- Ci: nuke the gitlab caches ([#9855](https://github.com/paritytech/parity-ethereum/pull/9855))
- Remove unused code ([#9884](https://github.com/paritytech/parity-ethereum/pull/9884))
- Fix json tracer overflow ([#9873](https://github.com/paritytech/parity-ethereum/pull/9873))
- Allow to seal work on latest block ([#9876](https://github.com/paritytech/parity-ethereum/pull/9876))
- Fix docker script ([#9854](https://github.com/paritytech/parity-ethereum/pull/9854))
- Health endpoint ([#9847](https://github.com/paritytech/parity-ethereum/pull/9847))
- Gitlab-ci: make android release build succeed ([#9743](https://github.com/paritytech/parity-ethereum/pull/9743))
- Clean up existing benchmarks ([#9839](https://github.com/paritytech/parity-ethereum/pull/9839))
- Update Callisto block reward code to support HF1 ([#9811](https://github.com/paritytech/parity-ethereum/pull/9811))
- Option to disable keep alive for JSON-RPC http transport ([#9848](https://github.com/paritytech/parity-ethereum/pull/9848))
- Classic.json Bootnode Update ([#9828](https://github.com/paritytech/parity-ethereum/pull/9828))
- Support MIX. ([#9767](https://github.com/paritytech/parity-ethereum/pull/9767))
- Ci: remove failing tests for android, windows, and macos ([#9788](https://github.com/paritytech/parity-ethereum/pull/9788))
- Implement NoProof for json tests and update tests reference (replaces [#9744](https://github.com/paritytech/parity-ethereum/issues/9744)) ([#9814](https://github.com/paritytech/parity-ethereum/pull/9814))
- Chore(bump regex) ([#9842](https://github.com/paritytech/parity-ethereum/pull/9842))
- Ignore global cache for patched accounts ([#9752](https://github.com/paritytech/parity-ethereum/pull/9752))
- Move state root verification before gas used ([#9841](https://github.com/paritytech/parity-ethereum/pull/9841))
- Fix(docker-aarch64): cross-compile config ([#9798](https://github.com/paritytech/parity-ethereum/pull/9798))
- Version: bump nightly to 2.3.0 ([#9819](https://github.com/paritytech/parity-ethereum/pull/9819))
- Tests modification for windows CI ([#9671](https://github.com/paritytech/parity-ethereum/pull/9671))
- Eip-712 implementation ([#9631](https://github.com/paritytech/parity-ethereum/pull/9631))
- Fix typo ([#9826](https://github.com/paritytech/parity-ethereum/pull/9826))
- Clean up serde rename and use rename_all = camelCase when possible ([#9823](https://github.com/paritytech/parity-ethereum/pull/9823))

```diff
@@ -39,7 +39,7 @@ keccak-hasher = { path = "../util/keccak-hasher" }
 kvdb = "0.1"
 kvdb-memorydb = "0.1"
 kvdb-rocksdb = { version = "0.1.3", optional = true }
-lazy_static = "1.0"
+lazy_static = "1.2.0"
 len-caching-lock = { path = "../util/len-caching-lock" }
 log = "0.4"
 lru-cache = "0.1"
@@ -50,7 +50,6 @@ num = { version = "0.1", default-features = false, features = ["bigint"] }
 num_cpus = "1.2"
 parity-bytes = "0.1"
 parity-crypto = "0.3.0"
-parity-machine = { path = "../machine" }
 parity-snappy = "0.1"
 parking_lot = "0.7"
 trie-db = "0.11.0"
```

```diff
@@ -668,21 +668,6 @@ impl BlockChain {
 		self.db.key_value().read_with_cache(db::COL_EXTRA, &self.block_details, parent).map_or(false, |d| d.children.contains(hash))
 	}
 
-	/// fetches the list of blocks from best block to n, and n's parent hash
-	/// where n > 0
-	pub fn block_headers_from_best_block(&self, n: u32) -> Option<(Vec<encoded::Header>, H256)> {
-		let mut blocks = Vec::with_capacity(n as usize);
-		let mut hash = self.best_block_hash();
-
-		for _ in 0..n {
-			let current_hash = self.block_header_data(&hash)?;
-			hash = current_hash.parent_hash();
-			blocks.push(current_hash);
-		}
-
-		Some((blocks, hash))
-	}
-
 	/// Returns a tree route between `from` and `to`, which is a tuple of:
 	///
 	/// - a vector of hashes of all blocks, ordered from `from` to `to`.
@@ -869,6 +854,14 @@ impl BlockChain {
 		}
 	}
 
+	/// clears all caches for testing purposes
+	pub fn clear_cache(&self) {
+		self.block_bodies.write().clear();
+		self.block_details.write().clear();
+		self.block_hashes.write().clear();
+		self.block_headers.write().clear();
+	}
+
 	/// Update the best ancient block to the given hash, after checking that
 	/// it's directly linked to the currently known best ancient block
 	pub fn update_best_ancient_block(&self, hash: &H256) {
```

```diff
@@ -24,7 +24,6 @@ use std::marker::PhantomData;
 use std::sync::Arc;
 use std::time::Duration;
 
-use ethcore::executed::{Executed, ExecutionError};
 use futures::{Poll, Future, Async};
 use futures::sync::oneshot::{self, Receiver};
 use network::PeerId;
@@ -41,10 +40,10 @@ use cache::Cache;
 use request::{self as basic_request, Request as NetworkRequest};
 use self::request::CheckedRequest;
 
 pub use ethcore::executed::ExecutionResult;
 pub use self::request::{Request, Response, HeaderRef, Error as ValidityError};
 pub use self::request_guard::{RequestGuard, Error as RequestError};
 pub use self::response_guard::{ResponseGuard, Error as ResponseGuardError, Inner as ResponseGuardInner};
 
 pub use types::request::ResponseError;
 
 #[cfg(test)]
@@ -54,9 +53,6 @@ pub mod request;
 mod request_guard;
 mod response_guard;
 
-/// The result of execution
-pub type ExecutionResult = Result<Executed, ExecutionError>;
-
 /// The initial backoff interval for OnDemand queries
 pub const DEFAULT_REQUEST_MIN_BACKOFF_DURATION: Duration = Duration::from_secs(10);
 /// The maximum request interval for OnDemand queries
@@ -70,6 +66,10 @@ pub const DEFAULT_NUM_CONSECUTIVE_FAILED_REQUESTS: usize = 1;
 
 /// OnDemand related errors
 pub mod error {
+	// Silence: `use of deprecated item 'std::error::Error::cause': replaced by Error::source, which can support downcasting`
+	// https://github.com/paritytech/parity-ethereum/issues/10302
+	#![allow(deprecated)]
+
 	use futures::sync::oneshot::Canceled;
 
 	error_chain! {
@@ -94,6 +94,24 @@ pub mod error {
 	}
 }
 
+/// Public interface for performing network requests `OnDemand`
+pub trait OnDemandRequester: Send + Sync {
+	/// Submit a strongly-typed batch of requests.
+	///
+	/// Fails if back-reference are not coherent.
+	fn request<T>(&self, ctx: &BasicContext, requests: T) -> Result<OnResponses<T>, basic_request::NoSuchOutput>
+	where
+		T: request::RequestAdapter;
+
+	/// Submit a vector of requests to be processed together.
+	///
+	/// Fails if back-references are not coherent.
+	/// The returned vector of responses will correspond to the requests exactly.
+	fn request_raw(&self, ctx: &BasicContext, requests: Vec<Request>)
+		-> Result<Receiver<PendingResponse>, basic_request::NoSuchOutput>;
+}
+
+
 // relevant peer info.
 #[derive(Debug, Clone, PartialEq, Eq)]
 struct Peer {
@@ -355,6 +373,74 @@ pub struct OnDemand {
 	request_number_of_consecutive_errors: usize
 }
 
+impl OnDemandRequester for OnDemand {
+	fn request_raw(&self, ctx: &BasicContext, requests: Vec<Request>)
+		-> Result<Receiver<PendingResponse>, basic_request::NoSuchOutput>
+	{
+		let (sender, receiver) = oneshot::channel();
+		if requests.is_empty() {
+			assert!(sender.send(Ok(Vec::new())).is_ok(), "receiver still in scope; qed");
+			return Ok(receiver);
+		}
+
+		let mut builder = basic_request::Builder::default();
+
+		let responses = Vec::with_capacity(requests.len());
+
+		let mut header_producers = HashMap::new();
+		for (i, request) in requests.into_iter().enumerate() {
+			let request = CheckedRequest::from(request);
+
+			// ensure that all requests needing headers will get them.
+			if let Some((idx, field)) = request.needs_header() {
+				// a request chain with a header back-reference is valid only if it both
+				// points to a request that returns a header and has the same back-reference
+				// for the block hash.
+				match header_producers.get(&idx) {
+					Some(ref f) if &field == *f => {}
+					_ => return Err(basic_request::NoSuchOutput),
+				}
+			}
+			if let CheckedRequest::HeaderByHash(ref req, _) = request {
+				header_producers.insert(i, req.0);
+			}
+
+			builder.push(request)?;
+		}
+
+		let requests = builder.build();
+		let net_requests = requests.clone().map_requests(|req| req.into_net_request());
+		let capabilities = guess_capabilities(requests.requests());
+
+		self.submit_pending(ctx, Pending {
+			requests,
+			net_requests,
+			required_capabilities: capabilities,
+			responses,
+			sender,
+			request_guard: RequestGuard::new(
+				self.request_number_of_consecutive_errors as u32,
+				self.request_backoff_rounds_max,
+				self.request_backoff_start,
+				self.request_backoff_max,
+			),
+			response_guard: ResponseGuard::new(self.response_time_window),
+		});
+
+		Ok(receiver)
+	}
+
+	fn request<T>(&self, ctx: &BasicContext, requests: T) -> Result<OnResponses<T>, basic_request::NoSuchOutput>
+		where T: request::RequestAdapter
+	{
+		self.request_raw(ctx, requests.make_requests()).map(|recv| OnResponses {
+			receiver: recv,
+			_marker: PhantomData,
+		})
+	}
+
+}
+
 impl OnDemand {
 
 	/// Create a new `OnDemand` service with the given cache.
@@ -415,77 +501,6 @@ impl OnDemand {
 		me
 	}
 
-	/// Submit a vector of requests to be processed together.
-	///
-	/// Fails if back-references are not coherent.
-	/// The returned vector of responses will correspond to the requests exactly.
-	pub fn request_raw(&self, ctx: &BasicContext, requests: Vec<Request>)
-		-> Result<Receiver<PendingResponse>, basic_request::NoSuchOutput>
-	{
-		let (sender, receiver) = oneshot::channel();
-		if requests.is_empty() {
-			assert!(sender.send(Ok(Vec::new())).is_ok(), "receiver still in scope; qed");
-			return Ok(receiver);
-		}
-
-		let mut builder = basic_request::Builder::default();
-
-		let responses = Vec::with_capacity(requests.len());
-
-		let mut header_producers = HashMap::new();
-		for (i, request) in requests.into_iter().enumerate() {
-			let request = CheckedRequest::from(request);
-
-			// ensure that all requests needing headers will get them.
-			if let Some((idx, field)) = request.needs_header() {
-				// a request chain with a header back-reference is valid only if it both
-				// points to a request that returns a header and has the same back-reference
-				// for the block hash.
-				match header_producers.get(&idx) {
-					Some(ref f) if &field == *f => {}
-					_ => return Err(basic_request::NoSuchOutput),
-				}
-			}
-			if let CheckedRequest::HeaderByHash(ref req, _) = request {
-				header_producers.insert(i, req.0);
-			}
-
-			builder.push(request)?;
-		}
-
-		let requests = builder.build();
-		let net_requests = requests.clone().map_requests(|req| req.into_net_request());
-		let capabilities = guess_capabilities(requests.requests());
-
-		self.submit_pending(ctx, Pending {
-			requests,
-			net_requests,
-			required_capabilities: capabilities,
-			responses,
-			sender,
-			request_guard: RequestGuard::new(
-				self.request_number_of_consecutive_errors as u32,
-				self.request_backoff_rounds_max,
-				self.request_backoff_start,
-				self.request_backoff_max,
-			),
-			response_guard: ResponseGuard::new(self.response_time_window),
-		});
-
-		Ok(receiver)
-	}
-
-	/// Submit a strongly-typed batch of requests.
-	///
-	/// Fails if back-reference are not coherent.
-	pub fn request<T>(&self, ctx: &BasicContext, requests: T) -> Result<OnResponses<T>, basic_request::NoSuchOutput>
-		where T: request::RequestAdapter
-	{
-		self.request_raw(ctx, requests.make_requests()).map(|recv| OnResponses {
-			receiver: recv,
-			_marker: PhantomData,
-		})
-	}
-
 	// maybe dispatch pending requests.
 	// sometimes
```

```diff
@@ -29,7 +29,7 @@ use std::sync::Arc;
 use std::time::{Duration, Instant};
 use std::thread;
 
-use super::{request, OnDemand, Peer, HeaderRef};
+use super::{request, OnDemand, OnDemandRequester, Peer, HeaderRef};
 
 // useful contexts to give the service.
 enum Context {
@@ -275,8 +275,7 @@ mod tests {
 	}
 
 	#[test]
-	#[should_panic]
-	fn batch_tx_index_backreference_wrong_output() {
+	fn batch_tx_index_backreference_public_api() {
 		let mut builder = Builder::default();
 		builder.push(Request::HeaderProof(IncompleteHeaderProofRequest {
 			num: 100.into(), // header proof puts hash at output 0.
@@ -286,11 +285,16 @@ mod tests {
 		})).unwrap();
 
 		let mut batch = builder.build();
 		batch.requests[1].fill(|_req_idx, _out_idx| Ok(Output::Number(42)));
 
 		batch.next_complete();
 		batch.answered += 1;
 		batch.next_complete();
 		assert!(batch.next_complete().is_some());
 		let hdr_proof_res = header_proof::Response {
 			proof: vec![],
 			hash: 12.into(),
 			td: 21.into(),
 		};
 		batch.supply_response_unchecked(&hdr_proof_res);
 
 		assert!(batch.next_complete().is_some());
 	}
 
 	#[test]
@@ -310,23 +314,4 @@ mod tests {
 		batch.answered += 1;
 		assert!(batch.next_complete().is_some());
 	}
-
-	#[test]
-	#[should_panic]
-	fn batch_receipts_backreference_wrong_output() {
-		let mut builder = Builder::default();
-		builder.push(Request::HeaderProof(IncompleteHeaderProofRequest {
-			num: 100.into(), // header proof puts hash at output 0.
-		})).unwrap();
-		builder.push(Request::Receipts(IncompleteReceiptsRequest {
-			hash: Field::BackReference(0, 0),
-		})).unwrap();
-
-		let mut batch = builder.build();
-		batch.requests[1].fill(|_req_idx, _out_idx| Ok(Output::Number(42)));
-
-		batch.next_complete();
-		batch.answered += 1;
-		batch.next_complete();
-	}
 }
```

@@ -277,13 +277,12 @@ impl Provider {

 	fn pool_client<'a>(&'a self, nonce_cache: &'a NonceCache, local_accounts: &'a HashSet<Address>) -> miner::pool_client::PoolClient<'a, Client> {
 		let engine = self.client.engine();
-		let refuse_service_transactions = true;
 		miner::pool_client::PoolClient::new(
 			&*self.client,
 			nonce_cache,
 			engine,
 			local_accounts,
-			refuse_service_transactions,
+			None, // refuse_service_transactions = true
 		)
 	}

@@ -12,7 +12,7 @@
 		"ecip1010PauseTransition": "0x2dc6c0",
 		"ecip1010ContinueTransition": "0x4c4b40",
 		"ecip1017EraRounds": "0x4c4b40",
-		"eip100bTransition": "0x7fffffffffffffff",
+		"eip100bTransition": "0x85d9a0",
 		"bombDefuseTransition": "0x5a06e0"
 	}
 }
@@ -29,15 +29,15 @@
 		"forkCanonHash": "0x94365e3a8c0b35089c1d1195081fe7489b528a84b22199c916180db8b28ade7f",
 		"eip150Transition": "0x2625a0",
 		"eip160Transition": "0x2dc6c0",
-		"eip161abcTransition": "0x7fffffffffffffff",
-		"eip161dTransition": "0x7fffffffffffffff",
+		"eip161abcTransition": "0x85d9a0",
+		"eip161dTransition": "0x85d9a0",
 		"eip155Transition": "0x2dc6c0",
 		"maxCodeSize": "0x6000",
-		"maxCodeSizeTransition": "0x7fffffffffffffff",
-		"eip140Transition": "0x7fffffffffffffff",
-		"eip211Transition": "0x7fffffffffffffff",
-		"eip214Transition": "0x7fffffffffffffff",
-		"eip658Transition": "0x7fffffffffffffff"
+		"maxCodeSizeTransition": "0x85d9a0",
+		"eip140Transition": "0x85d9a0",
+		"eip211Transition": "0x85d9a0",
+		"eip214Transition": "0x85d9a0",
+		"eip658Transition": "0x85d9a0"
 	},
 	"genesis": {
 		"seal": {
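The hunks above replace the `0x7fffffffffffffff` placeholder with a concrete activation block, `0x85d9a0` (8,772,000). A transition value gates a rule by block number, and the far-future placeholder effectively means "never". A minimal sketch of that check; the names here are illustrative, not the parity-ethereum API:

```rust
// Sketch of how per-EIP transition blocks gate behaviour: a feature is
// active from its transition block onward, and 0x7fffffffffffffff is a
// far-future sentinel that effectively disables it.
const NEVER: u64 = 0x7fff_ffff_ffff_ffff;

fn is_active(transition: u64, block: u64) -> bool {
    block >= transition
}

fn main() {
    let eip658_transition = 0x85d9a0; // 8_772_000, the new value above
    assert_eq!(eip658_transition, 8_772_000);
    assert!(!is_active(eip658_transition, 8_771_999));
    assert!(is_active(eip658_transition, 8_772_000));
    // With the old sentinel the feature never activated in practice.
    assert!(!is_active(NEVER, 8_772_000));
}
```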
@@ -3905,7 +3905,7 @@
 		"0x0000000000000000000000000000000000000005": {
 			"builtin": {
 				"name": "modexp",
-				"activate_at": "0x7fffffffffffffff",
+				"activate_at": "0x85d9a0",
 				"pricing": {
 					"modexp": {
 						"divisor": 20
@@ -3916,7 +3916,7 @@
 		"0x0000000000000000000000000000000000000006": {
 			"builtin": {
 				"name": "alt_bn128_add",
-				"activate_at": "0x7fffffffffffffff",
+				"activate_at": "0x85d9a0",
 				"pricing": {
 					"linear": {
 						"base": 500,
@@ -3928,7 +3928,7 @@
 		"0x0000000000000000000000000000000000000007": {
 			"builtin": {
 				"name": "alt_bn128_mul",
-				"activate_at": "0x7fffffffffffffff",
+				"activate_at": "0x85d9a0",
 				"pricing": {
 					"linear": {
 						"base": 40000,
@@ -3940,7 +3940,7 @@
 		"0x0000000000000000000000000000000000000008": {
 			"builtin": {
 				"name": "alt_bn128_pairing",
-				"activate_at": "0x7fffffffffffffff",
+				"activate_at": "0x85d9a0",
 				"pricing": {
 					"alt_bn128_pairing": {
 						"base": 100000,
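The `linear` pricing entries in these specs charge `base` plus `word` per 32-byte word of input. A minimal sketch of that formula under the conventional ceiling-division reading; `linear_cost` is an illustrative name, not the actual builtin code:

```rust
// Sketch of the "linear" builtin pricing used in these chain specs:
// cost = base + word * ceil(input_len / 32), with base/word per builtin.
fn linear_cost(base: u64, word: u64, input_len: u64) -> u64 {
    base + word * ((input_len + 31) / 32)
}

fn main() {
    // sha256 in the spec: base 60, word 12.
    assert_eq!(linear_cost(60, 12, 0), 60);
    assert_eq!(linear_cost(60, 12, 32), 72);
    assert_eq!(linear_cost(60, 12, 33), 84); // partial word rounds up
    // alt_bn128_add: base 500, word 0 -> flat cost regardless of input.
    assert_eq!(linear_cost(500, 0, 128), 500);
}
```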
917
ethcore/res/ethereum/goerli.json
Normal file
@@ -0,0 +1,917 @@
{
	"name": "Görli Testnet",
	"dataDir": "goerli",
	"engine": {
		"clique": {
			"params": {
				"period": 15,
				"epoch": 30000
			}
		}
	},
	"params": {
		"accountStartNonce": "0x0",
		"chainID": "0x5",
		"eip140Transition": "0x0",
		"eip145Transition": "0x0",
		"eip150Transition": "0x0",
		"eip155Transition": "0x0",
		"eip160Transition": "0x0",
		"eip161abcTransition": "0x0",
		"eip161dTransition": "0x0",
		"eip211Transition": "0x0",
		"eip214Transition": "0x0",
		"eip658Transition": "0x0",
		"eip1014Transition": "0x0",
		"eip1052Transition": "0x0",
		"eip1283Transition": "0x0",
		"eip1283DisableTransition": "0x0",
		"gasLimitBoundDivisor": "0x400",
		"maxCodeSize": "0x6000",
		"maxCodeSizeTransition": "0x0",
		"maximumExtraDataSize": "0xffff",
		"minGasLimit": "0x1388",
		"networkID": "0x5"
	},
	"genesis": {
		"author": "0x0000000000000000000000000000000000000000",
		"difficulty": "0x1",
		"extraData": "0x22466c6578692069732061207468696e6722202d204166726900000000000000e0a2bd4258d2768837baa26a28fe71dc079f84c70000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
		"gasLimit": "0xa00000",
		"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
		"seal": {
			"ethereum": {
				"nonce": "0x0000000000000000",
				"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
			}
		},
		"timestamp": "0x5c51a607"
	},
	"nodes": [
		"enode://06333009fc9ef3c9e174768e495722a7f98fe7afd4660542e983005f85e556028410fd03278944f44cfe5437b1750b5e6bd1738f700fe7da3626d52010d2954c@51.141.15.254:30303",
		"enode://176b9417f511d05b6b2cf3e34b756cf0a7096b3094572a8f6ef4cdcb9d1f9d00683bf0f83347eebdf3b81c3521c2332086d9592802230bf528eaf606a1d9677b@13.93.54.137:30303",
		"enode://573b6607cd59f241e30e4c4943fd50e99e2b6f42f9bd5ca111659d309c06741247f4f1e93843ad3e8c8c18b6e2d94c161b7ef67479b3938780a97134b618b5ce@52.56.136.200:30303",
		"enode://67913271d14f445689e8310270c304d42f268428f2de7a4ac0275bea97690e021df6f549f462503ff4c7a81d9dd27288867bbfa2271477d0911378b8944fae55@157.230.239.163:30303",
		"enode://a87685902a0622e9cf18c68e73a0ea45156ec53e857ef049b185a9db2296ca04d776417bf1901c0b4eacb5b26271d8694e88e3f17c20d49eb77e1a41ab26b5b3@51.141.78.53:30303",
		"enode://ae8658da8d255d1992c3ec6e62e11d6e1c5899aa1566504bc1ff96a0c9c8bd44838372be643342553817f5cc7d78f1c83a8093dee13d77b3b0a583c050c81940@18.232.185.151:30303",
		"enode://b477ca6d507a3f57070783eb62ba838847635f8b1a0cbffb8b7f8173f5894cf550f0225a5c279341e2d862a606e778b57180a4f1db3db78c51eadcfa4fdc6963@40.68.240.160:30303"
	],
	"accounts": {
		"0x0000000000000000000000000000000000000000": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000001": { "balance": "0x1", "builtin": { "name": "ecrecover", "pricing": { "linear": { "base": 3000, "word": 0 } } } },
		"0x0000000000000000000000000000000000000002": { "balance": "0x1", "builtin": { "name": "sha256", "pricing": { "linear": { "base": 60, "word": 12 } } } },
		"0x0000000000000000000000000000000000000003": { "balance": "0x1", "builtin": { "name": "ripemd160", "pricing": { "linear": { "base": 600, "word": 120 } } } },
		"0x0000000000000000000000000000000000000004": { "balance": "0x1", "builtin": { "name": "identity", "pricing": { "linear": { "base": 15, "word": 3 } } } },
		"0x0000000000000000000000000000000000000005": { "balance": "0x1", "builtin": { "name": "modexp", "activate_at": "0x0", "pricing": { "modexp": { "divisor": 20 } } } },
		"0x0000000000000000000000000000000000000006": { "balance": "0x1", "builtin": { "name": "alt_bn128_add", "activate_at": "0x0", "pricing": { "linear": { "base": 500, "word": 0 } } } },
		"0x0000000000000000000000000000000000000007": { "balance": "0x1", "builtin": { "name": "alt_bn128_mul", "activate_at": "0x0", "pricing": { "linear": { "base": 40000, "word": 0 } } } },
		"0x0000000000000000000000000000000000000008": { "balance": "0x1", "builtin": { "name": "alt_bn128_pairing", "activate_at": "0x0", "pricing": { "alt_bn128_pairing": { "base": 100000, "pair": 80000 } } } },
		"0x0000000000000000000000000000000000000009": { "balance": "0x1" },
		"0x000000000000000000000000000000000000000a": { "balance": "0x1" },
		"0x000000000000000000000000000000000000000b": { "balance": "0x1" },
		"0x000000000000000000000000000000000000000c": { "balance": "0x1" },
		"0x000000000000000000000000000000000000000d": { "balance": "0x1" },
		"0x000000000000000000000000000000000000000e": { "balance": "0x1" },
		"0x000000000000000000000000000000000000000f": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000010": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000011": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000012": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000013": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000014": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000015": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000016": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000017": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000018": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000019": { "balance": "0x1" },
		"0x000000000000000000000000000000000000001a": { "balance": "0x1" },
		"0x000000000000000000000000000000000000001b": { "balance": "0x1" },
		"0x000000000000000000000000000000000000001c": { "balance": "0x1" },
		"0x000000000000000000000000000000000000001d": { "balance": "0x1" },
		"0x000000000000000000000000000000000000001e": { "balance": "0x1" },
		"0x000000000000000000000000000000000000001f": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000020": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000021": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000022": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000023": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000024": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000025": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000026": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000027": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000028": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000029": { "balance": "0x1" },
		"0x000000000000000000000000000000000000002a": { "balance": "0x1" },
		"0x000000000000000000000000000000000000002b": { "balance": "0x1" },
		"0x000000000000000000000000000000000000002c": { "balance": "0x1" },
		"0x000000000000000000000000000000000000002d": { "balance": "0x1" },
		"0x000000000000000000000000000000000000002e": { "balance": "0x1" },
		"0x000000000000000000000000000000000000002f": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000030": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000031": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000032": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000033": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000034": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000035": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000036": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000037": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000038": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000039": { "balance": "0x1" },
		"0x000000000000000000000000000000000000003a": { "balance": "0x1" },
		"0x000000000000000000000000000000000000003b": { "balance": "0x1" },
		"0x000000000000000000000000000000000000003c": { "balance": "0x1" },
		"0x000000000000000000000000000000000000003d": { "balance": "0x1" },
		"0x000000000000000000000000000000000000003e": { "balance": "0x1" },
		"0x000000000000000000000000000000000000003f": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000040": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000041": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000042": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000043": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000044": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000045": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000046": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000047": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000048": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000049": { "balance": "0x1" },
		"0x000000000000000000000000000000000000004a": { "balance": "0x1" },
		"0x000000000000000000000000000000000000004b": { "balance": "0x1" },
		"0x000000000000000000000000000000000000004c": { "balance": "0x1" },
		"0x000000000000000000000000000000000000004d": { "balance": "0x1" },
		"0x000000000000000000000000000000000000004e": { "balance": "0x1" },
		"0x000000000000000000000000000000000000004f": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000050": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000051": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000052": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000053": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000054": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000055": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000056": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000057": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000058": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000059": { "balance": "0x1" },
		"0x000000000000000000000000000000000000005a": { "balance": "0x1" },
		"0x000000000000000000000000000000000000005b": { "balance": "0x1" },
		"0x000000000000000000000000000000000000005c": { "balance": "0x1" },
		"0x000000000000000000000000000000000000005d": { "balance": "0x1" },
		"0x000000000000000000000000000000000000005e": { "balance": "0x1" },
		"0x000000000000000000000000000000000000005f": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000060": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000061": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000062": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000063": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000064": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000065": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000066": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000067": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000068": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000069": { "balance": "0x1" },
		"0x000000000000000000000000000000000000006a": { "balance": "0x1" },
		"0x000000000000000000000000000000000000006b": { "balance": "0x1" },
		"0x000000000000000000000000000000000000006c": { "balance": "0x1" },
		"0x000000000000000000000000000000000000006d": { "balance": "0x1" },
		"0x000000000000000000000000000000000000006e": { "balance": "0x1" },
		"0x000000000000000000000000000000000000006f": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000070": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000071": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000072": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000073": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000074": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000075": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000076": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000077": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000078": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000079": { "balance": "0x1" },
		"0x000000000000000000000000000000000000007a": { "balance": "0x1" },
		"0x000000000000000000000000000000000000007b": { "balance": "0x1" },
		"0x000000000000000000000000000000000000007c": { "balance": "0x1" },
		"0x000000000000000000000000000000000000007d": { "balance": "0x1" },
		"0x000000000000000000000000000000000000007e": { "balance": "0x1" },
		"0x000000000000000000000000000000000000007f": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000080": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000081": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000082": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000083": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000084": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000085": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000086": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000087": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000088": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000089": { "balance": "0x1" },
		"0x000000000000000000000000000000000000008a": { "balance": "0x1" },
		"0x000000000000000000000000000000000000008b": { "balance": "0x1" },
		"0x000000000000000000000000000000000000008c": { "balance": "0x1" },
		"0x000000000000000000000000000000000000008d": { "balance": "0x1" },
		"0x000000000000000000000000000000000000008e": { "balance": "0x1" },
		"0x000000000000000000000000000000000000008f": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000090": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000091": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000092": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000093": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000094": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000095": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000096": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000097": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000098": { "balance": "0x1" },
		"0x0000000000000000000000000000000000000099": { "balance": "0x1" },
		"0x000000000000000000000000000000000000009a": { "balance": "0x1" },
		"0x000000000000000000000000000000000000009b": { "balance": "0x1" },
		"0x000000000000000000000000000000000000009c": { "balance": "0x1" },
		"0x000000000000000000000000000000000000009d": { "balance": "0x1" },
		"0x000000000000000000000000000000000000009e": { "balance": "0x1" },
		"0x000000000000000000000000000000000000009f": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000a0": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000a1": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000a2": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000a3": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000a4": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000a5": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000a6": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000a7": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000a8": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000a9": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000aa": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ab": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ac": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ad": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ae": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000af": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000b0": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000b1": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000b2": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000b3": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000b4": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000b5": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000b6": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000b7": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000b8": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000b9": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ba": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000bb": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000bc": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000bd": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000be": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000bf": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000c0": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000c1": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000c2": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000c3": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000c4": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000c5": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000c6": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000c7": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000c8": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000c9": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ca": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000cb": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000cc": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000cd": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ce": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000cf": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000d0": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000d1": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000d2": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000d3": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000d4": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000d5": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000d6": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000d7": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000d8": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000d9": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000da": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000db": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000dc": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000dd": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000de": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000df": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000e0": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000e1": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000e2": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000e3": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000e4": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000e5": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000e6": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000e7": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000e8": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000e9": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ea": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000eb": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ec": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ed": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ee": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ef": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000f0": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000f1": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000f2": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000f3": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000f4": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000f5": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000f6": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000f7": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000f8": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000f9": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000fa": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000fb": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000fc": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000fd": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000fe": { "balance": "0x1" },
		"0x00000000000000000000000000000000000000ff": { "balance": "0x1" },
		"0x4c2ae482593505f0163cdefc073e81c63cda4107": { "balance": "0x152d02c7e14af6800000" },
		"0xa8e8f14732658e4b51e8711931053a8a69baf2b1": { "balance": "0x152d02c7e14af6800000" },
		"0xd9a5179f091d85051d3c982785efd1455cec8699": { "balance": "0x84595161401484a000000" },
		"0xe0a2bd4258d2768837baa26a28fe71dc079f84c7": { "balance": "0x4a47e3c12448f4ad000000" }
	}
}
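Both new testnet specs configure the Clique engine with `period: 15` and `epoch: 30000`. Under EIP-225 semantics, `period` is the target seconds between blocks and every `epoch`-th block is a checkpoint (epoch transition) at which the vote tally resets. A small sketch of those two parameters; the function names are illustrative, not the engine's actual code:

```rust
// Sketch of the Clique engine parameters from the Görli/Kotti specs:
// `period` = target block interval (seconds), `epoch` = checkpoint interval.
fn is_epoch_transition(block: u64, epoch: u64) -> bool {
    block % epoch == 0
}

fn main() {
    let (period, epoch) = (15u64, 30_000u64);
    // Blocks 0, 30000, 60000, ... are checkpoint blocks.
    assert!(is_epoch_transition(0, epoch));
    assert!(is_epoch_transition(30_000, epoch));
    assert!(!is_epoch_transition(30_001, epoch));
    // At a 15-second period, one epoch spans 450_000 seconds, i.e. 125 hours.
    assert_eq!(period * epoch / 3600, 125);
}
```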
902
ethcore/res/ethereum/kotti.json
Normal file
@@ -0,0 +1,902 @@
{
  "name": "Kotti Testnet",
  "dataDir": "kotti",
  "engine": {
    "clique": {
      "params": {
        "period": 15,
        "epoch": 30000
      }
    }
  },
  "params": {
    "accountStartNonce": "0x0",
    "chainID": "0x6",
    "eip140Transition": "0xaef49",
    "eip150Transition": "0x0",
    "eip155Transition": "0x0",
    "eip160Transition": "0x0",
    "eip161abcTransition": "0xaef49",
    "eip161dTransition": "0xaef49",
    "eip211Transition": "0xaef49",
    "eip214Transition": "0xaef49",
    "eip658Transition": "0xaef49",
    "gasLimitBoundDivisor": "0x400",
    "maxCodeSize": "0x6000",
    "maxCodeSizeTransition": "0xaef49",
    "maximumExtraDataSize": "0xffff",
    "minGasLimit": "0x1388",
    "networkID": "0x6"
  },
  "genesis": {
    "author": "0x0000000000000000000000000000000000000000",
    "difficulty": "0x1",
    "extraData": "0x000000000000000000000000000000000000000000000000000000000000000025b7955e43adf9c2a01a9475908702cce67f302a6aaf8cba3c9255a2b863415d4db7bae4f4bbca020000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "gasLimit": "0xa00000",
    "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "seal": {
      "ethereum": {
        "nonce": "0x0000000000000000",
        "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
      }
    },
    "timestamp": "0x5c2d2287"
  },
  "nodes": [
    "enode://06333009fc9ef3c9e174768e495722a7f98fe7afd4660542e983005f85e556028410fd03278944f44cfe5437b1750b5e6bd1738f700fe7da3626d52010d2954c@51.141.15.254:30303",
    "enode://93c94e999be5dd854c5d82a7cf5c14822973b5d9badb56ad4974586ec4d4f1995c815af795c20bb6e0a6226d3ee55808435c4dc89baf94ee581141b064d19dfc@80.187.116.161:25720",
    "enode://ae8658da8d255d1992c3ec6e62e11d6e1c5899aa1566504bc1ff96a0c9c8bd44838372be643342553817f5cc7d78f1c83a8093dee13d77b3b0a583c050c81940@18.232.185.151:30303",
    "enode://b477ca6d507a3f57070783eb62ba838847635f8b1a0cbffb8b7f8173f5894cf550f0225a5c279341e2d862a606e778b57180a4f1db3db78c51eadcfa4fdc6963@40.68.240.160:30303"
  ],
  "accounts": {
    "0x0000000000000000000000000000000000000000": { "balance": "0x1" },
    "0x0000000000000000000000000000000000000001": {
      "balance": "0x1",
      "builtin": {
        "name": "ecrecover",
        "pricing": { "linear": { "base": 3000, "word": 0 } }
      }
    },
    "0x0000000000000000000000000000000000000002": {
      "balance": "0x1",
      "builtin": {
        "name": "sha256",
        "pricing": { "linear": { "base": 60, "word": 12 } }
      }
    },
    "0x0000000000000000000000000000000000000003": {
      "balance": "0x1",
      "builtin": {
        "name": "ripemd160",
        "pricing": { "linear": { "base": 600, "word": 120 } }
      }
    },
    "0x0000000000000000000000000000000000000004": {
      "balance": "0x1",
      "builtin": {
        "name": "identity",
        "pricing": { "linear": { "base": 15, "word": 3 } }
      }
    },
    "0x0000000000000000000000000000000000000005": {
      "balance": "0x1",
      "builtin": {
        "name": "modexp",
        "activate_at": "0xaef49",
        "pricing": { "modexp": { "divisor": 20 } }
      }
    },
    "0x0000000000000000000000000000000000000006": {
      "balance": "0x1",
      "builtin": {
        "name": "alt_bn128_add",
        "activate_at": "0xaef49",
        "pricing": { "linear": { "base": 500, "word": 0 } }
      }
    },
    "0x0000000000000000000000000000000000000007": {
      "balance": "0x1",
      "builtin": {
        "name": "alt_bn128_mul",
        "activate_at": "0xaef49",
        "pricing": { "linear": { "base": 40000, "word": 0 } }
      }
    },
    "0x0000000000000000000000000000000000000008": {
      "balance": "0x1",
      "builtin": {
        "name": "alt_bn128_pairing",
        "activate_at": "0xaef49",
        "pricing": { "alt_bn128_pairing": { "base": 100000, "pair": 80000 } }
      }
    },
"0x0000000000000000000000000000000000000009": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000000a": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000000b": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000000c": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000000d": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000000e": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000000f": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000010": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000011": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000012": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000013": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000014": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000015": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000016": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000017": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000018": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000019": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000001a": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000001b": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000001c": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000001d": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000001e": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000001f": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000020": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000021": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000022": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000023": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000024": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000025": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000026": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000027": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000028": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000029": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000002a": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000002b": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000002c": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000002d": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000002e": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000002f": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000030": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000031": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000032": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000033": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000034": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000035": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000036": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000037": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000038": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000039": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000003a": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000003b": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000003c": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000003d": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000003e": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000003f": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000040": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000041": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000042": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000043": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000044": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000045": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000046": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000047": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000048": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000049": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000004a": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000004b": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000004c": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000004d": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000004e": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000004f": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000050": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000051": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000052": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000053": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000054": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000055": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000056": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000057": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000058": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000059": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000005a": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000005b": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000005c": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000005d": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000005e": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000005f": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000060": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000061": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000062": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000063": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000064": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000065": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000066": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000067": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000068": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000069": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000006a": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000006b": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000006c": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000006d": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000006e": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000006f": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000070": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000071": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000072": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000073": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000074": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000075": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000076": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000077": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000078": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000079": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000007a": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000007b": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000007c": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000007d": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000007e": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000007f": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000080": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000081": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000082": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000083": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000084": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000085": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000086": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000087": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000088": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000089": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000008a": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000008b": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000008c": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000008d": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000008e": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000008f": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000090": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000091": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000092": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000093": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000094": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000095": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000096": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000097": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000098": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x0000000000000000000000000000000000000099": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000009a": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000009b": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000009c": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000009d": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000009e": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x000000000000000000000000000000000000009f": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000a0": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000a1": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000a2": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000a3": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000a4": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000a5": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000a6": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000a7": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000a8": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000a9": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000aa": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ab": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ac": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ad": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ae": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000af": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000b0": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000b1": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000b2": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000b3": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000b4": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000b5": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000b6": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000b7": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000b8": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000b9": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ba": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000bb": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000bc": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000bd": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000be": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000bf": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000c0": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000c1": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000c2": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000c3": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000c4": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000c5": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000c6": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000c7": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000c8": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000c9": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ca": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000cb": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000cc": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000cd": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ce": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000cf": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000d0": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000d1": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000d2": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000d3": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000d4": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000d5": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000d6": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000d7": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000d8": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000d9": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000da": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000db": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000dc": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000dd": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000de": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000df": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000e0": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000e1": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000e2": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000e3": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000e4": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000e5": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000e6": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000e7": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000e8": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000e9": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ea": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000eb": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ec": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ed": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ee": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ef": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000f0": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000f1": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000f2": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000f3": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000f4": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000f5": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000f6": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000f7": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000f8": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000f9": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000fa": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000fb": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000fc": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000fd": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000fe": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x00000000000000000000000000000000000000ff": {
|
||||
"balance": "0x1"
|
||||
},
|
||||
"0x25b7955e43adf9c2a01a9475908702cce67f302a": {
|
||||
"balance": "0x84595161401484a000000"
|
||||
},
|
||||
"0x6aaf8cba3c9255a2b863415d4db7bae4f4bbca02": {
|
||||
"balance": "0x4a723dc6b40b8a9a000000"
|
||||
}
|
||||
}
|
||||
}
|
||||
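The Kotti genesis `extraData` above follows the Clique layout: a 32-byte vanity prefix, then N concatenated 20-byte signer addresses, then a 65-byte seal. A minimal decoding sketch, assuming that standard layout (the helper name is illustrative):

```python
def clique_signers(extra_data_hex: str) -> list[str]:
    # Clique extraData = 32-byte vanity || N * 20-byte signers || 65-byte seal.
    raw = bytes.fromhex(extra_data_hex[2:])  # strip "0x"
    body = raw[32:-65]                       # drop vanity prefix and seal suffix
    assert len(body) % 20 == 0, "signer section must be whole 20-byte addresses"
    return ["0x" + body[i:i + 20].hex() for i in range(0, len(body), 20)]

# extraData from the Kotti genesis above: zero vanity, two signers, zero seal
kotti_extra = (
    "0x" + "00" * 32
    + "25b7955e43adf9c2a01a9475908702cce67f302a"
    + "6aaf8cba3c9255a2b863415d4db7bae4f4bbca02"
    + "00" * 65
)
print(clique_signers(kotti_extra))
```

Note that the two recovered signers match the two funded accounts in the Kotti `accounts` section.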
903  ethcore/res/ethereum/rinkeby.json  Normal file
@@ -0,0 +1,903 @@
{
  "name": "Rinkeby",
  "dataDir": "rinkeby",
  "engine": {
    "clique": {
      "params": {
        "period": 15,
        "epoch": 30000
      }
    }
  },
  "params": {
    "accountStartNonce": "0x0",
    "chainID": "0x4",
    "eip140Transition": "0xfcc25",
    "eip145Transition": "0x37db77",
    "eip150Transition": "0x2",
    "eip155Transition": "0x3",
    "eip160Transition": "0x0",
    "eip161abcTransition": "0x0",
    "eip161dTransition": "0x0",
    "eip211Transition": "0xfcc25",
    "eip214Transition": "0xfcc25",
    "eip658Transition": "0xfcc25",
    "eip1014Transition": "0x37db77",
    "eip1052Transition": "0x37db77",
    "eip1283Transition": "0x37db77",
    "eip1283DisableTransition": "0x41efd2",
    "gasLimitBoundDivisor": "0x400",
    "maxCodeSize": "0x6000",
    "maxCodeSizeTransition": "0x0",
    "maximumExtraDataSize": "0xffff",
    "minGasLimit": "0x1388",
    "networkID": "0x4"
  },
  "genesis": {
    "author": "0x0000000000000000000000000000000000000000",
    "difficulty": "0x1",
    "extraData": "0x52657370656374206d7920617574686f7269746168207e452e436172746d616e42eb768f2244c8811c63729a21a3569731535f067ffc57839b00206d1ad20c69a1981b489f772031b279182d99e65703f0076e4812653aab85fca0f00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "gasLimit": "0x47b760",
    "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "seal": {
      "ethereum": {
        "nonce": "0x0000000000000000",
        "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
      }
    },
    "timestamp": "0x58ee40ba"
  },
  "nodes": [
    "enode://a24ac7c5484ef4ed0c5eb2d36620ba4e4aa13b8c84684e1b4aab0cebea2ae45cb4d375b77eab56516d34bfbd3c1a833fc51296ff084b770b94fb9028c4d25ccf@52.169.42.101:30303",
    "enode://343149e4feefa15d882d9fe4ac7d88f885bd05ebb735e547f12e12080a9fa07c8014ca6fd7f373123488102fe5e34111f8509cf0b7de3f5b44339c9f25e87cb8@52.3.158.184:30303",
    "enode://b6b28890b006743680c52e64e0d16db57f28124885595fa03a562be1d2bf0f3a1da297d56b13da25fb992888fd556d4c1a27b1f39d531bde7de1921c90061cc6@159.89.28.211:30303"
  ],
  "accounts": {
    "0x0000000000000000000000000000000000000000": { "balance": "0x1" },
    "0x0000000000000000000000000000000000000001": {
      "balance": "0x1",
      "builtin": {
        "name": "ecrecover",
        "pricing": { "linear": { "base": 3000, "word": 0 } }
      }
    },
    "0x0000000000000000000000000000000000000002": {
      "balance": "0x1",
      "builtin": {
        "name": "sha256",
        "pricing": { "linear": { "base": 60, "word": 12 } }
      }
    },
    "0x0000000000000000000000000000000000000003": {
      "balance": "0x1",
      "builtin": {
        "name": "ripemd160",
        "pricing": { "linear": { "base": 600, "word": 120 } }
      }
    },
    "0x0000000000000000000000000000000000000004": {
      "balance": "0x1",
      "builtin": {
        "name": "identity",
        "pricing": { "linear": { "base": 15, "word": 3 } }
      }
    },
    "0x0000000000000000000000000000000000000005": {
      "balance": "0x1",
      "builtin": {
        "name": "modexp",
        "activate_at": "0xfcc25",
        "pricing": { "modexp": { "divisor": 20 } }
      }
    },
    "0x0000000000000000000000000000000000000006": {
      "balance": "0x1",
      "builtin": {
        "name": "alt_bn128_add",
        "activate_at": "0xfcc25",
        "pricing": { "linear": { "base": 500, "word": 0 } }
      }
    },
    "0x0000000000000000000000000000000000000007": {
      "balance": "0x1",
      "builtin": {
        "name": "alt_bn128_mul",
        "activate_at": "0xfcc25",
        "pricing": { "linear": { "base": 40000, "word": 0 } }
      }
    },
    "0x0000000000000000000000000000000000000008": {
      "balance": "0x1",
      "builtin": {
        "name": "alt_bn128_pairing",
        "activate_at": "0xfcc25",
        "pricing": { "alt_bn128_pairing": { "base": 100000, "pair": 80000 } }
      }
    },
		"0x0000000000000000000000000000000000000009": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000000a": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000000b": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000000c": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000000d": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000000e": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000000f": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000010": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000011": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000012": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000013": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000014": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000015": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000016": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000017": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000018": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000019": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000001a": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000001b": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000001c": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000001d": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000001e": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000001f": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000020": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000021": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000022": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000023": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000024": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000025": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000026": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000027": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000028": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000029": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000002a": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000002b": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000002c": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000002d": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000002e": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000002f": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000030": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000031": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000032": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000033": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000034": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000035": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000036": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000037": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000038": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000039": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000003a": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000003b": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000003c": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000003d": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000003e": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000003f": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000040": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000041": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000042": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000043": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000044": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000045": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000046": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000047": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000048": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000049": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000004a": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000004b": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000004c": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000004d": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000004e": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000004f": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000050": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000051": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000052": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000053": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000054": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000055": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000056": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000057": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000058": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000059": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000005a": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000005b": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000005c": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000005d": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000005e": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000005f": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000060": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000061": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000062": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000063": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000064": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000065": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000066": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000067": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000068": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000069": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000006a": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000006b": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000006c": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000006d": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000006e": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000006f": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000070": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000071": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000072": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000073": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000074": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000075": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000076": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000077": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000078": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000079": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000007a": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000007b": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000007c": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000007d": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000007e": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000007f": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000080": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000081": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000082": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000083": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000084": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000085": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000086": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000087": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000088": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000089": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000008a": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000008b": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000008c": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000008d": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000008e": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000008f": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000090": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000091": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000092": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000093": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000094": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000095": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000096": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000097": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000098": {
			"balance": "0x1"
		},
		"0x0000000000000000000000000000000000000099": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000009a": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000009b": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000009c": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000009d": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000009e": {
			"balance": "0x1"
		},
		"0x000000000000000000000000000000000000009f": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000a0": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000a1": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000a2": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000a3": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000a4": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000a5": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000a6": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000a7": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000a8": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000a9": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000aa": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ab": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ac": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ad": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ae": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000af": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000b0": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000b1": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000b2": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000b3": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000b4": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000b5": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000b6": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000b7": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000b8": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000b9": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ba": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000bb": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000bc": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000bd": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000be": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000bf": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000c0": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000c1": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000c2": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000c3": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000c4": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000c5": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000c6": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000c7": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000c8": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000c9": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ca": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000cb": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000cc": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000cd": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ce": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000cf": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000d0": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000d1": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000d2": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000d3": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000d4": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000d5": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000d6": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000d7": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000d8": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000d9": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000da": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000db": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000dc": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000dd": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000de": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000df": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000e0": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000e1": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000e2": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000e3": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000e4": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000e5": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000e6": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000e7": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000e8": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000e9": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ea": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000eb": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ec": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ed": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ee": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ef": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000f0": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000f1": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000f2": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000f3": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000f4": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000f5": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000f6": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000f7": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000f8": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000f9": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000fa": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000fb": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000fc": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000fd": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000fe": {
			"balance": "0x1"
		},
		"0x00000000000000000000000000000000000000ff": {
			"balance": "0x1"
		},
		"0x31b98d14007bdee637298086988a0bbd31184523": {
			"balance": "0x200000000000000000000000000000000000000000000000000000000000000"
		}
	}
}
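The `linear` pricing blocks in the spec fragment above parameterize a builtin's gas charge as `base + word * ceil(input_len / 32)`; `alt_bn128_mul` therefore costs a flat 40000 gas because its `word` term is zero. A minimal sketch of that formula (the function name `linear_cost` is illustrative, not an ethcore API):

```rust
// Gas charge for a "linear"-priced builtin, as configured in the spec JSON:
// cost = base + word * ceil(input_len / 32).
fn linear_cost(base: u64, word: u64, input_len: u64) -> u64 {
    base + word * ((input_len + 31) / 32)
}

fn main() {
    // alt_bn128_mul above: base = 40000, word = 0 -> flat 40000 gas.
    assert_eq!(linear_cost(40_000, 0, 96), 40_000);
    // A hypothetical word-priced builtin: base 15, word 3, 64-byte input.
    assert_eq!(linear_cost(15, 3, 64), 21);
    println!("ok");
}
```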
@@ -14,6 +14,10 @@
 // You should have received a copy of the GNU General Public License
 // along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.
 
+// Silence: `use of deprecated item 'std::error::Error::cause': replaced by Error::source, which can support downcasting`
+// https://github.com/paritytech/parity-ethereum/issues/10302
+#![allow(deprecated)]
+
 use ethcore;
 use io;
 use ethcore_private_tx;
@@ -30,8 +30,10 @@ use blockchain::{BlockChainDB, BlockChainDBHandler};
 use ethcore::client::{Client, ClientConfig, ChainNotify, ClientIoMessage};
 use ethcore::miner::Miner;
 use ethcore::snapshot::service::{Service as SnapshotService, ServiceParams as SnapServiceParams};
-use ethcore::snapshot::{SnapshotService as _SnapshotService, RestorationStatus};
+use ethcore::snapshot::{SnapshotService as _SnapshotService, RestorationStatus, Error as SnapshotError};
 use ethcore::spec::Spec;
+use ethcore::error::{Error as EthcoreError, ErrorKind};
+
 
 use ethcore_private_tx::{self, Importer, Signer};
 use Error;
@@ -197,6 +199,7 @@ impl ClientService {
 
	/// Shutdown the Client Service
	pub fn shutdown(&self) {
+		trace!(target: "shutdown", "Shutting down Client Service");
		self.snapshot.shutdown();
	}
 }
@@ -257,7 +260,11 @@ impl IoHandler<ClientIoMessage> for ClientIoHandler {
 
		let res = thread::Builder::new().name("Periodic Snapshot".into()).spawn(move || {
			if let Err(e) = snapshot.take_snapshot(&*client, num) {
-				warn!("Failed to take snapshot at block #{}: {}", num, e);
+				match e {
+					EthcoreError(ErrorKind::Snapshot(SnapshotError::SnapshotAborted), _) => info!("Snapshot aborted"),
+					_ => warn!("Failed to take snapshot at block #{}: {}", num, e),
+				}
+
			}
		});
 
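The snapshot hunk above downgrades an expected abort from `warn!` to `info!` by matching on the nested error kind. The shape of that match, with stand-in types rather than the real `EthcoreError`/`ErrorKind` definitions:

```rust
// Illustrative sketch (not the real ethcore types) of the pattern above:
// match a nested error kind so an expected cancellation logs at a lower
// severity than a genuine failure.
#[derive(Debug)]
enum SnapshotError {
    SnapshotAborted,
    Io(String),
}

#[derive(Debug)]
enum ErrorKind {
    Snapshot(SnapshotError),
}

fn describe(e: &ErrorKind) -> &'static str {
    match e {
        // Expected cancellation: not a real failure.
        ErrorKind::Snapshot(SnapshotError::SnapshotAborted) => "info: snapshot aborted",
        // Everything else is worth a warning.
        _ => "warn: snapshot failed",
    }
}

fn main() {
    assert_eq!(describe(&ErrorKind::Snapshot(SnapshotError::SnapshotAborted)), "info: snapshot aborted");
    assert_eq!(describe(&ErrorKind::Snapshot(SnapshotError::Io(String::from("disk full")))), "warn: snapshot failed");
    println!("ok");
}
```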
@@ -31,7 +31,7 @@
 //! `ExecutedBlock` is an underlaying data structure used by all structs above to store block
 //! related info.
 
-use std::cmp;
+use std::{cmp, ops};
 use std::collections::HashSet;
 use std::sync::Arc;
 
@@ -52,7 +52,6 @@ use vm::{EnvInfo, LastHashes};
 use hash::keccak;
 use rlp::{RlpStream, Encodable, encode_list};
 use types::transaction::{SignedTransaction, Error as TransactionError};
-use types::block::Block;
 use types::header::{Header, ExtendedHeader};
 use types::receipt::{Receipt, TransactionOutcome};
 
@@ -155,69 +154,15 @@ impl ExecutedBlock {
	}
 }
 
-/// Trait for a object that is a `ExecutedBlock`.
-pub trait IsBlock {
-	/// Get the `ExecutedBlock` associated with this object.
-	fn block(&self) -> &ExecutedBlock;
-
-	/// Get the base `Block` object associated with this.
-	fn to_base(&self) -> Block {
-		Block {
-			header: self.header().clone(),
-			transactions: self.transactions().iter().cloned().map(Into::into).collect(),
-			uncles: self.uncles().to_vec(),
-		}
-	}
-
-	/// Get the header associated with this object's block.
-	fn header(&self) -> &Header { &self.block().header }
-
-	/// Get the final state associated with this object's block.
-	fn state(&self) -> &State<StateDB> { &self.block().state }
-
-	/// Get all information on transactions in this block.
-	fn transactions(&self) -> &[SignedTransaction] { &self.block().transactions }
-
-	/// Get all information on receipts in this block.
-	fn receipts(&self) -> &[Receipt] { &self.block().receipts }
-
-	/// Get all uncles in this block.
-	fn uncles(&self) -> &[Header] { &self.block().uncles }
-}
-
 /// Trait for an object that owns an `ExecutedBlock`
 pub trait Drain {
	/// Returns `ExecutedBlock`
	fn drain(self) -> ExecutedBlock;
 }
 
-impl IsBlock for ExecutedBlock {
-	fn block(&self) -> &ExecutedBlock { self }
-}
-
-impl ::parity_machine::LiveBlock for ExecutedBlock {
-	type Header = Header;
-
-	fn header(&self) -> &Header {
-		&self.header
-	}
-
-	fn uncles(&self) -> &[Header] {
-		&self.uncles
-	}
-}
-
-impl ::parity_machine::Transactions for ExecutedBlock {
-	type Transaction = SignedTransaction;
-
-	fn transactions(&self) -> &[SignedTransaction] {
-		&self.transactions
-	}
-}
-
 impl<'x> OpenBlock<'x> {
	/// Create a new `OpenBlock` ready for transaction pushing.
-	pub fn new<'a>(
+	pub fn new<'a, I: IntoIterator<Item = ExtendedHeader>>(
		engine: &'x EthEngine,
		factories: Factories,
		tracing: bool,
@@ -228,7 +173,7 @@ impl<'x> OpenBlock<'x> {
		gas_range_target: (U256, U256),
		extra_data: Bytes,
		is_epoch_begin: bool,
-		ancestry: &mut Iterator<Item=ExtendedHeader>,
+		ancestry: I,
	) -> Result<Self, Error> {
		let number = parent.number() + 1;
		let state = State::from_existing(db, parent.state_root().clone(), engine.account_start_nonce(number), factories)?;
@@ -250,7 +195,7 @@ impl<'x> OpenBlock<'x> {
		engine.populate_from_parent(&mut r.block.header, parent);
 
		engine.machine().on_new_block(&mut r.block)?;
-		engine.on_new_block(&mut r.block, is_epoch_begin, ancestry)?;
+		engine.on_new_block(&mut r.block, is_epoch_begin, &mut ancestry.into_iter())?;
 
		Ok(r)
	}
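The signature change above swaps `&mut Iterator<Item = ExtendedHeader>` for a generic `I: IntoIterator<Item = ExtendedHeader>`, with the call site adapting via `&mut ancestry.into_iter()`. Why that is more flexible for callers, in a stand-alone sketch using `u32` items instead of `ExtendedHeader`:

```rust
// Taking `I: IntoIterator<Item = u32>` lets callers pass a Vec, an empty
// iterator, or any other iterable alike, where the old
// `&mut Iterator<Item = u32>` forced every caller to build a trait object.
fn consume<I: IntoIterator<Item = u32>>(ancestry: I) -> u32 {
    ancestry.into_iter().sum()
}

fn main() {
    // A Vec works directly...
    assert_eq!(consume(vec![1, 2, 3]), 6);
    // ...and so does an empty iterator, no `&mut dyn Iterator` needed.
    assert_eq!(consume(std::iter::empty()), 0);
    println!("ok");
}
```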
@@ -270,7 +215,7 @@ impl<'x> OpenBlock<'x> {
	/// NOTE Will check chain constraints and the uncle number but will NOT check
	/// that the header itself is actually valid.
	pub fn push_uncle(&mut self, valid_uncle_header: Header) -> Result<(), BlockError> {
-		let max_uncles = self.engine.maximum_uncle_count(self.block.header().number());
+		let max_uncles = self.engine.maximum_uncle_count(self.block.header.number());
		if self.block.uncles.len() + 1 > max_uncles {
			return Err(BlockError::TooManyUncles(OutOfBounds{
				min: None,
@@ -284,11 +229,6 @@ impl<'x> OpenBlock<'x> {
		Ok(())
	}
 
-	/// Get the environment info concerning this block.
-	pub fn env_info(&self) -> EnvInfo {
-		self.block.env_info()
-	}
-
	/// Push a transaction into the block.
	///
	/// If valid, it will be executed, and archived together with the receipt.
@@ -297,7 +237,7 @@ impl<'x> OpenBlock<'x> {
			return Err(TransactionError::AlreadyImported.into());
		}
 
-		let env_info = self.env_info();
+		let env_info = self.block.env_info();
		let outcome = self.block.state.apply(&env_info, self.engine.machine(), &t, self.block.traces.is_enabled())?;
 
		self.block.transactions_set.insert(h.unwrap_or_else(||t.hash()));
@@ -344,7 +284,6 @@ impl<'x> OpenBlock<'x> {
		self.block.header.set_difficulty(*header.difficulty());
		self.block.header.set_gas_limit(*header.gas_limit());
		self.block.header.set_timestamp(header.timestamp());
-		self.block.header.set_author(*header.author());
		self.block.header.set_uncles_hash(*header.uncles_hash());
		self.block.header.set_transactions_root(*header.transactions_root());
		// TODO: that's horrible. set only for backwards compatibility
@@ -394,22 +333,39 @@ impl<'x> OpenBlock<'x> {
	pub fn block_mut(&mut self) -> &mut ExecutedBlock { &mut self.block }
 }
 
-impl<'x> IsBlock for OpenBlock<'x> {
-	fn block(&self) -> &ExecutedBlock { &self.block }
+impl<'a> ops::Deref for OpenBlock<'a> {
+	type Target = ExecutedBlock;
+
+	fn deref(&self) -> &Self::Target {
+		&self.block
+	}
 }
 
-impl IsBlock for ClosedBlock {
-	fn block(&self) -> &ExecutedBlock { &self.block }
+impl ops::Deref for ClosedBlock {
+	type Target = ExecutedBlock;
+
+	fn deref(&self) -> &Self::Target {
+		&self.block
+	}
 }
 
-impl IsBlock for LockedBlock {
-	fn block(&self) -> &ExecutedBlock { &self.block }
+impl ops::Deref for LockedBlock {
+	type Target = ExecutedBlock;
+
+	fn deref(&self) -> &Self::Target {
+		&self.block
+	}
 }
 
+impl ops::Deref for SealedBlock {
+	type Target = ExecutedBlock;
+
+	fn deref(&self) -> &Self::Target {
+		&self.block
+	}
+}
+
 impl ClosedBlock {
-	/// Get the hash of the header without seal arguments.
-	pub fn hash(&self) -> H256 { self.header().bare_hash() }
-
	/// Turn this into a `LockedBlock`, unable to be reopened again.
	pub fn lock(self) -> LockedBlock {
		LockedBlock {
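The hunk above replaces the `IsBlock` accessor trait with `ops::Deref` impls, so `self.header()`-style calls become plain field access through deref coercion. A stripped-down illustration (the two structs here are toy stand-ins, not the real ethcore types):

```rust
use std::ops;

// Instead of an accessor trait forwarding `header()`, `transactions()`, etc.,
// each wrapper derefs to the inner block so field access works directly.
struct ExecutedBlock {
    number: u64,
}

struct LockedBlock {
    block: ExecutedBlock,
}

impl ops::Deref for LockedBlock {
    type Target = ExecutedBlock;

    fn deref(&self) -> &Self::Target {
        &self.block
    }
}

fn main() {
    let b = LockedBlock { block: ExecutedBlock { number: 7 } };
    // `b.number` auto-derefs through LockedBlock to ExecutedBlock.
    assert_eq!(b.number, 7);
    println!("ok");
}
```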
@@ -443,25 +399,25 @@ impl LockedBlock {
		self.block.header.set_receipts_root(
			ordered_trie_root(self.block.receipts.iter().map(|r| r.rlp_bytes()))
		);
		// compute hash and cache it.
		self.block.header.compute_hash();
	}
 
-	/// Get the hash of the header without seal arguments.
-	pub fn hash(&self) -> H256 { self.header().bare_hash() }
-
	/// Provide a valid seal in order to turn this into a `SealedBlock`.
	///
	/// NOTE: This does not check the validity of `seal` with the engine.
-	pub fn seal(self, engine: &EthEngine, seal: Vec<Bytes>) -> Result<SealedBlock, BlockError> {
-		let expected_seal_fields = engine.seal_fields(self.header());
+	pub fn seal(self, engine: &EthEngine, seal: Vec<Bytes>) -> Result<SealedBlock, Error> {
+		let expected_seal_fields = engine.seal_fields(&self.header);
		let mut s = self;
		if seal.len() != expected_seal_fields {
-			return Err(BlockError::InvalidSealArity(
-				Mismatch { expected: expected_seal_fields, found: seal.len() }));
+			Err(BlockError::InvalidSealArity(Mismatch {
+				expected: expected_seal_fields,
+				found: seal.len()
+			}))?;
		}
 
		s.block.header.set_seal(seal);
+		engine.on_seal_block(&mut s.block)?;
		s.block.header.compute_hash();
 
		Ok(SealedBlock {
			block: s.block
		})
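The seal-arity check above changes `return Err(...)` into `Err(...)?` so the `?` operator can convert `BlockError` into the function's new, broader `Error` return type via `From`. The same trick in miniature (all types here are illustrative):

```rust
// `Err(inner)?` applies `From<BlockError>` before returning, which a plain
// `return Err(inner)` could not do without an explicit `.into()`.
#[derive(Debug)]
struct BlockError(&'static str);

#[derive(Debug)]
struct Error(String);

impl From<BlockError> for Error {
    fn from(e: BlockError) -> Self {
        Error(format!("block error: {}", e.0))
    }
}

fn seal(ok: bool) -> Result<u32, Error> {
    if !ok {
        // `?` converts BlockError into Error via the From impl above.
        Err(BlockError("invalid seal arity"))?;
    }
    Ok(1)
}

fn main() {
    assert!(seal(true).is_ok());
    assert_eq!(seal(false).unwrap_err().0, "block error: invalid seal arity");
    println!("ok");
}
```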
@@ -470,6 +426,7 @@ impl LockedBlock {
	/// Provide a valid seal in order to turn this into a `SealedBlock`.
	/// This does check the validity of `seal` with the engine.
	/// Returns the `ClosedBlock` back again if the seal is no good.
+	/// TODO(https://github.com/paritytech/parity-ethereum/issues/10407): This is currently only used in POW chain call paths, we should really merge it with seal() above.
	pub fn try_seal(
		self,
		engine: &EthEngine,
@@ -510,12 +467,8 @@ impl Drain for SealedBlock {
	}
 }
 
-impl IsBlock for SealedBlock {
-	fn block(&self) -> &ExecutedBlock { &self.block }
-}
-
 /// Enact the block given by block header, transactions and uncles
-fn enact(
+pub(crate) fn enact(
	header: Header,
	transactions: Vec<SignedTransaction>,
	uncles: Vec<Header>,
@@ -528,13 +481,12 @@ fn enact(
|
||||
is_epoch_begin: bool,
|
||||
ancestry: &mut Iterator<Item=ExtendedHeader>,
|
||||
) -> Result<LockedBlock, Error> {
|
||||
{
|
||||
if ::log::max_level() >= ::log::Level::Trace {
|
||||
let s = State::from_existing(db.boxed_clone(), parent.state_root().clone(), engine.account_start_nonce(parent.number() + 1), factories.clone())?;
|
||||
trace!(target: "enact", "num={}, root={}, author={}, author_balance={}\n",
|
||||
header.number(), s.root(), header.author(), s.balance(&header.author())?);
|
||||
}
|
||||
}
|
||||
// For trace log
|
||||
let trace_state = if log_enabled!(target: "enact", ::log::Level::Trace) {
|
||||
Some(State::from_existing(db.boxed_clone(), parent.state_root().clone(), engine.account_start_nonce(parent.number() + 1), factories.clone())?)
|
||||
} else {
|
||||
None
|
||||
};
|
||||
|
||||
let mut b = OpenBlock::new(
|
||||
engine,
|
||||
@@ -543,13 +495,23 @@ fn enact(
|
||||
db,
|
||||
parent,
|
||||
last_hashes,
|
||||
Address::new(),
|
||||
// Engine such as Clique will calculate author from extra_data.
|
||||
// this is only important for executing contracts as the 'executive_author'.
|
||||
engine.executive_author(&header)?,
|
||||
(3141562.into(), 31415620.into()),
|
||||
vec![],
|
||||
is_epoch_begin,
|
||||
ancestry,
|
||||
)?;
|
||||
|
||||
if let Some(ref s) = trace_state {
|
||||
let env = b.env_info();
|
||||
let root = s.root();
|
||||
let author_balance = s.balance(&env.author)?;
|
||||
trace!(target: "enact", "num={}, root={}, author={}, author_balance={}\n",
|
||||
b.block.header.number(), root, env.author, author_balance);
|
||||
}
|
||||
|
||||
b.populate_from(&header);
|
||||
b.push_transactions(transactions)?;
|
||||
|
||||
@@ -615,6 +577,7 @@ mod tests {
last_hashes: Arc<LastHashes>,
factories: Factories,
) -> Result<LockedBlock, Error> {

let block = Unverified::from_rlp(block_bytes)?;
let header = block.header;
let transactions: Result<Vec<_>, Error> = block
@@ -644,7 +607,7 @@ mod tests {
(3141562.into(), 31415620.into()),
vec![],
false,
&mut Vec::new().into_iter(),
None,
)?;

b.populate_from(&header);
@@ -669,7 +632,7 @@ mod tests {
) -> Result<SealedBlock, Error> {
let header = Unverified::from_rlp(block_bytes.clone())?.header;
Ok(enact_bytes(block_bytes, engine, tracing, db, parent, last_hashes, factories)?
.seal(engine, header.seal().to_vec())?)
.seal(engine, header.seal().to_vec())?)
}

#[test]
@@ -679,7 +642,7 @@ mod tests {
let genesis_header = spec.genesis_header();
let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
let last_hashes = Arc::new(vec![genesis_header.hash()]);
let b = OpenBlock::new(&*spec.engine, Default::default(), false, db, &genesis_header, last_hashes, Address::zero(), (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
let b = OpenBlock::new(&*spec.engine, Default::default(), false, db, &genesis_header, last_hashes, Address::zero(), (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
let b = b.close_and_lock().unwrap();
let _ = b.seal(&*spec.engine, vec![]);
}
@@ -693,7 +656,7 @@ mod tests {

let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
let last_hashes = Arc::new(vec![genesis_header.hash()]);
let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes.clone(), Address::zero(), (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap()
let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes.clone(), Address::zero(), (3141562.into(), 31415620.into()), vec![], false, None).unwrap()
.close_and_lock().unwrap().seal(engine, vec![]).unwrap();
let orig_bytes = b.rlp_bytes();
let orig_db = b.drain().state.drop().1;
@@ -717,7 +680,7 @@ mod tests {

let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
let last_hashes = Arc::new(vec![genesis_header.hash()]);
let mut open_block = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes.clone(), Address::zero(), (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
let mut open_block = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes.clone(), Address::zero(), (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
let mut uncle1_header = Header::new();
uncle1_header.set_extra_data(b"uncle1".to_vec());
let mut uncle2_header = Header::new();

@@ -25,8 +25,7 @@ use blockchain::{BlockReceipts, BlockChain, BlockChainDB, BlockProvider, TreeRou
use bytes::Bytes;
use call_contract::{CallContract, RegistryInfo};
use ethcore_miner::pool::VerifiedTransaction;
use ethcore_miner::service_transaction_checker::ServiceTransactionChecker;
use ethereum_types::{H256, Address, U256};
use ethereum_types::{H256, H264, Address, U256};
use evm::Schedule;
use hash::keccak;
use io::IoChannel;
@@ -45,7 +44,7 @@ use types::receipt::{Receipt, LocalizedReceipt};
use types::{BlockNumber, header::{Header, ExtendedHeader}};
use vm::{EnvInfo, LastHashes};

use block::{IsBlock, LockedBlock, Drain, ClosedBlock, OpenBlock, enact_verified, SealedBlock};
use block::{LockedBlock, Drain, ClosedBlock, OpenBlock, enact_verified, SealedBlock};
use client::ancient_import::AncientVerifier;
use client::{
Nonce, Balance, ChainInfo, BlockInfo, TransactionInfo,
@@ -61,7 +60,7 @@ use client::{
IoClient, BadBlocks,
};
use client::bad_blocks;
use engines::{EthEngine, EpochTransition, ForkChoice, EngineError};
use engines::{MAX_UNCLE_AGE, EthEngine, EpochTransition, ForkChoice, EngineError};
use engines::epoch::PendingTransition;
use error::{
ImportErrorKind, ExecutionError, CallError, BlockError,
@@ -87,7 +86,7 @@ pub use types::blockchain_info::BlockChainInfo;
pub use types::block_status::BlockStatus;
pub use blockchain::CacheSize as BlockChainCacheSize;
pub use verification::QueueInfo as BlockQueueInfo;
use db::Writable;
use db::{Writable, Readable, keys::BlockDetails};

use_contract!(registry, "res/contracts/registrar.json");

@@ -298,19 +297,11 @@ impl Importer {

match self.check_and_lock_block(&bytes, block, client) {
Ok((closed_block, pending)) => {
if self.engine.is_proposal(&header) {
self.block_queue.mark_as_good(&[hash]);
proposed_blocks.push(bytes);
} else {
imported_blocks.push(hash);

let transactions_len = closed_block.transactions().len();

let route = self.commit_block(closed_block, &header, encoded::Block::new(bytes), pending, client);
import_results.push(route);

client.report.write().accrue_block(&header, transactions_len);
}
imported_blocks.push(hash);
let transactions_len = closed_block.transactions.len();
let route = self.commit_block(closed_block, &header, encoded::Block::new(bytes), pending, client);
import_results.push(route);
client.report.write().accrue_block(&header, transactions_len);
},
Err(err) => {
self.bad_blocks.report(bytes, format!("{:?}", err));
@@ -407,6 +398,7 @@ impl Importer {
let db = client.state_db.read().boxed_clone_canon(header.parent_hash());

let is_epoch_begin = chain.epoch_transition(parent.number(), *header.parent_hash()).is_some();

let enact_result = enact_verified(
block,
engine,
@@ -431,13 +423,13 @@ impl Importer {
// if the expected receipts root header does not match.
// (i.e. allow inconsistency in receipts outcome before the transition block)
if header.number() < engine.params().validate_receipts_transition
&& header.receipts_root() != locked_block.block().header().receipts_root()
&& header.receipts_root() != locked_block.header.receipts_root()
{
locked_block.strip_receipts_outcomes();
}

// Final Verification
if let Err(e) = self.verifier.verify_block_final(&header, locked_block.block().header()) {
if let Err(e) = self.verifier.verify_block_final(&header, &locked_block.header) {
warn!(target: "client", "Stage 5 block verification failed for #{} ({})\nError: {:?}", header.number(), header.hash(), e);
bail!(e);
}
@@ -445,8 +437,8 @@ impl Importer {
let pending = self.check_epoch_end_signal(
&header,
bytes,
locked_block.receipts(),
locked_block.state().db(),
&locked_block.receipts,
locked_block.state.db(),
client
)?;

@@ -772,8 +764,8 @@ impl Client {
liveness: AtomicBool::new(awake),
mode: Mutex::new(config.mode.clone()),
chain: RwLock::new(chain),
tracedb: tracedb,
engine: engine,
tracedb,
engine,
pruning: config.pruning.clone(),
db: RwLock::new(db.clone()),
state_db: RwLock::new(state_db),
@@ -786,8 +778,8 @@ impl Client {
ancient_blocks_import_lock: Default::default(),
queue_consensus_message: IoChannelQueue::new(usize::max_value()),
last_hashes: RwLock::new(VecDeque::new()),
factories: factories,
history: history,
factories,
history,
on_user_defaults_change: Mutex::new(None),
registrar_address,
exit_handler: Mutex::new(None),
@@ -1146,10 +1138,15 @@ impl Client {

/// Take a snapshot at the given block.
/// If the ID given is "latest", this will default to 1000 blocks behind.
pub fn take_snapshot<W: snapshot_io::SnapshotWriter + Send>(&self, writer: W, at: BlockId, p: &snapshot::Progress) -> Result<(), EthcoreError> {
pub fn take_snapshot<W: snapshot_io::SnapshotWriter + Send>(
&self,
writer: W,
at: BlockId,
p: &snapshot::Progress,
) -> Result<(), EthcoreError> {
let db = self.state_db.read().journal_db().boxed_clone();
let best_block_number = self.chain_info().best_block_number;
let block_number = self.block_number(at).ok_or(snapshot::Error::InvalidStartingBlock(at))?;
let block_number = self.block_number(at).ok_or_else(|| snapshot::Error::InvalidStartingBlock(at))?;

if db.is_pruned() && self.pruning_info().earliest_state > block_number {
return Err(snapshot::Error::OldBlockPrunedDB.into());
@@ -1176,8 +1173,16 @@ impl Client {
};

let processing_threads = self.config.snapshot.processing_threads;
snapshot::take_snapshot(&*self.engine, &self.chain.read(), start_hash, db.as_hash_db(), writer, p, processing_threads)?;

let chunker = self.engine.snapshot_components().ok_or(snapshot::Error::SnapshotsUnsupported)?;
snapshot::take_snapshot(
chunker,
&self.chain.read(),
start_hash,
db.as_hash_db(),
writer,
p,
processing_threads,
)?;
Ok(())
}

@@ -1335,37 +1340,60 @@ impl BlockChainReset for Client {
fn reset(&self, num: u32) -> Result<(), String> {
if num as u64 > self.pruning_history() {
return Err("Attempting to reset to block with pruned state".into())
} else if num == 0 {
return Err("invalid number of blocks to reset".into())
}

let (blocks_to_delete, best_block_hash) = self.chain.read()
.block_headers_from_best_block(num)
.ok_or("Attempted to reset past genesis block")?;
let mut blocks_to_delete = Vec::with_capacity(num as usize);
let mut best_block_hash = self.chain.read().best_block_hash();
let mut batch = DBTransaction::with_capacity(blocks_to_delete.len());

let mut db_transaction = DBTransaction::with_capacity((num + 1) as usize);
for _ in 0..num {
let current_header = self.chain.read().block_header_data(&best_block_hash)
.expect("best_block_hash was fetched from db; block_header_data should exist in db; qed");
best_block_hash = current_header.parent_hash();

for hash in &blocks_to_delete {
db_transaction.delete(::db::COL_HEADERS, &hash.hash());
db_transaction.delete(::db::COL_BODIES, &hash.hash());
db_transaction.delete(::db::COL_EXTRA, &hash.hash());
let (number, hash) = (current_header.number(), current_header.hash());
batch.delete(::db::COL_HEADERS, &hash);
batch.delete(::db::COL_BODIES, &hash);
Writable::delete::<BlockDetails, H264>
(&mut batch, ::db::COL_EXTRA, &hash);
Writable::delete::<H256, BlockNumberKey>
(&mut db_transaction, ::db::COL_EXTRA, &hash.number());
(&mut batch, ::db::COL_EXTRA, &number);

blocks_to_delete.push((number, hash));
}

let hashes = blocks_to_delete.iter().map(|(_, hash)| hash).collect::<Vec<_>>();
info!("Deleting block hashes {}",
Colour::Red
.bold()
.paint(format!("{:#?}", hashes))
);

let mut best_block_details = Readable::read::<BlockDetails, H264>(
&**self.db.read().key_value(),
::db::COL_EXTRA,
&best_block_hash
).expect("block was previously imported; best_block_details should exist; qed");

let (_, last_hash) = blocks_to_delete.last()
.expect("num is > 0; blocks_to_delete can't be empty; qed");
// remove the last block as a child so that it can be re-imported
// ethcore/blockchain/src/blockchain.rs/Blockchain::is_known_child()
best_block_details.children.retain(|h| *h != *last_hash);
batch.write(
::db::COL_EXTRA,
&best_block_hash,
&best_block_details
);
// update the new best block hash
db_transaction.put(::db::COL_EXTRA, b"best", &*best_block_hash);
batch.put(::db::COL_EXTRA, b"best", &best_block_hash);

self.db.read()
.key_value()
.write(db_transaction)
.map_err(|err| format!("could not complete reset operation; io error occured: {}", err))?;

let hashes = blocks_to_delete.iter().map(|b| b.hash()).collect::<Vec<_>>();

info!("Deleting block hashes {}",
Colour::Red
.bold()
.paint(format!("{:#?}", hashes))
);
.write(batch)
.map_err(|err| format!("could not delete blocks; io error occurred: {}", err))?;

info!("New best block hash {}", Colour::Green.bold().paint(format!("{:?}", best_block_hash)));

@@ -1578,22 +1606,27 @@ impl Call for Client {
let schedule = machine.schedule(env_info.number);
Executive::new(&mut clone, &env_info, &machine, &schedule)
.transact_virtual(&tx, options())
.ok()
.map(|r| r.exception.is_none())
};

let cond = |gas| exec(gas).unwrap_or(false);
let cond = |gas| {
exec(gas)
.ok()
.map_or(false, |r| r.exception.is_none())
};

if !cond(upper) {
upper = max_upper;
match exec(upper) {
Some(false) => return Err(CallError::Exceptional),
None => {
Ok(v) => {
if let Some(exception) = v.exception {
return Err(CallError::Exceptional(exception))
}
},
Err(_e) => {
trace!(target: "estimate_gas", "estimate_gas failed with {}", upper);
let err = ExecutionError::Internal(format!("Requires higher than upper limit of {}", upper));
return Err(err.into())
},
_ => {},
}
}
}
let lower = t.gas_required(&self.engine.schedule(env_info.number)).into();
@@ -1926,7 +1959,7 @@ impl BlockChainClient for Client {
}

fn find_uncles(&self, hash: &H256) -> Option<Vec<H256>> {
self.chain.read().find_uncle_hashes(hash, self.engine.maximum_uncle_age())
self.chain.read().find_uncle_hashes(hash, MAX_UNCLE_AGE)
}

fn state_data(&self, hash: &H256) -> Option<Bytes> {
@@ -2159,10 +2192,14 @@ impl BlockChainClient for Client {

fn transact_contract(&self, address: Address, data: Bytes) -> Result<(), transaction::Error> {
let authoring_params = self.importer.miner.authoring_params();
let service_transaction_checker = ServiceTransactionChecker::default();
let gas_price = match service_transaction_checker.check_address(self, authoring_params.author) {
Ok(true) => U256::zero(),
_ => self.importer.miner.sensible_gas_price(),
let service_transaction_checker = self.importer.miner.service_transaction_checker();
let gas_price = if let Some(checker) = service_transaction_checker {
match checker.check_address(self, authoring_params.author) {
Ok(true) => U256::zero(),
_ => self.importer.miner.sensible_gas_price(),
}
} else {
self.importer.miner.sensible_gas_price()
};
let transaction = transaction::Transaction {
nonce: self.latest_nonce(&authoring_params.author),
@@ -2284,24 +2321,24 @@ impl ReopenBlock for Client {
fn reopen_block(&self, block: ClosedBlock) -> OpenBlock {
let engine = &*self.engine;
let mut block = block.reopen(engine);
let max_uncles = engine.maximum_uncle_count(block.header().number());
if block.uncles().len() < max_uncles {
let max_uncles = engine.maximum_uncle_count(block.header.number());
if block.uncles.len() < max_uncles {
let chain = self.chain.read();
let h = chain.best_block_hash();
// Add new uncles
let uncles = chain
.find_uncle_hashes(&h, engine.maximum_uncle_age())
.find_uncle_hashes(&h, MAX_UNCLE_AGE)
.unwrap_or_else(Vec::new);

for h in uncles {
if !block.uncles().iter().any(|header| header.hash() == h) {
if !block.uncles.iter().any(|header| header.hash() == h) {
let uncle = chain.block_header_data(&h).expect("find_uncle_hashes only returns hashes for existing headers; qed");
let uncle = uncle.decode().expect("decoding failure");
block.push_uncle(uncle).expect("pushing up to maximum_uncle_count;
push_uncle is not ok only if more than maximum_uncle_count is pushed;
so all push_uncle are Ok;
qed");
if block.uncles().len() >= max_uncles { break }
if block.uncles.len() >= max_uncles { break }
}
}

@@ -2329,15 +2366,15 @@ impl PrepareOpenBlock for Client {
gas_range_target,
extra_data,
is_epoch_begin,
&mut chain.ancestry_with_metadata_iter(best_header.hash()),
chain.ancestry_with_metadata_iter(best_header.hash()),
)?;

// Add uncles
chain
.find_uncle_headers(&h, engine.maximum_uncle_age())
.find_uncle_headers(&h, MAX_UNCLE_AGE)
.unwrap_or_else(Vec::new)
.into_iter()
.take(engine.maximum_uncle_count(open_block.header().number()))
.take(engine.maximum_uncle_count(open_block.header.number()))
.foreach(|h| {
open_block.push_uncle(h.decode().expect("decoding failure")).expect("pushing maximum_uncle_count;
open_block was just created;
@@ -2362,7 +2399,7 @@ impl ImportSealedBlock for Client {
fn import_sealed_block(&self, block: SealedBlock) -> EthcoreResult<H256> {
let start = Instant::now();
let raw = block.rlp_bytes();
let header = block.header().clone();
let header = block.header.clone();
let hash = header.hash();
self.notify(|n| n.block_pre_import(&raw, &hash, header.difficulty()));

@@ -2385,8 +2422,8 @@ impl ImportSealedBlock for Client {
let pending = self.importer.check_epoch_end_signal(
&header,
&block_data,
block.receipts(),
block.state().db(),
&block.receipts,
block.state.db(),
self
)?;
let route = self.importer.commit_block(
@@ -2523,7 +2560,11 @@ impl SnapshotClient for Client {}

impl Drop for Client {
fn drop(&mut self) {
self.engine.stop();
if let Some(c) = Arc::get_mut(&mut self.engine) {
c.stop()
} else {
warn!(target: "shutdown", "unable to get mut ref for engine for shutdown.");
}
}
}


@@ -241,16 +241,17 @@ impl<'a> EvmTestClient<'a> {
transaction: transaction::SignedTransaction,
tracer: T,
vm_tracer: V,
) -> TransactResult<T::Output, V::Output> {
) -> std::result::Result<TransactSuccess<T::Output, V::Output>, TransactErr> {
let initial_gas = transaction.gas;
// Verify transaction
let is_ok = transaction.verify_basic(true, None, false);
if let Err(error) = is_ok {
return TransactResult::Err {
state_root: *self.state.root(),
error: error.into(),
end_state: (self.dump_state)(&self.state),
};
return Err(
TransactErr{
state_root: *self.state.root(),
error: error.into(),
end_state: (self.dump_state)(&self.state),
});
}

// Apply transaction
@@ -283,7 +284,7 @@ impl<'a> EvmTestClient<'a> {

match result {
Ok(result) => {
TransactResult::Ok {
Ok(TransactSuccess {
state_root,
gas_left: initial_gas - result.receipt.gas_used,
outcome: result.receipt.outcome,
@@ -298,47 +299,48 @@ impl<'a> EvmTestClient<'a> {
},
end_state,
}
},
Err(error) => TransactResult::Err {
)},
Err(error) => Err(TransactErr {
state_root,
error,
end_state,
},
}),
}
}
}

/// A result of applying transaction to the state.
#[derive(Debug)]
pub enum TransactResult<T, V> {
/// Successful execution
Ok {
/// State root
state_root: H256,
/// Amount of gas left
gas_left: U256,
/// Output
output: Vec<u8>,
/// Traces
trace: Vec<T>,
/// VM Traces
vm_trace: Option<V>,
/// Created contract address (if any)
contract_address: Option<H160>,
/// Generated logs
logs: Vec<log_entry::LogEntry>,
/// outcome
outcome: receipt::TransactionOutcome,
/// end state if needed
end_state: Option<pod_state::PodState>,
},
/// Transaction failed to run
Err {
/// State root
state_root: H256,
/// Execution error
error: ::error::Error,
/// end state if needed
end_state: Option<pod_state::PodState>,
},
/// To be returned inside a std::result::Result::Ok after a successful
/// transaction completed.
#[allow(dead_code)]
pub struct TransactSuccess<T, V> {
/// State root
pub state_root: H256,
/// Amount of gas left
pub gas_left: U256,
/// Output
pub output: Vec<u8>,
/// Traces
pub trace: Vec<T>,
/// VM Traces
pub vm_trace: Option<V>,
/// Created contract address (if any)
pub contract_address: Option<H160>,
/// Generated logs
pub logs: Vec<log_entry::LogEntry>,
/// outcome
pub outcome: receipt::TransactionOutcome,
/// end state if needed
pub end_state: Option<pod_state::PodState>,
}

/// To be returned inside a std::result::Result::Err after a failed
/// transaction.
#[allow(dead_code)]
pub struct TransactErr {
/// State root
pub state_root: H256,
/// Execution error
pub error: ::error::Error,
/// end state if needed
pub end_state: Option<pod_state::PodState>,
}

@@ -30,7 +30,7 @@ mod trace;
pub use self::client::*;
pub use self::config::{Mode, ClientConfig, DatabaseCompactionProfile, BlockChainConfig, VMType};
#[cfg(any(test, feature = "test-helpers"))]
pub use self::evm_test_client::{EvmTestClient, EvmTestError, TransactResult};
pub use self::evm_test_client::{EvmTestClient, EvmTestError, TransactErr, TransactSuccess};
pub use self::io_message::ClientIoMessage;
#[cfg(any(test, feature = "test-helpers"))]
pub use self::test_client::{TestBlockChainClient, EachBlockWith};

@@ -416,7 +416,7 @@ impl PrepareOpenBlock for TestBlockChainClient {
gas_range_target,
extra_data,
false,
&mut Vec::new().into_iter(),
None,
)?;
// TODO [todr] Override timestamp for predictability
open_block.set_timestamp(*self.latest_block_timestamp.read());

@@ -22,7 +22,7 @@ use std::iter::FromIterator;
use std::ops::Deref;
use std::sync::atomic::{AtomicUsize, AtomicBool, Ordering as AtomicOrdering};
use std::sync::{Weak, Arc};
use std::time::{UNIX_EPOCH, SystemTime, Duration};
use std::time::{UNIX_EPOCH, Duration};

use block::*;
use client::EngineClient;
@@ -42,14 +42,12 @@ use itertools::{self, Itertools};
use rlp::{encode, Decodable, DecoderError, Encodable, RlpStream, Rlp};
use ethereum_types::{H256, H520, Address, U128, U256};
use parking_lot::{Mutex, RwLock};
use time_utils::CheckedSystemTime;
use types::BlockNumber;
use types::header::{Header, ExtendedHeader};
use types::ancestry_action::AncestryAction;
use unexpected::{Mismatch, OutOfBounds};

#[cfg(not(time_checked_add))]
use time_utils::CheckedSystemTime;

mod finality;

/// `AuthorityRound` params.
@@ -515,15 +513,19 @@ fn header_expected_seal_fields(header: &Header, empty_steps_transition: u64) ->
}

fn header_step(header: &Header, empty_steps_transition: u64) -> Result<u64, ::rlp::DecoderError> {
let expected_seal_fields = header_expected_seal_fields(header, empty_steps_transition);
Rlp::new(&header.seal().get(0).expect(
&format!("was either checked with verify_block_basic or is genesis; has {} fields; qed (Make sure the spec file has a correct genesis seal)", expected_seal_fields))).as_val()
Rlp::new(&header.seal().get(0).unwrap_or_else(||
panic!("was either checked with verify_block_basic or is genesis; has {} fields; qed (Make sure the spec
file has a correct genesis seal)", header_expected_seal_fields(header, empty_steps_transition))
))
.as_val()
}

fn header_signature(header: &Header, empty_steps_transition: u64) -> Result<Signature, ::rlp::DecoderError> {
let expected_seal_fields = header_expected_seal_fields(header, empty_steps_transition);
Rlp::new(&header.seal().get(1).expect(
&format!("was checked with verify_block_basic; has {} fields; qed", expected_seal_fields))).as_val::<H520>().map(Into::into)
Rlp::new(&header.seal().get(1).unwrap_or_else(||
panic!("was checked with verify_block_basic; has {} fields; qed",
header_expected_seal_fields(header, empty_steps_transition))
))
.as_val::<H520>().map(Into::into)
}

// extracts the raw empty steps vec from the header seal. should only be called when there are 3 fields in the seal
@@ -574,10 +576,10 @@ fn verify_timestamp(step: &Step, header_step: u64) -> Result<(), BlockError> {
// Returning it further won't recover the sync process.
trace!(target: "engine", "verify_timestamp: block too early");

let now = SystemTime::now();
let found = now.checked_add(Duration::from_secs(oob.found)).ok_or(BlockError::TimestampOverflow)?;
let max = oob.max.and_then(|m| now.checked_add(Duration::from_secs(m)));
let min = oob.min.and_then(|m| now.checked_add(Duration::from_secs(m)));
let found = CheckedSystemTime::checked_add(UNIX_EPOCH, Duration::from_secs(oob.found))
.ok_or(BlockError::TimestampOverflow)?;
let max = oob.max.and_then(|m| CheckedSystemTime::checked_add(UNIX_EPOCH, Duration::from_secs(m)));
let min = oob.min.and_then(|m| CheckedSystemTime::checked_add(UNIX_EPOCH, Duration::from_secs(m)));

let new_oob = OutOfBounds { min, max, found };

@@ -945,8 +947,12 @@ impl Engine<EthereumMachine> for AuthorityRound {
return BTreeMap::default();
}

let step = header_step(header, self.empty_steps_transition).as_ref().map(ToString::to_string).unwrap_or("".into());
let signature = header_signature(header, self.empty_steps_transition).as_ref().map(ToString::to_string).unwrap_or("".into());
let step = header_step(header, self.empty_steps_transition).as_ref()
.map(ToString::to_string)
.unwrap_or_default();
let signature = header_signature(header, self.empty_steps_transition).as_ref()
.map(ToString::to_string)
.unwrap_or_default();

let mut info = map![
"step".into() => step,
@@ -1033,7 +1039,7 @@ impl Engine<EthereumMachine> for AuthorityRound {
return Seal::None;
}

let header = block.header();
let header = &block.header;
let parent_step = header_step(parent, self.empty_steps_transition)
.expect("Header has been verified; qed");

@@ -1079,7 +1085,7 @@ impl Engine<EthereumMachine> for AuthorityRound {
// `EmptyStep(step, parent_hash)` message. If we exceed the maximum amount of `empty_step` rounds we proceed
// with the seal.
if header.number() >= self.empty_steps_transition &&
block.transactions().is_empty() &&
block.transactions.is_empty() &&
empty_steps.len() < self.maximum_empty_steps {

if self.step.can_propose.compare_and_swap(true, false, AtomicOrdering::SeqCst) {
@@ -1149,7 +1155,7 @@ impl Engine<EthereumMachine> for AuthorityRound {
if self.immediate_transitions || !epoch_begin { return Ok(()) }

// genesis is never a new block, but might as well check.
let header = block.header().clone();
let header = block.header.clone();
let first = header.number() == 0;

let mut call = |to, data| {
@@ -1169,8 +1175,8 @@ impl Engine<EthereumMachine> for AuthorityRound {
/// Apply the block reward on finalisation of the block.
fn on_close_block(&self, block: &mut ExecutedBlock) -> Result<(), Error> {
let mut beneficiaries = Vec::new();
if block.header().number() >= self.empty_steps_transition {
let empty_steps = if block.header().seal().is_empty() {
if block.header.number() >= self.empty_steps_transition {
let empty_steps = if block.header.seal().is_empty() {
// this is a new block, calculate rewards based on the empty steps messages we have accumulated
let client = match self.client.read().as_ref().and_then(|weak| weak.upgrade()) {
Some(client) => client,
@@ -1180,7 +1186,7 @@ impl Engine<EthereumMachine> for AuthorityRound {
},
};

let parent = client.block_header(::client::BlockId::Hash(*block.header().parent_hash()))
let parent = client.block_header(::client::BlockId::Hash(*block.header.parent_hash()))
.expect("hash is from parent; parent header must exist; qed")
.decode()?;

@@ -1189,7 +1195,7 @@ impl Engine<EthereumMachine> for AuthorityRound {
self.empty_steps(parent_step.into(), current_step.into(), parent.hash())
} else {
// we're verifying a block, extract empty steps from the seal
header_empty_steps(block.header())?
header_empty_steps(&block.header)?
};

for empty_step in empty_steps {
@@ -1198,11 +1204,11 @@ impl Engine<EthereumMachine> for AuthorityRound {
}
}

let author = *block.header().author();
let author = *block.header.author();
beneficiaries.push((author, RewardKind::Author));

let rewards: Vec<_> = match self.block_reward_contract {
Some(ref c) if block.header().number() >= self.block_reward_contract_transition => {
Some(ref c) if block.header.number() >= self.block_reward_contract_transition => {
let mut call = super::default_system_or_code_call(&self.machine, block);

let rewards = c.reward(&beneficiaries, &mut call)?;
@@ -1631,23 +1637,23 @@ mod tests {
let db1 = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
let db2 = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
let last_hashes = Arc::new(vec![genesis_header.hash()]);
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
let b1 = b1.close_and_lock().unwrap();
let b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes, addr2, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
|
||||
let b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes, addr2, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
|
||||
let b2 = b2.close_and_lock().unwrap();
|
||||
|
||||
engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
|
||||
if let Seal::Regular(seal) = engine.generate_seal(b1.block(), &genesis_header) {
|
||||
if let Seal::Regular(seal) = engine.generate_seal(&b1, &genesis_header) {
|
||||
assert!(b1.clone().try_seal(engine, seal).is_ok());
|
||||
// Second proposal is forbidden.
|
||||
assert!(engine.generate_seal(b1.block(), &genesis_header) == Seal::None);
|
||||
assert!(engine.generate_seal(&b1, &genesis_header) == Seal::None);
|
||||
}
|
||||
|
||||
engine.set_signer(Box::new((tap, addr2, "2".into())));
|
||||
if let Seal::Regular(seal) = engine.generate_seal(b2.block(), &genesis_header) {
|
||||
if let Seal::Regular(seal) = engine.generate_seal(&b2, &genesis_header) {
|
||||
assert!(b2.clone().try_seal(engine, seal).is_ok());
|
||||
// Second proposal is forbidden.
|
||||
assert!(engine.generate_seal(b2.block(), &genesis_header) == Seal::None);
|
||||
assert!(engine.generate_seal(&b2, &genesis_header) == Seal::None);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1665,19 +1671,19 @@ mod tests {
|
||||
let db2 = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
|
||||
let last_hashes = Arc::new(vec![genesis_header.hash()]);
|
||||
|
||||
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
|
||||
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
|
||||
let b1 = b1.close_and_lock().unwrap();
|
||||
let b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes, addr2, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
|
||||
let b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes, addr2, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
|
||||
let b2 = b2.close_and_lock().unwrap();
|
||||
|
||||
engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
|
||||
match engine.generate_seal(b1.block(), &genesis_header) {
|
||||
match engine.generate_seal(&b1, &genesis_header) {
|
||||
Seal::None | Seal::Proposal(_) => panic!("wrong seal"),
|
||||
Seal::Regular(_) => {
|
||||
engine.step();
|
||||
|
||||
engine.set_signer(Box::new((tap.clone(), addr2, "0".into())));
|
||||
match engine.generate_seal(b2.block(), &genesis_header) {
|
||||
match engine.generate_seal(&b2, &genesis_header) {
|
||||
Seal::Regular(_) | Seal::Proposal(_) => panic!("sealed despite wrong difficulty"),
|
||||
Seal::None => {}
|
||||
}
|
||||
@@ -1901,11 +1907,11 @@ mod tests {
|
||||
|
||||
engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
|
||||
|
||||
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
|
||||
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
|
||||
let b1 = b1.close_and_lock().unwrap();
|
||||
|
||||
// the block is empty so we don't seal and instead broadcast an empty step message
|
||||
assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
|
||||
assert_eq!(engine.generate_seal(&b1, &genesis_header), Seal::None);
|
||||
|
||||
// spec starts with step 2
|
||||
let empty_step_rlp = encode(&empty_step(engine, 2, &genesis_header.hash()));
|
||||
@@ -1915,7 +1921,7 @@ mod tests {
|
||||
let len = notify.messages.read().len();
|
||||
|
||||
// make sure that we don't generate empty step for the second time
|
||||
assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
|
||||
assert_eq!(engine.generate_seal(&b1, &genesis_header), Seal::None);
|
||||
assert_eq!(len, notify.messages.read().len());
|
||||
}
|
||||
|
||||
@@ -1939,16 +1945,16 @@ mod tests {
|
||||
engine.register_client(Arc::downgrade(&client) as _);
|
||||
|
||||
// step 2
|
||||
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
|
||||
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
|
||||
let b1 = b1.close_and_lock().unwrap();
|
||||
|
||||
// since the block is empty it isn't sealed and we generate empty steps
|
||||
engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
|
||||
assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
|
||||
assert_eq!(engine.generate_seal(&b1, &genesis_header), Seal::None);
|
||||
engine.step();
|
||||
|
||||
// step 3
|
||||
let mut b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes.clone(), addr2, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
|
||||
let mut b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes.clone(), addr2, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
|
||||
b2.push_transaction(Transaction {
|
||||
action: Action::Create,
|
||||
nonce: U256::from(0),
|
||||
@@ -1961,7 +1967,7 @@ mod tests {
|
||||
|
||||
// we will now seal a block with 1tx and include the accumulated empty step message
|
||||
engine.set_signer(Box::new((tap.clone(), addr2, "0".into())));
|
||||
if let Seal::Regular(seal) = engine.generate_seal(b2.block(), &genesis_header) {
|
||||
if let Seal::Regular(seal) = engine.generate_seal(&b2, &genesis_header) {
|
||||
engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
|
||||
let empty_step2 = sealed_empty_step(engine, 2, &genesis_header.hash());
|
||||
let empty_steps = ::rlp::encode_list(&vec![empty_step2]);
|
||||
@@ -1992,28 +1998,28 @@ mod tests {
|
||||
engine.register_client(Arc::downgrade(&client) as _);
|
||||
|
||||
// step 2
|
||||
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
|
||||
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
|
||||
let b1 = b1.close_and_lock().unwrap();
|
||||
|
||||
// since the block is empty it isn't sealed and we generate empty steps
|
||||
engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
|
||||
assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
|
||||
assert_eq!(engine.generate_seal(&b1, &genesis_header), Seal::None);
|
||||
engine.step();
|
||||
|
||||
// step 3
|
||||
let b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes.clone(), addr2, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
|
||||
let b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes.clone(), addr2, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
|
||||
let b2 = b2.close_and_lock().unwrap();
|
||||
engine.set_signer(Box::new((tap.clone(), addr2, "0".into())));
|
||||
assert_eq!(engine.generate_seal(b2.block(), &genesis_header), Seal::None);
|
||||
assert_eq!(engine.generate_seal(&b2, &genesis_header), Seal::None);
|
||||
engine.step();
|
||||
|
||||
// step 4
|
||||
// the spec sets the maximum_empty_steps to 2 so we will now seal an empty block and include the empty step messages
|
||||
let b3 = OpenBlock::new(engine, Default::default(), false, db3, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
|
||||
let b3 = OpenBlock::new(engine, Default::default(), false, db3, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
|
||||
let b3 = b3.close_and_lock().unwrap();
|
||||
|
||||
engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
|
||||
if let Seal::Regular(seal) = engine.generate_seal(b3.block(), &genesis_header) {
|
||||
if let Seal::Regular(seal) = engine.generate_seal(&b3, &genesis_header) {
|
||||
let empty_step2 = sealed_empty_step(engine, 2, &genesis_header.hash());
|
||||
engine.set_signer(Box::new((tap.clone(), addr2, "0".into())));
|
||||
let empty_step3 = sealed_empty_step(engine, 3, &genesis_header.hash());
|
||||
@@ -2042,24 +2048,24 @@ mod tests {
|
||||
engine.register_client(Arc::downgrade(&client) as _);
|
||||
|
||||
// step 2
|
||||
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
|
||||
let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
|
||||
let b1 = b1.close_and_lock().unwrap();
|
||||
|
||||
// since the block is empty it isn't sealed and we generate empty steps
|
||||
engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
|
||||
assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
|
||||
assert_eq!(engine.generate_seal(&b1, &genesis_header), Seal::None);
|
||||
engine.step();
|
||||
|
||||
// step 3
|
||||
// the signer of the accumulated empty step message should be rewarded
|
||||
let b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
|
||||
let addr1_balance = b2.block().state().balance(&addr1).unwrap();
|
||||
let b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
|
||||
let addr1_balance = b2.state.balance(&addr1).unwrap();
|
||||
|
||||
// after closing the block `addr1` should be reward twice, one for the included empty step message and another for block creation
|
||||
let b2 = b2.close_and_lock().unwrap();
|
||||
|
||||
// the spec sets the block reward to 10
|
||||
assert_eq!(b2.block().state().balance(&addr1).unwrap(), addr1_balance + (10 * 2))
|
||||
assert_eq!(b2.state.balance(&addr1).unwrap(), addr1_balance + (10 * 2))
|
||||
}
|
||||
|
||||
#[test]
|
||||
@@ -2152,13 +2158,13 @@ mod tests {
|
||||
(3141562.into(), 31415620.into()),
|
||||
vec![],
|
||||
false,
|
||||
&mut Vec::new().into_iter(),
|
||||
None,
|
||||
).unwrap();
|
||||
let b1 = b1.close_and_lock().unwrap();
|
||||
|
||||
// since the block is empty it isn't sealed and we generate empty steps
|
||||
engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
|
||||
assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
|
||||
assert_eq!(engine.generate_seal(&b1, &genesis_header), Seal::None);
|
||||
engine.step();
|
||||
|
||||
// step 3
|
||||
@@ -2174,9 +2180,9 @@ mod tests {
|
||||
(3141562.into(), 31415620.into()),
|
||||
vec![],
|
||||
false,
|
||||
&mut Vec::new().into_iter(),
|
||||
None,
|
||||
).unwrap();
|
||||
let addr1_balance = b2.block().state().balance(&addr1).unwrap();
|
||||
let addr1_balance = b2.state.balance(&addr1).unwrap();
|
||||
|
||||
// after closing the block `addr1` should be reward twice, one for the included empty step
|
||||
// message and another for block creation
|
||||
@@ -2184,7 +2190,7 @@ mod tests {
|
||||
|
||||
// the contract rewards (1000 + kind) for each benefactor/reward kind
|
||||
assert_eq!(
|
||||
b2.block().state().balance(&addr1).unwrap(),
|
||||
b2.state.balance(&addr1).unwrap(),
|
||||
addr1_balance + (1000 + 0) + (1000 + 2),
|
||||
)
|
||||
}
|
||||
|
||||
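The empty-steps logic in the AuthorityRound hunks above reduces to a small predicate: instead of sealing, the engine broadcasts an `empty_step` message while the block contains no transactions and fewer than `maximum_empty_steps` empty steps have accumulated. A minimal standalone sketch of that decision (a simplified illustrative function, not the engine's actual API):

```rust
/// Sketch of the empty-step decision in `generate_seal` above: broadcast an
/// `empty_step` message instead of sealing while the block is empty and the
/// accumulated empty steps stay below the configured maximum.
fn should_broadcast_empty_step(tx_count: usize, empty_steps: usize, maximum_empty_steps: usize) -> bool {
    tx_count == 0 && empty_steps < maximum_empty_steps
}
```

This matches the test flow above: an empty block at step 2 yields `Seal::None` plus an empty-step broadcast, and once the maximum is reached the block is sealed even if still empty.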
@@ -104,7 +104,7 @@ impl Engine<EthereumMachine> for BasicAuthority {

 	/// Attempt to seal the block internally.
 	fn generate_seal(&self, block: &ExecutedBlock, _parent: &Header) -> Seal {
-		let header = block.header();
+		let header = &block.header;
 		let author = header.author();
 		if self.validators.contains(header.parent_hash(), author) {
 			// account should be permanently unlocked, otherwise sealing will fail
@@ -264,9 +264,9 @@ mod tests {
 		let genesis_header = spec.genesis_header();
 		let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
 		let last_hashes = Arc::new(vec![genesis_header.hash()]);
-		let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, addr, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
+		let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, addr, (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
 		let b = b.close_and_lock().unwrap();
-		if let Seal::Regular(seal) = engine.generate_seal(b.block(), &genesis_header) {
+		if let Seal::Regular(seal) = engine.generate_seal(&b, &genesis_header) {
 			assert!(b.try_seal(engine, seal).is_ok());
 		}
 	}
@@ -24,11 +24,12 @@ use ethereum_types::{H160, Address, U256};
 use std::sync::Arc;
 use hash::keccak;
 use error::Error;
-use machine::WithRewards;
-use parity_machine::Machine;
+use machine::Machine;
 use trace;
 use types::BlockNumber;
 use super::{SystemOrCodeCall, SystemOrCodeCallKind};
+use trace::{Tracer, ExecutiveTracer, Tracing};
+use block::ExecutedBlock;

 use_contract!(block_reward_contract, "res/contracts/block_reward.json");

@@ -152,17 +153,26 @@ impl BlockRewardContract {

 /// Applies the given block rewards, i.e. adds the given balance to each beneficiary's address.
 /// If tracing is enabled the operations are recorded.
-pub fn apply_block_rewards<M: Machine + WithRewards>(
+pub fn apply_block_rewards<M: Machine>(
 	rewards: &[(Address, RewardKind, U256)],
-	block: &mut M::LiveBlock,
+	block: &mut ExecutedBlock,
 	machine: &M,
 ) -> Result<(), M::Error> {
 	for &(ref author, _, ref block_reward) in rewards {
 		machine.add_balance(block, author, block_reward)?;
 	}

-	let rewards: Vec<_> = rewards.into_iter().map(|&(a, k, r)| (a, k.into(), r)).collect();
-	machine.note_rewards(block, &rewards)
+	if let Tracing::Enabled(ref mut traces) = *block.traces_mut() {
+		let mut tracer = ExecutiveTracer::default();
+
+		for &(address, reward_kind, amount) in rewards {
+			tracer.trace_reward(address, amount, reward_kind.into());
+		}
+
+		traces.push(tracer.drain().into());
+	}
+
+	Ok(())
 }

 #[cfg(test)]
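The reward application above boils down to iterating the `(author, kind, amount)` triples and crediting each beneficiary's balance. A simplified, self-contained sketch of that idea (addresses and amounts are reduced to plain `u64` for illustration; the real code routes through `machine.add_balance` and the executive tracer):

```rust
use std::collections::HashMap;

/// Simplified version of the `apply_block_rewards` logic above: credit every
/// beneficiary with its reward amount. A beneficiary may appear more than once,
/// e.g. once for an included empty step message and once for block authorship.
fn apply_block_rewards(balances: &mut HashMap<u64, u64>, rewards: &[(u64, u64)]) {
    for &(author, amount) in rewards {
        *balances.entry(author).or_insert(0) += amount;
    }
}
```

This mirrors the test further up where `addr1` ends up with `addr1_balance + (10 * 2)` after receiving the block reward twice.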
new file: ethcore/src/engines/clique/block_state.rs (367 lines)
@@ -0,0 +1,367 @@
+// Copyright 2015-2019 Parity Technologies (UK) Ltd.
+// This file is part of Parity Ethereum.
+
+// Parity Ethereum is free software: you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+
+// Parity Ethereum is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License
+// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.
+
+use std::collections::{HashMap, BTreeSet, VecDeque};
+use std::fmt;
+use std::time::{Duration, SystemTime, UNIX_EPOCH};
+
+use engines::EngineError;
+use engines::clique::util::{extract_signers, recover_creator};
+use engines::clique::{VoteType, DIFF_INTURN, DIFF_NOTURN, NULL_AUTHOR, SIGNING_DELAY_NOTURN_MS};
+use error::{Error, BlockError};
+use ethereum_types::{Address, H64};
+use rand::Rng;
+use time_utils::CheckedSystemTime;
+use types::BlockNumber;
+use types::header::Header;
+use unexpected::Mismatch;
+
+/// Type that keeps track of the state for a given vote
+// Votes that go against the proposal aren't counted since it's equivalent to not voting
+#[derive(Copy, Clone, Debug, PartialEq, PartialOrd)]
+pub struct VoteState {
+	kind: VoteType,
+	votes: u64,
+}
+
+/// Type that represents a vote
+#[derive(Copy, Clone, Debug, PartialEq, PartialOrd)]
+pub struct Vote {
+	block_number: BlockNumber,
+	beneficiary: Address,
+	kind: VoteType,
+	signer: Address,
+	reverted: bool,
+}
+
+/// Type that represents a pending vote
+#[derive(Copy, Clone, Debug, Eq, Hash, PartialEq, PartialOrd)]
+pub struct PendingVote {
+	signer: Address,
+	beneficiary: Address,
+}
+
+/// Clique state for each block.
+#[cfg(not(test))]
+#[derive(Clone, Debug, Default)]
+pub struct CliqueBlockState {
+	/// Current votes for a beneficiary
+	votes: HashMap<PendingVote, VoteState>,
+	/// A list of all votes for the given epoch
+	votes_history: Vec<Vote>,
+	/// A list of all valid signers, sorted in ascending order.
+	signers: BTreeSet<Address>,
+	/// A deque of recent signers; new entries are pushed to the front, apply() modifies this.
+	recent_signers: VecDeque<Address>,
+	/// Inturn signing should wait until this time
+	pub next_timestamp_inturn: Option<SystemTime>,
+	/// Noturn signing should wait until this time
+	pub next_timestamp_noturn: Option<SystemTime>,
+}
+
+#[cfg(test)]
+#[derive(Clone, Debug, Default)]
+pub struct CliqueBlockState {
+	/// All recorded votes for a given signer, `Vec<PendingVote>` is a stack of votes
+	pub votes: HashMap<PendingVote, VoteState>,
+	/// A list of all votes for the given epoch
+	pub votes_history: Vec<Vote>,
+	/// A list of all valid signers, sorted in ascending order.
+	pub signers: BTreeSet<Address>,
+	/// A deque of recent signers; new entries are pushed to the front, apply() modifies this.
+	pub recent_signers: VecDeque<Address>,
+	/// Inturn signing should wait until this time
+	pub next_timestamp_inturn: Option<SystemTime>,
+	/// Noturn signing should wait until this time
+	pub next_timestamp_noturn: Option<SystemTime>,
+}
+
+impl fmt::Display for CliqueBlockState {
+	fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+		let signers: Vec<String> = self.signers.iter()
+			.map(|s|
+				format!("{} {:?}",
+					s,
+					self.votes.iter().map(|(v, s)| format!("[beneficiary {}, votes: {}]", v.beneficiary, s.votes))
+						.collect::<Vec<_>>()
+				)
+			)
+			.collect();
+
+		let recent_signers: Vec<String> = self.recent_signers.iter().map(|s| format!("{}", s)).collect();
+		let num_votes = self.votes_history.len();
+		let add_votes = self.votes_history.iter().filter(|v| v.kind == VoteType::Add).count();
+		let rm_votes = self.votes_history.iter().filter(|v| v.kind == VoteType::Remove).count();
+		let reverted_votes = self.votes_history.iter().filter(|v| v.reverted).count();
+
+		write!(f,
+			"Votes {{ \n signers: {:?} \n recent_signers: {:?} \n number of votes: {} \n number of add votes {}
+			\r number of remove votes {} \n number of reverted votes: {}}}",
+			signers, recent_signers, num_votes, add_votes, rm_votes, reverted_votes)
+	}
+}
+
+impl CliqueBlockState {
+	/// Create a new state with the given signers; this is used when creating a new state from a checkpoint block.
+	pub fn new(signers: BTreeSet<Address>) -> Self {
+		CliqueBlockState {
+			signers,
+			..Default::default()
+		}
+	}
+
+	// see https://github.com/ethereum/go-ethereum/blob/master/consensus/clique/clique.go#L474
+	fn verify(&self, header: &Header) -> Result<Address, Error> {
+		let creator = recover_creator(header)?.clone();
+
+		// The signer is not authorized
+		if !self.signers.contains(&creator) {
+			trace!(target: "engine", "current state: {}", self);
+			Err(EngineError::NotAuthorized(creator))?
+		}
+
+		// The signer has signed a block too recently
+		if self.recent_signers.contains(&creator) {
+			trace!(target: "engine", "current state: {}", self);
+			Err(EngineError::CliqueTooRecentlySigned(creator))?
+		}
+
+		// Wrong difficulty
+		let inturn = self.is_inturn(header.number(), &creator);
+
+		if inturn && *header.difficulty() != DIFF_INTURN {
+			Err(BlockError::InvalidDifficulty(Mismatch {
+				expected: DIFF_INTURN,
+				found: *header.difficulty(),
+			}))?
+		}
+
+		if !inturn && *header.difficulty() != DIFF_NOTURN {
+			Err(BlockError::InvalidDifficulty(Mismatch {
+				expected: DIFF_NOTURN,
+				found: *header.difficulty(),
+			}))?
+		}
+
+		Ok(creator)
+	}
+
+	/// Verify and apply a new header to the current state
+	pub fn apply(&mut self, header: &Header, is_checkpoint: bool) -> Result<Address, Error> {
+		let creator = self.verify(header)?;
+		self.recent_signers.push_front(creator);
+		self.rotate_recent_signers();
+
+		if is_checkpoint {
+			// checkpoint block should not affect previous tallying, so we check that.
+			let signers = extract_signers(header)?;
+			if self.signers != signers {
+				let invalid_signers: Vec<String> = signers.into_iter()
+					.filter(|s| !self.signers.contains(s))
+					.map(|s| format!("{}", s))
+					.collect();
+				Err(EngineError::CliqueFaultyRecoveredSigners(invalid_signers))?
+			};
+
+			// TODO(niklasad1): I'm not sure if we should shrink here because it is likely that the next epoch
+			// will need some memory, and it might be better for the allocation algorithm to decide whether to shrink or not
+			// (typically doubles or halves the allocated memory when necessary)
+			self.votes.clear();
+			self.votes_history.clear();
+			self.votes.shrink_to_fit();
+			self.votes_history.shrink_to_fit();
+		}
+
+		// Contains a vote
+		if *header.author() != NULL_AUTHOR {
+			let decoded_seal = header.decode_seal::<Vec<_>>()?;
+			if decoded_seal.len() != 2 {
+				Err(BlockError::InvalidSealArity(Mismatch { expected: 2, found: decoded_seal.len() }))?
+			}
+
+			let nonce: H64 = decoded_seal[1].into();
+			self.update_signers_on_vote(VoteType::from_nonce(nonce)?, creator, *header.author(), header.number())?;
+		}
+
+		Ok(creator)
+	}
+
+	fn update_signers_on_vote(
+		&mut self,
+		kind: VoteType,
+		signer: Address,
+		beneficiary: Address,
+		block_number: u64
+	) -> Result<(), Error> {
+
+		trace!(target: "engine", "Attempt vote {:?} {:?}", kind, beneficiary);
+
+		let pending_vote = PendingVote { signer, beneficiary };
+
+		let reverted = if self.is_valid_vote(&beneficiary, kind) {
+			self.add_vote(pending_vote, kind)
+		} else {
+			// This case only happens if a `signer` wants to revert their previous vote
+			// (does nothing if no previous vote was found)
+			self.revert_vote(pending_vote)
+		};
+
+		// Add all votes to the history
+		self.votes_history.push(
+			Vote {
+				block_number,
+				beneficiary,
+				kind,
+				signer,
+				reverted,
+			});
+
+		// If no vote was found for the beneficiary, return early but don't propagate an error
+		let (votes, vote_kind) = match self.get_current_votes_and_kind(beneficiary) {
+			Some((v, k)) => (v, k),
+			None => return Ok(()),
+		};
+		let threshold = self.signers.len() / 2;
+
+		debug!(target: "engine", "{}/{} votes to have consensus", votes, threshold + 1);
+		trace!(target: "engine", "votes: {:?}", votes);
+
+		if votes > threshold {
+			match vote_kind {
+				VoteType::Add => {
+					if self.signers.insert(beneficiary) {
+						debug!(target: "engine", "added new signer: {}", beneficiary);
+					}
+				}
+				VoteType::Remove => {
+					if self.signers.remove(&beneficiary) {
+						debug!(target: "engine", "removed signer: {}", beneficiary);
+					}
+				}
+			}
+
+			self.rotate_recent_signers();
+			self.remove_all_votes_from(beneficiary);
+		}
+
+		Ok(())
+	}
+
+	/// Calculate the next timestamp for `inturn` and `noturn`; fails if either of them can't be represented as
+	/// a `SystemTime`
+	// TODO(niklasad1): refactor this method to be in the constructor of `CliqueBlockState` instead.
+	// This is quite a bad API because we must mutate both variables even when `inturn` already fails.
+	// That's why we can't return early and must have the `if-else` at the end
+	pub fn calc_next_timestamp(&mut self, timestamp: u64, period: u64) -> Result<(), Error> {
+		let inturn = CheckedSystemTime::checked_add(UNIX_EPOCH, Duration::from_secs(timestamp.saturating_add(period)));
+
+		self.next_timestamp_inturn = inturn;
+
+		let delay = Duration::from_millis(
+			rand::thread_rng().gen_range(0u64, (self.signers.len() as u64 / 2 + 1) * SIGNING_DELAY_NOTURN_MS));
+		self.next_timestamp_noturn = inturn.map(|inturn| {
+			inturn + delay
+		});
+
+		if self.next_timestamp_inturn.is_some() && self.next_timestamp_noturn.is_some() {
+			Ok(())
+		} else {
+			Err(BlockError::TimestampOverflow)?
+		}
+	}
+
+	/// Returns true if the block difficulty should be `inturn`
+	pub fn is_inturn(&self, current_block_number: u64, author: &Address) -> bool {
+		if let Some(pos) = self.signers.iter().position(|x| *author == *x) {
+			return current_block_number % self.signers.len() as u64 == pos as u64;
+		}
+		false
+	}
+
+	/// Returns whether the signer is authorized to sign a block
+	pub fn is_authorized(&self, author: &Address) -> bool {
+		self.signers.contains(author) && !self.recent_signers.contains(author)
+	}
+
+	/// Returns whether it makes sense to cast the specified vote in the
+	/// current state (e.g. don't try to add an already authorized signer).
+	pub fn is_valid_vote(&self, address: &Address, vote_type: VoteType) -> bool {
+		let in_signer = self.signers.contains(address);
+		match vote_type {
+			VoteType::Add => !in_signer,
+			VoteType::Remove => in_signer,
+		}
+	}
+
+	/// Returns the list of current signers
+	pub fn signers(&self) -> &BTreeSet<Address> {
+		&self.signers
+	}
+
+	// Note: this method will always return `true`, but it is kept for a uniform API
+	fn add_vote(&mut self, pending_vote: PendingVote, kind: VoteType) -> bool {
+
+		self.votes.entry(pending_vote)
+			.and_modify(|state| {
+				state.votes = state.votes.saturating_add(1);
+			})
+			.or_insert_with(|| VoteState { kind, votes: 1 });
+		true
+	}
+
+	fn revert_vote(&mut self, pending_vote: PendingVote) -> bool {
+		let mut revert = false;
+		let mut remove = false;
+
+		self.votes.entry(pending_vote).and_modify(|state| {
+			if state.votes.saturating_sub(1) == 0 {
+				remove = true;
+			}
+			revert = true;
+		});
+
+		if remove {
+			self.votes.remove(&pending_vote);
+		}
+
+		revert
+	}
+
+	fn get_current_votes_and_kind(&self, beneficiary: Address) -> Option<(usize, VoteType)> {
+		let kind = self.votes.iter()
+			.find(|(v, _t)| v.beneficiary == beneficiary)
+			.map(|(_v, t)| t.kind)?;
+
+		let votes = self.votes.keys()
+			.filter(|vote| vote.beneficiary == beneficiary)
+			.count();
+
+		Some((votes, kind))
+	}
+
+	fn rotate_recent_signers(&mut self) {
+		if self.recent_signers.len() >= (self.signers.len() / 2) + 1 {
+			self.recent_signers.pop_back();
+		}
+	}
+
+	fn remove_all_votes_from(&mut self, beneficiary: Address) {
+		self.votes = std::mem::replace(&mut self.votes, HashMap::new())
+			.into_iter()
+			.filter(|(v, _t)| v.signer != beneficiary && v.beneficiary != beneficiary)
+			.collect();
+	}
+}
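Two of the rules implemented by `CliqueBlockState` above are easy to state in isolation: a signer is "in turn" when the block number modulo the signer count equals its index in the sorted signer list, and a vote takes effect once it collects a strict majority of the signers (`votes > signers.len() / 2`). A standalone sketch, with `u64` standing in for `Address` purely for brevity:

```rust
/// In-turn check mirroring `CliqueBlockState::is_inturn`: the author's index
/// in the sorted signer list must equal block_number % signer_count.
fn is_inturn(signers: &[u64], block_number: u64, author: u64) -> bool {
    signers.iter().position(|s| *s == author)
        .map_or(false, |pos| block_number % signers.len() as u64 == pos as u64)
}

/// Majority rule from `update_signers_on_vote`: a vote to add or remove a
/// signer takes effect once it passes the threshold of signer_count / 2.
fn vote_passes(signer_count: usize, votes: usize) -> bool {
    votes > signer_count / 2
}
```

With three signers, block 4 is in turn for the signer at index 1 (4 % 3 == 1); with four signers, three votes pass the threshold while two do not.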
new file: ethcore/src/engines/clique/mod.rs (774 lines)
@@ -0,0 +1,774 @@
// Copyright 2015-2019 Parity Technologies (UK) Ltd.
// This file is part of Parity Ethereum.

// Parity Ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Parity Ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

//! Implementation of the Clique PoA Engine.
//!
//! File structure:
//! - mod.rs -> Provides the engine API implementation, with additional block state tracking
//! - block_state.rs -> Records the Clique state for given block.
//! - params.rs -> Contains the parameters for the Clique engine.
//! - step_service.rs -> An event loop to trigger sealing.
//! - util.rs -> Various standalone utility functions.
//! - tests.rs -> Consensus tests as defined in EIP-225.

/// How syncing works:
///
/// 1. Client will call:
///    - `Clique::verify_block_basic()`
///    - `Clique::verify_block_unordered()`
///    - `Clique::verify_block_family()`
/// 2. Using `Clique::state()` we try and retrieve the parent state. If this isn't found
///    we need to back-fill it from the last known checkpoint.
/// 3. Once we have a good state, we can record it using `CliqueBlockState::apply()`.

/// How sealing works:
///
/// 1. Set a signer using `Engine::set_signer()`. If a miner account was set up through
///    a config file or CLI flag, `MinerService::set_author()` will eventually set the signer.
/// 2. We check that the engine seals internally through `Clique::seals_internally()`.
///    Note: This is always true for Clique.
/// 3. Calling `Clique::new()` will spawn a `StepService` thread. This thread will call `Engine::step()`
///    periodically. Internally, the Clique `step()` function calls `Client::update_sealing()`, which is
///    what makes and seals a block.
/// 4. `Clique::generate_seal()` will then be called by `miner`. This will return a `Seal` which
///    is either a `Seal::None` or `Seal::Regular`. The following shows how a `Seal` variant is chosen:
///    a. We return `Seal::None` if no signer is available or the signer is not authorized.
///    b. If period == 0 and the block has transactions, we return `Seal::Regular`; otherwise we return `Seal::None`.
///    c. If we're `INTURN`, wait for at least `period` since the last block before trying to seal.
///    d. If we're not `INTURN`, we wait for a random amount of time, using the algorithm specified
///       in EIP-225, before trying to seal again.
/// 5. Miner will create a new block, and in the process call several engine methods to do the following:
///    a. `Clique::open_block_header_timestamp()` must set the timestamp correctly.
///    b. `Clique::populate_from_parent()` must set the difficulty to the correct value.
///    Note: `Clique::populate_from_parent()` is used in both the syncing and sealing code paths.
/// 6. We call `Clique::on_seal_block()` which will allow us to modify the block header during seal generation.
/// 7. Finally, `Clique::verify_local_seal()` is called. After this, the syncing code path will be followed
///    in order to import the new block.

use std::cmp;
use std::collections::HashMap;
use std::collections::VecDeque;
use std::sync::{Arc, Weak};
use std::thread;
use std::time;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

use block::ExecutedBlock;
use bytes::Bytes;
use client::{BlockId, EngineClient};
use engines::clique::util::{extract_signers, recover_creator};
use engines::{Engine, EngineError, Seal};
use error::{BlockError, Error};
use ethereum_types::{Address, H64, H160, H256, U256};
use ethkey::Signature;
use hash::KECCAK_EMPTY_LIST_RLP;
use itertools::Itertools;
use lru_cache::LruCache;
use machine::{Call, EthereumMachine};
use parking_lot::RwLock;
use rand::Rng;
use super::signer::EngineSigner;
use unexpected::{Mismatch, OutOfBounds};
use time_utils::CheckedSystemTime;
use types::BlockNumber;
use types::header::{ExtendedHeader, Header};

use self::block_state::CliqueBlockState;
use self::params::CliqueParams;
use self::step_service::StepService;

mod params;
mod block_state;
mod step_service;
mod util;

// TODO(niklasad1): extract tester types into a separate mod to be shared in the code base
#[cfg(test)]
mod tests;

// Protocol constants
/// Fixed number of extra-data prefix bytes reserved for signer vanity
pub const VANITY_LENGTH: usize = 32;
/// Fixed number of extra-data suffix bytes reserved for signer signature
pub const SIGNATURE_LENGTH: usize = 65;
/// Address length of signer
pub const ADDRESS_LENGTH: usize = 20;
/// Nonce value for DROP vote
pub const NONCE_DROP_VOTE: H64 = H64([0; 8]);
/// Nonce value for AUTH vote
pub const NONCE_AUTH_VOTE: H64 = H64([0xff; 8]);
/// Difficulty for INTURN block
pub const DIFF_INTURN: U256 = U256([2, 0, 0, 0]);
/// Difficulty for NOTURN block
pub const DIFF_NOTURN: U256 = U256([1, 0, 0, 0]);
/// Default empty author field value
pub const NULL_AUTHOR: Address = H160([0x00; 20]);
/// Default empty nonce value
pub const NULL_NONCE: H64 = NONCE_DROP_VOTE;
/// Default value for mixhash
pub const NULL_MIXHASH: H256 = H256([0; 32]);
/// Default value for uncles hash
pub const NULL_UNCLES_HASH: H256 = KECCAK_EMPTY_LIST_RLP;
/// Default noturn block wiggle factor defined in spec.
pub const SIGNING_DELAY_NOTURN_MS: u64 = 500;

/// How many CliqueBlockState to cache in the memory.
pub const STATE_CACHE_NUM: usize = 128;

/// Vote to add or remove the beneficiary
#[derive(Copy, Clone, Debug, PartialEq, PartialOrd)]
pub enum VoteType {
	Add,
	Remove,
}

impl VoteType {
	/// Try to construct a `Vote` from a nonce
	pub fn from_nonce(nonce: H64) -> Result<Self, Error> {
		if nonce == NONCE_AUTH_VOTE {
			Ok(VoteType::Add)
		} else if nonce == NONCE_DROP_VOTE {
			Ok(VoteType::Remove)
		} else {
			Err(EngineError::CliqueInvalidNonce(nonce))?
		}
	}

	/// Get the rlp encoding of the vote
	pub fn as_rlp(&self) -> Vec<Vec<u8>> {
		match self {
			VoteType::Add => vec![rlp::encode(&NULL_MIXHASH), rlp::encode(&NONCE_AUTH_VOTE)],
			VoteType::Remove => vec![rlp::encode(&NULL_MIXHASH), rlp::encode(&NONCE_DROP_VOTE)],
		}
	}
}

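Per EIP-225, a vote is carried in the header nonce: all `0xff` bytes authorize the beneficiary, all `0x00` bytes drop it, and anything else is invalid. A std-only sketch of `from_nonce` using a `[u8; 8]` alias in place of `H64` (the `Nonce` alias and error type are simplifications, not the engine's real signatures):

```rust
// Stand-in for ethereum_types::H64.
pub type Nonce = [u8; 8];

pub const NONCE_DROP_VOTE: Nonce = [0x00; 8];
pub const NONCE_AUTH_VOTE: Nonce = [0xff; 8];

#[derive(Debug, PartialEq)]
pub enum VoteType { Add, Remove }

/// Map a header nonce to a vote, rejecting anything that is not all-zeros or all-ones.
pub fn vote_from_nonce(nonce: Nonce) -> Result<VoteType, String> {
	if nonce == NONCE_AUTH_VOTE {
		Ok(VoteType::Add)
	} else if nonce == NONCE_DROP_VOTE {
		Ok(VoteType::Remove)
	} else {
		Err(format!("invalid vote nonce: {:?}", nonce))
	}
}

fn main() {
	assert_eq!(vote_from_nonce([0xff; 8]), Ok(VoteType::Add));
	assert_eq!(vote_from_nonce([0x00; 8]), Ok(VoteType::Remove));
	assert!(vote_from_nonce([0x01; 8]).is_err());
}
```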
/// Clique Engine implementation
// block_state_by_hash -> block state indexed by header hash.
#[cfg(not(test))]
pub struct Clique {
	epoch_length: u64,
	period: u64,
	machine: EthereumMachine,
	client: RwLock<Option<Weak<EngineClient>>>,
	block_state_by_hash: RwLock<LruCache<H256, CliqueBlockState>>,
	proposals: RwLock<HashMap<Address, VoteType>>,
	signer: RwLock<Option<Box<EngineSigner>>>,
	step_service: Option<Arc<StepService>>,
}

#[cfg(test)]
/// Test version of `CliqueEngine` to make all fields public
pub struct Clique {
	pub epoch_length: u64,
	pub period: u64,
	pub machine: EthereumMachine,
	pub client: RwLock<Option<Weak<EngineClient>>>,
	pub block_state_by_hash: RwLock<LruCache<H256, CliqueBlockState>>,
	pub proposals: RwLock<HashMap<Address, VoteType>>,
	pub signer: RwLock<Option<Box<EngineSigner>>>,
	pub step_service: Option<Arc<StepService>>,
}

impl Clique {
	/// Initialize Clique engine from empty state.
	pub fn new(our_params: CliqueParams, machine: EthereumMachine) -> Result<Arc<Self>, Error> {
		let mut engine = Clique {
			epoch_length: our_params.epoch,
			period: our_params.period,
			client: Default::default(),
			block_state_by_hash: RwLock::new(LruCache::new(STATE_CACHE_NUM)),
			proposals: Default::default(),
			signer: Default::default(),
			machine,
			step_service: None,
		};

		let res = Arc::new(engine);

		if our_params.period > 0 {
			engine.step_service = Some(StepService::start(Arc::downgrade(&res) as Weak<Engine<_>>));
		}

		Ok(res)
	}

	#[cfg(test)]
	/// Initialize test variant of `CliqueEngine`.
	/// Note we need to `mock` the miner; it is introduced to test block verification, to trigger new blocks
	/// and mainly to test consensus edge cases.
	pub fn with_test(epoch_length: u64, period: u64) -> Self {
		use spec::Spec;

		Self {
			epoch_length,
			period,
			client: Default::default(),
			block_state_by_hash: RwLock::new(LruCache::new(STATE_CACHE_NUM)),
			proposals: Default::default(),
			signer: Default::default(),
			machine: Spec::new_test_machine(),
			step_service: None,
		}
	}

	fn sign_header(&self, header: &Header) -> Result<(Signature, H256), Error> {
		match self.signer.read().as_ref() {
			None => {
				Err(EngineError::RequiresSigner)?
			}
			Some(signer) => {
				let digest = header.hash();
				match signer.sign(digest) {
					Ok(sig) => Ok((sig, digest)),
					Err(e) => Err(EngineError::Custom(e.into()))?,
				}
			}
		}
	}

	/// Construct a new state from a given checkpoint header.
	fn new_checkpoint_state(&self, header: &Header) -> Result<CliqueBlockState, Error> {
		debug_assert_eq!(header.number() % self.epoch_length, 0);

		let mut state = CliqueBlockState::new(
			extract_signers(header)?);

		// TODO(niklasad1): refactor to perform this check in the `CliqueBlockState` constructor instead
		state.calc_next_timestamp(header.timestamp(), self.period)?;

		Ok(state)
	}

	fn state_no_backfill(&self, hash: &H256) -> Option<CliqueBlockState> {
		self.block_state_by_hash.write().get_mut(hash).cloned()
	}

	/// Get `CliqueBlockState` for given header, backfill from last checkpoint if needed.
	fn state(&self, header: &Header) -> Result<CliqueBlockState, Error> {
		let mut block_state_by_hash = self.block_state_by_hash.write();
		if let Some(state) = block_state_by_hash.get_mut(&header.hash()) {
			return Ok(state.clone());
		}
		// If we are looking for a checkpoint block state, we can directly reconstruct it.
		if header.number() % self.epoch_length == 0 {
			let state = self.new_checkpoint_state(header)?;
			block_state_by_hash.insert(header.hash(), state.clone());
			return Ok(state);
		}
		// BlockState is not found in memory, which means we need to reconstruct state from last checkpoint.
		match self.client.read().as_ref().and_then(|w| w.upgrade()) {
			None => {
				return Err(EngineError::RequiresClient)?;
			}
			Some(c) => {
				let last_checkpoint_number = header.number() - header.number() % self.epoch_length as u64;
				debug_assert_ne!(last_checkpoint_number, header.number());

				// Catching up state, note that we don't really store block state for intermediary blocks,
				// for speed.
				let backfill_start = time::Instant::now();
				trace!(target: "engine",
					"Back-filling block state. last_checkpoint_number: {}, target: {}({}).",
					last_checkpoint_number, header.number(), header.hash());

				let mut chain: &mut VecDeque<Header> = &mut VecDeque::with_capacity(
					(header.number() - last_checkpoint_number + 1) as usize);

				// Put ourselves in.
				chain.push_front(header.clone());

				// populate chain to last checkpoint
				loop {
					let (last_parent_hash, last_num) = {
						let l = chain.front().expect("chain has at least one element; qed");
						(*l.parent_hash(), l.number())
					};

					if last_num == last_checkpoint_number + 1 {
						break;
					}
					match c.block_header(BlockId::Hash(last_parent_hash)) {
						None => {
							return Err(BlockError::UnknownParent(last_parent_hash))?;
						}
						Some(next) => {
							chain.push_front(next.decode()?);
						}
					}
				}

				// Get the state for last checkpoint.
				let last_checkpoint_hash = *chain.front()
					.expect("chain has at least one element; qed")
					.parent_hash();

				let last_checkpoint_header = match c.block_header(BlockId::Hash(last_checkpoint_hash)) {
					None => return Err(EngineError::CliqueMissingCheckpoint(last_checkpoint_hash))?,
					Some(header) => header.decode()?,
				};

				let last_checkpoint_state = match block_state_by_hash.get_mut(&last_checkpoint_hash) {
					Some(state) => state.clone(),
					None => self.new_checkpoint_state(&last_checkpoint_header)?,
				};

				block_state_by_hash.insert(last_checkpoint_header.hash(), last_checkpoint_state.clone());

				// Backfill!
				let mut new_state = last_checkpoint_state.clone();
				for item in chain {
					new_state.apply(item, false)?;
				}
				new_state.calc_next_timestamp(header.timestamp(), self.period)?;
				block_state_by_hash.insert(header.hash(), new_state.clone());

				let elapsed = backfill_start.elapsed();
				trace!(target: "engine", "Back-filling succeeded, took {} ms.", elapsed.as_millis());
				Ok(new_state)
			}
		}
	}
}

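The back-fill in `state()` walks parent hashes into the front of a `VecDeque` until it reaches the block right after the last checkpoint, then replays the chain oldest-first. A std-only miniature of that walk; the `Header` struct and `u64` hashes here are hypothetical stand-ins for the real header type and client lookup:

```rust
use std::collections::{HashMap, VecDeque};

// Minimal stand-in for a block header: hashes are just u64 here.
#[derive(Clone)]
pub struct Header { pub number: u64, pub hash: u64, pub parent_hash: u64 }

/// Walk parents back until the first block after the last checkpoint,
/// returning the chain oldest-first (as `Clique::state()` builds its VecDeque).
/// Returns `None` if a parent is unknown, mirroring the `UnknownParent` error.
pub fn backfill_chain(
	by_hash: &HashMap<u64, Header>,
	target: &Header,
	epoch_length: u64,
) -> Option<VecDeque<Header>> {
	let last_checkpoint_number = target.number - target.number % epoch_length;
	let mut chain = VecDeque::new();
	chain.push_front(target.clone());

	loop {
		let (parent_hash, number) = {
			let front = chain.front().expect("chain has at least one element");
			(front.parent_hash, front.number)
		};
		if number == last_checkpoint_number + 1 {
			return Some(chain);
		}
		chain.push_front(by_hash.get(&parent_hash)?.clone());
	}
}

fn main() {
	// hash = 100 + number, parent_hash = hash - 1; epoch length 10, target block 13.
	let mut by_hash = HashMap::new();
	for n in 10..=13 {
		by_hash.insert(100 + n, Header { number: n, hash: 100 + n, parent_hash: 100 + n - 1 });
	}
	let target = by_hash[&113].clone();
	let chain = backfill_chain(&by_hash, &target, 10).unwrap();
	assert_eq!(chain.len(), 3); // blocks 11, 12, 13
	assert_eq!(chain.front().unwrap().number, 11);
}
```

The checkpoint block itself (number 10 above) is not part of the replayed chain; its state is reconstructed directly from the signer list embedded in its extra_data.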
impl Engine<EthereumMachine> for Clique {
	fn name(&self) -> &str { "Clique" }

	fn machine(&self) -> &EthereumMachine { &self.machine }

	// Clique uses the same two seal fields: nonce + mixHash
	fn seal_fields(&self, _header: &Header) -> usize { 2 }

	fn maximum_uncle_count(&self, _block: BlockNumber) -> usize { 0 }

	fn on_new_block(
		&self,
		_block: &mut ExecutedBlock,
		_epoch_begin: bool,
		_ancestry: &mut Iterator<Item=ExtendedHeader>,
	) -> Result<(), Error> {
		Ok(())
	}

	// Clique has no block reward.
	fn on_close_block(&self, _block: &mut ExecutedBlock) -> Result<(), Error> {
		Ok(())
	}

	fn on_seal_block(&self, block: &mut ExecutedBlock) -> Result<(), Error> {
		trace!(target: "engine", "on_seal_block");

		let header = &mut block.header;

		let state = self.state_no_backfill(header.parent_hash())
			.ok_or_else(|| BlockError::UnknownParent(*header.parent_hash()))?;

		let is_checkpoint = header.number() % self.epoch_length == 0;

		header.set_author(NULL_AUTHOR);

		// Cast a random Vote if not checkpoint
		if !is_checkpoint {
			// TODO(niklasad1): this will always be false because `proposals` is never written to
			let votes = self.proposals.read().iter()
				.filter(|(address, vote_type)| state.is_valid_vote(*address, **vote_type))
				.map(|(address, vote_type)| (*address, *vote_type))
				.collect_vec();

			if !votes.is_empty() {
				// Pick a random vote.
				let random_vote = rand::thread_rng().gen_range(0 as usize, votes.len());
				let (beneficiary, vote_type) = votes[random_vote];

				trace!(target: "engine", "Casting vote: beneficiary {}, type {:?} ", beneficiary, vote_type);

				header.set_author(beneficiary);
				header.set_seal(vote_type.as_rlp());
			}
		}

		// Work on clique seal.

		let mut seal: Vec<u8> = Vec::with_capacity(VANITY_LENGTH + SIGNATURE_LENGTH);

		// At this point, extra_data should only contain miner vanity.
		if header.extra_data().len() != VANITY_LENGTH {
			Err(BlockError::ExtraDataOutOfBounds(OutOfBounds {
				min: Some(VANITY_LENGTH),
				max: Some(VANITY_LENGTH),
				found: header.extra_data().len()
			}))?;
		}
		// vanity
		{
			seal.extend_from_slice(&header.extra_data()[0..VANITY_LENGTH]);
		}

		// If we are building a checkpoint block, add all signers now.
		if is_checkpoint {
			seal.reserve(state.signers().len() * 20);
			state.signers().iter().foreach(|addr| {
				seal.extend_from_slice(&addr[..]);
			});
		}

		header.set_extra_data(seal.clone());

		// append signature onto extra_data
		let (sig, _msg) = self.sign_header(&header)?;
		seal.extend_from_slice(&sig[..]);
		header.set_extra_data(seal.clone());

		header.compute_hash();

		// Locally sealed blocks don't go through verify_block_family(), so we have to record state here.
		let mut new_state = state.clone();
		new_state.apply(&header, is_checkpoint)?;
		new_state.calc_next_timestamp(header.timestamp(), self.period)?;
		self.block_state_by_hash.write().insert(header.hash(), new_state);

		trace!(target: "engine", "on_seal_block: finished, final header: {:?}", header);

		Ok(())
	}

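`on_seal_block()` rebuilds `extra_data` as a fixed layout: a 32-byte vanity prefix, the concatenated signer addresses on checkpoint blocks only, then a 65-byte signature suffix. A std-only sketch of that assembly (the function name and fixed-size array parameters are illustrative, not the engine's API):

```rust
pub const VANITY_LENGTH: usize = 32;
pub const SIGNATURE_LENGTH: usize = 65;
pub const ADDRESS_LENGTH: usize = 20;

/// Assemble Clique extra_data: vanity || signers (checkpoint only) || signature.
pub fn assemble_extra_data(
	vanity: &[u8; VANITY_LENGTH],
	signers: &[[u8; ADDRESS_LENGTH]],
	signature: &[u8; SIGNATURE_LENGTH],
	is_checkpoint: bool,
) -> Vec<u8> {
	let mut seal = Vec::with_capacity(VANITY_LENGTH + SIGNATURE_LENGTH);
	seal.extend_from_slice(vanity);

	// On checkpoint blocks the full signer set sits between vanity and signature.
	if is_checkpoint {
		seal.reserve(signers.len() * ADDRESS_LENGTH);
		for addr in signers {
			seal.extend_from_slice(addr);
		}
	}

	seal.extend_from_slice(signature);
	seal
}

fn main() {
	let checkpoint = assemble_extra_data(&[0u8; 32], &[[1u8; 20], [2u8; 20]], &[3u8; 65], true);
	assert_eq!(checkpoint.len(), 32 + 2 * 20 + 65);

	let regular = assemble_extra_data(&[0u8; 32], &[[1u8; 20]], &[3u8; 65], false);
	assert_eq!(regular.len(), 32 + 65);
}
```

This layout is why the basic verification below can recover the signer count as `extra_data_len - (VANITY_LENGTH + SIGNATURE_LENGTH)` divided by the address length.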
	/// Clique doesn't require external work to seal, so we always return true here.
	fn seals_internally(&self) -> Option<bool> {
		Some(true)
	}

	/// Returns whether we are ready to seal; the real sealing (signing extra_data) is actually done in `on_seal_block()`.
	fn generate_seal(&self, block: &ExecutedBlock, parent: &Header) -> Seal {
		trace!(target: "engine", "tried to generate_seal");
		let null_seal = util::null_seal();

		if block.header.number() == 0 {
			trace!(target: "engine", "attempted to seal genesis block");
			return Seal::None;
		}

		// If the sealing period is 0, refuse to seal empty non-checkpoint blocks.
		if self.period == 0 {
			if block.transactions.is_empty() && block.header.number() % self.epoch_length != 0 {
				return Seal::None;
			}
			return Seal::Regular(null_seal);
		}

		// Check we actually have authority to seal.
		if let Some(author) = self.signer.read().as_ref().map(|x| x.address()) {

			// ensure the voting state exists
			match self.state(&parent) {
				Err(e) => {
					warn!(target: "engine", "generate_seal: can't get parent state(number: {}, hash: {}): {} ",
						parent.number(), parent.hash(), e);
					return Seal::None;
				}
				Ok(state) => {
					// Are we authorized to seal?
					if !state.is_authorized(&author) {
						trace!(target: "engine", "generate_seal: Not authorized to sign right now.");
						// wait for one third of period to try again.
						thread::sleep(Duration::from_secs(self.period / 3 + 1));
						return Seal::None;
					}

					let inturn = state.is_inturn(block.header.number(), &author);

					let now = SystemTime::now();

					let limit = match inturn {
						true => state.next_timestamp_inturn.unwrap_or(now),
						false => state.next_timestamp_noturn.unwrap_or(now),
					};

					// Wait for the right moment.
					if now < limit {
						trace!(target: "engine",
							"generate_seal: sleeping to sign: inturn: {}, now: {:?}, to: {:?}.",
							inturn, now, limit);
						match limit.duration_since(SystemTime::now()) {
							Ok(duration) => {
								thread::sleep(duration);
							},
							Err(e) => {
								warn!(target: "engine", "generate_seal: unable to sleep, err: {}", e);
								return Seal::None;
							}
						}
					}

					trace!(target: "engine", "generate_seal: seal ready for block {}, txs: {}.",
						block.header.number(), block.transactions.len());
					return Seal::Regular(null_seal);
				}
			}
		}
		Seal::None
	}

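The `period == 0` early return above encodes a small decision rule: on a zero-period chain, seal only when there is something worth sealing, i.e. the block carries transactions or is an epoch checkpoint. Isolated as a pure std-only function (a simplified `Seal` enum and a hypothetical helper name, not the engine's types):

```rust
#[derive(Debug, PartialEq)]
pub enum Seal { None, Regular }

/// Decision for a zero-period chain: only seal when the block carries
/// transactions or is an epoch checkpoint.
pub fn zero_period_seal(block_number: u64, tx_count: usize, epoch_length: u64) -> Seal {
	if tx_count == 0 && block_number % epoch_length != 0 {
		Seal::None
	} else {
		Seal::Regular
	}
}

fn main() {
	assert_eq!(zero_period_seal(7, 0, 30000), Seal::None);        // empty, not a checkpoint
	assert_eq!(zero_period_seal(7, 3, 30000), Seal::Regular);     // has transactions
	assert_eq!(zero_period_seal(30000, 0, 30000), Seal::Regular); // checkpoint, even if empty
}
```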
	fn verify_local_seal(&self, _header: &Header) -> Result<(), Error> { Ok(()) }

	fn verify_block_basic(&self, header: &Header) -> Result<(), Error> {
		// Largely same as https://github.com/ethereum/go-ethereum/blob/master/consensus/clique/clique.go#L275

		// Ignore genesis block.
		if header.number() == 0 {
			return Ok(());
		}

		// Don't waste time checking blocks from the future
		{
			let limit = CheckedSystemTime::checked_add(SystemTime::now(), Duration::from_secs(self.period))
				.ok_or(BlockError::TimestampOverflow)?;

			// This should succeed under the constraint that the system clock works
			let limit_as_dur = limit.duration_since(UNIX_EPOCH).map_err(|e| {
				Box::new(format!("Converting SystemTime to Duration failed: {}", e))
			})?;

			let hdr = Duration::from_secs(header.timestamp());
			if hdr > limit_as_dur {
				let found = CheckedSystemTime::checked_add(UNIX_EPOCH, hdr).ok_or(BlockError::TimestampOverflow)?;

				Err(BlockError::TemporarilyInvalid(OutOfBounds {
					min: None,
					max: Some(limit),
					found,
				}))?
			}
		}

		let is_checkpoint = header.number() % self.epoch_length == 0;

		if is_checkpoint && *header.author() != NULL_AUTHOR {
			return Err(EngineError::CliqueWrongAuthorCheckpoint(Mismatch {
				expected: 0.into(),
				found: *header.author(),
			}))?;
		}

		let seal_fields = header.decode_seal::<Vec<_>>()?;
		if seal_fields.len() != 2 {
			Err(BlockError::InvalidSealArity(Mismatch {
				expected: 2,
				found: seal_fields.len(),
			}))?
		}

		let mixhash: H256 = seal_fields[0].into();
		let nonce: H64 = seal_fields[1].into();

		// Nonce must be 0x00..0 or 0xff..f
		if nonce != NONCE_DROP_VOTE && nonce != NONCE_AUTH_VOTE {
			Err(EngineError::CliqueInvalidNonce(nonce))?;
		}

		if is_checkpoint && nonce != NULL_NONCE {
			Err(EngineError::CliqueInvalidNonce(nonce))?;
		}

		// Ensure that the mix digest is zero as Clique doesn't have fork protection currently
		if mixhash != NULL_MIXHASH {
			Err(BlockError::MismatchedH256SealElement(Mismatch {
				expected: NULL_MIXHASH,
				found: mixhash,
			}))?
		}

		let extra_data_len = header.extra_data().len();

		if extra_data_len < VANITY_LENGTH {
			Err(EngineError::CliqueMissingVanity)?
		}

		if extra_data_len < VANITY_LENGTH + SIGNATURE_LENGTH {
			Err(EngineError::CliqueMissingSignature)?
		}

		let signers = extra_data_len - (VANITY_LENGTH + SIGNATURE_LENGTH);

		// Checkpoint blocks must contain at least one signer
		if is_checkpoint && signers == 0 {
			Err(EngineError::CliqueCheckpointNoSigner)?
		}

		// The signer section must be divisible by 20 (the address length)
		if is_checkpoint && signers % ADDRESS_LENGTH != 0 {
			Err(EngineError::CliqueCheckpointInvalidSigners(signers))?
		}

		// Ensure that the block doesn't contain any uncles, which are meaningless in PoA
		if *header.uncles_hash() != NULL_UNCLES_HASH {
			Err(BlockError::InvalidUnclesHash(Mismatch {
				expected: NULL_UNCLES_HASH,
				found: *header.uncles_hash(),
			}))?
		}

		// Ensure that the block's difficulty is meaningful (may not be correct at this point)
		if *header.difficulty() != DIFF_INTURN && *header.difficulty() != DIFF_NOTURN {
			Err(BlockError::DifficultyOutOfBounds(OutOfBounds {
				min: Some(DIFF_NOTURN),
				max: Some(DIFF_INTURN),
				found: *header.difficulty(),
			}))?
		}

		// All basic checks passed, continue to next phase
		Ok(())
	}

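The extra_data checks above reduce to arithmetic on the fixed layout: the signer section is whatever remains after the 32-byte vanity and 65-byte signature, it must be non-empty on checkpoints, and it must be address-aligned. A std-only sketch that folds those checks into one validator (the function name and string errors are illustrative, not the engine's error types):

```rust
pub const VANITY_LENGTH: usize = 32;
pub const SIGNATURE_LENGTH: usize = 65;
pub const ADDRESS_LENGTH: usize = 20;

/// Validate checkpoint extra_data length, returning the number of embedded signers.
pub fn checkpoint_signer_count(extra_data_len: usize) -> Result<usize, String> {
	if extra_data_len < VANITY_LENGTH {
		return Err("missing vanity".into());
	}
	if extra_data_len < VANITY_LENGTH + SIGNATURE_LENGTH {
		return Err("missing signature".into());
	}
	let signers = extra_data_len - (VANITY_LENGTH + SIGNATURE_LENGTH);
	if signers == 0 {
		return Err("checkpoint has no signers".into());
	}
	if signers % ADDRESS_LENGTH != 0 {
		return Err(format!("signer section not a multiple of {}", ADDRESS_LENGTH));
	}
	Ok(signers / ADDRESS_LENGTH)
}

fn main() {
	assert_eq!(checkpoint_signer_count(32 + 65 + 40), Ok(2));
	assert!(checkpoint_signer_count(32 + 65).is_err());      // no signers
	assert!(checkpoint_signer_count(32 + 65 + 19).is_err()); // not address-aligned
	assert!(checkpoint_signer_count(10).is_err());           // too short for vanity
}
```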
	fn verify_block_unordered(&self, _header: &Header) -> Result<(), Error> {
		// Nothing to check here.
		Ok(())
	}

	/// Verify block family by looking up parent state (backfill if needed), then try to apply current header.
	/// See https://github.com/ethereum/go-ethereum/blob/master/consensus/clique/clique.go#L338
	fn verify_block_family(&self, header: &Header, parent: &Header) -> Result<(), Error> {
		// Ignore genesis block.
		if header.number() == 0 {
			return Ok(());
		}

		// parent sanity check
		if parent.hash() != *header.parent_hash() || header.number() != parent.number() + 1 {
			Err(BlockError::UnknownParent(parent.hash()))?
		}

		// Ensure that the block's timestamp isn't too close to its parent
		let limit = parent.timestamp().saturating_add(self.period);
		if limit > header.timestamp() {
			let max = CheckedSystemTime::checked_add(UNIX_EPOCH, Duration::from_secs(header.timestamp()));
			let found = CheckedSystemTime::checked_add(UNIX_EPOCH, Duration::from_secs(limit))
				.ok_or(BlockError::TimestampOverflow)?;

			Err(BlockError::InvalidTimestamp(OutOfBounds {
				min: None,
				max,
				found,
			}))?
		}

		// Retrieve the parent state
		let parent_state = self.state(&parent)?;
		// Try to apply current state; apply() will further check signer and recent signer.
		let mut new_state = parent_state.clone();
		new_state.apply(header, header.number() % self.epoch_length == 0)?;
		new_state.calc_next_timestamp(header.timestamp(), self.period)?;
		self.block_state_by_hash.write().insert(header.hash(), new_state);

		Ok(())
	}

	fn genesis_epoch_data(&self, header: &Header, _call: &Call) -> Result<Vec<u8>, String> {
		let mut state = self.new_checkpoint_state(header).expect("Unable to parse genesis data.");
		state.calc_next_timestamp(header.timestamp(), self.period).map_err(|e| format!("{}", e))?;
		self.block_state_by_hash.write().insert(header.hash(), state);

		// no proof.
		Ok(Vec::new())
	}

	// Our task here is to set difficulty
	fn populate_from_parent(&self, header: &mut Header, parent: &Header) {
		// TODO(https://github.com/paritytech/parity-ethereum/issues/10410): this is a horrible hack,
		// it is due to the fact that enact and miner both use OpenBlock::new() which will both call
		// this function. more refactoring is definitely needed.
		if header.extra_data().len() < VANITY_LENGTH + SIGNATURE_LENGTH {
			trace!(target: "engine", "populate_from_parent in sealing");

			// It's unclear how to prevent creating new blocks unless we are authorized; the best way (and geth does this too)
			// is just to ignore setting a correct difficulty here, since we will check authorization in the next step in generate_seal anyway.
			if let Some(signer) = self.signer.read().as_ref() {
				let state = match self.state(&parent) {
					Err(e) => {
						trace!(target: "engine", "populate_from_parent: Unable to find parent state: {}, ignored.", e);
						return;
					}
					Ok(state) => state,
				};

				if state.is_authorized(&signer.address()) {
					if state.is_inturn(header.number(), &signer.address()) {
						header.set_difficulty(DIFF_INTURN);
					} else {
						header.set_difficulty(DIFF_NOTURN);
					}
				}

				let zero_padding_len = VANITY_LENGTH - header.extra_data().len();
				if zero_padding_len > 0 {
					let mut resized_extra_data = header.extra_data().clone();
					resized_extra_data.resize(VANITY_LENGTH, 0);
					header.set_extra_data(resized_extra_data);
				}
			} else {
				trace!(target: "engine", "populate_from_parent: no signer registered");
			}
		}
	}

	fn set_signer(&self, signer: Box<EngineSigner>) {
		trace!(target: "engine", "set_signer: {}", signer.address());
		*self.signer.write() = Some(signer);
	}

	fn register_client(&self, client: Weak<EngineClient>) {
		*self.client.write() = Some(client.clone());
	}

	fn step(&self) {
		if self.signer.read().is_some() {
			if let Some(ref weak) = *self.client.read() {
				if let Some(c) = weak.upgrade() {
					c.update_sealing();
				}
			}
		}
	}

	fn stop(&mut self) {
		if let Some(mut s) = self.step_service.as_mut() {
			Arc::get_mut(&mut s).map(|x| x.stop());
		} else {
			warn!(target: "engine", "Stopping `CliqueStepService` failed: it requires mutable access");
		}
	}

	/// Clique timestamp is set to parent + period, or the current time, whichever is higher.
	fn open_block_header_timestamp(&self, parent_timestamp: u64) -> u64 {
		let now = time::SystemTime::now().duration_since(time::UNIX_EPOCH).unwrap_or_default();
		cmp::max(now.as_secs() as u64, parent_timestamp.saturating_add(self.period))
	}

	fn is_timestamp_valid(&self, header_timestamp: u64, parent_timestamp: u64) -> bool {
		header_timestamp >= parent_timestamp.saturating_add(self.period)
	}

	fn fork_choice(&self, new: &ExtendedHeader, current: &ExtendedHeader) -> super::ForkChoice {
		super::total_difficulty_fork_choice(new, current)
	}

	// Clique uses the author field for voting; the real author is hidden in the `extra_data` field.
	// So when executing txs (like in `enact()`) we want to use the executive author.
	fn executive_author(&self, header: &Header) -> Result<Address, Error> {
		recover_creator(header)
	}
}

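The two timestamp rules above pair up: a new block's timestamp is `max(now, parent + period)`, and validity requires it to be at least `parent + period`. A std-only sketch with `now` passed explicitly so the rule is testable (the free-function signatures are illustrative, not the engine's trait methods):

```rust
/// Clique open-block timestamp: parent + period, or the current time, whichever is higher.
/// `now_secs` is passed in rather than read from the clock so the rule is easy to test.
pub fn open_block_timestamp(now_secs: u64, parent_timestamp: u64, period: u64) -> u64 {
	std::cmp::max(now_secs, parent_timestamp.saturating_add(period))
}

/// The matching validity rule: a child must be at least `period` after its parent.
pub fn is_timestamp_valid(header_timestamp: u64, parent_timestamp: u64, period: u64) -> bool {
	header_timestamp >= parent_timestamp.saturating_add(period)
}

fn main() {
	// Parent at t=100 with a 15s period: sealing at t=105 still yields 115.
	assert_eq!(open_block_timestamp(105, 100, 15), 115);
	// If the clock is already past parent + period, use the clock.
	assert_eq!(open_block_timestamp(200, 100, 15), 200);
	// The produced timestamp always satisfies the validity rule.
	assert!(is_timestamp_valid(open_block_timestamp(105, 100, 15), 100, 15));
	// Saturating add avoids overflow near u64::MAX.
	assert_eq!(open_block_timestamp(0, u64::MAX, 15), u64::MAX);
}
```

The `saturating_add` mirrors the engine's own code: near the top of the `u64` range it clamps rather than wrapping, so a hostile parent timestamp cannot overflow the comparison.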
ethcore/src/engines/clique/params.rs (new file, 41 lines)
@@ -0,0 +1,41 @@
// Copyright 2015-2019 Parity Technologies (UK) Ltd.
// This file is part of Parity Ethereum.

// Parity Ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Parity Ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

//! Clique specific parameters.

use ethjson;

/// `Clique` params.
pub struct CliqueParams {
	/// Period as defined in EIP
	pub period: u64,
	/// Epoch length as defined in EIP
	pub epoch: u64,
}

impl From<ethjson::spec::CliqueParams> for CliqueParams {
	fn from(p: ethjson::spec::CliqueParams) -> Self {
		let period = p.period.map_or_else(|| 30000 as u64, Into::into);
		let epoch = p.epoch.map_or_else(|| 15 as u64, Into::into);

		assert!(epoch > 0);

		CliqueParams {
			period,
			epoch,
		}
	}
}
ethcore/src/engines/clique/step_service.rs (new file, 77 lines)
@@ -0,0 +1,77 @@
// Copyright 2015-2019 Parity Technologies (UK) Ltd.
// This file is part of Parity Ethereum.

// Parity Ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Parity Ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.


use std::sync::Weak;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;
use std::thread;
use std::sync::Arc;

use engines::Engine;
use machine::Machine;

/// Service that is managing the engine
pub struct StepService {
	shutdown: Arc<AtomicBool>,
	thread: Option<thread::JoinHandle<()>>,
}

impl StepService {
	/// Start the `StepService`
	pub fn start<M: Machine + 'static>(engine: Weak<Engine<M>>) -> Arc<Self> {
		let shutdown = Arc::new(AtomicBool::new(false));
		let s = shutdown.clone();

		let thread = thread::Builder::new()
			.name("CliqueStepService".into())
			.spawn(move || {
				// startup delay.
				thread::sleep(Duration::from_secs(5));

				loop {
					// see if we are in shutdown.
					if shutdown.load(Ordering::Acquire) {
						trace!(target: "miner", "CliqueStepService: received shutdown signal!");
						break;
					}

					trace!(target: "miner", "CliqueStepService: triggering sealing");

					// Try sealing
					engine.upgrade().map(|x| x.step());

					// Yield
					thread::sleep(Duration::from_millis(2000));
				}
				trace!(target: "miner", "CliqueStepService: shutdown.");
			}).expect("CliqueStepService thread failed");

		Arc::new(StepService {
			shutdown: s,
			thread: Some(thread),
		})
	}
|
||||
|
||||
/// Stop the `StepService`
|
||||
pub fn stop(&mut self) {
|
||||
trace!(target: "miner", "CliqueStepService: shutting down.");
|
||||
self.shutdown.store(true, Ordering::Release);
|
||||
if let Some(t) = self.thread.take() {
|
||||
t.join().expect("CliqueStepService thread panicked!");
|
||||
}
|
||||
}
|
||||
}
|
||||
804
ethcore/src/engines/clique/tests.rs
Normal file
@@ -0,0 +1,804 @@
// Copyright 2015-2019 Parity Technologies (UK) Ltd.
// This file is part of Parity Ethereum.

// Parity Ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Parity Ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

//! Consensus tests for the `PoA Clique Engine`; see http://eips.ethereum.org/EIPS/eip-225 for more information.

use block::*;
use engines::Engine;
use error::{Error, ErrorKind};
use ethereum_types::{Address, H256};
use ethkey::{Secret, KeyPair};
use state_db::StateDB;
use super::*;
use test_helpers::get_temp_state_db;

use std::sync::Arc;
use std::collections::HashMap;

/// Possible signers
pub const SIGNER_TAGS: [char; 6] = ['A', 'B', 'C', 'D', 'E', 'F'];

/// Clique block types
pub enum CliqueBlockType {
	/// Epoch transition block; must contain the list of signers
	Checkpoint,
	/// Block with no votes
	Empty,
	/// Vote
	Vote(VoteType),
}

/// Clique tester
pub struct CliqueTester {
	/// Mocked Clique
	pub clique: Clique,
	/// Mocked genesis state
	pub genesis: Header,
	/// StateDB
	pub db: StateDB,
	/// List of signers
	pub signers: HashMap<char, KeyPair>,
}
impl CliqueTester {
	/// Create a `Clique` tester with settings
	pub fn with(epoch: u64, period: u64, initial_signers: Vec<char>) -> Self {
		assert!(initial_signers.iter().all(|s| SIGNER_TAGS.contains(s)),
			"Not all the initial signers are in SIGNER_TAGS, possible keys are 'A' ..= 'F'");

		let clique = Clique::with_test(epoch, period);
		let mut genesis = Header::default();
		let mut signers = HashMap::new();

		let call = |_a, _b| {
			unimplemented!("Clique doesn't use Engine::Call");
		};

		let mut extra_data = vec![0; VANITY_LENGTH];

		for &signer in SIGNER_TAGS.iter() {
			let secret = Secret::from(H256::from(signer as u64));
			let keypair = KeyPair::from_secret(secret).unwrap();
			if initial_signers.contains(&signer) {
				extra_data.extend(&*keypair.address());
			}
			signers.insert(signer, keypair);
		}

		// append dummy signature
		extra_data.extend(std::iter::repeat(0).take(SIGNATURE_LENGTH));

		genesis.set_extra_data(extra_data);
		genesis.set_gas_limit(U256::from(0xa00000));
		genesis.set_difficulty(U256::from(1));
		genesis.set_seal(util::null_seal());

		clique.genesis_epoch_data(&genesis, &call).expect("Create genesis failed");
		Self { clique, genesis, db: get_temp_state_db(), signers }
	}

	/// Get the difficulty for a given block
	pub fn get_difficulty(&self, block_num: BlockNumber, header: &Header, signer: &Address) -> U256 {
		let state = self.clique.state(header).unwrap();
		if state.is_inturn(block_num, signer) {
			DIFF_INTURN
		} else {
			DIFF_NOTURN
		}
	}

	/// Get the state of a given block
	// Note: this reads the cache and will not work with more than 128 blocks
	pub fn get_state_at_block(&self, hash: &H256) -> CliqueBlockState {
		self.clique.block_state_by_hash.write()
			.get_mut(hash)
			.expect("CliqueBlockState not found, test failed")
			.clone()
	}

	/// Get signers after a certain state
	// This is generally used to fetch the state after a test has been executed and check it against
	// the initial list of signers provided in the test
	pub fn clique_signers(&self, hash: &H256) -> impl Iterator<Item = Address> {
		self.get_state_at_block(hash).signers().clone().into_iter()
	}

	/// Fetches all addresses at the current `block`, converts them back to `tags (char)` and sorts them.
	/// Addresses are supposed to be sorted by address, but these tests use `tags` just for simplicity
	/// and the order is not important!
	pub fn into_tags<T: Iterator<Item = Address>>(&self, addr: T) -> Vec<char> {
		let mut tags: Vec<char> = addr.filter_map(|addr| {
			for (t, kp) in self.signers.iter() {
				if addr == kp.address() {
					return Some(*t)
				}
			}
			None
		})
		.collect();

		tags.sort();
		tags
	}

	/// Create a new `Clique` block and import it
	pub fn new_block_and_import(
		&self,
		block_type: CliqueBlockType,
		last_header: &Header,
		beneficiary: Option<Address>,
		signer: char,
	) -> Result<Header, Error> {

		let mut extra_data = vec![0; VANITY_LENGTH];
		let mut seal = util::null_seal();
		let last_hash = last_header.hash();

		match block_type {
			CliqueBlockType::Checkpoint => {
				let signers = self.clique.state(&last_header).unwrap().signers().clone();
				for signer in signers {
					extra_data.extend(&*signer);
				}
			}
			CliqueBlockType::Vote(v) => seal = v.as_rlp(),
			CliqueBlockType::Empty => (),
		};

		let db = self.db.boxed_clone();

		let mut block = OpenBlock::new(
			&self.clique,
			Default::default(),
			false,
			db,
			&last_header.clone(),
			Arc::new(vec![last_hash]),
			beneficiary.unwrap_or_default(),
			(3141562.into(), 31415620.into()),
			extra_data,
			false,
			None,
		).unwrap();

		{
			let difficulty = self.get_difficulty(block.header.number(), last_header, &self.signers[&signer].address());
			let b = block.block_mut();
			b.header.set_timestamp(last_header.timestamp() + self.clique.period);
			b.header.set_difficulty(difficulty);
			b.header.set_seal(seal);

			let sign = ethkey::sign(self.signers[&signer].secret(), &b.header.hash()).unwrap();
			let mut extra_data = b.header.extra_data().clone();
			extra_data.extend_from_slice(&*sign);
			b.header.set_extra_data(extra_data);
		}

		let current_header = &block.header;
		self.clique.verify_block_basic(current_header)?;
		self.clique.verify_block_family(current_header, &last_header)?;

		Ok(current_header.clone())
	}
}
#[test]
fn one_signer_with_no_votes() {
	let tester = CliqueTester::with(10, 1, vec!['A']);

	let empty_block = tester.new_block_and_import(CliqueBlockType::Empty, &tester.genesis, None, 'A').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&empty_block.hash()));
	assert_eq!(&tags, &['A']);
}

#[test]
fn one_signer_two_votes() {
	let tester = CliqueTester::with(10, 1, vec!['A']);

	// Add a vote for `B` signed by `A`
	let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &tester.genesis,
		Some(tester.signers[&'B'].address()), 'A').unwrap();
	let tags = tester.into_tags(tester.clique_signers(&vote.hash()));
	assert_eq!(&tags, &['A', 'B']);

	// Add an empty block signed by `B`
	let empty = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'B').unwrap();

	// Add a vote for `C` signed by `A`; it should not be accepted (no majority)
	let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &empty,
		Some(tester.signers[&'C'].address()), 'A').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&vote.hash()));
	assert_eq!(&tags, &['A', 'B']);
}

#[test]
fn two_signers_six_votes_deny_last() {
	let tester = CliqueTester::with(10, 1, vec!['A', 'B']);

	let mut prev_header = tester.genesis.clone();

	// Add two votes for `C` signed by `A` and `B`
	for &signer in SIGNER_TAGS.iter().take(2) {
		let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &prev_header,
			Some(tester.signers[&'C'].address()), signer).unwrap();
		prev_header = vote.clone();
	}

	// Add two votes for `D` signed by `A` and `B`
	for &signer in SIGNER_TAGS.iter().take(2) {
		let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &prev_header,
			Some(tester.signers[&'D'].address()), signer).unwrap();
		prev_header = vote.clone();
	}

	// Add an empty block signed by `C`
	let empty = tester.new_block_and_import(CliqueBlockType::Empty, &prev_header, None, 'C').unwrap();
	prev_header = empty.clone();

	// Add two votes for `E` signed by `A` and `B`
	for &signer in SIGNER_TAGS.iter().take(2) {
		let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &prev_header,
			Some(tester.signers[&'E'].address()), signer).unwrap();
		prev_header = vote.clone();
	}

	let tags = tester.into_tags(tester.clique_signers(&prev_header.hash()));
	assert_eq!(&tags, &['A', 'B', 'C', 'D']);
}

#[test]
fn one_signer_dropping_itself() {
	let tester = CliqueTester::with(10, 1, vec!['A']);
	let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &tester.genesis,
		Some(tester.signers[&'A'].address()), 'A').unwrap();
	let signers = tester.clique_signers(&vote.hash());
	assert_eq!(signers.count(), 0);
}

#[test]
fn two_signers_one_remove_vote_no_consensus() {
	let tester = CliqueTester::with(10, 1, vec!['A', 'B']);
	let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &tester.genesis,
		Some(tester.signers[&'B'].address()), 'A').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&vote.hash()));
	assert_eq!(&tags, &['A', 'B']);
}

#[test]
fn two_signers_consensus_remove_b() {
	let tester = CliqueTester::with(10, 1, vec!['A', 'B']);
	let first_vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &tester.genesis,
		Some(tester.signers[&'B'].address()), 'A').unwrap();
	let second_vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &first_vote,
		Some(tester.signers[&'B'].address()), 'B').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&second_vote.hash()));
	assert_eq!(&tags, &['A']);
}

#[test]
fn three_signers_consensus_remove_c() {
	let tester = CliqueTester::with(10, 1, vec!['A', 'B', 'C']);
	let first_vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &tester.genesis,
		Some(tester.signers[&'C'].address()), 'A').unwrap();
	let second_vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &first_vote,
		Some(tester.signers[&'C'].address()), 'B').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&second_vote.hash()));
	assert_eq!(&tags, &['A', 'B']);
}

#[test]
fn four_signers_half_no_consensus() {
	let tester = CliqueTester::with(10, 1, vec!['A', 'B', 'C', 'D']);
	let first_vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &tester.genesis,
		Some(tester.signers[&'C'].address()), 'A').unwrap();

	let second_vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &first_vote,
		Some(tester.signers[&'C'].address()), 'B').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&second_vote.hash()));
	assert_eq!(&tags, &['A', 'B', 'C', 'D']);
}

#[test]
fn four_signers_three_consensus_rm() {
	let tester = CliqueTester::with(10, 1, vec!['A', 'B', 'C', 'D']);

	let mut prev_header = tester.genesis.clone();

	// Three votes to remove `D` signed by ['A', 'B', 'C']
	for signer in SIGNER_TAGS.iter().take(3) {
		let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &prev_header,
			Some(tester.signers[&'D'].address()), *signer).unwrap();
		prev_header = vote.clone();
	}

	let tags = tester.into_tags(tester.clique_signers(&prev_header.hash()));
	assert_eq!(&tags, &['A', 'B', 'C']);
}

#[test]
fn vote_add_only_counted_once_per_signer() {
	let tester = CliqueTester::with(10, 1, vec!['A', 'B']);

	// Add a vote for `C` signed by `A`
	let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &tester.genesis,
		Some(tester.signers[&'C'].address()), 'A').unwrap();
	// Empty block signed by `B`
	let empty = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'B').unwrap();

	// Add a vote for `C` signed by `A`
	let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &empty,
		Some(tester.signers[&'C'].address()), 'A').unwrap();
	// Empty block signed by `B`
	let empty = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'B').unwrap();

	// Add a vote for `C` signed by `A`
	let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &empty,
		Some(tester.signers[&'C'].address()), 'A').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&vote.hash()));
	assert_eq!(&tags, &['A', 'B']);
}
#[test]
fn vote_add_concurrently_is_permitted() {
	let tester = CliqueTester::with(10, 1, vec!['A', 'B']);

	// Add a vote for `C` signed by `A`
	let b = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &tester.genesis,
		Some(tester.signers[&'C'].address()), 'A').unwrap();

	// Empty block signed by `B`
	let b = tester.new_block_and_import(CliqueBlockType::Empty, &b, None, 'B').unwrap();

	// Add a vote for `D` signed by `A`
	let b = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &b,
		Some(tester.signers[&'D'].address()), 'A').unwrap();

	// Empty block signed by `B`
	let b = tester.new_block_and_import(CliqueBlockType::Empty, &b, None, 'B').unwrap();

	// Empty block signed by `A`
	let b = tester.new_block_and_import(CliqueBlockType::Empty, &b, None, 'A').unwrap();

	// Add a vote for `D` signed by `B`
	let b = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &b,
		Some(tester.signers[&'D'].address()), 'B').unwrap();

	// Empty block signed by `A`
	let b = tester.new_block_and_import(CliqueBlockType::Empty, &b, None, 'A').unwrap();

	// Add a vote for `C` signed by `B`
	let b = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &b,
		Some(tester.signers[&'C'].address()), 'B').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&b.hash()));
	assert_eq!(&tags, &['A', 'B', 'C', 'D']);
}

#[test]
fn vote_rm_only_counted_once_per_signer() {
	let tester = CliqueTester::with(10, 1, vec!['A', 'B']);

	let mut prev_header = tester.genesis.clone();

	for _ in 0..2 {
		// Vote to remove `B` signed by `A`
		let b = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &prev_header,
			Some(tester.signers[&'B'].address()), 'A').unwrap();
		// Empty block signed by `B`
		let b = tester.new_block_and_import(CliqueBlockType::Empty, &b, None, 'B').unwrap();

		prev_header = b.clone();
	}

	// Vote to remove `B` signed by `A`
	let b = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &prev_header,
		Some(tester.signers[&'B'].address()), 'A').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&b.hash()));
	assert_eq!(&tags, &['A', 'B']);
}

#[test]
fn vote_rm_concurrently_is_permitted() {
	let tester = CliqueTester::with(100, 1, vec!['A', 'B', 'C', 'D']);

	// Vote to remove `C` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &tester.genesis,
		Some(tester.signers[&'C'].address()), 'A').unwrap();

	// Empty block signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'B').unwrap();
	// Empty block signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'C').unwrap();

	// Vote to remove `D` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'A').unwrap();

	// Empty block signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'B').unwrap();
	// Empty block signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'C').unwrap();
	// Empty block signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'A').unwrap();

	// Vote to remove `D` signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'B').unwrap();
	// Vote to remove `D` signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'C').unwrap();

	// Empty block signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'A').unwrap();
	// Vote to remove `C` signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'C'].address()), 'B').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&block.hash()));
	assert_eq!(&tags, &['A', 'B']);
}

#[test]
fn vote_to_rm_are_immediate_and_ensure_votes_are_rm() {
	let tester = CliqueTester::with(100, 1, vec!['A', 'B', 'C']);

	// Vote to remove `B` signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &tester.genesis,
		Some(tester.signers[&'B'].address()), 'C').unwrap();
	// Vote to remove `C` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'C'].address()), 'A').unwrap();
	// Vote to remove `C` signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'C'].address()), 'B').unwrap();
	// Vote to remove `B` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'B'].address()), 'A').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&block.hash()));
	assert_eq!(&tags, &['A', 'B']);
}

#[test]
fn vote_to_rm_are_immediate_and_votes_should_be_dropped_from_kicked_signer() {
	let tester = CliqueTester::with(100, 1, vec!['A', 'B', 'C']);

	// Vote to add `D` signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &tester.genesis,
		Some(tester.signers[&'D'].address()), 'C').unwrap();
	// Vote to remove `C` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'C'].address()), 'A').unwrap();

	// Vote to remove `C` signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'C'].address()), 'B').unwrap();

	// Vote to add `D` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &block,
		Some(tester.signers[&'D'].address()), 'A').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&block.hash()));
	assert_eq!(&tags, &['A', 'B']);
}
#[test]
fn cascading_not_allowed() {
	let tester = CliqueTester::with(100, 1, vec!['A', 'B', 'C', 'D']);

	// Vote against `C` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &tester.genesis,
		Some(tester.signers[&'C'].address()), 'A').unwrap();

	// Empty block signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'B').unwrap();

	// Empty block signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'C').unwrap();

	// Vote against `D` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'A').unwrap();

	// Vote against `C` signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'C'].address()), 'B').unwrap();

	// Empty block signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'C').unwrap();

	// Empty block signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'A').unwrap();

	// Vote against `D` signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'B').unwrap();

	// Vote against `D` signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'C').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&block.hash()));
	assert_eq!(&tags, &['A', 'B', 'C']);
}

#[test]
fn consensus_out_of_bounds_consensus_execute_on_touch() {
	let tester = CliqueTester::with(100, 1, vec!['A', 'B', 'C', 'D']);

	// Vote against `C` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &tester.genesis,
		Some(tester.signers[&'C'].address()), 'A').unwrap();

	// Empty block signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'B').unwrap();

	// Empty block signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'C').unwrap();

	// Vote against `D` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'A').unwrap();

	// Vote against `C` signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'C'].address()), 'B').unwrap();

	// Empty block signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'C').unwrap();

	// Empty block signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'A').unwrap();

	// Vote against `D` signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'B').unwrap();

	// Vote against `D` signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'C').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&block.hash()));
	assert_eq!(&tags, &['A', 'B', 'C'], "D should have been removed after 3/4 remove votes");

	// Empty block signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'A').unwrap();

	// Vote for `C` signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &block,
		Some(tester.signers[&'C'].address()), 'C').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&block.hash()));
	assert_eq!(&tags, &['A', 'B']);
}

#[test]
fn consensus_out_of_bounds_first_touch() {
	let tester = CliqueTester::with(100, 1, vec!['A', 'B', 'C', 'D']);

	// Vote against `C` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &tester.genesis,
		Some(tester.signers[&'C'].address()), 'A').unwrap();

	// Empty block signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'B').unwrap();

	// Empty block signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'C').unwrap();

	// Vote against `D` signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'A').unwrap();

	// Vote against `C` signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'C'].address()), 'B').unwrap();

	// Empty block signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'C').unwrap();

	// Empty block signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'A').unwrap();

	// Vote against `D` signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'B').unwrap();

	// Vote against `D` signed by `C`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &block,
		Some(tester.signers[&'D'].address()), 'C').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&block.hash()));
	assert_eq!(&tags, &['A', 'B', 'C']);

	// Empty block signed by `A`
	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'A').unwrap();

	// Vote for `C` signed by `B`
	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &block,
		Some(tester.signers[&'C'].address()), 'B').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&block.hash()));
	assert_eq!(&tags, &['A', 'B', 'C']);
}
#[test]
fn pending_votes_doesnt_survive_authorization_changes() {
	let tester = CliqueTester::with(100, 1, vec!['A', 'B', 'C', 'D', 'E']);

	let mut prev_header = tester.genesis.clone();

	// Vote for `F` from [`A`, `B`, `C`]
	for sign in SIGNER_TAGS.iter().take(3) {
		let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &prev_header,
			Some(tester.signers[&'F'].address()), *sign).unwrap();
		prev_header = block.clone();
	}

	let tags = tester.into_tags(tester.clique_signers(&prev_header.hash()));
	assert_eq!(&tags, &['A', 'B', 'C', 'D', 'E', 'F'], "F should have been added");

	// Vote against `F` from [`D`, `E`, `B`, `C`]
	for sign in SIGNER_TAGS.iter().skip(3).chain(SIGNER_TAGS.iter().skip(1).take(2)) {
		let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &prev_header,
			Some(tester.signers[&'F'].address()), *sign).unwrap();
		prev_header = block.clone();
	}

	let tags = tester.into_tags(tester.clique_signers(&prev_header.hash()));
	assert_eq!(&tags, &['A', 'B', 'C', 'D', 'E'], "F should have been removed");

	// Vote for `F` from [`D`, `E`]
	for sign in SIGNER_TAGS.iter().skip(3).take(2) {
		let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &prev_header,
			Some(tester.signers[&'F'].address()), *sign).unwrap();
		prev_header = block.clone();
	}

	// Vote against `A` from [`B`, `C`, `D`]
	for sign in SIGNER_TAGS.iter().skip(1).take(3) {
		let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Remove), &prev_header,
			Some(tester.signers[&'A'].address()), *sign).unwrap();
		prev_header = block.clone();
	}

	let tags = tester.into_tags(tester.clique_signers(&prev_header.hash()));
	assert_eq!(&tags, &['B', 'C', 'D', 'E'], "A should have been removed");

	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &prev_header,
		Some(tester.signers[&'F'].address()), 'B').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&block.hash()));
	assert_eq!(&tags, &['B', 'C', 'D', 'E', 'F'], "F should have been added again");
}

#[test]
fn epoch_transition_reset_all_votes() {
	let tester = CliqueTester::with(3, 1, vec!['A', 'B']);

	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &tester.genesis,
		Some(tester.signers[&'C'].address()), 'A').unwrap();

	let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'B').unwrap();
	let block = tester.new_block_and_import(CliqueBlockType::Checkpoint, &block, None, 'A').unwrap();

	let block = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &block,
		Some(tester.signers[&'C'].address()), 'B').unwrap();

	let tags = tester.into_tags(tester.clique_signers(&block.hash()));
	assert_eq!(&tags, &['A', 'B'], "Votes should have been reset after checkpoint");
}

#[test]
fn unauthorized_signer_should_not_be_able_to_sign_block() {
	let tester = CliqueTester::with(3, 1, vec!['A']);
	let err = tester.new_block_and_import(CliqueBlockType::Empty, &tester.genesis, None, 'B').unwrap_err();

	match err.kind() {
		ErrorKind::Engine(EngineError::NotAuthorized(_)) => (),
		_ => panic!("Wrong error kind"),
	}
}

#[test]
fn signer_should_not_be_able_to_sign_two_consecutive_blocks() {
	let tester = CliqueTester::with(3, 1, vec!['A', 'B']);
|
||||
let b = tester.new_block_and_import(CliqueBlockType::Empty, &tester.genesis, None, 'A').unwrap();
|
||||
let err = tester.new_block_and_import(CliqueBlockType::Empty, &b, None, 'A').unwrap_err();
|
||||
|
||||
match err.kind() {
|
||||
ErrorKind::Engine(EngineError::CliqueTooRecentlySigned(_)) => (),
|
||||
_ => assert!(true == false, "Wrong error kind"),
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
#[test]
|
||||
fn recent_signers_should_not_reset_on_checkpoint() {
|
||||
let tester = CliqueTester::with(3, 1, vec!['A', 'B', 'C']);
|
||||
|
||||
let block = tester.new_block_and_import(CliqueBlockType::Empty, &tester.genesis, None, 'A').unwrap();
|
||||
let block = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'B').unwrap();
|
||||
let block = tester.new_block_and_import(CliqueBlockType::Checkpoint, &block, None, 'A').unwrap();
|
||||
|
||||
let err = tester.new_block_and_import(CliqueBlockType::Empty, &block, None, 'A').unwrap_err();
|
||||
|
||||
match err.kind() {
|
||||
ErrorKind::Engine(EngineError::CliqueTooRecentlySigned(_)) => (),
|
||||
_ => assert!(true == false, "Wrong error kind"),
|
||||
}
|
||||
}
|
||||
|
||||
// Not part of http://eips.ethereum.org/EIPS/eip-225
|
||||
#[test]
|
||||
fn bonus_consensus_should_keep_track_of_votes_before_latest_per_signer() {
|
||||
let tester = CliqueTester::with(100, 1, vec!['A', 'B', 'C', 'D']);
|
||||
|
||||
// Add a vote for `E` signed by `A`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &tester.genesis,
|
||||
Some(tester.signers[&'E'].address()), 'A').unwrap();
|
||||
// Empty block signed by `B`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'B').unwrap();
|
||||
|
||||
// Empty block signed by `C`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'C').unwrap();
|
||||
|
||||
// Empty block signed by `D`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'D').unwrap();
|
||||
|
||||
// Add a vote for `F` signed by `A`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &vote,
|
||||
Some(tester.signers[&'F'].address()), 'A').unwrap();
|
||||
// Empty block signed by `C`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'C').unwrap();
|
||||
|
||||
// Empty block signed by `D`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'D').unwrap();
|
||||
|
||||
// Add a vote for `E` signed by `B`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &vote,
|
||||
Some(tester.signers[&'E'].address()), 'B').unwrap();
|
||||
// Empty block signed by `A`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'A').unwrap();
|
||||
|
||||
// Empty block signed by `C`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'C').unwrap();
|
||||
|
||||
// Empty block signed by `D`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'D').unwrap();
|
||||
|
||||
// Add a vote for `F` signed by `B`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &vote,
|
||||
Some(tester.signers[&'F'].address()), 'B').unwrap();
|
||||
|
||||
// Empty block signed by A`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Empty, &vote, None, 'A').unwrap();
|
||||
|
||||
// Add a vote for `E` signed by `C`
|
||||
let vote = tester.new_block_and_import(CliqueBlockType::Vote(VoteType::Add), &vote,
|
||||
Some(tester.signers[&'E'].address()), 'C').unwrap();
|
||||
|
||||
let tags = tester.into_tags(tester.clique_signers(&vote.hash()));
|
||||
assert_eq!(&tags, &['A', 'B', 'C', 'D', 'E']);
|
||||
}
|
||||
ethcore/src/engines/clique/util.rs (new file, 115 lines)
@@ -0,0 +1,115 @@
// Copyright 2015-2019 Parity Technologies (UK) Ltd.
// This file is part of Parity Ethereum.

// Parity Ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Parity Ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

use std::collections::BTreeSet;

use engines::EngineError;
use engines::clique::{ADDRESS_LENGTH, SIGNATURE_LENGTH, VANITY_LENGTH, NULL_NONCE, NULL_MIXHASH};
use error::Error;
use ethereum_types::{Address, H256};
use ethkey::{public_to_address, recover as ec_recover, Signature};
use lru_cache::LruCache;
use parking_lot::RwLock;
use rlp::encode;
use types::header::Header;

/// How many recovered signatures to cache in memory.
pub const CREATOR_CACHE_NUM: usize = 4096;
lazy_static! {
	/// key: header hash
	/// value: creator address
	static ref CREATOR_BY_HASH: RwLock<LruCache<H256, Address>> = RwLock::new(LruCache::new(CREATOR_CACHE_NUM));
}

/// Recover the block creator from the header's seal signature.
pub fn recover_creator(header: &Header) -> Result<Address, Error> {
	// Initialization
	let mut cache = CREATOR_BY_HASH.write();

	if let Some(creator) = cache.get_mut(&header.hash()) {
		return Ok(*creator);
	}

	let data = header.extra_data();
	if data.len() < VANITY_LENGTH {
		Err(EngineError::CliqueMissingVanity)?
	}

	if data.len() < VANITY_LENGTH + SIGNATURE_LENGTH {
		Err(EngineError::CliqueMissingSignature)?
	}

	// Split the signed extra data and the signature
	let (signed_data_slice, signature_slice) = data.split_at(data.len() - SIGNATURE_LENGTH);

	// Convert `&[u8]` to `[u8; 65]`
	let signature = {
		let mut s = [0; SIGNATURE_LENGTH];
		s.copy_from_slice(signature_slice);
		s
	};

	// Strip the signature from a copy of the header and hash the result
	let unsigned_header = &mut header.clone();
	unsigned_header.set_extra_data(signed_data_slice.to_vec());
	let msg = unsigned_header.hash();

	let pubkey = ec_recover(&Signature::from(signature), &msg)?;
	let creator = public_to_address(&pubkey);

	cache.insert(header.hash(), creator.clone());
	Ok(creator)
}

/// Extract the signer list from a checkpoint header's extra_data.
///
/// Layout of extra_data:
/// ----
/// VANITY: 32 bytes
/// Signers: N * 20 bytes, one address each
/// Signature: 65 bytes
/// ----
pub fn extract_signers(header: &Header) -> Result<BTreeSet<Address>, Error> {
	let data = header.extra_data();

	if data.len() <= VANITY_LENGTH + SIGNATURE_LENGTH {
		Err(EngineError::CliqueCheckpointNoSigner)?
	}

	// Extract only the portion of extra_data which includes the signer list
	let signers_raw = &data[VANITY_LENGTH..data.len() - SIGNATURE_LENGTH];

	if signers_raw.len() % ADDRESS_LENGTH != 0 {
		Err(EngineError::CliqueCheckpointInvalidSigners(signers_raw.len()))?
	}

	let num_signers = signers_raw.len() / ADDRESS_LENGTH;

	let signers: BTreeSet<Address> = (0..num_signers)
		.map(|i| {
			let start = i * ADDRESS_LENGTH;
			let end = start + ADDRESS_LENGTH;
			signers_raw[start..end].into()
		})
		.collect();

	Ok(signers)
}

/// The seal used for non-checkpoint blocks: a null mixhash and a null nonce.
pub fn null_seal() -> Vec<Vec<u8>> {
	vec![encode(&NULL_MIXHASH.to_vec()), encode(&NULL_NONCE.to_vec())]
}
@@ -15,7 +15,9 @@
 // along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

 use engines::{Engine, Seal};
-use parity_machine::{Machine, Transactions, TotalScoredHeader};
+use machine::Machine;
+use types::header::{Header, ExtendedHeader};
+use block::ExecutedBlock;

 /// `InstantSeal` params.
 #[derive(Default, Debug, PartialEq)]
@@ -48,11 +50,7 @@ impl<M> InstantSeal<M> {
 	}
 }

-impl<M: Machine> Engine<M> for InstantSeal<M>
-	where M::LiveBlock: Transactions,
-	M::ExtendedHeader: TotalScoredHeader,
-	<M::ExtendedHeader as TotalScoredHeader>::Value: Ord
-{
+impl<M: Machine> Engine<M> for InstantSeal<M> {
 	fn name(&self) -> &str {
 		"InstantSeal"
 	}
@@ -61,11 +59,15 @@ impl<M: Machine> Engine<M> for InstantSeal<M>

 	fn seals_internally(&self) -> Option<bool> { Some(true) }

-	fn generate_seal(&self, block: &M::LiveBlock, _parent: &M::Header) -> Seal {
-		if block.transactions().is_empty() { Seal::None } else { Seal::Regular(Vec::new()) }
+	fn generate_seal(&self, block: &ExecutedBlock, _parent: &Header) -> Seal {
+		if block.transactions.is_empty() {
+			Seal::None
+		} else {
+			Seal::Regular(Vec::new())
+		}
 	}

-	fn verify_local_seal(&self, _header: &M::Header) -> Result<(), M::Error> {
+	fn verify_local_seal(&self, _header: &Header) -> Result<(), M::Error> {
 		Ok(())
 	}

@@ -84,7 +86,7 @@ impl<M: Machine> Engine<M> for InstantSeal<M>
 		header_timestamp >= parent_timestamp
 	}

-	fn fork_choice(&self, new: &M::ExtendedHeader, current: &M::ExtendedHeader) -> super::ForkChoice {
+	fn fork_choice(&self, new: &ExtendedHeader, current: &ExtendedHeader) -> super::ForkChoice {
 		super::total_difficulty_fork_choice(new, current)
 	}
 }
@@ -106,9 +108,9 @@ mod tests {
 		let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
 		let genesis_header = spec.genesis_header();
 		let last_hashes = Arc::new(vec![genesis_header.hash()]);
-		let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::default(), (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
+		let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::default(), (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
 		let b = b.close_and_lock().unwrap();
-		if let Seal::Regular(seal) = engine.generate_seal(b.block(), &genesis_header) {
+		if let Seal::Regular(seal) = engine.generate_seal(&b, &genesis_header) {
 			assert!(b.try_seal(engine, seal).is_ok());
 		}
 	}
@@ -18,6 +18,7 @@

 mod authority_round;
 mod basic_authority;
+mod clique;
 mod instant_seal;
 mod null_engine;
 mod validator_set;
@@ -27,14 +28,14 @@ pub mod signer;

 pub use self::authority_round::AuthorityRound;
 pub use self::basic_authority::BasicAuthority;
-pub use self::epoch::{EpochVerifier, Transition as EpochTransition};
 pub use self::instant_seal::{InstantSeal, InstantSealParams};
 pub use self::null_engine::NullEngine;
 pub use self::signer::EngineSigner;
+pub use self::clique::Clique;

 // TODO [ToDr] Remove re-export (#10130)
 pub use types::engines::ForkChoice;
-pub use types::engines::epoch;
+pub use types::engines::epoch::{self, Transition as EpochTransition};

 use std::sync::{Weak, Arc};
 use std::collections::{BTreeMap, HashMap};
@@ -44,21 +45,24 @@ use builtin::Builtin;
 use vm::{EnvInfo, Schedule, CreateContractAddress, CallType, ActionValue};
 use error::Error;
 use types::BlockNumber;
-use types::header::Header;
+use types::header::{Header, ExtendedHeader};
 use snapshot::SnapshotComponents;
 use spec::CommonParams;
 use types::transaction::{self, UnverifiedTransaction, SignedTransaction};

 use ethkey::{Signature};
-use parity_machine::{Machine, LocalizedMachine as Localized, TotalScoredHeader};
-use ethereum_types::{H256, U256, Address};
+use machine::{self, Machine, AuxiliaryRequest, AuxiliaryData};
+use ethereum_types::{H64, H256, U256, Address};
 use unexpected::{Mismatch, OutOfBounds};
 use bytes::Bytes;
 use types::ancestry_action::AncestryAction;
+use block::ExecutedBlock;

 /// Default EIP-210 contract code.
 /// As defined in https://github.com/ethereum/EIPs/pull/210
 pub const DEFAULT_BLOCKHASH_CONTRACT: &'static str = "73fffffffffffffffffffffffffffffffffffffffe33141561006a5760014303600035610100820755610100810715156100455760003561010061010083050761010001555b6201000081071515610064576000356101006201000083050761020001555b5061013e565b4360003512151561008457600060405260206040f361013d565b61010060003543031315156100a857610100600035075460605260206060f361013c565b6101006000350715156100c55762010000600035430313156100c8565b60005b156100ea576101006101006000350507610100015460805260206080f361013b565b620100006000350715156101095763010000006000354303131561010c565b60005b1561012f57610100620100006000350507610200015460a052602060a0f361013a565b600060c052602060c0f35b5b5b5b5b";
+/// The number of generations back that uncles can be.
+pub const MAX_UNCLE_AGE: usize = 6;

 /// Voting errors.
 #[derive(Debug)]
@@ -83,12 +87,45 @@ pub enum EngineError {
 	RequiresClient,
 	/// Invalid engine specification or implementation.
 	InvalidEngine,
+	/// Requires signer ref, but none registered.
+	RequiresSigner,
+	/// Checkpoint is missing
+	CliqueMissingCheckpoint(H256),
+	/// Missing vanity data
+	CliqueMissingVanity,
+	/// Missing signature
+	CliqueMissingSignature,
+	/// Missing signers
+	CliqueCheckpointNoSigner,
+	/// List of signers is invalid
+	CliqueCheckpointInvalidSigners(usize),
+	/// Wrong author on a checkpoint
+	CliqueWrongAuthorCheckpoint(Mismatch<Address>),
+	/// Wrong checkpoint authors recovered
+	CliqueFaultyRecoveredSigners(Vec<String>),
+	/// Invalid nonce (should contain vote)
+	CliqueInvalidNonce(H64),
+	/// The signer signed a block too recently
+	CliqueTooRecentlySigned(Address),
+	/// Custom
+	Custom(String),
 }

 impl fmt::Display for EngineError {
 	fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
 		use self::EngineError::*;
 		let msg = match *self {
+			CliqueMissingCheckpoint(ref hash) => format!("Missing checkpoint block: {}", hash),
+			CliqueMissingVanity => format!("Extra data is missing vanity data"),
+			CliqueMissingSignature => format!("Extra data is missing signature"),
+			CliqueCheckpointInvalidSigners(len) => format!("Checkpoint block signer list was of length {}, \
+				but it must be non-zero and divisible by 20", len),
+			CliqueCheckpointNoSigner => format!("Checkpoint block list of signers was empty"),
+			CliqueInvalidNonce(ref mis) => format!("Unexpected nonce {} expected {} or {}", mis, 0_u64, u64::max_value()),
+			CliqueWrongAuthorCheckpoint(ref oob) => format!("Unexpected checkpoint author: {}", oob),
+			CliqueFaultyRecoveredSigners(ref mis) => format!("Faulty recovered signers {:?}", mis),
+			CliqueTooRecentlySigned(ref address) => format!("The signer: {} has signed a block too recently", address),
+			Custom(ref s) => s.clone(),
 			DoubleVote(ref address) => format!("Author {} issued too many blocks.", address),
 			NotProposer(ref mis) => format!("Author is not a current proposer: {}", mis),
 			NotAuthorized(ref address) => format!("Signer {} is not authorized.", address),
@@ -98,6 +135,7 @@ impl fmt::Display for EngineError {
 			FailedSystemCall(ref msg) => format!("Failed to make system call: {}", msg),
 			MalformedMessage(ref msg) => format!("Received malformed consensus message: {}", msg),
 			RequiresClient => format!("Call requires client but none registered"),
+			RequiresSigner => format!("Call requires signer but none registered"),
 			InvalidEngine => format!("Invalid engine specification or implementation"),
 		};

@@ -118,7 +156,7 @@ pub enum Seal {
 	Proposal(Vec<Bytes>),
 	/// Regular block seal; should be part of the blockchain.
 	Regular(Vec<Bytes>),
-	/// Engine does generate seal for this block right now.
+	/// Engine does not generate seal for this block right now.
 	None,
 }

@@ -176,8 +214,7 @@ pub type PendingTransitionStore<'a> = Fn(H256) -> Option<epoch::PendingTransitio
 /// Proof dependent on state.
 pub trait StateDependentProof<M: Machine>: Send + Sync {
 	/// Generate a proof, given the state.
-	// TODO: make this into an &M::StateContext
-	fn generate_proof<'a>(&self, state: &<M as Localized<'a>>::StateContext) -> Result<Vec<u8>, String>;
+	fn generate_proof<'a>(&self, state: &machine::Call) -> Result<Vec<u8>, String>;
 	/// Check a proof generated elsewhere (potentially by a peer).
 	// `engine` needed to check state proofs, while really this should
 	// just be state machine params.
@@ -217,7 +254,7 @@ impl<'a, M: Machine> ConstructedVerifier<'a, M> {
 /// Results of a query of whether an epoch change occurred at the given block.
 pub enum EpochChange<M: Machine> {
 	/// Cannot determine until more data is passed.
-	Unsure(M::AuxiliaryRequest),
+	Unsure(AuxiliaryRequest),
 	/// No epoch change.
 	No,
 	/// The epoch will change, with proof.
@@ -235,17 +272,14 @@ pub trait Engine<M: Machine>: Sync + Send {
 	fn machine(&self) -> &M;

 	/// The number of additional header fields required for this engine.
-	fn seal_fields(&self, _header: &M::Header) -> usize { 0 }
+	fn seal_fields(&self, _header: &Header) -> usize { 0 }

 	/// Additional engine-specific information for the user/developer concerning `header`.
-	fn extra_info(&self, _header: &M::Header) -> BTreeMap<String, String> { BTreeMap::new() }
+	fn extra_info(&self, _header: &Header) -> BTreeMap<String, String> { BTreeMap::new() }

 	/// Maximum number of uncles a block is allowed to declare.
 	fn maximum_uncle_count(&self, _block: BlockNumber) -> usize { 0 }

-	/// The number of generations back that uncles can be.
-	fn maximum_uncle_age(&self) -> usize { 6 }
-
 	/// Optional maximum gas limit.
 	fn maximum_gas_limit(&self) -> Option<U256> { None }

@@ -253,18 +287,21 @@ pub trait Engine<M: Machine>: Sync + Send {
 	/// `epoch_begin` set to true if this block kicks off an epoch.
 	fn on_new_block(
 		&self,
-		_block: &mut M::LiveBlock,
+		_block: &mut ExecutedBlock,
 		_epoch_begin: bool,
-		_ancestry: &mut Iterator<Item=M::ExtendedHeader>,
+		_ancestry: &mut Iterator<Item = ExtendedHeader>,
 	) -> Result<(), M::Error> {
 		Ok(())
 	}

 	/// Block transformation functions, after the transactions.
-	fn on_close_block(&self, _block: &mut M::LiveBlock) -> Result<(), M::Error> {
+	fn on_close_block(&self, _block: &mut ExecutedBlock) -> Result<(), M::Error> {
 		Ok(())
 	}

+	/// Allow mutating the header during seal generation. Currently only used by Clique.
+	fn on_seal_block(&self, _block: &mut ExecutedBlock) -> Result<(), Error> { Ok(()) }
+
 	/// None means that it requires external input (e.g. PoW) to seal a block.
 	/// Some(true) means the engine is currently prime for seal generation (i.e. node is the current validator).
 	/// Some(false) means that the node might seal internally but is not qualified now.
@@ -279,7 +316,7 @@ pub trait Engine<M: Machine>: Sync + Send {
 	///
 	/// It is fine to require access to state or a full client for this function, since
 	/// light clients do not generate seals.
-	fn generate_seal(&self, _block: &M::LiveBlock, _parent: &M::Header) -> Seal { Seal::None }
+	fn generate_seal(&self, _block: &ExecutedBlock, _parent: &Header) -> Seal { Seal::None }

 	/// Verify a locally-generated seal of a header.
 	///
@@ -291,25 +328,25 @@ pub trait Engine<M: Machine>: Sync + Send {
 	///
 	/// It is fine to require access to state or a full client for this function, since
 	/// light clients do not generate seals.
-	fn verify_local_seal(&self, header: &M::Header) -> Result<(), M::Error>;
+	fn verify_local_seal(&self, header: &Header) -> Result<(), M::Error>;

 	/// Phase 1 quick block verification. Only does checks that are cheap. Returns either a null `Ok` or a general error detailing the problem with import.
 	/// The verification module can optionally avoid checking the seal (`check_seal`), if seal verification is disabled this method won't be called.
-	fn verify_block_basic(&self, _header: &M::Header) -> Result<(), M::Error> { Ok(()) }
+	fn verify_block_basic(&self, _header: &Header) -> Result<(), M::Error> { Ok(()) }

 	/// Phase 2 verification. Perform costly checks such as transaction signatures. Returns either a null `Ok` or a general error detailing the problem with import.
 	/// The verification module can optionally avoid checking the seal (`check_seal`), if seal verification is disabled this method won't be called.
-	fn verify_block_unordered(&self, _header: &M::Header) -> Result<(), M::Error> { Ok(()) }
+	fn verify_block_unordered(&self, _header: &Header) -> Result<(), M::Error> { Ok(()) }

 	/// Phase 3 verification. Check block information against parent. Returns either a null `Ok` or a general error detailing the problem with import.
-	fn verify_block_family(&self, _header: &M::Header, _parent: &M::Header) -> Result<(), M::Error> { Ok(()) }
+	fn verify_block_family(&self, _header: &Header, _parent: &Header) -> Result<(), M::Error> { Ok(()) }

 	/// Phase 4 verification. Verify block header against potentially external data.
 	/// Should only be called when `register_client` has been called previously.
-	fn verify_block_external(&self, _header: &M::Header) -> Result<(), M::Error> { Ok(()) }
+	fn verify_block_external(&self, _header: &Header) -> Result<(), M::Error> { Ok(()) }

 	/// Genesis epoch data.
-	fn genesis_epoch_data<'a>(&self, _header: &M::Header, _state: &<M as Localized<'a>>::StateContext) -> Result<Vec<u8>, String> { Ok(Vec::new()) }
+	fn genesis_epoch_data<'a>(&self, _header: &Header, _state: &machine::Call) -> Result<Vec<u8>, String> { Ok(Vec::new()) }

 	/// Whether an epoch change is signalled at the given header but will require finality.
 	/// If a change can be enacted immediately then return `No` from this function but
@@ -320,7 +357,7 @@ pub trait Engine<M: Machine>: Sync + Send {
 	/// Return `Yes` or `No` when the answer is definitively known.
 	///
 	/// Should not interact with state.
-	fn signals_epoch_end<'a>(&self, _header: &M::Header, _aux: <M as Localized<'a>>::AuxiliaryData)
+	fn signals_epoch_end<'a>(&self, _header: &Header, _aux: AuxiliaryData<'a>)
 		-> EpochChange<M>
 	{
 		EpochChange::No
@@ -336,9 +373,9 @@ pub trait Engine<M: Machine>: Sync + Send {
 	/// Return optional transition proof.
 	fn is_epoch_end(
 		&self,
-		_chain_head: &M::Header,
+		_chain_head: &Header,
 		_finalized: &[H256],
-		_chain: &Headers<M::Header>,
+		_chain: &Headers<Header>,
 		_transition_store: &PendingTransitionStore,
 	) -> Option<Vec<u8>> {
 		None
@@ -355,8 +392,8 @@ pub trait Engine<M: Machine>: Sync + Send {
 	/// Return optional transition proof.
 	fn is_epoch_end_light(
 		&self,
-		_chain_head: &M::Header,
-		_chain: &Headers<M::Header>,
+		_chain_head: &Header,
+		_chain: &Headers<Header>,
 		_transition_store: &PendingTransitionStore,
 	) -> Option<Vec<u8>> {
 		None
@@ -364,22 +401,18 @@ pub trait Engine<M: Machine>: Sync + Send {

 	/// Create an epoch verifier from validation proof and a flag indicating
 	/// whether finality is required.
-	fn epoch_verifier<'a>(&self, _header: &M::Header, _proof: &'a [u8]) -> ConstructedVerifier<'a, M> {
-		ConstructedVerifier::Trusted(Box::new(self::epoch::NoOp))
+	fn epoch_verifier<'a>(&self, _header: &Header, _proof: &'a [u8]) -> ConstructedVerifier<'a, M> {
+		ConstructedVerifier::Trusted(Box::new(NoOp))
 	}

 	/// Populate a header's fields based on its parent's header.
 	/// Usually implements the chain scoring rule based on weight.
-	fn populate_from_parent(&self, _header: &mut M::Header, _parent: &M::Header) { }
+	fn populate_from_parent(&self, _header: &mut Header, _parent: &Header) { }

 	/// Handle any potential consensus messages;
 	/// updating consensus state and potentially issuing a new one.
 	fn handle_message(&self, _message: &[u8]) -> Result<(), EngineError> { Err(EngineError::UnexpectedMessage) }

-	/// Find out if the block is a proposal block and should not be inserted into the DB.
-	/// Takes a header of a fully verified block.
-	fn is_proposal(&self, _verified_header: &M::Header) -> bool { false }
-
 	/// Register a component which signs consensus messages.
 	fn set_signer(&self, _signer: Box<EngineSigner>) {}

@@ -393,7 +426,7 @@ pub trait Engine<M: Machine>: Sync + Send {
 	fn step(&self) {}

 	/// Stops any services that may hold the Engine and makes it safe to drop.
-	fn stop(&self) {}
+	fn stop(&mut self) {}

 	/// Create a factory for building snapshot chunks and restoring from them.
 	/// Returning `None` indicates that this engine doesn't support snapshot creation.
@@ -421,16 +454,21 @@ pub trait Engine<M: Machine>: Sync + Send {

 	/// Gather all ancestry actions. Called at the last stage when a block is committed. The Engine must guarantee that
 	/// the ancestry exists.
-	fn ancestry_actions(&self, _header: &M::Header, _ancestry: &mut Iterator<Item=M::ExtendedHeader>) -> Vec<AncestryAction> {
+	fn ancestry_actions(&self, _header: &Header, _ancestry: &mut Iterator<Item = ExtendedHeader>) -> Vec<AncestryAction> {
 		Vec::new()
 	}

 	/// Check whether the given new block is the best block, after finalization check.
-	fn fork_choice(&self, new: &M::ExtendedHeader, best: &M::ExtendedHeader) -> ForkChoice;
+	fn fork_choice(&self, new: &ExtendedHeader, best: &ExtendedHeader) -> ForkChoice;

+	/// Returns the author that should be used when executing transactions for this block.
+	fn executive_author(&self, header: &Header) -> Result<Address, Error> {
+		Ok(*header.author())
+	}
 }

 /// Check whether a given block is the best block based on the default total difficulty rule.
-pub fn total_difficulty_fork_choice<T: TotalScoredHeader>(new: &T, best: &T) -> ForkChoice where <T as TotalScoredHeader>::Value: Ord {
+pub fn total_difficulty_fork_choice(new: &ExtendedHeader, best: &ExtendedHeader) -> ForkChoice {
 	if new.total_score() > best.total_score() {
 		ForkChoice::New
 	} else {
@@ -523,3 +561,29 @@ pub trait EthEngine: Engine<::machine::EthereumMachine> {

 // convenience wrappers for existing functions.
 impl<T> EthEngine for T where T: Engine<::machine::EthereumMachine> { }

+/// Verifier for all blocks within an epoch with self-contained state.
+pub trait EpochVerifier<M: machine::Machine>: Send + Sync {
+	/// Lightly verify the next block header.
+	/// This may not be a header belonging to a different epoch.
+	fn verify_light(&self, header: &Header) -> Result<(), M::Error>;
+
+	/// Perform potentially heavier checks on the next block header.
+	fn verify_heavy(&self, header: &Header) -> Result<(), M::Error> {
+		self.verify_light(header)
+	}
+
+	/// Check a finality proof against this epoch verifier.
+	/// Returns `Some(hashes)` if the proof proves finality of these hashes.
+	/// Returns `None` if the proof doesn't prove anything.
+	fn check_finality_proof(&self, _proof: &[u8]) -> Option<Vec<H256>> {
+		None
+	}
+}
+
+/// Special "no-op" verifier for stateless, epoch-less engines.
+pub struct NoOp;
+
+impl<M: machine::Machine> EpochVerifier<M> for NoOp {
+	fn verify_light(&self, _header: &Header) -> Result<(), M::Error> { Ok(()) }
+}
```diff
@@ -17,9 +17,10 @@
 use engines::Engine;
 use engines::block_reward::{self, RewardKind};
 use ethereum_types::U256;
-use machine::WithRewards;
-use parity_machine::{Machine, Header, LiveBlock, TotalScoredHeader};
+use machine::Machine;
+use types::BlockNumber;
+use types::header::{Header, ExtendedHeader};
+use block::ExecutedBlock;

 /// Params for a null engine.
 #[derive(Clone, Default)]
@@ -58,26 +59,23 @@ impl<M: Default> Default for NullEngine<M> {
 	}
 }

-impl<M: Machine + WithRewards> Engine<M> for NullEngine<M>
-	where M::ExtendedHeader: TotalScoredHeader,
-	      <M::ExtendedHeader as TotalScoredHeader>::Value: Ord
-{
+impl<M: Machine> Engine<M> for NullEngine<M> {
 	fn name(&self) -> &str {
 		"NullEngine"
 	}

 	fn machine(&self) -> &M { &self.machine }

-	fn on_close_block(&self, block: &mut M::LiveBlock) -> Result<(), M::Error> {
+	fn on_close_block(&self, block: &mut ExecutedBlock) -> Result<(), M::Error> {
 		use std::ops::Shr;

-		let author = *LiveBlock::header(&*block).author();
-		let number = LiveBlock::header(&*block).number();
+		let author = *block.header.author();
+		let number = block.header.number();

 		let reward = self.params.block_reward;
 		if reward == U256::zero() { return Ok(()) }

-		let n_uncles = LiveBlock::uncles(&*block).len();
+		let n_uncles = block.uncles.len();

 		let mut rewards = Vec::new();

@@ -86,7 +84,7 @@ impl<M: Machine + WithRewards> Engine<M> for NullEngine<M>
 		rewards.push((author, RewardKind::Author, result_block_reward));

 		// bestow uncle rewards.
-		for u in LiveBlock::uncles(&*block) {
+		for u in &block.uncles {
 			let uncle_author = u.author();
 			let result_uncle_reward = (reward * U256::from(8 + u.number() - number)).shr(3);
 			rewards.push((*uncle_author, RewardKind::uncle(number, u.number()), result_uncle_reward));
@@ -97,7 +95,7 @@ impl<M: Machine + WithRewards> Engine<M> for NullEngine<M>

 	fn maximum_uncle_count(&self, _block: BlockNumber) -> usize { 2 }

-	fn verify_local_seal(&self, _header: &M::Header) -> Result<(), M::Error> {
+	fn verify_local_seal(&self, _header: &Header) -> Result<(), M::Error> {
 		Ok(())
 	}

@@ -105,7 +103,7 @@ impl<M: Machine + WithRewards> Engine<M> for NullEngine<M>
 		Some(Box::new(::snapshot::PowSnapshot::new(10000, 10000)))
 	}

-	fn fork_choice(&self, new: &M::ExtendedHeader, current: &M::ExtendedHeader) -> super::ForkChoice {
+	fn fork_choice(&self, new: &ExtendedHeader, current: &ExtendedHeader) -> super::ForkChoice {
 		super::total_difficulty_fork_choice(new, current)
 	}
 }
```
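The uncle-reward expression above, `(reward * U256::from(8 + u.number() - number)).shr(3)`, pays an uncle at depth `d = number - uncle_number` a fraction `(8 - d)/8` of the base reward; the right shift by 3 is the division by 8. A sanity check with plain `u128` arithmetic instead of `U256` (the numbers here are illustrative, not from the source):

```rust
// Ethash-style uncle reward: (reward * (8 + uncle_number - number)) >> 3.
// Plain u128 stand-in for U256; inputs are hypothetical examples.
fn uncle_reward(reward: u128, block_number: u64, uncle_number: u64) -> u128 {
    // An uncle included d blocks deep earns (8 - d)/8 of the base reward;
    // shifting right by 3 divides by 8.
    (reward * (8 + uncle_number as u128 - block_number as u128)) >> 3
}

fn main() {
    let reward: u128 = 5_000_000_000_000_000_000; // 5 ETH in wei

    // Depth 1 uncle: 7/8 of the base reward.
    assert_eq!(uncle_reward(reward, 100, 99), reward / 8 * 7);
    // Depth 7 uncle (the deepest allowed): 1/8 of the base reward.
    assert_eq!(uncle_reward(reward, 100, 93), reward / 8);
    println!("ok");
}
```

Note the same `u64` subtraction appears in both the old and new code; for a valid uncle `u.number() < number` holds with depth at most 7, so `8 + u.number() - number` never underflows.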
```diff
@@ -74,7 +74,7 @@ impl Multi {
 impl ValidatorSet for Multi {
 	fn default_caller(&self, block_id: BlockId) -> Box<Call> {
 		self.correct_set(block_id).map(|set| set.default_caller(block_id))
-			.unwrap_or(Box::new(|_, _| Err("No validator set for given ID.".into())))
+			.unwrap_or_else(|| Box::new(|_, _| Err("No validator set for given ID.".into())))
 	}

 	fn on_epoch_begin(&self, _first: bool, header: &Header, call: &mut SystemCall) -> Result<(), ::error::Error> {
@@ -141,7 +141,7 @@ impl ValidatorSet for Multi {
 		*self.block_number.write() = Box::new(move |id| client
 			.upgrade()
 			.ok_or_else(|| "No client!".into())
-			.and_then(|c| c.block_number(id).ok_or("Unknown block".into())));
+			.and_then(|c| c.block_number(id).ok_or_else(|| "Unknown block".into())));
 	}
 }
```
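The `unwrap_or` to `unwrap_or_else` swaps above (and the matching `ok_or` to `ok_or_else`) avoid building the fallback value eagerly: the argument of `unwrap_or(expr)` is evaluated even when the `Option` is `Some`, whereas `unwrap_or_else` only runs its closure on `None`. A minimal standalone illustration (not ethcore code):

```rust
// Counts how often the "expensive" default is actually constructed.
fn main() {
    let mut eager_calls = 0u32;
    let mut lazy_calls = 0u32;

    let mut expensive_default = |calls: &mut u32| -> i32 {
        *calls += 1; // record that the default was built
        42
    };

    // Eager: the default is constructed before unwrap_or even runs,
    // even though the Option is Some and the value is discarded.
    let a = Some(1).unwrap_or(expensive_default(&mut eager_calls));

    // Lazy: the closure is never invoked for Some.
    let b = Some(1).unwrap_or_else(|| expensive_default(&mut lazy_calls));

    assert_eq!((a, b), (1, 1));
    assert_eq!(eager_calls, 1); // default constructed needlessly
    assert_eq!(lazy_calls, 0);  // default skipped entirely
    println!("ok");
}
```

In `default_caller` the fallback is a heap allocation (`Box::new(...)`), so the lazy form also skips an allocation on the happy path; clippy's `or_fun_call` lint flags exactly this pattern.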
```diff
@@ -16,6 +16,10 @@

 //! General error types for use in ethcore.

+// Silence: `use of deprecated item 'std::error::Error::cause': replaced by Error::source, which can support downcasting`
+// https://github.com/paritytech/parity-ethereum/issues/10302
+#![allow(deprecated)]
+
 use std::{fmt, error};
 use std::time::SystemTime;

@@ -33,7 +37,7 @@ use engines::EngineError;

 pub use executed::{ExecutionError, CallError};

-#[derive(Debug, PartialEq, Clone, Copy, Eq)]
+#[derive(Debug, PartialEq, Clone, Eq)]
 /// Errors concerning block processing.
 pub enum BlockError {
 	/// Block has too many uncles.
@@ -84,7 +88,7 @@ pub enum BlockError {
 	/// Timestamp header field is too far in future.
 	TemporarilyInvalid(OutOfBounds<SystemTime>),
 	/// Log bloom header field is invalid.
-	InvalidLogBloom(Mismatch<Bloom>),
+	InvalidLogBloom(Box<Mismatch<Bloom>>),
 	/// Number field of header is invalid.
 	InvalidNumber(Mismatch<BlockNumber>),
 	/// Block number isn't sensible.
```
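Boxing the `Mismatch<Bloom>` payload above matters because a Rust enum is as large as its largest variant: a 2048-bit `Bloom` (256 bytes) inflates every `BlockError` value, and every `Result` carrying one, even when the error is a small variant. A sketch with stand-in types (not the ethcore ones) showing the size difference:

```rust
// Stand-in for the 2048-bit log bloom filter; not the ethbloom type.
use std::mem::size_of;

struct Bloom([u8; 256]);

// One huge variant makes every value of the enum huge.
enum Unboxed {
    Small(u32),
    Big(Bloom),
}

// Boxing shrinks the big variant to a pointer.
enum Boxed {
    Small(u32),
    Big(Box<Bloom>),
}

fn main() {
    // The unboxed enum must reserve room for the 256-byte payload.
    assert!(size_of::<Unboxed>() > 256);
    // The boxed enum is just a discriminant plus a pointer.
    assert!(size_of::<Boxed>() <= 16);
    println!(
        "Unboxed: {} bytes, Boxed: {} bytes",
        size_of::<Unboxed>(),
        size_of::<Boxed>()
    );
}
```

This is also why the `Copy` derive is dropped in the same hunk: `Box` is not `Copy`, so the boxed variant forces the enum to be clone-only. Clippy's `large_enum_variant` lint suggests exactly this transformation.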
```diff
@@ -241,17 +241,16 @@ impl Engine<EthereumMachine> for Arc<Ethash> {
 	/// This assumes that all uncles are valid uncles (i.e. of at least one generation before the current).
 	fn on_close_block(&self, block: &mut ExecutedBlock) -> Result<(), Error> {
 		use std::ops::Shr;
-		use parity_machine::LiveBlock;

-		let author = *LiveBlock::header(&*block).author();
-		let number = LiveBlock::header(&*block).number();
+		let author = *block.header.author();
+		let number = block.header.number();

 		let rewards = match self.ethash_params.block_reward_contract {
 			Some(ref c) if number >= self.ethash_params.block_reward_contract_transition => {
 				let mut beneficiaries = Vec::new();

 				beneficiaries.push((author, RewardKind::Author));
-				for u in LiveBlock::uncles(&*block) {
+				for u in &block.uncles {
 					let uncle_author = u.author();
 					beneficiaries.push((*uncle_author, RewardKind::uncle(number, u.number())));
 				}
@@ -274,7 +273,8 @@ impl Engine<EthereumMachine> for Arc<Ethash> {
 				let eras_rounds = self.ethash_params.ecip1017_era_rounds;
 				let (eras, reward) = ecip1017_eras_block_reward(eras_rounds, reward, number);

-				let n_uncles = LiveBlock::uncles(&*block).len();
+				//let n_uncles = LiveBlock::uncles(&*block).len();
+				let n_uncles = block.uncles.len();

 				// Bestow block rewards.
 				let mut result_block_reward = reward + reward.shr(5) * U256::from(n_uncles);
@@ -282,7 +282,7 @@ impl Engine<EthereumMachine> for Arc<Ethash> {
 				rewards.push((author, RewardKind::Author, result_block_reward));

 				// Bestow uncle rewards.
-				for u in LiveBlock::uncles(&*block) {
+				for u in &block.uncles {
 					let uncle_author = u.author();
 					let result_uncle_reward = if eras == 0 {
 						(reward * U256::from(8 + u.number() - number)).shr(3)
@@ -540,9 +540,9 @@ mod tests {
 		let genesis_header = spec.genesis_header();
 		let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
 		let last_hashes = Arc::new(vec![genesis_header.hash()]);
-		let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::zero(), (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
+		let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::zero(), (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
 		let b = b.close().unwrap();
-		assert_eq!(b.state().balance(&Address::zero()).unwrap(), U256::from_str("4563918244f40000").unwrap());
+		assert_eq!(b.state.balance(&Address::zero()).unwrap(), U256::from_str("4563918244f40000").unwrap());
 	}

 	#[test]
@@ -589,15 +589,15 @@ mod tests {
 		let genesis_header = spec.genesis_header();
 		let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
 		let last_hashes = Arc::new(vec![genesis_header.hash()]);
-		let mut b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::zero(), (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
+		let mut b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::zero(), (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
 		let mut uncle = Header::new();
 		let uncle_author: Address = "ef2d6d194084c2de36e0dabfce45d046b37d1106".into();
 		uncle.set_author(uncle_author);
 		b.push_uncle(uncle).unwrap();

 		let b = b.close().unwrap();
-		assert_eq!(b.state().balance(&Address::zero()).unwrap(), "478eae0e571ba000".into());
-		assert_eq!(b.state().balance(&uncle_author).unwrap(), "3cb71f51fc558000".into());
+		assert_eq!(b.state.balance(&Address::zero()).unwrap(), "478eae0e571ba000".into());
+		assert_eq!(b.state.balance(&uncle_author).unwrap(), "3cb71f51fc558000".into());
 	}

 	#[test]
@@ -607,14 +607,14 @@ mod tests {
 		let genesis_header = spec.genesis_header();
 		let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
 		let last_hashes = Arc::new(vec![genesis_header.hash()]);
-		let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::zero(), (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
+		let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::zero(), (3141562.into(), 31415620.into()), vec![], false, None).unwrap();
 		let b = b.close().unwrap();

 		let ubi_contract: Address = "00efdd5883ec628983e9063c7d969fe268bbf310".into();
 		let dev_contract: Address = "00756cf8159095948496617f5fb17ed95059f536".into();
-		assert_eq!(b.state().balance(&Address::zero()).unwrap(), U256::from_str("d8d726b7177a80000").unwrap());
-		assert_eq!(b.state().balance(&ubi_contract).unwrap(), U256::from_str("2b5e3af16b1880000").unwrap());
-		assert_eq!(b.state().balance(&dev_contract).unwrap(), U256::from_str("c249fdd327780000").unwrap());
+		assert_eq!(b.state.balance(&Address::zero()).unwrap(), U256::from_str("d8d726b7177a80000").unwrap());
+		assert_eq!(b.state.balance(&ubi_contract).unwrap(), U256::from_str("2b5e3af16b1880000").unwrap());
+		assert_eq!(b.state.balance(&dev_contract).unwrap(), U256::from_str("c249fdd327780000").unwrap());
 	}

 	#[test]
```
```diff
@@ -84,6 +84,11 @@ pub fn new_mix<'a, T: Into<SpecParams<'a>>>(params: T) -> Spec {
 	load(params.into(), include_bytes!("../../res/ethereum/mix.json"))
 }

+/// Create a new Callisto chain spec
+pub fn new_callisto<'a, T: Into<SpecParams<'a>>>(params: T) -> Spec {
+	load(params.into(), include_bytes!("../../res/ethereum/callisto.json"))
+}
+
 /// Create a new Morden testnet chain spec.
 pub fn new_morden<'a, T: Into<SpecParams<'a>>>(params: T) -> Spec {
 	load(params.into(), include_bytes!("../../res/ethereum/morden.json"))
@@ -99,16 +104,26 @@ pub fn new_kovan<'a, T: Into<SpecParams<'a>>>(params: T) -> Spec {
 	load(params.into(), include_bytes!("../../res/ethereum/kovan.json"))
 }

 /// Create a new Rinkeby testnet chain spec.
 pub fn new_rinkeby<'a, T: Into<SpecParams<'a>>>(params: T) -> Spec {
 	load(params.into(), include_bytes!("../../res/ethereum/rinkeby.json"))
 }

+/// Create a new Görli testnet chain spec.
+pub fn new_goerli<'a, T: Into<SpecParams<'a>>>(params: T) -> Spec {
+	load(params.into(), include_bytes!("../../res/ethereum/goerli.json"))
+}
+
+/// Create a new Kotti testnet chain spec.
+pub fn new_kotti<'a, T: Into<SpecParams<'a>>>(params: T) -> Spec {
+	load(params.into(), include_bytes!("../../res/ethereum/kotti.json"))
+}
+
+/// Create a new POA Sokol testnet chain spec.
+pub fn new_sokol<'a, T: Into<SpecParams<'a>>>(params: T) -> Spec {
+	load(params.into(), include_bytes!("../../res/ethereum/poasokol.json"))
+}
+
-/// Create a new Callisto chaun spec
-pub fn new_callisto<'a, T: Into<SpecParams<'a>>>(params: T) -> Spec {
-	load(params.into(), include_bytes!("../../res/ethereum/callisto.json"))
-}
-
 // For tests

 /// Create a new Foundation Frontier-era chain spec as though it never changes to Homestead.
```
```diff
@@ -167,7 +167,7 @@ pub enum CallError {
 	/// Couldn't find requested block's state in the chain.
 	StatePruned,
 	/// Couldn't find an amount of gas that didn't result in an exception.
-	Exceptional,
+	Exceptional(vm::Error),
 	/// Corrupt state.
 	StateCorrupt,
 	/// Error executing.
@@ -187,7 +187,7 @@ impl fmt::Display for CallError {
 		let msg = match *self {
 			TransactionNotFound => "Transaction couldn't be found in the chain".into(),
 			StatePruned => "Couldn't find the transaction block's state in the chain".into(),
-			Exceptional => "An exception happened in the execution".into(),
+			Exceptional(ref e) => format!("An exception ({}) happened in the execution", e),
 			StateCorrupt => "Stored state found to be corrupted.".into(),
 			Execution(ref e) => format!("{}", e),
 		};
@@ -197,4 +197,4 @@ impl fmt::Display for CallError {
 }

 /// Transaction execution result.
-pub type ExecutionResult = Result<Executed, ExecutionError>;
+pub type ExecutionResult = Result<Box<Executed>, ExecutionError>;
```
```diff
@@ -117,7 +117,7 @@ impl<'a, T: 'a, V: 'a, B: 'a> Ext for Externalities<'a, T, V, B>
 {
 	fn initial_storage_at(&self, key: &H256) -> vm::Result<H256> {
 		if self.state.is_base_storage_root_unchanged(&self.origin_info.address)? {
-			self.state.checkpoint_storage_at(0, &self.origin_info.address, key).map(|v| v.unwrap_or(H256::zero())).map_err(Into::into)
+			self.state.checkpoint_storage_at(0, &self.origin_info.address, key).map(|v| v.unwrap_or_default()).map_err(Into::into)
 		} else {
 			warn!(target: "externalities", "Detected existing account {:#x} where a forced contract creation happened.", self.origin_info.address);
 			Ok(H256::zero())
@@ -314,7 +314,11 @@ impl<'a, T: 'a, V: 'a, B: 'a> Ext for Externalities<'a, T, V, B>
 	}

 	fn extcodehash(&self, address: &Address) -> vm::Result<Option<H256>> {
-		Ok(self.state.code_hash(address)?)
+		if self.state.exists_and_not_null(address)? {
+			Ok(self.state.code_hash(address)?)
+		} else {
+			Ok(None)
+		}
 	}

 	fn extcodesize(&self, address: &Address) -> vm::Result<Option<usize>> {
```
```diff
@@ -18,7 +18,7 @@ use std::path::Path;
 use super::test_common::*;
 use pod_state::PodState;
 use trace;
-use client::{EvmTestClient, EvmTestError, TransactResult};
+use client::{EvmTestClient, EvmTestError, TransactErr, TransactSuccess};
 use ethjson;
 use types::transaction::SignedTransaction;
 use vm::EnvInfo;
@@ -90,18 +90,18 @@ pub fn json_chain_test<H: FnMut(&str, HookType)>(json_data: &[u8], start_stop_ho
 						flushln!("{} fail", info);
 						failed.push(name.clone());
 					},
-					Ok(TransactResult::Ok { state_root, .. }) if state_root != post_root => {
+					Ok(Ok(TransactSuccess { state_root, .. })) if state_root != post_root => {
 						println!("{} !!! State mismatch (got: {}, expect: {}", info, state_root, post_root);
 						flushln!("{} fail", info);
 						failed.push(name.clone());
 					},
-					Ok(TransactResult::Err { state_root, ref error, .. }) if state_root != post_root => {
+					Ok(Err(TransactErr { state_root, ref error, .. })) if state_root != post_root => {
 						println!("{} !!! State mismatch (got: {}, expect: {}", info, state_root, post_root);
 						println!("{} !!! Execution error: {:?}", info, error);
 						flushln!("{} fail", info);
 						failed.push(name.clone());
 					},
-					Ok(TransactResult::Err { error, .. }) => {
+					Ok(Err(TransactErr { error, .. })) => {
 						flushln!("{} ok ({:?})", info, error);
 					},
 					Ok(_) => {
```
```diff
@@ -15,7 +15,6 @@
 // along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

 #![warn(missing_docs, unused_extern_crates)]
-#![cfg_attr(feature = "time_checked_add", feature(time_checked_add))]

 //! Ethcore library
 //!
@@ -90,7 +89,6 @@ extern crate num;
 extern crate num_cpus;
 extern crate parity_bytes as bytes;
 extern crate parity_crypto;
-extern crate parity_machine;
 extern crate parity_snappy as snappy;
 extern crate parking_lot;
 extern crate trie_db as trie;
@@ -101,6 +99,7 @@ extern crate rlp;
 extern crate rustc_hex;
 extern crate serde;
 extern crate stats;
+extern crate time_utils;
 extern crate triehash_ethereum as triehash;
 extern crate unexpected;
 extern crate using_queue;
@@ -150,9 +149,6 @@ extern crate fetch;
 #[cfg(all(test, feature = "price-info"))]
 extern crate parity_runtime;

-#[cfg(not(time_checked_add))]
-extern crate time_utils;
-
 pub mod block;
 pub mod builtin;
 pub mod client;
```
```diff
@@ -24,11 +24,11 @@ use ethereum_types::{U256, H256, Address};
 use rlp::Rlp;
 use types::transaction::{self, SYSTEM_ADDRESS, UNSIGNED_SENDER, UnverifiedTransaction, SignedTransaction};
 use types::BlockNumber;
-use types::header::{Header, ExtendedHeader};
+use types::header::Header;
 use vm::{CallType, ActionParams, ActionValue, ParamsType};
 use vm::{EnvInfo, Schedule, CreateContractAddress};

-use block::{ExecutedBlock, IsBlock};
+use block::ExecutedBlock;
 use builtin::Builtin;
 use call_contract::CallContract;
 use client::BlockInfo;
@@ -36,7 +36,7 @@ use error::Error;
 use executive::Executive;
 use spec::CommonParams;
 use state::{CleanupMode, Substate};
-use trace::{NoopTracer, NoopVMTracer, Tracer, ExecutiveTracer, RewardType, Tracing};
+use trace::{NoopTracer, NoopVMTracer};
 use tx_filter::TransactionFilter;

 /// Parity tries to round block.gas_limit to multiple of this constant
@@ -126,7 +126,7 @@ impl EthereumMachine {
 		data: Option<Vec<u8>>,
 	) -> Result<Vec<u8>, Error> {
 		let (code, code_hash) = {
-			let state = block.state();
+			let state = &block.state;

 			(state.code(&contract_address)?,
 				state.code_hash(&contract_address)?)
@@ -173,7 +173,7 @@ impl EthereumMachine {
 			origin: SYSTEM_ADDRESS,
 			gas,
 			gas_price: 0.into(),
-			value: value.unwrap_or(ActionValue::Transfer(0.into())),
+			value: value.unwrap_or_else(|| ActionValue::Transfer(0.into())),
 			code,
 			code_hash,
 			data,
@@ -193,12 +193,12 @@ impl EthereumMachine {
 	/// Push last known block hash to the state.
 	fn push_last_hash(&self, block: &mut ExecutedBlock) -> Result<(), Error> {
 		let params = self.params();
-		if block.header().number() == params.eip210_transition {
+		if block.header.number() == params.eip210_transition {
 			let state = block.state_mut();
 			state.init_code(&params.eip210_contract_address, params.eip210_contract_code.clone())?;
 		}
-		if block.header().number() >= params.eip210_transition {
-			let parent_hash = block.header().parent_hash().clone();
+		if block.header.number() >= params.eip210_transition {
+			let parent_hash = *block.header.parent_hash();
 			let _ = self.execute_as_system(
 				block,
 				params.eip210_contract_address,
@@ -215,7 +215,7 @@ impl EthereumMachine {
 		self.push_last_hash(block)?;

 		if let Some(ref ethash_params) = self.ethash_extensions {
-			if block.header().number() == ethash_params.dao_hardfork_transition {
+			if block.header.number() == ethash_params.dao_hardfork_transition {
 				let state = block.state_mut();
 				for child in &ethash_params.dao_hardfork_accounts {
 					let beneficiary = &ethash_params.dao_hardfork_beneficiary;
@@ -428,19 +428,13 @@ pub enum AuxiliaryRequest {
 	Both,
 }

-impl ::parity_machine::Machine for EthereumMachine {
-	type Header = Header;
-	type ExtendedHeader = ExtendedHeader;
-
-	type LiveBlock = ExecutedBlock;
+impl super::Machine for EthereumMachine {
 	type EngineClient = ::client::EngineClient;
-	type AuxiliaryRequest = AuxiliaryRequest;
-	type AncestryAction = ::types::ancestry_action::AncestryAction;

 	type Error = Error;

 	fn balance(&self, live: &ExecutedBlock, address: &Address) -> Result<U256, Error> {
-		live.state().balance(address).map_err(Into::into)
+		live.state.balance(address).map_err(Into::into)
 	}

 	fn add_balance(&self, live: &mut ExecutedBlock, address: &Address, amount: &U256) -> Result<(), Error> {
@@ -448,42 +442,6 @@ impl ::parity_machine::Machine for EthereumMachine {
 	}
 }

-impl<'a> ::parity_machine::LocalizedMachine<'a> for EthereumMachine {
-	type StateContext = Call<'a>;
-	type AuxiliaryData = AuxiliaryData<'a>;
-}
-
-/// A state machine that uses block rewards.
-pub trait WithRewards: ::parity_machine::Machine {
-	/// Note block rewards, traces each reward storing information about benefactor, amount and type
-	/// of reward.
-	fn note_rewards(
-		&self,
-		live: &mut Self::LiveBlock,
-		rewards: &[(Address, RewardType, U256)],
-	) -> Result<(), Self::Error>;
-}
-
-impl WithRewards for EthereumMachine {
-	fn note_rewards(
-		&self,
-		live: &mut Self::LiveBlock,
-		rewards: &[(Address, RewardType, U256)],
-	) -> Result<(), Self::Error> {
-		if let Tracing::Enabled(ref mut traces) = *live.traces_mut() {
-			let mut tracer = ExecutiveTracer::default();
-
-			for &(address, ref reward_type, amount) in rewards {
-				tracer.trace_reward(address, amount, reward_type.clone());
-			}
-
-			traces.push(tracer.drain().into());
-		}
-
-		Ok(())
-	}
-}
-
 // Try to round gas_limit a bit so that:
 // 1) it will still be in desired range
 // 2) it will be a nearest (with tendency to increase) multiple of PARITY_GAS_LIMIT_DETERMINANT
```
ethcore/src/machine/mod.rs (new file, 7 lines)

```diff
@@ -0,0 +1,7 @@
+//! Generalization of a state machine for a consensus engine.
+
+mod impls;
+mod traits;
+
+pub use self::impls::*;
+pub use self::traits::*;
```
ethcore/src/machine/traits.rs (new file, 37 lines)

```diff
@@ -0,0 +1,37 @@
+// Copyright 2015-2019 Parity Technologies (UK) Ltd.
+// This file is part of Parity Ethereum.
+
+// Parity Ethereum is free software: you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+
+// Parity Ethereum is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License
+// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.
+
+//! Generalization of a state machine for a consensus engine.
+//! This will define traits for the header, block, and state of a blockchain.
+
+use ethereum_types::{U256, Address};
+use block::ExecutedBlock;
+
+/// Generalization of types surrounding blockchain-suitable state machines.
+pub trait Machine: Send + Sync {
+	/// A handle to a blockchain client for this machine.
+	type EngineClient: ?Sized;
+
+	/// Errors which can occur when querying or interacting with the machine.
+	type Error;
+
+	/// Get the balance, in base units, associated with an account.
+	/// Extracts data from the live block.
+	fn balance(&self, live: &ExecutedBlock, address: &Address) -> Result<U256, Self::Error>;
+
+	/// Increment the balance of an account in the state of the live block.
+	fn add_balance(&self, live: &mut ExecutedBlock, address: &Address, amount: &U256) -> Result<(), Self::Error>;
+}
```
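The new `machine/traits.rs` file above slims `Machine` down to two associated items plus balance accessors over a concrete `ExecutedBlock`. A toy implementation showing the shape (stand-in types for `Address`, `U256`, and `ExecutedBlock`, not ethcore's; the `EngineClient` associated type is omitted to keep the sketch minimal):

```rust
use std::collections::HashMap;

type Address = u64; // stand-in for ethereum_types::Address
type U256 = u128;   // stand-in for ethereum_types::U256

/// Stand-in for `ExecutedBlock`: just an account -> balance map.
struct ExecutedBlock {
    balances: HashMap<Address, U256>,
}

/// Trimmed-down sketch of the `Machine` trait above.
trait Machine: Send + Sync {
    type Error;

    /// Read a balance out of the live block's state.
    fn balance(&self, live: &ExecutedBlock, address: &Address) -> Result<U256, Self::Error>;

    /// Credit an account in the live block's state.
    fn add_balance(&self, live: &mut ExecutedBlock, address: &Address, amount: &U256) -> Result<(), Self::Error>;
}

struct ToyMachine;

impl Machine for ToyMachine {
    type Error = String;

    fn balance(&self, live: &ExecutedBlock, address: &Address) -> Result<U256, Self::Error> {
        Ok(*live.balances.get(address).unwrap_or(&0))
    }

    fn add_balance(&self, live: &mut ExecutedBlock, address: &Address, amount: &U256) -> Result<(), Self::Error> {
        *live.balances.entry(*address).or_insert(0) += *amount;
        Ok(())
    }
}

fn main() {
    let m = ToyMachine;
    let mut block = ExecutedBlock { balances: HashMap::new() };
    m.add_balance(&mut block, &1, &500).unwrap();
    m.add_balance(&mut block, &1, &250).unwrap();
    assert_eq!(m.balance(&block, &1).unwrap(), 750);
    println!("ok");
}
```

This is the point of the refactor visible throughout the diff: engines that used to be generic over `M::LiveBlock`, `M::Header`, etc. now take `ExecutedBlock` and `Header` directly, so the trait no longer needs those associated types.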
```diff
@@ -25,6 +25,7 @@ use call_contract::CallContract;
 use ethcore_miner::gas_pricer::GasPricer;
 use ethcore_miner::local_accounts::LocalAccounts;
 use ethcore_miner::pool::{self, TransactionQueue, VerifiedTransaction, QueueStatus, PrioritizationStrategy};
+use ethcore_miner::service_transaction_checker::ServiceTransactionChecker;
 #[cfg(feature = "work-notify")]
 use ethcore_miner::work_notify::NotifyWork;
 use ethereum_types::{H256, U256, Address};
@@ -46,7 +47,7 @@ use types::header::Header;
 use types::receipt::RichReceipt;
 use using_queue::{UsingQueue, GetAction};

-use block::{ClosedBlock, IsBlock, SealedBlock};
+use block::{ClosedBlock, SealedBlock};
 use client::{
 	BlockChain, ChainInfo, BlockProducer, SealedBlockImporter, Nonce, TransactionInfo, TransactionId
 };
@@ -214,7 +215,6 @@ impl Author {
 	}
 }

-
 struct SealingWork {
 	queue: UsingQueue<ClosedBlock>,
 	enabled: bool,
@@ -247,6 +247,7 @@ pub struct Miner {
 	engine: Arc<EthEngine>,
 	accounts: Arc<LocalAccounts>,
 	io_channel: RwLock<Option<IoChannel<ClientIoMessage>>>,
+	service_transaction_checker: Option<ServiceTransactionChecker>,
 }

 impl Miner {
@@ -273,6 +274,7 @@ impl Miner {
 		let verifier_options = options.pool_verification_options.clone();
 		let tx_queue_strategy = options.tx_queue_strategy;
 		let nonce_cache_size = cmp::max(4096, limits.max_count / 4);
+		let refuse_service_transactions = options.refuse_service_transactions;

 		Miner {
 			sealing: Mutex::new(SealingWork {
@@ -293,6 +295,11 @@ impl Miner {
 			accounts: Arc::new(accounts),
 			engine: spec.engine.clone(),
 			io_channel: RwLock::new(None),
+			service_transaction_checker: if refuse_service_transactions {
+				None
+			} else {
+				Some(ServiceTransactionChecker::default())
+			},
 		}
 	}

@@ -351,6 +358,11 @@ impl Miner {
 		});
 	}

+	/// Returns ServiceTransactionChecker
+	pub fn service_transaction_checker(&self) -> Option<ServiceTransactionChecker> {
+		self.service_transaction_checker.clone()
+	}
+
 	/// Retrieves an existing pending block iff it's not older than given block number.
 	///
 	/// NOTE: This will not prepare a new pending block if it's not existing.
@@ -362,7 +374,7 @@ impl Miner {
 			.and_then(|b| {
 				// to prevent a data race between block import and updating pending block
 				// we allow the number to be equal.
-				if b.block().header().number() >= latest_block_number {
+				if b.header.number() >= latest_block_number {
 					Some(f(b))
 				} else {
 					None
@@ -378,7 +390,7 @@ impl Miner {
 			&self.nonce_cache,
 			&*self.engine,
 			&*self.accounts,
-			self.options.refuse_service_transactions,
+			self.service_transaction_checker.as_ref(),
 		)
 	}

@@ -392,7 +404,7 @@ impl Miner {
 		// Open block
 		let (mut open_block, original_work_hash) = {
 			let mut sealing = self.sealing.lock();
-			let last_work_hash = sealing.queue.peek_last_ref().map(|pb| pb.block().header().hash());
+			let last_work_hash = sealing.queue.peek_last_ref().map(|pb| pb.header.hash());
 			let best_hash = chain_info.best_block_hash;

 			// check to see if last ClosedBlock in would_seals is actually same parent block.
@@ -401,7 +413,7 @@ impl Miner {
 			// if at least one was pushed successfully, close and enqueue new ClosedBlock;
-			// otherwise, leave everything alone.
+			// otherwise, author a fresh block.
-			let mut open_block = match sealing.queue.get_pending_if(|b| b.block().header().parent_hash() == &best_hash) {
+			let mut open_block = match sealing.queue.get_pending_if(|b| b.header.parent_hash() == &best_hash) {
 				Some(old_block) => {
 					trace!(target: "miner", "prepare_block: Already have previous work; updating and returning");
 					// add transactions to old_block
@@ -436,7 +448,7 @@ impl Miner {
 		let mut invalid_transactions = HashSet::new();
 		let mut not_allowed_transactions = HashSet::new();
 		let mut senders_to_penalize = HashSet::new();
-		let block_number = open_block.block().header().number();
+		let block_number = open_block.header.number();

 		let mut tx_count = 0usize;
 		let mut skipped_transactions = 0usize;
@@ -453,7 +465,7 @@ impl Miner {
 		let max_transactions = if min_tx_gas.is_zero() {
 			usize::max_value()
 		} else {
-			MAX_SKIPPED_TRANSACTIONS.saturating_add(cmp::min(*open_block.block().header().gas_limit() / min_tx_gas, u64::max_value().into()).as_u64() as usize)
+			MAX_SKIPPED_TRANSACTIONS.saturating_add(cmp::min(*open_block.header.gas_limit() / min_tx_gas, u64::max_value().into()).as_u64() as usize)
 		};

 		let pending: Vec<Arc<_>> = self.transaction_queue.pending(
@@ -630,13 +642,16 @@ impl Miner {
 		}
 	}

-	/// Attempts to perform internal sealing (one that does not require work) and handles the result depending on the type of Seal.
+	// TODO: (https://github.com/paritytech/parity-ethereum/issues/10407)
+	// This is only used in authority_round path, and should be refactored to merge with the other seal() path.
+	// Attempts to perform internal sealing (one that does not require work) and handles the result depending on the
+	// type of Seal.
 	fn seal_and_import_block_internally<C>(&self, chain: &C, block: ClosedBlock) -> bool
 		where C: BlockChain + SealedBlockImporter,
 	{
 		{
 			let sealing = self.sealing.lock();
-			if block.transactions().is_empty()
+			if block.transactions.is_empty()
 				&& !self.forced_sealing()
 				&& Instant::now() <= sealing.next_mandatory_reseal
 			{
@@ -646,7 +661,7 @@ impl Miner {

 		trace!(target: "miner", "seal_block_internally: attempting internal seal.");

-		let parent_header = match chain.block_header(BlockId::Hash(*block.header().parent_hash())) {
+		let parent_header = match chain.block_header(BlockId::Hash(*block.header.parent_hash())) {
 			Some(h) => {
 				match h.decode() {
 					Ok(decoded_hdr) => decoded_hdr,
@@ -656,7 +671,7 @@ impl Miner {
 			None => return false,
 		};

-		match self.engine.generate_seal(block.block(), &parent_header) {
+		match self.engine.generate_seal(&block, &parent_header) {
 			// Save proposal for later seal submission and broadcast it.
 			Seal::Proposal(seal) => {
 				trace!(target: "miner", "Received a Proposal seal.");
@@ -705,11 +720,11 @@ impl Miner {
 	/// Prepares work which has to be done to seal.
 	fn prepare_work(&self, block: ClosedBlock, original_work_hash: Option<H256>) {
 		let (work, is_new) = {
-			let block_header = block.block().header().clone();
+			let block_header = block.header.clone();
 			let block_hash = block_header.hash();

 			let mut sealing = self.sealing.lock();
-			let last_work_hash = sealing.queue.peek_last_ref().map(|pb| pb.block().header().hash());
+			let last_work_hash = sealing.queue.peek_last_ref().map(|pb| pb.header.hash());

 			trace!(
 				target: "miner",
@@ -742,7 +757,7 @@ impl Miner {
 			trace!(
 				target: "miner",
 				"prepare_work: leaving (last={:?})",
-				sealing.queue.peek_last_ref().map(|b| b.block().header().hash())
+				sealing.queue.peek_last_ref().map(|b| b.header.hash())
 			);
 			(work, is_new)
 		};
@@ -994,7 +1009,7 @@ impl miner::MinerService for Miner {

 		let from_pending = || {
 			self.map_existing_pending_block(|sealing| {
-				sealing.transactions()
+				sealing.transactions
 					.iter()
 					.map(|signed| signed.hash())
 					.collect()
@@ -1041,7 +1056,7 @@ impl miner::MinerService for Miner {

 		let from_pending = || {
 			self.map_existing_pending_block(|sealing| {
-				sealing.transactions()
+				sealing.transactions
 					.iter()
 					.map(|signed| pool::VerifiedTransaction::from_pending_block_transaction(signed.clone()))
 					.map(Arc::new)
@@ -1086,9 +1101,9 @@ impl miner::MinerService for Miner {

 	fn pending_receipts(&self, best_block: BlockNumber) -> Option<Vec<RichReceipt>> {
 		self.map_existing_pending_block(|pending| {
-			let receipts = pending.receipts();
-			pending.transactions()
-				.into_iter()
+			let receipts = &pending.receipts;
+			pending.transactions
+				.iter()
 				.enumerate()
 				.map(|(index, tx)| {
 					let prev_gas = if index == 0 { Default::default() } else { receipts[index - 1].gas_used };
@@ -1102,7 +1117,7 @@ impl miner::MinerService for Miner {
 						Action::Call(_) => None,
 						Action::Create => {
 							let sender = tx.sender();
-							Some(contract_address(self.engine.create_address_scheme(pending.header().number()), &sender, &tx.nonce, &tx.data).0)
+							Some(contract_address(self.engine.create_address_scheme(pending.header.number()), &sender, &tx.nonce, &tx.data).0)
 						}
 					},
 					logs: receipt.logs.clone(),
@@ -1139,10 +1154,10 @@ impl miner::MinerService for Miner {
```
// refuse to seal the first block of the chain if it contains hard forks
|
||||
// which should be on by default.
|
||||
if block.block().header().number() == 1 {
|
||||
if block.header.number() == 1 {
|
||||
if let Some(name) = self.engine.params().nonzero_bugfix_hard_fork() {
|
||||
warn!("Your chain specification contains one or more hard forks which are required to be \
|
||||
on by default. Please remove these forks and start your chain again: {}.", name);
|
||||
on by default. Please remove these forks and start your chain again: {}.", name);
|
||||
return;
|
||||
}
|
||||
}
|
||||
@@ -1180,7 +1195,7 @@ impl miner::MinerService for Miner {
|
||||
self.prepare_pending_block(chain);
|
||||
|
||||
self.sealing.lock().queue.use_last_ref().map(|b| {
|
||||
let header = b.header();
|
||||
let header = &b.header;
|
||||
(header.hash(), header.number(), header.timestamp(), *header.difficulty())
|
||||
})
|
||||
}
|
||||
@@ -1194,9 +1209,9 @@ impl miner::MinerService for Miner {
|
||||
} else {
|
||||
GetAction::Take
|
||||
},
|
||||
|b| &b.hash() == &block_hash
|
||||
|b| &b.header.bare_hash() == &block_hash
|
||||
) {
|
||||
trace!(target: "miner", "Submitted block {}={}={} with seal {:?}", block_hash, b.hash(), b.header().bare_hash(), seal);
|
||||
trace!(target: "miner", "Submitted block {}={} with seal {:?}", block_hash, b.header.bare_hash(), seal);
|
||||
b.lock().try_seal(&*self.engine, seal).or_else(|e| {
|
||||
warn!(target: "miner", "Mined solution rejected: {}", e);
|
||||
Err(ErrorKind::PowInvalid.into())
|
||||
@@ -1207,8 +1222,8 @@ impl miner::MinerService for Miner {
|
||||
};
|
||||
|
||||
result.and_then(|sealed| {
|
||||
let n = sealed.header().number();
|
||||
let h = sealed.header().hash();
|
||||
let n = sealed.header.number();
|
||||
let h = sealed.header.hash();
|
||||
info!(target: "miner", "Submitted block imported OK. #{}: {}", Colour::White.bold().paint(format!("{}", n)), Colour::White.bold().paint(format!("{:x}", h)));
|
||||
Ok(sealed)
|
||||
})
|
||||
@@ -1281,7 +1296,7 @@ impl miner::MinerService for Miner {
|
||||
let nonce_cache = self.nonce_cache.clone();
|
||||
let engine = self.engine.clone();
|
||||
let accounts = self.accounts.clone();
|
||||
let refuse_service_transactions = self.options.refuse_service_transactions;
|
||||
let service_transaction_checker = self.service_transaction_checker.clone();
|
||||
|
||||
let cull = move |chain: &::client::Client| {
|
||||
let client = PoolClient::new(
|
||||
@@ -1289,7 +1304,7 @@ impl miner::MinerService for Miner {
|
||||
&nonce_cache,
|
||||
&*engine,
|
||||
&*accounts,
|
||||
refuse_service_transactions,
|
||||
service_transaction_checker.as_ref(),
|
||||
);
|
||||
queue.cull(client);
|
||||
};
|
||||
@@ -1301,22 +1316,39 @@ impl miner::MinerService for Miner {
|
||||
self.transaction_queue.cull(client);
|
||||
}
|
||||
}
|
||||
if let Some(ref service_transaction_checker) = self.service_transaction_checker {
|
||||
match service_transaction_checker.refresh_cache(chain) {
|
||||
Ok(true) => {
|
||||
trace!(target: "client", "Service transaction cache was refreshed successfully");
|
||||
},
|
||||
Ok(false) => {
|
||||
trace!(target: "client", "Registrar or/and service transactions contract does not exist");
|
||||
},
|
||||
Err(e) => error!(target: "client", "Error occurred while refreshing service transaction cache: {}", e)
|
||||
};
|
||||
};
|
||||
}
|
||||
|
||||
fn pending_state(&self, latest_block_number: BlockNumber) -> Option<Self::State> {
|
||||
self.map_existing_pending_block(|b| b.state().clone(), latest_block_number)
|
||||
self.map_existing_pending_block(|b| b.state.clone(), latest_block_number)
|
||||
}
|
||||
|
||||
fn pending_block_header(&self, latest_block_number: BlockNumber) -> Option<Header> {
|
||||
self.map_existing_pending_block(|b| b.header().clone(), latest_block_number)
|
||||
self.map_existing_pending_block(|b| b.header.clone(), latest_block_number)
|
||||
}
|
||||
|
||||
fn pending_block(&self, latest_block_number: BlockNumber) -> Option<Block> {
|
||||
self.map_existing_pending_block(|b| b.to_base(), latest_block_number)
|
||||
self.map_existing_pending_block(|b| {
|
||||
Block {
|
||||
header: b.header.clone(),
|
||||
transactions: b.transactions.iter().cloned().map(Into::into).collect(),
|
||||
uncles: b.uncles.to_vec(),
|
||||
}
|
||||
}, latest_block_number)
|
||||
}
|
||||
|
||||
fn pending_transactions(&self, latest_block_number: BlockNumber) -> Option<Vec<SignedTransaction>> {
|
||||
self.map_existing_pending_block(|b| b.transactions().into_iter().cloned().collect(), latest_block_number)
|
||||
self.map_existing_pending_block(|b| b.transactions.iter().cloned().collect(), latest_block_number)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -75,7 +75,7 @@ pub struct PoolClient<'a, C: 'a> {
 	engine: &'a EthEngine,
 	accounts: &'a LocalAccounts,
 	best_block_header: Header,
-	service_transaction_checker: Option<ServiceTransactionChecker>,
+	service_transaction_checker: Option<&'a ServiceTransactionChecker>,
 }

 impl<'a, C: 'a> Clone for PoolClient<'a, C> {
@@ -100,7 +100,7 @@ impl<'a, C: 'a> PoolClient<'a, C> where
 		cache: &'a NonceCache,
 		engine: &'a EthEngine,
 		accounts: &'a LocalAccounts,
-		refuse_service_transactions: bool,
+		service_transaction_checker: Option<&'a ServiceTransactionChecker>,
 	) -> Self {
 		let best_block_header = chain.best_block_header();
 		PoolClient {
@@ -109,11 +109,7 @@ impl<'a, C: 'a> PoolClient<'a, C> where
 			engine,
 			accounts,
 			best_block_header,
-			service_transaction_checker: if refuse_service_transactions {
-				None
-			} else {
-				Some(Default::default())
-			},
+			service_transaction_checker,
 		}
 	}

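The PoolClient hunks above replace a `refuse_service_transactions: bool` constructor flag (from which each client built its own checker) with an `Option<&ServiceTransactionChecker>` borrowed from the caller, so one shared checker and its cache are reused across clients. A minimal sketch of the pattern, with hypothetical stand-in types (`Checker`, `Client` are illustrative, not the real Parity types):

```rust
// Hypothetical stand-in for ServiceTransactionChecker: holds state worth sharing.
struct Checker { cached: u32 }

impl Checker {
    fn check(&self, value: u32) -> bool { value == self.cached }
}

// The caller passes an optional borrow; `None` means "refuse service transactions".
struct Client<'a> {
    checker: Option<&'a Checker>,
}

impl<'a> Client<'a> {
    fn new(checker: Option<&'a Checker>) -> Self {
        Client { checker }
    }

    fn accepts(&self, value: u32) -> bool {
        // With no checker configured, service transactions are refused outright.
        self.checker.map_or(false, |c| c.check(value))
    }
}

fn main() {
    let shared = Checker { cached: 7 };
    let with_checker = Client::new(Some(&shared));
    let refusing = Client::new(None);
    assert!(with_checker.accepts(7));
    assert!(!refusing.accepts(7));
}
```

Compared with the bool flag, the `Option<&T>` parameter makes the "shared vs. absent" distinction explicit in the signature and avoids constructing a fresh checker per client.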
@@ -24,9 +24,10 @@ use ethtrie::{TrieDB, TrieDBMut};
 use hash::{KECCAK_EMPTY, KECCAK_NULL_RLP};
 use hash_db::HashDB;
 use rlp::{RlpStream, Rlp};
-use snapshot::Error;
+use snapshot::{Error, Progress};
 use std::collections::HashSet;
 use trie::{Trie, TrieMut};
+use std::sync::atomic::Ordering;

 // An empty account -- these were replaced with RLP null data for a space optimization in v1.
 const ACC_EMPTY: BasicAccount = BasicAccount {
@@ -65,8 +66,16 @@ impl CodeState {
 // walk the account's storage trie, returning a vector of RLP items containing the
 // account address hash, account properties and the storage. Each item contains at most `max_storage_items`
 // storage records split according to snapshot format definition.
-pub fn to_fat_rlps(account_hash: &H256, acc: &BasicAccount, acct_db: &AccountDB, used_code: &mut HashSet<H256>, first_chunk_size: usize, max_chunk_size: usize) -> Result<Vec<Bytes>, Error> {
-	let db = &(acct_db as &HashDB<_,_>);
+pub fn to_fat_rlps(
+	account_hash: &H256,
+	acc: &BasicAccount,
+	acct_db: &AccountDB,
+	used_code: &mut HashSet<H256>,
+	first_chunk_size: usize,
+	max_chunk_size: usize,
+	p: &Progress,
+) -> Result<Vec<Bytes>, Error> {
+	let db = &(acct_db as &dyn HashDB<_,_>);
 	let db = TrieDB::new(db, &acc.storage_root)?;
 	let mut chunks = Vec::new();
 	let mut db_iter = db.iter()?;
@@ -112,6 +121,10 @@ pub fn to_fat_rlps(
 	}

 	loop {
+		if p.abort.load(Ordering::SeqCst) {
+			trace!(target: "snapshot", "to_fat_rlps: aborting snapshot");
+			return Err(Error::SnapshotAborted);
+		}
 		match db_iter.next() {
 			Some(Ok((k, v))) => {
 				let pair = {
@@ -211,6 +224,7 @@ mod tests {
 	use types::basic_account::BasicAccount;
 	use test_helpers::get_temp_state_db;
 	use snapshot::tests::helpers::fill_storage;
+	use snapshot::Progress;

 	use hash::{KECCAK_EMPTY, KECCAK_NULL_RLP, keccak};
 	use ethereum_types::{H256, Address};
@@ -236,8 +250,8 @@ mod tests {

 	let thin_rlp = ::rlp::encode(&account);
 	assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp).unwrap(), account);

-	let fat_rlps = to_fat_rlps(&keccak(&addr), &account, &AccountDB::new(db.as_hash_db(), &addr), &mut Default::default(), usize::max_value(), usize::max_value()).unwrap();
+	let p = Progress::default();
+	let fat_rlps = to_fat_rlps(&keccak(&addr), &account, &AccountDB::new(db.as_hash_db(), &addr), &mut Default::default(), usize::max_value(), usize::max_value(), &p).unwrap();
 	let fat_rlp = Rlp::new(&fat_rlps[0]).at(1).unwrap();
 	assert_eq!(from_fat_rlp(&mut AccountDBMut::new(db.as_hash_db_mut(), &addr), fat_rlp, H256::zero()).unwrap().0, account);
 }
@@ -262,7 +276,9 @@ mod tests {
 	let thin_rlp = ::rlp::encode(&account);
 	assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp).unwrap(), account);

-	let fat_rlp = to_fat_rlps(&keccak(&addr), &account, &AccountDB::new(db.as_hash_db(), &addr), &mut Default::default(), usize::max_value(), usize::max_value()).unwrap();
+	let p = Progress::default();
+
+	let fat_rlp = to_fat_rlps(&keccak(&addr), &account, &AccountDB::new(db.as_hash_db(), &addr), &mut Default::default(), usize::max_value(), usize::max_value(), &p).unwrap();
 	let fat_rlp = Rlp::new(&fat_rlp[0]).at(1).unwrap();
 	assert_eq!(from_fat_rlp(&mut AccountDBMut::new(db.as_hash_db_mut(), &addr), fat_rlp, H256::zero()).unwrap().0, account);
 }
@@ -287,7 +303,8 @@ mod tests {
 	let thin_rlp = ::rlp::encode(&account);
 	assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp).unwrap(), account);

-	let fat_rlps = to_fat_rlps(&keccak(addr), &account, &AccountDB::new(db.as_hash_db(), &addr), &mut Default::default(), 500, 1000).unwrap();
+	let p = Progress::default();
+	let fat_rlps = to_fat_rlps(&keccak(addr), &account, &AccountDB::new(db.as_hash_db(), &addr), &mut Default::default(), 500, 1000, &p).unwrap();
 	let mut root = KECCAK_NULL_RLP;
 	let mut restored_account = None;
 	for rlp in fat_rlps {
@@ -319,20 +336,21 @@ mod tests {
 		nonce: 50.into(),
 		balance: 123456789.into(),
 		storage_root: KECCAK_NULL_RLP,
-		code_hash: code_hash,
+		code_hash,
 	};

 	let account2 = BasicAccount {
 		nonce: 400.into(),
 		balance: 98765432123456789usize.into(),
 		storage_root: KECCAK_NULL_RLP,
-		code_hash: code_hash,
+		code_hash,
 	};

 	let mut used_code = HashSet::new();

-	let fat_rlp1 = to_fat_rlps(&keccak(&addr1), &account1, &AccountDB::new(db.as_hash_db(), &addr1), &mut used_code, usize::max_value(), usize::max_value()).unwrap();
-	let fat_rlp2 = to_fat_rlps(&keccak(&addr2), &account2, &AccountDB::new(db.as_hash_db(), &addr2), &mut used_code, usize::max_value(), usize::max_value()).unwrap();
+	let p1 = Progress::default();
+	let p2 = Progress::default();
+	let fat_rlp1 = to_fat_rlps(&keccak(&addr1), &account1, &AccountDB::new(db.as_hash_db(), &addr1), &mut used_code, usize::max_value(), usize::max_value(), &p1).unwrap();
+	let fat_rlp2 = to_fat_rlps(&keccak(&addr2), &account2, &AccountDB::new(db.as_hash_db(), &addr2), &mut used_code, usize::max_value(), usize::max_value(), &p2).unwrap();
 	assert_eq!(used_code.len(), 1);

 	let fat_rlp1 = Rlp::new(&fat_rlp1[0]).at(1).unwrap();
@@ -350,6 +368,6 @@ mod tests {
 	#[test]
 	fn encoding_empty_acc() {
 		let mut db = get_temp_state_db();
-		assert_eq!(from_fat_rlp(&mut AccountDBMut::new(db.as_hash_db_mut(), &Address::default()), Rlp::new(&::rlp::NULL_RLP), H256::zero()).unwrap(), (ACC_EMPTY, None));
+		assert_eq!(from_fat_rlp(&mut AccountDBMut::new(db.as_hash_db_mut(), &Address::zero()), Rlp::new(&::rlp::NULL_RLP), H256::zero()).unwrap(), (ACC_EMPTY, None));
 	}
 }

@@ -76,7 +76,7 @@ impl SnapshotComponents for PoaSnapshot {
 		}

 		let header = chain.block_header_data(&transition.block_hash)
-			.ok_or(Error::BlockNotFound(transition.block_hash))?;
+			.ok_or_else(|| Error::BlockNotFound(transition.block_hash))?;

 		let entry = {
 			let mut entry_stream = RlpStream::new_list(2);
@@ -101,12 +101,12 @@ impl SnapshotComponents for PoaSnapshot {

 		let (block, receipts) = chain.block(&block_at)
 			.and_then(|b| chain.block_receipts(&block_at).map(|r| (b, r)))
-			.ok_or(Error::BlockNotFound(block_at))?;
+			.ok_or_else(|| Error::BlockNotFound(block_at))?;
 		let block = block.decode()?;

 		let parent_td = chain.block_details(block.header.parent_hash())
 			.map(|d| d.total_difficulty)
-			.ok_or(Error::BlockNotFound(block_at))?;
+			.ok_or_else(|| Error::BlockNotFound(block_at))?;

 		rlps.push({
 			let mut stream = RlpStream::new_list(5);

@@ -116,7 +116,7 @@ impl<'a> PowWorker<'a> {

 		let (block, receipts) = self.chain.block(&self.current_hash)
 			.and_then(|b| self.chain.block_receipts(&self.current_hash).map(|r| (b, r)))
-			.ok_or(Error::BlockNotFound(self.current_hash))?;
+			.ok_or_else(|| Error::BlockNotFound(self.current_hash))?;

 		let abridged_rlp = AbridgedBlock::from_block_view(&block.view()).into_inner();

@@ -160,7 +160,7 @@ impl<'a> PowWorker<'a> {

 		let (last_header, last_details) = self.chain.block_header_data(&last)
 			.and_then(|n| self.chain.block_details(&last).map(|d| (n, d)))
-			.ok_or(Error::BlockNotFound(last))?;
+			.ok_or_else(|| Error::BlockNotFound(last))?;

 		let parent_number = last_header.number() - 1;
 		let parent_hash = last_header.parent_hash();

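The hunks above systematically swap `ok_or(...)` for `ok_or_else(|| ...)`. The difference is evaluation order: `ok_or` builds its error argument eagerly, even when the `Option` is `Some` and the error is discarded, while `ok_or_else` only runs its closure in the `None` case. A small self-contained sketch (counter names are illustrative):

```rust
use std::cell::Cell;

// Probes a `Some` value with both combinators and returns how many times
// the error value was constructed by each: (eager_count, lazy_count).
fn probe_some() -> (u32, u32) {
    let eager = Cell::new(0);
    let lazy = Cell::new(0);

    let present: Option<u32> = Some(5);

    // `ok_or` evaluates its argument before it knows whether it is needed.
    let _ = present.ok_or({ eager.set(eager.get() + 1); "not found" });

    // `ok_or_else` only invokes the closure when the Option is `None`.
    let _ = present.ok_or_else(|| { lazy.set(lazy.get() + 1); "not found" });

    (eager.get(), lazy.get())
}

fn main() {
    // On a `Some`, the eager form still built the error once; the lazy form never did.
    assert_eq!(probe_some(), (1, 0));
}
```

For errors that carry data (here, a `BlockNotFound` wrapping a hash), the lazy form avoids constructing the error value on every successful lookup, which is why clippy's `or_fun_call` lint recommends it.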
@@ -61,6 +61,8 @@ pub enum Error {
 	ChunkTooLarge,
 	/// Snapshots not supported by the consensus engine.
 	SnapshotsUnsupported,
+	/// Aborted snapshot
+	SnapshotAborted,
 	/// Bad epoch transition.
 	BadEpochProof(u64),
 	/// Wrong chunk format.
@@ -91,6 +93,7 @@ impl fmt::Display for Error {
 			Error::ChunkTooSmall => write!(f, "Chunk size is too small."),
 			Error::ChunkTooLarge => write!(f, "Chunk size is too large."),
 			Error::SnapshotsUnsupported => write!(f, "Snapshots unsupported by consensus engine."),
+			Error::SnapshotAborted => write!(f, "Snapshot was aborted."),
 			Error::BadEpochProof(i) => write!(f, "Bad epoch proof for transition to epoch {}", i),
 			Error::WrongChunkFormat(ref msg) => write!(f, "Wrong chunk format: {}", msg),
 			Error::UnlinkedAncientBlockChain => write!(f, "Unlinked ancient blocks chain"),

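The error hunk above adds a `SnapshotAborted` variant plus its `Display` arm, so an aborted snapshot surfaces as an ordinary `Err` through the chunking call chain. A trimmed-down sketch of the same pattern (only two variants kept, for illustration):

```rust
use std::fmt;

// Minimal stand-in for the snapshot error enum; real type has many more variants.
#[derive(Debug)]
enum Error {
    ChunkTooLarge,
    /// Aborted snapshot
    SnapshotAborted,
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match *self {
            Error::ChunkTooLarge => write!(f, "Chunk size is too large."),
            Error::SnapshotAborted => write!(f, "Snapshot was aborted."),
        }
    }
}

// Workers can then bail out with `return Err(Error::SnapshotAborted)`, exactly
// as the new check at the top of the `to_fat_rlps` loop does.
fn main() {
    assert_eq!(Error::SnapshotAborted.to_string(), "Snapshot was aborted.");
    assert_eq!(Error::ChunkTooLarge.to_string(), "Chunk size is too large.");
}
```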
@@ -310,10 +310,7 @@ impl LooseReader {

 		dir.pop();

-		Ok(LooseReader {
-			dir: dir,
-			manifest: manifest,
-		})
+		Ok(LooseReader { dir, manifest })
 	}
 }

@@ -22,7 +22,7 @@
 use std::collections::{HashMap, HashSet};
 use std::cmp;
 use std::sync::Arc;
-use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
+use std::sync::atomic::{AtomicBool, AtomicU64, AtomicUsize, Ordering};
 use hash::{keccak, KECCAK_NULL_RLP, KECCAK_EMPTY};

 use account_db::{AccountDB, AccountDBMut};
@@ -107,7 +107,7 @@ impl Default for SnapshotConfiguration {
 	fn default() -> Self {
 		SnapshotConfiguration {
 			no_periodic: false,
-			processing_threads: ::std::cmp::max(1, num_cpus::get() / 2),
+			processing_threads: ::std::cmp::max(1, num_cpus::get_physical() / 2),
 		}
 	}
 }
@@ -117,8 +117,9 @@ impl Default for SnapshotConfiguration {
 pub struct Progress {
 	accounts: AtomicUsize,
 	blocks: AtomicUsize,
-	size: AtomicUsize, // Todo [rob] use Atomicu64 when it stabilizes.
+	size: AtomicU64,
 	done: AtomicBool,
+	abort: AtomicBool,
 }

 impl Progress {
@@ -127,6 +128,7 @@ impl Progress {
 		self.accounts.store(0, Ordering::Release);
 		self.blocks.store(0, Ordering::Release);
 		self.size.store(0, Ordering::Release);
+		self.abort.store(false, Ordering::Release);

 		// atomic fence here to ensure the others are written first?
 		// logs might very rarely get polluted if not.
@@ -140,7 +142,7 @@ impl Progress {
 	pub fn blocks(&self) -> usize { self.blocks.load(Ordering::Acquire) }

 	/// Get the written size of the snapshot in bytes.
-	pub fn size(&self) -> usize { self.size.load(Ordering::Acquire) }
+	pub fn size(&self) -> u64 { self.size.load(Ordering::Acquire) }

 	/// Whether the snapshot is complete.
 	pub fn done(&self) -> bool { self.done.load(Ordering::Acquire) }
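The `Progress` changes above widen `size` to `AtomicU64` (the old `AtomicUsize` carried a TODO waiting for `AtomicU64` to stabilize) and add an `abort: AtomicBool` that the snapshot workers poll so a long-running snapshot can be cancelled cooperatively. A simplified sketch of that pattern, assuming a much-reduced `Progress` and a hypothetical `chunk` worker:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};

// Simplified Progress: shared between snapshot workers and the service.
#[derive(Default)]
struct Progress {
    size: AtomicU64,
    abort: AtomicBool,
}

impl Progress {
    fn size(&self) -> u64 { self.size.load(Ordering::Acquire) }
}

// A worker processes items, checking the abort flag at each step,
// mirroring the check added at the top of the chunking loop.
fn chunk(progress: &Progress, items: &[u64]) -> Result<(), &'static str> {
    for item in items {
        if progress.abort.load(Ordering::SeqCst) {
            return Err("snapshot aborted");
        }
        progress.size.fetch_add(*item, Ordering::SeqCst);
    }
    Ok(())
}

fn main() {
    let p = Progress::default();
    assert!(chunk(&p, &[10, 20]).is_ok());
    assert_eq!(p.size(), 30);

    // Setting the flag (as Service::abort_snapshot does) stops further work.
    p.abort.store(true, Ordering::SeqCst);
    assert_eq!(chunk(&p, &[1]), Err("snapshot aborted"));
    assert_eq!(p.size(), 30);
}
```

Because the flag is only ever polled between chunks, aborting is cheap and never interrupts a write mid-chunk; the worker surfaces the cancellation as an ordinary error.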
@@ -148,27 +150,28 @@
 }
 /// Take a snapshot using the given blockchain, starting block hash, and database, writing into the given writer.
 pub fn take_snapshot<W: SnapshotWriter + Send>(
-	engine: &EthEngine,
+	chunker: Box<dyn SnapshotComponents>,
 	chain: &BlockChain,
-	block_at: H256,
-	state_db: &HashDB<KeccakHasher, DBValue>,
+	block_hash: H256,
+	state_db: &dyn HashDB<KeccakHasher, DBValue>,
 	writer: W,
 	p: &Progress,
+	processing_threads: usize,
 ) -> Result<(), Error> {
-	let start_header = chain.block_header_data(&block_at)
-		.ok_or(Error::InvalidStartingBlock(BlockId::Hash(block_at)))?;
+	let start_header = chain.block_header_data(&block_hash)
+		.ok_or_else(|| Error::InvalidStartingBlock(BlockId::Hash(block_hash)))?;
 	let state_root = start_header.state_root();
-	let number = start_header.number();
+	let block_number = start_header.number();

-	info!("Taking snapshot starting at block {}", number);
+	info!("Taking snapshot starting at block {}", block_number);

-	let version = chunker.current_version();
 	let writer = Mutex::new(writer);
-	let chunker = engine.snapshot_components().ok_or(Error::SnapshotsUnsupported)?;
+	let snapshot_version = chunker.current_version();
 	let (state_hashes, block_hashes) = scope(|scope| -> Result<(Vec<H256>, Vec<H256>), Error> {
 		let writer = &writer;
-		let block_guard = scope.spawn(move || chunk_secondary(chunker, chain, block_at, writer, p));
+		let block_guard = scope.spawn(move || {
+			chunk_secondary(chunker, chain, block_hash, writer, p)
+		});

 		// The number of threads must be between 1 and SNAPSHOT_SUBPARTS
 		assert!(processing_threads >= 1, "Cannot use less than 1 threads for creating snapshots");
@@ -183,7 +186,7 @@ pub fn take_snapshot<W: SnapshotWriter + Send>(

 			for part in (thread_idx..SNAPSHOT_SUBPARTS).step_by(num_threads) {
 				debug!(target: "snapshot", "Chunking part {} in thread {}", part, thread_idx);
-				let mut hashes = chunk_state(state_db, &state_root, writer, p, Some(part))?;
+				let mut hashes = chunk_state(state_db, &state_root, writer, p, Some(part), thread_idx)?;
 				chunk_hashes.append(&mut hashes);
 			}

@@ -207,12 +210,12 @@ pub fn take_snapshot<W: SnapshotWriter + Send>(
 	info!(target: "snapshot", "produced {} state chunks and {} block chunks.", state_hashes.len(), block_hashes.len());

 	let manifest_data = ManifestData {
-		version: snapshot_version,
-		state_hashes: state_hashes,
-		block_hashes: block_hashes,
-		state_root: state_root,
-		block_number: number,
-		block_hash: block_at,
+		version,
+		state_hashes,
+		block_hashes,
+		state_root,
+		block_number,
+		block_hash,
 	};

 	writer.into_inner().finish(manifest_data)?;
@@ -228,7 +231,13 @@ pub fn take_snapshot<W: SnapshotWriter + Send>(
 /// Secondary chunks are engine-specific, but they intend to corroborate the state data
 /// in the state chunks.
 /// Returns a list of chunk hashes, with the first having the blocks furthest from the genesis.
-pub fn chunk_secondary<'a>(mut chunker: Box<SnapshotComponents>, chain: &'a BlockChain, start_hash: H256, writer: &Mutex<SnapshotWriter + 'a>, progress: &'a Progress) -> Result<Vec<H256>, Error> {
+pub fn chunk_secondary<'a>(
+	mut chunker: Box<dyn SnapshotComponents>,
+	chain: &'a BlockChain,
+	start_hash: H256,
+	writer: &Mutex<dyn SnapshotWriter + 'a>,
+	progress: &'a Progress
+) -> Result<Vec<H256>, Error> {
 	let mut chunk_hashes = Vec::new();
 	let mut snappy_buffer = vec![0; snappy::max_compressed_len(PREFERRED_CHUNK_SIZE)];

@@ -243,7 +252,7 @@ pub fn chunk_secondary<'a>(
 		trace!(target: "snapshot", "wrote secondary chunk. hash: {:x}, size: {}, uncompressed size: {}",
 			hash, size, raw_data.len());

-		progress.size.fetch_add(size, Ordering::SeqCst);
+		progress.size.fetch_add(size as u64, Ordering::SeqCst);
 		chunk_hashes.push(hash);
 		Ok(())
 	};
@@ -266,8 +275,9 @@ struct StateChunker<'a> {
 	rlps: Vec<Bytes>,
 	cur_size: usize,
 	snappy_buffer: Vec<u8>,
-	writer: &'a Mutex<SnapshotWriter + 'a>,
+	writer: &'a Mutex<dyn SnapshotWriter + 'a>,
 	progress: &'a Progress,
+	thread_idx: usize,
 }

 impl<'a> StateChunker<'a> {
@@ -297,10 +307,10 @@ impl<'a> StateChunker<'a> {
 		let hash = keccak(&compressed);

 		self.writer.lock().write_state_chunk(hash, compressed)?;
-		trace!(target: "snapshot", "wrote state chunk. size: {}, uncompressed size: {}", compressed_size, raw_data.len());
+		trace!(target: "snapshot", "Thread {} wrote state chunk. size: {}, uncompressed size: {}", self.thread_idx, compressed_size, raw_data.len());

 		self.progress.accounts.fetch_add(num_entries, Ordering::SeqCst);
-		self.progress.size.fetch_add(compressed_size, Ordering::SeqCst);
+		self.progress.size.fetch_add(compressed_size as u64, Ordering::SeqCst);

 		self.hashes.push(hash);
 		self.cur_size = 0;
@@ -321,7 +331,14 @@ impl<'a> StateChunker<'a> {
 ///
 /// Returns a list of hashes of chunks created, or any error it may
 /// have encountered.
-pub fn chunk_state<'a>(db: &HashDB<KeccakHasher, DBValue>, root: &H256, writer: &Mutex<SnapshotWriter + 'a>, progress: &'a Progress, part: Option<usize>) -> Result<Vec<H256>, Error> {
+pub fn chunk_state<'a>(
+	db: &dyn HashDB<KeccakHasher, DBValue>,
+	root: &H256,
+	writer: &Mutex<dyn SnapshotWriter + 'a>,
+	progress: &'a Progress,
+	part: Option<usize>,
+	thread_idx: usize,
+) -> Result<Vec<H256>, Error> {
 	let account_trie = TrieDB::new(&db, &root)?;

 	let mut chunker = StateChunker {
@@ -329,8 +346,9 @@ pub fn chunk_state<'a>(
 		rlps: Vec::new(),
 		cur_size: 0,
 		snappy_buffer: vec![0; snappy::max_compressed_len(PREFERRED_CHUNK_SIZE)],
-		writer: writer,
-		progress: progress,
+		writer,
+		progress,
+		thread_idx,
 	};

 	let mut used_code = HashSet::new();
@@ -365,7 +383,7 @@ pub fn chunk_state<'a>(
 		let account = ::rlp::decode(&*account_data)?;
 		let account_db = AccountDB::from_hash(db, account_key_hash);

-		let fat_rlps = account::to_fat_rlps(&account_key_hash, &account, &account_db, &mut used_code, PREFERRED_CHUNK_SIZE - chunker.chunk_size(), PREFERRED_CHUNK_SIZE)?;
+		let fat_rlps = account::to_fat_rlps(&account_key_hash, &account, &account_db, &mut used_code, PREFERRED_CHUNK_SIZE - chunker.chunk_size(), PREFERRED_CHUNK_SIZE, progress)?;
 		for (i, fat_rlp) in fat_rlps.into_iter().enumerate() {
 			if i > 0 {
 				chunker.write_chunk()?;
@@ -383,7 +401,7 @@ pub fn chunk_state<'a>(

 /// Used to rebuild the state trie piece by piece.
 pub struct StateRebuilder {
-	db: Box<JournalDB>,
+	db: Box<dyn JournalDB>,
 	state_root: H256,
 	known_code: HashMap<H256, H256>, // code hashes mapped to first account with this code.
 	missing_code: HashMap<H256, Vec<H256>>, // maps code hashes to lists of accounts missing that code.
@@ -393,7 +411,7 @@ pub struct StateRebuilder {

 impl StateRebuilder {
 	/// Create a new state rebuilder to write into the given backing DB.
-	pub fn new(db: Arc<KeyValueDB>, pruning: Algorithm) -> Self {
+	pub fn new(db: Arc<dyn KeyValueDB>, pruning: Algorithm) -> Self {
 		StateRebuilder {
 			db: journaldb::new(db.clone(), pruning, ::db::COL_STATE),
 			state_root: KECCAK_NULL_RLP,
@@ -411,7 +429,7 @@ impl StateRebuilder {
 		let mut pairs = Vec::with_capacity(rlp.item_count()?);

 		// initialize the pairs vector with empty values so we have slots to write into.
-		pairs.resize(rlp.item_count()?, (H256::new(), Vec::new()));
+		pairs.resize(rlp.item_count()?, (H256::zero(), Vec::new()));

 		let status = rebuild_accounts(
 			self.db.as_hash_db_mut(),
@@ -468,7 +486,7 @@ impl StateRebuilder {
 	/// Finalize the restoration. Check for accounts missing code and make a dummy
 	/// journal entry.
 	/// Once all chunks have been fed, there should be nothing missing.
-	pub fn finalize(mut self, era: u64, id: H256) -> Result<Box<JournalDB>, ::error::Error> {
+	pub fn finalize(mut self, era: u64, id: H256) -> Result<Box<dyn JournalDB>, ::error::Error> {
 		let missing = self.missing_code.keys().cloned().collect::<Vec<_>>();
 		if !missing.is_empty() { return Err(Error::MissingCode(missing).into()) }

@@ -493,7 +511,7 @@ struct RebuiltStatus {
 // rebuild a set of accounts and their storage.
 // returns a status detailing newly-loaded code and accounts missing code.
 fn rebuild_accounts(
-	db: &mut HashDB<KeccakHasher, DBValue>,
+	db: &mut dyn HashDB<KeccakHasher, DBValue>,
 	account_fat_rlps: Rlp,
 	out_chunk: &mut [(H256, Bytes)],
 	known_code: &HashMap<H256, H256>,
@@ -512,7 +530,7 @@ fn rebuild_accounts(
 		// fill out the storage trie and code while decoding.
 		let (acc, maybe_code) = {
 			let mut acct_db = AccountDBMut::from_hash(db, hash);
-			let storage_root = known_storage_roots.get(&hash).cloned().unwrap_or(H256::zero());
+			let storage_root = known_storage_roots.get(&hash).cloned().unwrap_or_default();
 			account::from_fat_rlp(&mut acct_db, fat_rlp, storage_root)?
 		};
@@ -560,7 +578,7 @@ const POW_VERIFY_RATE: f32 = 0.02;
 /// Verify an old block with the given header, engine, blockchain, body. If `always` is set, it will perform
 /// the fullest verification possible. If not, it will take a random sample to determine whether it will
 /// do heavy or light verification.
-pub fn verify_old_block(rng: &mut OsRng, header: &Header, engine: &EthEngine, chain: &BlockChain, always: bool) -> Result<(), ::error::Error> {
+pub fn verify_old_block(rng: &mut OsRng, header: &Header, engine: &dyn EthEngine, chain: &BlockChain, always: bool) -> Result<(), ::error::Error> {
 	engine.verify_block_basic(header)?;

 	if always || rng.gen::<f32>() <= POW_VERIFY_RATE {
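A recurring change throughout the hunks above is adding the `dyn` keyword to trait-object types: `Box<JournalDB>` becomes `Box<dyn JournalDB>`, `&HashDB<..>` becomes `&dyn HashDB<..>`, and so on. Bare trait objects were deprecated with Rust 2018; `dyn` makes the dynamic dispatch explicit and distinguishes a trait object from a type name. A minimal sketch using a standard-library trait:

```rust
use std::fmt::Display;

// Under Rust 2018, trait objects are written with `dyn`:
// `Box<Display>` (deprecated bare form) becomes `Box<dyn Display>`.
// The function can return either concrete type behind the same trait object.
fn boxed(flag: bool) -> Box<dyn Display> {
    if flag {
        Box::new(42u32)      // u32 implements Display
    } else {
        Box::new("fallback") // &str implements Display
    }
}

fn main() {
    assert_eq!(boxed(true).to_string(), "42");
    assert_eq!(boxed(false).to_string(), "fallback");
}
```

The change is purely syntactic (no behavior difference), which is why it appears alongside other edition-cleanup edits such as field-init shorthand and `ok_or_else`.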
@@ -415,7 +415,7 @@ impl Service {
 				_ => break,
 			}

-			// Writting changes to DB and logging every now and then
+			// Writing changes to DB and logging every now and then
 			if block_number % 1_000 == 0 {
 				next_db.key_value().write_buffered(batch);
 				next_chain.commit();
@@ -479,16 +479,12 @@ impl Service {

 		let guard = Guard::new(temp_dir.clone());
 		let res = client.take_snapshot(writer, BlockId::Number(num), &self.progress);

 		self.taking_snapshot.store(false, Ordering::SeqCst);
 		if let Err(e) = res {
 			if client.chain_info().best_block_number >= num + client.pruning_history() {
-				// "Cancelled" is mincing words a bit -- what really happened
-				// is that the state we were snapshotting got pruned out
-				// before we could finish.
-				info!("Periodic snapshot failed: block state pruned.\
-					Run with a longer `--pruning-history` or with `--no-periodic-snapshot`");
-				return Ok(())
+				// The state we were snapshotting was pruned before we could finish.
+				info!("Periodic snapshot failed: block state pruned. Run with a longer `--pruning-history` or with `--no-periodic-snapshot`");
+				return Err(e);
 			} else {
 				return Err(e);
 			}
@@ -846,14 +842,29 @@ impl SnapshotService for Service {
 		}
 	}

+	fn abort_snapshot(&self) {
+		if self.taking_snapshot.load(Ordering::SeqCst) {
+			trace!(target: "snapshot", "Aborting snapshot – Snapshot under way");
+			self.progress.abort.store(true, Ordering::SeqCst);
+		}
+	}
+
 	fn shutdown(&self) {
 		trace!(target: "snapshot", "Shut down SnapshotService");
 		self.abort_restore();
+		trace!(target: "snapshot", "Shut down SnapshotService - restore aborted");
+		self.abort_snapshot();
+		trace!(target: "snapshot", "Shut down SnapshotService - snapshot aborted");
 	}
 }

 impl Drop for Service {
 	fn drop(&mut self) {
 		trace!(target: "shutdown", "Dropping Service");
 		self.abort_restore();
+		trace!(target: "shutdown", "Dropping Service - restore aborted");
+		self.abort_snapshot();
+		trace!(target: "shutdown", "Dropping Service - snapshot aborted");
 	}
 }

@@ -188,14 +188,15 @@ fn keep_ancient_blocks() {
&state_root,
&writer,
&Progress::default(),
None
None,
0
).unwrap();

let manifest = ::snapshot::ManifestData {
version: 2,
state_hashes: state_hashes,
state_root: state_root,
block_hashes: block_hashes,
state_hashes,
state_root,
block_hashes,
block_number: NUM_BLOCKS,
block_hash: best_hash,
};
@@ -55,7 +55,7 @@ fn snap_and_restore() {
let mut state_hashes = Vec::new();
for part in 0..SNAPSHOT_SUBPARTS {
let mut hashes = chunk_state(&old_db, &state_root, &writer, &Progress::default(), Some(part)).unwrap();
let mut hashes = chunk_state(&old_db, &state_root, &writer, &Progress::default(), Some(part), 0).unwrap();
state_hashes.append(&mut hashes);
}

@@ -126,8 +126,8 @@ fn get_code_from_prev_chunk() {
let mut make_chunk = |acc, hash| {
let mut db = journaldb::new_memory_db();
AccountDBMut::from_hash(&mut db, hash).insert(&code[..]);
let fat_rlp = account::to_fat_rlps(&hash, &acc, &AccountDB::from_hash(&db, hash), &mut used_code, usize::max_value(), usize::max_value()).unwrap();
let p = Progress::default();
let fat_rlp = account::to_fat_rlps(&hash, &acc, &AccountDB::from_hash(&db, hash), &mut used_code, usize::max_value(), usize::max_value(), &p).unwrap();
let mut stream = RlpStream::new_list(1);
stream.append_raw(&fat_rlp[0], 1);
stream.out()
@@ -171,13 +171,13 @@ fn checks_flag() {
let state_root = producer.state_root();
let writer = Mutex::new(PackedWriter::new(&snap_file).unwrap());

let state_hashes = chunk_state(&old_db, &state_root, &writer, &Progress::default(), None).unwrap();
let state_hashes = chunk_state(&old_db, &state_root, &writer, &Progress::default(), None, 0).unwrap();

writer.into_inner().finish(::snapshot::ManifestData {
version: 2,
state_hashes: state_hashes,
state_hashes,
block_hashes: Vec::new(),
state_root: state_root,
state_root,
block_number: 0,
block_hash: H256::default(),
}).unwrap();

@@ -55,6 +55,9 @@ pub trait SnapshotService : Sync + Send {
/// no-op if currently restoring.
fn restore_block_chunk(&self, hash: H256, chunk: Bytes);

/// Abort in-progress snapshotting if there is one.
fn abort_snapshot(&self);

/// Shutdown the Snapshot Service by aborting any ongoing restore
fn shutdown(&self);
}
||||
@@ -35,7 +35,7 @@ use vm::{EnvInfo, CallType, ActionValue, ActionParams, ParamsType};
use builtin::Builtin;
use engines::{
EthEngine, NullEngine, InstantSeal, InstantSealParams, BasicAuthority,
EthEngine, NullEngine, InstantSeal, InstantSealParams, BasicAuthority, Clique,
AuthorityRound, DEFAULT_BLOCKHASH_CONTRACT
};
use error::Error;

@@ -99,9 +99,9 @@ pub struct CommonParams {
pub validate_receipts_transition: BlockNumber,
/// Validate transaction chain id.
pub validate_chain_id_transition: BlockNumber,
/// Number of first block where EIP-140 (Metropolis: REVERT opcode) rules begin.
/// Number of first block where EIP-140 rules begin.
pub eip140_transition: BlockNumber,
/// Number of first block where EIP-210 (Metropolis: BLOCKHASH changes) rules begin.
/// Number of first block where EIP-210 rules begin.
pub eip210_transition: BlockNumber,
/// EIP-210 Blockhash contract address.
pub eip210_contract_address: Address,

@@ -109,8 +109,7 @@ pub struct CommonParams {
pub eip210_contract_code: Bytes,
/// Gas allocated for EIP-210 blockhash update.
pub eip210_contract_gas: U256,
/// Number of first block where EIP-211 (Metropolis: RETURNDATASIZE/RETURNDATACOPY) rules
/// begin.
/// Number of first block where EIP-211 rules begin.
pub eip211_transition: BlockNumber,
/// Number of first block where EIP-214 rules begin.
pub eip214_transition: BlockNumber,

@@ -515,7 +514,7 @@ fn load_from(spec_params: SpecParams, s: ethjson::spec::Spec) -> Result<Spec, Er
chts: s.hardcoded_sync
.as_ref()
.map(|s| s.chts.iter().map(|c| c.clone().into()).collect())
.unwrap_or(Vec::new()),
.unwrap_or_default()
})
} else {
None

@@ -611,6 +610,8 @@ impl Spec {
ethjson::spec::Engine::InstantSeal(Some(instant_seal)) => Arc::new(InstantSeal::new(instant_seal.params.into(), machine)),
ethjson::spec::Engine::InstantSeal(None) => Arc::new(InstantSeal::new(InstantSealParams::default(), machine)),
ethjson::spec::Engine::BasicAuthority(basic_authority) => Arc::new(BasicAuthority::new(basic_authority.params.into(), machine)),
ethjson::spec::Engine::Clique(clique) => Clique::new(clique.params.into(), machine)
.expect("Failed to start Clique consensus engine."),
ethjson::spec::Engine::AuthorityRound(authority_round) => AuthorityRound::new(authority_round.params.into(), machine)
.expect("Failed to start AuthorityRound consensus engine."),
}

@@ -827,7 +828,6 @@ impl Spec {
ethjson::spec::Spec::load(reader)
.map_err(fmt_err)
.map(load_machine_from)
}

/// Loads spec from json file. Provide factories for executing contracts and ensuring

@@ -999,7 +999,6 @@ mod tests {
use types::view;
use types::views::BlockView;

// https://github.com/paritytech/parity-ethereum/issues/1840
#[test]
fn test_load_empty() {
let tempdir = TempDir::new("").unwrap();
@@ -741,11 +741,4 @@ mod tests {
assert_eq!(a.code_hash(), KECCAK_EMPTY);
assert_eq!(a.storage_root().unwrap(), KECCAK_NULL_RLP);
}

#[test]
fn create_account() {
let a = Account::new(69u8.into(), 0u8.into(), HashMap::new(), Bytes::new());
assert_eq!(a.rlp().to_hex(), "f8448045a056e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421a0c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470");
}
}
@@ -82,8 +82,8 @@ pub enum ProvedExecution {
BadProof,
/// The transaction failed, but not due to a bad proof.
Failed(ExecutionError),
/// The transaction successfully completd with the given proof.
Complete(Executed),
/// The transaction successfully completed with the given proof.
Complete(Box<Executed>),
}

#[derive(Eq, PartialEq, Clone, Copy, Debug)]

@@ -218,7 +218,7 @@ pub fn check_proof(
let options = TransactOptions::with_no_tracing().save_output_from_contract();
match state.execute(env_info, machine, transaction, options, true) {
Ok(executed) => ProvedExecution::Complete(executed),
Ok(executed) => ProvedExecution::Complete(Box::new(executed)),
Err(ExecutionError::Internal(_)) => ProvedExecution::BadProof,
Err(e) => ProvedExecution::Failed(e),
}

@@ -1254,7 +1254,7 @@ impl<B: Backend> State<B> {
let trie = TrieDB::new(db, &self.root)?;
let maybe_account: Option<BasicAccount> = {
let panicky_decoder = |bytes: &[u8]| {
::rlp::decode(bytes).expect(&format!("prove_account, could not query trie for account key={}", &account_key))
::rlp::decode(bytes).unwrap_or_else(|_| panic!("prove_account, could not query trie for account key={}", &account_key))
};
let query = (&mut recorder, panicky_decoder);
trie.get_with(&account_key, query)?

@@ -155,7 +155,7 @@ pub fn generate_dummy_client_with_spec_and_data<F>(test_spec: F, block_number: u
(3141562.into(), 31415620.into()),
vec![],
false,
&mut Vec::new().into_iter(),
None,
).unwrap();
rolling_timestamp += 10;
b.set_timestamp(rolling_timestamp);
@@ -27,8 +27,7 @@ use types::filter::Filter;
use types::view;
use types::views::BlockView;

use block::IsBlock;
use client::{BlockChainClient, Client, ClientConfig, BlockId, ChainInfo, BlockInfo, PrepareOpenBlock, ImportSealedBlock, ImportBlock};
use client::{BlockChainClient, BlockChainReset, Client, ClientConfig, BlockId, ChainInfo, BlockInfo, PrepareOpenBlock, ImportSealedBlock, ImportBlock};
use ethereum;
use executive::{Executive, TransactOptions};
use miner::{Miner, PendingOrdering, MinerService};

@@ -254,7 +253,7 @@ fn can_mine() {
let b = client.prepare_open_block(Address::default(), (3141562.into(), 31415620.into()), vec![]).unwrap().close().unwrap();

assert_eq!(*b.block().header().parent_hash(), view!(BlockView, &dummy_blocks[0]).header_view().hash());
assert_eq!(*b.header.parent_hash(), view!(BlockView, &dummy_blocks[0]).header_view().hash());
}

#[test]

@@ -367,3 +366,23 @@ fn transaction_proof() {
assert_eq!(state.balance(&Address::default()).unwrap(), 5.into());
assert_eq!(state.balance(&address).unwrap(), 95.into());
}

#[test]
fn reset_blockchain() {
let client = get_test_client_with_blocks(get_good_dummy_block_seq(19));
// 19 + genesis block
assert!(client.block_header(BlockId::Number(20)).is_some());
assert_eq!(client.block_header(BlockId::Number(20)).unwrap().hash(), client.best_block_header().hash());

assert!(client.reset(5).is_ok());

client.chain().clear_cache();

assert!(client.block_header(BlockId::Number(20)).is_none());
assert!(client.block_header(BlockId::Number(19)).is_none());
assert!(client.block_header(BlockId::Number(18)).is_none());
assert!(client.block_header(BlockId::Number(17)).is_none());
assert!(client.block_header(BlockId::Number(16)).is_none());

assert!(client.block_header(BlockId::Number(15)).is_some());
}
@@ -86,7 +86,7 @@ fn can_trace_block_and_uncle_reward() {
(3141562.into(), 31415620.into()),
vec![],
false,
&mut Vec::new().into_iter(),
None,
).unwrap();
rolling_timestamp += 10;
root_block.set_timestamp(rolling_timestamp);

@@ -115,7 +115,7 @@ fn can_trace_block_and_uncle_reward() {
(3141562.into(), 31415620.into()),
vec![],
false,
&mut Vec::new().into_iter(),
None,
).unwrap();
rolling_timestamp += 10;
parent_block.set_timestamp(rolling_timestamp);

@@ -143,7 +143,7 @@ fn can_trace_block_and_uncle_reward() {
(3141562.into(), 31415620.into()),
vec![],
false,
&mut Vec::new().into_iter(),
None,
).unwrap();
rolling_timestamp += 10;
block.set_timestamp(rolling_timestamp);

@@ -135,7 +135,7 @@ impl Create {
}

/// Reward type.
#[derive(Debug, PartialEq, Clone)]
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum RewardType {
/// Block
Block,
@@ -34,13 +34,12 @@ use unexpected::{Mismatch, OutOfBounds};
use blockchain::*;
use call_contract::CallContract;
use client::BlockInfo;
use engines::EthEngine;
use engines::{EthEngine, MAX_UNCLE_AGE};
use error::{BlockError, Error};
use types::{BlockNumber, header::Header};
use types::transaction::SignedTransaction;
use verification::queue::kind::blocks::Unverified;

#[cfg(not(time_checked_add))]
use time_utils::CheckedSystemTime;

/// Preprocessed block data gathered in `verify_block_unordered` call

@@ -176,7 +175,7 @@ fn verify_uncles(block: &PreverifiedBlock, bc: &BlockProvider, engine: &EthEngin
excluded.insert(header.hash());
let mut hash = header.parent_hash().clone();
excluded.insert(hash.clone());
for _ in 0..engine.maximum_uncle_age() {
for _ in 0..MAX_UNCLE_AGE {
match bc.block_details(&hash) {
Some(details) => {
excluded.insert(details.parent);

@@ -209,7 +208,7 @@ fn verify_uncles(block: &PreverifiedBlock, bc: &BlockProvider, engine: &EthEngin
// (8 Invalid)

let depth = if header.number() > uncle.number() { header.number() - uncle.number() } else { 0 };
if depth > engine.maximum_uncle_age() as u64 {
if depth > MAX_UNCLE_AGE as u64 {
return Err(From::from(BlockError::UncleTooOld(OutOfBounds { min: Some(header.number() - depth), max: Some(header.number() - 1), found: uncle.number() })));
}
else if depth < 1 {

@@ -258,7 +257,7 @@ pub fn verify_block_final(expected: &Header, got: &Header) -> Result<(), Error>
return Err(From::from(BlockError::InvalidGasUsed(Mismatch { expected: *expected.gas_used(), found: *got.gas_used() })))
}
if expected.log_bloom() != got.log_bloom() {
return Err(From::from(BlockError::InvalidLogBloom(Mismatch { expected: *expected.log_bloom(), found: *got.log_bloom() })))
return Err(From::from(BlockError::InvalidLogBloom(Box::new(Mismatch { expected: *expected.log_bloom(), found: *got.log_bloom() }))))
}
if expected.receipts_root() != got.receipts_root() {
return Err(From::from(BlockError::InvalidReceiptsRoot(Mismatch { expected: *expected.receipts_root(), found: *got.receipts_root() })))

@@ -310,7 +309,7 @@ pub fn verify_header_params(header: &Header, engine: &EthEngine, is_full: bool,
// this will resist overflow until `year 2037`
let max_time = SystemTime::now() + ACCEPTABLE_DRIFT;
let invalid_threshold = max_time + ACCEPTABLE_DRIFT * 9;
let timestamp = UNIX_EPOCH.checked_add(Duration::from_secs(header.timestamp()))
let timestamp = CheckedSystemTime::checked_add(UNIX_EPOCH, Duration::from_secs(header.timestamp()))
.ok_or(BlockError::TimestampOverflow)?;

if timestamp > invalid_threshold {

@@ -334,9 +333,9 @@ fn verify_parent(header: &Header, parent: &Header, engine: &EthEngine) -> Result
if !engine.is_timestamp_valid(header.timestamp(), parent.timestamp()) {
let now = SystemTime::now();
let min = now.checked_add(Duration::from_secs(parent.timestamp().saturating_add(1)))
let min = CheckedSystemTime::checked_add(now, Duration::from_secs(parent.timestamp().saturating_add(1)))
.ok_or(BlockError::TimestampOverflow)?;
let found = now.checked_add(Duration::from_secs(header.timestamp()))
let found = CheckedSystemTime::checked_add(now, Duration::from_secs(header.timestamp()))
.ok_or(BlockError::TimestampOverflow)?;
return Err(From::from(BlockError::InvalidTimestamp(OutOfBounds { max: None, min: Some(min), found })))
}

@@ -122,6 +122,8 @@ impl SnapshotService for TestSnapshotService {
self.block_restoration_chunks.lock().clear();
}

fn abort_snapshot(&self) {}

fn restore_state_chunk(&self, hash: H256, chunk: Bytes) {
if self.restoration_manifest.lock().as_ref().map_or(false, |m| m.state_hashes.iter().any(|h| h == &hash)) {
self.state_restoration_chunks.lock().insert(hash, chunk);
@@ -11,7 +11,6 @@ ethkey = { path = "../../accounts/ethkey" }
heapsize = "0.4"
keccak-hash = "0.1"
parity-bytes = "0.1"
parity-machine = { path = "../../machine" }
rlp = { version = "0.3.0", features = ["ethereum"] }
rlp_derive = { path = "../../util/rlp-derive" }
unexpected = { path = "../../util/unexpected" }
@@ -71,28 +71,3 @@ impl Decodable for PendingTransition {
}
}

/// Verifier for all blocks within an epoch with self-contained state.
pub trait EpochVerifier<M: ::parity_machine::Machine>: Send + Sync {
/// Lightly verify the next block header.
/// This may not be a header belonging to a different epoch.
fn verify_light(&self, header: &M::Header) -> Result<(), M::Error>;

/// Perform potentially heavier checks on the next block header.
fn verify_heavy(&self, header: &M::Header) -> Result<(), M::Error> {
self.verify_light(header)
}

/// Check a finality proof against this epoch verifier.
/// Returns `Some(hashes)` if the proof proves finality of these hashes.
/// Returns `None` if the proof doesn't prove anything.
fn check_finality_proof(&self, _proof: &[u8]) -> Option<Vec<H256>> {
None
}
}

/// Special "no-op" verifier for stateless, epoch-less engines.
pub struct NoOp;

impl<M: ::parity_machine::Machine> EpochVerifier<M> for NoOp {
fn verify_light(&self, _header: &M::Header) -> Result<(), M::Error> { Ok(()) }
}
@@ -367,44 +367,11 @@ impl HeapSizeOf for Header {
}
}

impl ::parity_machine::Header for Header {
fn bare_hash(&self) -> H256 { Header::bare_hash(self) }
fn hash(&self) -> H256 { Header::hash(self) }
fn seal(&self) -> &[Vec<u8>] { Header::seal(self) }
fn author(&self) -> &Address { Header::author(self) }
fn number(&self) -> BlockNumber { Header::number(self) }
}

impl ::parity_machine::ScoredHeader for Header {
type Value = U256;

fn score(&self) -> &U256 { self.difficulty() }
fn set_score(&mut self, score: U256) { self.set_difficulty(score) }
}

impl ::parity_machine::Header for ExtendedHeader {
fn bare_hash(&self) -> H256 { self.header.bare_hash() }
fn hash(&self) -> H256 { self.header.hash() }
fn seal(&self) -> &[Vec<u8>] { self.header.seal() }
fn author(&self) -> &Address { self.header.author() }
fn number(&self) -> BlockNumber { self.header.number() }
}

impl ::parity_machine::ScoredHeader for ExtendedHeader {
type Value = U256;

fn score(&self) -> &U256 { self.header.difficulty() }
fn set_score(&mut self, score: U256) { self.header.set_difficulty(score) }
}

impl ::parity_machine::TotalScoredHeader for ExtendedHeader {
type Value = U256;

fn total_score(&self) -> U256 { self.parent_total_difficulty + *self.header.difficulty() }
}

impl ::parity_machine::FinalizableHeader for ExtendedHeader {
fn is_finalized(&self) -> bool { self.is_finalized }
impl ExtendedHeader {
/// Returns combined difficulty of all ancestors together with the difficulty of this header.
pub fn total_score(&self) -> U256 {
self.parent_total_difficulty + *self.header.difficulty()
}
}

#[cfg(test)]
@@ -17,7 +17,7 @@
//! Unique identifiers.

use ethereum_types::H256;
use {BlockNumber};
use BlockNumber;

/// Uniquely identifies block.
#[derive(Debug, PartialEq, Copy, Clone, Hash, Eq)]

@@ -39,7 +39,6 @@ extern crate ethkey;
extern crate heapsize;
extern crate keccak_hash as hash;
extern crate parity_bytes as bytes;
extern crate parity_machine;
extern crate rlp;
extern crate unexpected;

@@ -18,7 +18,7 @@
use std::time::{Instant, Duration};
use ethereum_types::{H256, U256};
use ethcore::client::{self, EvmTestClient, EvmTestError, TransactResult};
use ethcore::client::{self, EvmTestClient, EvmTestError, TransactErr, TransactSuccess};
use ethcore::{state, state_db, trace, spec, pod_state, TrieSpec};
use ethjson;
use types::transaction;

@@ -130,7 +130,7 @@ pub fn run_transaction<T: Informant>(
let result = run(&spec, trie_spec, transaction.gas, pre_state, |mut client| {
let result = client.transact(env_info, transaction, trace::NoopTracer, informant);
match result {
TransactResult::Ok { state_root, gas_left, output, vm_trace, end_state, .. } => {
Ok(TransactSuccess { state_root, gas_left, output, vm_trace, end_state, .. }) => {
if state_root != post_root {
(Err(EvmTestError::PostCondition(format!(
"State root mismatch (got: {:#x}, expected: {:#x})",

@@ -141,7 +141,7 @@ pub fn run_transaction<T: Informant>(
(Ok(output), state_root, end_state, Some(gas_left), vm_trace)
}
},
TransactResult::Err { state_root, error, end_state } => {
Err(TransactErr { state_root, error, end_state }) => {
(Err(EvmTestError::PostCondition(format!(
"Unexpected execution error: {:?}", error
))), state_root, end_state, None, None)
json/src/spec/clique.rs (new file, 57 lines)
@@ -0,0 +1,57 @@
// Copyright 2015-2019 Parity Technologies (UK) Ltd.
// This file is part of Parity Ethereum.

// Parity Ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Parity Ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

//! Clique params deserialization.

use std::num::NonZeroU64;

/// Clique params deserialization.
#[derive(Debug, PartialEq, Deserialize)]
pub struct CliqueParams {
/// period as defined in EIP
pub period: Option<u64>,
/// epoch length as defined in EIP
pub epoch: Option<NonZeroU64>
}

/// Clique engine deserialization.
#[derive(Debug, PartialEq, Deserialize)]
pub struct Clique {
/// CliqueEngine params
pub params: CliqueParams,
}

#[cfg(test)]
mod tests {
use serde_json;
use uint::Uint;
use ethereum_types::U256;
use super::*;

#[test]
fn clique_deserialization() {
let s = r#"{
"params": {
"period": 5,
"epoch": 30000
}
}"#;

let deserialized: Clique = serde_json::from_str(s).unwrap();
assert_eq!(deserialized.params.period, Some(5u64));
assert_eq!(deserialized.params.epoch, NonZeroU64::new(30000));
}
}
@@ -16,7 +16,7 @@
//! Engine deserialization.

use super::{Ethash, BasicAuthority, AuthorityRound, NullEngine, InstantSeal};
use super::{Ethash, BasicAuthority, AuthorityRound, NullEngine, InstantSeal, Clique};

/// Engine deserialization.
#[derive(Debug, PartialEq, Deserialize)]

@@ -34,6 +34,8 @@ pub enum Engine {
BasicAuthority(BasicAuthority),
/// AuthorityRound engine.
AuthorityRound(AuthorityRound),
/// Clique engine.
Clique(Clique)
}

#[cfg(test)]

@@ -130,5 +132,19 @@ mod tests {
Engine::AuthorityRound(_) => {}, // AuthorityRound is unit tested in its own file.
_ => panic!(),
};

let s = r#"{
"clique": {
"params": {
"period": 15,
"epoch": 30000
}
}
}"#;
let deserialized: Engine = serde_json::from_str(s).unwrap();
match deserialized {
Engine::Clique(_) => {}, // Clique is unit tested in its own file.
_ => panic!(),
};
}
}
@@ -31,6 +31,7 @@ pub mod authority_round;
pub mod null_engine;
pub mod instant_seal;
pub mod hardcoded_sync;
pub mod clique;

pub use self::account::Account;
pub use self::builtin::{Builtin, Pricing, Linear};

@@ -44,6 +45,7 @@ pub use self::ethash::{Ethash, EthashParams, BlockReward};
pub use self::validator_set::ValidatorSet;
pub use self::basic_authority::{BasicAuthority, BasicAuthorityParams};
pub use self::authority_round::{AuthorityRound, AuthorityRoundParams};
pub use self::clique::{Clique, CliqueParams};
pub use self::null_engine::{NullEngine, NullEngineParams};
pub use self::instant_seal::{InstantSeal, InstantSealParams};
pub use self::hardcoded_sync::HardcodedSync;
@@ -1,8 +0,0 @@
[package]
name = "parity-machine"
version = "0.1.0"
description = "Generalization of a state machine for consensus engines"
authors = ["Parity Technologies <admin@parity.io>"]

[dependencies]
ethereum-types = "0.4"
@@ -1,132 +0,0 @@
// Copyright 2015-2019 Parity Technologies (UK) Ltd.
// This file is part of Parity Ethereum.

// Parity Ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Parity Ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

//! Generalization of a state machine for a consensus engine.
//! This will define traits for the header, block, and state of a blockchain.

extern crate ethereum_types;

use ethereum_types::{H256, U256, Address};

/// A header. This contains important metadata about the block, as well as a
/// "seal" that indicates validity to a consensus engine.
pub trait Header {
/// Cryptographic hash of the header, excluding the seal.
fn bare_hash(&self) -> H256;

/// Cryptographic hash of the header, including the seal.
fn hash(&self) -> H256;

/// Get a reference to the seal fields.
fn seal(&self) -> &[Vec<u8>];

/// The author of the header.
fn author(&self) -> &Address;

/// The number of the header.
fn number(&self) -> u64;
}

/// A header with an associated score (difficulty in PoW terms)
pub trait ScoredHeader: Header {
type Value;

/// Get the score of this header.
fn score(&self) -> &Self::Value;

/// Set the score of this header.
fn set_score(&mut self, score: Self::Value);
}

/// A header with associated total score.
pub trait TotalScoredHeader: Header {
type Value;

/// Get the total score of this header.
fn total_score(&self) -> Self::Value;
}

/// A header with finalized information.
pub trait FinalizableHeader: Header {
/// Get whether this header is considered finalized, so that it will never be replaced in reorganization.
fn is_finalized(&self) -> bool;
}

/// A header with metadata information.
pub trait WithMetadataHeader: Header {
/// Get the current header metadata.
fn metadata(&self) -> Option<&[u8]>;
}

/// A "live" block is one which is in the process of the transition.
/// The state of this block can be mutated by arbitrary rules of the
/// state transition function.
pub trait LiveBlock: 'static {
/// The block header type;
type Header: Header;

/// Get a reference to the header.
fn header(&self) -> &Self::Header;

/// Get a reference to the uncle headers. If the block type doesn't
/// support uncles, return the empty slice.
fn uncles(&self) -> &[Self::Header];
}

/// Trait for blocks which have a transaction type.
pub trait Transactions: LiveBlock {
/// The transaction type.
type Transaction;

/// Get a reference to the transactions in this block.
fn transactions(&self) -> &[Self::Transaction];
}

/// Generalization of types surrounding blockchain-suitable state machines.
pub trait Machine: for<'a> LocalizedMachine<'a> {
/// The block header type.
type Header: Header;
/// The live block type.
type LiveBlock: LiveBlock<Header=Self::Header>;
/// Block header with metadata information.
type ExtendedHeader: Header;
/// A handle to a blockchain client for this machine.
type EngineClient: ?Sized;
/// A description of needed auxiliary data.
type AuxiliaryRequest;
/// Actions taken on ancestry blocks when commiting a new block.
type AncestryAction;

/// Errors which can occur when querying or interacting with the machine.
type Error;

/// Get the balance, in base units, associated with an account.
/// Extracts data from the live block.
fn balance(&self, live: &Self::LiveBlock, address: &Address) -> Result<U256, Self::Error>;

/// Increment the balance of an account in the state of the live block.
fn add_balance(&self, live: &mut Self::LiveBlock, address: &Address, amount: &U256) -> Result<(), Self::Error>;
}

/// Machine-related types localized to a specific lifetime.
// TODO: this is a workaround for a lack of associated type constructors in the language.
pub trait LocalizedMachine<'a>: Sync + Send {
/// Definition of auxiliary data associated to a specific block.
type AuxiliaryData: 'a;
/// A context providing access to the state in a controlled capacity.
/// Generally also provides verifiable proofs.
type StateContext: ?Sized + 'a;
}
@@ -16,11 +16,15 @@
|
||||
|
||||
//! A service transactions contract checker.
|
||||
|
||||
use call_contract::{CallContract, RegistryInfo};
|
||||
use std::collections::HashMap;
|
||||
use std::mem;
|
||||
use std::sync::Arc;
|
||||
use call_contract::{RegistryInfo, CallContract};
|
||||
use types::ids::BlockId;
|
||||
use types::transaction::SignedTransaction;
|
||||
use ethabi::FunctionOutputDecoder;
|
||||
use ethereum_types::Address;
|
||||
use parking_lot::RwLock;
|
||||
|
||||
use_contract!(service_transaction, "res/contracts/service_transaction.json");
|
||||
|
||||
@@ -28,9 +32,12 @@ const SERVICE_TRANSACTION_CONTRACT_REGISTRY_NAME: &'static str = "service_transa
|
||||
|
||||
/// Service transactions checker.
|
||||
#[derive(Default, Clone)]
|
||||
pub struct ServiceTransactionChecker;
|
||||
pub struct ServiceTransactionChecker {
|
||||
certified_addresses_cache: Arc<RwLock<HashMap<Address, bool>>>
|
||||
}
|
||||
|
||||
impl ServiceTransactionChecker {
|
||||
|
||||
/// Checks if given address in tx is whitelisted to send service transactions.
|
||||
pub fn check<C: CallContract + RegistryInfo>(&self, client: &C, tx: &SignedTransaction) -> Result<bool, String> {
|
||||
let sender = tx.sender();
|
||||
@@ -44,9 +51,42 @@ impl ServiceTransactionChecker {

	/// Checks if given address is whitelisted to send service transactions.
	pub fn check_address<C: CallContract + RegistryInfo>(&self, client: &C, sender: Address) -> Result<bool, String> {
+		trace!(target: "txqueue", "Checking service transaction checker contract from {}", sender);
+		if let Some(allowed) = self.certified_addresses_cache.try_read().as_ref().and_then(|c| c.get(&sender)) {
+			return Ok(*allowed);
+		}
		let contract_address = client.registry_address(SERVICE_TRANSACTION_CONTRACT_REGISTRY_NAME.to_owned(), BlockId::Latest)
			.ok_or_else(|| "contract is not configured")?;
-		trace!(target: "txqueue", "Checking service transaction checker contract from {}", sender);
+		self.call_contract(client, contract_address, sender).and_then(|allowed| {
+			if let Some(mut cache) = self.certified_addresses_cache.try_write() {
+				cache.insert(sender, allowed);
+			};
+			Ok(allowed)
+		})
+	}
+
+	/// Refresh certified addresses cache
+	pub fn refresh_cache<C: CallContract + RegistryInfo>(&self, client: &C) -> Result<bool, String> {
+		trace!(target: "txqueue", "Refreshing certified addresses cache");
+		// replace the cache with an empty list,
+		// since it's not recent it won't be used anyway.
+		let cache = mem::replace(&mut *self.certified_addresses_cache.write(), HashMap::default());
+
+		if let Some(contract_address) = client.registry_address(SERVICE_TRANSACTION_CONTRACT_REGISTRY_NAME.to_owned(), BlockId::Latest) {
+			let addresses: Vec<_> = cache.keys().collect();
+			let mut cache: HashMap<Address, bool> = HashMap::default();
+			for address in addresses {
+				let allowed = self.call_contract(client, contract_address, *address)?;
+				cache.insert(*address, allowed);
+			}
+			mem::replace(&mut *self.certified_addresses_cache.write(), cache);
+			Ok(true)
+		} else {
+			Ok(false)
+		}
+	}
+
+	fn call_contract<C: CallContract + RegistryInfo>(&self, client: &C, contract_address: Address, sender: Address) -> Result<bool, String> {
		let (data, decoder) = service_transaction::functions::certified::call(sender);
		let value = client.call_contract(BlockId::Latest, contract_address, data)?;
		decoder.decode(&value).map_err(|e| e.to_string())
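The hunk above puts a read-through cache in front of the `certified` contract call: `try_read` serves hot entries without blocking, and a failed `try_write` simply skips caching rather than stall the caller. A minimal sketch of that pattern using std's `RwLock` (the `Checker` and `expensive_lookup` names are illustrative, not Parity's API):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Illustrative stand-in for the registry contract call: an expensive lookup.
fn expensive_lookup(addr: u64) -> bool {
    addr % 2 == 0
}

struct Checker {
    cache: RwLock<HashMap<u64, bool>>,
}

impl Checker {
    fn check(&self, addr: u64) -> bool {
        // Fast path: a non-blocking read, mirroring `try_read` in the patch.
        if let Ok(cache) = self.cache.try_read() {
            if let Some(&allowed) = cache.get(&addr) {
                return allowed;
            }
        }
        let allowed = expensive_lookup(addr);
        // Best-effort insert: if the lock is contended, skip caching this time.
        if let Ok(mut cache) = self.cache.try_write() {
            cache.insert(addr, allowed);
        }
        allowed
    }
}

fn main() {
    let checker = Checker { cache: RwLock::new(HashMap::new()) };
    assert!(checker.check(2));  // miss: computed, then cached
    assert!(checker.check(2));  // hit: served from the cache
    assert!(!checker.check(3));
}
```

parking_lot's `try_read`/`try_write` return `Option` rather than `Result`, but the control flow is the same.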
@@ -11,7 +11,7 @@ crate-type = ["cdylib", "staticlib"]

[dependencies]
futures = "0.1.6"
-jni = { version = "0.10.1", optional = true }
+jni = { version = "0.11", optional = true }
panic_hook = { path = "../util/panic-hook" }
parity-ethereum = { path = "../", default-features = false }
tokio = "0.1.11"
@@ -36,9 +36,6 @@ struct JavaCallback<'a> {
	method_descriptor: &'a str,
}

-unsafe impl<'a> Send for JavaCallback<'a> {}
-unsafe impl<'a> Sync for JavaCallback<'a> {}
-
impl<'a> JavaCallback<'a> {
	fn new(jvm: JavaVM, callback: GlobalRef) -> Self {
		Self {
@@ -300,7 +300,7 @@ usage! {

	ARG arg_chain: (String) = "foundation", or |c: &Config| c.parity.as_ref()?.chain.clone(),
	"--chain=[CHAIN]",
-	"Specify the blockchain type. CHAIN may be either a JSON chain specification file or ethereum, classic, poacore, tobalaba, expanse, musicoin, ellaism, mix, callisto, morden, ropsten, kovan, poasokol, testnet, or dev.",
+	"Specify the blockchain type. CHAIN may be either a JSON chain specification file or ethereum, classic, poacore, tobalaba, expanse, musicoin, ellaism, mix, callisto, morden, ropsten, kovan, rinkeby, goerli, kotti, poasokol, testnet, or dev.",

	ARG arg_keys_path: (String) = "$BASE/keys", or |c: &Config| c.parity.as_ref()?.keys_path.clone(),
	"--keys-path=[PATH]",
@@ -14,13 +14,6 @@
// You should have received a copy of the GNU General Public License
// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

-macro_rules! println_stderr(
-	($($arg:tt)*) => { {
-		let r = writeln!(&mut ::std::io::stderr(), $($arg)*);
-		r.expect("failed printing to stderr");
-	} }
-);
-
macro_rules! return_if_parse_error {
	($e:expr) => (
		match $e {
@@ -143,7 +136,7 @@ macro_rules! usage {
	) => {
		use toml;
		use std::{fs, io, process, cmp};
-		use std::io::{Read, Write};
+		use std::io::Read;
		use parity_version::version;
		use clap::{Arg, App, SubCommand, AppSettings, ArgSettings, Error as ClapError, ErrorKind as ClapErrorKind};
		use dir::helpers::replace_home;
@@ -172,17 +165,17 @@ macro_rules! usage {
			match self {
				ArgsError::Clap(e) => e.exit(),
				ArgsError::Decode(e) => {
-					println_stderr!("You might have supplied invalid parameters in config file.");
-					println_stderr!("{}", e);
+					eprintln!("You might have supplied invalid parameters in config file.");
+					eprintln!("{}", e);
					process::exit(2)
				},
				ArgsError::Config(path, e) => {
-					println_stderr!("There was an error reading your config file at: {}", path);
-					println_stderr!("{}", e);
+					eprintln!("There was an error reading your config file at: {}", path);
+					eprintln!("{}", e);
					process::exit(2)
				},
				ArgsError::PeerConfiguration => {
-					println_stderr!("You have supplied `min_peers` > `max_peers`");
+					eprintln!("You have supplied `min_peers` > `max_peers`");
					process::exit(2)
				}
			}
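The hand-rolled `println_stderr!` macro predates `eprintln!`, which has shipped in std since Rust 1.19 and does exactly what the macro expanded to:

```rust
use std::io::Write;

fn main() {
    // The removed macro expanded to roughly this:
    writeln!(&mut std::io::stderr(), "old style: {}", 2).expect("failed printing to stderr");
    // Since Rust 1.19 the same thing is a one-liner in std:
    eprintln!("new style: {}", 2);
}
```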
@@ -332,7 +325,7 @@ macro_rules! usage {
			let args = match (fs::File::open(&config_file), raw_args.arg_config.clone()) {
				// Load config file
				(Ok(mut file), _) => {
-					println_stderr!("Loading config file from {}", &config_file);
+					eprintln!("Loading config file from {}", &config_file);
					let mut config = String::new();
					file.read_to_string(&mut config).map_err(|e| ArgsError::Config(config_file, e))?;
					Ok(raw_args.into_args(Self::parse_config(&config)?))
@@ -932,7 +932,7 @@ impl Configuration {
		no_periodic: self.args.flag_no_periodic_snapshot,
		processing_threads: match self.args.arg_snapshot_threads {
			Some(threads) if threads > 0 => threads,
-			_ => ::std::cmp::max(1, num_cpus::get() / 2),
+			_ => ::std::cmp::max(1, num_cpus::get_physical() / 2),
		},
	};
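`num_cpus::get_physical()` counts physical cores rather than logical ones, so the default snapshot thread count no longer doubles on hyperthreaded machines; the `max(1, ...)` guard keeps at least one worker. A sketch of the same defaulting logic, with std's logical-core count standing in for the external `num_cpus` crate:

```rust
use std::thread;

// Mirror of the patched expression: an explicit setting wins, otherwise use
// half the core count, but never fewer than one worker thread.
fn snapshot_threads(configured: Option<usize>) -> usize {
    match configured {
        Some(threads) if threads > 0 => threads,
        _ => {
            let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
            std::cmp::max(1, cores / 2)
        }
    }
}

fn main() {
    assert_eq!(snapshot_threads(Some(4)), 4);
    assert!(snapshot_threads(None) >= 1); // holds even when cores / 2 == 0
}
```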
@@ -15,7 +15,6 @@
// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

//! Ethcore client application.

#![warn(missing_docs)]

extern crate ansi_term;
@@ -27,7 +27,7 @@ use futures::{future, Future};
use futures::future::Either;

use light::client::fetch::ChainDataFetcher;
-use light::on_demand::{request, OnDemand};
+use light::on_demand::{request, OnDemand, OnDemandRequester};

use parking_lot::RwLock;
use ethereum_types::H256;
@@ -17,7 +17,5 @@
//! Utilities and helpers for the light client.

mod epoch_fetch;
-mod queue_cull;

pub use self::epoch_fetch::EpochFetch;
-pub use self::queue_cull::QueueCull;
@@ -1,105 +0,0 @@
-// Copyright 2015-2019 Parity Technologies (UK) Ltd.
-// This file is part of Parity Ethereum.
-
-// Parity Ethereum is free software: you can redistribute it and/or modify
-// it under the terms of the GNU General Public License as published by
-// the Free Software Foundation, either version 3 of the License, or
-// (at your option) any later version.
-
-// Parity Ethereum is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
-
-// You should have received a copy of the GNU General Public License
-// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.
-
-//! Service for culling the light client's transaction queue.
-
-use std::sync::Arc;
-use std::time::Duration;
-
-use ethcore::client::ClientIoMessage;
-use sync::{LightSync, LightNetworkDispatcher};
-use io::{IoContext, IoHandler, TimerToken};
-
-use light::client::LightChainClient;
-use light::on_demand::{request, OnDemand};
-use light::TransactionQueue;
-
-use futures::{future, Future};
-
-use parity_runtime::Executor;
-
-use parking_lot::RwLock;
-
-// Attepmt to cull once every 10 minutes.
-const TOKEN: TimerToken = 1;
-const TIMEOUT: Duration = Duration::from_secs(60 * 10);
-
-// But make each attempt last only 9 minutes
-const PURGE_TIMEOUT: Duration = Duration::from_secs(60 * 9);
-
-/// Periodically culls the transaction queue of mined transactions.
-pub struct QueueCull<T> {
-	/// A handle to the client, for getting the latest block header.
-	pub client: Arc<T>,
-	/// A handle to the sync service.
-	pub sync: Arc<LightSync>,
-	/// The on-demand request service.
-	pub on_demand: Arc<OnDemand>,
-	/// The transaction queue.
-	pub txq: Arc<RwLock<TransactionQueue>>,
-	/// Event loop executor.
-	pub executor: Executor,
-}
-
-impl<T: LightChainClient + 'static> IoHandler<ClientIoMessage> for QueueCull<T> {
-	fn initialize(&self, io: &IoContext<ClientIoMessage>) {
-		io.register_timer(TOKEN, TIMEOUT).expect("Error registering timer");
-	}
-
-	fn timeout(&self, _io: &IoContext<ClientIoMessage>, timer: TimerToken) {
-		if timer != TOKEN { return }
-
-		let senders = self.txq.read().queued_senders();
-		if senders.is_empty() { return }
-
-		let (sync, on_demand, txq) = (self.sync.clone(), self.on_demand.clone(), self.txq.clone());
-		let best_header = self.client.best_block_header();
-		let start_nonce = self.client.engine().account_start_nonce(best_header.number());
-
-		info!(target: "cull", "Attempting to cull queued transactions from {} senders.", senders.len());
-		self.executor.spawn_with_timeout(move || {
-			let maybe_fetching = sync.with_context(move |ctx| {
-				// fetch the nonce of each sender in the queue.
-				let nonce_reqs = senders.iter()
-					.map(|&address| request::Account { header: best_header.clone().into(), address: address })
-					.collect::<Vec<_>>();
-
-				// when they come in, update each sender to the new nonce.
-				on_demand.request(ctx, nonce_reqs)
-					.expect("No back-references; therefore all back-references are valid; qed")
-					.map(move |accs| {
-						let txq = txq.write();
-						let _ = accs.into_iter()
-							.map(|maybe_acc| maybe_acc.map_or(start_nonce, |acc| acc.nonce))
-							.zip(senders)
-							.fold(txq, |mut txq, (nonce, addr)| {
-								txq.cull(addr, nonce);
-								txq
-							});
-					})
-					.map_err(|_| debug!(target: "cull", "OnDemand prematurely closed channel."))
-			});
-
-			match maybe_fetching {
-				Some(fut) => future::Either::A(fut),
-				None => {
-					debug!(target: "cull", "Unable to acquire network context; qed");
-					future::Either::B(future::ok(()))
-				},
-			}
-		}, PURGE_TIMEOUT, || {})
-	}
-}
@@ -34,7 +34,7 @@ extern crate ethcore_logger;

use std::ffi::OsString;
use std::fs::{remove_file, metadata, File, create_dir_all};
-use std::io::{self as stdio, Read, Write};
+use std::io::{Read, Write};
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
@@ -213,8 +213,6 @@ fn main_direct(force_can_restart: bool) -> i32 {
				"{}",
				Colour::Red.paint(format!("{}", e))
			);
-			// flush before returning
-			let _ = std::io::stderr().flush();
			return 1;
		}
	}
@@ -287,7 +285,7 @@ fn main_direct(force_can_restart: bool) -> i32 {
		let e = exit.clone();
		let exiting = exiting.clone();
		move |panic_msg| {
-			let _ = stdio::stderr().write_all(panic_msg.as_bytes());
+			eprintln!("{}", panic_msg);
			if !exiting.swap(true, Ordering::SeqCst) {
				*e.0.lock() = ExitStatus {
					panicking: true,
@@ -350,7 +348,7 @@ fn main_direct(force_can_restart: bool) -> i32 {
			if let Some(mut handle) = handle {
				handle.detach_with_msg(format!("{}", Colour::Red.paint(&err)))
			}
-			writeln!(&mut stdio::stderr(), "{}", err).expect("StdErr available; qed");
+			eprintln!("{}", err);
			1
		},
	};
@@ -45,6 +45,9 @@ pub enum SpecType {
	Morden,
	Ropsten,
	Kovan,
+	Rinkeby,
+	Goerli,
+	Kotti,
	Sokol,
	Dev,
	Custom(String),
@@ -73,6 +76,9 @@ impl str::FromStr for SpecType {
			"morden" | "classic-testnet" => SpecType::Morden,
			"ropsten" => SpecType::Ropsten,
			"kovan" | "testnet" => SpecType::Kovan,
+			"rinkeby" => SpecType::Rinkeby,
+			"goerli" | "görli" => SpecType::Goerli,
+			"kotti" => SpecType::Kotti,
			"sokol" | "poasokol" => SpecType::Sokol,
			"dev" => SpecType::Dev,
			other => SpecType::Custom(other.into()),
@@ -96,6 +102,9 @@ impl fmt::Display for SpecType {
			SpecType::Morden => "morden",
			SpecType::Ropsten => "ropsten",
			SpecType::Kovan => "kovan",
+			SpecType::Rinkeby => "rinkeby",
+			SpecType::Goerli => "goerli",
+			SpecType::Kotti => "kotti",
			SpecType::Sokol => "sokol",
			SpecType::Dev => "dev",
			SpecType::Custom(ref custom) => custom,
@@ -119,6 +128,9 @@ impl SpecType {
			SpecType::Morden => Ok(ethereum::new_morden(params)),
			SpecType::Ropsten => Ok(ethereum::new_ropsten(params)),
			SpecType::Kovan => Ok(ethereum::new_kovan(params)),
+			SpecType::Rinkeby => Ok(ethereum::new_rinkeby(params)),
+			SpecType::Goerli => Ok(ethereum::new_goerli(params)),
+			SpecType::Kotti => Ok(ethereum::new_kotti(params)),
			SpecType::Sokol => Ok(ethereum::new_sokol(params)),
			SpecType::Dev => Ok(Spec::new_instant()),
			SpecType::Custom(ref filename) => {
@@ -375,6 +387,10 @@ mod tests {
		assert_eq!(SpecType::Ropsten, "ropsten".parse().unwrap());
		assert_eq!(SpecType::Kovan, "kovan".parse().unwrap());
		assert_eq!(SpecType::Kovan, "testnet".parse().unwrap());
+		assert_eq!(SpecType::Rinkeby, "rinkeby".parse().unwrap());
+		assert_eq!(SpecType::Goerli, "goerli".parse().unwrap());
+		assert_eq!(SpecType::Goerli, "görli".parse().unwrap());
+		assert_eq!(SpecType::Kotti, "kotti".parse().unwrap());
		assert_eq!(SpecType::Sokol, "sokol".parse().unwrap());
		assert_eq!(SpecType::Sokol, "poasokol".parse().unwrap());
	}
@@ -398,6 +414,9 @@ mod tests {
		assert_eq!(format!("{}", SpecType::Morden), "morden");
		assert_eq!(format!("{}", SpecType::Ropsten), "ropsten");
		assert_eq!(format!("{}", SpecType::Kovan), "kovan");
+		assert_eq!(format!("{}", SpecType::Rinkeby), "rinkeby");
+		assert_eq!(format!("{}", SpecType::Goerli), "goerli");
+		assert_eq!(format!("{}", SpecType::Kotti), "kotti");
		assert_eq!(format!("{}", SpecType::Sokol), "sokol");
		assert_eq!(format!("{}", SpecType::Dev), "dev");
		assert_eq!(format!("{}", SpecType::Custom("foo/bar".into())), "foo/bar");
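The hunks above extend a matched `FromStr`/`Display` pair: every alias parses to one canonical variant, and `Display` prints the canonical name back. A cut-down, self-contained version of that pattern (a hypothetical `Chain` enum, not Parity's actual `SpecType`):

```rust
use std::fmt;
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum Chain {
    Goerli,
    Kovan,
    Custom(String),
}

impl FromStr for Chain {
    type Err = ();
    fn from_str(s: &str) -> Result<Self, ()> {
        // Aliases (including the non-ASCII "görli") map to one variant;
        // anything unrecognised falls through to Custom.
        Ok(match s {
            "goerli" | "görli" => Chain::Goerli,
            "kovan" | "testnet" => Chain::Kovan,
            other => Chain::Custom(other.into()),
        })
    }
}

impl fmt::Display for Chain {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        f.write_str(match self {
            Chain::Goerli => "goerli",
            Chain::Kovan => "kovan",
            Chain::Custom(ref name) => name,
        })
    }
}

fn main() {
    assert_eq!("görli".parse::<Chain>(), Ok(Chain::Goerli));
    assert_eq!("testnet".parse::<Chain>(), Ok(Chain::Kovan));
    assert_eq!(format!("{}", Chain::Goerli), "goerli");
}
```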
@@ -252,6 +252,7 @@ pub struct FullDependencies {
	pub gas_price_percentile: usize,
	pub poll_lifetime: u32,
	pub allow_missing_blocks: bool,
+	pub no_ancient_blocks: bool,
}

impl FullDependencies {
@@ -283,7 +284,7 @@ impl FullDependencies {
				handler.extend_with(DebugClient::new(self.client.clone()).to_delegate());
			}
			Api::Web3 => {
-				handler.extend_with(Web3Client::new().to_delegate());
+				handler.extend_with(Web3Client::default().to_delegate());
			}
			Api::Net => {
				handler.extend_with(NetClient::new(&self.sync).to_delegate());
@@ -303,6 +304,7 @@ impl FullDependencies {
					gas_price_percentile: self.gas_price_percentile,
					allow_missing_blocks: self.allow_missing_blocks,
					allow_experimental_rpcs: self.experimental_rpcs,
+					no_ancient_blocks: self.no_ancient_blocks
				}
			);
			handler.extend_with(client.to_delegate());
@@ -529,7 +531,7 @@ impl<C: LightChainClient + 'static> LightDependencies<C> {
				warn!(target: "rpc", "Debug API is not available in light client mode.")
			}
			Api::Web3 => {
-				handler.extend_with(Web3Client::new().to_delegate());
+				handler.extend_with(Web3Client::default().to_delegate());
			}
			Api::Net => {
				handler.extend_with(light::NetClient::new(self.sync.clone()).to_delegate());
@@ -295,17 +295,6 @@ fn execute_light_impl<Cr>(cmd: RunCmd, logger: Arc<RotatingLogger>, on_client_rq
	// spin up event loop
	let runtime = Runtime::with_default_thread_count();

-	// queue cull service.
-	let queue_cull = Arc::new(::light_helpers::QueueCull {
-		client: client.clone(),
-		sync: light_sync.clone(),
-		on_demand: on_demand.clone(),
-		txq: txq.clone(),
-		executor: runtime.executor(),
-	});
-
-	service.register_handler(queue_cull).map_err(|e| format!("Error attaching service: {:?}", e))?;
-
	// start the network.
	light_sync.start_network();
@@ -753,6 +742,7 @@ fn execute_impl<Cr, Rr>(cmd: RunCmd, logger: Arc<RotatingLogger>, on_client_rq:
		gas_price_percentile: cmd.gas_price_percentile,
		poll_lifetime: cmd.poll_lifetime,
		allow_missing_blocks: cmd.allow_missing_blocks,
+		no_ancient_blocks: !cmd.download_old_blocks,
	});

	let dependencies = rpc::Dependencies {
@@ -903,17 +893,27 @@ impl RunningClient {
			// Create a weak reference to the client so that we can wait on shutdown
			// until it is dropped
			let weak_client = Arc::downgrade(&client);
-			// Shutdown and drop the ServiceClient
+			// Shutdown and drop the ClientService
			client_service.shutdown();
+			trace!(target: "shutdown", "ClientService shut down");
			drop(client_service);
+			trace!(target: "shutdown", "ClientService dropped");
			// drop this stuff as soon as exit detected.
			drop(rpc);
+			trace!(target: "shutdown", "RPC dropped");
			drop(keep_alive);
+			trace!(target: "shutdown", "KeepAlive dropped");
			// to make sure timer does not spawn requests while shutdown is in progress
			informant.shutdown();
+			trace!(target: "shutdown", "Informant shut down");
			// just Arc is dropping here, to allow other reference release in its default time
			drop(informant);
+			trace!(target: "shutdown", "Informant dropped");
			drop(client);
+			trace!(target: "shutdown", "Client dropped");
+			// This may help when debugging ref cycles. Requires nightly-only `#![feature(weak_counts)]`
+			// trace!(target: "shutdown", "Waiting for refs to Client to shutdown, strong_count={:?}, weak_count={:?}", weak_client.strong_count(), weak_client.weak_count());
+			trace!(target: "shutdown", "Waiting for refs to Client to shutdown");
			wait_for_drop(weak_client);
		}
	}
@@ -947,24 +947,30 @@ fn print_running_environment(data_dir: &str, dirs: &Directories, db_dirs: &Datab
}

fn wait_for_drop<T>(w: Weak<T>) {
-	let sleep_duration = Duration::from_secs(1);
-	let warn_timeout = Duration::from_secs(60);
-	let max_timeout = Duration::from_secs(300);
+	const SLEEP_DURATION: Duration = Duration::from_secs(1);
+	const WARN_TIMEOUT: Duration = Duration::from_secs(60);
+	const MAX_TIMEOUT: Duration = Duration::from_secs(300);

	let instant = Instant::now();
	let mut warned = false;

-	while instant.elapsed() < max_timeout {
+	while instant.elapsed() < MAX_TIMEOUT {
		if w.upgrade().is_none() {
			return;
		}

-		if !warned && instant.elapsed() > warn_timeout {
+		if !warned && instant.elapsed() > WARN_TIMEOUT {
			warned = true;
			warn!("Shutdown is taking longer than expected.");
		}

-		thread::sleep(sleep_duration);
+		thread::sleep(SLEEP_DURATION);

+		// When debugging shutdown issues on a nightly build it can help to enable this with the
+		// `#![feature(weak_counts)]` added to lib.rs (TODO: enable when
+		// https://github.com/rust-lang/rust/issues/57977 is stable)
+		// trace!(target: "shutdown", "Waiting for client to drop, strong_count={:?}, weak_count={:?}", w.strong_count(), w.weak_count())
		trace!(target: "shutdown", "Waiting for client to drop");
	}

	warn!("Shutdown timeout reached, exiting uncleanly.");
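`wait_for_drop` polls a `Weak` handle until the last strong reference is gone, warning partway through and finally giving up at a hard timeout. A compressed, runnable version of the same loop (much shorter durations, and a `bool` return added here purely for demonstration):

```rust
use std::sync::{Arc, Weak};
use std::thread;
use std::time::{Duration, Instant};

// Same shape as the patched `wait_for_drop`: poll a Weak handle until every
// strong reference is dropped, bounded by a hard timeout.
fn wait_for_drop<T>(w: Weak<T>, max_timeout: Duration) -> bool {
    let start = Instant::now();
    while start.elapsed() < max_timeout {
        if w.upgrade().is_none() {
            return true; // last Arc dropped; shutdown can complete
        }
        thread::sleep(Duration::from_millis(10));
    }
    false // timed out: the caller exits uncleanly
}

fn main() {
    let client = Arc::new("client");
    let weak = Arc::downgrade(&client);
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50));
        drop(client); // releases the final strong reference
    });
    assert!(wait_for_drop(weak, Duration::from_secs(2)));
}
```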
@@ -261,7 +261,7 @@ impl SnapshotCommand {
			let cur_size = p.size();
			if cur_size != last_size {
				last_size = cur_size;
-				let bytes = ::informant::format_bytes(p.size());
+				let bytes = ::informant::format_bytes(cur_size as usize);
				info!("Snapshot: {} accounts {} blocks {}", p.accounts(), p.blocks(), bytes);
			}
@@ -38,7 +38,6 @@ common-types = { path = "../ethcore/types" }
ethash = { path = "../ethash" }
ethcore = { path = "../ethcore", features = ["test-helpers"] }
ethcore-accounts = { path = "../accounts", optional = true }
-ethcore-io = { path = "../util/io" }
ethcore-light = { path = "../ethcore/light" }
ethcore-logger = { path = "../parity/logger" }
ethcore-miner = { path = "../miner" }

@@ -59,7 +58,6 @@ keccak-hash = "0.1.2"
parity-runtime = { path = "../util/runtime" }
parity-updater = { path = "../updater" }
parity-version = { path = "../util/version" }
-trie-db = "0.11.0"
rlp = { version = "0.3.0", features = ["ethereum"] }
stats = { path = "../util/stats" }
vm = { path = "../ethcore/vm" }

@@ -67,9 +65,9 @@ vm = { path = "../ethcore/vm" }
[dev-dependencies]
ethcore = { path = "../ethcore", features = ["test-helpers"] }
ethcore-accounts = { path = "../accounts" }
+ethcore-io = { path = "../util/io" }
ethcore-network = { path = "../util/network" }
fake-fetch = { path = "../util/fake-fetch" }
kvdb-memorydb = "0.1"
macros = { path = "../util/macros" }
pretty_assertions = "0.1"
transaction-pool = "2.0"
@@ -51,7 +51,7 @@ const TIME_THRESHOLD: u64 = 7;
/// minimal length of hash
const TOKEN_LENGTH: usize = 16;
/// Separator between fields in serialized tokens file.
-const SEPARATOR: &'static str = ";";
+const SEPARATOR: &str = ";";
/// Number of seconds to keep unused tokens.
const UNUSED_TOKEN_TIMEOUT: u64 = 3600 * 24; // a day
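In a `const` item the `'static` lifetime is implied, so the explicit annotation is redundant (Clippy's `redundant_static_lifetimes` lint); both spellings denote the same type:

```rust
// The `'static` on the first declaration is implied on the second;
// these are the same type and the same value.
const OLD_STYLE: &'static str = ";";
const NEW_STYLE: &str = ";";

fn main() {
    assert_eq!(OLD_STYLE, NEW_STYLE);
    let fields: Vec<&str> = "a;b;c".split(NEW_STYLE).collect();
    assert_eq!(fields, ["a", "b", "c"]);
}
```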
@@ -115,7 +115,7 @@ impl AuthCodes<DefaultTimeProvider> {
			})
			.collect();
		Ok(AuthCodes {
-			codes: codes,
+			codes,
			now: time_provider,
		})
	}
@@ -128,7 +128,7 @@ impl<T: TimeProvider> AuthCodes<T> {
	pub fn to_file(&self, file: &Path) -> io::Result<()> {
		let mut file = fs::File::create(file)?;
		let content = self.codes.iter().map(|code| {
-			let mut data = vec![code.code.clone(), encode_time(code.created_at.clone())];
+			let mut data = vec![code.code.clone(), encode_time(code.created_at)];
			if let Some(used_at) = code.last_used_at {
				data.push(encode_time(used_at));
			}
@@ -141,11 +141,11 @@ impl<T: TimeProvider> AuthCodes<T> {
	pub fn new(codes: Vec<String>, now: T) -> Self {
		AuthCodes {
			codes: codes.into_iter().map(|code| Code {
-				code: code,
+				code,
				created_at: time::Duration::from_secs(now.now()),
				last_used_at: None,
			}).collect(),
-			now: now,
+			now,
		}
	}
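The `code: code` and `now: now` changes use field-init shorthand, available since Rust 1.17: when a local variable and a struct field share a name, the repetition can be dropped. For example:

```rust
struct Code {
    code: String,
    created_at: u64,
}

fn build(code: String, created_at: u64) -> Code {
    // `code: code, created_at: created_at` collapses to the shorthand
    // because the locals and the fields share names.
    Code { code, created_at }
}

fn main() {
    let c = build("1234".to_string(), 7);
    assert_eq!(c.code, "1234");
    assert_eq!(c.created_at, 7);
}
```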
@@ -183,7 +183,7 @@ impl<T: TimeProvider> AuthCodes<T> {
			.join("-");
		trace!(target: "signer", "New authentication token generated.");
		self.codes.push(Code {
-			code: code,
+			code,
			created_at: time::Duration::from_secs(self.now.now()),
			last_used_at: None,
		});
@@ -44,7 +44,7 @@ impl<M, T> http::MetaExtractor<M> for MetaExtractor<T> where
{
	fn read_metadata(&self, req: &hyper::Request<hyper::Body>) -> M {
		let as_string = |header: Option<&hyper::header::HeaderValue>| {
-			header.and_then(|val| val.to_str().ok().map(|s| s.to_owned()))
+			header.and_then(|val| val.to_str().ok().map(ToOwned::to_owned))
		};

		let origin = as_string(req.headers().get("origin"));
@@ -16,7 +16,26 @@

//! Parity RPC.

-#![warn(missing_docs)]
+#![warn(missing_docs, unused_extern_crates)]
+#![cfg_attr(feature = "cargo-clippy", warn(clippy::all, clippy::pedantic))]
+#![cfg_attr(
+	feature = "cargo-clippy",
+	allow(
+		// things are often more readable this way
+		clippy::cast_lossless,
+		clippy::module_name_repetitions,
+		clippy::single_match_else,
+		clippy::type_complexity,
+		clippy::use_self,
+		// not practical
+		clippy::match_bool,
+		clippy::needless_pass_by_value,
+		clippy::similar_names,
+		// don't require markdown syntax for docs
+		clippy::doc_markdown,
+	),
+	warn(clippy::indexing_slicing)
+)]

#[macro_use]
extern crate futures;
@@ -32,7 +51,6 @@ extern crate rustc_hex;
extern crate semver;
extern crate serde;
extern crate serde_json;
-extern crate tiny_keccak;
extern crate tokio_timer;
extern crate transient_hashmap;
@@ -48,7 +66,6 @@ extern crate ethcore;
extern crate fastmap;
extern crate parity_bytes as bytes;
extern crate parity_crypto as crypto;
-extern crate ethcore_io as io;
extern crate ethcore_light as light;
extern crate ethcore_logger;
extern crate ethcore_miner as miner;
@@ -63,15 +80,18 @@ extern crate keccak_hash as hash;
extern crate parity_runtime;
extern crate parity_updater as updater;
extern crate parity_version as version;
extern crate trie_db as trie;
extern crate eip_712;
extern crate rlp;
extern crate stats;
-extern crate tempdir;
extern crate vm;

#[cfg(any(test, feature = "ethcore-accounts"))]
extern crate ethcore_accounts as accounts;

+#[cfg(any(test, feature = "ethcore-accounts"))]
+extern crate tiny_keccak;
+
#[macro_use]
extern crate log;
#[macro_use]
@@ -90,13 +110,11 @@ extern crate pretty_assertions;
#[macro_use]
extern crate macros;

#[cfg(test)]
extern crate kvdb_memorydb;

#[cfg(test)]
extern crate fake_fetch;

+extern crate tempdir;
+#[cfg(test)]
+extern crate ethcore_io as io;

pub extern crate jsonrpc_ws_server as ws;
@@ -146,8 +164,8 @@ pub fn start_http<M, S, H, T>(
	Ok(http::ServerBuilder::with_meta_extractor(handler, extractor)
		.keep_alive(keep_alive)
		.threads(threads)
-		.cors(cors_domains.into())
-		.allowed_hosts(allowed_hosts.into())
+		.cors(cors_domains)
+		.allowed_hosts(allowed_hosts)
		.health_api(("/api/health", "parity_nodeStatus"))
		.cors_allow_headers(AccessControlAllowHeaders::Any)
		.max_request_body_size(max_payload * 1024 * 1024)
@@ -177,8 +195,8 @@ pub fn start_http_with_middleware<M, S, H, T, R>(
	Ok(http::ServerBuilder::with_meta_extractor(handler, extractor)
		.keep_alive(keep_alive)
		.threads(threads)
-		.cors(cors_domains.into())
-		.allowed_hosts(allowed_hosts.into())
+		.cors(cors_domains)
+		.allowed_hosts(allowed_hosts)
		.cors_allow_headers(AccessControlAllowHeaders::Any)
		.max_request_body_size(max_payload * 1024 * 1024)
		.request_middleware(middleware)
@@ -39,7 +39,7 @@ impl<T> Server<T> {

		Server {
			server: f(remote),
-			event_loop: event_loop,
+			event_loop,
		}
	}
}

@@ -60,8 +60,8 @@ pub struct GuardedAuthCodes {
	pub path: PathBuf,
}

-impl GuardedAuthCodes {
-	pub fn new() -> Self {
+impl Default for GuardedAuthCodes {
+	fn default() -> Self {
		let tempdir = TempDir::new("").unwrap();
		let path = tempdir.path().join("file");
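Replacing an argument-less `new` with a `Default` impl follows Clippy's `new_without_default` lint: callers write `GuardedAuthCodes::default()` and the type composes with everything else that expects `Default`. A minimal sketch (the struct and path literal are illustrative, not Parity's code):

```rust
use std::path::PathBuf;

struct GuardedCodes {
    path: PathBuf,
}

// Clippy's `new_without_default` prefers `Default` over a no-argument `new`,
// so the constructor moves into the trait impl.
impl Default for GuardedCodes {
    fn default() -> Self {
        GuardedCodes { path: PathBuf::from("authcodes") }
    }
}

fn main() {
    let codes = GuardedCodes::default();
    assert_eq!(codes.path, PathBuf::from("authcodes"));
}
```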
@@ -30,7 +30,7 @@ pub struct Response {
impl Response {
	pub fn assert_header(&self, header: &str, value: &str) {
		let header = format!("{}: {}", header, value);
-		assert!(self.headers.iter().find(|h| *h == &header).is_some(), "Couldn't find header {} in {:?}", header, &self.headers)
+		assert!(self.headers.iter().any(|h| h == &header), "Couldn't find header {} in {:?}", header, &self.headers)
	}

	pub fn assert_status(&self, status: &str) {
@@ -98,35 +98,35 @@ pub fn request(address: &SocketAddr, request: &str) -> Response {
	let mut lines = response.lines();
	let status = lines.next().expect("Expected a response").to_owned();
	let headers_raw = read_block(&mut lines, false);
-	let headers = headers_raw.split('\n').map(|v| v.to_owned()).collect();
+	let headers = headers_raw.split('\n').map(ToOwned::to_owned).collect();
	let body = read_block(&mut lines, true);

	Response {
-		status: status,
-		headers: headers,
-		headers_raw: headers_raw,
-		body: body,
+		status,
+		headers,
+		headers_raw,
+		body,
	}
}

/// Check if all required security headers are present
pub fn assert_security_headers_present(headers: &[String], port: Option<u16>) {
-	if let None = port {
+	if port.is_none() {
		assert!(
-			headers.iter().find(|header| header.as_str() == "X-Frame-Options: SAMEORIGIN").is_some(),
+			headers.iter().any(|header| header.as_str() == "X-Frame-Options: SAMEORIGIN"),
			"X-Frame-Options: SAMEORIGIN missing: {:?}", headers
		);
	}
	assert!(
-		headers.iter().find(|header| header.as_str() == "X-XSS-Protection: 1; mode=block").is_some(),
+		headers.iter().any(|header| header.as_str() == "X-XSS-Protection: 1; mode=block"),
		"X-XSS-Protection missing: {:?}", headers
	);
	assert!(
-		headers.iter().find(|header| header.as_str() == "X-Content-Type-Options: nosniff").is_some(),
+		headers.iter().any(|header| header.as_str() == "X-Content-Type-Options: nosniff"),
		"X-Content-Type-Options missing: {:?}", headers
	);
	assert!(
-		headers.iter().find(|header| header.starts_with("Content-Security-Policy: ")).is_some(),
+		headers.iter().any(|header| header.starts_with("Content-Security-Policy: ")),
		"Content-Security-Policy missing: {:?}", headers
	)
}
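`iter().find(..).is_some()` builds an `Option` only to throw it away; `iter().any(..)` short-circuits the same way (Clippy's `search_is_some` lint) and states the intent directly. For example:

```rust
fn main() {
    let headers = vec![
        "X-Frame-Options: SAMEORIGIN".to_string(),
        "X-Content-Type-Options: nosniff".to_string(),
    ];
    // Old style: produce an Option, then discard it.
    let found_old = headers.iter().find(|h| h.starts_with("X-Frame")).is_some();
    // New style: `any` stops at the first match and returns a bool directly.
    let found_new = headers.iter().any(|h| h.starts_with("X-Frame"));
    assert_eq!(found_old, found_new);
    assert!(found_new);
    assert!(!headers.iter().any(|h| h.starts_with("Content-Security-Policy")));
}
```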
@@ -29,7 +29,7 @@ use tests::http_client;
pub fn serve() -> (Server<ws::Server>, usize, GuardedAuthCodes) {
	let address = "127.0.0.1:0".parse().unwrap();
	let io = MetaIoHandler::default();
-	let authcodes = GuardedAuthCodes::new();
+	let authcodes = GuardedAuthCodes::default();
	let stats = Arc::new(informant::RpcStats::default());

	let res = Server::new(|_| ::start_ws(
@@ -41,8 +41,8 @@ impl HttpMetaExtractor for RpcExtractor {
 		Metadata {
 			origin: Origin::Rpc(
 				format!("{} / {}",
-					origin.unwrap_or("unknown origin".to_string()),
-					user_agent.unwrap_or("unknown agent".to_string()))
+					origin.unwrap_or_else(|| "unknown origin".to_string()),
+					user_agent.unwrap_or_else(|| "unknown agent".to_string()))
 			),
 			session: None,
 		}
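The change above replaces `unwrap_or` with `unwrap_or_else`, which is what clippy's `or_fun_call` lint recommends: the eager form allocates the default string even when the `Option` is `Some`. A self-contained sketch of the difference (function names here are illustrative):

```rust
fn expensive_default() -> String {
    // Stand-in for an allocation or computation we only want on the `None` path.
    "unknown origin".to_string()
}

fn describe(origin: Option<String>) -> String {
    // `origin.unwrap_or(expensive_default())` would evaluate the default
    // before `unwrap_or` is even called; `unwrap_or_else` takes a closure
    // (or function pointer) that only runs when `origin` is `None`.
    origin.unwrap_or_else(expensive_default)
}

fn main() {
    assert_eq!(describe(Some("rpc".to_string())), "rpc");
    assert_eq!(describe(None), "unknown origin");
}
```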
@@ -67,7 +67,7 @@ impl WsExtractor {
 	/// Creates new `WsExtractor` with given authcodes path.
 	pub fn new(path: Option<&Path>) -> Self {
 		WsExtractor {
-			authcodes_path: path.map(|p| p.to_owned()),
+			authcodes_path: path.map(ToOwned::to_owned),
 		}
 	}
 }
@@ -80,7 +80,7 @@ impl ws::MetaExtractor<Metadata> for WsExtractor {
 			Some(ref path) => {
 				let authorization = req.protocols.get(0).and_then(|p| auth_token_hash(&path, p, true));
 				match authorization {
-					Some(id) => Origin::Signer { session: id.into() },
+					Some(id) => Origin::Signer { session: id },
 					None => Origin::Ws { session: id.into() },
 				}
 			},
@@ -186,7 +186,7 @@ impl WsStats {
 	/// Creates new WS usage tracker.
 	pub fn new(stats: Arc<RpcStats>) -> Self {
 		WsStats {
-			stats: stats,
+			stats,
 		}
 	}
 }
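Several hunks in this diff (`status: status`, `stats: stats`, `full_handler: full_handler`) apply Rust's field init shorthand, stabilized in Rust 1.17: when a local binding and a struct field share a name, the `field: value` pair collapses to just the name. A minimal sketch with simplified stand-in types (the real `RpcStats` has more fields):

```rust
use std::sync::Arc;

#[derive(Default)]
struct RpcStats {
    requests: u64,
}

struct WsStats {
    stats: Arc<RpcStats>,
}

impl WsStats {
    fn new(stats: Arc<RpcStats>) -> Self {
        // Shorthand for `WsStats { stats: stats }` -- identical meaning,
        // just less repetition; clippy flags the long form as `redundant_field_names`.
        WsStats { stats }
    }
}

fn main() {
    let ws = WsStats::new(Arc::new(RpcStats::default()));
    assert_eq!(ws.stats.requests, 0);
}
```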
@@ -210,7 +210,7 @@ impl<M: core::Middleware<Metadata>> WsDispatcher<M> {
 	/// Create new `WsDispatcher` with given full handler.
 	pub fn new(full_handler: core::MetaIoHandler<Metadata, M>) -> Self {
 		WsDispatcher {
-			full_handler: full_handler,
+			full_handler,
 		}
 	}
 }
@@ -229,7 +229,7 @@ impl<M: core::Middleware<Metadata>> core::Middleware<Metadata> for WsDispatcher<
 		X: core::futures::Future<Item=Option<core::Response>, Error=()> + Send + 'static,
 	{
 		let use_full = match &meta.origin {
-			&Origin::Signer { .. } => true,
+			Origin::Signer { .. } => true,
 			_ => false,
 		};
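The last hunk drops the `&` from the match pattern, relying on "match ergonomics" (RFC 2005, stabilized in Rust 1.26): when matching on a reference, patterns no longer need explicit `&` and bindings become references automatically. A self-contained sketch with a simplified stand-in `Origin` enum (the real one carries session tokens, not integers):

```rust
enum Origin {
    Signer { session: u64 },
    Ws { session: u64 },
}

fn use_full_handler(origin: &Origin) -> bool {
    // `origin` is `&Origin`, yet the arms match plain `Origin::..` patterns;
    // before RFC 2005 this had to be written `&Origin::Signer { .. } => true`.
    match origin {
        Origin::Signer { .. } => true,
        _ => false,
    }
}

fn main() {
    assert!(use_full_handler(&Origin::Signer { session: 1 }));
    assert!(!use_full_handler(&Origin::Ws { session: 2 }));
}
```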