Compare commits

15 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 7d1415a253 | |
| | 7aab6b74da | |
| | ebd0fd0117 | |
| | 0d63c932af | |
| | 0b78a1b5a0 | |
| | 0e95db11d4 | |
| | 7f3a72bde1 | |
| | 3b9b1a8f14 | |
| | a6c4b17303 | |
| | 678138f097 | |
| | b4e4038fb5 | |
| | 7a8e5976bc | |
| | 938c8d8bcd | |
| | 3aefa2b960 | |
| | 10657d96c4 | |
```diff
@@ -247,7 +247,6 @@ publish-awss3-release:
 
 publish-docs:
   stage: publish
-  # <<: *no_git
   only:
     - tags
   except:
```
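The removed line was a commented-out YAML merge key (`<<: *no_git`). As an illustration only — the `no_git` anchor is defined elsewhere in the CI file and is not shown in this hunk, so its contents below are assumed — re-enabling such a line merges the anchored mapping into the job:

```yaml
# Hypothetical anchor definition, assumed for illustration; the real
# `no_git` anchor lives elsewhere in the CI configuration.
.no-git-template: &no_git
  variables:
    GIT_STRATEGY: none   # skip cloning the repository for this job

publish-docs:
  stage: publish
  <<: *no_git            # merge the anchored keys into this job
  only:
    - tags
```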
CHANGELOG.md (409 changes)

`@@ -1,163 +1,276 @@`

**New text (v2.3.0 entry added):**

## Parity-Ethereum [v2.3.0](https://github.com/paritytech/parity-ethereum/releases/tag/v2.3.0) (2019-01-16)

Parity-Ethereum 2.3.0-beta is a consensus-relevant security release that reverts Constantinople on the Ethereum network. Upgrading is mandatory for Ethereum, and strongly recommended for other networks.

- **Consensus** - Ethereum Network: Pull Constantinople protocol upgrade on Ethereum (#10189)
  - Read more: [Security Alert: Ethereum Constantinople Postponement](https://blog.ethereum.org/2019/01/15/security-alert-ethereum-constantinople-postponement/)
- **Networking** - All networks: Ping nodes from discovery (#10167)
- **Wasm** - Kovan Network: Update pwasm-utils to 0.6.1 (#10134)

Other notable changes:

- Existing blocks in the database are now kept when restoring a Snapshot. (#8643)
- Block and transaction propagation is improved significantly. (#9954)
- The ERC-191 Signed Data Standard is now supported by `personal_sign191`. (#9701)
- Add support for ERC-191/712 `eth_signTypedData` as a standard for machine-verifiable and human-readable typed data signing with Ethereum keys. (#9631)
- Add support for ERC-1186 `eth_getProof` (#9001)
- Add experimental RPCs flag to enable ERC-191, ERC-712, and ERC-1186 APIs via `--jsonrpc-experimental` (#9928)
- Make `CALLCODE` to trace value to be the code address. (#9881)

Configuration changes:

- The EIP-98 transition is now disabled by default. If you previously had no `eip98transition` specified in your chain specification, you should now enable it manually on block `0x0`. (#9955)
- Also, unknown fields in chain specs are now rejected. (#9972)
- The Tendermint engine was removed from Parity Ethereum and is no longer available or maintained. (#9980)
- Ropsten testnet data and keys moved from `test/` to `ropsten/` subdir. To reuse your old keys and data, either copy or symlink them to the new location. (#10123)
- Strict empty steps validation (#10041)
  - If you have a chain with `empty_steps` already running, some blocks most likely contain non-strict entries (unordered or duplicated empty steps). In this release `strict_empty_steps_transition` is enabled by default at block `0x0` for any chain with `empty_steps`.
  - If your network uses `empty_steps` you **must** (A) plan a hard fork and change `strict_empty_steps_transition` to the desired fork block and (B) update the clients of the whole network to 2.2.7-stable / 2.3.0-beta. If for some reason you don't want to do this, please set `strict_empty_steps_transition` to `0xfffffffff` to disable it.

_Note:_ This release marks Parity 2.3 as _beta_. All versions of Parity 2.2 are now considered _stable_.

The full list of included changes:

- Backports for 2.3.0 beta ([#10164](https://github.com/paritytech/parity-ethereum/pull/10164))
- Snap: fix path in script ([#10157](https://github.com/paritytech/parity-ethereum/pull/10157))
- Make sure parent block is not in importing queue when importing ancient blocks ([#10138](https://github.com/paritytech/parity-ethereum/pull/10138))
- Ci: re-enable snap publishing ([#10142](https://github.com/paritytech/parity-ethereum/pull/10142))
- Hf in POA Core (2019-01-18) - Constantinople ([#10155](https://github.com/paritytech/parity-ethereum/pull/10155))
- Update EWF's tobalaba chainspec ([#10152](https://github.com/paritytech/parity-ethereum/pull/10152))
- Replace ethcore-logger with env-logger. ([#10102](https://github.com/paritytech/parity-ethereum/pull/10102))
- Finality: dont require chain head to be in the chain ([#10054](https://github.com/paritytech/parity-ethereum/pull/10054))
- Remove caching for node connections ([#10143](https://github.com/paritytech/parity-ethereum/pull/10143))
- Blooms file iterator empty on out of range position. ([#10145](https://github.com/paritytech/parity-ethereum/pull/10145))
- Autogen docs for the "Configuring Parity Ethereum" wiki page. ([#10067](https://github.com/paritytech/parity-ethereum/pull/10067))
- Misc: bump license header to 2019 ([#10135](https://github.com/paritytech/parity-ethereum/pull/10135))
- Hide most of the logs from cpp example. ([#10139](https://github.com/paritytech/parity-ethereum/pull/10139))
- Don't try to send oversized packets ([#10042](https://github.com/paritytech/parity-ethereum/pull/10042))
- Private tx enabled flag added into STATUS packet ([#9999](https://github.com/paritytech/parity-ethereum/pull/9999))
- Update pwasm-utils to 0.6.1 ([#10134](https://github.com/paritytech/parity-ethereum/pull/10134))
- Extract blockchain from ethcore ([#10114](https://github.com/paritytech/parity-ethereum/pull/10114))
- Ethcore: update hardcoded headers ([#10123](https://github.com/paritytech/parity-ethereum/pull/10123))
- Identity fix ([#10128](https://github.com/paritytech/parity-ethereum/pull/10128))
- Use LenCachingMutex to optimize verification. ([#10117](https://github.com/paritytech/parity-ethereum/pull/10117))
- Pyethereum keystore support ([#9710](https://github.com/paritytech/parity-ethereum/pull/9710))
- Bump rocksdb-sys to 0.5.5 ([#10124](https://github.com/paritytech/parity-ethereum/pull/10124))
- Parity-clib: `async C bindings to RPC requests` + `subscribe/unsubscribe to websocket events` ([#9920](https://github.com/paritytech/parity-ethereum/pull/9920))
- Refactor (hardware wallet): reduce the number of threads ([#9644](https://github.com/paritytech/parity-ethereum/pull/9644))
- Hf in POA Sokol (2019-01-04) ([#10077](https://github.com/paritytech/parity-ethereum/pull/10077))
- Fix broken links ([#10119](https://github.com/paritytech/parity-ethereum/pull/10119))
- Follow-up to [#10105](https://github.com/paritytech/parity-ethereum/issues/10105) ([#10107](https://github.com/paritytech/parity-ethereum/pull/10107))
- Move EIP-712 crate back to parity-ethereum ([#10106](https://github.com/paritytech/parity-ethereum/pull/10106))
- Move a bunch of stuff around ([#10101](https://github.com/paritytech/parity-ethereum/pull/10101))
- Revert "Add --frozen when running cargo ([#10081](https://github.com/paritytech/parity-ethereum/pull/10081))" ([#10105](https://github.com/paritytech/parity-ethereum/pull/10105))
- Fix left over small grumbles on whitespaces ([#10084](https://github.com/paritytech/parity-ethereum/pull/10084))
- Add --frozen when running cargo ([#10081](https://github.com/paritytech/parity-ethereum/pull/10081))
- Fix pubsub new_blocks notifications to include all blocks ([#9987](https://github.com/paritytech/parity-ethereum/pull/9987))
- Update some dependencies for compilation with pc-windows-gnu ([#10082](https://github.com/paritytech/parity-ethereum/pull/10082))
- Fill transaction hash on ethGetLog of light client. ([#9938](https://github.com/paritytech/parity-ethereum/pull/9938))
- Update changelog update for 2.2.5-beta and 2.1.10-stable ([#10064](https://github.com/paritytech/parity-ethereum/pull/10064))
- Implement len caching for parking_lot RwLock ([#10032](https://github.com/paritytech/parity-ethereum/pull/10032))
- Update parking_lot to 0.7 ([#10050](https://github.com/paritytech/parity-ethereum/pull/10050))
- Bump crossbeam. ([#10048](https://github.com/paritytech/parity-ethereum/pull/10048))
- Ethcore: enable constantinople on ethereum ([#10031](https://github.com/paritytech/parity-ethereum/pull/10031))
- Strict empty steps validation ([#10041](https://github.com/paritytech/parity-ethereum/pull/10041))
- Center the Subtitle, use some CAPS ([#10034](https://github.com/paritytech/parity-ethereum/pull/10034))
- Change test miner max memory to malloc reports. ([#10024](https://github.com/paritytech/parity-ethereum/pull/10024))
- Sort the storage for private state ([#10018](https://github.com/paritytech/parity-ethereum/pull/10018))
- Fix: test corpus_inaccessible panic ([#10019](https://github.com/paritytech/parity-ethereum/pull/10019))
- Ci: move future releases to ethereum subdir on s3 ([#10017](https://github.com/paritytech/parity-ethereum/pull/10017))
- Light(on_demand): decrease default time window to 10 secs ([#10016](https://github.com/paritytech/parity-ethereum/pull/10016))
- Light client: failsafe crate (circuit breaker) ([#9790](https://github.com/paritytech/parity-ethereum/pull/9790))
- Lencachingmutex ([#9988](https://github.com/paritytech/parity-ethereum/pull/9988))
- Version and notification for private contract wrapper added ([#9761](https://github.com/paritytech/parity-ethereum/pull/9761))
- Handle failing case for update account cache in require ([#9989](https://github.com/paritytech/parity-ethereum/pull/9989))
- Add tokio runtime to ethcore io worker ([#9979](https://github.com/paritytech/parity-ethereum/pull/9979))
- Move daemonize before creating account provider ([#10003](https://github.com/paritytech/parity-ethereum/pull/10003))
- Docs: update changelogs ([#9990](https://github.com/paritytech/parity-ethereum/pull/9990))
- Fix daemonize ([#10000](https://github.com/paritytech/parity-ethereum/pull/10000))
- Fix Bloom migration ([#9992](https://github.com/paritytech/parity-ethereum/pull/9992))
- Remove tendermint engine support ([#9980](https://github.com/paritytech/parity-ethereum/pull/9980))
- Calculate gas for deployment transaction ([#9840](https://github.com/paritytech/parity-ethereum/pull/9840))
- Fix unstable peers and slowness in sync ([#9967](https://github.com/paritytech/parity-ethereum/pull/9967))
- Adds parity_verifySignature RPC method ([#9507](https://github.com/paritytech/parity-ethereum/pull/9507))
- Improve block and transaction propagation ([#9954](https://github.com/paritytech/parity-ethereum/pull/9954))
- Deny unknown fields for chainspec ([#9972](https://github.com/paritytech/parity-ethereum/pull/9972))
- Fix docker build ([#9971](https://github.com/paritytech/parity-ethereum/pull/9971))
- Ci: rearrange pipeline by logic ([#9970](https://github.com/paritytech/parity-ethereum/pull/9970))
- Add changelogs for 2.0.9, 2.1.4, 2.1.6, and 2.2.1 ([#9963](https://github.com/paritytech/parity-ethereum/pull/9963))
- Add Error message when sync is still in progress. ([#9475](https://github.com/paritytech/parity-ethereum/pull/9475))
- Make CALLCODE to trace value to be the code address ([#9881](https://github.com/paritytech/parity-ethereum/pull/9881))
- Fix light client informant while syncing ([#9932](https://github.com/paritytech/parity-ethereum/pull/9932))
- Add a optional json dump state to evm-bin ([#9706](https://github.com/paritytech/parity-ethereum/pull/9706))
- Disable EIP-98 transition by default ([#9955](https://github.com/paritytech/parity-ethereum/pull/9955))
- Remove secret_store runtimes. ([#9888](https://github.com/paritytech/parity-ethereum/pull/9888))
- Fix a deadlock ([#9952](https://github.com/paritytech/parity-ethereum/pull/9952))
- Chore(eip712): remove unused `failure-derive` ([#9958](https://github.com/paritytech/parity-ethereum/pull/9958))
- Do not use the home directory as the working dir in docker ([#9834](https://github.com/paritytech/parity-ethereum/pull/9834))
- Prevent silent errors in daemon mode, closes [#9367](https://github.com/paritytech/parity-ethereum/issues/9367) ([#9946](https://github.com/paritytech/parity-ethereum/pull/9946))
- Fix empty steps ([#9939](https://github.com/paritytech/parity-ethereum/pull/9939))
- Adjust requests costs for light client ([#9925](https://github.com/paritytech/parity-ethereum/pull/9925))
- Eip-1186: add `eth_getProof` RPC-Method ([#9001](https://github.com/paritytech/parity-ethereum/pull/9001))
- Missing blocks in filter_changes RPC ([#9947](https://github.com/paritytech/parity-ethereum/pull/9947))
- Allow rust-nightly builds fail in nightly builds ([#9944](https://github.com/paritytech/parity-ethereum/pull/9944))
- Update eth-secp256k1 to include fix for BSDs ([#9935](https://github.com/paritytech/parity-ethereum/pull/9935))
- Unbreak build on rust -stable ([#9934](https://github.com/paritytech/parity-ethereum/pull/9934))
- Keep existing blocks when restoring a Snapshot ([#8643](https://github.com/paritytech/parity-ethereum/pull/8643))
- Add experimental RPCs flag ([#9928](https://github.com/paritytech/parity-ethereum/pull/9928))
- Clarify poll lifetime ([#9922](https://github.com/paritytech/parity-ethereum/pull/9922))
- Docs(require rust 1.30) ([#9923](https://github.com/paritytech/parity-ethereum/pull/9923))
- Use block header for building finality ([#9914](https://github.com/paritytech/parity-ethereum/pull/9914))
- Simplify cargo audit ([#9918](https://github.com/paritytech/parity-ethereum/pull/9918))
- Light-fetch: Differentiate between out-of-gas/manual throw and use required gas from response on failure ([#9824](https://github.com/paritytech/parity-ethereum/pull/9824))
- Eip 191 ([#9701](https://github.com/paritytech/parity-ethereum/pull/9701))
- Fix(logger): `reqwest` no longer a dependency ([#9908](https://github.com/paritytech/parity-ethereum/pull/9908))
- Remove rust-toolchain file ([#9906](https://github.com/paritytech/parity-ethereum/pull/9906))
- Foundation: 6692865, ropsten: 4417537, kovan: 9363457 ([#9907](https://github.com/paritytech/parity-ethereum/pull/9907))
- Ethcore: use Machine::verify_transaction on parent block ([#9900](https://github.com/paritytech/parity-ethereum/pull/9900))
- Chore(rpc-tests): remove unused rand ([#9896](https://github.com/paritytech/parity-ethereum/pull/9896))
- Fix: Intermittent failing CI due to addr in use ([#9885](https://github.com/paritytech/parity-ethereum/pull/9885))
- Chore(bump docopt): 0.8 -> 1.0 ([#9889](https://github.com/paritytech/parity-ethereum/pull/9889))
- Use expect ([#9883](https://github.com/paritytech/parity-ethereum/pull/9883))
- Use Weak reference in PubSubClient ([#9886](https://github.com/paritytech/parity-ethereum/pull/9886))
- Ci: nuke the gitlab caches ([#9855](https://github.com/paritytech/parity-ethereum/pull/9855))
- Remove unused code ([#9884](https://github.com/paritytech/parity-ethereum/pull/9884))
- Fix json tracer overflow ([#9873](https://github.com/paritytech/parity-ethereum/pull/9873))
- Allow to seal work on latest block ([#9876](https://github.com/paritytech/parity-ethereum/pull/9876))
- Fix docker script ([#9854](https://github.com/paritytech/parity-ethereum/pull/9854))
- Health endpoint ([#9847](https://github.com/paritytech/parity-ethereum/pull/9847))
- Gitlab-ci: make android release build succeed ([#9743](https://github.com/paritytech/parity-ethereum/pull/9743))
- Clean up existing benchmarks ([#9839](https://github.com/paritytech/parity-ethereum/pull/9839))
- Update Callisto block reward code to support HF1 ([#9811](https://github.com/paritytech/parity-ethereum/pull/9811))
- Option to disable keep alive for JSON-RPC http transport ([#9848](https://github.com/paritytech/parity-ethereum/pull/9848))
- Classic.json Bootnode Update ([#9828](https://github.com/paritytech/parity-ethereum/pull/9828))
- Support MIX. ([#9767](https://github.com/paritytech/parity-ethereum/pull/9767))
- Ci: remove failing tests for android, windows, and macos ([#9788](https://github.com/paritytech/parity-ethereum/pull/9788))
- Implement NoProof for json tests and update tests reference (replaces [#9744](https://github.com/paritytech/parity-ethereum/issues/9744)) ([#9814](https://github.com/paritytech/parity-ethereum/pull/9814))
- Chore(bump regex) ([#9842](https://github.com/paritytech/parity-ethereum/pull/9842))
- Ignore global cache for patched accounts ([#9752](https://github.com/paritytech/parity-ethereum/pull/9752))
- Move state root verification before gas used ([#9841](https://github.com/paritytech/parity-ethereum/pull/9841))
- Fix(docker-aarch64): cross-compile config ([#9798](https://github.com/paritytech/parity-ethereum/pull/9798))
- Version: bump nightly to 2.3.0 ([#9819](https://github.com/paritytech/parity-ethereum/pull/9819))
- Tests modification for windows CI ([#9671](https://github.com/paritytech/parity-ethereum/pull/9671))
- Eip-712 implementation ([#9631](https://github.com/paritytech/parity-ethereum/pull/9631))
- Fix typo ([#9826](https://github.com/paritytech/parity-ethereum/pull/9826))
- Clean up serde rename and use rename_all = camelCase when possible ([#9823](https://github.com/paritytech/parity-ethereum/pull/9823))

**Old text (v2.2.x entries replaced):**

## Parity-Ethereum [v2.2.5](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.5) (2018-12-14)

Parity-Ethereum 2.2.5-beta is an important release that introduces the Constantinople fork at block 7,080,000 on Mainnet.

This release also contains a fix for chains using AuRa + EmptySteps. Read carefully if this applies to you.

If you have a chain with `empty_steps` already running, some blocks most likely contain non-strict entries (unordered or duplicated empty steps). In this release `strict_empty_steps_transition` **is enabled by default at block 0** for any chain with `empty_steps`.

If your network uses `empty_steps` you **must**:

- plan a hard fork and change `strict_empty_steps_transition` to the desired fork block
- update the clients of the whole network to 2.2.5-beta / 2.1.10-stable.

If for some reason you don't want to do this, please set `strict_empty_steps_transition` to `0xfffffffff` to disable it.

The full list of included changes:

- Backports for beta 2.2.5 ([#10047](https://github.com/paritytech/parity-ethereum/pull/10047))
- Bump beta to 2.2.5 ([#10047](https://github.com/paritytech/parity-ethereum/pull/10047))
- Fix empty steps ([#9939](https://github.com/paritytech/parity-ethereum/pull/9939))
  - Prevent sending empty step message twice
  - Prevent sending empty step and then block in the same step
  - Don't accept double empty steps
  - Do basic validation of self-sealed blocks
- Strict empty steps validation ([#10041](https://github.com/paritytech/parity-ethereum/pull/10041))
  - Enables strict verification of empty steps: there can be no duplicates, and empty steps should be ordered inside the seal.
  - Note that authorities won't produce invalid seals after [#9939](https://github.com/paritytech/parity-ethereum/pull/9939); this PR just adds verification to the seal to prevent forging incorrect blocks and potentially causing consensus issues.
  - This feature is enabled by default, so any AuRa + EmptySteps chain should set the `strict_empty_steps_transition` fork block number in their spec and upgrade to v2.2.5-beta or v2.1.10-stable.
- ethcore: enable constantinople on ethereum ([#10031](https://github.com/paritytech/parity-ethereum/pull/10031))
  - ethcore: change blockreward to 2e18 for foundation after constantinople
  - ethcore: delay diff bomb by 2e6 blocks for foundation after constantinople
  - ethcore: enable eip-{145,1014,1052,1283} for foundation after constantinople
- Change test miner max memory to malloc reports. ([#10024](https://github.com/paritytech/parity-ethereum/pull/10024))
- Fix: test corpus_inaccessible panic ([#10019](https://github.com/paritytech/parity-ethereum/pull/10019))

## Parity-Ethereum [v2.2.2](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.2) (2018-11-29)

Parity-Ethereum 2.2.2-beta is an exciting release. Among others, it improves sync performance, peering stability, block propagation, and transaction propagation times. Also, a warp-sync no longer removes existing blocks from the database, but rather reuses locally available information to decrease sync times and reduce required bandwidth.

Before upgrading to 2.2.2, please also verify the validity of your chain specs. Parity Ethereum now denies unknown fields in the specification. To do this, use the chainspec tool:

```
cargo build --release -p chainspec
./target/release/chainspec /path/to/spec.json
```

Last but not least, JSON-RPC APIs which are not yet accepted as an EIP in the `eth`, `personal`, or `web3` namespace are now considered experimental, as their final specification might change in the future. These APIs have to be enabled manually by explicitly passing `--jsonrpc-experimental`.

The full list of included changes:

- Backports for beta 2.2.2 ([#9976](https://github.com/paritytech/parity-ethereum/pull/9976))
- Version: bump beta to 2.2.2
- Add experimental RPCs flag ([#9928](https://github.com/paritytech/parity-ethereum/pull/9928))
- Keep existing blocks when restoring a Snapshot ([#8643](https://github.com/paritytech/parity-ethereum/pull/8643))
  - Rename db_restore => client
  - First step: make it compile!
  - Second step: working implementation!
  - Refactoring
  - Fix tests
  - Migrate ancient blocks interacting backward
  - Early return in block migration if snapshot is aborted
  - Remove RwLock getter (PR Grumble I)
  - Remove dependency on `Client`: only used Traits
  - Add test for recovering aborted snapshot recovery
  - Add test for migrating old blocks
  - Release RwLock earlier
  - Revert Cargo.lock
  - Update _update ancient block_ logic: set local in `commit`
  - Update typo in ethcore/src/snapshot/service.rs
- Adjust requests costs for light client ([#9925](https://github.com/paritytech/parity-ethereum/pull/9925))
  - Pip Table Cost relative to average peers instead of max peers
  - Add tracing in PIP new_cost_table
  - Update stat peer_count
  - Use number of leeching peers for Light serve costs
  - Fix test::light_params_load_share_depends_on_max_peers (wrong type)
  - Remove (now) useless test
  - Remove `load_share` from LightParams.Config
  - Add LEECHER_COUNT_FACTOR
  - Pr Grumble: u64 to u32 for f64 casting
  - Prevent u32 overflow for avg_peer_count
  - Add tests for LightSync::Statistics
- Fix empty steps ([#9939](https://github.com/paritytech/parity-ethereum/pull/9939))
  - Don't send empty step twice or empty step then block.
  - Perform basic validation of locally sealed blocks.
  - Don't include empty step twice.
- Prevent silent errors in daemon mode, closes [#9367](https://github.com/paritytech/parity-ethereum/issues/9367) ([#9946](https://github.com/paritytech/parity-ethereum/pull/9946))
- Fix a deadlock ([#9952](https://github.com/paritytech/parity-ethereum/pull/9952))
  - Update informant:
    - Decimal in Mgas/s
    - Print every 5s (not randomly between 5s and 10s)
  - Fix dead-lock in `blockchain.rs`
  - Update locks ordering
- Fix light client informant while syncing ([#9932](https://github.com/paritytech/parity-ethereum/pull/9932))
  - Add `is_idle` to LightSync to check importing status
  - Use SyncStateWrapper to make sure is_idle gets updates
  - Update is_major_import to use verified queue size as well
  - Add comment for `is_idle`
  - Add Debug to `SyncStateWrapper`
  - `fn get` -> `fn into_inner`
- Ci: rearrange pipeline by logic ([#9970](https://github.com/paritytech/parity-ethereum/pull/9970))
  - Ci: rearrange pipeline by logic
  - Ci: rename docs script
- Fix docker build ([#9971](https://github.com/paritytech/parity-ethereum/pull/9971))
- Deny unknown fields for chainspec ([#9972](https://github.com/paritytech/parity-ethereum/pull/9972))
  - Add deny_unknown_fields to chainspec
  - Add tests and fix existing one
  - Remove serde_ignored dependency for chainspec
  - Fix rpc test eth chain spec
  - Fix starting_nonce_test spec
- Improve block and transaction propagation ([#9954](https://github.com/paritytech/parity-ethereum/pull/9954))
  - Refactor sync to add priority tasks.
  - Send priority tasks notifications.
  - Propagate blocks, optimize transactions.
  - Implement transaction propagation. Use sync_channel.
  - Tone down info.
  - Prevent deadlock by not waiting forever for sync lock.
  - Fix lock order.
  - Don't use sync_channel to prevent deadlocks.
  - Fix tests.
- Fix unstable peers and slowness in sync ([#9967](https://github.com/paritytech/parity-ethereum/pull/9967))
  - Don't sync all peers after each response
  - Update formatting
  - Fix tests: add `continue_sync` to `Sync_step`
  - Update ethcore/sync/src/chain/mod.rs
  - Fix rpc middlewares
  - Fix Cargo.lock
  - Json: resolve merge in spec
  - Rpc: fix starting_nonce_test
  - Ci: allow nightly job to fail

## Parity-Ethereum [v2.2.1](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.1) (2018-11-15)

Parity-Ethereum 2.2.1-beta is the first v2.2 release, and might introduce features that break previous workflows, among others:

- Prevent zero network ID ([#9763](https://github.com/paritytech/parity-ethereum/pull/9763)) and drop support for Olympic testnet ([#9801](https://github.com/paritytech/parity-ethereum/pull/9801)): The Olympic testnet has been dead for years and never used a chain ID but network ID zero. Parity Ethereum now prevents the network ID from being zero, thus Olympic support is dropped. Make sure to choose positive non-zero network IDs in the future.
- Multithreaded snapshot creation ([#9239](https://github.com/paritytech/parity-ethereum/pull/9239)): adds a CLI argument `--snapshot-threads` which specifies the number of threads. This helps improve the performance of full nodes that wish to provide warp-snapshots for the network. The gain in performance comes with a slight drawback in increased snapshot size.
- Expose config max-round-blocks-to-import ([#9439](https://github.com/paritytech/parity-ethereum/pull/9439)): Parity Ethereum imports blocks in rounds. If at the end of any round the queue is not empty, we consider it to be _importing_ and won't notify pubsub. On large re-orgs (10+ blocks), this is possible. The default `max_round_blocks_to_import` is increased to 12 and configurable via the `--max-round-blocks-to-import` CLI flag. With unstable network conditions, it is advised to increase the number. This shouldn't have any noticeable performance impact unless the number is set really large.
- Increase gas-floor-target and gas cap ([#9564](https://github.com/paritytech/parity-ethereum/pull/9564)): the default value for the gas floor target is `8_000_000`, and for the gas cap `10_000_000`, similar to Geth 1.8.15+.
- Produce portable binaries ([#9725](https://github.com/paritytech/parity-ethereum/pull/9725)): we now produce portable binaries, but this may incur some performance degradation. For ultimate performance it is now better to compile Parity Ethereum from source with the `PORTABLE=OFF` environment variable.
- RPC: `parity_allTransactionHashes` ([#9745](https://github.com/paritytech/parity-ethereum/pull/9745)): get all pending transactions from the queue with the high-performance `parity_allTransactionHashes` RPC method.
- Support `eth_chainId` RPC method ([#9783](https://github.com/paritytech/parity-ethereum/pull/9783)): implements EIP-695 to get the chain ID via RPC.
- AuRa: finalize blocks ([#9692](https://github.com/paritytech/parity-ethereum/pull/9692)): The AuRa engine was updated to emit ancestry actions to finalize blocks. The full client stores block finality in the database; the engine builds finality from an ancestry of `ExtendedHeader`. `is_epoch_end` was updated to take a vec of recently finalized headers; `is_epoch_end_light` was added, which maintains the previous interface and is used by the light client since the client itself doesn't track finality.

The full list of included changes:

- Backport to parity 2.2.1 beta ([#9905](https://github.com/paritytech/parity-ethereum/pull/9905))
  - Bump version to 2.2.1
  - Fix: Intermittent failing CI due to addr in use ([#9885](https://github.com/paritytech/parity-ethereum/pull/9885))
  - Fix Parity not closing on Ctrl-C ([#9886](https://github.com/paritytech/parity-ethereum/pull/9886))
  - Fix json tracer overflow ([#9873](https://github.com/paritytech/parity-ethereum/pull/9873))
  - Fix docker script ([#9854](https://github.com/paritytech/parity-ethereum/pull/9854))
  - Add hardcoded headers for light client ([#9907](https://github.com/paritytech/parity-ethereum/pull/9907))
  - Gitlab-ci: make android release build succeed ([#9743](https://github.com/paritytech/parity-ethereum/pull/9743))
  - Allow to seal work on latest block ([#9876](https://github.com/paritytech/parity-ethereum/pull/9876))
  - Remove rust-toolchain file ([#9906](https://github.com/paritytech/parity-ethereum/pull/9906))
  - Light-fetch: Differentiate between out-of-gas/manual throw and use required gas from response on failure ([#9824](https://github.com/paritytech/parity-ethereum/pull/9824))
  - Eip-712 implementation ([#9631](https://github.com/paritytech/parity-ethereum/pull/9631))
  - Eip-191 implementation ([#9701](https://github.com/paritytech/parity-ethereum/pull/9701))
  - Simplify cargo audit ([#9918](https://github.com/paritytech/parity-ethereum/pull/9918))
  - Fix performance issue importing Kovan blocks ([#9914](https://github.com/paritytech/parity-ethereum/pull/9914))
  - Ci: nuke the gitlab caches ([#9855](https://github.com/paritytech/parity-ethereum/pull/9855))
- Backports to parity beta 2.2.0 ([#9820](https://github.com/paritytech/parity-ethereum/pull/9820))
- Ci: remove failing tests for android, windows, and macos ([#9788](https://github.com/paritytech/parity-ethereum/pull/9788))
- Implement NoProof for json tests and update tests reference ([#9814](https://github.com/paritytech/parity-ethereum/pull/9814))
- Move state root verification before gas used ([#9841](https://github.com/paritytech/parity-ethereum/pull/9841))
- Classic.json Bootnode Update ([#9828](https://github.com/paritytech/parity-ethereum/pull/9828))
- Rpc: parity_allTransactionHashes ([#9745](https://github.com/paritytech/parity-ethereum/pull/9745))
- Revert "prevent zero networkID ([#9763](https://github.com/paritytech/parity-ethereum/pull/9763))" ([#9815](https://github.com/paritytech/parity-ethereum/pull/9815))
- Allow zero chain id in EIP155 signing process ([#9792](https://github.com/paritytech/parity-ethereum/pull/9792))
- Add readiness check for docker container ([#9804](https://github.com/paritytech/parity-ethereum/pull/9804))
- Insert dev account before unlocking ([#9813](https://github.com/paritytech/parity-ethereum/pull/9813))
- Removed "rustup" & added new runner tag ([#9731](https://github.com/paritytech/parity-ethereum/pull/9731))
|
- Expose config max-round-blocks-to-import ([#9439](https://github.com/paritytech/parity-ethereum/pull/9439))
|
||||||
|
- Aura: finalize blocks ([#9692](https://github.com/paritytech/parity-ethereum/pull/9692))
|
||||||
|
- Sync: retry different peer after empty subchain heads response ([#9753](https://github.com/paritytech/parity-ethereum/pull/9753))
|
||||||
|
- Fix(light-rpc/parity) : Remove unused client ([#9802](https://github.com/paritytech/parity-ethereum/pull/9802))
|
||||||
|
- Drops support for olympic testnet, closes [#9800](https://github.com/paritytech/parity-ethereum/issues/9800) ([#9801](https://github.com/paritytech/parity-ethereum/pull/9801))
|
||||||
|
- Replace `tokio_core` with `tokio` (`ring` -> 0.13) ([#9657](https://github.com/paritytech/parity-ethereum/pull/9657))
|
||||||
|
- Support eth_chainId RPC method ([#9783](https://github.com/paritytech/parity-ethereum/pull/9783))
|
||||||
|
- Ethcore: bump ropsten forkblock checkpoint ([#9775](https://github.com/paritytech/parity-ethereum/pull/9775))
|
||||||
|
- Docs: changelogs for 2.0.8 and 2.1.3 ([#9758](https://github.com/paritytech/parity-ethereum/pull/9758))
|
||||||
|
- Prevent zero networkID ([#9763](https://github.com/paritytech/parity-ethereum/pull/9763))
|
||||||
|
- Skip seal fields count check when --no-seal-check is used ([#9757](https://github.com/paritytech/parity-ethereum/pull/9757))
|
||||||
|
- Aura: fix panic on extra_info with unsealed block ([#9755](https://github.com/paritytech/parity-ethereum/pull/9755))
|
||||||
|
- Docs: update changelogs ([#9742](https://github.com/paritytech/parity-ethereum/pull/9742))
|
||||||
|
- Removed extra assert in generation_session_is_removed_when_succeeded ([#9738](https://github.com/paritytech/parity-ethereum/pull/9738))
|
||||||
|
- Make checkpoint_storage_at use plain loop instead of recursion ([#9734](https://github.com/paritytech/parity-ethereum/pull/9734))
|
||||||
|
- Use signed 256-bit integer for sstore gas refund substate ([#9746](https://github.com/paritytech/parity-ethereum/pull/9746))
|
||||||
|
- Heads ref not present for branches beta and stable ([#9741](https://github.com/paritytech/parity-ethereum/pull/9741))
|
||||||
|
- Add Callisto support ([#9534](https://github.com/paritytech/parity-ethereum/pull/9534))
|
||||||
|
- Add --force to cargo audit install script ([#9735](https://github.com/paritytech/parity-ethereum/pull/9735))
|
||||||
|
- Remove unused expired value from Handshake ([#9732](https://github.com/paritytech/parity-ethereum/pull/9732))
|
||||||
|
- Add hardcoded headers ([#9730](https://github.com/paritytech/parity-ethereum/pull/9730))
|
||||||
|
- Produce portable binaries ([#9725](https://github.com/paritytech/parity-ethereum/pull/9725))
|
||||||
|
- Gitlab ci: releasable_branches: change variables condition to schedule ([#9729](https://github.com/paritytech/parity-ethereum/pull/9729))
|
||||||
|
- Update a few parity-common dependencies ([#9663](https://github.com/paritytech/parity-ethereum/pull/9663))
|
||||||
|
- Hf in POA Core (2018-10-22) ([#9724](https://github.com/paritytech/parity-ethereum/pull/9724))
|
||||||
|
- Schedule nightly builds ([#9717](https://github.com/paritytech/parity-ethereum/pull/9717))
|
||||||
|
- Fix ancient blocks sync ([#9531](https://github.com/paritytech/parity-ethereum/pull/9531))
|
||||||
|
- Ci: Skip docs job for nightly ([#9693](https://github.com/paritytech/parity-ethereum/pull/9693))
|
||||||
|
- Fix (light/provider) : Make `read_only executions` read-only ([#9591](https://github.com/paritytech/parity-ethereum/pull/9591))
|
||||||
|
- Ethcore: fix detection of major import ([#9552](https://github.com/paritytech/parity-ethereum/pull/9552))
|
||||||
|
- Return 0 on error ([#9705](https://github.com/paritytech/parity-ethereum/pull/9705))
|
||||||
|
- Ethcore: delay ropsten hardfork ([#9704](https://github.com/paritytech/parity-ethereum/pull/9704))
|
||||||
|
- Make instantSeal engine backwards compatible, closes [#9696](https://github.com/paritytech/parity-ethereum/issues/9696) ([#9700](https://github.com/paritytech/parity-ethereum/pull/9700))
|
||||||
|
- Implement CREATE2 gas changes and fix some potential overflowing ([#9694](https://github.com/paritytech/parity-ethereum/pull/9694))
|
||||||
|
- Don't hash the init_code of CREATE. ([#9688](https://github.com/paritytech/parity-ethereum/pull/9688))
|
||||||
|
- Ethcore: minor optimization of modexp by using LR exponentiation ([#9697](https://github.com/paritytech/parity-ethereum/pull/9697))
|
||||||
|
- Removed redundant clone before each block import ([#9683](https://github.com/paritytech/parity-ethereum/pull/9683))
|
||||||
|
- Add Foundation Bootnodes ([#9666](https://github.com/paritytech/parity-ethereum/pull/9666))
|
||||||
|
- Docker: run as parity user ([#9689](https://github.com/paritytech/parity-ethereum/pull/9689))
|
||||||
|
- Ethcore: mcip3 block reward contract ([#9605](https://github.com/paritytech/parity-ethereum/pull/9605))
|
||||||
|
- Verify block syncing responses against requests ([#9670](https://github.com/paritytech/parity-ethereum/pull/9670))
|
||||||
|
- Add a new RPC `parity_submitWorkDetail` similar `eth_submitWork` but return block hash ([#9404](https://github.com/paritytech/parity-ethereum/pull/9404))
|
||||||
|
- Resumable EVM and heap-allocated callstack ([#9360](https://github.com/paritytech/parity-ethereum/pull/9360))
|
||||||
|
- Update parity-wordlist library ([#9682](https://github.com/paritytech/parity-ethereum/pull/9682))
|
||||||
|
- Ci: Remove unnecessary pipes ([#9681](https://github.com/paritytech/parity-ethereum/pull/9681))
|
||||||
|
- Test.sh: use cargo --target for platforms other than linux, win or mac ([#9650](https://github.com/paritytech/parity-ethereum/pull/9650))
|
||||||
|
- Ci: fix push script ([#9679](https://github.com/paritytech/parity-ethereum/pull/9679))
|
||||||
|
- Hardfork the testnets ([#9562](https://github.com/paritytech/parity-ethereum/pull/9562))
|
||||||
|
- Calculate sha3 instead of sha256 for push-release. ([#9673](https://github.com/paritytech/parity-ethereum/pull/9673))
|
||||||
|
- Ethcore-io retries failed work steal ([#9651](https://github.com/paritytech/parity-ethereum/pull/9651))
|
||||||
|
- Fix(light_fetch): avoid race with BlockNumber::Latest ([#9665](https://github.com/paritytech/parity-ethereum/pull/9665))
|
||||||
|
- Test fix for windows cache name... ([#9658](https://github.com/paritytech/parity-ethereum/pull/9658))
|
||||||
|
- Refactor(fetch) : light use only one `DNS` thread ([#9647](https://github.com/paritytech/parity-ethereum/pull/9647))
|
||||||
|
- Ethereum libfuzzer integration small change ([#9547](https://github.com/paritytech/parity-ethereum/pull/9547))
|
||||||
|
- Cli: remove reference to --no-ui in --unlock flag help ([#9616](https://github.com/paritytech/parity-ethereum/pull/9616))
|
||||||
|
- Remove master from releasable branches ([#9655](https://github.com/paritytech/parity-ethereum/pull/9655))
|
||||||
|
- Ethcore/VerificationQueue don't spawn up extra `worker-threads` when explictly specified not to ([#9620](https://github.com/paritytech/parity-ethereum/pull/9620))
|
||||||
|
- Rpc: parity_getBlockReceipts ([#9527](https://github.com/paritytech/parity-ethereum/pull/9527))
|
||||||
|
- Remove unused dependencies ([#9589](https://github.com/paritytech/parity-ethereum/pull/9589))
|
||||||
|
- Ignore key_server_cluster randomly failing tests ([#9639](https://github.com/paritytech/parity-ethereum/pull/9639))
|
||||||
|
- Ethcore: handle vm exception when estimating gas ([#9615](https://github.com/paritytech/parity-ethereum/pull/9615))
|
||||||
|
- Fix bad-block reporting no reason ([#9638](https://github.com/paritytech/parity-ethereum/pull/9638))
|
||||||
|
- Use static call and apparent value transfer for block reward contract code ([#9603](https://github.com/paritytech/parity-ethereum/pull/9603))
|
||||||
|
- Hf in POA Sokol (2018-09-19) ([#9607](https://github.com/paritytech/parity-ethereum/pull/9607))
|
||||||
|
- Bump smallvec to 0.6 in ethcore-light, ethstore and whisper ([#9588](https://github.com/paritytech/parity-ethereum/pull/9588))
|
||||||
|
- Add constantinople conf to EvmTestClient. ([#9570](https://github.com/paritytech/parity-ethereum/pull/9570))
|
||||||
|
- Fix(network): don't disconnect reserved peers ([#9608](https://github.com/paritytech/parity-ethereum/pull/9608))
|
||||||
|
- Fix failing node-table tests on mac os, closes [#9632](https://github.com/paritytech/parity-ethereum/issues/9632) ([#9633](https://github.com/paritytech/parity-ethereum/pull/9633))
|
||||||
|
- Update ropsten.json ([#9602](https://github.com/paritytech/parity-ethereum/pull/9602))
|
||||||
|
- Simplify ethcore errors by removing BlockImportError ([#9593](https://github.com/paritytech/parity-ethereum/pull/9593))
|
||||||
|
- Fix windows compilation, replaces [#9561](https://github.com/paritytech/parity-ethereum/issues/9561) ([#9621](https://github.com/paritytech/parity-ethereum/pull/9621))
|
||||||
|
- Master: rpc-docs set github token ([#9610](https://github.com/paritytech/parity-ethereum/pull/9610))
|
||||||
|
- Docs: add changelogs for 1.11.10, 1.11.11, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.1.0, and 2.1.1 ([#9554](https://github.com/paritytech/parity-ethereum/pull/9554))
|
||||||
|
- Docs(rpc): annotate tag with the provided message ([#9601](https://github.com/paritytech/parity-ethereum/pull/9601))
|
||||||
|
- Ci: fix regex roll_eyes ([#9597](https://github.com/paritytech/parity-ethereum/pull/9597))
|
||||||
|
- Remove snapcraft clean ([#9585](https://github.com/paritytech/parity-ethereum/pull/9585))
|
||||||
|
- Add snapcraft package image (master) ([#9584](https://github.com/paritytech/parity-ethereum/pull/9584))
|
||||||
|
- Docs(rpc): push the branch along with tags ([#9578](https://github.com/paritytech/parity-ethereum/pull/9578))
|
||||||
|
- Fix typo for jsonrpc-threads flag ([#9574](https://github.com/paritytech/parity-ethereum/pull/9574))
|
||||||
|
- Fix informant compile ([#9571](https://github.com/paritytech/parity-ethereum/pull/9571))
|
||||||
|
- Added ropsten bootnodes ([#9569](https://github.com/paritytech/parity-ethereum/pull/9569))
|
||||||
|
- Increase Gas-floor-target and Gas Cap ([#9564](https://github.com/paritytech/parity-ethereum/pull/9564))
|
||||||
|
- While working on the platform tests make them non-breaking ([#9563](https://github.com/paritytech/parity-ethereum/pull/9563))
|
||||||
|
- Improve P2P discovery ([#9526](https://github.com/paritytech/parity-ethereum/pull/9526))
|
||||||
|
- Move dockerfile for android build container to scripts repo ([#9560](https://github.com/paritytech/parity-ethereum/pull/9560))
|
||||||
|
- Simultaneous platform tests WIP ([#9557](https://github.com/paritytech/parity-ethereum/pull/9557))
|
||||||
|
- Update ethabi-derive, serde, serde_json, serde_derive, syn && quote ([#9553](https://github.com/paritytech/parity-ethereum/pull/9553))
|
||||||
|
- Ci: fix rpc docs generation 2 ([#9550](https://github.com/paritytech/parity-ethereum/pull/9550))
|
||||||
|
- Ci: always run build pipelines for win, mac, linux, and android ([#9537](https://github.com/paritytech/parity-ethereum/pull/9537))
|
||||||
|
- Multithreaded snapshot creation ([#9239](https://github.com/paritytech/parity-ethereum/pull/9239))
|
||||||
|
- New ethabi ([#9511](https://github.com/paritytech/parity-ethereum/pull/9511))
|
||||||
|
- Remove initial token for WS. ([#9545](https://github.com/paritytech/parity-ethereum/pull/9545))
|
||||||
|
- Net_version caches network_id to avoid redundant aquire of sync readlock ([#9544](https://github.com/paritytech/parity-ethereum/pull/9544))
|
||||||
|
- Correct before_script for nightly build versions ([#9543](https://github.com/paritytech/parity-ethereum/pull/9543))
|
||||||
|
- Deps: bump kvdb-rocksdb to 0.1.4 ([#9539](https://github.com/paritytech/parity-ethereum/pull/9539))
|
||||||
|
- State: test when contract creation fails, old storage values should re-appear ([#9532](https://github.com/paritytech/parity-ethereum/pull/9532))
|
||||||
|
- Allow dropping light client RPC query with no results ([#9318](https://github.com/paritytech/parity-ethereum/pull/9318))
|
||||||
|
- Bump master to 2.2.0 ([#9517](https://github.com/paritytech/parity-ethereum/pull/9517))
|
||||||
|
- Enable all Constantinople hard fork changes in constantinople_test.json ([#9505](https://github.com/paritytech/parity-ethereum/pull/9505))
|
||||||
|
- [Light] Validate `account balance` before importing transactions ([#9417](https://github.com/paritytech/parity-ethereum/pull/9417))
|
||||||
|
- In create memory calculation is the same for create2 because the additional parameter was popped before. ([#9522](https://github.com/paritytech/parity-ethereum/pull/9522))
|
||||||
|
- Update patricia trie to 0.2.2 ([#9525](https://github.com/paritytech/parity-ethereum/pull/9525))
|
||||||
|
- Replace hardcoded JSON with serde json! macro ([#9489](https://github.com/paritytech/parity-ethereum/pull/9489))
|
||||||
|
- Fix typo in version string ([#9516](https://github.com/paritytech/parity-ethereum/pull/9516))
|
||||||
|
|
||||||
## Previous releases

- [CHANGELOG-2.2](docs/CHANGELOG-2.2.md) (_stable_)
- [CHANGELOG-2.1](docs/CHANGELOG-2.1.md) (EOL: 2019-01-16)
- [CHANGELOG-2.0](docs/CHANGELOG-2.0.md) (EOL: 2018-11-15)
- [CHANGELOG-1.11](docs/CHANGELOG-1.11.md) (EOL: 2018-09-19)
- [CHANGELOG-1.10](docs/CHANGELOG-1.10.md) (EOL: 2018-07-18)
Cargo.lock (generated): 627 lines changed. File diff suppressed because it is too large.

Cargo.toml: 22 lines changed.
@@ -2,7 +2,7 @@
 description = "Parity Ethereum client"
 name = "parity-ethereum"
 # NOTE Make sure to update util/version/Cargo.toml as well
-version = "2.4.3"
+version = "2.3.7"
 license = "GPL-3.0"
 authors = ["Parity Technologies <admin@parity.io>"]

@@ -29,11 +29,10 @@ serde_derive = "1.0"
 futures = "0.1"
 fdlimit = "0.1"
 ctrlc = { git = "https://github.com/paritytech/rust-ctrlc.git" }
-jsonrpc-core = "10.0.1"
+jsonrpc-core = { git = "https://github.com/paritytech/jsonrpc.git", branch = "parity-2.2" }
+ethcore = { path = "ethcore", features = ["parity"] }
 parity-bytes = "0.1"
 common-types = { path = "ethcore/types" }
-ethcore = { path = "ethcore", features = ["parity"] }
-ethcore-accounts = { path = "accounts", optional = true }
 ethcore-blockchain = { path = "ethcore/blockchain" }
 ethcore-call-contract = { path = "ethcore/call-contract"}
 ethcore-db = { path = "ethcore/db" }

@@ -45,13 +44,12 @@ ethcore-network = { path = "util/network" }
 ethcore-private-tx = { path = "ethcore/private-tx" }
 ethcore-service = { path = "ethcore/service" }
 ethcore-sync = { path = "ethcore/sync" }
-ethereum-types = "0.4"
-ethkey = { path = "accounts/ethkey" }
 ethstore = { path = "accounts/ethstore" }
+ethereum-types = "0.4"
 node-filter = { path = "ethcore/node-filter" }
+ethkey = { path = "accounts/ethkey" }
 rlp = { version = "0.3.0", features = ["ethereum"] }
 cli-signer= { path = "cli-signer" }
-parity-daemonize = "0.3"
 parity-hash-fetch = { path = "updater/hash-fetch" }
 parity-ipfs-api = { path = "ipfs" }
 parity-local-store = { path = "miner/local-store" }

@@ -81,22 +79,22 @@ pretty_assertions = "0.1"
 ipnetwork = "0.12.6"
 tempdir = "0.3"
 fake-fetch = { path = "util/fake-fetch" }
-lazy_static = "1.2.0"

+[target.'cfg(not(windows))'.dependencies]
+daemonize = "0.3"

 [target.'cfg(windows)'.dependencies]
 winapi = { version = "0.3.4", features = ["winsock2", "winuser", "shellapi"] }

 [features]
-default = ["accounts"]
-accounts = ["ethcore-accounts", "parity-rpc/accounts"]
 miner-debug = ["ethcore/miner-debug"]
 json-tests = ["ethcore/json-tests"]
-ci-skip-tests = ["ethcore/ci-skip-tests"]
+ci-skip-issue = ["ethcore/ci-skip-issue"]
 test-heavy = ["ethcore/test-heavy"]
 evm-debug = ["ethcore/evm-debug"]
 evm-debug-tests = ["ethcore/evm-debug-tests"]
 slow-blocks = ["ethcore/slow-blocks"]
-secretstore = ["ethcore-secretstore", "ethcore-secretstore/accounts"]
+secretstore = ["ethcore-secretstore"]
 final = ["parity-version/final"]
 deadlock_detection = ["parking_lot/deadlock_detection"]
 # to create a memory profile (requires nightly rust), use e.g.
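As context for the `[features]` hunk above: Cargo features such as `accounts` or `secretstore` gate code at compile time through `cfg` attributes. A minimal standalone sketch of that mechanism, assuming a feature named `accounts` as in the diff; the function and strings are illustrative, not code from the Parity repository:

```rust
// Hypothetical illustration of Cargo feature gating; not Parity code.
// Which variant is compiled depends on whether the `accounts` feature
// is enabled for the build.
#[cfg(feature = "accounts")]
fn account_support() -> &'static str {
    "accounts enabled"
}

#[cfg(not(feature = "accounts"))]
fn account_support() -> &'static str {
    "accounts disabled"
}

fn main() {
    println!("{}", account_support());
}
```

Building with `cargo build --features accounts` flips which variant the compiler keeps; the other one never reaches the binary.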
@@ -125,8 +125,7 @@ To start Parity Ethereum as a regular user using `systemd` init:

 1. Copy `./scripts/parity.service` to your
    `systemd` user directory (usually `~/.config/systemd/user`).
-2. Copy release to bin folder, write `sudo install ./target/release/parity /usr/bin/parity`
-3. To configure Parity Ethereum, write a `/etc/parity/config.toml` config file, see [Configuring Parity Ethereum](https://paritytech.github.io/wiki/Configuring-Parity) for details.
+2. To configure Parity Ethereum, write a `/etc/parity/config.toml` config file, see [Configuring Parity Ethereum](https://paritytech.github.io/wiki/Configuring-Parity) for details.

 ## Parity Ethereum toolchain
@@ -1,28 +0,0 @@
-[package]
-description = "Account management for Parity Ethereum"
-homepage = "http://parity.io"
-license = "GPL-3.0"
-name = "ethcore-accounts"
-version = "0.1.0"
-authors = ["Parity Technologies <admin@parity.io>"]
-edition = "2018"
-
-[dependencies]
-common-types = { path = "../ethcore/types" }
-ethkey = { path = "ethkey" }
-ethstore = { path = "ethstore" }
-log = "0.4"
-parking_lot = "0.7"
-serde = "1.0"
-serde_derive = "1.0"
-serde_json = "1.0"
-
-[target.'cfg(any(target_os = "linux", target_os = "macos", target_os = "windows"))'.dependencies]
-hardware-wallet = { path = "hw" }
-
-[target.'cfg(not(any(target_os = "linux", target_os = "macos", target_os = "windows")))'.dependencies]
-fake-hardware-wallet = { path = "fake-hardware-wallet" }
-
-[dev-dependencies]
-ethereum-types = "0.4"
-tempdir = "0.3"
@@ -6,7 +6,7 @@ authors = ["Parity Technologies <admin@parity.io>"]
 [dependencies]
 byteorder = "1.0"
 edit-distance = "2.0"
-parity-crypto = "0.3.0"
+parity-crypto = "0.2"
 eth-secp256k1 = { git = "https://github.com/paritytech/rust-secp256k1" }
 ethereum-types = "0.4"
 lazy_static = "1.0"
@@ -6,7 +6,7 @@ Parity Ethereum keys generator.

 ```
 Parity Ethereum keys generator.
-Copyright 2015-2019 Parity Technologies (UK) Ltd.
+Copyright 2015-2018 Parity Technologies (UK) Ltd.

 Usage:
     ethkey info <secret-or-phrase> [options]
@@ -35,7 +35,7 @@ use rustc_hex::{FromHex, FromHexError};

 const USAGE: &'static str = r#"
 Parity Ethereum keys generator.
-Copyright 2015-2019 Parity Technologies (UK) Ltd.
+Copyright 2015-2018 Parity Technologies (UK) Ltd.

 Usage:
     ethkey info <secret-or-phrase> [options]
@@ -16,13 +16,12 @@ tiny-keccak = "1.4"
 time = "0.1.34"
 itertools = "0.5"
 parking_lot = "0.7"
-parity-crypto = "0.3.0"
+parity-crypto = "0.2"
 ethereum-types = "0.4"
 dir = { path = "../../util/dir" }
 smallvec = "0.6"
 parity-wordlist = "1.0"
 tempdir = "0.3"
-lazy_static = "1.2.0"

 [dev-dependencies]
 matches = "0.1"
@@ -6,7 +6,7 @@ Parity Ethereum key management.

 ```
 Parity Ethereum key management tool.
-Copyright 2015-2019 Parity Technologies (UK) Ltd.
+Copyright 2015-2018 Parity Technologies (UK) Ltd.

 Usage:
     ethstore insert <secret> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
@@ -41,7 +41,7 @@ mod crack;

 pub const USAGE: &'static str = r#"
 Parity Ethereum key management tool.
-Copyright 2015-2019 Parity Technologies (UK) Ltd.
+Copyright 2015-2018 Parity Technologies (UK) Ltd.

 Usage:
     ethstore insert <secret> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
@@ -15,7 +15,6 @@
 // along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

 use std::str;
-use std::num::NonZeroU32;
 use ethkey::{Password, Secret};
 use {json, Error, crypto};
 use crypto::Keccak256;

@@ -74,12 +73,12 @@ impl From<Crypto> for String {

 impl Crypto {
     /// Encrypt account secret
-    pub fn with_secret(secret: &Secret, password: &Password, iterations: NonZeroU32) -> Result<Self, crypto::Error> {
+    pub fn with_secret(secret: &Secret, password: &Password, iterations: u32) -> Result<Self, crypto::Error> {
         Crypto::with_plain(&*secret, password, iterations)
     }

     /// Encrypt custom plain data
-    pub fn with_plain(plain: &[u8], password: &Password, iterations: NonZeroU32) -> Result<Self, crypto::Error> {
+    pub fn with_plain(plain: &[u8], password: &Password, iterations: u32) -> Result<Self, crypto::Error> {
         let salt: [u8; 32] = Random::random();
         let iv: [u8; 16] = Random::random();

@@ -160,17 +159,13 @@ impl Crypto {
 #[cfg(test)]
 mod tests {
     use ethkey::{Generator, Random};
-    use super::{Crypto, Error, NonZeroU32};
+    use super::{Crypto, Error};

-    lazy_static! {
-        static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(10240).expect("10240 > 0; qed");
-    }
-
     #[test]
     fn crypto_with_secret_create() {
         let keypair = Random.generate().unwrap();
         let passwd = "this is sparta".into();
-        let crypto = Crypto::with_secret(keypair.secret(), &passwd, *ITERATIONS).unwrap();
+        let crypto = Crypto::with_secret(keypair.secret(), &passwd, 10240).unwrap();
         let secret = crypto.secret(&passwd).unwrap();
         assert_eq!(keypair.secret(), &secret);
     }

@@ -178,7 +173,7 @@ mod tests {
     #[test]
     fn crypto_with_secret_invalid_password() {
         let keypair = Random.generate().unwrap();
-        let crypto = Crypto::with_secret(keypair.secret(), &"this is sparta".into(), *ITERATIONS).unwrap();
+        let crypto = Crypto::with_secret(keypair.secret(), &"this is sparta".into(), 10240).unwrap();
         assert_matches!(crypto.secret(&"this is sparta!".into()), Err(Error::InvalidPassword))
     }

@@ -186,7 +181,7 @@ mod tests {
     fn crypto_with_null_plain_data() {
         let original_data = b"";
         let passwd = "this is sparta".into();
-        let crypto = Crypto::with_plain(&original_data[..], &passwd, *ITERATIONS).unwrap();
+        let crypto = Crypto::with_plain(&original_data[..], &passwd, 10240).unwrap();
         let decrypted_data = crypto.decrypt(&passwd).unwrap();
         assert_eq!(original_data[..], *decrypted_data);
     }

@@ -195,7 +190,7 @@ mod tests {
     fn crypto_with_tiny_plain_data() {
         let original_data = b"{}";
         let passwd = "this is sparta".into();
-        let crypto = Crypto::with_plain(&original_data[..], &passwd, *ITERATIONS).unwrap();
+        let crypto = Crypto::with_plain(&original_data[..], &passwd, 10240).unwrap();
         let decrypted_data = crypto.decrypt(&passwd).unwrap();
         assert_eq!(original_data[..], *decrypted_data);
     }

@@ -204,7 +199,7 @@ mod tests {
     fn crypto_with_huge_plain_data() {
         let original_data: Vec<_> = (1..65536).map(|i| (i % 256) as u8).collect();
         let passwd = "this is sparta".into();
-        let crypto = Crypto::with_plain(&original_data, &passwd, *ITERATIONS).unwrap();
+        let crypto = Crypto::with_plain(&original_data, &passwd, 10240).unwrap();
         let decrypted_data = crypto.decrypt(&passwd).unwrap();
         assert_eq!(&original_data, &decrypted_data);
     }
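The hunks above swap between `std::num::NonZeroU32` and plain `u32` for the KDF iteration count (which side is which depends on the compare direction). The point of the `NonZeroU32` signatures is that a zero iteration count, which would disable key stretching entirely, becomes unrepresentable at the type level. A minimal standalone sketch of that pattern; the helper name `kdf_iterations` is mine, not from the diff:

```rust
use std::num::NonZeroU32;

// Validate a raw iteration count into the non-zero type expected by the
// NonZeroU32-style signatures. `NonZeroU32::new` returns `None` for zero,
// so an invalid count must be handled here rather than weakening the KDF.
fn kdf_iterations(raw: u32) -> Option<NonZeroU32> {
    NonZeroU32::new(raw)
}

fn main() {
    // Zero is rejected instead of being passed through.
    assert!(kdf_iterations(0).is_none());
    let iters = kdf_iterations(10240).expect("10240 > 0; qed");
    println!("{}", iters.get());
}
```

Callers then pass `iters` (a `NonZeroU32`) straight into the encryption routines, and the zero check never needs to be repeated inside them.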
@@ -15,7 +15,6 @@
 // along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

 use json;
-use std::num::NonZeroU32;

 #[derive(Debug, PartialEq, Clone)]
 pub enum Prf {

@@ -24,7 +23,7 @@ pub enum Prf {

 #[derive(Debug, PartialEq, Clone)]
 pub struct Pbkdf2 {
-    pub c: NonZeroU32,
+    pub c: u32,
     pub dklen: u32,
     pub prf: Prf,
     pub salt: Vec<u8>,
```diff
@@ -20,7 +20,6 @@ use {json, Error};
 use account::Version;
 use crypto;
 use super::crypto::Crypto;
-use std::num::NonZeroU32;
 
 /// Account representation.
 #[derive(Debug, PartialEq, Clone)]
@@ -60,7 +59,7 @@ impl SafeAccount {
 		keypair: &KeyPair,
 		id: [u8; 16],
 		password: &Password,
-		iterations: NonZeroU32,
+		iterations: u32,
 		name: String,
 		meta: String
 	) -> Result<Self, crypto::Error> {
@@ -136,7 +135,7 @@ impl SafeAccount {
 	}
 
 	/// Create a new `VaultKeyFile` from the given `self`
-	pub fn into_vault_file(self, iterations: NonZeroU32, password: &Password) -> Result<json::VaultKeyFile, Error> {
+	pub fn into_vault_file(self, iterations: u32, password: &Password) -> Result<json::VaultKeyFile, Error> {
 		let meta_plain = json::VaultKeyMeta {
 			address: self.address.into(),
 			name: Some(self.name),
@@ -178,7 +177,7 @@ impl SafeAccount {
 	}
 
 	/// Change account's password.
-	pub fn change_password(&self, old_password: &Password, new_password: &Password, iterations: NonZeroU32) -> Result<Self, Error> {
+	pub fn change_password(&self, old_password: &Password, new_password: &Password, iterations: u32) -> Result<Self, Error> {
 		let secret = self.crypto.secret(old_password)?;
 		let result = SafeAccount {
 			id: self.id.clone(),
@@ -201,19 +200,14 @@ impl SafeAccount {
 #[cfg(test)]
 mod tests {
 	use ethkey::{Generator, Random, verify_public, Message};
-	use super::{SafeAccount, NonZeroU32};
+	use super::SafeAccount;
 
-	lazy_static! {
-		static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(10240).expect("10240 > 0; qed");
-	}
-
 	#[test]
 	fn sign_and_verify_public() {
 		let keypair = Random.generate().unwrap();
 		let password = "hello world".into();
 		let message = Message::default();
-		let account = SafeAccount::create(&keypair, [0u8; 16], &password, *ITERATIONS, "Test".to_owned(), "{}".to_owned());
+		let account = SafeAccount::create(&keypair, [0u8; 16], &password, 10240, "Test".to_owned(), "{}".to_owned());
 		let signature = account.unwrap().sign(&password, &message).unwrap();
 		assert!(verify_public(keypair.public(), &signature, &message).unwrap());
 	}
@@ -223,9 +217,10 @@ mod tests {
 		let keypair = Random.generate().unwrap();
 		let first_password = "hello world".into();
 		let sec_password = "this is sparta".into();
+		let i = 10240;
 		let message = Message::default();
-		let account = SafeAccount::create(&keypair, [0u8; 16], &first_password, *ITERATIONS, "Test".to_owned(), "{}".to_owned()).unwrap();
-		let new_account = account.change_password(&first_password, &sec_password, *ITERATIONS).unwrap();
+		let account = SafeAccount::create(&keypair, [0u8; 16], &first_password, i, "Test".to_owned(), "{}".to_owned()).unwrap();
+		let new_account = account.change_password(&first_password, &sec_password, i).unwrap();
 		assert!(account.sign(&first_password, &message).is_ok());
 		assert!(account.sign(&sec_password, &message).is_err());
 		assert!(new_account.sign(&first_password, &message).is_err());
```
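The hunks above swap `std::num::NonZeroU32` iteration counts back to plain `u32`. For readers unfamiliar with the pattern being removed, here is a minimal std-only sketch (the `10240` value is taken from the tests above; the rest is illustrative):

```rust
use std::num::NonZeroU32;

fn main() {
    // NonZeroU32 proves at construction time that a PBKDF2 iteration
    // count cannot be zero; with plain u32 that guarantee is dropped.
    let iterations = NonZeroU32::new(10240).expect("10240 > 0; qed");
    assert_eq!(iterations.get(), 10240);

    // A zero count fails to construct instead of reaching the KDF.
    assert!(NonZeroU32::new(0).is_none());
}
```

The `.expect("… > 0; qed")` calls scattered through the old code are the runtime cost of that type-level guarantee for literal constants, which is part of what this change removes.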
```diff
@@ -356,16 +356,11 @@ mod test {
 	extern crate tempdir;
 
 	use std::{env, fs};
-	use std::num::NonZeroU32;
 	use super::{KeyDirectory, RootDiskDirectory, VaultKey};
 	use account::SafeAccount;
 	use ethkey::{Random, Generator};
 	use self::tempdir::TempDir;
 
-	lazy_static! {
-		static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(1024).expect("1024 > 0; qed");
-	}
-
 	#[test]
 	fn should_create_new_account() {
 		// given
@@ -376,7 +371,7 @@ mod test {
 		let directory = RootDiskDirectory::create(dir.clone()).unwrap();
 
 		// when
-		let account = SafeAccount::create(&keypair, [0u8; 16], &password, *ITERATIONS, "Test".to_owned(), "{}".to_owned());
+		let account = SafeAccount::create(&keypair, [0u8; 16], &password, 1024, "Test".to_owned(), "{}".to_owned());
 		let res = directory.insert(account.unwrap());
 
 		// then
@@ -397,7 +392,7 @@ mod test {
 		let directory = RootDiskDirectory::create(dir.clone()).unwrap();
 
 		// when
-		let account = SafeAccount::create(&keypair, [0u8; 16], &password, *ITERATIONS, "Test".to_owned(), "{}".to_owned()).unwrap();
+		let account = SafeAccount::create(&keypair, [0u8; 16], &password, 1024, "Test".to_owned(), "{}".to_owned()).unwrap();
 		let filename = "test".to_string();
 		let dedup = true;
 
@@ -433,7 +428,7 @@ mod test {
 
 		// and when
 		let before_root_items_count = fs::read_dir(&dir).unwrap().count();
-		let vault = directory.as_vault_provider().unwrap().create(vault_name, VaultKey::new(&password, *ITERATIONS));
+		let vault = directory.as_vault_provider().unwrap().create(vault_name, VaultKey::new(&password, 1024));
 
 		// then
 		assert!(vault.is_ok());
@@ -441,7 +436,7 @@ mod test {
 		assert!(after_root_items_count > before_root_items_count);
 
 		// and when
-		let vault = directory.as_vault_provider().unwrap().open(vault_name, VaultKey::new(&password, *ITERATIONS));
+		let vault = directory.as_vault_provider().unwrap().open(vault_name, VaultKey::new(&password, 1024));
 
 		// then
 		assert!(vault.is_ok());
@@ -458,9 +453,8 @@ mod test {
 		let temp_path = TempDir::new("").unwrap();
 		let directory = RootDiskDirectory::create(&temp_path).unwrap();
 		let vault_provider = directory.as_vault_provider().unwrap();
-		let iter = NonZeroU32::new(1).expect("1 > 0; qed");
-		vault_provider.create("vault1", VaultKey::new(&"password1".into(), iter)).unwrap();
-		vault_provider.create("vault2", VaultKey::new(&"password2".into(), iter)).unwrap();
+		vault_provider.create("vault1", VaultKey::new(&"password1".into(), 1)).unwrap();
+		vault_provider.create("vault2", VaultKey::new(&"password2".into(), 1)).unwrap();
 
 		// then
 		let vaults = vault_provider.list_vaults().unwrap();
@@ -482,7 +476,7 @@ mod test {
 
 		let keypair = Random.generate().unwrap();
 		let password = "test pass".into();
-		let account = SafeAccount::create(&keypair, [0u8; 16], &password, *ITERATIONS, "Test".to_owned(), "{}".to_owned());
+		let account = SafeAccount::create(&keypair, [0u8; 16], &password, 1024, "Test".to_owned(), "{}".to_owned());
 		directory.insert(account.unwrap()).expect("Account should be inserted ok");
 
 		let new_hash = directory.files_hash().expect("New files hash should be calculated ok");
```
```diff
@@ -17,7 +17,6 @@
 //! Accounts Directory
 
 use ethkey::Password;
-use std::num::NonZeroU32;
 use std::path::{PathBuf};
 use {SafeAccount, Error};
 
@@ -42,7 +41,7 @@ pub struct VaultKey {
 	/// Vault password
 	pub password: Password,
 	/// Number of iterations to produce a derived key from password
-	pub iterations: NonZeroU32,
+	pub iterations: u32,
 }
 
 /// Keys directory
@@ -97,7 +96,7 @@ pub use self::vault::VaultDiskDirectory;
 
 impl VaultKey {
 	/// Create new vault key
-	pub fn new(password: &Password, iterations: NonZeroU32) -> Self {
+	pub fn new(password: &Password, iterations: u32) -> Self {
 		VaultKey {
 			password: password.clone(),
 			iterations: iterations,
```
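The `VaultKey` hunks reduce the struct and its constructor to plain `u32`. A self-contained sketch of the resulting shape (the `Password` type here is a stand-in for `ethkey::Password`, which is not reproduced):

```rust
// Stand-in for ethkey::Password; the real type is also cloneable.
#[derive(Clone)]
struct Password(String);

// Shape of VaultKey after this change: iterations is a plain u32 again.
struct VaultKey {
    password: Password,
    iterations: u32,
}

impl VaultKey {
    /// Create new vault key (mirrors the constructor in the hunk above).
    fn new(password: &Password, iterations: u32) -> Self {
        VaultKey {
            password: password.clone(),
            iterations: iterations,
        }
    }
}

fn main() {
    let key = VaultKey::new(&Password("password".to_string()), 1024);
    assert_eq!(key.iterations, 1024);
    assert_eq!(key.password.0, "password");
}
```

With `u32`, callers such as the disk-directory tests can pass literals like `1024` directly instead of threading a `NonZeroU32` through.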
```diff
@@ -282,17 +282,11 @@ mod test {
 
 	use std::fs;
 	use std::io::Write;
-	use std::num::NonZeroU32;
 	use std::path::PathBuf;
 	use super::VaultKey;
 	use super::{VAULT_FILE_NAME, check_vault_name, make_vault_dir_path, create_vault_file, read_vault_file, VaultDiskDirectory};
 	use self::tempdir::TempDir;
 
-	lazy_static! {
-		static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(1024).expect("1024 > 0; qed");
-	}
-
 	#[test]
 	fn check_vault_name_succeeds() {
 		assert!(check_vault_name("vault"));
@@ -331,7 +325,7 @@ mod test {
 	fn create_vault_file_succeeds() {
 		// given
 		let temp_path = TempDir::new("").unwrap();
-		let key = VaultKey::new(&"password".into(), *ITERATIONS);
+		let key = VaultKey::new(&"password".into(), 1024);
 		let mut vault_dir: PathBuf = temp_path.path().into();
 		vault_dir.push("vault");
 		fs::create_dir_all(&vault_dir).unwrap();
@@ -350,7 +344,7 @@ mod test {
 	fn read_vault_file_succeeds() {
 		// given
 		let temp_path = TempDir::new("").unwrap();
-		let key = VaultKey::new(&"password".into(), *ITERATIONS);
+		let key = VaultKey::new(&"password".into(), 1024);
 		let vault_file_contents = r#"{"crypto":{"cipher":"aes-128-ctr","cipherparams":{"iv":"758696c8dc6378ab9b25bb42790da2f5"},"ciphertext":"54eb50683717d41caaeb12ea969f2c159daada5907383f26f327606a37dc7168","kdf":"pbkdf2","kdfparams":{"c":1024,"dklen":32,"prf":"hmac-sha256","salt":"3c320fa566a1a7963ac8df68a19548d27c8f40bf92ef87c84594dcd5bbc402b6"},"mac":"9e5c2314c2a0781962db85611417c614bd6756666b6b1e93840f5b6ed895f003"}}"#;
 		let dir: PathBuf = temp_path.path().into();
 		let mut vault_file_path: PathBuf = dir.clone();
@@ -371,7 +365,7 @@ mod test {
 	fn read_vault_file_fails() {
 		// given
 		let temp_path = TempDir::new("").unwrap();
-		let key = VaultKey::new(&"password1".into(), *ITERATIONS);
+		let key = VaultKey::new(&"password1".into(), 1024);
 		let dir: PathBuf = temp_path.path().into();
 		let mut vault_file_path: PathBuf = dir.clone();
 		vault_file_path.push(VAULT_FILE_NAME);
@@ -400,7 +394,7 @@ mod test {
 	fn vault_directory_can_be_created() {
 		// given
 		let temp_path = TempDir::new("").unwrap();
-		let key = VaultKey::new(&"password".into(), *ITERATIONS);
+		let key = VaultKey::new(&"password".into(), 1024);
 		let dir: PathBuf = temp_path.path().into();
 
 		// when
@@ -420,7 +414,7 @@ mod test {
 	fn vault_directory_cannot_be_created_if_already_exists() {
 		// given
 		let temp_path = TempDir::new("").unwrap();
-		let key = VaultKey::new(&"password".into(), *ITERATIONS);
+		let key = VaultKey::new(&"password".into(), 1024);
 		let dir: PathBuf = temp_path.path().into();
 		let mut vault_dir = dir.clone();
 		vault_dir.push("vault");
@@ -437,7 +431,7 @@ mod test {
 	fn vault_directory_cannot_be_opened_if_not_exists() {
 		// given
 		let temp_path = TempDir::new("").unwrap();
-		let key = VaultKey::new(&"password".into(), *ITERATIONS);
+		let key = VaultKey::new(&"password".into(), 1024);
 		let dir: PathBuf = temp_path.path().into();
 
 		// when
```
```diff
@@ -15,12 +15,12 @@
 // along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.
 
 use std::collections::{BTreeMap, HashMap};
-use std::num::NonZeroU32;
 use std::mem;
 use std::path::PathBuf;
 use parking_lot::{Mutex, RwLock};
 use std::time::{Instant, Duration};
 
+use crypto::KEY_ITERATIONS;
 use random::Random;
 use ethkey::{self, Signature, Password, Address, Message, Secret, Public, KeyPair, ExtendedKeyPair};
 use accounts_dir::{KeyDirectory, VaultKeyDirectory, VaultKey, SetKeyError};
@@ -29,12 +29,6 @@ use presale::PresaleWallet;
 use json::{self, Uuid, OpaqueKeyFile};
 use {import, Error, SimpleSecretStore, SecretStore, SecretVaultRef, StoreAccountRef, Derivation, OpaqueSecret};
 
-lazy_static! {
-	static ref KEY_ITERATIONS: NonZeroU32 =
-		NonZeroU32::new(crypto::KEY_ITERATIONS as u32).expect("KEY_ITERATIONS > 0; qed");
-}
-
 /// Accounts store.
 pub struct EthStore {
 	store: EthMultiStore,
@@ -43,11 +37,11 @@ pub struct EthStore {
 impl EthStore {
 	/// Open a new accounts store with given key directory backend.
 	pub fn open(directory: Box<KeyDirectory>) -> Result<Self, Error> {
-		Self::open_with_iterations(directory, *KEY_ITERATIONS)
+		Self::open_with_iterations(directory, KEY_ITERATIONS as u32)
 	}
 
 	/// Open a new account store with given key directory backend and custom number of iterations.
-	pub fn open_with_iterations(directory: Box<KeyDirectory>, iterations: NonZeroU32) -> Result<Self, Error> {
+	pub fn open_with_iterations(directory: Box<KeyDirectory>, iterations: u32) -> Result<Self, Error> {
 		Ok(EthStore {
 			store: EthMultiStore::open_with_iterations(directory, iterations)?,
 		})
@@ -263,7 +257,7 @@ impl SecretStore for EthStore {
 /// Similar to `EthStore` but may store many accounts (with different passwords) for the same `Address`
 pub struct EthMultiStore {
 	dir: Box<KeyDirectory>,
-	iterations: NonZeroU32,
+	iterations: u32,
 	// order lock: cache, then vaults
 	cache: RwLock<BTreeMap<StoreAccountRef, Vec<SafeAccount>>>,
 	vaults: Mutex<HashMap<String, Box<VaultKeyDirectory>>>,
@@ -279,11 +273,11 @@ struct Timestamp {
 impl EthMultiStore {
 	/// Open new multi-accounts store with given key directory backend.
 	pub fn open(directory: Box<KeyDirectory>) -> Result<Self, Error> {
-		Self::open_with_iterations(directory, *KEY_ITERATIONS)
+		Self::open_with_iterations(directory, KEY_ITERATIONS as u32)
 	}
 
 	/// Open new multi-accounts store with given key directory backend and custom number of iterations for new keys.
-	pub fn open_with_iterations(directory: Box<KeyDirectory>, iterations: NonZeroU32) -> Result<Self, Error> {
+	pub fn open_with_iterations(directory: Box<KeyDirectory>, iterations: u32) -> Result<Self, Error> {
 		let store = EthMultiStore {
 			dir: directory,
 			vaults: Mutex::new(HashMap::new()),
```
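After these hunks the stores default to the crate-wide `crypto::KEY_ITERATIONS` constant cast to `u32`, instead of a `lazy_static` `NonZeroU32`. A hypothetical miniature of that default-versus-custom constructor pattern (the `Store` name and the constant's value of 10240 are assumptions for illustration, not taken from the diff):

```rust
// Mirrors crypto::KEY_ITERATIONS; the value 10240 is assumed here.
const KEY_ITERATIONS: usize = 10240;

struct Store {
    iterations: u32,
}

impl Store {
    /// Default constructor delegates with the crate-wide iteration count.
    fn open() -> Store {
        Store::open_with_iterations(KEY_ITERATIONS as u32)
    }

    /// Custom iteration count, now a plain u32.
    fn open_with_iterations(iterations: u32) -> Store {
        Store { iterations }
    }
}

fn main() {
    assert_eq!(Store::open().iterations, 10240);
    assert_eq!(Store::open_with_iterations(2000).iterations, 2000);
}
```

Dropping the `lazy_static` wrapper is also what allows `#[macro_use] extern crate lazy_static;` to be removed from `lib.rs` further down.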
```diff
@@ -15,7 +15,6 @@
 // along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.
 
 use std::fmt;
-use std::num::NonZeroU32;
 use serde::{Serialize, Serializer, Deserialize, Deserializer};
 use serde::de::{Visitor, Error as SerdeError};
 use super::{Error, Bytes};
@@ -109,7 +108,7 @@ impl<'a> Visitor<'a> for PrfVisitor {
 
 #[derive(Debug, PartialEq, Serialize, Deserialize)]
 pub struct Pbkdf2 {
-	pub c: NonZeroU32,
+	pub c: u32,
 	pub dklen: u32,
 	pub prf: Prf,
 	pub salt: Bytes,
```
```diff
@@ -41,11 +41,6 @@ impl VaultFile {
 mod test {
 	use serde_json;
 	use json::{VaultFile, Crypto, Cipher, Aes128Ctr, Kdf, Pbkdf2, Prf};
-	use std::num::NonZeroU32;
-
-	lazy_static! {
-		static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(1024).expect("1024 > 0; qed");
-	}
 
 	#[test]
 	fn to_and_from_json() {
@@ -56,7 +51,7 @@ mod test {
 			}),
 			ciphertext: "4d6938a1f49b7782".into(),
 			kdf: Kdf::Pbkdf2(Pbkdf2 {
-				c: *ITERATIONS,
+				c: 1024,
 				dklen: 32,
 				prf: Prf::HmacSha256,
 				salt: "b6a9338a7ccd39288a86dba73bfecd9101b4f3db9c9830e7c76afdbd4f6872e5".into(),
@@ -81,7 +76,7 @@ mod test {
 			}),
 			ciphertext: "4d6938a1f49b7782".into(),
 			kdf: Kdf::Pbkdf2(Pbkdf2 {
-				c: *ITERATIONS,
+				c: 1024,
 				dklen: 32,
 				prf: Prf::HmacSha256,
 				salt: "b6a9338a7ccd39288a86dba73bfecd9101b4f3db9c9830e7c76afdbd4f6872e5".into(),
```
```diff
@@ -106,11 +106,6 @@ mod test {
 	use serde_json;
 	use json::{VaultKeyFile, Version, Crypto, Cipher, Aes128Ctr, Kdf, Pbkdf2, Prf,
 		insert_vault_name_to_json_meta, remove_vault_name_from_json_meta};
-	use std::num::NonZeroU32;
-
-	lazy_static! {
-		static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(10240).expect("10240 > 0; qed");
-	}
 
 	#[test]
 	fn to_and_from_json() {
@@ -123,7 +118,7 @@ mod test {
 			}),
 			ciphertext: "4befe0a66d9a4b6fec8e39eb5c90ac5dafdeaab005fff1af665fd1f9af925c91".into(),
 			kdf: Kdf::Pbkdf2(Pbkdf2 {
-				c: *ITERATIONS,
+				c: 10240,
 				dklen: 32,
 				prf: Prf::HmacSha256,
 				salt: "f17731e84ecac390546692dbd4ccf6a3a2720dc9652984978381e61c28a471b2".into(),
@@ -136,7 +131,7 @@ mod test {
 			}),
 			ciphertext: "fef0d113d7576c1702daf380ad6f4c5408389e57991cae2a174facd74bd549338e1014850bddbab7eb486ff5f5c9c5532800c6a6d4db2be2212cd5cd3769244ab230e1f369e8382a9e6d7c0a".into(),
 			kdf: Kdf::Pbkdf2(Pbkdf2 {
-				c: *ITERATIONS,
+				c: 10240,
 				dklen: 32,
 				prf: Prf::HmacSha256,
 				salt: "aca82865174a82249a198814b263f43a631f272cbf7ed329d0f0839d259c652a".into(),
```
```diff
@@ -36,8 +36,6 @@ extern crate ethereum_types;
 extern crate ethkey as _ethkey;
 extern crate parity_wordlist;
 
-#[macro_use]
-extern crate lazy_static;
 #[macro_use]
 extern crate log;
 #[macro_use]
```
```diff
@@ -15,7 +15,6 @@
 // along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.
 
 use std::fs;
-use std::num::NonZeroU32;
 use std::path::Path;
 use json;
 use ethkey::{Address, Secret, KeyPair, Password};
@@ -59,8 +58,7 @@ impl PresaleWallet {
 		let mut derived_key = [0u8; 32];
 		let salt = pbkdf2::Salt(password.as_bytes());
 		let sec = pbkdf2::Secret(password.as_bytes());
-		let iter = NonZeroU32::new(2000).expect("2000 > 0; qed");
-		pbkdf2::sha256(iter, salt, sec, &mut derived_key);
+		pbkdf2::sha256(2000, salt, sec, &mut derived_key);
 
 		let mut key = vec![0; self.ciphertext.len()];
 		let len = crypto::aes::decrypt_128_cbc(&derived_key[0..16], &self.iv, &self.ciphertext, &mut key)
```
```diff
@@ -1,56 +0,0 @@
-// Copyright 2015-2018 Parity Technologies (UK) Ltd.
-// This file is part of Parity.
-
-// Parity is free software: you can redistribute it and/or modify
-// it under the terms of the GNU General Public License as published by
-// the Free Software Foundation, either version 3 of the License, or
-// (at your option) any later version.
-
-// Parity is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU General Public License for more details.
-
-// You should have received a copy of the GNU General Public License
-// along with Parity. If not, see <http://www.gnu.org/licenses/>.
-
-use std::fmt;
-
-use ethstore::{Error as SSError};
-use hardware_wallet::{Error as HardwareError};
-
-/// Signing error
-#[derive(Debug)]
-pub enum SignError {
-	/// Account is not unlocked
-	NotUnlocked,
-	/// Account does not exist.
-	NotFound,
-	/// Low-level hardware device error.
-	Hardware(HardwareError),
-	/// Low-level error from store
-	SStore(SSError),
-}
-
-impl fmt::Display for SignError {
-	fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
-		match *self {
-			SignError::NotUnlocked => write!(f, "Account is locked"),
-			SignError::NotFound => write!(f, "Account does not exist"),
-			SignError::Hardware(ref e) => write!(f, "{}", e),
-			SignError::SStore(ref e) => write!(f, "{}", e),
-		}
-	}
-}
-
-impl From<HardwareError> for SignError {
-	fn from(e: HardwareError) -> Self {
-		SignError::Hardware(e)
-	}
-}
-
-impl From<SSError> for SignError {
-	fn from(e: SSError) -> Self {
-		SignError::SStore(e)
-	}
-}
```
```diff
@@ -14,7 +14,7 @@ serde_json = "1.0"
 url = "1.2.0"
 matches = "0.1"
 parking_lot = "0.7"
-jsonrpc-core = "10.0.1"
-jsonrpc-ws-server = "10.0.1"
+jsonrpc-core = { git = "https://github.com/paritytech/jsonrpc.git", branch = "parity-2.2" }
+jsonrpc-ws-server = { git = "https://github.com/paritytech/jsonrpc.git", branch = "parity-2.2" }
 parity-rpc = { path = "../../rpc" }
 keccak-hash = "0.1"
```
```diff
@@ -1,4 +1,4 @@
-Note: Parity Ethereum 2.0 reached End-of-Life on 2018-11-15 (EOL).
+Note: Parity 2.0 reached End-of-Life on 2018-11-15 (EOL).
 
 ## Parity-Ethereum [v2.0.9](https://github.com/paritytech/parity-ethereum/releases/tag/v2.0.9) (2018-10-29)
 
```
```diff
@@ -1,31 +1,13 @@
-Note: Parity Ethereum 2.1 reached End-of-Life on 2019-01-16 (EOL).
-
-## Parity-Ethereum [v2.1.11](https://github.com/paritytech/parity-ethereum/releases/tag/v2.1.11) (2019-01-09)
-
-Parity-Ethereum 2.1.11-stable is a bugfix release that improves performance and stability.
-
-The full list of included changes:
-
-- Stable backports v2.1.11 ([#10112](https://github.com/paritytech/parity-ethereum/pull/10112))
-- Version: bump stable to v2.1.11
-- HF in POA Sokol (2019-01-04) ([#10077](https://github.com/paritytech/parity-ethereum/pull/10077))
-- Add --locked when running cargo ([#10107](https://github.com/paritytech/parity-ethereum/pull/10107))
-- Ethcore: update hardcoded headers ([#10123](https://github.com/paritytech/parity-ethereum/pull/10123))
-- Identity fix ([#10128](https://github.com/paritytech/parity-ethereum/pull/10128))
-- Update pwasm-utils to 0.6.1 ([#10134](https://github.com/paritytech/parity-ethereum/pull/10134))
-- Version: mark upgrade critical on kovan
-
 ## Parity-Ethereum [v2.1.10](https://github.com/paritytech/parity-ethereum/releases/tag/v2.1.10) (2018-12-14)
 Parity-Ethereum 2.1.10-stable is an important release that introduces Constantinople fork at block 7080000 on Mainnet.
 This release also contains a fix for chains using AuRa + EmptySteps. Read carefully if this applies to you.
 If you have a chain with`empty_steps` already running, some blocks most likely contain non-strict entries (unordered or duplicated empty steps). In this release`strict_empty_steps_transition` **is enabled by default at block 0** for any chain with `empty_steps`.
 If your network uses `empty_steps` you **must**:
 - plan a hard fork and change `strict_empty_steps_transition` to the desire fork block
 - update the clients of the whole network to 2.2.5-beta / 2.1.10-stable.
 If for some reason you don't want to do this please set`strict_empty_steps_transition` to `0xfffffffff` to disable it.
 
 The full list of included changes:
 
 - Backports for stable 2.1.10 ([#10046](https://github.com/paritytech/parity-ethereum/pull/10046))
 - Bump stable to 2.1.10 ([#10046](https://github.com/paritytech/parity-ethereum/pull/10046))
```
## Parity-Ethereum [v2.2.7](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.7) (2019-01-15)

Parity-Ethereum 2.2.7-stable is a consensus-relevant security release that reverts Constantinople on the Ethereum network. Upgrading is mandatory for Ethereum, and strongly recommended for other networks.

- **Consensus** - Ethereum Network: Pull Constantinople protocol upgrade on Ethereum ([#10189](https://github.com/paritytech/parity-ethereum/pull/10189))
- Read more: [Security Alert: Ethereum Constantinople Postponement](https://blog.ethereum.org/2019/01/15/security-alert-ethereum-constantinople-postponement/)
- **Networking** - All networks: Ping nodes from discovery ([#10167](https://github.com/paritytech/parity-ethereum/pull/10167))
- **Wasm** - Kovan Network: Update pwasm-utils to 0.6.1 ([#10134](https://github.com/paritytech/parity-ethereum/pull/10134))

_Note:_ This release marks Parity 2.2 as _stable_. All versions of Parity 2.1 have now reached _end of life_.

The full list of included changes:

- Backports for stable 2.2.7 ([#10163](https://github.com/paritytech/parity-ethereum/pull/10163))
- Version: bump stable to 2.2.7
- Version: mark 2.2 track stable
- Version: mark update critical on all networks
- Handle the case for contract creation on an empty but exist account with storage items ([#10065](https://github.com/paritytech/parity-ethereum/pull/10065))
- Fix _cannot recursively call into `Core`_ issue ([#10144](https://github.com/paritytech/parity-ethereum/pull/10144))
- Snap: fix path in script ([#10157](https://github.com/paritytech/parity-ethereum/pull/10157))
- Ping nodes from discovery ([#10167](https://github.com/paritytech/parity-ethereum/pull/10167))
- Version: bump fork blocks for kovan and foundation, mark releases non critical
- Pull constantinople on ethereum network ([#10189](https://github.com/paritytech/parity-ethereum/pull/10189))
## Parity-Ethereum [v2.2.6](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.6) (2019-01-10)

Parity-Ethereum 2.2.6-beta is a bugfix release that improves performance and stability.

The full list of included changes:

- Beta backports v2.2.6 ([#10113](https://github.com/paritytech/parity-ethereum/pull/10113))
- Version: bump beta to v2.2.6
- Fill transaction hash on ethGetLog of light client. ([#9938](https://github.com/paritytech/parity-ethereum/pull/9938))
- Fix pubsub new_blocks notifications to include all blocks ([#9987](https://github.com/paritytech/parity-ethereum/pull/9987))
- Finality: dont require chain head to be in the chain ([#10054](https://github.com/paritytech/parity-ethereum/pull/10054))
- Handle the case for contract creation on an empty but exist account with storage items ([#10065](https://github.com/paritytech/parity-ethereum/pull/10065))
- Autogen docs for the "Configuring Parity Ethereum" wiki page. ([#10067](https://github.com/paritytech/parity-ethereum/pull/10067))
- HF in POA Sokol (2019-01-04) ([#10077](https://github.com/paritytech/parity-ethereum/pull/10077))
- Add --locked when running cargo ([#10107](https://github.com/paritytech/parity-ethereum/pull/10107))
- Ethcore: update hardcoded headers ([#10123](https://github.com/paritytech/parity-ethereum/pull/10123))
- Identity fix ([#10128](https://github.com/paritytech/parity-ethereum/pull/10128))
- Update pwasm-utils to 0.6.1 ([#10134](https://github.com/paritytech/parity-ethereum/pull/10134))
- Make sure parent block is not in importing queue when importing ancient blocks ([#10138](https://github.com/paritytech/parity-ethereum/pull/10138))
- CI: re-enable snap publishing ([#10142](https://github.com/paritytech/parity-ethereum/pull/10142))
- HF in POA Core (2019-01-18) - Constantinople ([#10155](https://github.com/paritytech/parity-ethereum/pull/10155))
- Version: mark upgrade critical on kovan
## Parity-Ethereum [v2.2.5](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.5) (2018-12-14)

Parity-Ethereum 2.2.5-beta is an important release that introduces the Constantinople fork at block 7080000 on Mainnet.

This release also contains a fix for chains using AuRa + EmptySteps. Read carefully if this applies to you.

If you have a chain with `empty_steps` already running, some blocks most likely contain non-strict entries (unordered or duplicated empty steps). In this release, `strict_empty_steps_transition` **is enabled by default at block 0** for any chain with `empty_steps`.

If your network uses `empty_steps` you **must**:

- plan a hard fork and change `strict_empty_steps_transition` to the desired fork block
- update the clients of the whole network to 2.2.5-beta / 2.1.10-stable.

If for some reason you don't want to do this, please set `strict_empty_steps_transition` to `0xfffffffff` to disable it.
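For AuRa chains, this transition is configured in the engine section of the chain spec JSON. The fragment below is only an illustrative sketch: the validator address, step duration, and the fork block `5000000` are placeholders, and the exact key names should be checked against the spec format of your Parity Ethereum version:

```json
{
  "engine": {
    "authorityRound": {
      "params": {
        "stepDuration": "5",
        "validators": {
          "list": ["0x0000000000000000000000000000000000000001"]
        },
        "emptyStepsTransition": "0",
        "maximumEmptySteps": "2",
        "strictEmptyStepsTransition": "5000000"
      }
    }
  }
}
```

To opt out instead, the transition value would be set to `"0xfffffffff"` as described above.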
The full list of included changes:

- Backports for beta 2.2.5 ([#10047](https://github.com/paritytech/parity-ethereum/pull/10047))
- Bump beta to 2.2.5 ([#10047](https://github.com/paritytech/parity-ethereum/pull/10047))
- Fix empty steps ([#9939](https://github.com/paritytech/parity-ethereum/pull/9939))
- Prevent sending empty step message twice
- Prevent sending empty step and then block in the same step
- Don't accept double empty steps
- Do basic validation of self-sealed blocks
- Strict empty steps validation ([#10041](https://github.com/paritytech/parity-ethereum/pull/10041))
- Enables strict verification of empty steps - there can be no duplicates and empty steps should be ordered inside the seal.
- Note that authorities won't produce invalid seals after [#9939](https://github.com/paritytech/parity-ethereum/pull/9939), this PR just adds verification to the seal to prevent forging incorrect blocks and potentially causing consensus issues.
- This feature is enabled by default, so any AuRa + EmptySteps chain should set the strict_empty_steps_transition fork block number in their spec and upgrade to v2.2.5-beta or v2.1.10-stable.
- ethcore: enable constantinople on ethereum ([#10031](https://github.com/paritytech/parity-ethereum/pull/10031))
- ethcore: change blockreward to 2e18 for foundation after constantinople
- ethcore: delay diff bomb by 2e6 blocks for foundation after constantinople
- ethcore: enable eip-{145,1014,1052,1283} for foundation after constantinople
- Change test miner max memory to malloc reports. ([#10024](https://github.com/paritytech/parity-ethereum/pull/10024))
- Fix: test corpus_inaccessible panic ([#10019](https://github.com/paritytech/parity-ethereum/pull/10019))
## Parity-Ethereum [v2.2.2](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.2) (2018-11-29)

Parity-Ethereum 2.2.2-beta is an exciting release. Among others, it improves sync performance, peering stability, block propagation, and transaction propagation times. Also, a warp-sync no longer removes existing blocks from the database, but rather reuses locally available information to decrease sync times and reduce required bandwidth.

Before upgrading to 2.2.2, please also verify the validity of your chain specs. Parity Ethereum now denies unknown fields in the specification. To do this, use the chainspec tool:

```
cargo build --release -p chainspec
./target/release/chainspec /path/to/spec.json
```

Last but not least, JSONRPC APIs which are not yet accepted as an EIP in the `eth`, `personal`, or `web3` namespace are now considered experimental, as their final specification might change in the future. These APIs have to be manually enabled by explicitly passing `--jsonrpc-experimental`.

The full list of included changes:
- Backports For beta 2.2.2 ([#9976](https://github.com/paritytech/parity-ethereum/pull/9976))
- Version: bump beta to 2.2.2
- Add experimental RPCs flag ([#9928](https://github.com/paritytech/parity-ethereum/pull/9928))
- Keep existing blocks when restoring a Snapshot ([#8643](https://github.com/paritytech/parity-ethereum/pull/8643))
- Rename db_restore => client
- First step: make it compile!
- Second step: working implementation!
- Refactoring
- Fix tests
- Migrate ancient blocks interacting backward
- Early return in block migration if snapshot is aborted
- Remove RwLock getter (PR Grumble I)
- Remove dependency on `Client`: only used Traits
- Add test for recovering aborted snapshot recovery
- Add test for migrating old blocks
- Release RwLock earlier
- Revert Cargo.lock
- Update _update ancient block_ logic: set local in `commit`
- Update typo in ethcore/src/snapshot/service.rs
- Adjust requests costs for light client ([#9925](https://github.com/paritytech/parity-ethereum/pull/9925))
- Pip Table Cost relative to average peers instead of max peers
- Add tracing in PIP new_cost_table
- Update stat peer_count
- Use number of leeching peers for Light serve costs
- Fix test::light_params_load_share_depends_on_max_peers (wrong type)
- Remove (now) useless test
- Remove `load_share` from LightParams.Config
- Add LEECHER_COUNT_FACTOR
- Pr Grumble: u64 to u32 for f64 casting
- Prevent u32 overflow for avg_peer_count
- Add tests for LightSync::Statistics
- Fix empty steps ([#9939](https://github.com/paritytech/parity-ethereum/pull/9939))
- Don't send empty step twice or empty step then block.
- Perform basic validation of locally sealed blocks.
- Don't include empty step twice.
- Prevent silent errors in daemon mode, closes [#9367](https://github.com/paritytech/parity-ethereum/issues/9367) ([#9946](https://github.com/paritytech/parity-ethereum/pull/9946))
- Fix a deadlock ([#9952](https://github.com/paritytech/parity-ethereum/pull/9952))
- Update informant:
- Decimal in Mgas/s
- Print every 5s (not randomly between 5s and 10s)
- Fix dead-lock in `blockchain.rs`
- Update locks ordering
- Fix light client informant while syncing ([#9932](https://github.com/paritytech/parity-ethereum/pull/9932))
- Add `is_idle` to LightSync to check importing status
- Use SyncStateWrapper to make sure is_idle gets updates
- Update is_major_import to use verified queue size as well
- Add comment for `is_idle`
- Add Debug to `SyncStateWrapper`
- `fn get` -> `fn into_inner`
- Ci: rearrange pipeline by logic ([#9970](https://github.com/paritytech/parity-ethereum/pull/9970))
- Ci: rearrange pipeline by logic
- Ci: rename docs script
- Fix docker build ([#9971](https://github.com/paritytech/parity-ethereum/pull/9971))
- Deny unknown fields for chainspec ([#9972](https://github.com/paritytech/parity-ethereum/pull/9972))
- Add deny_unknown_fields to chainspec
- Add tests and fix existing one
- Remove serde_ignored dependency for chainspec
- Fix rpc test eth chain spec
- Fix starting_nonce_test spec
- Improve block and transaction propagation ([#9954](https://github.com/paritytech/parity-ethereum/pull/9954))
- Refactor sync to add priority tasks.
- Send priority tasks notifications.
- Propagate blocks, optimize transactions.
- Implement transaction propagation. Use sync_channel.
- Tone down info.
- Prevent deadlock by not waiting forever for sync lock.
- Fix lock order.
- Don't use sync_channel to prevent deadlocks.
- Fix tests.
- Fix unstable peers and slowness in sync ([#9967](https://github.com/paritytech/parity-ethereum/pull/9967))
- Don't sync all peers after each response
- Update formating
- Fix tests: add `continue_sync` to `Sync_step`
- Update ethcore/sync/src/chain/mod.rs
- Fix rpc middlewares
- Fix Cargo.lock
- Json: resolve merge in spec
- Rpc: fix starting_nonce_test
- Ci: allow nightl job to fail
## Parity-Ethereum [v2.2.1](https://github.com/paritytech/parity-ethereum/releases/tag/v2.2.1) (2018-11-15)

Parity-Ethereum 2.2.1-beta is the first v2.2 release, and might introduce features that break previous workflows, among others:

- Prevent zero network ID ([#9763](https://github.com/paritytech/parity-ethereum/pull/9763)) and drop support for Olympic testnet ([#9801](https://github.com/paritytech/parity-ethereum/pull/9801)): The Olympic testnet has been dead for years and never used a chain ID, only network ID zero. Parity Ethereum now prevents the network ID from being zero, thus Olympic support is dropped. Make sure to choose positive non-zero network IDs in the future.
- Multithreaded snapshot creation ([#9239](https://github.com/paritytech/parity-ethereum/pull/9239)): adds a CLI argument `--snapshot-threads` which specifies the number of threads. This helps improve the performance of full nodes that wish to provide warp-snapshots for the network. The gain in performance comes with a slight drawback of increased snapshot size.
- Expose config max-round-blocks-to-import ([#9439](https://github.com/paritytech/parity-ethereum/pull/9439)): Parity Ethereum imports blocks in rounds. If, at the end of any round, the queue is not empty, we consider it to be _importing_ and won't notify pubsub. On large re-orgs (10+ blocks), this is possible. The default `max_round_blocks_to_import` is increased to 12 and configurable via the `--max-round-blocks-to-import` CLI flag. With unstable network conditions, it is advised to increase the number. This shouldn't have any noticeable performance impact unless the number is set to a really large value.
- Increase Gas-floor-target and Gas Cap ([#9564](https://github.com/paritytech/parity-ethereum/pull/9564)): the default value for the gas floor target is `8_000_000` and for the gas cap `10_000_000`, similar to Geth 1.8.15+.
- Produce portable binaries ([#9725](https://github.com/paritytech/parity-ethereum/pull/9725)): we now produce portable binaries, but this may incur some performance degradation. For ultimate performance it's now better to compile Parity Ethereum from source with the `PORTABLE=OFF` environment variable.
- RPC: `parity_allTransactionHashes` ([#9745](https://github.com/paritytech/parity-ethereum/pull/9745)): Get all pending transactions from the queue with the high-performance `parity_allTransactionHashes` RPC method.
- Support `eth_chainId` RPC method ([#9783](https://github.com/paritytech/parity-ethereum/pull/9783)): implements EIP-695 to get the chain ID via RPC.
- AuRa: finalize blocks ([#9692](https://github.com/paritytech/parity-ethereum/pull/9692)): The AuRa engine was updated to emit ancestry actions to finalize blocks. The full client stores block finality in the database, the engine builds finality from an ancestry of `ExtendedHeader`; `is_epoch_end` was updated to take a vec of recently finalized headers; `is_epoch_end_light` was added which maintains the previous interface and is used by the light client since the client itself doesn't track finality.
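As a quick illustration of the `eth_chainId` method described above: per EIP-695 it takes no parameters and returns the chain ID as a hex-encoded quantity. A request payload looks like this:

```json
{"jsonrpc": "2.0", "method": "eth_chainId", "params": [], "id": 1}
```

A node on the Foundation (mainnet) chain, whose chain ID is 1, would answer:

```json
{"jsonrpc": "2.0", "id": 1, "result": "0x1"}
```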
The full list of included changes:
|
|
||||||
|
|
||||||
- Backport to parity 2.2.1 beta ([#9905](https://github.com/paritytech/parity-ethereum/pull/9905))
|
|
||||||
- Bump version to 2.2.1
|
|
||||||
- Fix: Intermittent failing CI due to addr in use ([#9885](https://github.com/paritytech/parity-ethereum/pull/9885))
|
|
||||||
- Fix Parity not closing on Ctrl-C ([#9886](https://github.com/paritytech/parity-ethereum/pull/9886))
|
|
||||||
- Fix json tracer overflow ([#9873](https://github.com/paritytech/parity-ethereum/pull/9873))
|
|
||||||
- Fix docker script ([#9854](https://github.com/paritytech/parity-ethereum/pull/9854))
|
|
||||||
- Add hardcoded headers for light client ([#9907](https://github.com/paritytech/parity-ethereum/pull/9907))
|
|
||||||
- Gitlab-ci: make android release build succeed ([#9743](https://github.com/paritytech/parity-ethereum/pull/9743))
|
|
||||||
- Allow to seal work on latest block ([#9876](https://github.com/paritytech/parity-ethereum/pull/9876))
|
|
||||||
- Remove rust-toolchain file ([#9906](https://github.com/paritytech/parity-ethereum/pull/9906))
|
|
||||||
- Light-fetch: Differentiate between out-of-gas/manual throw and use required gas from response on failure ([#9824](https://github.com/paritytech/parity-ethereum/pull/9824))
|
|
||||||
- Eip-712 implementation ([#9631](https://github.com/paritytech/parity-ethereum/pull/9631))
|
|
||||||
- Eip-191 implementation ([#9701](https://github.com/paritytech/parity-ethereum/pull/9701))
|
|
||||||
- Simplify cargo audit ([#9918](https://github.com/paritytech/parity-ethereum/pull/9918))
|
|
||||||
- Fix performance issue importing Kovan blocks ([#9914](https://github.com/paritytech/parity-ethereum/pull/9914))
|
|
||||||
- Ci: nuke the gitlab caches ([#9855](https://github.com/paritytech/parity-ethereum/pull/9855))
|
|
||||||
- Backports to parity beta 2.2.0 ([#9820](https://github.com/paritytech/parity-ethereum/pull/9820))
|
|
||||||
- Ci: remove failing tests for android, windows, and macos ([#9788](https://github.com/paritytech/parity-ethereum/pull/9788))
|
|
||||||
- Implement NoProof for json tests and update tests reference ([#9814](https://github.com/paritytech/parity-ethereum/pull/9814))
|
|
||||||
- Move state root verification before gas used ([#9841](https://github.com/paritytech/parity-ethereum/pull/9841))
|
|
||||||
- Classic.json Bootnode Update ([#9828](https://github.com/paritytech/parity-ethereum/pull/9828))
|
|
||||||
- Rpc: parity_allTransactionHashes ([#9745](https://github.com/paritytech/parity-ethereum/pull/9745))
|
|
||||||
- Revert "prevent zero networkID ([#9763](https://github.com/paritytech/parity-ethereum/pull/9763))" ([#9815](https://github.com/paritytech/parity-ethereum/pull/9815))
|
|
||||||
- Allow zero chain id in EIP155 signing process ([#9792](https://github.com/paritytech/parity-ethereum/pull/9792))
|
|
||||||
- Add readiness check for docker container ([#9804](https://github.com/paritytech/parity-ethereum/pull/9804))
|
|
||||||
- Insert dev account before unlocking ([#9813](https://github.com/paritytech/parity-ethereum/pull/9813))
|
|
||||||
- Removed "rustup" & added new runner tag ([#9731](https://github.com/paritytech/parity-ethereum/pull/9731))
|
|
||||||
- Expose config max-round-blocks-to-import ([#9439](https://github.com/paritytech/parity-ethereum/pull/9439))
|
|
||||||
- Aura: finalize blocks ([#9692](https://github.com/paritytech/parity-ethereum/pull/9692))
|
|
||||||
- Sync: retry different peer after empty subchain heads response ([#9753](https://github.com/paritytech/parity-ethereum/pull/9753))
|
|
||||||
- Fix(light-rpc/parity) : Remove unused client ([#9802](https://github.com/paritytech/parity-ethereum/pull/9802))
|
|
||||||
- Drops support for olympic testnet, closes [#9800](https://github.com/paritytech/parity-ethereum/issues/9800) ([#9801](https://github.com/paritytech/parity-ethereum/pull/9801))
|
|
||||||
- Replace `tokio_core` with `tokio` (`ring` -> 0.13) ([#9657](https://github.com/paritytech/parity-ethereum/pull/9657))
|
|
||||||
- Support eth_chainId RPC method ([#9783](https://github.com/paritytech/parity-ethereum/pull/9783))
|
|
||||||
- Ethcore: bump ropsten forkblock checkpoint ([#9775](https://github.com/paritytech/parity-ethereum/pull/9775))
|
|
||||||
- Docs: changelogs for 2.0.8 and 2.1.3 ([#9758](https://github.com/paritytech/parity-ethereum/pull/9758))
|
|
||||||
- Prevent zero networkID ([#9763](https://github.com/paritytech/parity-ethereum/pull/9763))
|
|
||||||
- Skip seal fields count check when --no-seal-check is used ([#9757](https://github.com/paritytech/parity-ethereum/pull/9757))
|
|
||||||
- Aura: fix panic on extra_info with unsealed block ([#9755](https://github.com/paritytech/parity-ethereum/pull/9755))
|
|
||||||
- Docs: update changelogs ([#9742](https://github.com/paritytech/parity-ethereum/pull/9742))
|
|
||||||
- Removed extra assert in generation_session_is_removed_when_succeeded ([#9738](https://github.com/paritytech/parity-ethereum/pull/9738))
|
|
||||||
- Make checkpoint_storage_at use plain loop instead of recursion ([#9734](https://github.com/paritytech/parity-ethereum/pull/9734))
|
|
||||||
- Use signed 256-bit integer for sstore gas refund substate ([#9746](https://github.com/paritytech/parity-ethereum/pull/9746))
|
|
||||||
- Heads ref not present for branches beta and stable ([#9741](https://github.com/paritytech/parity-ethereum/pull/9741))
|
|
||||||
- Add Callisto support ([#9534](https://github.com/paritytech/parity-ethereum/pull/9534))
|
|
||||||
- Add --force to cargo audit install script ([#9735](https://github.com/paritytech/parity-ethereum/pull/9735))
|
|
||||||
- Remove unused expired value from Handshake ([#9732](https://github.com/paritytech/parity-ethereum/pull/9732))
|
|
||||||
- Add hardcoded headers ([#9730](https://github.com/paritytech/parity-ethereum/pull/9730))
|
|
||||||
- Produce portable binaries ([#9725](https://github.com/paritytech/parity-ethereum/pull/9725))
|
|
||||||
- Gitlab ci: releasable_branches: change variables condition to schedule ([#9729](https://github.com/paritytech/parity-ethereum/pull/9729))
|
|
||||||
- Update a few parity-common dependencies ([#9663](https://github.com/paritytech/parity-ethereum/pull/9663))
|
|
||||||
- Hf in POA Core (2018-10-22) ([#9724](https://github.com/paritytech/parity-ethereum/pull/9724))
|
|
||||||
- Schedule nightly builds ([#9717](https://github.com/paritytech/parity-ethereum/pull/9717))
|
|
||||||
- Fix ancient blocks sync ([#9531](https://github.com/paritytech/parity-ethereum/pull/9531))
|
|
||||||
- Ci: Skip docs job for nightly ([#9693](https://github.com/paritytech/parity-ethereum/pull/9693))
|
|
||||||
- Fix (light/provider) : Make `read_only executions` read-only ([#9591](https://github.com/paritytech/parity-ethereum/pull/9591))
|
|
||||||
- Ethcore: fix detection of major import ([#9552](https://github.com/paritytech/parity-ethereum/pull/9552))
|
|
||||||
- Return 0 on error ([#9705](https://github.com/paritytech/parity-ethereum/pull/9705))
|
|
||||||
- Ethcore: delay ropsten hardfork ([#9704](https://github.com/paritytech/parity-ethereum/pull/9704))
|
|
||||||
- Make instantSeal engine backwards compatible, closes [#9696](https://github.com/paritytech/parity-ethereum/issues/9696) ([#9700](https://github.com/paritytech/parity-ethereum/pull/9700))
|
|
||||||
- Implement CREATE2 gas changes and fix some potential overflowing ([#9694](https://github.com/paritytech/parity-ethereum/pull/9694))
|
|
||||||
- Don't hash the init_code of CREATE. ([#9688](https://github.com/paritytech/parity-ethereum/pull/9688))
|
|
||||||
- Ethcore: minor optimization of modexp by using LR exponentiation ([#9697](https://github.com/paritytech/parity-ethereum/pull/9697))
|
|
||||||
- Removed redundant clone before each block import ([#9683](https://github.com/paritytech/parity-ethereum/pull/9683))
|
|
||||||
- Add Foundation Bootnodes ([#9666](https://github.com/paritytech/parity-ethereum/pull/9666))
|
|
||||||
- Docker: run as parity user ([#9689](https://github.com/paritytech/parity-ethereum/pull/9689))
|
|
||||||
- Ethcore: mcip3 block reward contract ([#9605](https://github.com/paritytech/parity-ethereum/pull/9605))
|
|
||||||
- Verify block syncing responses against requests ([#9670](https://github.com/paritytech/parity-ethereum/pull/9670))
|
|
||||||
- Add a new RPC `parity_submitWorkDetail` similar `eth_submitWork` but return block hash ([#9404](https://github.com/paritytech/parity-ethereum/pull/9404))
|
|
||||||
- Resumable EVM and heap-allocated callstack ([#9360](https://github.com/paritytech/parity-ethereum/pull/9360))
|
|
||||||
- Update parity-wordlist library ([#9682](https://github.com/paritytech/parity-ethereum/pull/9682))
|
|
||||||
- Ci: Remove unnecessary pipes ([#9681](https://github.com/paritytech/parity-ethereum/pull/9681))
|
|
||||||
- Test.sh: use cargo --target for platforms other than linux, win or mac ([#9650](https://github.com/paritytech/parity-ethereum/pull/9650))
|
|
||||||
- Ci: fix push script ([#9679](https://github.com/paritytech/parity-ethereum/pull/9679))
|
|
||||||
- Hardfork the testnets ([#9562](https://github.com/paritytech/parity-ethereum/pull/9562))
|
|
||||||
- Calculate sha3 instead of sha256 for push-release. ([#9673](https://github.com/paritytech/parity-ethereum/pull/9673))
|
|
||||||
- Ethcore-io retries failed work steal ([#9651](https://github.com/paritytech/parity-ethereum/pull/9651))
|
|
||||||
- Fix(light_fetch): avoid race with BlockNumber::Latest ([#9665](https://github.com/paritytech/parity-ethereum/pull/9665))
|
|
||||||
- Test fix for windows cache name... ([#9658](https://github.com/paritytech/parity-ethereum/pull/9658))
|
|
||||||
- Refactor(fetch) : light use only one `DNS` thread ([#9647](https://github.com/paritytech/parity-ethereum/pull/9647))
|
|
||||||
- Ethereum libfuzzer integration small change ([#9547](https://github.com/paritytech/parity-ethereum/pull/9547))
|
|
||||||
- Cli: remove reference to --no-ui in --unlock flag help ([#9616](https://github.com/paritytech/parity-ethereum/pull/9616))
|
|
||||||
- Remove master from releasable branches ([#9655](https://github.com/paritytech/parity-ethereum/pull/9655))
|
|
||||||
- Ethcore/VerificationQueue don't spawn up extra `worker-threads` when explictly specified not to ([#9620](https://github.com/paritytech/parity-ethereum/pull/9620))
|
|
||||||
- Rpc: parity_getBlockReceipts ([#9527](https://github.com/paritytech/parity-ethereum/pull/9527))
|
|
||||||
- Remove unused dependencies ([#9589](https://github.com/paritytech/parity-ethereum/pull/9589))
|
|
||||||
- Ignore key_server_cluster randomly failing tests ([#9639](https://github.com/paritytech/parity-ethereum/pull/9639))
|
|
||||||
- Ethcore: handle vm exception when estimating gas ([#9615](https://github.com/paritytech/parity-ethereum/pull/9615))
|
|
||||||
- Fix bad-block reporting no reason ([#9638](https://github.com/paritytech/parity-ethereum/pull/9638))
|
|
||||||
- Use static call and apparent value transfer for block reward contract code ([#9603](https://github.com/paritytech/parity-ethereum/pull/9603))
|
|
||||||
- Hf in POA Sokol (2018-09-19) ([#9607](https://github.com/paritytech/parity-ethereum/pull/9607))
|
|
||||||
- Bump smallvec to 0.6 in ethcore-light, ethstore and whisper ([#9588](https://github.com/paritytech/parity-ethereum/pull/9588))
|
|
||||||
- Add constantinople conf to EvmTestClient. ([#9570](https://github.com/paritytech/parity-ethereum/pull/9570))
|
|
||||||
- Fix(network): don't disconnect reserved peers ([#9608](https://github.com/paritytech/parity-ethereum/pull/9608))
|
|
||||||
- Fix failing node-table tests on mac os, closes [#9632](https://github.com/paritytech/parity-ethereum/issues/9632) ([#9633](https://github.com/paritytech/parity-ethereum/pull/9633))
|
|
||||||
- Update ropsten.json ([#9602](https://github.com/paritytech/parity-ethereum/pull/9602))
|
|
||||||
- Simplify ethcore errors by removing BlockImportError ([#9593](https://github.com/paritytech/parity-ethereum/pull/9593))
|
|
||||||
- Fix windows compilation, replaces [#9561](https://github.com/paritytech/parity-ethereum/issues/9561) ([#9621](https://github.com/paritytech/parity-ethereum/pull/9621))
|
|
||||||
- Master: rpc-docs set github token ([#9610](https://github.com/paritytech/parity-ethereum/pull/9610))
|
|
||||||
- Docs: add changelogs for 1.11.10, 1.11.11, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.1.0, and 2.1.1 ([#9554](https://github.com/paritytech/parity-ethereum/pull/9554))
|
|
||||||
- Docs(rpc): annotate tag with the provided message ([#9601](https://github.com/paritytech/parity-ethereum/pull/9601))
|
|
||||||
- Ci: fix regex roll_eyes ([#9597](https://github.com/paritytech/parity-ethereum/pull/9597))
- Remove snapcraft clean ([#9585](https://github.com/paritytech/parity-ethereum/pull/9585))
- Add snapcraft package image (master) ([#9584](https://github.com/paritytech/parity-ethereum/pull/9584))
- Docs(rpc): push the branch along with tags ([#9578](https://github.com/paritytech/parity-ethereum/pull/9578))
- Fix typo for jsonrpc-threads flag ([#9574](https://github.com/paritytech/parity-ethereum/pull/9574))
- Fix informant compile ([#9571](https://github.com/paritytech/parity-ethereum/pull/9571))
- Added Ropsten bootnodes ([#9569](https://github.com/paritytech/parity-ethereum/pull/9569))
- Increase gas-floor-target and gas cap ([#9564](https://github.com/paritytech/parity-ethereum/pull/9564))
- While working on the platform tests, make them non-breaking ([#9563](https://github.com/paritytech/parity-ethereum/pull/9563))
- Improve P2P discovery ([#9526](https://github.com/paritytech/parity-ethereum/pull/9526))
- Move Dockerfile for Android build container to scripts repo ([#9560](https://github.com/paritytech/parity-ethereum/pull/9560))
- Simultaneous platform tests WIP ([#9557](https://github.com/paritytech/parity-ethereum/pull/9557))
- Update ethabi-derive, serde, serde_json, serde_derive, syn && quote ([#9553](https://github.com/paritytech/parity-ethereum/pull/9553))
- Ci: fix rpc docs generation 2 ([#9550](https://github.com/paritytech/parity-ethereum/pull/9550))
- Ci: always run build pipelines for win, mac, linux, and android ([#9537](https://github.com/paritytech/parity-ethereum/pull/9537))
- Multithreaded snapshot creation ([#9239](https://github.com/paritytech/parity-ethereum/pull/9239))
- New ethabi ([#9511](https://github.com/paritytech/parity-ethereum/pull/9511))
- Remove initial token for WS ([#9545](https://github.com/paritytech/parity-ethereum/pull/9545))
- net_version caches network_id to avoid redundantly acquiring the sync read lock ([#9544](https://github.com/paritytech/parity-ethereum/pull/9544))
- Correct before_script for nightly build versions ([#9543](https://github.com/paritytech/parity-ethereum/pull/9543))
- Deps: bump kvdb-rocksdb to 0.1.4 ([#9539](https://github.com/paritytech/parity-ethereum/pull/9539))
- State: test that when contract creation fails, old storage values re-appear ([#9532](https://github.com/paritytech/parity-ethereum/pull/9532))
- Allow dropping light client RPC query with no results ([#9318](https://github.com/paritytech/parity-ethereum/pull/9318))
- Bump master to 2.2.0 ([#9517](https://github.com/paritytech/parity-ethereum/pull/9517))
- Enable all Constantinople hard fork changes in constantinople_test.json ([#9505](https://github.com/paritytech/parity-ethereum/pull/9505))
- [Light] Validate `account balance` before importing transactions ([#9417](https://github.com/paritytech/parity-ethereum/pull/9417))
- In CREATE the memory calculation is the same as for CREATE2, because the additional parameter was popped beforehand ([#9522](https://github.com/paritytech/parity-ethereum/pull/9522))
- Update patricia trie to 0.2.2 ([#9525](https://github.com/paritytech/parity-ethereum/pull/9525))
- Replace hardcoded JSON with serde json! macro ([#9489](https://github.com/paritytech/parity-ethereum/pull/9489))
- Fix typo in version string ([#9516](https://github.com/paritytech/parity-ethereum/pull/9516))
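The `net_version` change listed above caches the network id so the hot RPC path never re-acquires the sync read lock. A minimal sketch of that pattern (all type and field names here are invented for illustration, not Parity's actual RPC types):

```rust
use std::sync::RwLock;

struct SyncState {
    network_id: u64,
}

#[allow(dead_code)]
struct NetRpc {
    sync: RwLock<SyncState>,
    // Read once at construction; the network id never changes after startup.
    cached_network_id: u64,
}

impl NetRpc {
    fn new(sync: SyncState) -> Self {
        let cached_network_id = sync.network_id;
        NetRpc { sync: RwLock::new(sync), cached_network_id }
    }

    fn net_version(&self) -> String {
        // No `self.sync.read()` here: the cached copy avoids the lock entirely.
        self.cached_network_id.to_string()
    }
}

fn main() {
    let rpc = NetRpc::new(SyncState { network_id: 1 });
    println!("{}", rpc.net_version());
}
```

The trade-off is the usual one for memoization behind a lock: it is only sound because the cached value is immutable for the process lifetime.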
```diff
@@ -14,19 +14,9 @@ parking_lot = "0.7"
 primal = "0.2.3"
 
 [dev-dependencies]
-criterion = "0.2"
-rustc-hex = "1.0"
-serde_json = "1.0"
 tempdir = "0.3"
-
-[features]
-default = []
-bench = []
+criterion = "0.2"
 
 [[bench]]
 name = "basic"
 harness = false
-
-[[bench]]
-name = "progpow"
-harness = false
```
```diff
@@ -40,28 +40,28 @@ criterion_main!(basic);
 fn bench_light_compute_memmap(b: &mut Criterion) {
     use std::env;
 
-    let builder = NodeCacheBuilder::new(OptimizeFor::Memory, u64::max_value());
+    let builder = NodeCacheBuilder::new(OptimizeFor::Memory);
     let light = builder.light(&env::temp_dir(), 486382);
 
-    b.bench_function("bench_light_compute_memmap", move |b| b.iter(|| light.compute(&HASH, NONCE, u64::max_value())));
+    b.bench_function("bench_light_compute_memmap", move |b| b.iter(|| light.compute(&HASH, NONCE)));
 }
 
 fn bench_light_compute_memory(b: &mut Criterion) {
     use std::env;
 
-    let builder = NodeCacheBuilder::new(OptimizeFor::Cpu, u64::max_value());
+    let builder = NodeCacheBuilder::new(OptimizeFor::Cpu);
     let light = builder.light(&env::temp_dir(), 486382);
 
-    b.bench_function("bench_light_compute_memmap", move |b| b.iter(|| light.compute(&HASH, NONCE, u64::max_value())));
+    b.bench_function("bench_light_compute_memmap", move |b| b.iter(|| light.compute(&HASH, NONCE)));
 }
 
 fn bench_light_new_round_trip_memmap(b: &mut Criterion) {
     use std::env;
 
     b.bench_function("bench_light_compute_memmap", move |b| b.iter(|| {
-        let builder = NodeCacheBuilder::new(OptimizeFor::Memory, u64::max_value());
+        let builder = NodeCacheBuilder::new(OptimizeFor::Memory);
         let light = builder.light(&env::temp_dir(), 486382);
-        light.compute(&HASH, NONCE, u64::max_value());
+        light.compute(&HASH, NONCE);
     }));
 }
 
@@ -69,9 +69,9 @@ fn bench_light_new_round_trip_memory(b: &mut Criterion) {
     use std::env;
 
     b.bench_function("bench_light_compute_memmap", move |b| b.iter(|| {
-        let builder = NodeCacheBuilder::new(OptimizeFor::Cpu, u64::max_value());
+        let builder = NodeCacheBuilder::new(OptimizeFor::Cpu);
         let light = builder.light(&env::temp_dir(), 486382);
-        light.compute(&HASH, NONCE, u64::max_value());
+        light.compute(&HASH, NONCE);
     }));
 }
@@ -81,15 +81,15 @@ fn bench_light_from_file_round_trip_memory(b: &mut Criterion) {
     let dir = env::temp_dir();
     let height = 486382;
     {
-        let builder = NodeCacheBuilder::new(OptimizeFor::Cpu, u64::max_value());
+        let builder = NodeCacheBuilder::new(OptimizeFor::Cpu);
         let mut dummy = builder.light(&dir, height);
         dummy.to_file().unwrap();
     }
 
     b.bench_function("bench_light_compute_memmap", move |b| b.iter(|| {
-        let builder = NodeCacheBuilder::new(OptimizeFor::Cpu, u64::max_value());
+        let builder = NodeCacheBuilder::new(OptimizeFor::Cpu);
         let light = builder.light_from_file(&dir, 486382).unwrap();
-        light.compute(&HASH, NONCE, u64::max_value());
+        light.compute(&HASH, NONCE);
     }));
 }
@@ -100,14 +100,14 @@ fn bench_light_from_file_round_trip_memmap(b: &mut Criterion) {
     let height = 486382;
 
     {
-        let builder = NodeCacheBuilder::new(OptimizeFor::Memory, u64::max_value());
+        let builder = NodeCacheBuilder::new(OptimizeFor::Memory);
         let mut dummy = builder.light(&dir, height);
         dummy.to_file().unwrap();
     }
 
     b.bench_function("bench_light_compute_memmap", move |b| b.iter(|| {
-        let builder = NodeCacheBuilder::new(OptimizeFor::Memory, u64::max_value());
+        let builder = NodeCacheBuilder::new(OptimizeFor::Memory);
         let light = builder.light_from_file(&dir, 486382).unwrap();
-        light.compute(&HASH, NONCE, u64::max_value());
+        light.compute(&HASH, NONCE);
     }));
 }
```
```diff
@@ -1,86 +0,0 @@
-#[macro_use]
-extern crate criterion;
-extern crate ethash;
-extern crate rustc_hex;
-extern crate tempdir;
-
-use criterion::Criterion;
-use ethash::progpow;
-
-use tempdir::TempDir;
-use rustc_hex::FromHex;
-use ethash::{NodeCacheBuilder, OptimizeFor};
-use ethash::compute::light_compute;
-
-fn bench_hashimoto_light(c: &mut Criterion) {
-    let builder = NodeCacheBuilder::new(OptimizeFor::Memory, u64::max_value());
-    let tempdir = TempDir::new("").unwrap();
-    let light = builder.light(&tempdir.path(), 1);
-    let h = FromHex::from_hex("c9149cc0386e689d789a1c2f3d5d169a61a6218ed30e74414dc736e442ef3d1f").unwrap();
-    let mut hash = [0; 32];
-    hash.copy_from_slice(&h);
-
-    c.bench_function("hashimoto_light", move |b| {
-        b.iter(|| light_compute(&light, &hash, 0))
-    });
-}
-
-fn bench_progpow_light(c: &mut Criterion) {
-    let builder = NodeCacheBuilder::new(OptimizeFor::Memory, u64::max_value());
-    let tempdir = TempDir::new("").unwrap();
-    let cache = builder.new_cache(tempdir.into_path(), 0);
-
-    let h = FromHex::from_hex("c9149cc0386e689d789a1c2f3d5d169a61a6218ed30e74414dc736e442ef3d1f").unwrap();
-    let mut hash = [0; 32];
-    hash.copy_from_slice(&h);
-
-    c.bench_function("progpow_light", move |b| {
-        b.iter(|| {
-            let c_dag = progpow::generate_cdag(cache.as_ref());
-            progpow::progpow(
-                hash,
-                0,
-                0,
-                cache.as_ref(),
-                &c_dag,
-            );
-        })
-    });
-}
-
-fn bench_progpow_optimal_light(c: &mut Criterion) {
-    let builder = NodeCacheBuilder::new(OptimizeFor::Memory, u64::max_value());
-    let tempdir = TempDir::new("").unwrap();
-    let cache = builder.new_cache(tempdir.into_path(), 0);
-    let c_dag = progpow::generate_cdag(cache.as_ref());
-
-    let h = FromHex::from_hex("c9149cc0386e689d789a1c2f3d5d169a61a6218ed30e74414dc736e442ef3d1f").unwrap();
-    let mut hash = [0; 32];
-    hash.copy_from_slice(&h);
-
-    c.bench_function("progpow_optimal_light", move |b| {
-        b.iter(|| {
-            progpow::progpow(
-                hash,
-                0,
-                0,
-                cache.as_ref(),
-                &c_dag,
-            );
-        })
-    });
-}
-
-fn bench_keccak_f800_long(c: &mut Criterion) {
-    c.bench_function("keccak_f800_long(0, 0, 0)", |b| {
-        b.iter(|| progpow::keccak_f800_long([0; 32], 0, [0; 8]))
-    });
-}
-
-criterion_group!(benches,
-    bench_hashimoto_light,
-    bench_progpow_light,
-    bench_progpow_optimal_light,
-    bench_keccak_f800_long,
-);
-criterion_main!(benches);
```
```diff
@@ -1,86 +0,0 @@
-[
-  [
-    0,
-    "0000000000000000000000000000000000000000000000000000000000000000",
-    "0000000000000000",
-    "faeb1be51075b03a4ff44b335067951ead07a3b078539ace76fd56fc410557a3",
-    "63155f732f2bf556967f906155b510c917e48e99685ead76ea83f4eca03ab12b"
-  ],
-  [
-    49,
-    "63155f732f2bf556967f906155b510c917e48e99685ead76ea83f4eca03ab12b",
-    "0000000006ff2c47",
-    "c789c1180f890ec555ff42042913465481e8e6bc512cb981e1c1108dc3f2227d",
-    "9e7248f20914913a73d80a70174c331b1d34f260535ac3631d770e656b5dd922"
-  ],
-  [
-    50,
-    "9e7248f20914913a73d80a70174c331b1d34f260535ac3631d770e656b5dd922",
-    "00000000076e482e",
-    "c7340542c2a06b3a7dc7222635f7cd402abf8b528ae971ddac6bbe2b0c7cb518",
-    "de37e1824c86d35d154cf65a88de6d9286aec4f7f10c3fc9f0fa1bcc2687188d"
-  ],
-  [
-    99,
-    "de37e1824c86d35d154cf65a88de6d9286aec4f7f10c3fc9f0fa1bcc2687188d",
-    "000000003917afab",
-    "f5e60b2c5bfddd136167a30cbc3c8dbdbd15a512257dee7964e0bc6daa9f8ba7",
-    "ac7b55e801511b77e11d52e9599206101550144525b5679f2dab19386f23dcce"
-  ],
-  [
-    29950,
-    "ac7b55e801511b77e11d52e9599206101550144525b5679f2dab19386f23dcce",
-    "005d409dbc23a62a",
-    "07393d15805eb08ee6fc6cb3ad4ad1010533bd0ff92d6006850246829f18fd6e",
-    "e43d7e0bdc8a4a3f6e291a5ed790b9fa1a0948a2b9e33c844888690847de19f5"
-  ],
-  [
-    29999,
-    "e43d7e0bdc8a4a3f6e291a5ed790b9fa1a0948a2b9e33c844888690847de19f5",
-    "005db5fa4c2a3d03",
-    "7551bddf977491da2f6cfc1679299544b23483e8f8ee0931c4c16a796558a0b8",
-    "d34519f72c97cae8892c277776259db3320820cb5279a299d0ef1e155e5c6454"
-  ],
-  [
-    30000,
-    "d34519f72c97cae8892c277776259db3320820cb5279a299d0ef1e155e5c6454",
-    "005db8607994ff30",
-    "f1c2c7c32266af9635462e6ce1c98ebe4e7e3ecab7a38aaabfbf2e731e0fbff4",
-    "8b6ce5da0b06d18db7bd8492d9e5717f8b53e7e098d9fef7886d58a6e913ef64"
-  ],
-  [
-    30049,
-    "8b6ce5da0b06d18db7bd8492d9e5717f8b53e7e098d9fef7886d58a6e913ef64",
-    "005e2e215a8ca2e7",
-    "57fe6a9fbf920b4e91deeb66cb0efa971e08229d1a160330e08da54af0689add",
-    "c2c46173481b9ced61123d2e293b42ede5a1b323210eb2a684df0874ffe09047"
-  ],
-  [
-    30050,
-    "c2c46173481b9ced61123d2e293b42ede5a1b323210eb2a684df0874ffe09047",
-    "005e30899481055e",
-    "ba30c61cc5a2c74a5ecaf505965140a08f24a296d687e78720f0b48baf712f2d",
-    "ea42197eb2ba79c63cb5e655b8b1f612c5f08aae1a49ff236795a3516d87bc71"
-  ],
-  [
-    30099,
-    "ea42197eb2ba79c63cb5e655b8b1f612c5f08aae1a49ff236795a3516d87bc71",
-    "005ea6aef136f88b",
-    "cfd5e46048cd133d40f261fe8704e51d3f497fc14203ac6a9ef6a0841780b1cd",
-    "49e15ba4bf501ce8fe8876101c808e24c69a859be15de554bf85dbc095491bd6"
-  ],
-  [
-    59950,
-    "49e15ba4bf501ce8fe8876101c808e24c69a859be15de554bf85dbc095491bd6",
-    "02ebe0503bd7b1da",
-    "21511fbaa31fb9f5fc4998a754e97b3083a866f4de86fa7500a633346f56d773",
-    "f5c50ba5c0d6210ddb16250ec3efda178de857b2b1703d8d5403bd0f848e19cf"
-  ],
-  [
-    59999,
-    "f5c50ba5c0d6210ddb16250ec3efda178de857b2b1703d8d5403bd0f848e19cf",
-    "02edb6275bd221e3",
-    "653eda37d337e39d311d22be9bbd3458d3abee4e643bee4a7280a6d08106ef98",
-    "341562d10d4afb706ec2c8d5537cb0c810de02b4ebb0a0eea5ae335af6fb2e88"
-  ]
-]
```
```diff
@@ -69,7 +69,6 @@ pub struct NodeCacheBuilder {
     // TODO: Remove this locking and just use an `Rc`?
     seedhash: Arc<Mutex<SeedHashCompute>>,
     optimize_for: OptimizeFor,
-    progpow_transition: u64,
 }
 
 // TODO: Abstract the "optimize for" logic
@@ -83,18 +82,17 @@ pub struct NodeCache {
 
 impl NodeCacheBuilder {
     pub fn light(&self, cache_dir: &Path, block_number: u64) -> Light {
-        Light::new_with_builder(self, cache_dir, block_number, self.progpow_transition)
+        Light::new_with_builder(self, cache_dir, block_number)
     }
 
     pub fn light_from_file(&self, cache_dir: &Path, block_number: u64) -> io::Result<Light> {
-        Light::from_file_with_builder(self, cache_dir, block_number, self.progpow_transition)
+        Light::from_file_with_builder(self, cache_dir, block_number)
     }
 
-    pub fn new<T: Into<Option<OptimizeFor>>>(optimize_for: T, progpow_transition: u64) -> Self {
+    pub fn new<T: Into<Option<OptimizeFor>>>(optimize_for: T) -> Self {
         NodeCacheBuilder {
             seedhash: Arc::new(Mutex::new(SeedHashCompute::default())),
             optimize_for: optimize_for.into().unwrap_or_default(),
-            progpow_transition
         }
     }
 
```
```diff
@@ -21,7 +21,6 @@
 
 use keccak::{keccak_512, keccak_256, H256};
 use cache::{NodeCache, NodeCacheBuilder};
-use progpow::{CDag, generate_cdag, progpow, keccak_f800_short, keccak_f800_long};
 use seed_compute::SeedHashCompute;
 use shared::*;
 use std::io;
@@ -31,7 +30,7 @@ use std::path::Path;
 
 const MIX_WORDS: usize = ETHASH_MIX_BYTES / 4;
 const MIX_NODES: usize = MIX_WORDS / NODE_WORDS;
-pub const FNV_PRIME: u32 = 0x01000193;
+const FNV_PRIME: u32 = 0x01000193;
 
 /// Computation result
 pub struct ProofOfWork {
@@ -41,15 +40,9 @@ pub struct ProofOfWork {
     pub mix_hash: H256,
 }
 
-enum Algorithm {
-    Hashimoto,
-    Progpow(Box<CDag>),
-}
-
 pub struct Light {
     block_number: u64,
     cache: NodeCache,
-    algorithm: Algorithm,
 }
 
 /// Light cache structure
@@ -58,55 +51,32 @@ impl Light {
         builder: &NodeCacheBuilder,
         cache_dir: &Path,
         block_number: u64,
-        progpow_transition: u64,
     ) -> Self {
         let cache = builder.new_cache(cache_dir.to_path_buf(), block_number);
 
-        let algorithm = if block_number >= progpow_transition {
-            Algorithm::Progpow(Box::new(generate_cdag(cache.as_ref())))
-        } else {
-            Algorithm::Hashimoto
-        };
-
-        Light { block_number, cache, algorithm }
+        Light {
+            block_number: block_number,
+            cache: cache,
+        }
     }
 
     /// Calculate the light boundary data
     /// `header_hash` - The header hash to pack into the mix
     /// `nonce` - The nonce to pack into the mix
-    pub fn compute(&self, header_hash: &H256, nonce: u64, block_number: u64) -> ProofOfWork {
-        match self.algorithm {
-            Algorithm::Progpow(ref c_dag) => {
-                let (value, mix_hash) = progpow(
-                    *header_hash,
-                    nonce,
-                    block_number,
-                    self.cache.as_ref(),
-                    c_dag,
-                );
-
-                ProofOfWork { value, mix_hash }
-            },
-            Algorithm::Hashimoto => light_compute(self, header_hash, nonce),
-        }
+    pub fn compute(&self, header_hash: &H256, nonce: u64) -> ProofOfWork {
+        light_compute(self, header_hash, nonce)
     }
 
     pub fn from_file_with_builder(
         builder: &NodeCacheBuilder,
         cache_dir: &Path,
         block_number: u64,
-        progpow_transition: u64,
     ) -> io::Result<Self> {
         let cache = builder.from_file(cache_dir.to_path_buf(), block_number)?;
-
-        let algorithm = if block_number >= progpow_transition {
-            Algorithm::Progpow(Box::new(generate_cdag(cache.as_ref())))
-        } else {
-            Algorithm::Hashimoto
-        };
-
-        Ok(Light { block_number, cache, algorithm })
+        Ok(Light {
+            block_number: block_number,
+            cache: cache,
+        })
     }
 
     pub fn to_file(&mut self) -> io::Result<&Path> {
@@ -129,32 +99,27 @@ fn fnv_hash(x: u32, y: u32) -> u32 {
 /// `nonce` The block's nonce
 /// `mix_hash` The mix digest hash
 /// Boundary recovered from mix hash
-pub fn quick_get_difficulty(header_hash: &H256, nonce: u64, mix_hash: &H256, progpow: bool) -> H256 {
+pub fn quick_get_difficulty(header_hash: &H256, nonce: u64, mix_hash: &H256) -> H256 {
     unsafe {
-        if progpow {
-            let seed = keccak_f800_short(*header_hash, nonce, [0u32; 8]);
-            keccak_f800_long(*header_hash, seed, mem::transmute(*mix_hash))
-        } else {
-            // This is safe - the `keccak_512` call below reads the first 40 bytes (which we explicitly set
-            // with two `copy_nonoverlapping` calls) but writes the first 64, and then we explicitly write
-            // the next 32 bytes before we read the whole thing with `keccak_256`.
-            //
-            // This cannot be elided by the compiler as it doesn't know the implementation of
-            // `keccak_512`.
-            let mut buf: [u8; 64 + 32] = mem::uninitialized();
+        // This is safe - the `keccak_512` call below reads the first 40 bytes (which we explicitly set
+        // with two `copy_nonoverlapping` calls) but writes the first 64, and then we explicitly write
+        // the next 32 bytes before we read the whole thing with `keccak_256`.
+        //
+        // This cannot be elided by the compiler as it doesn't know the implementation of
+        // `keccak_512`.
+        let mut buf: [u8; 64 + 32] = mem::uninitialized();
 
-            ptr::copy_nonoverlapping(header_hash.as_ptr(), buf.as_mut_ptr(), 32);
-            ptr::copy_nonoverlapping(&nonce as *const u64 as *const u8, buf[32..].as_mut_ptr(), 8);
+        ptr::copy_nonoverlapping(header_hash.as_ptr(), buf.as_mut_ptr(), 32);
+        ptr::copy_nonoverlapping(&nonce as *const u64 as *const u8, buf[32..].as_mut_ptr(), 8);
 
-            keccak_512::unchecked(buf.as_mut_ptr(), 64, buf.as_ptr(), 40);
-            ptr::copy_nonoverlapping(mix_hash.as_ptr(), buf[64..].as_mut_ptr(), 32);
+        keccak_512::unchecked(buf.as_mut_ptr(), 64, buf.as_ptr(), 40);
+        ptr::copy_nonoverlapping(mix_hash.as_ptr(), buf[64..].as_mut_ptr(), 32);
 
-            // This is initialized in `keccak_256`
-            let mut hash: [u8; 32] = mem::uninitialized();
-            keccak_256::unchecked(hash.as_mut_ptr(), hash.len(), buf.as_ptr(), buf.len());
+        // This is initialized in `keccak_256`
+        let mut hash: [u8; 32] = mem::uninitialized();
+        keccak_256::unchecked(hash.as_mut_ptr(), hash.len(), buf.as_ptr(), buf.len());
 
-            hash
-        }
+        hash
     }
 }
 
@@ -307,7 +272,7 @@ fn hash_compute(light: &Light, full_size: usize, header_hash: &H256, nonce: u64)
     // We overwrite the second half since `keccak_256` has an internal buffer and so allows
     // overlapping arrays as input.
     let write_ptr: *mut u8 = &mut buf.compress_bytes as *mut [u8; 32] as *mut u8;
     unsafe {
         keccak_256::unchecked(
             write_ptr,
             buf.compress_bytes.len(),
@@ -322,7 +287,7 @@ fn hash_compute(light: &Light, full_size: usize, header_hash: &H256, nonce: u64)
 }
 
 // TODO: Use the `simd` crate
-pub fn calculate_dag_item(node_index: u32, cache: &[Node]) -> Node {
+fn calculate_dag_item(node_index: u32, cache: &[Node]) -> Node {
     let num_parent_nodes = cache.len();
     let mut ret = cache[node_index as usize % num_parent_nodes].clone();
     ret.as_words_mut()[0] ^= node_index;
@@ -396,13 +361,13 @@ mod test {
             0x4a, 0x8e, 0x95, 0x69, 0xef, 0xc7, 0xd7, 0x1b, 0x33, 0x35, 0xdf, 0x36, 0x8c, 0x9a,
             0xe9, 0x7e, 0x53, 0x84,
         ];
-        assert_eq!(quick_get_difficulty(&hash, nonce, &mix_hash, false)[..], boundary_good[..]);
+        assert_eq!(quick_get_difficulty(&hash, nonce, &mix_hash)[..], boundary_good[..]);
         let boundary_bad = [
             0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x3a, 0x9b, 0x6c, 0x69, 0xbc, 0x2c, 0xe2, 0xa2,
             0x4a, 0x8e, 0x95, 0x69, 0xef, 0xc7, 0xd7, 0x1b, 0x33, 0x35, 0xdf, 0x36, 0x8c, 0x9a,
             0xe9, 0x7e, 0x53, 0x84,
         ];
-        assert!(quick_get_difficulty(&hash, nonce, &mix_hash, false)[..] != boundary_bad[..]);
+        assert!(quick_get_difficulty(&hash, nonce, &mix_hash)[..] != boundary_bad[..]);
     }
 
     #[test]
@@ -426,7 +391,7 @@ mod test {
 
         let tempdir = TempDir::new("").unwrap();
         // difficulty = 0x085657254bd9u64;
-        let light = NodeCacheBuilder::new(None, u64::max_value()).light(tempdir.path(), 486382);
+        let light = NodeCacheBuilder::new(None).light(tempdir.path(), 486382);
         let result = light_compute(&light, &hash, nonce);
         assert_eq!(result.mix_hash[..], mix_hash[..]);
         assert_eq!(result.value[..], boundary[..]);
@@ -435,7 +400,7 @@ mod test {
     #[test]
     fn test_drop_old_data() {
         let tempdir = TempDir::new("").unwrap();
-        let builder = NodeCacheBuilder::new(None, u64::max_value());
+        let builder = NodeCacheBuilder::new(None);
         let first = builder.light(tempdir.path(), 0).to_file().unwrap().to_owned();
 
         let second = builder.light(tempdir.path(), ETHASH_EPOCH_LENGTH).to_file().unwrap().to_owned();
```
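The `FNV_PRIME` constant in the diff above is the multiplier of ethash's FNV-1-style mixing step (the `fnv_hash` function visible in one of the hunk headers). A minimal self-contained sketch of just that one-line mix, not the full ethash pipeline:

```rust
const FNV_PRIME: u32 = 0x01000193;

// FNV-1-style mix used throughout ethash: multiply, then XOR.
// Wrapping multiplication matches the intended C-style overflow semantics.
fn fnv_hash(x: u32, y: u32) -> u32 {
    x.wrapping_mul(FNV_PRIME) ^ y
}

fn main() {
    // Multiplying 1 by the prime and XOR-ing with 0 yields the prime itself.
    assert_eq!(fnv_hash(1, 0), 0x0100_0193);
    println!("{:#010x}", fnv_hash(2, 3));
}
```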
@@ -25,30 +25,15 @@ extern crate crunchy;
|
|||||||
#[macro_use]
|
#[macro_use]
|
||||||
extern crate log;
|
extern crate log;
|
||||||
|
|
||||||
#[cfg(test)]
|
|
||||||
extern crate rustc_hex;
|
|
||||||
|
|
||||||
#[cfg(test)]
|
|
||||||
extern crate serde_json;
|
|
||||||
|
|
||||||
#[cfg(test)]
|
#[cfg(test)]
|
||||||
extern crate tempdir;
|
extern crate tempdir;
|
||||||
|
|
||||||
#[cfg(feature = "bench")]
|
|
||||||
pub mod compute;
|
|
||||||
#[cfg(not(feature = "bench"))]
|
|
||||||
mod compute;
|
mod compute;
|
||||||
|
|
||||||
mod seed_compute;
|
mod seed_compute;
|
||||||
mod cache;
|
mod cache;
|
||||||
mod keccak;
|
mod keccak;
|
||||||
mod shared;
|
mod shared;
|
||||||
|
|
||||||
#[cfg(feature = "bench")]
|
|
||||||
pub mod progpow;
|
|
||||||
#[cfg(not(feature = "bench"))]
|
|
||||||
mod progpow;
|
|
||||||
|
|
||||||
pub use cache::{NodeCacheBuilder, OptimizeFor};
|
pub use cache::{NodeCacheBuilder, OptimizeFor};
|
||||||
pub use compute::{ProofOfWork, quick_get_difficulty, slow_hash_block_number};
|
pub use compute::{ProofOfWork, quick_get_difficulty, slow_hash_block_number};
|
||||||
use compute::Light;
|
use compute::Light;
|
||||||
@@ -74,16 +59,14 @@ pub struct EthashManager {
|
|||||||
nodecache_builder: NodeCacheBuilder,
|
nodecache_builder: NodeCacheBuilder,
|
||||||
cache: Mutex<LightCache>,
|
cache: Mutex<LightCache>,
|
||||||
cache_dir: PathBuf,
|
cache_dir: PathBuf,
|
||||||
progpow_transition: u64,
|
|
||||||
}
|
}
|
||||||
|
|
||||||
impl EthashManager {
|
impl EthashManager {
|
||||||
/// Create a new new instance of ethash manager
|
/// Create a new new instance of ethash manager
|
||||||
pub fn new<T: Into<Option<OptimizeFor>>>(cache_dir: &Path, optimize_for: T, progpow_transition: u64) -> EthashManager {
|
pub fn new<T: Into<Option<OptimizeFor>>>(cache_dir: &Path, optimize_for: T) -> EthashManager {
|
||||||
EthashManager {
|
EthashManager {
|
||||||
cache_dir: cache_dir.to_path_buf(),
|
cache_dir: cache_dir.to_path_buf(),
|
||||||
nodecache_builder: NodeCacheBuilder::new(optimize_for.into().unwrap_or_default(), progpow_transition),
|
nodecache_builder: NodeCacheBuilder::new(optimize_for.into().unwrap_or_default()),
|
||||||
progpow_transition: progpow_transition,
|
|
||||||
cache: Mutex::new(LightCache {
|
cache: Mutex::new(LightCache {
|
||||||
recent_epoch: None,
|
recent_epoch: None,
|
||||||
recent: None,
|
recent: None,
|
||||||
@@ -102,33 +85,27 @@ impl EthashManager {
|
|||||||
let epoch = block_number / ETHASH_EPOCH_LENGTH;
|
let epoch = block_number / ETHASH_EPOCH_LENGTH;
|
||||||
let light = {
|
let light = {
|
||||||
let mut lights = self.cache.lock();
|
let mut lights = self.cache.lock();
|
||||||
let light = if block_number == self.progpow_transition {
|
let light = match lights.recent_epoch.clone() {
|
||||||
// we need to regenerate the cache to trigger algorithm change to progpow inside `Light`
|
Some(ref e) if *e == epoch => lights.recent.clone(),
|
||||||
None
|
_ => match lights.prev_epoch.clone() {
|
||||||
} else {
|
Some(e) if e == epoch => {
|
			match lights.recent_epoch.clone() {
				Some(ref e) if *e == epoch => lights.recent.clone(),
				_ => match lights.prev_epoch.clone() {
					Some(e) if e == epoch => {
						// don't swap if recent is newer.
						if lights.recent_epoch > lights.prev_epoch {
							None
						} else {
							// swap
							let t = lights.prev_epoch;
							lights.prev_epoch = lights.recent_epoch;
							lights.recent_epoch = t;

							let t = lights.prev.clone();
							lights.prev = lights.recent.clone();
							lights.recent = t;

							lights.recent.clone()
						}
					}
					_ => None,
				},
			};

		match light {
			None => {
				let light = match self.nodecache_builder.light_from_file(
@@ -155,7 +132,7 @@ impl EthashManager {
					Some(light) => light,
				}
			};
-		light.compute(header_hash, nonce, block_number)
+		light.compute(header_hash, nonce)
	}
}

@@ -187,7 +164,7 @@ fn test_lru() {
	use tempdir::TempDir;

	let tempdir = TempDir::new("").unwrap();
-	let ethash = EthashManager::new(tempdir.path(), None, u64::max_value());
+	let ethash = EthashManager::new(tempdir.path(), None);
	let hash = [0u8; 32];
	ethash.compute_light(1, &hash, 1);
	ethash.compute_light(50000, &hash, 1);
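The `match` above implements a two-slot light cache: the freshest epoch's light sits in `recent`, the runner-up in `prev`, and a hit on `prev` promotes it by swapping the two slots, unless `recent` already holds the newer epoch. A standalone sketch of the same promote-on-hit policy (the `TwoSlot` type and `String` payload are illustrative, not Parity's API):

```rust
// Two-slot cache with promote-on-hit, mirroring the recent/prev swap above.
struct TwoSlot {
    recent_epoch: Option<u64>,
    prev_epoch: Option<u64>,
    recent: Option<String>,
    prev: Option<String>,
}

impl TwoSlot {
    fn get(&mut self, epoch: u64) -> Option<String> {
        match self.recent_epoch {
            Some(e) if e == epoch => self.recent.clone(),
            _ => match self.prev_epoch {
                Some(e) if e == epoch => {
                    // don't swap if recent is newer
                    if self.recent_epoch > self.prev_epoch {
                        None
                    } else {
                        std::mem::swap(&mut self.recent_epoch, &mut self.prev_epoch);
                        std::mem::swap(&mut self.recent, &mut self.prev);
                        self.recent.clone()
                    }
                }
                _ => None,
            },
        }
    }
}

fn main() {
    let mut cache = TwoSlot {
        recent_epoch: Some(1),
        prev_epoch: Some(2),
        recent: Some("light-1".to_string()),
        prev: Some("light-2".to_string()),
    };
    // Hit on `prev` promotes it into `recent`.
    assert_eq!(cache.get(2), Some("light-2".to_string()));
    assert_eq!(cache.recent_epoch, Some(2));
    // A miss on both slots yields None.
    assert_eq!(cache.get(7), None);
}
```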
@@ -1,595 +0,0 @@
// Copyright 2015-2019 Parity Technologies (UK) Ltd.
// This file is part of Parity.

// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.

use compute::{FNV_PRIME, calculate_dag_item};
use keccak::H256;
use shared::{ETHASH_ACCESSES, ETHASH_MIX_BYTES, Node, get_data_size};

const PROGPOW_CACHE_BYTES: usize = 16 * 1024;
const PROGPOW_CACHE_WORDS: usize = PROGPOW_CACHE_BYTES / 4;
const PROGPOW_CNT_CACHE: usize = 12;
const PROGPOW_CNT_MATH: usize = 20;
const PROGPOW_CNT_DAG: usize = ETHASH_ACCESSES;
const PROGPOW_DAG_LOADS: usize = 4;
const PROGPOW_MIX_BYTES: usize = 2 * ETHASH_MIX_BYTES;
const PROGPOW_PERIOD_LENGTH: usize = 50; // blocks per progpow epoch (N)
const PROGPOW_LANES: usize = 16;
const PROGPOW_REGS: usize = 32;

const FNV_HASH: u32 = 0x811c9dc5;

const KECCAKF_RNDC: [u32; 24] = [
	0x00000001, 0x00008082, 0x0000808a, 0x80008000, 0x0000808b, 0x80000001,
	0x80008081, 0x00008009, 0x0000008a, 0x00000088, 0x80008009, 0x8000000a,
	0x8000808b, 0x0000008b, 0x00008089, 0x00008003, 0x00008002, 0x00000080,
	0x0000800a, 0x8000000a, 0x80008081, 0x00008080, 0x80000001, 0x80008008
];

const KECCAKF_ROTC: [u32; 24] = [
	1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 2, 14,
	27, 41, 56, 8, 25, 43, 62, 18, 39, 61, 20, 44
];

const KECCAKF_PILN: [usize; 24] = [
	10, 7, 11, 17, 18, 3, 5, 16, 8, 21, 24, 4,
	15, 23, 19, 13, 12, 2, 20, 14, 22, 9, 6, 1
];

fn keccak_f800_round(st: &mut [u32; 25], r: usize) {
	// Theta
	let mut bc = [0u32; 5];
	for i in 0..bc.len() {
		bc[i] = st[i] ^ st[i + 5] ^ st[i + 10] ^ st[i + 15] ^ st[i + 20];
	}

	for i in 0..bc.len() {
		let t = bc[(i + 4) % 5] ^ bc[(i + 1) % 5].rotate_left(1);
		for j in (0..st.len()).step_by(5) {
			st[j + i] ^= t;
		}
	}

	// Rho Pi
	let mut t = st[1];

	debug_assert_eq!(KECCAKF_ROTC.len(), 24);
	for i in 0..24 {
		let j = KECCAKF_PILN[i];
		bc[0] = st[j];
		st[j] = t.rotate_left(KECCAKF_ROTC[i]);
		t = bc[0];
	}

	// Chi
	for j in (0..st.len()).step_by(5) {
		for i in 0..bc.len() {
			bc[i] = st[j + i];
		}
		for i in 0..bc.len() {
			st[j + i] ^= (!bc[(i + 1) % 5]) & bc[(i + 2) % 5];
		}
	}

	// Iota
	debug_assert!(r < KECCAKF_RNDC.len());
	st[0] ^= KECCAKF_RNDC[r];
}

fn keccak_f800(header_hash: H256, nonce: u64, result: [u32; 8], st: &mut [u32; 25]) {
	for i in 0..8 {
		st[i] = (header_hash[4 * i] as u32) +
			((header_hash[4 * i + 1] as u32) << 8) +
			((header_hash[4 * i + 2] as u32) << 16) +
			((header_hash[4 * i + 3] as u32) << 24);
	}

	st[8] = nonce as u32;
	st[9] = (nonce >> 32) as u32;

	for i in 0..8 {
		st[10 + i] = result[i];
	}

	for r in 0..22 {
		keccak_f800_round(st, r);
	}
}

pub fn keccak_f800_short(header_hash: H256, nonce: u64, result: [u32; 8]) -> u64 {
	let mut st = [0u32; 25];
	keccak_f800(header_hash, nonce, result, &mut st);
	(st[0].swap_bytes() as u64) << 32 | st[1].swap_bytes() as u64
}

pub fn keccak_f800_long(header_hash: H256, nonce: u64, result: [u32; 8]) -> H256 {
	let mut st = [0u32; 25];
	keccak_f800(header_hash, nonce, result, &mut st);

	// NOTE: transmute from `[u32; 8]` to `[u8; 32]`
	unsafe {
		std::mem::transmute(
			[st[0], st[1], st[2], st[3], st[4], st[5], st[6], st[7]]
		)
	}
}

#[inline]
fn fnv1a_hash(h: u32, d: u32) -> u32 {
	(h ^ d).wrapping_mul(FNV_PRIME)
}

#[derive(Clone)]
struct Kiss99 {
	z: u32,
	w: u32,
	jsr: u32,
	jcong: u32,
}

impl Kiss99 {
	fn new(z: u32, w: u32, jsr: u32, jcong: u32) -> Kiss99 {
		Kiss99 { z, w, jsr, jcong }
	}

	#[inline]
	fn next_u32(&mut self) -> u32 {
		self.z = 36969u32.wrapping_mul(self.z & 65535).wrapping_add(self.z >> 16);
		self.w = 18000u32.wrapping_mul(self.w & 65535).wrapping_add(self.w >> 16);
		let mwc = (self.z << 16).wrapping_add(self.w);
		self.jsr ^= self.jsr << 17;
		self.jsr ^= self.jsr >> 13;
		self.jsr ^= self.jsr << 5;
		self.jcong = 69069u32.wrapping_mul(self.jcong).wrapping_add(1234567);

		(mwc ^ self.jcong).wrapping_add(self.jsr)
	}
}
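`Kiss99` is Marsaglia's KISS generator: two 16-bit multiply-with-carry streams, an xorshift, and a linear congruential generator, combined on every call. Because the stream is fully determined by the four seed words, any lane that derives the same `(z, w, jsr, jcong)` replays exactly the same sequence. A self-contained copy of the step function (duplicated here so the sketch compiles without the crate's modules):

```rust
// Standalone copy of the KISS99 step used by the deleted progpow module.
#[derive(Clone)]
struct Kiss99 {
    z: u32,
    w: u32,
    jsr: u32,
    jcong: u32,
}

impl Kiss99 {
    fn next_u32(&mut self) -> u32 {
        // Two 16-bit multiply-with-carry generators.
        self.z = 36969u32.wrapping_mul(self.z & 65535).wrapping_add(self.z >> 16);
        self.w = 18000u32.wrapping_mul(self.w & 65535).wrapping_add(self.w >> 16);
        let mwc = (self.z << 16).wrapping_add(self.w);
        // Xorshift.
        self.jsr ^= self.jsr << 17;
        self.jsr ^= self.jsr >> 13;
        self.jsr ^= self.jsr << 5;
        // Linear congruential generator.
        self.jcong = 69069u32.wrapping_mul(self.jcong).wrapping_add(1234567);
        (mwc ^ self.jcong).wrapping_add(self.jsr)
    }
}

fn main() {
    let mut a = Kiss99 { z: 1, w: 2, jsr: 3, jcong: 4 };
    let mut b = a.clone();
    // Identical seed words replay an identical stream.
    for _ in 0..1000 {
        assert_eq!(a.next_u32(), b.next_u32());
    }
}
```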
fn fill_mix(seed: u64, lane_id: u32) -> [u32; PROGPOW_REGS] {
	// Use FNV to expand the per-warp seed to per-lane
	// Use KISS to expand the per-lane seed to fill mix
	let z = fnv1a_hash(FNV_HASH, seed as u32);
	let w = fnv1a_hash(z, (seed >> 32) as u32);
	let jsr = fnv1a_hash(w, lane_id);
	let jcong = fnv1a_hash(jsr, lane_id);

	let mut rnd = Kiss99::new(z, w, jsr, jcong);

	let mut mix = [0; PROGPOW_REGS];

	debug_assert_eq!(PROGPOW_REGS, 32);
	for i in 0..32 {
		mix[i] = rnd.next_u32();
	}

	mix
}

// Merge new data from b into the value in a. Assuming A has high entropy only
// do ops that retain entropy even if B is low entropy (IE don't do A&B)
fn merge(a: u32, b: u32, r: u32) -> u32 {
	match r % 4 {
		0 => a.wrapping_mul(33).wrapping_add(b),
		1 => (a ^ b).wrapping_mul(33),
		2 => a.rotate_left(((r >> 16) % 31) + 1) ^ b,
		_ => a.rotate_right(((r >> 16) % 31) + 1) ^ b,
	}
}
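`merge` selects one of four entropy-preserving combiners by `r % 4`; the rotate cases take their shift amount from bits 16 and up of `r`, and `((r >> 16) % 31) + 1` is never zero, so the rotation always moves bits. The expected values below are taken from the module's own `test_random_merge` vectors:

```rust
// Standalone copy of merge() for illustration; vectors from test_random_merge.
fn merge(a: u32, b: u32, r: u32) -> u32 {
    match r % 4 {
        0 => a.wrapping_mul(33).wrapping_add(b),
        1 => (a ^ b).wrapping_mul(33),
        2 => a.rotate_left(((r >> 16) % 31) + 1) ^ b,
        _ => a.rotate_right(((r >> 16) % 31) + 1) ^ b,
    }
}

fn main() {
    // r = 0 selects a * 33 + b.
    assert_eq!(merge(1000000, 101, 0), 33000101);
    // r = 1 selects (a ^ b) * 33.
    assert_eq!(merge(2000000, 102, 1), 66003366);
}
```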
fn math(a: u32, b: u32, r: u32) -> u32 {
	match r % 11 {
		0 => a.wrapping_add(b),
		1 => a.wrapping_mul(b),
		2 => ((a as u64).wrapping_mul(b as u64) >> 32) as u32,
		3 => a.min(b),
		4 => a.rotate_left(b),
		5 => a.rotate_right(b),
		6 => a & b,
		7 => a | b,
		8 => a ^ b,
		9 => a.leading_zeros() + b.leading_zeros(),
		_ => a.count_ones() + b.count_ones(),
	}
}
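`math` dispatches over eleven 32-bit operations by `r % 11`; every arithmetic case wraps, and Rust's `rotate_left`/`rotate_right` reduce the amount modulo 32. The expected values below come from the module's own `test_random_math` vectors:

```rust
// Standalone copy of math() for illustration; vectors from test_random_math.
fn math(a: u32, b: u32, r: u32) -> u32 {
    match r % 11 {
        0 => a.wrapping_add(b),
        1 => a.wrapping_mul(b),
        2 => ((a as u64).wrapping_mul(b as u64) >> 32) as u32,
        3 => a.min(b),
        4 => a.rotate_left(b),
        5 => a.rotate_right(b),
        6 => a & b,
        7 => a | b,
        8 => a ^ b,
        9 => a.leading_zeros() + b.leading_zeros(),
        _ => a.count_ones() + b.count_ones(),
    }
}

fn main() {
    assert_eq!(math(20, 22, 0), 42); // wrapping add
    assert_eq!(math(70000, 80000, 1), 1305032704); // wrapping mul, low 32 bits
    assert_eq!(math(3, 10000, 4), 196608); // rotate_left: 10000 % 32 == 16
}
```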
fn progpow_init(seed: u64) -> (Kiss99, [u32; PROGPOW_REGS], [u32; PROGPOW_REGS]) {
	let z = fnv1a_hash(FNV_HASH, seed as u32);
	let w = fnv1a_hash(z, (seed >> 32) as u32);
	let jsr = fnv1a_hash(w, seed as u32);
	let jcong = fnv1a_hash(jsr, (seed >> 32) as u32);

	let mut rnd = Kiss99::new(z, w, jsr, jcong);

	// Create a random sequence of mix destinations for merge() and mix sources
	// for cache reads guarantees every destination merged once and guarantees
	// no duplicate cache reads, which could be optimized away. Uses
	// Fisher-Yates shuffle.
	let mut mix_seq_dst = [0u32; PROGPOW_REGS];
	let mut mix_seq_cache = [0u32; PROGPOW_REGS];
	for i in 0..mix_seq_dst.len() {
		mix_seq_dst[i] = i as u32;
		mix_seq_cache[i] = i as u32;
	}

	for i in (1..mix_seq_dst.len()).rev() {
		let j = rnd.next_u32() as usize % (i + 1);
		mix_seq_dst.swap(i, j);

		let j = rnd.next_u32() as usize % (i + 1);
		mix_seq_cache.swap(i, j);
	}

	(rnd, mix_seq_dst, mix_seq_cache)
}
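`progpow_init` derives its two index sequences with a Fisher-Yates shuffle driven by the seeded KISS99 stream, so a given period always yields the same permutation: every register becomes a merge destination exactly once per pass, and no cache read is duplicated. A minimal sketch of a deterministic Fisher-Yates shuffle; the toy LCG standing in for KISS99 is an assumption of this sketch, not the module's generator:

```rust
// Deterministic Fisher-Yates shuffle: identical RNG state => identical permutation.
fn shuffle(seq: &mut [u32], rnd: &mut impl FnMut() -> u32) {
    for i in (1..seq.len()).rev() {
        let j = rnd() as usize % (i + 1);
        seq.swap(i, j);
    }
}

fn main() {
    // Toy LCG in place of KISS99 (any deterministic stream behaves the same way).
    let mk_rng = |mut s: u32| move || {
        s = s.wrapping_mul(1664525).wrapping_add(1013904223);
        s
    };

    let mut a: Vec<u32> = (0..32).collect();
    let mut b: Vec<u32> = (0..32).collect();
    shuffle(&mut a, &mut mk_rng(7));
    shuffle(&mut b, &mut mk_rng(7));

    // Same seed, same permutation.
    assert_eq!(a, b);

    // Still a permutation of 0..32: each index appears exactly once.
    let mut sorted = a.clone();
    sorted.sort();
    assert_eq!(sorted, (0..32).collect::<Vec<u32>>());
}
```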
pub type CDag = [u32; PROGPOW_CACHE_WORDS];

fn progpow_loop(
	seed: u64,
	loop_: usize,
	mix: &mut [[u32; PROGPOW_REGS]; PROGPOW_LANES],
	cache: &[Node],
	c_dag: &CDag,
	data_size: usize,
) {
	// All lanes share a base address for the global load. Global offset uses
	// mix[0] to guarantee it depends on the load result.
	let g_offset = mix[loop_ % PROGPOW_LANES][0] as usize %
		(64 * data_size / (PROGPOW_LANES * PROGPOW_DAG_LOADS));

	// 256 bytes of dag data
	let mut dag_item = [0u32; 64];

	// Fetch DAG nodes (64 bytes each)
	for l in 0..PROGPOW_DAG_LOADS {
		let index = g_offset * PROGPOW_LANES * PROGPOW_DAG_LOADS + l * 16;
		let node = calculate_dag_item(index as u32 / 16, cache);
		dag_item[l * 16..(l + 1) * 16].clone_from_slice(node.as_words());
	}

	let (rnd, mix_seq_dst, mix_seq_cache) = progpow_init(seed);

	// Lanes can execute in parallel and will be convergent
	for l in 0..mix.len() {
		let mut rnd = rnd.clone();

		// Initialize the seed and mix destination sequence
		let mut mix_seq_dst_cnt = 0;
		let mut mix_seq_cache_cnt = 0;

		let mut mix_dst = || {
			let res = mix_seq_dst[mix_seq_dst_cnt % PROGPOW_REGS] as usize;
			mix_seq_dst_cnt += 1;
			res
		};
		let mut mix_cache = || {
			let res = mix_seq_cache[mix_seq_cache_cnt % PROGPOW_REGS] as usize;
			mix_seq_cache_cnt += 1;
			res
		};

		for i in 0..PROGPOW_CNT_CACHE.max(PROGPOW_CNT_MATH) {
			if i < PROGPOW_CNT_CACHE {
				// Cached memory access, lanes access random 32-bit locations
				// within the first portion of the DAG
				let offset = mix[l][mix_cache()] as usize % PROGPOW_CACHE_WORDS;
				let data = c_dag[offset];
				let dst = mix_dst();

				mix[l][dst] = merge(mix[l][dst], data, rnd.next_u32());
			}

			if i < PROGPOW_CNT_MATH {
				// Random math
				// Generate 2 unique sources
				let src_rnd = rnd.next_u32() % (PROGPOW_REGS * (PROGPOW_REGS - 1)) as u32;
				let src1 = src_rnd % PROGPOW_REGS as u32; // 0 <= src1 < PROGPOW_REGS
				let mut src2 = src_rnd / PROGPOW_REGS as u32; // 0 <= src2 < PROGPOW_REGS - 1
				if src2 >= src1 {
					src2 += 1; // src2 is now any reg other than src1
				}

				let data = math(mix[l][src1 as usize], mix[l][src2 as usize], rnd.next_u32());
				let dst = mix_dst();

				mix[l][dst] = merge(mix[l][dst], data, rnd.next_u32());
			}
		}

		// Global load to sequential locations
		let mut data_g = [0u32; PROGPOW_DAG_LOADS];
		let index = ((l ^ loop_) % PROGPOW_LANES) * PROGPOW_DAG_LOADS;
		for i in 0..PROGPOW_DAG_LOADS {
			data_g[i] = dag_item[index + i];
		}

		// Consume the global load data at the very end of the loop to allow
		// full latency hiding. Always merge into `mix[0]` to feed the offset
		// calculation.
		mix[l][0] = merge(mix[l][0], data_g[0], rnd.next_u32());
		for i in 1..PROGPOW_DAG_LOADS {
			let dst = mix_dst();
			mix[l][dst] = merge(mix[l][dst], data_g[i], rnd.next_u32());
		}
	}
}

pub fn progpow(
	header_hash: H256,
	nonce: u64,
	block_number: u64,
	cache: &[Node],
	c_dag: &CDag,
) -> (H256, H256) {
	let mut mix = [[0u32; PROGPOW_REGS]; PROGPOW_LANES];
	let mut lane_results = [0u32; PROGPOW_LANES];
	let mut result = [0u32; 8];

	let data_size = get_data_size(block_number) / PROGPOW_MIX_BYTES;

	// NOTE: This assert is required to aid the optimizer elide the non-zero
	// remainder check in `progpow_loop`.
	assert!(data_size > 0);

	// Initialize mix for all lanes
	let seed = keccak_f800_short(header_hash, nonce, result);

	for l in 0..mix.len() {
		mix[l] = fill_mix(seed, l as u32);
	}

	// Execute the randomly generated inner loop
	let period = block_number / PROGPOW_PERIOD_LENGTH as u64;
	for i in 0..PROGPOW_CNT_DAG {
		progpow_loop(
			period,
			i,
			&mut mix,
			cache,
			c_dag,
			data_size,
		);
	}

	// Reduce mix data to a single per-lane result
	for l in 0..lane_results.len() {
		lane_results[l] = FNV_HASH;
		for i in 0..PROGPOW_REGS {
			lane_results[l] = fnv1a_hash(lane_results[l], mix[l][i]);
		}
	}

	// Reduce all lanes to a single 128-bit result
	result = [FNV_HASH; 8];
	for l in 0..PROGPOW_LANES {
		result[l % 8] = fnv1a_hash(result[l % 8], lane_results[l]);
	}

	let digest = keccak_f800_long(header_hash, seed, result);

	// NOTE: transmute from `[u32; 8]` to `[u8; 32]`
	let result = unsafe { ::std::mem::transmute(result) };

	(digest, result)
}

pub fn generate_cdag(cache: &[Node]) -> CDag {
	let mut c_dag = [0u32; PROGPOW_CACHE_WORDS];

	for i in 0..PROGPOW_CACHE_WORDS / 16 {
		let node = calculate_dag_item(i as u32, cache);
		for j in 0..16 {
			c_dag[i * 16 + j] = node.as_words()[j];
		}
	}

	c_dag
}

#[cfg(test)]
mod test {
	use tempdir::TempDir;

	use cache::{NodeCacheBuilder, OptimizeFor};
	use keccak::H256;
	use rustc_hex::FromHex;
	use serde_json::{self, Value};
	use std::collections::VecDeque;
	use super::*;

	fn h256(hex: &str) -> H256 {
		let bytes = FromHex::from_hex(hex).unwrap();
		let mut res = [0; 32];
		res.copy_from_slice(&bytes);
		res
	}

	#[test]
	fn test_cdag() {
		let builder = NodeCacheBuilder::new(OptimizeFor::Memory, u64::max_value());
		let tempdir = TempDir::new("").unwrap();
		let cache = builder.new_cache(tempdir.into_path(), 0);

		let c_dag = generate_cdag(cache.as_ref());

		let expected = vec![
			690150178u32, 1181503948, 2248155602, 2118233073, 2193871115,
			1791778428, 1067701239, 724807309, 530799275, 3480325829, 3899029234,
			1998124059, 2541974622, 1100859971, 1297211151, 3268320000, 2217813733,
			2690422980, 3172863319, 2651064309
		];

		assert_eq!(
			c_dag.iter().take(20).cloned().collect::<Vec<_>>(),
			expected,
		);
	}

	#[test]
	fn test_random_merge() {
		let tests = [
			(1000000u32, 101u32, 33000101u32),
			(2000000, 102, 66003366),
			(3000000, 103, 6000103),
			(4000000, 104, 2000104),
			(1000000, 0, 33000000),
			(2000000, 0, 66000000),
			(3000000, 0, 6000000),
			(4000000, 0, 2000000),
		];

		for (i, &(a, b, expected)) in tests.iter().enumerate() {
			assert_eq!(
				merge(a, b, i as u32),
				expected,
			);
		}
	}

	#[test]
	fn test_random_math() {
		let tests = [
			(20u32, 22u32, 42u32),
			(70000, 80000, 1305032704),
			(70000, 80000, 1),
			(1, 2, 1),
			(3, 10000, 196608),
			(3, 0, 3),
			(3, 6, 2),
			(3, 6, 7),
			(3, 6, 5),
			(0, 0xffffffff, 32),
			(3 << 13, 1 << 5, 3),
			(22, 20, 42),
			(80000, 70000, 1305032704),
			(80000, 70000, 1),
			(2, 1, 1),
			(10000, 3, 80000),
			(0, 3, 0),
			(6, 3, 2),
			(6, 3, 7),
			(6, 3, 5),
			(0, 0xffffffff, 32),
			(3 << 13, 1 << 5, 3),
		];

		for (i, &(a, b, expected)) in tests.iter().enumerate() {
			assert_eq!(
				math(a, b, i as u32),
				expected,
			);
		}
	}

	#[test]
	fn test_keccak_256() {
		let expected = "5dd431e5fbc604f499bfa0232f45f8f142d0ff5178f539e5a7800bf0643697af";
		assert_eq!(
			keccak_f800_long([0; 32], 0, [0; 8]),
			h256(expected),
		);
	}

	#[test]
	fn test_keccak_64() {
		let expected: u64 = 0x5dd431e5fbc604f4;
		assert_eq!(
			keccak_f800_short([0; 32], 0, [0; 8]),
			expected,
		);
	}

	#[test]
	fn test_progpow_hash() {
		let builder = NodeCacheBuilder::new(OptimizeFor::Memory, u64::max_value());
		let tempdir = TempDir::new("").unwrap();
		let cache = builder.new_cache(tempdir.into_path(), 0);
		let c_dag = generate_cdag(cache.as_ref());

		let header_hash = [0; 32];

		let (digest, result) = progpow(
			header_hash,
			0,
			0,
			cache.as_ref(),
			&c_dag,
		);

		let expected_digest = FromHex::from_hex("63155f732f2bf556967f906155b510c917e48e99685ead76ea83f4eca03ab12b").unwrap();
		let expected_result = FromHex::from_hex("faeb1be51075b03a4ff44b335067951ead07a3b078539ace76fd56fc410557a3").unwrap();

		assert_eq!(
			digest.to_vec(),
			expected_digest,
		);

		assert_eq!(
			result.to_vec(),
			expected_result,
		);
	}

	#[test]
	fn test_progpow_testvectors() {
		struct ProgpowTest {
			block_number: u64,
			header_hash: H256,
			nonce: u64,
			mix_hash: H256,
			final_hash: H256,
		}

		let tests: Vec<VecDeque<Value>> =
			serde_json::from_slice(include_bytes!("../res/progpow_testvectors.json")).unwrap();

		let tests: Vec<ProgpowTest> = tests.into_iter().map(|mut test: VecDeque<Value>| {
			assert!(test.len() == 5);

			let block_number: u64 = serde_json::from_value(test.pop_front().unwrap()).unwrap();
			let header_hash: String = serde_json::from_value(test.pop_front().unwrap()).unwrap();
			let nonce: String = serde_json::from_value(test.pop_front().unwrap()).unwrap();
			let mix_hash: String = serde_json::from_value(test.pop_front().unwrap()).unwrap();
			let final_hash: String = serde_json::from_value(test.pop_front().unwrap()).unwrap();

			ProgpowTest {
				block_number,
				header_hash: h256(&header_hash),
				nonce: u64::from_str_radix(&nonce, 16).unwrap(),
				mix_hash: h256(&mix_hash),
				final_hash: h256(&final_hash),
			}
		}).collect();

		for test in tests {
			let builder = NodeCacheBuilder::new(OptimizeFor::Memory, u64::max_value());
			let tempdir = TempDir::new("").unwrap();
			let cache = builder.new_cache(tempdir.path().to_owned(), test.block_number);
			let c_dag = generate_cdag(cache.as_ref());

			let (digest, result) = progpow(
				test.header_hash,
				test.nonce,
				test.block_number,
				cache.as_ref(),
				&c_dag,
			);

			assert_eq!(digest, test.final_hash);
			assert_eq!(result, test.mix_hash);
		}
	}
}
@@ -29,8 +29,9 @@ ethcore-stratum = { path = "../miner/stratum", optional = true }
 ethereum-types = "0.4"
 ethjson = { path = "../json" }
 ethkey = { path = "../accounts/ethkey" }
+ethstore = { path = "../accounts/ethstore" }
 evm = { path = "evm" }
-hash-db = "0.11.0"
+hashdb = "0.3.0"
 heapsize = "0.4"
 itertools = "0.5"
 journaldb = { path = "../util/journaldb" }
@@ -45,15 +46,15 @@ log = "0.4"
 lru-cache = "0.1"
 macros = { path = "../util/macros" }
 memory-cache = { path = "../util/memory-cache" }
-memory-db = "0.11.0"
+memorydb = "0.3.0"
 num = { version = "0.1", default-features = false, features = ["bigint"] }
 num_cpus = "1.2"
 parity-bytes = "0.1"
-parity-crypto = "0.3.0"
+parity-crypto = "0.2"
 parity-machine = { path = "../machine" }
 parity-snappy = "0.1"
 parking_lot = "0.7"
-trie-db = "0.11.0"
+patricia-trie = "0.3.0"
 patricia-trie-ethereum = { path = "../util/patricia-trie-ethereum" }
 rand = "0.4"
 rayon = "1.0"
@@ -72,14 +73,17 @@ using_queue = { path = "../miner/using-queue" }
 vm = { path = "vm" }
 wasm = { path = "wasm" }
 
+[target.'cfg(any(target_os = "linux", target_os = "macos", target_os = "windows", target_os = "android"))'.dependencies]
+hardware-wallet = { path = "../accounts/hw" }
+
+[target.'cfg(not(any(target_os = "linux", target_os = "macos", target_os = "windows", target_os = "android")))'.dependencies]
+fake-hardware-wallet = { path = "../accounts/fake-hardware-wallet" }
+
 [dev-dependencies]
 blooms-db = { path = "../util/blooms-db" }
 criterion = "0.2"
 env_logger = "0.5"
-ethcore-accounts = { path = "../accounts" }
-fetch = { path = "../util/fetch" }
 kvdb-rocksdb = "0.1.3"
-parity-runtime = { path = "../util/runtime" }
 rlp_compress = { path = "../util/rlp-compress" }
 tempdir = "0.3"
 trie-standardmap = "0.1"
@@ -108,7 +112,7 @@ slow-blocks = []
 # Run JSON consensus tests.
 json-tests = ["env_logger", "test-helpers", "to-pod-full"]
 # Skip JSON consensus tests with pending issues.
-ci-skip-tests = []
+ci-skip-issue = []
 # Run memory/cpu heavy tests.
 test-heavy = []
 # Compile test helpers
@@ -668,21 +668,6 @@ impl BlockChain {
 		self.db.key_value().read_with_cache(db::COL_EXTRA, &self.block_details, parent).map_or(false, |d| d.children.contains(hash))
 	}
 
-	/// fetches the list of blocks from best block to n, and n's parent hash
-	/// where n > 0
-	pub fn block_headers_from_best_block(&self, n: u32) -> Option<(Vec<encoded::Header>, H256)> {
-		let mut blocks = Vec::with_capacity(n as usize);
-		let mut hash = self.best_block_hash();
-
-		for _ in 0..n {
-			let current_hash = self.block_header_data(&hash)?;
-			hash = current_hash.parent_hash();
-			blocks.push(current_hash);
-		}
-
-		Some((blocks, hash))
-	}
-
 	/// Returns a tree route between `from` and `to`, which is a tuple of:
 	///
 	/// - a vector of hashes of all blocks, ordered from `from` to `to`.
@@ -33,5 +33,5 @@ pub use self::cache::CacheSize;
 pub use self::config::Config;
 pub use self::import_route::ImportRoute;
 pub use self::update::ExtrasInsert;
-pub use ethcore_db::keys::{BlockReceipts, BlockDetails, TransactionAddress, BlockNumberKey};
+pub use ethcore_db::keys::{BlockReceipts, BlockDetails, TransactionAddress};
 pub use common_types::tree_route::TreeRoute;
@@ -14,12 +14,12 @@ ethcore = { path = ".."}
 ethcore-db = { path = "../db" }
 ethcore-blockchain = { path = "../blockchain" }
 ethereum-types = "0.4"
-memory-db = "0.11.0"
+memorydb = "0.3.0"
-trie-db = "0.11.0"
+patricia-trie = "0.3.0"
 patricia-trie-ethereum = { path = "../../util/patricia-trie-ethereum" }
 ethcore-network = { path = "../../util/network" }
 ethcore-io = { path = "../../util/io" }
-hash-db = "0.11.0"
+hashdb = "0.3.0"
 heapsize = "0.4"
 vm = { path = "../vm" }
 fastmap = { path = "../../util/fastmap" }
@@ -41,7 +41,6 @@ triehash-ethereum = { version = "0.2", path = "../../util/triehash-ethereum" }
 kvdb = "0.1"
 memory-cache = { path = "../../util/memory-cache" }
 error-chain = { version = "0.12", default-features = false }
-journaldb = { path = "../../util/journaldb" }
 
 [dev-dependencies]
 ethcore = { path = "..", features = ["test-helpers"] }
@@ -25,11 +25,10 @@
|
|||||||
|
|
||||||
use common_types::ids::BlockId;
|
use common_types::ids::BlockId;
|
||||||
use ethereum_types::{H256, U256};
|
use ethereum_types::{H256, U256};
|
||||||
use hash_db::HashDB;
|
use hashdb::HashDB;
|
||||||
use keccak_hasher::KeccakHasher;
|
use keccak_hasher::KeccakHasher;
|
||||||
use kvdb::DBValue;
|
use kvdb::DBValue;
|
||||||
use memory_db::MemoryDB;
|
use memorydb::MemoryDB;
|
||||||
use journaldb::new_memory_db;
|
|
||||||
use bytes::Bytes;
|
use bytes::Bytes;
|
||||||
use trie::{TrieMut, Trie, Recorder};
|
use trie::{TrieMut, Trie, Recorder};
|
||||||
use ethtrie::{self, TrieDB, TrieDBMut};
|
use ethtrie::{self, TrieDB, TrieDBMut};
|
||||||
@@ -74,8 +73,7 @@ impl<DB: HashDB<KeccakHasher, DBValue>> CHT<DB> {
|
|||||||
if block_to_cht_number(num) != Some(self.number) { return Ok(None) }
|
if block_to_cht_number(num) != Some(self.number) { return Ok(None) }
|
||||||
|
|
||||||
let mut recorder = Recorder::with_depth(from_level);
|
let mut recorder = Recorder::with_depth(from_level);
|
||||||
let db: &HashDB<_,_> = &self.db;
|
let t = TrieDB::new(&self.db, &self.root)?;
|
||||||
let t = TrieDB::new(&db, &self.root)?;
|
|
||||||
t.get_with(&key!(num), &mut recorder)?;
|
t.get_with(&key!(num), &mut recorder)?;
|
||||||
|
|
||||||
Ok(Some(recorder.drain().into_iter().map(|x| x.data).collect()))
|
Ok(Some(recorder.drain().into_iter().map(|x| x.data).collect()))
|
||||||
@@ -98,7 +96,7 @@ pub struct BlockInfo {
|
|||||||
pub fn build<F>(cht_num: u64, mut fetcher: F) -> Option<CHT<MemoryDB<KeccakHasher, DBValue>>>
|
pub fn build<F>(cht_num: u64, mut fetcher: F) -> Option<CHT<MemoryDB<KeccakHasher, DBValue>>>
|
||||||
where F: FnMut(BlockId) -> Option<BlockInfo>
|
where F: FnMut(BlockId) -> Option<BlockInfo>
|
||||||
{
|
{
|
||||||
let mut db = new_memory_db();
|
let mut db = MemoryDB::<KeccakHasher, DBValue>::new();
|
||||||
|
|
||||||
// start from the last block by number and work backwards.
|
// start from the last block by number and work backwards.
|
||||||
let last_num = start_number(cht_num + 1) - 1;
|
let last_num = start_number(cht_num + 1) - 1;
|
||||||
@@ -152,7 +150,7 @@ pub fn compute_root<I>(cht_num: u64, iterable: I) -> Option<H256>
|
|||||||
/// verify the given trie branch and extract the canonical hash and total difficulty.
|
/// verify the given trie branch and extract the canonical hash and total difficulty.
|
||||||
// TODO: better support for partially-checked queries.
|
// TODO: better support for partially-checked queries.
|
||||||
pub fn check_proof(proof: &[Bytes], num: u64, root: H256) -> Option<(H256, U256)> {
|
pub fn check_proof(proof: &[Bytes], num: u64, root: H256) -> Option<(H256, U256)> {
|
||||||
let mut db = new_memory_db();
|
let mut db = MemoryDB::<KeccakHasher, DBValue>::new();
|
||||||
|
|
||||||
for node in proof { db.insert(&node[..]); }
|
for node in proof { db.insert(&node[..]); }
|
||||||
let res = match TrieDB::new(&db, &root) {
|
let res = match TrieDB::new(&db, &root) {
|
||||||
|
|||||||
@@ -116,9 +116,6 @@ pub trait LightChainClient: Send + Sync {
 	/// Query whether a block is known.
 	fn is_known(&self, hash: &H256) -> bool;

-	/// Set the chain via a spec name.
-	fn set_spec_name(&self, new_spec_name: String) -> Result<(), ()>;
-
 	/// Clear the queue.
 	fn clear_queue(&self);

@@ -167,8 +164,6 @@ pub struct Client<T> {
 	listeners: RwLock<Vec<Weak<LightChainNotify>>>,
 	fetcher: T,
 	verify_full: bool,
-	/// A closure to call when we want to restart the client
-	exit_handler: Mutex<Option<Box<Fn(String) + 'static + Send>>>,
 }

 impl<T: ChainDataFetcher> Client<T> {
@@ -195,7 +190,6 @@ impl<T: ChainDataFetcher> Client<T> {
 			listeners: RwLock::new(vec![]),
 			fetcher,
 			verify_full: config.verify_full,
-			exit_handler: Mutex::new(None),
 		})
 	}

@@ -366,14 +360,6 @@ impl<T: ChainDataFetcher> Client<T> {
 		self.chain.heap_size_of_children()
 	}

-	/// Set a closure to call when the client wants to be restarted.
-	///
-	/// The parameter passed to the callback is the name of the new chain spec to use after
-	/// the restart.
-	pub fn set_exit_handler<F>(&self, f: F) where F: Fn(String) + 'static + Send {
-		*self.exit_handler.lock() = Some(Box::new(f));
-	}
-
 	/// Get a handle to the verification engine.
 	pub fn engine(&self) -> &Arc<EthEngine> {
 		&self.engine
@@ -577,17 +563,6 @@ impl<T: ChainDataFetcher> LightChainClient for Client<T> {
 		Client::engine(self)
 	}

-	fn set_spec_name(&self, new_spec_name: String) -> Result<(), ()> {
-		trace!(target: "mode", "Client::set_spec_name({:?})", new_spec_name);
-		if let Some(ref h) = *self.exit_handler.lock() {
-			(*h)(new_spec_name);
-			Ok(())
-		} else {
-			warn!("Not hypervised; cannot change chain.");
-			Err(())
-		}
-	}
-
 	fn is_known(&self, hash: &H256) -> bool {
 		self.status(hash) == BlockStatus::InChain
 	}

@@ -62,14 +62,14 @@ extern crate ethcore_network as network;
 extern crate parity_bytes as bytes;
 extern crate ethereum_types;
 extern crate ethcore;
-extern crate hash_db;
+extern crate hashdb;
 extern crate heapsize;
 extern crate failsafe;
 extern crate futures;
 extern crate itertools;
 extern crate keccak_hasher;
-extern crate memory_db;
-extern crate trie_db as trie;
+extern crate memorydb;
+extern crate patricia_trie as trie;
 extern crate patricia_trie_ethereum as ethtrie;
 extern crate fastmap;
 extern crate rand;
@@ -92,4 +92,3 @@ extern crate error_chain;
 extern crate kvdb_memorydb;
 #[cfg(test)]
 extern crate tempdir;
-extern crate journaldb;

@@ -30,8 +30,9 @@ use ethcore::state::{self, ProvedExecution};
 use ethereum_types::{H256, U256, Address};
 use ethtrie::{TrieError, TrieDB};
 use hash::{KECCAK_NULL_RLP, KECCAK_EMPTY, KECCAK_EMPTY_LIST_RLP, keccak};
-use hash_db::HashDB;
+use hashdb::HashDB;
 use kvdb::DBValue;
+use memorydb::MemoryDB;
 use parking_lot::Mutex;
 use request::{self as net_request, IncompleteRequest, CompleteRequest, Output, OutputKind, Field};
 use rlp::{RlpStream, Rlp};
@@ -980,7 +981,7 @@ impl Account {
 		let header = self.header.as_ref()?;
 		let state_root = header.state_root();

-		let mut db = journaldb::new_memory_db();
+		let mut db = MemoryDB::new();
 		for node in proof { db.insert(&node[..]); }

 		match TrieDB::new(&db, &state_root).and_then(|t| t.get(&keccak(&self.address)))? {
@@ -1100,6 +1101,7 @@ mod tests {
 	use super::*;
 	use std::time::Duration;
 	use ethereum_types::{H256, Address};
+	use memorydb::MemoryDB;
 	use parking_lot::Mutex;
 	use trie::{Trie, TrieMut};
 	use ethtrie::{SecTrieDB, SecTrieDBMut};
@@ -1279,7 +1281,7 @@ mod tests {
 		use rlp::RlpStream;

 		let mut root = H256::default();
-		let mut db = journaldb::new_memory_db();
+		let mut db = MemoryDB::new();
 		let mut header = Header::new();
 		header.set_number(123_456);
 		header.set_extra_data(b"test_header".to_vec());

@@ -255,78 +255,4 @@ mod tests {
 			hash: Field::BackReference(0, 0),
 		})).unwrap();
 	}
-
-	#[test]
-	fn batch_tx_index_backreference() {
-		let mut builder = Builder::default();
-		builder.push(Request::HeaderProof(IncompleteHeaderProofRequest {
-			num: 100.into(), // header proof puts hash at output 0.
-		})).unwrap();
-		builder.push(Request::TransactionIndex(IncompleteTransactionIndexRequest {
-			hash: Field::BackReference(0, 0),
-		})).unwrap();
-
-		let mut batch = builder.build();
-		batch.requests[1].fill(|_req_idx, _out_idx| Ok(Output::Hash(42.into())));
-
-		assert!(batch.next_complete().is_some());
-		batch.answered += 1;
-		assert!(batch.next_complete().is_some());
-	}
-
-	#[test]
-	#[should_panic]
-	fn batch_tx_index_backreference_wrong_output() {
-		let mut builder = Builder::default();
-		builder.push(Request::HeaderProof(IncompleteHeaderProofRequest {
-			num: 100.into(), // header proof puts hash at output 0.
-		})).unwrap();
-		builder.push(Request::TransactionIndex(IncompleteTransactionIndexRequest {
-			hash: Field::BackReference(0, 0),
-		})).unwrap();
-
-		let mut batch = builder.build();
-		batch.requests[1].fill(|_req_idx, _out_idx| Ok(Output::Number(42)));
-
-		batch.next_complete();
-		batch.answered += 1;
-		batch.next_complete();
-	}
-
-	#[test]
-	fn batch_receipts_backreference() {
-		let mut builder = Builder::default();
-		builder.push(Request::HeaderProof(IncompleteHeaderProofRequest {
-			num: 100.into(), // header proof puts hash at output 0.
-		})).unwrap();
-		builder.push(Request::Receipts(IncompleteReceiptsRequest {
-			hash: Field::BackReference(0, 0),
-		})).unwrap();
-
-		let mut batch = builder.build();
-		batch.requests[1].fill(|_req_idx, _out_idx| Ok(Output::Hash(42.into())));
-
-		assert!(batch.next_complete().is_some());
-		batch.answered += 1;
-		assert!(batch.next_complete().is_some());
-	}
-
-	#[test]
-	#[should_panic]
-	fn batch_receipts_backreference_wrong_output() {
-		let mut builder = Builder::default();
-		builder.push(Request::HeaderProof(IncompleteHeaderProofRequest {
-			num: 100.into(), // header proof puts hash at output 0.
-		})).unwrap();
-		builder.push(Request::Receipts(IncompleteReceiptsRequest {
-			hash: Field::BackReference(0, 0),
-		})).unwrap();
-
-		let mut batch = builder.build();
-		batch.requests[1].fill(|_req_idx, _out_idx| Ok(Output::Number(42)));
-
-		batch.next_complete();
-		batch.answered += 1;
-		batch.next_complete();
-	}
 }

@@ -907,7 +907,7 @@ pub mod transaction_index {
 		fn fill<F>(&mut self, oracle: F) where F: Fn(usize, usize) -> Result<Output, NoSuchOutput> {
 			if let Field::BackReference(req, idx) = self.hash {
 				self.hash = match oracle(req, idx) {
-					Ok(Output::Hash(hash)) => Field::Scalar(hash.into()),
+					Ok(Output::Number(hash)) => Field::Scalar(hash.into()),
 					_ => Field::BackReference(req, idx),
 				}
 			}
@@ -982,7 +982,7 @@ pub mod block_receipts {
 		fn fill<F>(&mut self, oracle: F) where F: Fn(usize, usize) -> Result<Output, NoSuchOutput> {
 			if let Field::BackReference(req, idx) = self.hash {
 				self.hash = match oracle(req, idx) {
-					Ok(Output::Hash(hash)) => Field::Scalar(hash.into()),
+					Ok(Output::Number(hash)) => Field::Scalar(hash.into()),
 					_ => Field::BackReference(req, idx),
 				}
 			}

@@ -24,9 +24,9 @@ heapsize = "0.4"
 keccak-hash = "0.1.2"
 log = "0.4"
 parity-bytes = "0.1"
-parity-crypto = "0.3.0"
+parity-crypto = "0.2"
 parking_lot = "0.7"
-trie-db = "0.11.0"
+patricia-trie = "0.3.0"
 patricia-trie-ethereum = { path = "../../util/patricia-trie-ethereum" }
 rand = "0.3"
 rlp = { version = "0.3.0", features = ["ethereum"] }

@@ -1,43 +0,0 @@
-[
-	{
-		"constant": true,
-		"inputs": [
-			{
-				"name":"user",
-				"type":"address"
-			}
-		],
-		"name": "availableKeys",
-		"outputs": [
-			{
-				"name": "",
-				"type": "bytes32[]"
-			}
-		],
-		"payable": false,
-		"stateMutability": "view",
-		"type": "function"
-	},
-	{
-		"constant":true,
-		"inputs": [
-			{
-				"name":"user",
-				"type":"address"
-			},
-			{
-				"name":"document",
-				"type":"bytes32"
-			}
-		],
-		"name":"checkPermissions",
-		"outputs": [
-			{
-				"name":"",
-				"type":"bool"
-			}
-		],
-		"payable":false,
-		"type":"function"
-	}
-]
@@ -18,23 +18,22 @@

 use std::io::Read;
 use std::str::FromStr;
-use std::sync::Arc;
 use std::iter::repeat;
 use std::time::{Instant, Duration};
 use std::collections::HashMap;
 use std::collections::hash_map::Entry;
 use parking_lot::Mutex;
+use ethcore::account_provider::AccountProvider;
 use ethereum_types::{H128, H256, Address};
 use ethjson;
-use ethkey::{Signature, Public};
+use ethkey::{Signature, Password, Public};
 use crypto;
 use futures::Future;
 use fetch::{Fetch, Client as FetchClient, Method, BodyReader, Request};
 use bytes::{Bytes, ToPretty};
 use error::{Error, ErrorKind};
 use url::Url;
-use super::Signer;
-use super::key_server_keys::address_to_key;
+use super::find_account_password;

 /// Initialization vector length.
 const INIT_VEC_LEN: usize = 16;
@@ -48,6 +47,7 @@ pub trait Encryptor: Send + Sync + 'static {
 	fn encrypt(
 		&self,
 		contract_address: &Address,
+		accounts: &AccountProvider,
 		initialisation_vector: &H128,
 		plain_data: &[u8],
 	) -> Result<Bytes, Error>;
@@ -56,6 +56,7 @@ pub trait Encryptor: Send + Sync + 'static {
 	fn decrypt(
 		&self,
 		contract_address: &Address,
+		accounts: &AccountProvider,
 		cypher: &[u8],
 	) -> Result<Bytes, Error>;
 }
@@ -69,6 +70,8 @@ pub struct EncryptorConfig {
 	pub threshold: u32,
 	/// Account used for signing requests to key server
 	pub key_server_account: Option<Address>,
+	/// Passwords used to unlock accounts
+	pub passwords: Vec<Password>,
 }

 struct EncryptionSession {
@@ -81,20 +84,14 @@ pub struct SecretStoreEncryptor {
 	config: EncryptorConfig,
 	client: FetchClient,
 	sessions: Mutex<HashMap<Address, EncryptionSession>>,
-	signer: Arc<Signer>,
 }

 impl SecretStoreEncryptor {
 	/// Create new encryptor
-	pub fn new(
-		config: EncryptorConfig,
-		client: FetchClient,
-		signer: Arc<Signer>,
-	) -> Result<Self, Error> {
+	pub fn new(config: EncryptorConfig, client: FetchClient) -> Result<Self, Error> {
 		Ok(SecretStoreEncryptor {
 			config,
 			client,
-			signer,
 			sessions: Mutex::default(),
 		})
 	}
@@ -105,12 +102,13 @@ impl SecretStoreEncryptor {
 		url_suffix: &str,
 		use_post: bool,
 		contract_address: &Address,
+		accounts: &AccountProvider,
 	) -> Result<Bytes, Error> {
 		// check if the key was already cached
 		if let Some(key) = self.obtained_key(contract_address) {
 			return Ok(key);
 		}
-		let contract_address_signature = self.sign_contract_address(contract_address)?;
+		let contract_address_signature = self.sign_contract_address(contract_address, accounts)?;
 		let requester = self.config.key_server_account.ok_or_else(|| ErrorKind::KeyServerAccountNotSet)?;

 		// key id in SS is H256 && we have H160 here => expand with assitional zeros
@@ -150,9 +148,10 @@ impl SecretStoreEncryptor {

 		// response is JSON string (which is, in turn, hex-encoded, encrypted Public)
 		let encrypted_bytes: ethjson::bytes::Bytes = result.trim_matches('\"').parse().map_err(|e| ErrorKind::Encrypt(e))?;
+		let password = find_account_password(&self.config.passwords, &*accounts, &requester);

 		// decrypt Public
-		let decrypted_bytes = self.signer.decrypt(requester, &crypto::DEFAULT_MAC, &encrypted_bytes)?;
+		let decrypted_bytes = accounts.decrypt(requester, password, &crypto::DEFAULT_MAC, &encrypted_bytes)?;
 		let decrypted_key = Public::from_slice(&decrypted_bytes);

 		// and now take x coordinate of Public as a key
@@ -188,9 +187,12 @@ impl SecretStoreEncryptor {
 		}
 	}

-	fn sign_contract_address(&self, contract_address: &Address) -> Result<Signature, Error> {
+	fn sign_contract_address(&self, contract_address: &Address, accounts: &AccountProvider) -> Result<Signature, Error> {
+		// key id in SS is H256 && we have H160 here => expand with assitional zeros
+		let contract_address_extended: H256 = contract_address.into();
 		let key_server_account = self.config.key_server_account.ok_or_else(|| ErrorKind::KeyServerAccountNotSet)?;
-		Ok(self.signer.sign(key_server_account, address_to_key(contract_address))?)
+		let password = find_account_password(&self.config.passwords, accounts, &key_server_account);
+		Ok(accounts.sign(key_server_account, password, H256::from_slice(&contract_address_extended))?)
 	}
 }

@@ -198,15 +200,16 @@ impl Encryptor for SecretStoreEncryptor {
 	fn encrypt(
 		&self,
 		contract_address: &Address,
+		accounts: &AccountProvider,
 		initialisation_vector: &H128,
 		plain_data: &[u8],
 	) -> Result<Bytes, Error> {
 		// retrieve the key, try to generate it if it doesn't exist yet
-		let key = match self.retrieve_key("", false, contract_address) {
+		let key = match self.retrieve_key("", false, contract_address, &*accounts) {
 			Ok(key) => Ok(key),
 			Err(Error(ErrorKind::EncryptionKeyNotFound(_), _)) => {
 				trace!(target: "privatetx", "Key for account wasnt found in sstore. Creating. Address: {:?}", contract_address);
-				self.retrieve_key(&format!("/{}", self.config.threshold), true, contract_address)
+				self.retrieve_key(&format!("/{}", self.config.threshold), true, contract_address, &*accounts)
 			}
 			Err(err) => Err(err),
 		}?;
@@ -225,6 +228,7 @@ impl Encryptor for SecretStoreEncryptor {
 	fn decrypt(
 		&self,
 		contract_address: &Address,
+		accounts: &AccountProvider,
 		cypher: &[u8],
 	) -> Result<Bytes, Error> {
 		// initialization vector takes INIT_VEC_LEN bytes
@@ -234,7 +238,7 @@ impl Encryptor for SecretStoreEncryptor {
 		}

 		// retrieve existing key
-		let key = self.retrieve_key("", false, contract_address)?;
+		let key = self.retrieve_key("", false, contract_address, accounts)?;

 		// use symmetric decryption to decrypt document
 		let (cypher, iv) = cypher.split_at(cypher_len - INIT_VEC_LEN);
@@ -254,6 +258,7 @@ impl Encryptor for NoopEncryptor {
 	fn encrypt(
 		&self,
 		_contract_address: &Address,
+		_accounts: &AccountProvider,
 		_initialisation_vector: &H128,
 		data: &[u8],
 	) -> Result<Bytes, Error> {
@@ -263,6 +268,7 @@ impl Encryptor for NoopEncryptor {
 	fn decrypt(
 		&self,
 		_contract_address: &Address,
+		_accounts: &AccountProvider,
 		data: &[u8],
 	) -> Result<Bytes, Error> {
 		Ok(data.to_vec())

@@ -17,10 +17,10 @@
 use ethereum_types::Address;
 use rlp::DecoderError;
 use ethtrie::TrieError;
+use ethcore::account_provider::SignError;
 use ethcore::error::{Error as EthcoreError, ExecutionError};
 use types::transaction::Error as TransactionError;
 use ethkey::Error as KeyError;
-use ethkey::crypto::Error as CryptoError;
 use txpool::Error as TxPoolError;

 error_chain! {
@@ -29,7 +29,6 @@ error_chain! {
 		Decoder(DecoderError) #[doc = "RLP decoding error."];
 		Trie(TrieError) #[doc = "Error concerning TrieDBs."];
 		Txpool(TxPoolError) #[doc = "Tx pool error."];
-		Crypto(CryptoError) #[doc = "Crypto error."];
 	}

 	errors {
@@ -76,7 +75,7 @@ error_chain! {
 		}

 		#[doc = "Wrong private transaction type."]
-		BadTransactionType {
+		BadTransactonType {
 			description("Wrong private transaction type."),
 			display("Wrong private transaction type"),
 		}
@@ -153,6 +152,12 @@ error_chain! {
 			display("General signing error {}", err),
 		}

+		#[doc = "Account provider signing error."]
+		Sign(err: SignError) {
+			description("Account provider signing error."),
+			display("Account provider signing error {}", err),
+		}
+
 		#[doc = "Error of transactions processing."]
 		Transaction(err: TransactionError) {
 			description("Error of transactions processing."),
@@ -167,6 +172,12 @@ error_chain! {
 	}
 }

+impl From<SignError> for Error {
+	fn from(err: SignError) -> Self {
+		ErrorKind::Sign(err).into()
+	}
+}
+
 impl From<KeyError> for Error {
 	fn from(err: KeyError) -> Self {
 		ErrorKind::Key(err).into()

@@ -1,173 +0,0 @@
|
|||||||
// Copyright 2015-2018 Parity Technologies (UK) Ltd.
|
|
||||||
// This file is part of Parity.
|
|
||||||
|
|
||||||
// Parity is free software: you can redistribute it and/or modify
|
|
||||||
// it under the terms of the GNU General Public License as published by
|
|
||||||
// the Free Software Foundation, either version 3 of the License, or
|
|
||||||
// (at your option) any later version.
|
|
||||||
|
|
||||||
// Parity is distributed in the hope that it will be useful,
|
|
||||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
|
||||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
|
||||||
// GNU General Public License for more details.
|
|
||||||
|
|
||||||
// You should have received a copy of the GNU General Public License
|
|
||||||
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
|
|
||||||
|
|
||||||
//! Wrapper around key server responsible for access keys processing.
|
|
||||||
|
|
||||||
use std::sync::Arc;
|
|
||||||
use parking_lot::RwLock;
|
|
||||||
use ethereum_types::{H256, Address};
|
|
||||||
use call_contract::{CallContract, RegistryInfo};
|
|
||||||
use ethcore::client::BlockId;
|
|
||||||
use ethabi::FunctionOutputDecoder;
|
|
||||||
|
|
||||||
const ACL_CHECKER_CONTRACT_REGISTRY_NAME: &'static str = "secretstore_acl_checker";
|
|
||||||
|
|
||||||
use_contract!(keys_acl_contract, "res/keys_acl.json");
|
|
||||||
|
|
||||||
/// Returns the address (of the contract), that corresponds to the key
|
|
||||||
pub fn key_to_address(key: &H256) -> Address {
|
|
||||||
Address::from_slice(&key.to_vec()[..10])
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Returns the key from the key server associated with the contract
|
|
||||||
pub fn address_to_key(contract_address: &Address) -> H256 {
|
|
||||||
// Current solution uses contract address extended with 0 as id
|
|
||||||
let contract_address_extended: H256 = contract_address.into();
|
|
||||||
|
|
||||||
H256::from_slice(&contract_address_extended)
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Trait for keys server keys provider.
|
|
||||||
pub trait KeyProvider: Send + Sync + 'static {
|
|
||||||
/// Account, that is used for communication with key server
|
|
||||||
fn key_server_account(&self) -> Option<Address>;
|
|
||||||
|
|
||||||
/// List of keys available for the account
|
|
||||||
fn available_keys(&self, block: BlockId, account: &Address) -> Option<Vec<Address>>;
|
|
||||||
|
|
||||||
/// Update permissioning contract
|
|
||||||
fn update_acl_contract(&self);
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Secret Store keys provider
|
|
||||||
pub struct SecretStoreKeys<C> where C: CallContract + RegistryInfo + Send + Sync + 'static {
|
|
||||||
client: Arc<C>,
|
|
||||||
key_server_account: Option<Address>,
|
|
||||||
keys_acl_contract: RwLock<Option<Address>>,
|
|
||||||
}
|
|
||||||
|
|
||||||
impl<C> SecretStoreKeys<C> where C: CallContract + RegistryInfo + Send + Sync + 'static {
|
|
||||||
/// Create provider
|
|
||||||
pub fn new(client: Arc<C>, key_server_account: Option<Address>) -> Self {
|
|
||||||
SecretStoreKeys {
|
|
||||||
client,
|
|
||||||
key_server_account,
|
|
||||||
keys_acl_contract: RwLock::new(None),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
impl<C> KeyProvider for SecretStoreKeys<C> where C: CallContract + RegistryInfo + Send + Sync + 'static {
|
|
||||||
fn key_server_account(&self) -> Option<Address> {
|
|
||||||
self.key_server_account
|
|
||||||
}
|
|
||||||
|
|
||||||
fn available_keys(&self, block: BlockId, account: &Address) -> Option<Vec<Address>> {
|
|
||||||
match *self.keys_acl_contract.read() {
|
|
||||||
Some(acl_contract_address) => {
|
|
||||||
let (data, decoder) = keys_acl_contract::functions::available_keys::call(*account);
|
|
||||||
if let Ok(value) = self.client.call_contract(block, acl_contract_address, data) {
|
|
||||||
decoder.decode(&value).ok().map(|key_values| {
|
|
||||||
key_values.iter().map(key_to_address).collect()
|
|
||||||
})
|
|
||||||
} else {
|
|
||||||
None
|
|
||||||
}
|
|
||||||
}
|
|
||||||
None => None,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
fn update_acl_contract(&self) {
|
|
||||||
-		let contract_address = self.client.registry_address(ACL_CHECKER_CONTRACT_REGISTRY_NAME.into(), BlockId::Latest);
-		if *self.keys_acl_contract.read() != contract_address {
-			trace!(target: "privatetx", "Configuring for ACL checker contract from address {:?}",
-				contract_address);
-			*self.keys_acl_contract.write() = contract_address;
-		}
-	}
-}
-
-/// Dummy keys provider.
-pub struct StoringKeyProvider {
-	available_keys: RwLock<Option<Vec<Address>>>,
-	key_server_account: Option<Address>,
-}
-
-impl StoringKeyProvider {
-	/// Store available keys
-	pub fn set_available_keys(&self, keys: &Vec<Address>) {
-		*self.available_keys.write() = Some(keys.clone())
-	}
-}
-
-impl Default for StoringKeyProvider {
-	fn default() -> Self {
-		StoringKeyProvider {
-			available_keys: RwLock::new(None),
-			key_server_account: Some(Address::default()),
-		}
-	}
-}
-
-impl KeyProvider for StoringKeyProvider {
-	fn key_server_account(&self) -> Option<Address> {
-		self.key_server_account
-	}
-
-	fn available_keys(&self, _block: BlockId, _account: &Address) -> Option<Vec<Address>> {
-		self.available_keys.read().clone()
-	}
-
-	fn update_acl_contract(&self) {}
-}
-
-#[cfg(test)]
-mod tests {
-	use std::sync::Arc;
-	use ethkey::{Secret, KeyPair};
-	use bytes::Bytes;
-	use super::*;
-
-	struct DummyRegistryClient {
-		registry_address: Option<Address>,
-	}
-
-	impl DummyRegistryClient {
-		pub fn new(registry_address: Option<Address>) -> Self {
-			DummyRegistryClient {
-				registry_address
-			}
-		}
-	}
-
-	impl RegistryInfo for DummyRegistryClient {
-		fn registry_address(&self, _name: String, _block: BlockId) -> Option<Address> { self.registry_address }
-	}
-
-	impl CallContract for DummyRegistryClient {
-		fn call_contract(&self, _id: BlockId, _address: Address, _data: Bytes) -> Result<Bytes, String> { Ok(vec![]) }
-	}
-
-	#[test]
-	fn should_update_acl_contract() {
-		let key = KeyPair::from_secret(Secret::from("0000000000000000000000000000000000000000000000000000000000000011")).unwrap();
-		let client = DummyRegistryClient::new(Some(key.address()));
-		let keys_data = SecretStoreKeys::new(Arc::new(client), None);
-		keys_data.update_acl_contract();
-		assert_eq!(keys_data.keys_acl_contract.read().unwrap(), key.address());
-	}
-}
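The removed `StoringKeyProvider` above uses an interior-mutability pattern: a `RwLock<Option<Vec<Address>>>` written through `&self`. A minimal self-contained sketch of the same pattern, using `std::sync::RwLock` and a plain byte-array `Address` in place of parity's `parking_lot` lock and `ethereum_types::Address` (those substitutions are assumptions, not parity's code):

```rust
use std::sync::RwLock;

// Stand-in for ethereum_types::Address.
type Address = [u8; 20];

/// Keys provider: callers pre-load the key list; reads clone it out.
struct StoringKeyProvider {
    available_keys: RwLock<Option<Vec<Address>>>,
    key_server_account: Option<Address>,
}

impl StoringKeyProvider {
    fn new() -> Self {
        StoringKeyProvider {
            available_keys: RwLock::new(None),
            key_server_account: Some([0u8; 20]),
        }
    }

    /// Store available keys (interior mutability: mutation through `&self`).
    fn set_available_keys(&self, keys: &[Address]) {
        *self.available_keys.write().unwrap() = Some(keys.to_vec());
    }

    fn available_keys(&self) -> Option<Vec<Address>> {
        // The read guard derefs to Option<Vec<Address>>, which we clone out.
        self.available_keys.read().unwrap().clone()
    }

    fn key_server_account(&self) -> Option<Address> {
        self.key_server_account
    }
}

fn main() {
    let provider = StoringKeyProvider::new();
    assert!(provider.available_keys().is_none());
    provider.set_available_keys(&[[1u8; 20]]);
    assert_eq!(provider.available_keys(), Some(vec![[1u8; 20]]));
    assert!(provider.key_server_account().is_some());
    println!("ok");
}
```

Cloning on every read keeps the lock held only briefly, at the cost of copying the key list; the removed parity code made the same trade-off.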
@@ -21,7 +21,6 @@
 #![recursion_limit="256"]
 
 mod encryptor;
-mod key_server_keys;
 mod private_transactions;
 mod messages;
 mod error;
@@ -42,7 +41,7 @@ extern crate keccak_hash as hash;
 extern crate parity_bytes as bytes;
 extern crate parity_crypto as crypto;
 extern crate parking_lot;
-extern crate trie_db as trie;
+extern crate patricia_trie as trie;
 extern crate patricia_trie_ethereum as ethtrie;
 extern crate rlp;
 extern crate rustc_hex;
@@ -65,7 +64,6 @@ extern crate rand;
 extern crate env_logger;
 
 pub use encryptor::{Encryptor, SecretStoreEncryptor, EncryptorConfig, NoopEncryptor};
-pub use key_server_keys::{KeyProvider, SecretStoreKeys, StoringKeyProvider};
 pub use private_transactions::{VerifiedPrivateTransaction, VerificationStore, PrivateTransactionSigningDesc, SigningStore};
 pub use messages::{PrivateTransaction, SignedPrivateTransaction};
 pub use error::{Error, ErrorKind};
@@ -87,11 +85,12 @@ use ethcore::client::{
 	Client, ChainNotify, NewBlocks, ChainMessageType, ClientIoMessage, BlockId,
 	Call, BlockInfo
 };
+use ethcore::account_provider::AccountProvider;
 use ethcore::miner::{self, Miner, MinerService, pool_client::NonceCache};
-use ethcore::{state, state_db};
 use ethcore::trace::{Tracer, VMTracer};
 use call_contract::CallContract;
 use rustc_hex::FromHex;
+use ethkey::Password;
 use ethabi::FunctionOutputDecoder;
 
 // Source avaiable at https://github.com/parity-contracts/private-tx/blob/master/contracts/PrivateContract.sol
@@ -118,6 +117,8 @@ pub struct ProviderConfig {
 	pub validator_accounts: Vec<Address>,
 	/// Account used for signing public transactions created from private transactions
 	pub signer_account: Option<Address>,
+	/// Passwords used to unlock accounts
+	pub passwords: Vec<Password>,
 }
 
 #[derive(Debug)]
@@ -125,85 +126,50 @@ pub struct ProviderConfig {
 pub struct Receipt {
 	/// Private transaction hash.
 	pub hash: H256,
-	/// Contract address.
-	pub contract_address: Address,
+	/// Created contract address if any.
+	pub contract_address: Option<Address>,
 	/// Execution status.
 	pub status_code: u8,
 }
 
-/// Payload signing and decrypting capabilities.
-pub trait Signer: Send + Sync {
-	/// Decrypt payload using private key of given address.
-	fn decrypt(&self, account: Address, shared_mac: &[u8], payload: &[u8]) -> Result<Vec<u8>, Error>;
-	/// Sign given hash using provided account.
-	fn sign(&self, account: Address, hash: ethkey::Message) -> Result<Signature, Error>;
-}
-
-/// Signer implementation that errors on any request.
-pub struct DummySigner;
-impl Signer for DummySigner {
-	fn decrypt(&self, _account: Address, _shared_mac: &[u8], _payload: &[u8]) -> Result<Vec<u8>, Error> {
-		Err("Decrypting is not supported.".to_owned())?
-	}
-
-	fn sign(&self, _account: Address, _hash: ethkey::Message) -> Result<Signature, Error> {
-		Err("Signing is not supported.".to_owned())?
-	}
-}
-
-/// Signer implementation using multiple keypairs
-pub struct KeyPairSigner(pub Vec<ethkey::KeyPair>);
-impl Signer for KeyPairSigner {
-	fn decrypt(&self, account: Address, shared_mac: &[u8], payload: &[u8]) -> Result<Vec<u8>, Error> {
-		let kp = self.0.iter().find(|k| k.address() == account).ok_or(ethkey::Error::InvalidAddress)?;
-		Ok(ethkey::crypto::ecies::decrypt(kp.secret(), shared_mac, payload)?)
-	}
-
-	fn sign(&self, account: Address, hash: ethkey::Message) -> Result<Signature, Error> {
-		let kp = self.0.iter().find(|k| k.address() == account).ok_or(ethkey::Error::InvalidAddress)?;
-		Ok(ethkey::sign(kp.secret(), &hash)?)
-	}
-}
-
 /// Manager of private transactions
 pub struct Provider {
 	encryptor: Box<Encryptor>,
 	validator_accounts: HashSet<Address>,
 	signer_account: Option<Address>,
+	passwords: Vec<Password>,
 	notify: RwLock<Vec<Weak<ChainNotify>>>,
 	transactions_for_signing: RwLock<SigningStore>,
 	transactions_for_verification: VerificationStore,
 	client: Arc<Client>,
 	miner: Arc<Miner>,
-	accounts: Arc<Signer>,
+	accounts: Arc<AccountProvider>,
 	channel: IoChannel<ClientIoMessage>,
-	keys_provider: Arc<KeyProvider>,
 }
 
 #[derive(Debug)]
 pub struct PrivateExecutionResult<T, V> where T: Tracer, V: VMTracer {
 	code: Option<Bytes>,
 	state: Bytes,
-	contract_address: Address,
+	contract_address: Option<Address>,
 	result: Executed<T::Output, V::Output>,
 }
 
-impl Provider {
+impl Provider where {
 	/// Create a new provider.
 	pub fn new(
 		client: Arc<Client>,
 		miner: Arc<Miner>,
-		accounts: Arc<Signer>,
+		accounts: Arc<AccountProvider>,
 		encryptor: Box<Encryptor>,
 		config: ProviderConfig,
 		channel: IoChannel<ClientIoMessage>,
-		keys_provider: Arc<KeyProvider>,
 	) -> Self {
-		keys_provider.update_acl_contract();
 		Provider {
 			encryptor,
 			validator_accounts: config.validator_accounts.into_iter().collect(),
 			signer_account: config.signer_account,
+			passwords: config.passwords,
 			notify: RwLock::default(),
 			transactions_for_signing: RwLock::default(),
 			transactions_for_verification: VerificationStore::default(),
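The `Receipt` and `PrivateExecutionResult` changes above widen `contract_address` from `Address` to `Option<Address>`: a `Call` has a known target contract, while a `Create` has no address until one is derived from sender and nonce. A minimal sketch of that distinction with stand-in types (the two-variant `Action` enum mirrors parity's `types::transaction::Action` in shape only; everything else here is illustrative):

```rust
// Stand-in for ethereum_types::Address.
type Address = [u8; 20];

// Simplified shape of parity's transaction Action enum.
enum Action {
    Create,
    Call(Address),
}

struct Receipt {
    contract_address: Option<Address>,
}

// A call targets a known contract; a create has no address yet,
// hence Option<Address> rather than a bare Address.
fn receipt_for(action: &Action) -> Receipt {
    match action {
        Action::Call(addr) => Receipt { contract_address: Some(*addr) },
        Action::Create => Receipt { contract_address: None },
    }
}

fn main() {
    assert!(receipt_for(&Action::Call([7u8; 20])).contract_address.is_some());
    assert!(receipt_for(&Action::Create).contract_address.is_none());
    println!("ok");
}
```

Encoding "no address yet" in the type forces every consumer of the receipt to handle the create case explicitly instead of reading a default address.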
@@ -211,7 +177,6 @@ impl Provider {
 			miner,
 			accounts,
 			channel,
-			keys_provider,
 		}
 	}
 
@@ -241,7 +206,7 @@ impl Provider {
 			bail!(ErrorKind::SignerAccountNotSet);
 		}
 		let tx_hash = signed_transaction.hash();
-		let contract = Self::contract_address_from_transaction(&signed_transaction).map_err(|_| ErrorKind::BadTransactionType)?;
+		let contract = Self::contract_address_from_transaction(&signed_transaction).map_err(|_| ErrorKind::BadTransactonType)?;
 		let data = signed_transaction.rlp_bytes();
 		let encrypted_transaction = self.encrypt(&contract, &Self::iv_from_transaction(&signed_transaction), &data)?;
 		let private = PrivateTransaction::new(encrypted_transaction, contract);
@@ -262,7 +227,7 @@ impl Provider {
 		self.broadcast_private_transaction(private.hash(), private.rlp_bytes());
 		Ok(Receipt {
 			hash: tx_hash,
-			contract_address: contract,
+			contract_address: Some(contract),
 			status_code: 0,
 		})
 	}
@@ -276,20 +241,21 @@ impl Provider {
 		keccak(&state_buf.as_ref())
 	}
 
-	fn pool_client<'a>(&'a self, nonce_cache: &'a NonceCache, local_accounts: &'a HashSet<Address>) -> miner::pool_client::PoolClient<'a, Client> {
+	fn pool_client<'a>(&'a self, nonce_cache: &'a NonceCache) -> miner::pool_client::PoolClient<'a, Client> {
 		let engine = self.client.engine();
 		let refuse_service_transactions = true;
 		miner::pool_client::PoolClient::new(
 			&*self.client,
 			nonce_cache,
 			engine,
-			local_accounts,
+			Some(&*self.accounts),
 			refuse_service_transactions,
 		)
 	}
 
 	/// Retrieve and verify the first available private transaction for every sender
 	fn process_verification_queue(&self) -> Result<(), Error> {
+		let nonce_cache = NonceCache::new(NONCE_CACHE_SIZE);
 		let process_transaction = |transaction: &VerifiedPrivateTransaction| -> Result<_, String> {
 			let private_hash = transaction.private_transaction.hash();
 			match transaction.validator_account {
@@ -319,7 +285,8 @@ impl Provider {
 					let private_state = private_state.expect("Error was checked before");
 					let private_state_hash = self.calculate_state_hash(&private_state, contract_nonce);
 					trace!(target: "privatetx", "Hashed effective private state for validator: {:?}", private_state_hash);
-					let signed_state = self.accounts.sign(validator_account, private_state_hash);
+					let password = find_account_password(&self.passwords, &*self.accounts, &validator_account);
+					let signed_state = self.accounts.sign(validator_account, password, private_state_hash);
 					if let Err(e) = signed_state {
 						bail!("Cannot sign the state: {:?}", e);
 					}
@@ -331,9 +298,7 @@ impl Provider {
 			}
 			Ok(())
 		};
-		let nonce_cache = NonceCache::new(NONCE_CACHE_SIZE);
-		let local_accounts = HashSet::new();
-		let ready_transactions = self.transactions_for_verification.drain(self.pool_client(&nonce_cache, &local_accounts));
+		let ready_transactions = self.transactions_for_verification.drain(self.pool_client(&nonce_cache));
 		for transaction in ready_transactions {
 			if let Err(e) = process_transaction(&transaction) {
 				warn!(target: "privatetx", "Error: {:?}", e);
@@ -374,7 +339,8 @@ impl Provider {
 		let chain_id = desc.original_transaction.chain_id();
 		let hash = public_tx.hash(chain_id);
 		let signer_account = self.signer_account.ok_or_else(|| ErrorKind::SignerAccountNotSet)?;
-		let signature = self.accounts.sign(signer_account, hash)?;
+		let password = find_account_password(&self.passwords, &*self.accounts, &signer_account);
+		let signature = self.accounts.sign(signer_account, password, hash)?;
 		let signed = SignedTransaction::new(public_tx.with_signature(signature, chain_id))?;
 		match self.miner.import_own_transaction(&*self.client, signed.into()) {
 			Ok(_) => trace!(target: "privatetx", "Public transaction added to queue"),
@@ -415,7 +381,7 @@ impl Provider {
 			Action::Call(contract) => Ok(contract),
 			_ => {
 				warn!(target: "privatetx", "Incorrect type of action for the transaction");
-				bail!(ErrorKind::BadTransactionType);
+				bail!(ErrorKind::BadTransactonType);
 			}
 		}
 	}
@@ -469,12 +435,12 @@ impl Provider {
 
 	fn encrypt(&self, contract_address: &Address, initialisation_vector: &H128, data: &[u8]) -> Result<Bytes, Error> {
 		trace!(target: "privatetx", "Encrypt data using key(address): {:?}", contract_address);
-		Ok(self.encryptor.encrypt(contract_address, initialisation_vector, data)?)
+		Ok(self.encryptor.encrypt(contract_address, &*self.accounts, initialisation_vector, data)?)
 	}
 
 	fn decrypt(&self, contract_address: &Address, data: &[u8]) -> Result<Bytes, Error> {
 		trace!(target: "privatetx", "Decrypt data using key(address): {:?}", contract_address);
-		Ok(self.encryptor.decrypt(contract_address, data)?)
+		Ok(self.encryptor.decrypt(contract_address, &*self.accounts, data)?)
 	}
 
 	fn get_decrypted_state(&self, address: &Address, block: BlockId) -> Result<Bytes, Error> {
@@ -518,14 +484,6 @@ impl Provider {
 		raw
 	}
 
-	fn patch_account_state(&self, contract_address: &Address, block: BlockId, state: &mut state::State<state_db::StateDB>) -> Result<(), Error> {
-		let contract_code = Arc::new(self.get_decrypted_code(contract_address, block)?);
-		let contract_state = self.get_decrypted_state(contract_address, block)?;
-		trace!(target: "privatetx", "Patching contract at {:?}, code: {:?}, state: {:?}", contract_address, contract_code, contract_state);
-		state.patch_account(contract_address, contract_code, Self::snapshot_to_storage(contract_state))?;
-		Ok(())
-	}
-
 	pub fn execute_private<T, V>(&self, transaction: &SignedTransaction, options: TransactOptions<T, V>, block: BlockId) -> Result<PrivateExecutionResult<T, V>, Error>
 		where
 			T: Tracer,
@@ -538,48 +496,41 @@ impl Provider {
 		// TODO #9825 in case of BlockId::Latest these need to operate on the same state
 		let contract_address = match transaction.action {
 			Action::Call(ref contract_address) => {
-				// Patch current contract state
-				self.patch_account_state(contract_address, block, &mut state)?;
+				let contract_code = Arc::new(self.get_decrypted_code(contract_address, block)?);
+				let contract_state = self.get_decrypted_state(contract_address, block)?;
+				trace!(target: "privatetx", "Patching contract at {:?}, code: {:?}, state: {:?}", contract_address, contract_code, contract_state);
+				state.patch_account(contract_address, contract_code, Self::snapshot_to_storage(contract_state))?;
 				Some(*contract_address)
 			},
 			Action::Create => None,
 		};
 
 		let engine = self.client.engine();
-		let sender = transaction.sender();
-		let nonce = state.nonce(&sender)?;
-		let contract_address = contract_address.unwrap_or_else(|| {
-			let (new_address, _) = ethcore_contract_address(engine.create_address_scheme(env_info.number), &sender, &nonce, &transaction.data);
-			new_address
-		});
-		// Patch other available private contracts' states as well
-		// TODO: #10133 patch only required for the contract states
-		if let Some(key_server_account) = self.keys_provider.key_server_account() {
-			if let Some(available_contracts) = self.keys_provider.available_keys(block, &key_server_account) {
-				for private_contract in available_contracts {
-					if private_contract == contract_address {
-						continue;
-					}
-					self.patch_account_state(&private_contract, block, &mut state)?;
-				}
-			}
-		}
+		let contract_address = contract_address.or({
+			let sender = transaction.sender();
+			let nonce = state.nonce(&sender)?;
+			let (new_address, _) = ethcore_contract_address(engine.create_address_scheme(env_info.number), &sender, &nonce, &transaction.data);
+			Some(new_address)
+		});
 		let machine = engine.machine();
 		let schedule = machine.schedule(env_info.number);
 		let result = Executive::new(&mut state, &env_info, &machine, &schedule).transact_virtual(transaction, options)?;
-		let (encrypted_code, encrypted_storage) = {
-			let (code, storage) = state.into_account(&contract_address)?;
-			trace!(target: "privatetx", "Private contract executed. code: {:?}, state: {:?}, result: {:?}", code, storage, result.output);
-			let enc_code = match code {
-				Some(c) => Some(self.encrypt(&contract_address, &Self::iv_from_address(&contract_address), &c)?),
-				None => None,
-			};
-			(enc_code, self.encrypt(&contract_address, &Self::iv_from_transaction(transaction), &Self::snapshot_from_storage(&storage))?)
+		let (encrypted_code, encrypted_storage) = match contract_address {
+			None => bail!(ErrorKind::ContractDoesNotExist),
+			Some(address) => {
+				let (code, storage) = state.into_account(&address)?;
+				trace!(target: "privatetx", "Private contract executed. code: {:?}, state: {:?}, result: {:?}", code, storage, result.output);
+				let enc_code = match code {
+					Some(c) => Some(self.encrypt(&address, &Self::iv_from_address(&address), &c)?),
+					None => None,
+				};
+				(enc_code, self.encrypt(&address, &Self::iv_from_transaction(transaction), &Self::snapshot_from_storage(&storage))?)
+			},
 		};
 		Ok(PrivateExecutionResult {
 			code: encrypted_code,
 			state: encrypted_storage,
-			contract_address: contract_address,
+			contract_address,
 			result,
 		})
 	}
@@ -604,13 +555,16 @@ impl Provider {
 
 	/// Returns the key from the key server associated with the contract
 	pub fn contract_key_id(&self, contract_address: &Address) -> Result<H256, Error> {
-		Ok(key_server_keys::address_to_key(contract_address))
+		// Current solution uses contract address extended with 0 as id
+		let contract_address_extended: H256 = contract_address.into();
+
+		Ok(H256::from_slice(&contract_address_extended))
 	}
 
 	/// Create encrypted public contract deployment transaction.
-	pub fn public_creation_transaction(&self, block: BlockId, source: &SignedTransaction, validators: &[Address], gas_price: U256) -> Result<(Transaction, Address), Error> {
+	pub fn public_creation_transaction(&self, block: BlockId, source: &SignedTransaction, validators: &[Address], gas_price: U256) -> Result<(Transaction, Option<Address>), Error> {
 		if let Action::Call(_) = source.action {
-			bail!(ErrorKind::BadTransactionType);
+			bail!(ErrorKind::BadTransactonType);
 		}
 		let sender = source.sender();
 		let state = self.client.state_at(block).ok_or(ErrorKind::StatePruned)?;
@@ -649,7 +603,7 @@ impl Provider {
 	/// Create encrypted public contract deployment transaction. Returns updated encrypted state.
 	pub fn execute_private_transaction(&self, block: BlockId, source: &SignedTransaction) -> Result<Bytes, Error> {
 		if let Action::Create = source.action {
-			bail!(ErrorKind::BadTransactionType);
+			bail!(ErrorKind::BadTransactonType);
 		}
 		let result = self.execute_private(source, TransactOptions::with_no_tracing(), block)?;
 		Ok(result.state)
@@ -729,13 +683,12 @@ impl Importer for Arc<Provider> {
 		let transaction_bytes = self.decrypt(&contract, &encrypted_data)?;
 		let original_tx: UnverifiedTransaction = Rlp::new(&transaction_bytes).as_val()?;
 		let nonce_cache = NonceCache::new(NONCE_CACHE_SIZE);
-		let local_accounts = HashSet::new();
 		// Add to the queue for further verification
 		self.transactions_for_verification.add_transaction(
 			original_tx,
 			validation_account.map(|&account| account),
 			private_tx,
-			self.pool_client(&nonce_cache, &local_accounts),
+			self.pool_client(&nonce_cache),
 		)?;
 		let provider = Arc::downgrade(self);
 		let result = self.channel.send(ClientIoMessage::execute(move |_| {
@@ -770,6 +723,16 @@ impl Importer for Arc<Provider> {
 	}
 }
 
+/// Try to unlock account using stored password, return found password if any
+fn find_account_password(passwords: &Vec<Password>, account_provider: &AccountProvider, account: &Address) -> Option<Password> {
+	for password in passwords {
+		if let Ok(true) = account_provider.test_password(account, password) {
+			return Some(password.clone());
+		}
+	}
+	None
+}
+
 impl ChainNotify for Provider {
 	fn new_blocks(&self, new_blocks: NewBlocks) {
 		if new_blocks.imported.is_empty() || new_blocks.has_more_blocks_to_import { return }
@@ -777,6 +740,5 @@ impl ChainNotify for Provider {
 		if let Err(err) = self.process_verification_queue() {
 			warn!(target: "privatetx", "Cannot prune private transactions queue. error: {:?}", err);
 		}
-		self.keys_provider.update_acl_contract();
 	}
 }
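The `find_account_password` helper added in the lib.rs hunks above is a linear scan that returns the first stored password the account accepts. The same logic in isolation, with `AccountProvider::test_password` abstracted into a closure (the closure parameter is a stand-in for illustration, not parity's API):

```rust
type Password = String;

/// Return the first password accepted by `test_password`, cloning it out --
/// the same shape as the diff's find_account_password, minus AccountProvider.
fn find_account_password<F>(passwords: &[Password], test_password: F) -> Option<Password>
where
    F: Fn(&Password) -> bool,
{
    // Iterator::find yields Option<&Password>; cloned() turns it into
    // Option<Password>, matching the early-return loop in the original.
    passwords.iter().find(|&p| test_password(p)).cloned()
}

fn main() {
    let stored = vec!["wrong".to_string(), "s3cret".to_string()];
    assert_eq!(
        find_account_password(&stored, |p| p.as_str() == "s3cret").as_deref(),
        Some("s3cret")
    );
    assert!(find_account_password(&stored, |_| false).is_none());
    println!("ok");
}
```

Trying every configured password against every account is O(passwords) per signing call; with the short password lists the config carries, that cost is negligible next to the signature itself.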
@@ -29,11 +29,12 @@ extern crate rustc_hex;
 extern crate log;
 
 use std::sync::Arc;
-use rustc_hex::{FromHex, ToHex};
+use rustc_hex::FromHex;
 
 use types::ids::BlockId;
 use types::transaction::{Transaction, Action};
 use ethcore::CreateContractAddress;
+use ethcore::account_provider::AccountProvider;
 use ethcore::client::BlockChainClient;
 use ethcore::executive::{contract_address};
 use ethcore::miner::Miner;
@@ -41,7 +42,7 @@ use ethcore::test_helpers::{generate_dummy_client, push_block_with_transactions}
 use ethkey::{Secret, KeyPair, Signature};
 use hash::keccak;
 
-use ethcore_private_tx::{NoopEncryptor, Provider, ProviderConfig, StoringKeyProvider};
+use ethcore_private_tx::{NoopEncryptor, Provider, ProviderConfig};
 
 #[test]
 fn private_contract() {
@@ -53,25 +54,26 @@ fn private_contract() {
 	let _key2 = KeyPair::from_secret(Secret::from("0000000000000000000000000000000000000000000000000000000000000012")).unwrap();
 	let key3 = KeyPair::from_secret(Secret::from("0000000000000000000000000000000000000000000000000000000000000013")).unwrap();
 	let key4 = KeyPair::from_secret(Secret::from("0000000000000000000000000000000000000000000000000000000000000014")).unwrap();
-	let signer = Arc::new(ethcore_private_tx::KeyPairSigner(vec![key1.clone(), key3.clone(), key4.clone()]));
+	let ap = Arc::new(AccountProvider::transient_provider());
+	ap.insert_account(key1.secret().clone(), &"".into()).unwrap();
+	ap.insert_account(key3.secret().clone(), &"".into()).unwrap();
+	ap.insert_account(key4.secret().clone(), &"".into()).unwrap();
 
 	let config = ProviderConfig{
 		validator_accounts: vec![key3.address(), key4.address()],
 		signer_account: None,
+		passwords: vec!["".into()],
 	};
 
 	let io = ethcore_io::IoChannel::disconnected();
 	let miner = Arc::new(Miner::new_for_tests(&::ethcore::spec::Spec::new_test(), None));
-	let private_keys = Arc::new(StoringKeyProvider::default());
 	let pm = Arc::new(Provider::new(
 		client.clone(),
 		miner,
-		signer.clone(),
+		ap.clone(),
 		Box::new(NoopEncryptor::default()),
 		config,
 		io,
-		private_keys,
 	));
 
 	let (address, _) = contract_address(CreateContractAddress::FromSenderAndNonce, &key1.address(), &0.into(), &[]);
@@ -153,123 +155,3 @@ fn private_contract() {
|
|||||||
let result = pm.private_call(BlockId::Latest, &query_tx).unwrap();
|
let result = pm.private_call(BlockId::Latest, &query_tx).unwrap();
|
||||||
assert_eq!(result.output, "2a00000000000000000000000000000000000000000000000000000000000000".from_hex().unwrap());
|
assert_eq!(result.output, "2a00000000000000000000000000000000000000000000000000000000000000".from_hex().unwrap());
|
||||||
}
|
}
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn call_other_private_contract() {
|
|
||||||
// This test verifies calls private contract methods from another one
|
|
||||||
// Two contract will be deployed
|
|
||||||
// The same contract A:
|
|
||||||
// contract Test1 {
|
|
||||||
// bytes32 public x;
|
|
||||||
// function setX(bytes32 _x) {
|
|
||||||
// x = _x;
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// And the following contract B:
|
|
||||||
// contract Deployed {
|
|
||||||
// function setX(uint) {}
|
|
||||||
// function x() returns (uint) {}
|
|
||||||
//}
|
|
||||||
// contract Existing {
|
|
||||||
// Deployed dc;
|
|
||||||
// function Existing(address t) {
|
|
||||||
// dc = Deployed(t);
|
|
||||||
// }
|
|
||||||
// function getX() returns (uint) {
|
|
||||||
// return dc.x();
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
//ethcore_logger::init_log();
|
|
||||||
|
|
||||||
// Create client and provider
|
|
||||||
let client = generate_dummy_client(0);
|
|
||||||
let chain_id = client.signing_chain_id();
|
|
||||||
let key1 = KeyPair::from_secret(Secret::from("0000000000000000000000000000000000000000000000000000000000000011")).unwrap();
|
|
||||||
let _key2 = KeyPair::from_secret(Secret::from("0000000000000000000000000000000000000000000000000000000000000012")).unwrap();
|
|
||||||
let key3 = KeyPair::from_secret(Secret::from("0000000000000000000000000000000000000000000000000000000000000013")).unwrap();
|
|
||||||
let key4 = KeyPair::from_secret(Secret::from("0000000000000000000000000000000000000000000000000000000000000014")).unwrap();
|
|
||||||
let signer = Arc::new(ethcore_private_tx::KeyPairSigner(vec![key1.clone(), key3.clone(), key4.clone()]));
|
|
||||||
|
|
||||||
let config = ProviderConfig{
|
|
||||||
validator_accounts: vec![key3.address(), key4.address()],
|
|
||||||
signer_account: None,
|
|
||||||
};
|
|
||||||
|
|
||||||
let io = ethcore_io::IoChannel::disconnected();
|
|
||||||
let miner = Arc::new(Miner::new_for_tests(&::ethcore::spec::Spec::new_test(), None));
|
|
||||||
let private_keys = Arc::new(StoringKeyProvider::default());
|
|
||||||
let pm = Arc::new(Provider::new(
|
|
||||||
client.clone(),
|
|
||||||
miner,
|
|
||||||
signer.clone(),
|
|
||||||
Box::new(NoopEncryptor::default()),
|
|
||||||
config,
|
|
||||||
io,
|
|
||||||
private_keys.clone(),
|
|
||||||
));
|
|
||||||
|
|
||||||
// Deploy contract A
|
|
||||||
let (address_a, _) = contract_address(CreateContractAddress::FromSenderAndNonce, &key1.address(), &0.into(), &[]);
|
|
||||||
trace!("Creating private contract A");
|
|
||||||
let private_contract_a_test = "6060604052341561000f57600080fd5b60d88061001d6000396000f30060606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680630c55699c146046578063bc64b76d14607457600080fd5b3415605057600080fd5b60566098565b60405180826000191660001916815260200191505060405180910390f35b3415607e57600080fd5b6096600480803560001916906020019091905050609e565b005b60005481565b8060008160001916905550505600a165627a7a723058206acbdf4b15ca4c2d43e1b1879b830451a34f1e9d02ff1f2f394d8d857e79d2080029".from_hex().unwrap();
|
|
||||||
let mut private_create_tx1 = Transaction::default();
|
|
||||||
private_create_tx1.action = Action::Create;
|
|
||||||
private_create_tx1.data = private_contract_a_test;
|
|
||||||
private_create_tx1.gas = 200000.into();
|
|
||||||
private_create_tx1.nonce = 0.into();
|
|
||||||
let private_create_tx_signed = private_create_tx1.sign(&key1.secret(), None);
|
|
||||||
let validators = vec![key3.address(), key4.address()];
|
|
||||||
let (public_tx1, _) = pm.public_creation_transaction(BlockId::Latest, &private_create_tx_signed, &validators, 0.into()).unwrap();
|
|
||||||
let public_tx1 = public_tx1.sign(&key1.secret(), chain_id);
|
|
||||||
trace!("Transaction created. Pushing block");
|
|
||||||
push_block_with_transactions(&client, &[public_tx1]);
|
|
||||||
|
|
||||||
// Deploy contract B
|
|
||||||
let (address_b, _) = contract_address(CreateContractAddress::FromSenderAndNonce, &key1.address(), &1.into(), &[]);
|
|
||||||
trace!("Creating private contract B");
|
|
||||||
// Build constructor data
|
|
||||||
let mut deploy_data = "6060604052341561000f57600080fd5b6040516020806101c583398101604052808051906020019091905050806000806101000a81548173ffffffffffffffffffffffffffffffffffffffff021916908373ffffffffffffffffffffffffffffffffffffffff1602179055505061014a8061007b6000396000f300606060405260043610610041576000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680635197c7aa14610046575b600080fd5b341561005157600080fd5b61005961006f565b6040518082815260200191505060405180910390f35b60008060009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16630c55699c6000604051602001526040518163ffffffff167c0100000000000000000000000000000000000000000000000000000000028152600401602060405180830381600087803b15156100fe57600080fd5b6102c65a03f1151561010f57600080fd5b505050604051805190509050905600a165627a7a723058207f8994e02725b47d76ec73e5c54a338d27b306dd1c830276bff2d75fcd1a5c920029000000000000000000000000".to_string();
|
|
||||||
deploy_data.push_str(&address_a.to_vec().to_hex());
|
|
||||||
let private_contract_b_test = deploy_data.from_hex().unwrap();
|
|
||||||
let mut private_create_tx2 = Transaction::default();
|
|
||||||
private_create_tx2.action = Action::Create;
|
|
||||||
private_create_tx2.data = private_contract_b_test;
|
|
||||||
private_create_tx2.gas = 200000.into();
|
|
||||||
private_create_tx2.nonce = 1.into();
|
|
||||||
let private_create_tx_signed = private_create_tx2.sign(&key1.secret(), None);
|
|
||||||
let (public_tx2, _) = pm.public_creation_transaction(BlockId::Latest, &private_create_tx_signed, &validators, 0.into()).unwrap();
|
|
||||||
let public_tx2 = public_tx2.sign(&key1.secret(), chain_id);
|
|
||||||
trace!("Transaction created. Pushing block");
|
|
||||||
push_block_with_transactions(&client, &[public_tx2]);
|
|
||||||
|
|
||||||
// Let provider know, that it has access to both keys for A and B
|
|
||||||
private_keys.set_available_keys(&vec![address_a, address_b]);
|
|
||||||
|
|
||||||
// Call A.setx(42)
|
|
||||||
trace!("Modifying private state");
|
|
||||||
let mut private_tx = Transaction::default();
|
|
||||||
private_tx.action = Action::Call(address_a.clone());
|
|
||||||
private_tx.data = "bc64b76d2a00000000000000000000000000000000000000000000000000000000000000".from_hex().unwrap(); //setX(42)
|
|
||||||
private_tx.gas = 120000.into();
|
|
||||||
private_tx.nonce = 2.into();
|
|
||||||
let private_tx = private_tx.sign(&key1.secret(), None);
|
|
||||||
let private_contract_nonce = pm.get_contract_nonce(&address_b, BlockId::Latest).unwrap();
|
|
||||||
let private_state = pm.execute_private_transaction(BlockId::Latest, &private_tx).unwrap();
|
|
||||||
let nonced_state_hash = pm.calculate_state_hash(&private_state, private_contract_nonce);
|
|
||||||
let signatures: Vec<_> = [&key3, &key4].iter().map(|k|
|
|
||||||
Signature::from(::ethkey::sign(&k.secret(), &nonced_state_hash).unwrap().into_electrum())).collect();
|
|
||||||
let public_tx = pm.public_transaction(private_state, &private_tx, &signatures, 2.into(), 0.into()).unwrap();
|
|
||||||
let public_tx = public_tx.sign(&key1.secret(), chain_id);
|
|
||||||
push_block_with_transactions(&client, &[public_tx]);
|
|
||||||
|
|
||||||
// Call B.getX()
|
|
||||||
trace!("Querying private state");
|
|
||||||
let mut query_tx = Transaction::default();
|
|
||||||
query_tx.action = Action::Call(address_b.clone());
|
|
||||||
query_tx.data = "5197c7aa".from_hex().unwrap(); // getX
|
|
||||||
query_tx.gas = 50000.into();
|
|
||||||
query_tx.nonce = 3.into();
|
|
||||||
let query_tx = query_tx.sign(&key1.secret(), chain_id);
|
|
||||||
let result = pm.private_call(BlockId::Latest, &query_tx).unwrap();
|
|
||||||
assert_eq!(&result.output[..], &("2a00000000000000000000000000000000000000000000000000000000000000".from_hex().unwrap()[..]));
|
|
||||||
}
|
|
||||||
|
|||||||
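The private-transaction tests above follow one flow: execute the transaction privately, hash the resulting state together with the contract nonce (`calculate_state_hash`), collect validator signatures over that nonced hash, and only then publish a public transaction. A minimal std-only sketch of the nonced-state-hash idea, with a toy hash and toy "signatures" standing in for keccak and ECDSA (all names here are hypothetical, not Parity's API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for `calculate_state_hash`: combines the private state bytes
// with the contract nonce so validators endorse a replay-protected value.
fn nonced_state_hash(state: &[u8], nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    state.hash(&mut h);
    nonce.hash(&mut h);
    h.finish()
}

// Toy "signature": a validator id paired with the hash it endorsed.
fn sign(validator: u64, hash: u64) -> (u64, u64) {
    (validator, hash)
}

fn all_validators_signed(hash: u64, sigs: &[(u64, u64)], validators: &[u64]) -> bool {
    validators.iter().all(|v| sigs.iter().any(|&(s, h)| s == *v && h == hash))
}

fn main() {
    let state = b"private contract state after setX(42)";
    let hash = nonced_state_hash(state, 2);
    let validators = [3u64, 4u64];
    let sigs: Vec<_> = validators.iter().map(|&v| sign(v, hash)).collect();
    assert!(all_validators_signed(hash, &sigs, &validators));
    // A different nonce yields a different hash, so stale signatures fail.
    assert!(!all_validators_signed(nonced_state_hash(state, 3), &sigs, &validators));
    println!("ok");
}
```

Including the nonce in the signed hash is what stops a set of validator signatures from being replayed against a later private state.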
```diff
@@ -32,8 +32,9 @@ use ethcore::miner::Miner;
 use ethcore::snapshot::service::{Service as SnapshotService, ServiceParams as SnapServiceParams};
 use ethcore::snapshot::{SnapshotService as _SnapshotService, RestorationStatus};
 use ethcore::spec::Spec;
+use ethcore::account_provider::AccountProvider;

-use ethcore_private_tx::{self, Importer, Signer};
+use ethcore_private_tx::{self, Importer};
 use Error;

 pub struct PrivateTxService {
@@ -95,10 +96,9 @@ impl ClientService {
 		restoration_db_handler: Box<BlockChainDBHandler>,
 		_ipc_path: &Path,
 		miner: Arc<Miner>,
-		signer: Arc<Signer>,
+		account_provider: Arc<AccountProvider>,
 		encryptor: Box<ethcore_private_tx::Encryptor>,
 		private_tx_conf: ethcore_private_tx::ProviderConfig,
-		private_encryptor_conf: ethcore_private_tx::EncryptorConfig,
 	) -> Result<ClientService, Error>
 	{
 		let io_service = IoService::<ClientIoMessage>::start()?;
@@ -127,18 +127,13 @@ impl ClientService {
 		};
 		let snapshot = Arc::new(SnapshotService::new(snapshot_params)?);

-		let private_keys = Arc::new(ethcore_private_tx::SecretStoreKeys::new(
-			client.clone(),
-			private_encryptor_conf.key_server_account,
-		));
 		let provider = Arc::new(ethcore_private_tx::Provider::new(
 			client.clone(),
 			miner,
-			signer,
+			account_provider,
 			encryptor,
 			private_tx_conf,
 			io_service.channel(),
-			private_keys,
 		));
 		let private_tx = Arc::new(PrivateTxService::new(provider));

@@ -281,6 +276,7 @@ mod tests {
 	use tempdir::TempDir;

 	use ethcore_db::NUM_COLUMNS;
+	use ethcore::account_provider::AccountProvider;
 	use ethcore::client::ClientConfig;
 	use ethcore::miner::Miner;
 	use ethcore::spec::Spec;
@@ -315,10 +311,9 @@ mod tests {
 			restoration_db_handler,
 			tempdir.path(),
 			Arc::new(Miner::new_for_tests(&spec, None)),
-			Arc::new(ethcore_private_tx::DummySigner),
+			Arc::new(AccountProvider::transient_provider()),
 			Box::new(ethcore_private_tx::NoopEncryptor),
 			Default::default(),
-			Default::default(),
 		);
 		assert!(service.is_ok());
 		drop(service.unwrap());
```
```diff
@@ -17,10 +17,11 @@
 //! DB backend wrapper for Account trie
 use ethereum_types::H256;
 use hash::{KECCAK_NULL_RLP, keccak};
-use hash_db::{HashDB, AsHashDB};
+use hashdb::{HashDB, AsHashDB};
 use keccak_hasher::KeccakHasher;
 use kvdb::DBValue;
 use rlp::NULL_RLP;
+use std::collections::HashMap;

 #[cfg(test)]
 use ethereum_types::Address;
@@ -98,11 +99,15 @@ impl<'db> AccountDB<'db> {
 }

 impl<'db> AsHashDB<KeccakHasher, DBValue> for AccountDB<'db> {
-	fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue> { self }
-	fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
+	fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
+	fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
 }

 impl<'db> HashDB<KeccakHasher, DBValue> for AccountDB<'db> {
+	fn keys(&self) -> HashMap<H256, i32> {
+		unimplemented!()
+	}
+
 	fn get(&self, key: &H256) -> Option<DBValue> {
 		if key == &KECCAK_NULL_RLP {
 			return Some(DBValue::from_slice(&NULL_RLP));
@@ -158,6 +163,10 @@ impl<'db> AccountDBMut<'db> {
 }

 impl<'db> HashDB<KeccakHasher, DBValue> for AccountDBMut<'db>{
+	fn keys(&self) -> HashMap<H256, i32> {
+		unimplemented!()
+	}
+
 	fn get(&self, key: &H256) -> Option<DBValue> {
 		if key == &KECCAK_NULL_RLP {
 			return Some(DBValue::from_slice(&NULL_RLP));
@@ -200,18 +209,22 @@ impl<'db> HashDB<KeccakHasher, DBValue> for AccountDBMut<'db>{
 }

 impl<'db> AsHashDB<KeccakHasher, DBValue> for AccountDBMut<'db> {
-	fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue> { self }
-	fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
+	fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
+	fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
 }

 struct Wrapping<'db>(&'db HashDB<KeccakHasher, DBValue>);

 impl<'db> AsHashDB<KeccakHasher, DBValue> for Wrapping<'db> {
-	fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue> { self }
-	fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
+	fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
+	fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
 }

 impl<'db> HashDB<KeccakHasher, DBValue> for Wrapping<'db> {
+	fn keys(&self) -> HashMap<H256, i32> {
+		unimplemented!()
+	}
+
 	fn get(&self, key: &H256) -> Option<DBValue> {
 		if key == &KECCAK_NULL_RLP {
 			return Some(DBValue::from_slice(&NULL_RLP));
@@ -241,11 +254,15 @@ impl<'db> HashDB<KeccakHasher, DBValue> for Wrapping<'db> {

 struct WrappingMut<'db>(&'db mut HashDB<KeccakHasher, DBValue>);
 impl<'db> AsHashDB<KeccakHasher, DBValue> for WrappingMut<'db> {
-	fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue> { self }
-	fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
+	fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
+	fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
 }

 impl<'db> HashDB<KeccakHasher, DBValue> for WrappingMut<'db>{
+	fn keys(&self) -> HashMap<H256, i32> {
+		unimplemented!()
+	}
+
 	fn get(&self, key: &H256) -> Option<DBValue> {
 		if key == &KECCAK_NULL_RLP {
 			return Some(DBValue::from_slice(&NULL_RLP));
```
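Both sides of these hunks use the same upcast pattern: a wrapper type implements `HashDB`, and its `AsHashDB` impl simply returns `self` as a trait object, so the concrete wrapper can be handed to anything expecting a `&HashDB` (the `as_hashdb` / `as_hash_db` rename tracks the `hashdb` → `hash-db` crate transition). A minimal std-only sketch of the pattern, with a simplified trait and a toy account-prefixed `get` (the `KvDb`/`AccountView` names are illustrative, not Parity's):

```rust
use std::collections::HashMap;

trait KvDb {
    fn get(&self, key: &str) -> Option<String>;
}

// Upcast helper mirroring `AsHashDB`: implementors hand themselves out
// as a `KvDb` trait object.
trait AsKvDb {
    fn as_kv_db(&self) -> &dyn KvDb;
}

// Wrapper mirroring `AccountDB`: scopes every lookup under an account id.
struct AccountView<'db> {
    prefix: String,
    inner: &'db HashMap<String, String>,
}

impl<'db> KvDb for AccountView<'db> {
    fn get(&self, key: &str) -> Option<String> {
        self.inner.get(&format!("{}:{}", self.prefix, key)).cloned()
    }
}

impl<'db> AsKvDb for AccountView<'db> {
    fn as_kv_db(&self) -> &dyn KvDb { self }
}

fn main() {
    let mut backing = HashMap::new();
    backing.insert("acct1:x".to_string(), "42".to_string());
    let view = AccountView { prefix: "acct1".to_string(), inner: &backing };
    // The `self`-returning upcast lets callers work with the trait object.
    let db: &dyn KvDb = view.as_kv_db();
    assert_eq!(db.get("x").as_deref(), Some("42"));
    assert_eq!(db.get("y"), None);
    println!("ok");
}
```

The real `AccountDB` scopes keys by hashing them with the account's address hash rather than string prefixing, but the wrapper-plus-upcast structure is the same.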
```diff
@@ -14,55 +14,94 @@
 // You should have received a copy of the GNU General Public License
 // along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.

-#![warn(missing_docs)]
-
 //! Account management.

-mod account_data;
-mod error;
 mod stores;

-#[cfg(not(any(target_os = "linux", target_os = "macos", target_os = "windows")))]
-extern crate fake_hardware_wallet as hardware_wallet;
-
-use self::account_data::{Unlock, AccountData};
 use self::stores::AddressBook;

 use std::collections::HashMap;
+use std::fmt;
 use std::time::{Instant, Duration};

-use common_types::transaction::{Action, Transaction};
-use ethkey::{Address, Message, Public, Secret, Password, Random, Generator};
 use ethstore::accounts_dir::MemoryDirectory;
+use ethstore::ethkey::{Address, Message, Public, Secret, Password, Random, Generator};
+use ethjson::misc::AccountMeta;
 use ethstore::{
-	SimpleSecretStore, SecretStore, EthStore, EthMultiStore,
+	SimpleSecretStore, SecretStore, Error as SSError, EthStore, EthMultiStore,
 	random_string, SecretVaultRef, StoreAccountRef, OpaqueSecret,
 };
-use log::{warn, debug};
 use parking_lot::RwLock;
+use types::transaction::{Action, Transaction};

-pub use ethkey::Signature;
-pub use ethstore::{Derivation, IndexDerivation, KeyFile, Error};
+pub use ethstore::ethkey::Signature;
+pub use ethstore::{Derivation, IndexDerivation, KeyFile};
 pub use hardware_wallet::{Error as HardwareError, HardwareWalletManager, KeyPath, TransactionInfo};

-pub use self::account_data::AccountMeta;
-pub use self::error::SignError;
+/// Type of unlock.
+#[derive(Clone, PartialEq)]
+enum Unlock {
+	/// If account is unlocked temporarily, it should be locked after first usage.
+	OneTime,
+	/// Account unlocked permanently can always sign message.
+	/// Use with caution.
+	Perm,
+	/// Account unlocked with a timeout
+	Timed(Instant),
+}
+
+/// Data associated with account.
+#[derive(Clone)]
+struct AccountData {
+	unlock: Unlock,
+	password: Password,
+}
+
+/// Signing error
+#[derive(Debug)]
+pub enum SignError {
+	/// Account is not unlocked
+	NotUnlocked,
+	/// Account does not exist.
+	NotFound,
+	/// Low-level hardware device error.
+	Hardware(HardwareError),
+	/// Low-level error from store
+	SStore(SSError),
+}
+
+impl fmt::Display for SignError {
+	fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
+		match *self {
+			SignError::NotUnlocked => write!(f, "Account is locked"),
+			SignError::NotFound => write!(f, "Account does not exist"),
+			SignError::Hardware(ref e) => write!(f, "{}", e),
+			SignError::SStore(ref e) => write!(f, "{}", e),
+		}
+	}
+}
+
+impl From<HardwareError> for SignError {
+	fn from(e: HardwareError) -> Self {
+		SignError::Hardware(e)
+	}
+}
+
+impl From<SSError> for SignError {
+	fn from(e: SSError) -> Self {
+		SignError::SStore(e)
+	}
+}
+
+/// `AccountProvider` errors.
+pub type Error = SSError;
+
+fn transient_sstore() -> EthMultiStore {
+	EthMultiStore::open(Box::new(MemoryDirectory::default())).expect("MemoryDirectory load always succeeds; qed")
+}

 type AccountToken = Password;

-/// Account management settings.
-#[derive(Debug, Default)]
-pub struct AccountProviderSettings {
-	/// Enable hardware wallet support.
-	pub enable_hardware_wallets: bool,
-	/// Use the classic chain key on the hardware wallet.
-	pub hardware_wallet_classic_key: bool,
-	/// Store raw account secret when unlocking the account permanently.
-	pub unlock_keep_secret: bool,
-	/// Disallowed accounts.
-	pub blacklisted_accounts: Vec<Address>,
-}
-
 /// Account management.
 /// Responsible for unlocking accounts.
 pub struct AccountProvider {
```
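The `Unlock` enum introduced on the added side distinguishes one-time, permanent, and timed unlocks; a timed unlock carries the `Instant` after which the account locks again. A std-only sketch of how an expiry check over that enum might look (the `is_unlocked` helper is hypothetical, not Parity's actual code):

```rust
use std::time::{Duration, Instant};

#[derive(Clone, PartialEq)]
enum Unlock {
    /// Locked again after the first use.
    OneTime,
    /// Stays unlocked; use with caution.
    Perm,
    /// Unlocked until the stored deadline passes.
    Timed(Instant),
}

// Hypothetical helper: is the account currently usable for signing?
fn is_unlocked(unlock: &Unlock, now: Instant) -> bool {
    match unlock {
        Unlock::OneTime | Unlock::Perm => true,
        Unlock::Timed(deadline) => now < *deadline,
    }
}

fn main() {
    let now = Instant::now();
    let timed = Unlock::Timed(now + Duration::from_secs(60));
    assert!(is_unlocked(&timed, now));
    assert!(!is_unlocked(&timed, now + Duration::from_secs(120)));
    assert!(is_unlocked(&Unlock::Perm, now + Duration::from_secs(3600)));
    println!("ok");
}
```

Storing the deadline as an absolute `Instant` (rather than a remaining `Duration`) keeps the check a single comparison against the current time.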
```diff
@@ -85,8 +124,27 @@ pub struct AccountProvider {
 	blacklisted_accounts: Vec<Address>,
 }

-fn transient_sstore() -> EthMultiStore {
-	EthMultiStore::open(Box::new(MemoryDirectory::default())).expect("MemoryDirectory load always succeeds; qed")
+/// Account management settings.
+pub struct AccountProviderSettings {
+	/// Enable hardware wallet support.
+	pub enable_hardware_wallets: bool,
+	/// Use the classic chain key on the hardware wallet.
+	pub hardware_wallet_classic_key: bool,
+	/// Store raw account secret when unlocking the account permanently.
+	pub unlock_keep_secret: bool,
+	/// Disallowed accounts.
+	pub blacklisted_accounts: Vec<Address>,
+}
+
+impl Default for AccountProviderSettings {
+	fn default() -> Self {
+		AccountProviderSettings {
+			enable_hardware_wallets: false,
+			hardware_wallet_classic_key: false,
+			unlock_keep_secret: false,
+			blacklisted_accounts: vec![],
+		}
+	}
 }

 impl AccountProvider {
@@ -163,7 +221,7 @@ impl AccountProvider {
 		let account = self.sstore.insert_account(SecretVaultRef::Root, secret, password)?;
 		if self.blacklisted_accounts.contains(&account.address) {
 			self.sstore.remove_account(&account, password)?;
-			return Err(Error::InvalidAccount.into());
+			return Err(SSError::InvalidAccount.into());
 		}
 		Ok(account.address)
 	}
@@ -193,7 +251,7 @@ impl AccountProvider {
 		let account = self.sstore.import_wallet(SecretVaultRef::Root, json, password, gen_id)?;
 		if self.blacklisted_accounts.contains(&account.address) {
 			self.sstore.remove_account(&account, password)?;
-			return Err(Error::InvalidAccount.into());
+			return Err(SSError::InvalidAccount.into());
 		}
 		Ok(Address::from(account.address).into())
 	}
@@ -226,7 +284,7 @@ impl AccountProvider {
 				return Ok(accounts.into_iter().map(|a| a.address).collect());
 			}
 		}
-		Err(Error::Custom("No hardware wallet accounts were found".into()))
+		Err(SSError::Custom("No hardware wallet accounts were found".into()))
 	}

 	/// Get a list of paths to locked hardware wallets
@@ -611,7 +669,7 @@ impl AccountProvider {
 mod tests {
 	use super::{AccountProvider, Unlock};
 	use std::time::{Duration, Instant};
-	use ethkey::{Generator, Random, Address};
+	use ethstore::ethkey::{Generator, Random, Address};
 	use ethstore::{StoreAccountRef, Derivation};
 	use ethereum_types::H256;
```
```diff
@@ -20,10 +20,8 @@ use std::{fs, fmt, hash, ops};
 use std::collections::HashMap;
 use std::path::{Path, PathBuf};

-use ethkey::Address;
-use log::{trace, warn};
-
-use crate::AccountMeta;
+use ethstore::ethkey::Address;
+use ethjson::misc::AccountMeta;

 /// Disk-backed map from Address to String. Uses JSON.
 pub struct AddressBook {
@@ -155,8 +153,8 @@ impl<K: hash::Hash + Eq, V> DiskMap<K, V> {
 mod tests {
 	use super::AddressBook;
 	use std::collections::HashMap;
+	use ethjson::misc::AccountMeta;
 	use tempdir::TempDir;
-	use crate::account_data::AccountMeta;

 	#[test]
 	fn should_save_and_reload_address_book() {
@@ -165,9 +163,7 @@ mod tests {
 		b.set_name(1.into(), "One".to_owned());
 		b.set_meta(1.into(), "{1:1}".to_owned());
 		let b = AddressBook::new(tempdir.path());
-		assert_eq!(b.get(), vec![
-			(1, AccountMeta {name: "One".to_owned(), meta: "{1:1}".to_owned(), uuid: None})
-		].into_iter().map(|(a, b)| (a.into(), b)).collect::<HashMap<_, _>>());
+		assert_eq!(b.get(), hash_map![1.into() => AccountMeta{name: "One".to_owned(), meta: "{1:1}".to_owned(), uuid: None}]);
 	}

 	#[test]
@@ -181,9 +177,9 @@ mod tests {
 		b.remove(2.into());

 		let b = AddressBook::new(tempdir.path());
-		assert_eq!(b.get(), vec![
-			(1, AccountMeta{name: "One".to_owned(), meta: "{}".to_owned(), uuid: None}),
-			(3, AccountMeta{name: "Three".to_owned(), meta: "{}".to_owned(), uuid: None}),
-		].into_iter().map(|(a, b)| (a.into(), b)).collect::<HashMap<_, _>>());
+		assert_eq!(b.get(), hash_map![
+			1.into() => AccountMeta{name: "One".to_owned(), meta: "{}".to_owned(), uuid: None},
+			3.into() => AccountMeta{name: "Three".to_owned(), meta: "{}".to_owned(), uuid: None}
+		]);
 	}
 }
```
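`AddressBook` is a disk-backed map: every mutation rewrites a file on disk, and constructing a fresh instance over the same path reloads the persisted entries, which is exactly the round trip these tests verify. A std-only sketch of that save-and-reload behavior, using plain `key=value` lines instead of the real JSON serialization (the `DiskMap` here is a toy, not Parity's generic one):

```rust
use std::collections::HashMap;
use std::fs;
use std::path::{Path, PathBuf};

struct DiskMap {
    path: PathBuf,
    cache: HashMap<String, String>,
}

impl DiskMap {
    fn new(path: &Path) -> Self {
        // Reload previous contents if the file exists.
        let cache = fs::read_to_string(path)
            .map(|s| {
                s.lines()
                    .filter_map(|l| l.split_once('='))
                    .map(|(k, v)| (k.to_string(), v.to_string()))
                    .collect()
            })
            .unwrap_or_default();
        DiskMap { path: path.to_path_buf(), cache }
    }

    fn set(&mut self, key: &str, value: &str) {
        self.cache.insert(key.to_string(), value.to_string());
        self.save();
    }

    fn remove(&mut self, key: &str) {
        self.cache.remove(key);
        self.save();
    }

    // Every mutation rewrites the whole file, mirroring `DiskMap::save`.
    fn save(&self) {
        let body: String = self.cache.iter().map(|(k, v)| format!("{}={}\n", k, v)).collect();
        fs::write(&self.path, body).expect("write address book");
    }
}

fn main() {
    let path = std::env::temp_dir().join("address_book_demo.txt");
    let _ = fs::remove_file(&path);
    {
        let mut book = DiskMap::new(&path);
        book.set("0x01", "One");
        book.set("0x02", "Two");
        book.remove("0x02");
    }
    // A fresh instance over the same path sees the persisted state.
    let book = DiskMap::new(&path);
    assert_eq!(book.cache.get("0x01").map(String::as_str), Some("One"));
    assert!(book.cache.get("0x02").is_none());
    let _ = fs::remove_file(&path);
    println!("ok");
}
```

Rewriting the whole file on each mutation is simple and crash-safe enough for a small address book, at the cost of O(n) writes.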
```diff
@@ -21,11 +21,10 @@ use std::sync::atomic::{AtomicUsize, AtomicBool, Ordering as AtomicOrdering};
 use std::sync::{Arc, Weak};
 use std::time::{Instant, Duration};

-use blockchain::{BlockReceipts, BlockChain, BlockChainDB, BlockProvider, TreeRoute, ImportRoute, TransactionAddress, ExtrasInsert, BlockNumberKey};
+use blockchain::{BlockReceipts, BlockChain, BlockChainDB, BlockProvider, TreeRoute, ImportRoute, TransactionAddress, ExtrasInsert};
 use bytes::Bytes;
 use call_contract::{CallContract, RegistryInfo};
 use ethcore_miner::pool::VerifiedTransaction;
-use ethcore_miner::service_transaction_checker::ServiceTransactionChecker;
 use ethereum_types::{H256, Address, U256};
 use evm::Schedule;
 use hash::keccak;
@@ -52,7 +51,7 @@ use client::{
 	ReopenBlock, PrepareOpenBlock, ScheduleInfo, ImportSealedBlock,
 	BroadcastProposalBlock, ImportBlock, StateOrBlock, StateInfo, StateClient, Call,
 	AccountData, BlockChain as BlockChainTrait, BlockProducer, SealedBlockImporter,
-	ClientIoMessage, BlockChainReset
+	ClientIoMessage,
 };
 use client::{
 	BlockId, TransactionId, UncleId, TraceId, ClientConfig, BlockChainClient,
@@ -80,14 +79,12 @@ use verification::queue::kind::BlockLike;
 use verification::queue::kind::blocks::Unverified;
 use verification::{PreverifiedBlock, Verifier, BlockQueue};
 use verification;
-use ansi_term::Colour;

 // re-export
 pub use types::blockchain_info::BlockChainInfo;
 pub use types::block_status::BlockStatus;
 pub use blockchain::CacheSize as BlockChainCacheSize;
 pub use verification::QueueInfo as BlockQueueInfo;
-use db::Writable;

 use_contract!(registry, "res/contracts/registrar.json");

@@ -624,7 +621,7 @@ impl Importer {

 		let call = move |addr, data| {
 			let mut state_db = state_db.boxed_clone();
-			let backend = ::state::backend::Proving::new(state_db.as_hash_db_mut());
+			let backend = ::state::backend::Proving::new(state_db.as_hashdb_mut());

 			let transaction =
 				client.contract_call_tx(BlockId::Hash(*header.parent_hash()), addr, data);
@@ -1176,7 +1173,7 @@ impl Client {
 		};

 		let processing_threads = self.config.snapshot.processing_threads;
-		snapshot::take_snapshot(&*self.engine, &self.chain.read(), start_hash, db.as_hash_db(), writer, p, processing_threads)?;
+		snapshot::take_snapshot(&*self.engine, &self.chain.read(), start_hash, db.as_hashdb(), writer, p, processing_threads)?;

 		Ok(())
 	}
@@ -1331,48 +1328,6 @@ impl snapshot::DatabaseRestore for Client {
 	}
 }

-impl BlockChainReset for Client {
-	fn reset(&self, num: u32) -> Result<(), String> {
-		if num as u64 > self.pruning_history() {
-			return Err("Attempting to reset to block with pruned state".into())
-		}
-
-		let (blocks_to_delete, best_block_hash) = self.chain.read()
-			.block_headers_from_best_block(num)
-			.ok_or("Attempted to reset past genesis block")?;
-
-		let mut db_transaction = DBTransaction::with_capacity((num + 1) as usize);
-
-		for hash in &blocks_to_delete {
-			db_transaction.delete(::db::COL_HEADERS, &hash.hash());
-			db_transaction.delete(::db::COL_BODIES, &hash.hash());
-			db_transaction.delete(::db::COL_EXTRA, &hash.hash());
-			Writable::delete::<H256, BlockNumberKey>
```
|
|
||||||
(&mut db_transaction, ::db::COL_EXTRA, &hash.number());
|
|
||||||
}
|
|
||||||
|
|
||||||
// update the new best block hash
|
|
||||||
db_transaction.put(::db::COL_EXTRA, b"best", &*best_block_hash);
|
|
||||||
|
|
||||||
self.db.read()
|
|
||||||
.key_value()
|
|
||||||
.write(db_transaction)
|
|
||||||
.map_err(|err| format!("could not complete reset operation; io error occured: {}", err))?;
|
|
||||||
|
|
||||||
let hashes = blocks_to_delete.iter().map(|b| b.hash()).collect::<Vec<_>>();
|
|
||||||
|
|
||||||
info!("Deleting block hashes {}",
|
|
||||||
Colour::Red
|
|
||||||
.bold()
|
|
||||||
.paint(format!("{:#?}", hashes))
|
|
||||||
);
|
|
||||||
|
|
||||||
info!("New best block hash {}", Colour::Green.bold().paint(format!("{:?}", best_block_hash)));
|
|
||||||
|
|
||||||
Ok(())
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
impl Nonce for Client {
|
impl Nonce for Client {
|
||||||
fn nonce(&self, address: &Address, id: BlockId) -> Option<U256> {
|
fn nonce(&self, address: &Address, id: BlockId) -> Option<U256> {
|
||||||
self.state_at(id).and_then(|s| s.nonce(address).ok())
|
self.state_at(id).and_then(|s| s.nonce(address).ok())
|
||||||
@@ -1706,17 +1661,15 @@ impl BlockChainClient for Client {
|
|||||||
self.config.spec_name.clone()
|
self.config.spec_name.clone()
|
||||||
}
|
}
|
||||||
|
|
||||||
fn set_spec_name(&self, new_spec_name: String) -> Result<(), ()> {
|
fn set_spec_name(&self, new_spec_name: String) {
|
||||||
trace!(target: "mode", "Client::set_spec_name({:?})", new_spec_name);
|
trace!(target: "mode", "Client::set_spec_name({:?})", new_spec_name);
|
||||||
if !self.enabled.load(AtomicOrdering::Relaxed) {
|
if !self.enabled.load(AtomicOrdering::Relaxed) {
|
||||||
return Err(());
|
return;
|
||||||
}
|
}
|
||||||
if let Some(ref h) = *self.exit_handler.lock() {
|
if let Some(ref h) = *self.exit_handler.lock() {
|
||||||
(*h)(new_spec_name);
|
(*h)(new_spec_name);
|
||||||
Ok(())
|
|
||||||
} else {
|
} else {
|
||||||
warn!("Not hypervised; cannot change chain.");
|
warn!("Not hypervised; cannot change chain.");
|
||||||
Err(())
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -1783,8 +1736,7 @@ impl BlockChainClient for Client {
|
|||||||
};
|
};
|
||||||
|
|
||||||
let (root, db) = state.drop();
|
let (root, db) = state.drop();
|
||||||
let db = &db.as_hash_db();
|
let trie = match self.factories.trie.readonly(db.as_hashdb(), &root) {
|
||||||
let trie = match self.factories.trie.readonly(db, &root) {
|
|
||||||
Ok(trie) => trie,
|
Ok(trie) => trie,
|
||||||
_ => {
|
_ => {
|
||||||
trace!(target: "fatdb", "list_accounts: Couldn't open the DB");
|
trace!(target: "fatdb", "list_accounts: Couldn't open the DB");
|
||||||
@@ -1830,9 +1782,8 @@ impl BlockChainClient for Client {
|
|||||||
};
|
};
|
||||||
|
|
||||||
let (_, db) = state.drop();
|
let (_, db) = state.drop();
|
||||||
let account_db = &self.factories.accountdb.readonly(db.as_hash_db(), keccak(account));
|
let account_db = self.factories.accountdb.readonly(db.as_hashdb(), keccak(account));
|
||||||
let account_db = &account_db.as_hash_db();
|
let trie = match self.factories.trie.readonly(account_db.as_hashdb(), &root) {
|
||||||
let trie = match self.factories.trie.readonly(account_db, &root) {
|
|
||||||
Ok(trie) => trie,
|
Ok(trie) => trie,
|
||||||
_ => {
|
_ => {
|
||||||
trace!(target: "fatdb", "list_storage: Couldn't open the DB");
|
trace!(target: "fatdb", "list_storage: Couldn't open the DB");
|
||||||
@@ -2159,16 +2110,11 @@ impl BlockChainClient for Client {
|
|||||||
|
|
||||||
fn transact_contract(&self, address: Address, data: Bytes) -> Result<(), transaction::Error> {
|
fn transact_contract(&self, address: Address, data: Bytes) -> Result<(), transaction::Error> {
|
||||||
let authoring_params = self.importer.miner.authoring_params();
|
let authoring_params = self.importer.miner.authoring_params();
|
||||||
let service_transaction_checker = ServiceTransactionChecker::default();
|
|
||||||
let gas_price = match service_transaction_checker.check_address(self, authoring_params.author) {
|
|
||||||
Ok(true) => U256::zero(),
|
|
||||||
_ => self.importer.miner.sensible_gas_price(),
|
|
||||||
};
|
|
||||||
let transaction = transaction::Transaction {
|
let transaction = transaction::Transaction {
|
||||||
nonce: self.latest_nonce(&authoring_params.author),
|
nonce: self.latest_nonce(&authoring_params.author),
|
||||||
action: Action::Call(address),
|
action: Action::Call(address),
|
||||||
gas: self.importer.miner.sensible_gas_limit(),
|
gas: self.importer.miner.sensible_gas_limit(),
|
||||||
gas_price,
|
gas_price: self.importer.miner.sensible_gas_price(),
|
||||||
value: U256::zero(),
|
value: U256::zero(),
|
||||||
data: data,
|
data: data,
|
||||||
};
|
};
|
||||||
@@ -2503,7 +2449,7 @@ impl ProvingBlockChainClient for Client {
|
|||||||
let mut jdb = self.state_db.read().journal_db().boxed_clone();
|
let mut jdb = self.state_db.read().journal_db().boxed_clone();
|
||||||
|
|
||||||
state::prove_transaction_virtual(
|
state::prove_transaction_virtual(
|
||||||
jdb.as_hash_db_mut(),
|
jdb.as_hashdb_mut(),
|
||||||
header.state_root().clone(),
|
header.state_root().clone(),
|
||||||
&transaction,
|
&transaction,
|
||||||
self.engine.machine(),
|
self.engine.machine(),
|
||||||
|
|||||||
@@ -38,7 +38,6 @@ pub use self::chain_notify::{ChainNotify, NewBlocks, ChainRoute, ChainRouteType,
 pub use self::traits::{
     Nonce, Balance, ChainInfo, BlockInfo, ReopenBlock, PrepareOpenBlock, TransactionInfo, ScheduleInfo, ImportSealedBlock, BroadcastProposalBlock, ImportBlock,
     StateOrBlock, StateClient, Call, EngineInfo, AccountData, BlockChain, BlockProducer, SealedBlockImporter, BadBlocks,
-    BlockChainReset
 };
 pub use state::StateInfo;
 pub use self::traits::{BlockChainClient, EngineClient, ProvingBlockChainClient, IoClient};

@@ -863,7 +863,7 @@ impl BlockChainClient for TestBlockChainClient {

     fn spec_name(&self) -> String { "foundation".into() }

-    fn set_spec_name(&self, _: String) -> Result<(), ()> { unimplemented!(); }
+    fn set_spec_name(&self, _: String) { unimplemented!(); }

     fn disable(&self) { self.disabled.store(true, AtomicOrder::Relaxed); }

@@ -360,7 +360,7 @@ pub trait BlockChainClient : Sync + Send + AccountData + BlockChain + CallContra
     fn spec_name(&self) -> String;

     /// Set the chain via a spec name.
-    fn set_spec_name(&self, spec_name: String) -> Result<(), ()>;
+    fn set_spec_name(&self, spec_name: String);

     /// Disable the client from importing blocks. This cannot be undone in this session and indicates
     /// that a subsystem has reason to believe this executable incapable of syncing the chain.
@@ -471,9 +471,3 @@ pub trait ProvingBlockChainClient: BlockChainClient {
     /// Get an epoch change signal by block hash.
     fn epoch_signal(&self, hash: H256) -> Option<Vec<u8>>;
 }
-
-/// resets the blockchain
-pub trait BlockChainReset {
-    /// reset to best_block - n
-    fn reset(&self, num: u32) -> Result<(), String>;
-}

@@ -24,6 +24,7 @@ use std::sync::atomic::{AtomicUsize, AtomicBool, Ordering as AtomicOrdering};
 use std::sync::{Weak, Arc};
 use std::time::{UNIX_EPOCH, SystemTime, Duration};

+use account_provider::AccountProvider;
 use block::*;
 use client::EngineClient;
 use engines::{Engine, Seal, EngineError, ConstructedVerifier};
@@ -36,7 +37,7 @@ use hash::keccak;
 use super::signer::EngineSigner;
 use super::validator_set::{ValidatorSet, SimpleList, new_validator_set};
 use self::finality::RollingFinality;
-use ethkey::{self, Signature};
+use ethkey::{self, Password, Signature};
 use io::{IoContext, IoHandler, TimerToken, IoService};
 use itertools::{self, Itertools};
 use rlp::{encode, Decodable, DecoderError, Encodable, RlpStream, Rlp};
@@ -414,7 +415,7 @@ pub struct AuthorityRound {
     transition_service: IoService<()>,
     step: Arc<PermissionedStep>,
     client: Arc<RwLock<Option<Weak<EngineClient>>>>,
-    signer: RwLock<Option<Box<EngineSigner>>>,
+    signer: RwLock<EngineSigner>,
     validators: Box<ValidatorSet>,
     validate_score_transition: u64,
     validate_step_transition: u64,
@@ -675,7 +676,7 @@ impl AuthorityRound {
                 can_propose: AtomicBool::new(true),
             }),
             client: Arc::new(RwLock::new(None)),
-            signer: RwLock::new(None),
+            signer: Default::default(),
             validators: our_params.validators,
             validate_score_transition: our_params.validate_score_transition,
             validate_step_transition: our_params.validate_step_transition,
@@ -798,7 +799,7 @@ impl AuthorityRound {
             return;
         }

-        if let (true, Some(me)) = (current_step > parent_step + 1, self.signer.read().as_ref().map(|s| s.address())) {
+        if let (true, Some(me)) = (current_step > parent_step + 1, self.signer.read().address()) {
             debug!(target: "engine", "Author {} built block with step gap. current step: {}, parent step: {}",
                 header.author(), current_step, parent_step);
             let mut reported = HashSet::new();
@@ -1504,16 +1505,12 @@ impl Engine<EthereumMachine> for AuthorityRound {
         self.validators.register_client(client);
     }

-    fn set_signer(&self, signer: Box<EngineSigner>) {
-        *self.signer.write() = Some(signer);
+    fn set_signer(&self, ap: Arc<AccountProvider>, address: Address, password: Password) {
+        self.signer.write().set(ap, address, password);
     }

     fn sign(&self, hash: H256) -> Result<Signature, Error> {
-        Ok(self.signer.read()
-            .as_ref()
-            .ok_or(ethkey::Error::InvalidAddress)?
-            .sign(hash)?
-        )
+        Ok(self.signer.read().sign(hash)?)
     }

     fn snapshot_components(&self) -> Option<Box<::snapshot::SnapshotComponents>> {
@@ -1548,16 +1545,16 @@ mod tests {
     use std::sync::Arc;
     use std::sync::atomic::{AtomicUsize, Ordering as AtomicOrdering};
     use hash::keccak;
-    use accounts::AccountProvider;
     use ethereum_types::{Address, H520, H256, U256};
     use ethkey::Signature;
     use types::header::Header;
     use rlp::encode;
     use block::*;
     use test_helpers::{
-        generate_dummy_client_with_spec, get_temp_state_db,
+        generate_dummy_client_with_spec_and_accounts, get_temp_state_db,
         TestNotify
     };
+    use account_provider::AccountProvider;
     use spec::Spec;
     use types::transaction::{Action, Transaction};
     use engines::{Seal, Engine, EngineError, EthEngine};
@@ -1636,14 +1633,14 @@ mod tests {
         let b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes, addr2, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
         let b2 = b2.close_and_lock().unwrap();

-        engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+        engine.set_signer(tap.clone(), addr1, "1".into());
         if let Seal::Regular(seal) = engine.generate_seal(b1.block(), &genesis_header) {
             assert!(b1.clone().try_seal(engine, seal).is_ok());
             // Second proposal is forbidden.
             assert!(engine.generate_seal(b1.block(), &genesis_header) == Seal::None);
         }

-        engine.set_signer(Box::new((tap, addr2, "2".into())));
+        engine.set_signer(tap, addr2, "2".into());
         if let Seal::Regular(seal) = engine.generate_seal(b2.block(), &genesis_header) {
             assert!(b2.clone().try_seal(engine, seal).is_ok());
             // Second proposal is forbidden.
@@ -1670,13 +1667,13 @@ mod tests {
         let b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes, addr2, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
         let b2 = b2.close_and_lock().unwrap();

-        engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+        engine.set_signer(tap.clone(), addr1, "1".into());
         match engine.generate_seal(b1.block(), &genesis_header) {
             Seal::None | Seal::Proposal(_) => panic!("wrong seal"),
             Seal::Regular(_) => {
                 engine.step();

-                engine.set_signer(Box::new((tap.clone(), addr2, "0".into())));
+                engine.set_signer(tap.clone(), addr2, "0".into());
                 match engine.generate_seal(b2.block(), &genesis_header) {
                     Seal::Regular(_) | Seal::Proposal(_) => panic!("sealed despite wrong difficulty"),
                     Seal::None => {}
@@ -1784,7 +1781,7 @@ mod tests {
         assert!(aura.verify_block_family(&header, &parent_header).is_ok());
         assert_eq!(last_benign.load(AtomicOrdering::SeqCst), 0);

-        aura.set_signer(Box::new((Arc::new(AccountProvider::transient_provider()), Default::default(), "".into())));
+        aura.set_signer(Arc::new(AccountProvider::transient_provider()), Default::default(), "".into());

         // Do not report on steps skipped between genesis and first block.
         header.set_number(1);
@@ -1894,12 +1891,12 @@ mod tests {

         let last_hashes = Arc::new(vec![genesis_header.hash()]);

-        let client = generate_dummy_client_with_spec(Spec::new_test_round_empty_steps);
+        let client = generate_dummy_client_with_spec_and_accounts(Spec::new_test_round_empty_steps, None);
         let notify = Arc::new(TestNotify::default());
         client.add_notify(notify.clone());
         engine.register_client(Arc::downgrade(&client) as _);

-        engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+        engine.set_signer(tap.clone(), addr1, "1".into());

         let b1 = OpenBlock::new(engine, Default::default(), false, db1, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
         let b1 = b1.close_and_lock().unwrap();
@@ -1933,7 +1930,7 @@ mod tests {

         let last_hashes = Arc::new(vec![genesis_header.hash()]);

-        let client = generate_dummy_client_with_spec(Spec::new_test_round_empty_steps);
+        let client = generate_dummy_client_with_spec_and_accounts(Spec::new_test_round_empty_steps, None);
         let notify = Arc::new(TestNotify::default());
         client.add_notify(notify.clone());
         engine.register_client(Arc::downgrade(&client) as _);
@@ -1943,7 +1940,7 @@ mod tests {
         let b1 = b1.close_and_lock().unwrap();

         // since the block is empty it isn't sealed and we generate empty steps
-        engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+        engine.set_signer(tap.clone(), addr1, "1".into());
         assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
         engine.step();

@@ -1960,9 +1957,9 @@ mod tests {
         let b2 = b2.close_and_lock().unwrap();

         // we will now seal a block with 1tx and include the accumulated empty step message
-        engine.set_signer(Box::new((tap.clone(), addr2, "0".into())));
+        engine.set_signer(tap.clone(), addr2, "0".into());
         if let Seal::Regular(seal) = engine.generate_seal(b2.block(), &genesis_header) {
-            engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+            engine.set_signer(tap.clone(), addr1, "1".into());
             let empty_step2 = sealed_empty_step(engine, 2, &genesis_header.hash());
             let empty_steps = ::rlp::encode_list(&vec![empty_step2]);

@@ -1986,7 +1983,7 @@ mod tests {

         let last_hashes = Arc::new(vec![genesis_header.hash()]);

-        let client = generate_dummy_client_with_spec(Spec::new_test_round_empty_steps);
+        let client = generate_dummy_client_with_spec_and_accounts(Spec::new_test_round_empty_steps, None);
         let notify = Arc::new(TestNotify::default());
         client.add_notify(notify.clone());
         engine.register_client(Arc::downgrade(&client) as _);
@@ -1996,14 +1993,14 @@ mod tests {
         let b1 = b1.close_and_lock().unwrap();

         // since the block is empty it isn't sealed and we generate empty steps
-        engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+        engine.set_signer(tap.clone(), addr1, "1".into());
         assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
         engine.step();

         // step 3
         let b2 = OpenBlock::new(engine, Default::default(), false, db2, &genesis_header, last_hashes.clone(), addr2, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
         let b2 = b2.close_and_lock().unwrap();
-        engine.set_signer(Box::new((tap.clone(), addr2, "0".into())));
+        engine.set_signer(tap.clone(), addr2, "0".into());
         assert_eq!(engine.generate_seal(b2.block(), &genesis_header), Seal::None);
         engine.step();

@@ -2012,10 +2009,10 @@ mod tests {
         let b3 = OpenBlock::new(engine, Default::default(), false, db3, &genesis_header, last_hashes.clone(), addr1, (3141562.into(), 31415620.into()), vec![], false, &mut Vec::new().into_iter()).unwrap();
         let b3 = b3.close_and_lock().unwrap();

-        engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+        engine.set_signer(tap.clone(), addr1, "1".into());
         if let Seal::Regular(seal) = engine.generate_seal(b3.block(), &genesis_header) {
             let empty_step2 = sealed_empty_step(engine, 2, &genesis_header.hash());
-            engine.set_signer(Box::new((tap.clone(), addr2, "0".into())));
+            engine.set_signer(tap.clone(), addr2, "0".into());
             let empty_step3 = sealed_empty_step(engine, 3, &genesis_header.hash());

             let empty_steps = ::rlp::encode_list(&vec![empty_step2, empty_step3]);
@@ -2038,7 +2035,7 @@ mod tests {

         let last_hashes = Arc::new(vec![genesis_header.hash()]);

-        let client = generate_dummy_client_with_spec(Spec::new_test_round_empty_steps);
+        let client = generate_dummy_client_with_spec_and_accounts(Spec::new_test_round_empty_steps, None);
         engine.register_client(Arc::downgrade(&client) as _);

         // step 2
@@ -2046,7 +2043,7 @@ mod tests {
         let b1 = b1.close_and_lock().unwrap();

         // since the block is empty it isn't sealed and we generate empty steps
-        engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+        engine.set_signer(tap.clone(), addr1, "1".into());
         assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
         engine.step();

@@ -2100,7 +2097,7 @@ mod tests {
         );

         // empty step with valid signature from incorrect proposer for step
-        engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+        engine.set_signer(tap.clone(), addr1, "1".into());
         let empty_steps = vec![sealed_empty_step(engine, 1, &parent_header.hash())];
         set_empty_steps_seal(&mut header, 2, &signature, &empty_steps);

@@ -2110,9 +2107,9 @@ mod tests {
         );

         // valid empty steps
-        engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+        engine.set_signer(tap.clone(), addr1, "1".into());
         let empty_step2 = sealed_empty_step(engine, 2, &parent_header.hash());
-        engine.set_signer(Box::new((tap.clone(), addr2, "0".into())));
+        engine.set_signer(tap.clone(), addr2, "0".into());
         let empty_step3 = sealed_empty_step(engine, 3, &parent_header.hash());

         let empty_steps = vec![empty_step2, empty_step3];
@@ -2137,7 +2134,10 @@ mod tests {

         let last_hashes = Arc::new(vec![genesis_header.hash()]);

-        let client = generate_dummy_client_with_spec(Spec::new_test_round_block_reward_contract);
+        let client = generate_dummy_client_with_spec_and_accounts(
+            Spec::new_test_round_block_reward_contract,
+            None,
+        );
         engine.register_client(Arc::downgrade(&client) as _);

         // step 2
@@ -2157,7 +2157,7 @@ mod tests {
         let b1 = b1.close_and_lock().unwrap();

         // since the block is empty it isn't sealed and we generate empty steps
-        engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+        engine.set_signer(tap.clone(), addr1, "1".into());
         assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
         engine.step();

@@ -2195,7 +2195,7 @@ mod tests {
         let engine = &*spec.engine;

         let addr1 = accounts[0];
-        engine.set_signer(Box::new((tap.clone(), addr1, "1".into())));
+        engine.set_signer(tap.clone(), addr1, "1".into());

         let mut header: Header = Header::default();
         let empty_step = empty_step(engine, 1, &header.parent_hash());
@@ -2276,7 +2276,7 @@ mod tests {
         header.set_author(accounts[0]);

         // when
-        engine.set_signer(Box::new((tap.clone(), accounts[1], "0".into())));
+        engine.set_signer(tap.clone(), accounts[1], "0".into());
         let empty_steps = vec![
             sealed_empty_step(&*engine, 1, &parent.hash()),
             sealed_empty_step(&*engine, 1, &parent.hash()),
@@ -2313,9 +2313,9 @@ mod tests {
         header.set_author(accounts[0]);

         // when
-        engine.set_signer(Box::new((tap.clone(), accounts[1], "0".into())));
+        engine.set_signer(tap.clone(), accounts[1], "0".into());
         let es1 = sealed_empty_step(&*engine, 1, &parent.hash());
-        engine.set_signer(Box::new((tap.clone(), accounts[0], "1".into())));
+        engine.set_signer(tap.clone(), accounts[0], "1".into());
         let es2 = sealed_empty_step(&*engine, 2, &parent.hash());

         let mut empty_steps = vec![es2, es1];

@@ -16,18 +16,19 @@
 
 //! A blockchain engine that supports a basic, non-BFT proof-of-authority.
 
-use std::sync::Weak;
-use ethereum_types::{H256, H520};
+use std::sync::{Weak, Arc};
+use ethereum_types::{H256, H520, Address};
 use parking_lot::RwLock;
-use ethkey::{self, Signature};
+use ethkey::{self, Password, Signature};
+use account_provider::AccountProvider;
 use block::*;
 use engines::{Engine, Seal, ConstructedVerifier, EngineError};
-use engines::signer::EngineSigner;
 use error::{BlockError, Error};
 use ethjson;
 use client::EngineClient;
 use machine::{AuxiliaryData, Call, EthereumMachine};
 use types::header::{Header, ExtendedHeader};
+use super::signer::EngineSigner;
 use super::validator_set::{ValidatorSet, SimpleList, new_validator_set};
 
 /// `BasicAuthority` params.
@@ -75,7 +76,7 @@ fn verify_external(header: &Header, validators: &ValidatorSet) -> Result<(), Err
 /// Engine using `BasicAuthority`, trivial proof-of-authority consensus.
 pub struct BasicAuthority {
 	machine: EthereumMachine,
-	signer: RwLock<Option<Box<EngineSigner>>>,
+	signer: RwLock<EngineSigner>,
 	validators: Box<ValidatorSet>,
 }
 
@@ -84,7 +85,7 @@ impl BasicAuthority {
 	pub fn new(our_params: BasicAuthorityParams, machine: EthereumMachine) -> Self {
 		BasicAuthority {
 			machine: machine,
-			signer: RwLock::new(None),
+			signer: Default::default(),
 			validators: new_validator_set(our_params.validators),
 		}
 	}
@@ -189,16 +190,12 @@ impl Engine<EthereumMachine> for BasicAuthority {
 		self.validators.register_client(client);
 	}
 
-	fn set_signer(&self, signer: Box<EngineSigner>) {
-		*self.signer.write() = Some(signer);
+	fn set_signer(&self, ap: Arc<AccountProvider>, address: Address, password: Password) {
+		self.signer.write().set(ap, address, password);
 	}
 
 	fn sign(&self, hash: H256) -> Result<Signature, Error> {
-		Ok(self.signer.read()
-			.as_ref()
-			.ok_or_else(|| ethkey::Error::InvalidAddress)?
-			.sign(hash)?
-		)
+		Ok(self.signer.read().sign(hash)?)
 	}
 
 	fn snapshot_components(&self) -> Option<Box<::snapshot::SnapshotComponents>> {
@@ -217,7 +214,7 @@ mod tests {
 	use ethereum_types::H520;
 	use block::*;
 	use test_helpers::get_temp_state_db;
-	use accounts::AccountProvider;
+	use account_provider::AccountProvider;
 	use types::header::Header;
 	use spec::Spec;
 	use engines::Seal;
@@ -260,7 +257,7 @@ mod tests {
 
 		let spec = new_test_authority();
 		let engine = &*spec.engine;
-		engine.set_signer(Box::new((Arc::new(tap), addr, "".into())));
+		engine.set_signer(Arc::new(tap), addr, "".into());
 		let genesis_header = spec.genesis_header();
 		let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
 		let last_hashes = Arc::new(vec![genesis_header.hash()]);
@@ -278,7 +275,7 @@ mod tests {
 
 		let engine = new_test_authority().engine;
 		assert!(!engine.seals_internally().unwrap());
-		engine.set_signer(Box::new((Arc::new(tap), authority, "".into())));
+		engine.set_signer(Arc::new(tap), authority, "".into());
 		assert!(engine.seals_internally().unwrap());
 	}
 }

@@ -25,7 +25,7 @@ use std::sync::Arc;
 use hash::keccak;
 use error::Error;
 use machine::WithRewards;
-use parity_machine::Machine;
+use parity_machine::{Machine, WithBalances};
 use trace;
 use types::BlockNumber;
 use super::{SystemOrCodeCall, SystemOrCodeCallKind};
@@ -152,7 +152,7 @@ impl BlockRewardContract {
 
 /// Applies the given block rewards, i.e. adds the given balance to each beneficiary' address.
 /// If tracing is enabled the operations are recorded.
-pub fn apply_block_rewards<M: Machine + WithRewards>(
+pub fn apply_block_rewards<M: Machine + WithBalances + WithRewards>(
 	rewards: &[(Address, RewardKind, U256)],
 	block: &mut M::LiveBlock,
 	machine: &M,
@@ -170,14 +170,17 @@ mod test {
 	use client::PrepareOpenBlock;
 	use ethereum_types::U256;
 	use spec::Spec;
-	use test_helpers::generate_dummy_client_with_spec;
+	use test_helpers::generate_dummy_client_with_spec_and_accounts;
 
 	use engines::SystemOrCodeCallKind;
 	use super::{BlockRewardContract, RewardKind};
 
 	#[test]
 	fn block_reward_contract() {
-		let client = generate_dummy_client_with_spec(Spec::new_test_round_block_reward_contract);
+		let client = generate_dummy_client_with_spec_and_accounts(
+			Spec::new_test_round_block_reward_contract,
+			None,
+		);
 
 		let machine = Spec::new_test_machine();
 

@@ -20,17 +20,16 @@ mod authority_round;
 mod basic_authority;
 mod instant_seal;
 mod null_engine;
+mod signer;
 mod validator_set;
 
 pub mod block_reward;
-pub mod signer;
 
 pub use self::authority_round::AuthorityRound;
 pub use self::basic_authority::BasicAuthority;
 pub use self::epoch::{EpochVerifier, Transition as EpochTransition};
 pub use self::instant_seal::{InstantSeal, InstantSealParams};
 pub use self::null_engine::NullEngine;
-pub use self::signer::EngineSigner;
 
 // TODO [ToDr] Remove re-export (#10130)
 pub use types::engines::ForkChoice;
@@ -40,6 +39,7 @@ use std::sync::{Weak, Arc};
 use std::collections::{BTreeMap, HashMap};
 use std::{fmt, error};
 
+use account_provider::AccountProvider;
 use builtin::Builtin;
 use vm::{EnvInfo, Schedule, CreateContractAddress, CallType, ActionValue};
 use error::Error;
@@ -49,7 +49,7 @@ use snapshot::SnapshotComponents;
 use spec::CommonParams;
 use types::transaction::{self, UnverifiedTransaction, SignedTransaction};
 
-use ethkey::{Signature};
+use ethkey::{Password, Signature};
 use parity_machine::{Machine, LocalizedMachine as Localized, TotalScoredHeader};
 use ethereum_types::{H256, U256, Address};
 use unexpected::{Mismatch, OutOfBounds};
@@ -380,8 +380,8 @@ pub trait Engine<M: Machine>: Sync + Send {
 	/// Takes a header of a fully verified block.
 	fn is_proposal(&self, _verified_header: &M::Header) -> bool { false }
 
-	/// Register a component which signs consensus messages.
-	fn set_signer(&self, _signer: Box<EngineSigner>) {}
+	/// Register an account which signs consensus messages.
+	fn set_signer(&self, _account_provider: Arc<AccountProvider>, _address: Address, _password: Password) {}
 
 	/// Sign using the EngineSigner, to be used for consensus tx signing.
 	fn sign(&self, _hash: H256) -> Result<Signature, M::Error> { unimplemented!() }

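The `Engine::set_signer` hunk above replaces the boxed `EngineSigner` trait object with the older signature that takes the account provider, address and password directly, while keeping a no-op default for engines that cannot seal. A minimal, self-contained sketch of that default-method pattern (every type here is a stand-in for illustration, not the actual Parity crates):

```rust
use std::sync::Arc;

// Stand-ins; in parity-ethereum these come from account_provider/ethkey.
struct AccountProvider;
type Address = u64;
type Password = String;

trait Engine {
	// Engines that cannot seal inherit this no-op default, mirroring the
	// `fn set_signer(..) {}` default in the hunk above.
	fn set_signer(&self, _ap: Arc<AccountProvider>, _address: Address, _password: Password) {}
}

struct NullEngine;
impl Engine for NullEngine {}

fn main() {
	// Calling the default on a non-sealing engine is a no-op, not an error.
	NullEngine.set_signer(Arc::new(AccountProvider), 1, "".into());
	println!("default set_signer is a no-op");
}
```

Sealing engines such as `BasicAuthority` override the method to store the triple in their own signer state.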
@@ -18,7 +18,7 @@ use engines::Engine;
 use engines::block_reward::{self, RewardKind};
 use ethereum_types::U256;
 use machine::WithRewards;
-use parity_machine::{Machine, Header, LiveBlock, TotalScoredHeader};
+use parity_machine::{Header, LiveBlock, WithBalances, TotalScoredHeader};
 use types::BlockNumber;
 
 /// Params for a null engine.
@@ -58,7 +58,7 @@ impl<M: Default> Default for NullEngine<M> {
 	}
 }
 
-impl<M: Machine + WithRewards> Engine<M> for NullEngine<M>
+impl<M: WithBalances + WithRewards> Engine<M> for NullEngine<M>
	where M::ExtendedHeader: TotalScoredHeader,
	      <M::ExtendedHeader as TotalScoredHeader>::Value: Ord
 {

@@ -16,68 +16,49 @@
 
 //! A signer used by Engines which need to sign messages.
 
-use ethereum_types::{H256, Address};
-use ethkey::{self, Signature};
-
-/// Everything that an Engine needs to sign messages.
-pub trait EngineSigner: Send + Sync {
-	/// Sign a consensus message hash.
-	fn sign(&self, hash: H256) -> Result<Signature, ethkey::Error>;
-
-	/// Signing address
-	fn address(&self) -> Address;
-}
-
-/// Creates a new `EngineSigner` from given key pair.
-pub fn from_keypair(keypair: ethkey::KeyPair) -> Box<EngineSigner> {
-	Box::new(Signer(keypair))
-}
-
-struct Signer(ethkey::KeyPair);
-
-impl EngineSigner for Signer {
-	fn sign(&self, hash: H256) -> Result<Signature, ethkey::Error> {
-		ethkey::sign(self.0.secret(), &hash)
-	}
-
-	fn address(&self) -> Address {
-		self.0.address()
-	}
-}
-
-#[cfg(test)]
-mod test_signer {
-	use std::sync::Arc;
-
-	use ethkey::Password;
-	use accounts::{self, AccountProvider, SignError};
-
-	use super::*;
-
-	impl EngineSigner for (Arc<AccountProvider>, Address, Password) {
-		fn sign(&self, hash: H256) -> Result<Signature, ethkey::Error> {
-			match self.0.sign(self.1, Some(self.2.clone()), hash) {
-				Err(SignError::NotUnlocked) => unreachable!(),
-				Err(SignError::NotFound) => Err(ethkey::Error::InvalidAddress),
-				Err(SignError::Hardware(err)) => {
-					warn!("Error using hardware wallet for engine: {:?}", err);
-					Err(ethkey::Error::InvalidSecret)
-				},
-				Err(SignError::SStore(accounts::Error::EthKey(err))) => Err(err),
-				Err(SignError::SStore(accounts::Error::EthKeyCrypto(err))) => {
-					warn!("Low level crypto error: {:?}", err);
-					Err(ethkey::Error::InvalidSecret)
-				},
-				Err(SignError::SStore(err)) => {
-					warn!("Error signing for engine: {:?}", err);
-					Err(ethkey::Error::InvalidSignature)
-				},
-				Ok(ok) => Ok(ok),
-			}
-		}
-
-		fn address(&self) -> Address {
-			self.1
-		}
-	}
-}
+use std::sync::Arc;
+use ethereum_types::{H256, Address};
+use ethkey::{Password, Signature};
+use account_provider::{self, AccountProvider};
+
+/// Everything that an Engine needs to sign messages.
+pub struct EngineSigner {
+	account_provider: Arc<AccountProvider>,
+	address: Option<Address>,
+	password: Option<Password>,
+}
+
+impl Default for EngineSigner {
+	fn default() -> Self {
+		EngineSigner {
+			account_provider: Arc::new(AccountProvider::transient_provider()),
+			address: Default::default(),
+			password: Default::default(),
+		}
+	}
+}
+
+impl EngineSigner {
+	/// Set up the signer to sign with given address and password.
+	pub fn set(&mut self, ap: Arc<AccountProvider>, address: Address, password: Password) {
+		self.account_provider = ap;
+		self.address = Some(address);
+		self.password = Some(password);
+		debug!(target: "poa", "Setting Engine signer to {}", address);
+	}
+
+	/// Sign a consensus message hash.
+	pub fn sign(&self, hash: H256) -> Result<Signature, account_provider::SignError> {
+		self.account_provider.sign(self.address.unwrap_or_else(Default::default), self.password.clone(), hash)
+	}
+
+	/// Signing address.
+	pub fn address(&self) -> Option<Address> {
+		self.address.clone()
+	}
+
+	/// Check if the signing address was set.
+	pub fn is_some(&self) -> bool {
+		self.address.is_some()
+	}
+}

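The hunk above swaps the trait-object signer back to a concrete struct that delegates signing to an account provider through an optional address/password pair. A self-contained sketch of the same `set()`/`sign()`/`is_some()` pattern, with dummy stand-in types (a toy XOR "signature", not the Parity `AccountProvider` or real crypto):

```rust
use std::sync::Arc;

// Stand-in for Parity's AccountProvider: looks up nothing, just "signs"
// the hash with the address via XOR so the flow is observable.
struct AccountProvider;
impl AccountProvider {
	fn sign(&self, address: u64, password: Option<String>, hash: u64) -> Result<u64, String> {
		password.ok_or_else(|| "no password set".to_string())?;
		Ok(hash ^ address)
	}
}

struct EngineSigner {
	account_provider: Arc<AccountProvider>,
	address: Option<u64>,
	password: Option<String>,
}

impl Default for EngineSigner {
	fn default() -> Self {
		EngineSigner { account_provider: Arc::new(AccountProvider), address: None, password: None }
	}
}

impl EngineSigner {
	// Mirrors the struct's set(): store provider, address and password.
	fn set(&mut self, ap: Arc<AccountProvider>, address: u64, password: String) {
		self.account_provider = ap;
		self.address = Some(address);
		self.password = Some(password);
	}

	// Mirrors sign(): delegate to the provider with the stored credentials.
	fn sign(&self, hash: u64) -> Result<u64, String> {
		self.account_provider.sign(self.address.unwrap_or_default(), self.password.clone(), hash)
	}

	fn is_some(&self) -> bool {
		self.address.is_some()
	}
}

fn main() {
	let mut signer = EngineSigner::default();
	assert!(!signer.is_some());          // no signing address yet
	signer.set(Arc::new(AccountProvider), 7, "pw".into());
	assert!(signer.is_some());
	assert_eq!(signer.sign(42).unwrap(), 42 ^ 7);
	println!("ok");
}
```

Note that, as in the diff, a default-constructed signer fails to sign (no password is set) rather than panicking.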
@@ -141,10 +141,10 @@ mod tests {
 	use rlp::encode;
 	use spec::Spec;
 	use types::header::Header;
-	use accounts::AccountProvider;
-	use miner::{self, MinerService};
+	use account_provider::AccountProvider;
+	use miner::MinerService;
 	use types::ids::BlockId;
-	use test_helpers::generate_dummy_client_with_spec;
+	use test_helpers::generate_dummy_client_with_spec_and_accounts;
 	use call_contract::CallContract;
 	use client::{BlockChainClient, ChainInfo, BlockInfo};
 	use super::super::ValidatorSet;
@@ -152,7 +152,7 @@ mod tests {
 
 	#[test]
 	fn fetches_validators() {
-		let client = generate_dummy_client_with_spec(Spec::new_validator_contract);
+		let client = generate_dummy_client_with_spec_and_accounts(Spec::new_validator_contract, None);
 		let vc = Arc::new(ValidatorContract::new("0000000000000000000000000000000000000005".parse::<Address>().unwrap()));
 		vc.register_client(Arc::downgrade(&client) as _);
 		let last_hash = client.best_block_header().hash();
@@ -164,14 +164,13 @@ mod tests {
 	fn reports_validators() {
 		let tap = Arc::new(AccountProvider::transient_provider());
 		let v1 = tap.insert_account(keccak("1").into(), &"".into()).unwrap();
-		let client = generate_dummy_client_with_spec(Spec::new_validator_contract);
+		let client = generate_dummy_client_with_spec_and_accounts(Spec::new_validator_contract, Some(tap.clone()));
 		client.engine().register_client(Arc::downgrade(&client) as _);
 		let validator_contract = "0000000000000000000000000000000000000005".parse::<Address>().unwrap();
 
 		// Make sure reporting can be done.
 		client.miner().set_gas_range_target((1_000_000.into(), 1_000_000.into()));
-		let signer = Box::new((tap.clone(), v1, "".into()));
-		client.miner().set_author(miner::Author::Sealer(signer));
+		client.miner().set_author(v1, Some("".into())).unwrap();
 
 		// Check a block that is a bit in future, reject it but don't report the validator.
 		let mut header = Header::default();

@@ -150,15 +150,15 @@ mod tests {
 	use std::sync::Arc;
 	use std::collections::BTreeMap;
 	use hash::keccak;
-	use accounts::AccountProvider;
+	use account_provider::AccountProvider;
 	use client::{BlockChainClient, ChainInfo, BlockInfo, ImportBlock};
 	use engines::EpochChange;
 	use engines::validator_set::ValidatorSet;
 	use ethkey::Secret;
 	use types::header::Header;
-	use miner::{self, MinerService};
+	use miner::MinerService;
 	use spec::Spec;
-	use test_helpers::{generate_dummy_client_with_spec, generate_dummy_client_with_spec_and_data};
+	use test_helpers::{generate_dummy_client_with_spec_and_accounts, generate_dummy_client_with_spec_and_data};
 	use types::ids::BlockId;
 	use ethereum_types::Address;
 	use verification::queue::kind::blocks::Unverified;
@@ -171,29 +171,26 @@ mod tests {
 		let s0: Secret = keccak("0").into();
 		let v0 = tap.insert_account(s0.clone(), &"".into()).unwrap();
 		let v1 = tap.insert_account(keccak("1").into(), &"".into()).unwrap();
-		let client = generate_dummy_client_with_spec(Spec::new_validator_multi);
+		let client = generate_dummy_client_with_spec_and_accounts(Spec::new_validator_multi, Some(tap));
 		client.engine().register_client(Arc::downgrade(&client) as _);
 
 		// Make sure txs go through.
 		client.miner().set_gas_range_target((1_000_000.into(), 1_000_000.into()));
 
 		// Wrong signer for the first block.
-		let signer = Box::new((tap.clone(), v1, "".into()));
-		client.miner().set_author(miner::Author::Sealer(signer));
+		client.miner().set_author(v1, Some("".into())).unwrap();
 		client.transact_contract(Default::default(), Default::default()).unwrap();
 		::client::EngineClient::update_sealing(&*client);
 		assert_eq!(client.chain_info().best_block_number, 0);
 		// Right signer for the first block.
-		let signer = Box::new((tap.clone(), v0, "".into()));
-		client.miner().set_author(miner::Author::Sealer(signer));
+		client.miner().set_author(v0, Some("".into())).unwrap();
 		::client::EngineClient::update_sealing(&*client);
 		assert_eq!(client.chain_info().best_block_number, 1);
 		// This time v0 is wrong.
 		client.transact_contract(Default::default(), Default::default()).unwrap();
 		::client::EngineClient::update_sealing(&*client);
 		assert_eq!(client.chain_info().best_block_number, 1);
-		let signer = Box::new((tap.clone(), v1, "".into()));
-		client.miner().set_author(miner::Author::Sealer(signer));
+		client.miner().set_author(v1, Some("".into())).unwrap();
 		::client::EngineClient::update_sealing(&*client);
 		assert_eq!(client.chain_info().best_block_number, 2);
 		// v1 is still good.

@@ -445,19 +445,19 @@ mod tests {
 	use ethereum_types::Address;
 	use types::ids::BlockId;
 	use spec::Spec;
-	use accounts::AccountProvider;
+	use account_provider::AccountProvider;
 	use types::transaction::{Transaction, Action};
 	use client::{ChainInfo, BlockInfo, ImportBlock};
 	use ethkey::Secret;
-	use miner::{self, MinerService};
-	use test_helpers::{generate_dummy_client_with_spec, generate_dummy_client_with_spec_and_data};
+	use miner::MinerService;
+	use test_helpers::{generate_dummy_client_with_spec_and_accounts, generate_dummy_client_with_spec_and_data};
 	use super::super::ValidatorSet;
 	use super::{ValidatorSafeContract, EVENT_NAME_HASH};
 	use verification::queue::kind::blocks::Unverified;
 
 	#[test]
 	fn fetches_validators() {
-		let client = generate_dummy_client_with_spec(Spec::new_validator_safe_contract);
+		let client = generate_dummy_client_with_spec_and_accounts(Spec::new_validator_safe_contract, None);
 		let vc = Arc::new(ValidatorSafeContract::new("0000000000000000000000000000000000000005".parse::<Address>().unwrap()));
 		vc.register_client(Arc::downgrade(&client) as _);
 		let last_hash = client.best_block_header().hash();
@@ -472,12 +472,11 @@ mod tests {
 		let v0 = tap.insert_account(s0.clone(), &"".into()).unwrap();
 		let v1 = tap.insert_account(keccak("0").into(), &"".into()).unwrap();
 		let chain_id = Spec::new_validator_safe_contract().chain_id();
-		let client = generate_dummy_client_with_spec(Spec::new_validator_safe_contract);
+		let client = generate_dummy_client_with_spec_and_accounts(Spec::new_validator_safe_contract, Some(tap));
 		client.engine().register_client(Arc::downgrade(&client) as _);
 		let validator_contract = "0000000000000000000000000000000000000005".parse::<Address>().unwrap();
-		let signer = Box::new((tap.clone(), v1, "".into()));
 
-		client.miner().set_author(miner::Author::Sealer(signer));
+		client.miner().set_author(v1, Some("".into())).unwrap();
 		// Remove "1" validator.
 		let tx = Transaction {
 			nonce: 0.into(),
@@ -505,13 +504,11 @@ mod tests {
 		assert_eq!(client.chain_info().best_block_number, 1);
 
 		// Switch to the validator that is still there.
-		let signer = Box::new((tap.clone(), v0, "".into()));
-		client.miner().set_author(miner::Author::Sealer(signer));
+		client.miner().set_author(v0, Some("".into())).unwrap();
 		::client::EngineClient::update_sealing(&*client);
 		assert_eq!(client.chain_info().best_block_number, 2);
 		// Switch back to the added validator, since the state is updated.
-		let signer = Box::new((tap.clone(), v1, "".into()));
-		client.miner().set_author(miner::Author::Sealer(signer));
+		client.miner().set_author(v1, Some("".into())).unwrap();
 		let tx = Transaction {
 			nonce: 2.into(),
 			gas_price: 0.into(),
@@ -542,7 +539,7 @@ mod tests {
 		use types::header::Header;
 		use types::log_entry::LogEntry;
 
-		let client = generate_dummy_client_with_spec(Spec::new_validator_safe_contract);
+		let client = generate_dummy_client_with_spec_and_accounts(Spec::new_validator_safe_contract, None);
 		let engine = client.engine().clone();
 		let validator_contract = "0000000000000000000000000000000000000005".parse::<Address>().unwrap();
 
@@ -579,7 +576,7 @@ mod tests {
 		use types::header::Header;
 		use engines::{EpochChange, Proof};
 
-		let client = generate_dummy_client_with_spec(Spec::new_validator_safe_contract);
+		let client = generate_dummy_client_with_spec_and_accounts(Spec::new_validator_safe_contract, None);
 		let engine = client.engine().clone();
 
 		let mut new_header = Header::default();

@@ -25,10 +25,11 @@ use ethtrie::TrieError;
 use rlp;
 use snappy::InvalidInput;
 use snapshot::Error as SnapshotError;
-use types::BlockNumber;
 use types::transaction::Error as TransactionError;
+use types::BlockNumber;
 use unexpected::{Mismatch, OutOfBounds};

+use account_provider::SignError as AccountsError;
 use engines::EngineError;

 pub use executed::{ExecutionError, CallError};

@@ -246,6 +247,12 @@ error_chain! {
			display("Snapshot error {}", err)
		}

+		#[doc = "Account Provider error"]
+		AccountProvider(err: AccountsError) {
+			description("Accounts Provider error")
+			display("Accounts Provider error {}", err)
+		}
+
		#[doc = "PoW hash is invalid or out of date."]
		PowHashInvalid {
			description("PoW hash is invalid or out of date.")

@@ -266,6 +273,12 @@ error_chain! {
	}
 }

+impl From<AccountsError> for Error {
+	fn from(err: AccountsError) -> Error {
+		ErrorKind::AccountProvider(err).into()
+	}
+}
+
 impl From<SnapshotError> for Error {
	fn from(err: SnapshotError) -> Error {
		match err {
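The two error.rs hunks above add an `AccountProvider` variant to the `error_chain!` block together with a `From<AccountsError>` conversion; the conversion is what lets callers lift account-provider failures into the crate-wide `Error` with `?`. A minimal stdlib sketch of that pattern (toy `AccountsError`/`Error` enums, no `error_chain` macro involved):

```rust
// Toy stand-ins for the diff's AccountsError and crate-wide Error.
#[derive(Debug)]
enum AccountsError { NotFound }

#[derive(Debug)]
enum Error { AccountProvider(AccountsError) }

// The conversion the diff adds: it is what makes `?` work below.
impl From<AccountsError> for Error {
    fn from(err: AccountsError) -> Error {
        Error::AccountProvider(err)
    }
}

fn sign_with_account() -> Result<(), Error> {
    // An account lookup failing with AccountsError is lifted into Error by `?`.
    Err(AccountsError::NotFound)?
}
```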
@@ -113,8 +113,6 @@ pub struct EthashParams {
	pub block_reward_contract: Option<BlockRewardContract>,
	/// Difficulty bomb delays.
	pub difficulty_bomb_delays: BTreeMap<BlockNumber, BlockNumber>,
-	/// Block to transition to progpow
-	pub progpow_transition: u64,
 }

 impl From<ethjson::spec::EthashParams> for EthashParams {

@@ -155,7 +153,6 @@ impl From<ethjson::spec::EthashParams> for EthashParams {
			}),
			expip2_transition: p.expip2_transition.map_or(u64::max_value(), Into::into),
			expip2_duration_limit: p.expip2_duration_limit.map_or(30, Into::into),
-			progpow_transition: p.progpow_transition.map_or(u64::max_value(), Into::into),
			block_reward_contract_transition: p.block_reward_contract_transition.map_or(0, Into::into),
			block_reward_contract: match (p.block_reward_contract_code, p.block_reward_contract_address) {
				(Some(code), _) => Some(BlockRewardContract::new_from_code(Arc::new(code.into()))),

@@ -185,12 +182,10 @@ impl Ethash {
		machine: EthereumMachine,
		optimize_for: T,
	) -> Arc<Self> {
-		let progpow_transition = ethash_params.progpow_transition;
-
		Arc::new(Ethash {
			ethash_params,
			machine,
-			pow: EthashManager::new(cache_dir.as_ref(), optimize_for.into(), progpow_transition),
+			pow: EthashManager::new(cache_dir.as_ref(), optimize_for.into()),
		})
	}
 }

@@ -325,8 +320,7 @@ impl Engine<EthereumMachine> for Arc<Ethash> {
		let difficulty = ethash::boundary_to_difficulty(&H256(quick_get_difficulty(
			&header.bare_hash().0,
			seal.nonce.low_u64(),
-			&seal.mix_hash.0,
-			header.number() >= self.ethash_params.progpow_transition
+			&seal.mix_hash.0
		)));

		if &difficulty < header.difficulty() {

@@ -529,7 +523,6 @@ mod tests {
			block_reward_contract: None,
			block_reward_contract_transition: 0,
			difficulty_bomb_delays: BTreeMap::new(),
-			progpow_transition: u64::max_value(),
		}
	}

@@ -18,7 +18,10 @@

 use ethjson;

-#[cfg(feature="ci-skip-tests")]
+#[cfg(all(not(test), feature = "ci-skip-issue"))]
+compile_error!("ci-skip-tests can only be enabled for testing builds.");
+
+#[cfg(feature="ci-skip-issue")]
 lazy_static!{
	pub static ref SKIP_TEST_STATE: ethjson::test::SkipStates = {
		let skip_data = include_bytes!("../../res/ethereum/tests-issues/currents.json");

@@ -26,7 +29,7 @@ lazy_static!{
	};
 }

-#[cfg(not(feature="ci-skip-tests"))]
+#[cfg(not(feature="ci-skip-issue"))]
 lazy_static!{
	pub static ref SKIP_TEST_STATE: ethjson::test::SkipStates = {
		ethjson::test::SkipStates::empty()
@@ -18,6 +18,9 @@ use ethjson;
 use trie::{TrieFactory, TrieSpec};
 use ethtrie::RlpCodec;
 use ethereum_types::H256;
+use memorydb::MemoryDB;
+use keccak_hasher::KeccakHasher;
+use kvdb::DBValue;

 use super::HookType;

@@ -34,7 +37,7 @@ fn test_trie<H: FnMut(&str, HookType)>(json: &[u8], trie: TrieSpec, start_stop_hook: &mut H) -> Vec<String> {
	for (name, test) in tests.into_iter() {
		start_stop_hook(&name, HookType::OnStart);

-		let mut memdb = journaldb::new_memory_db();
+		let mut memdb = MemoryDB::<KeccakHasher, DBValue>::new();
		let mut root = H256::default();
		let mut t = factory.create(&mut memdb, &mut root);

@@ -74,7 +74,8 @@ extern crate ethcore_miner;
 extern crate ethereum_types;
 extern crate ethjson;
 extern crate ethkey;
-extern crate hash_db;
+extern crate ethstore;
+extern crate hashdb;
 extern crate heapsize;
 extern crate itertools;
 extern crate journaldb;

@@ -85,7 +86,7 @@ extern crate kvdb_memorydb;
 extern crate len_caching_lock;
 extern crate lru_cache;
 extern crate memory_cache;
-extern crate memory_db;
+extern crate memorydb;
 extern crate num;
 extern crate num_cpus;
 extern crate parity_bytes as bytes;

@@ -93,7 +94,7 @@ extern crate parity_crypto;
 extern crate parity_machine;
 extern crate parity_snappy as snappy;
 extern crate parking_lot;
-extern crate trie_db as trie;
+extern crate patricia_trie as trie;
 extern crate patricia_trie_ethereum as ethtrie;
 extern crate rand;
 extern crate rayon;

@@ -107,8 +108,6 @@ extern crate using_queue;
 extern crate vm;
 extern crate wasm;

-#[cfg(test)]
-extern crate ethcore_accounts as accounts;
 #[cfg(feature = "stratum")]
 extern crate ethcore_stratum;
 #[cfg(any(test, feature = "tempdir"))]

@@ -117,10 +116,11 @@ extern crate tempdir;
 extern crate kvdb_rocksdb;
 #[cfg(any(test, feature = "blooms-db"))]
 extern crate blooms_db;
-#[cfg(any(test, feature = "env_logger"))]
-extern crate env_logger;
-#[cfg(test)]
-extern crate rlp_compress;
+#[cfg(any(target_os = "linux", target_os = "macos", target_os = "windows", target_os = "android"))]
+extern crate hardware_wallet;
+#[cfg(not(any(target_os = "linux", target_os = "macos", target_os = "windows", target_os = "android")))]
+extern crate fake_hardware_wallet as hardware_wallet;

 #[macro_use]
 extern crate ethabi_derive;

@@ -144,15 +144,15 @@ extern crate serde_derive;
 #[cfg_attr(test, macro_use)]
 extern crate evm;

-#[cfg(all(test, feature = "price-info"))]
-extern crate fetch;
-#[cfg(all(test, feature = "price-info"))]
-extern crate parity_runtime;
+#[cfg(any(test, feature = "env_logger"))]
+extern crate env_logger;
+#[cfg(test)]
+extern crate rlp_compress;

 #[cfg(not(time_checked_add))]
 extern crate time_utils;

+pub mod account_provider;
 pub mod block;
 pub mod builtin;
 pub mod client;
@@ -438,7 +438,14 @@ impl ::parity_machine::Machine for EthereumMachine {
	type AncestryAction = ::types::ancestry_action::AncestryAction;

	type Error = Error;
+}

+impl<'a> ::parity_machine::LocalizedMachine<'a> for EthereumMachine {
+	type StateContext = Call<'a>;
+	type AuxiliaryData = AuxiliaryData<'a>;
+}
+
+impl ::parity_machine::WithBalances for EthereumMachine {
	fn balance(&self, live: &ExecutedBlock, address: &Address) -> Result<U256, Error> {
		live.state().balance(address).map_err(Into::into)
	}

@@ -448,11 +455,6 @@ impl ::parity_machine::Machine for EthereumMachine {
	}
 }

-impl<'a> ::parity_machine::LocalizedMachine<'a> for EthereumMachine {
-	type StateContext = Call<'a>;
-	type AuxiliaryData = AuxiliaryData<'a>;
-}
-
 /// A state machine that uses block rewards.
 pub trait WithRewards: ::parity_machine::Machine {
	/// Note block rewards, traces each reward storing information about benefactor, amount and type
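The two machine.rs hunks above split what the left side kept in a single `impl` into separate `LocalizedMachine` and `WithBalances` impl blocks for the same type. A generic, runnable sketch of the same factoring (toy trait and type names, not the real `parity_machine` definitions):

```rust
// Toy machine holding (address, balance) pairs.
struct Machine {
    balances: Vec<(u64, u64)>,
}

// Core trait carries only the associated error type.
trait CoreMachine {
    type Error: std::fmt::Debug;
}

// Balance handling split into its own trait, as in the diff.
trait WithBalances: CoreMachine {
    fn balance(&self, address: u64) -> Result<u64, Self::Error>;
}

impl CoreMachine for Machine {
    type Error = String;
}

impl WithBalances for Machine {
    fn balance(&self, address: u64) -> Result<u64, Self::Error> {
        self.balances
            .iter()
            .find(|(a, _)| *a == address)
            .map(|(_, b)| *b)
            .ok_or_else(|| format!("no account {}", address))
    }
}
```

Splitting impls this way lets downstream code take `M: WithBalances` bounds without dragging in unrelated machine capabilities.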
@@ -23,11 +23,11 @@ use ansi_term::Colour;
 use bytes::Bytes;
 use call_contract::CallContract;
 use ethcore_miner::gas_pricer::GasPricer;
-use ethcore_miner::local_accounts::LocalAccounts;
 use ethcore_miner::pool::{self, TransactionQueue, VerifiedTransaction, QueueStatus, PrioritizationStrategy};
 #[cfg(feature = "work-notify")]
 use ethcore_miner::work_notify::NotifyWork;
 use ethereum_types::{H256, U256, Address};
+use ethkey::Password;
 use io::IoChannel;
 use miner::pool_client::{PoolClient, CachedNonceClient, NonceCache};
 use miner;

@@ -46,12 +46,13 @@ use types::header::Header;
 use types::receipt::RichReceipt;
 use using_queue::{UsingQueue, GetAction};

+use account_provider::{AccountProvider, SignError as AccountError};
 use block::{ClosedBlock, IsBlock, SealedBlock};
 use client::{
	BlockChain, ChainInfo, BlockProducer, SealedBlockImporter, Nonce, TransactionInfo, TransactionId
 };
 use client::{BlockId, ClientIoMessage};
-use engines::{EthEngine, Seal, EngineSigner};
+use engines::{EthEngine, Seal};
 use error::{Error, ErrorKind};
 use executed::ExecutionError;
 use executive::contract_address;

@@ -196,25 +197,6 @@ pub struct AuthoringParams {
	pub extra_data: Bytes,
 }

-/// Block sealing mechanism
-pub enum Author {
-	/// Sealing block is external and we only need a reward beneficiary (i.e. PoW)
-	External(Address),
-	/// Sealing is done internally, we need a way to create signatures to seal block (i.e. PoA)
-	Sealer(Box<EngineSigner>),
-}
-
-impl Author {
-	/// Get author's address.
-	pub fn address(&self) -> Address {
-		match *self {
-			Author::External(address) => address,
-			Author::Sealer(ref sealer) => sealer.address(),
-		}
-	}
-}
-
 struct SealingWork {
	queue: UsingQueue<ClosedBlock>,
	enabled: bool,

@@ -245,7 +227,7 @@ pub struct Miner {
	// TODO [ToDr] Arc is only required because of price updater
	transaction_queue: Arc<TransactionQueue>,
	engine: Arc<EthEngine>,
-	accounts: Arc<LocalAccounts>,
+	accounts: Option<Arc<AccountProvider>>,
	io_channel: RwLock<Option<IoChannel<ClientIoMessage>>>,
 }

@@ -263,11 +245,11 @@ impl Miner {
	}

	/// Creates new instance of miner Arc.
-	pub fn new<A: LocalAccounts + 'static>(
+	pub fn new(
		options: MinerOptions,
		gas_pricer: GasPricer,
		spec: &Spec,
-		accounts: A,
+		accounts: Option<Arc<AccountProvider>>,
	) -> Self {
		let limits = options.pool_limits.clone();
		let verifier_options = options.pool_verification_options.clone();

@@ -290,7 +272,7 @@ impl Miner {
			nonce_cache: NonceCache::new(nonce_cache_size),
			options,
			transaction_queue: Arc::new(TransactionQueue::new(limits, verifier_options, tx_queue_strategy)),
-			accounts: Arc::new(accounts),
+			accounts,
			engine: spec.engine.clone(),
			io_channel: RwLock::new(None),
		}

@@ -299,7 +281,7 @@ impl Miner {
	/// Creates new instance of miner with given spec and accounts.
	///
	/// NOTE This should be only used for tests.
-	pub fn new_for_tests(spec: &Spec, accounts: Option<HashSet<Address>>) -> Miner {
+	pub fn new_for_tests(spec: &Spec, accounts: Option<Arc<AccountProvider>>) -> Miner {
		let minimal_gas_price = 0.into();
		Miner::new(MinerOptions {
			pool_verification_options: pool::verifier::Options {

@@ -310,7 +292,7 @@ impl Miner {
			},
			reseal_min_period: Duration::from_secs(0),
			..Default::default()
-		}, GasPricer::new_fixed(minimal_gas_price), spec, accounts.unwrap_or_default())
+		}, GasPricer::new_fixed(minimal_gas_price), spec, accounts)
	}

	/// Sets `IoChannel`

@@ -377,7 +359,7 @@ impl Miner {
			chain,
			&self.nonce_cache,
			&*self.engine,
-			&*self.accounts,
+			self.accounts.as_ref().map(|x| &**x),
			self.options.refuse_service_transactions,
		)
	}
@@ -845,11 +827,14 @@ impl miner::MinerService for Miner {
		self.params.write().extra_data = extra_data;
	}

-	fn set_author(&self, author: Author) {
-		self.params.write().author = author.address();
+	fn set_author(&self, address: Address, password: Option<Password>) -> Result<(), AccountError> {
+		self.params.write().author = address;

-		if let Author::Sealer(signer) = author {
-			if self.engine.seals_internally().is_some() {
+		if self.engine.seals_internally().is_some() && password.is_some() {
+			if let Some(ref ap) = self.accounts {
+				let password = password.unwrap_or_else(|| Password::from(String::new()));
+				// Sign test message
+				ap.sign(address.clone(), Some(password.clone()), Default::default())?;
				// Enable sealing
				self.sealing.lock().enabled = true;
				// --------------------------------------------------------------------------

@@ -857,10 +842,14 @@ impl miner::MinerService for Miner {
				// | (some `Engine`s call `EngineClient.update_sealing()`) |
				// | Make sure to release the locks before calling that method. |
				// --------------------------------------------------------------------------
-				self.engine.set_signer(signer);
+				self.engine.set_signer(ap.clone(), address, password);
+				Ok(())
			} else {
-				warn!("Setting an EngineSigner while Engine does not require one.");
+				warn!(target: "miner", "No account provider");
+				Err(AccountError::NotFound)
			}
+		} else {
+			Ok(())
		}
	}

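The `set_author` hunks above swap the enum-based API for an address-plus-password form: for internally-sealing engines the miner first signs a test message through the account provider before enabling sealing, and fails with `NotFound` when no provider is configured. A condensed, runnable sketch of that control flow (every type here is a simplified stand-in for the real engine/provider):

```rust
use std::sync::Arc;

#[derive(Debug, PartialEq)]
enum SignError { NotFound }

// Stand-in for AccountProvider: signing succeeds for any address.
struct AccountProvider;
impl AccountProvider {
    fn sign(&self, _address: u64, _password: &str) -> Result<(), SignError> {
        Ok(())
    }
}

struct Miner {
    author: u64,
    sealing_enabled: bool,
    seals_internally: bool, // PoA-style engines seal internally
    accounts: Option<Arc<AccountProvider>>,
}

impl Miner {
    // Mirrors the new shape: set_author(address, password) -> Result.
    fn set_author(&mut self, address: u64, password: Option<&str>) -> Result<(), SignError> {
        self.author = address;
        if self.seals_internally && password.is_some() {
            if let Some(ap) = self.accounts.clone() {
                // Sign a test message to prove the account can actually seal.
                ap.sign(address, password.unwrap_or(""))?;
                self.sealing_enabled = true;
                Ok(())
            } else {
                // No account provider: mirrors Err(AccountError::NotFound).
                Err(SignError::NotFound)
            }
        } else {
            Ok(())
        }
    }
}
```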
@@ -873,32 +862,6 @@ impl miner::MinerService for Miner {
		self.params.read().gas_range_target.0 / 5
	}

-	fn set_minimal_gas_price(&self, new_price: U256) -> Result<bool, &str> {
-		match *self.gas_pricer.lock() {
-			// Binding the gas pricer to `gp` here to prevent
-			// a deadlock when calling recalibrate()
-			ref mut gp @ GasPricer::Fixed(_) => {
-				trace!(target: "miner", "minimal_gas_price: recalibrating fixed...");
-				*gp = GasPricer::new_fixed(new_price);
-
-				let txq = self.transaction_queue.clone();
-				let mut options = self.options.pool_verification_options.clone();
-				gp.recalibrate(move |gas_price| {
-					debug!(target: "miner", "minimal_gas_price: Got gas price! {}", gas_price);
-					options.minimal_gas_price = gas_price;
-					txq.set_verifier_options(options);
-				});
-
-				Ok(true)
-			},
-			#[cfg(feature = "price-info")]
-			GasPricer::Calibrated(_) => {
-				let error_msg = "Can't update fixed gas price while automatic gas calibration is enabled.";
-				return Err(error_msg);
-			},
-		}
-	}
-
	fn import_external_transactions<C: miner::BlockChainClient>(
		&self,
		chain: &C,
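The removed `set_minimal_gas_price` above hinged on one distinction: a `Fixed` pricer accepts manual updates, a calibrated one rejects them. Reduced to a toy enum, the shape of that logic was roughly:

```rust
// Toy version of the gas pricer distinction from the removed method.
#[derive(Debug, PartialEq)]
enum GasPricer {
    Fixed(u64),
    Calibrated, // price tracked from an external feed; manual updates rejected
}

fn set_minimal_gas_price(gp: &mut GasPricer, new_price: u64) -> Result<bool, &'static str> {
    match gp {
        GasPricer::Fixed(price) => {
            *price = new_price;
            Ok(true)
        }
        GasPricer::Calibrated => {
            Err("Can't update fixed gas price while automatic gas calibration is enabled.")
        }
    }
}
```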
@@ -954,12 +917,11 @@ impl miner::MinerService for Miner {
		pending: PendingTransaction,
		trusted: bool
	) -> Result<(), transaction::Error> {
-		// treat the tx as local if the option is enabled, if we have the account, or if
-		// the account is specified as a Prioritized Local Addresses
+		// treat the tx as local if the option is enabled, or if we have the account
		let sender = pending.sender();
		let treat_as_local = trusted
			|| !self.options.tx_queue_no_unfamiliar_locals
-			|| self.accounts.is_local(&sender);
+			|| self.accounts.as_ref().map(|accts| accts.has_account(sender)).unwrap_or(false);

		if treat_as_local {
			self.import_own_transaction(chain, pending)

@@ -1288,7 +1250,7 @@ impl miner::MinerService for Miner {
			chain,
			&nonce_cache,
			&*engine,
-			&*accounts,
+			accounts.as_ref().map(|x| &**x),
			refuse_service_transactions,
		);
		queue.cull(client);

@@ -1322,10 +1284,7 @@ impl miner::MinerService for Miner {

 #[cfg(test)]
 mod tests {
-	use std::iter::FromIterator;
-
	use super::*;
-	use accounts::AccountProvider;
	use ethkey::{Generator, Random};
	use hash::keccak;
	use rustc_hex::FromHex;
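In the `import_claimed_local_transaction` hunk above, the locality check collapses the now-optional account provider with `map(..).unwrap_or(false)`. The predicate in isolation (a slice of known senders standing in for `AccountProvider::has_account`):

```rust
// trusted: caller vouches for the tx; no_unfamiliar_locals: config flag;
// known_senders: stand-in for the optional AccountProvider.
fn treat_as_local(trusted: bool, no_unfamiliar_locals: bool, known_senders: Option<&[u64]>, sender: u64) -> bool {
    trusted
        || !no_unfamiliar_locals
        || known_senders.map(|accts| accts.contains(&sender)).unwrap_or(false)
}
```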
@@ -1333,7 +1292,7 @@ mod tests {

	use client::{TestBlockChainClient, EachBlockWith, ChainInfo, ImportSealedBlock};
	use miner::{MinerService, PendingOrdering};
-	use test_helpers::{generate_dummy_client, generate_dummy_client_with_spec};
+	use test_helpers::{generate_dummy_client, generate_dummy_client_with_spec_and_accounts};
	use types::transaction::{Transaction};

	#[test]

@@ -1396,7 +1355,7 @@ mod tests {
			},
			GasPricer::new_fixed(0u64.into()),
			&Spec::new_test(),
-			::std::collections::HashSet::new(), // local accounts
+			None, // accounts provider
		)
	}

@@ -1509,8 +1468,8 @@ mod tests {
		// given
		let keypair = Random.generate().unwrap();
		let client = TestBlockChainClient::default();
-		let mut local_accounts = ::std::collections::HashSet::new();
-		local_accounts.insert(keypair.address());
+		let account_provider = AccountProvider::transient_provider();
+		account_provider.insert_account(keypair.secret().clone(), &"".into()).expect("can add accounts to the provider we just created");

		let miner = Miner::new(
			MinerOptions {

@@ -1519,7 +1478,7 @@ mod tests {
			},
			GasPricer::new_fixed(0u64.into()),
			&Spec::new_test(),
-			local_accounts,
+			Some(Arc::new(account_provider)),
		);
		let transaction = transaction();
		let best_block = 0;
@@ -1551,32 +1510,6 @@ mod tests {
		assert_eq!(miner.prepare_pending_block(&client), BlockPreparationStatus::NotPrepared);
	}

-	#[test]
-	fn should_prioritize_locals() {
-		let client = TestBlockChainClient::default();
-		let transaction = transaction();
-		let miner = Miner::new(
-			MinerOptions {
-				tx_queue_no_unfamiliar_locals: true, // should work even with this enabled
-				..miner().options
-			},
-			GasPricer::new_fixed(0u64.into()),
-			&Spec::new_test(),
-			HashSet::from_iter(vec![transaction.sender()].into_iter()),
-		);
-		let best_block = 0;
-
-		// Miner with sender as a known local address should prioritize transactions from that address
-		let res2 = miner.import_claimed_local_transaction(&client, PendingTransaction::new(transaction, None), false);
-
-		// check to make sure the prioritized transaction is pending
-		assert_eq!(res2.unwrap(), ());
-		assert_eq!(miner.pending_transactions(best_block).unwrap().len(), 1);
-		assert_eq!(miner.pending_receipts(best_block).unwrap().len(), 1);
-		assert_eq!(miner.ready_transactions(&client, 10, PendingOrdering::Priority).len(), 1);
-		assert_eq!(miner.prepare_pending_block(&client), BlockPreparationStatus::NotPrepared);
-	}
-
	#[test]
	fn should_not_seal_unless_enabled() {
		let miner = miner();

@@ -1614,19 +1547,12 @@ mod tests {
	}

	#[test]
-	fn should_not_fail_setting_engine_signer_without_account_provider() {
-		let spec = Spec::new_test_round;
+	fn should_fail_setting_engine_signer_without_account_provider() {
+		let spec = Spec::new_instant;
		let tap = Arc::new(AccountProvider::transient_provider());
		let addr = tap.insert_account(keccak("1").into(), &"".into()).unwrap();
-		let client = generate_dummy_client_with_spec(spec);
-		let engine_signer = Box::new((tap.clone(), addr, "".into()));
-		let msg = Default::default();
-		assert!(client.engine().sign(msg).is_err());
-
-		// should set engine signer and miner author
-		client.miner().set_author(Author::Sealer(engine_signer));
-		assert_eq!(client.miner().authoring_params().author, addr);
-		assert!(client.engine().sign(msg).is_ok());
+		let client = generate_dummy_client_with_spec_and_accounts(spec, None);
+		assert!(match client.miner().set_author(addr, Some("".into())) { Err(AccountError::NotFound) => true, _ => false });
	}

	#[test]
@@ -1680,60 +1606,4 @@ mod tests {

		assert!(miner.is_currently_sealing());
	}
-
-	#[test]
-	fn should_set_new_minimum_gas_price() {
-		// Creates a new GasPricer::Fixed behind the scenes
-		let miner = Miner::new_for_tests(&Spec::new_test(), None);
-
-		let expected_minimum_gas_price: U256 = 0x1337.into();
-		miner.set_minimal_gas_price(expected_minimum_gas_price).unwrap();
-
-		let txq_options = miner.transaction_queue.status().options;
-		let current_minimum_gas_price = txq_options.minimal_gas_price;
-
-		assert!(current_minimum_gas_price == expected_minimum_gas_price);
-	}
-
-	#[cfg(feature = "price-info")]
-	fn dynamic_gas_pricer() -> GasPricer {
-		use std::time::Duration;
-		use parity_runtime::Executor;
-		use fetch::Client as FetchClient;
-		use ethcore_miner::gas_price_calibrator::{GasPriceCalibrator, GasPriceCalibratorOptions};
-
-		// Don't really care about any of these settings since
-		// the gas pricer is never actually going to be used
-		let fetch = FetchClient::new(1).unwrap();
-		let p = Executor::new_sync();
-
-		GasPricer::new_calibrated(
-			GasPriceCalibrator::new(
-				GasPriceCalibratorOptions {
-					usd_per_tx: 0.0,
-					recalibration_period: Duration::from_secs(0),
-				},
-				fetch,
-				p,
-			)
-		)
-	}
-
-	#[test]
-	#[cfg(feature = "price-info")]
-	fn should_fail_to_set_new_minimum_gas_price() {
-		// We get a fixed gas pricer by default, need to change that
-		let miner = Miner::new_for_tests(&Spec::new_test(), None);
-		let calibrated_gas_pricer = dynamic_gas_pricer();
-		*miner.gas_pricer.lock() = calibrated_gas_pricer;
-
-		let expected_minimum_gas_price: U256 = 0x1337.into();
-		let result = miner.set_minimal_gas_price(expected_minimum_gas_price);
-		assert!(result.is_err());
-
-		let received_error_msg = result.unwrap_err();
-		let expected_error_msg = "Can't update fixed gas price while automatic gas calibration is enabled.";
-
-		assert!(received_error_msg == expected_error_msg);
-	}
 }
@@ -25,8 +25,7 @@ pub mod pool_client;
 #[cfg(feature = "stratum")]
 pub mod stratum;

-pub use self::miner::{Miner, MinerOptions, Penalization, PendingSet, AuthoringParams, Author};
-pub use ethcore_miner::local_accounts::LocalAccounts;
+pub use self::miner::{Miner, MinerOptions, Penalization, PendingSet, AuthoringParams};
 pub use ethcore_miner::pool::PendingOrdering;

 use std::sync::Arc;

@@ -35,6 +34,7 @@ use std::collections::{BTreeSet, BTreeMap};
 use bytes::Bytes;
 use ethcore_miner::pool::{VerifiedTransaction, QueueStatus, local_transactions};
 use ethereum_types::{H256, U256, Address};
+use ethkey::Password;
 use types::transaction::{self, UnverifiedTransaction, SignedTransaction, PendingTransaction};
 use types::BlockNumber;
 use types::block::Block;
@@ -130,8 +130,8 @@ pub trait MinerService : Send + Sync {

	/// Set info necessary to sign consensus messages and block authoring.
	///
-	/// On chains where sealing is done externally (e.g. PoW) we provide only reward beneficiary.
-	fn set_author(&self, author: Author);
+	/// On PoW password is optional.
+	fn set_author(&self, address: Address, password: Option<Password>) -> Result<(), ::account_provider::SignError>;

	// Transaction Pool

@@ -205,8 +205,4 @@ pub trait MinerService : Send + Sync {

	/// Suggested gas limit.
	fn sensible_gas_limit(&self) -> U256;
-
-	/// Set a new minimum gas limit.
-	/// Will not work if dynamic gas calibration is set.
-	fn set_minimal_gas_price(&self, gas_price: U256) -> Result<bool, &str>;
 }
@@ -23,7 +23,6 @@ use std::{
 };
 
 use ethereum_types::{H256, U256, Address};
-use ethcore_miner::local_accounts::LocalAccounts;
 use ethcore_miner::pool;
 use ethcore_miner::pool::client::NonceClient;
 use ethcore_miner::service_transaction_checker::ServiceTransactionChecker;
@@ -35,6 +34,7 @@ use types::transaction::{
 use types::header::Header;
 use parking_lot::RwLock;
 
+use account_provider::AccountProvider;
 use call_contract::CallContract;
 use client::{TransactionId, BlockInfo, Nonce};
 use engines::EthEngine;
@@ -73,7 +73,7 @@ pub struct PoolClient<'a, C: 'a> {
 	chain: &'a C,
 	cached_nonces: CachedNonceClient<'a, C>,
 	engine: &'a EthEngine,
-	accounts: &'a LocalAccounts,
+	accounts: Option<&'a AccountProvider>,
 	best_block_header: Header,
 	service_transaction_checker: Option<ServiceTransactionChecker>,
 }
@@ -92,14 +92,14 @@ impl<'a, C: 'a> Clone for PoolClient<'a, C> {
 }
 
 impl<'a, C: 'a> PoolClient<'a, C> where
 	C: BlockInfo + CallContract,
 {
 	/// Creates new client given chain, nonce cache, accounts and service transaction verifier.
 	pub fn new(
 		chain: &'a C,
 		cache: &'a NonceCache,
 		engine: &'a EthEngine,
-		accounts: &'a LocalAccounts,
+		accounts: Option<&'a AccountProvider>,
 		refuse_service_transactions: bool,
 	) -> Self {
 		let best_block_header = chain.best_block_header();
@@ -151,7 +151,7 @@ impl<'a, C: 'a> pool::client::Client for PoolClient<'a, C> where
 		pool::client::AccountDetails {
 			nonce: self.cached_nonces.account_nonce(address),
 			balance: self.chain.latest_balance(address),
-			is_local: self.accounts.is_local(address),
+			is_local: self.accounts.map_or(false, |accounts| accounts.has_account(*address)),
 		}
 	}
 
@@ -21,7 +21,7 @@ use std::collections::BTreeMap;
 use itertools::Itertools;
 use hash::{keccak};
 use ethereum_types::{H256, U256};
-use hash_db::HashDB;
+use hashdb::HashDB;
 use kvdb::DBValue;
 use keccak_hasher::KeccakHasher;
 use triehash::sec_trie_root;
@@ -22,7 +22,7 @@ use bytes::Bytes;
 use ethereum_types::{H256, U256};
 use ethtrie::{TrieDB, TrieDBMut};
 use hash::{KECCAK_EMPTY, KECCAK_NULL_RLP};
-use hash_db::HashDB;
+use hashdb::HashDB;
 use rlp::{RlpStream, Rlp};
 use snapshot::Error;
 use std::collections::HashSet;
@@ -66,8 +66,7 @@ impl CodeState {
 // account address hash, account properties and the storage. Each item contains at most `max_storage_items`
 // storage records split according to snapshot format definition.
 pub fn to_fat_rlps(account_hash: &H256, acc: &BasicAccount, acct_db: &AccountDB, used_code: &mut HashSet<H256>, first_chunk_size: usize, max_chunk_size: usize) -> Result<Vec<Bytes>, Error> {
-	let db = &(acct_db as &HashDB<_,_>);
-	let db = TrieDB::new(db, &acc.storage_root)?;
+	let db = TrieDB::new(acct_db, &acc.storage_root)?;
 	let mut chunks = Vec::new();
 	let mut db_iter = db.iter()?;
 	let mut target_chunk_size = first_chunk_size;
@@ -78,7 +77,7 @@ pub fn to_fat_rlps(account_hash: &H256, acc: &BasicAccount, acct_db: &AccountDB,
 	account_stream.begin_list(5);
 
 	account_stream.append(&acc.nonce)
 		.append(&acc.balance);
 
 	// [has_code, code_hash].
 	if acc.code_hash == KECCAK_EMPTY {
@@ -188,7 +187,7 @@ pub fn from_fat_rlp(
 	};
 	let pairs = rlp.at(4)?;
 	for pair_rlp in pairs.iter() {
 		let k: Bytes = pair_rlp.val_at(0)?;
 		let v: Bytes = pair_rlp.val_at(1)?;
 
 		storage_trie.insert(&k, &v)?;
@@ -214,7 +213,7 @@ mod tests {
 
 	use hash::{KECCAK_EMPTY, KECCAK_NULL_RLP, keccak};
 	use ethereum_types::{H256, Address};
-	use hash_db::HashDB;
+	use hashdb::HashDB;
 	use kvdb::DBValue;
 	use rlp::Rlp;
 
@@ -237,9 +236,9 @@ mod tests {
 		let thin_rlp = ::rlp::encode(&account);
 		assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp).unwrap(), account);
 
-		let fat_rlps = to_fat_rlps(&keccak(&addr), &account, &AccountDB::new(db.as_hash_db(), &addr), &mut Default::default(), usize::max_value(), usize::max_value()).unwrap();
+		let fat_rlps = to_fat_rlps(&keccak(&addr), &account, &AccountDB::new(db.as_hashdb(), &addr), &mut Default::default(), usize::max_value(), usize::max_value()).unwrap();
 		let fat_rlp = Rlp::new(&fat_rlps[0]).at(1).unwrap();
-		assert_eq!(from_fat_rlp(&mut AccountDBMut::new(db.as_hash_db_mut(), &addr), fat_rlp, H256::zero()).unwrap().0, account);
+		assert_eq!(from_fat_rlp(&mut AccountDBMut::new(db.as_hashdb_mut(), &addr), fat_rlp, H256::zero()).unwrap().0, account);
 	}
 
 	#[test]
@@ -248,7 +247,7 @@ mod tests {
 		let addr = Address::random();
 
 		let account = {
-			let acct_db = AccountDBMut::new(db.as_hash_db_mut(), &addr);
+			let acct_db = AccountDBMut::new(db.as_hashdb_mut(), &addr);
 			let mut root = KECCAK_NULL_RLP;
 			fill_storage(acct_db, &mut root, &mut H256::zero());
 			BasicAccount {
@@ -262,9 +261,9 @@ mod tests {
 		let thin_rlp = ::rlp::encode(&account);
 		assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp).unwrap(), account);
 
-		let fat_rlp = to_fat_rlps(&keccak(&addr), &account, &AccountDB::new(db.as_hash_db(), &addr), &mut Default::default(), usize::max_value(), usize::max_value()).unwrap();
+		let fat_rlp = to_fat_rlps(&keccak(&addr), &account, &AccountDB::new(db.as_hashdb(), &addr), &mut Default::default(), usize::max_value(), usize::max_value()).unwrap();
 		let fat_rlp = Rlp::new(&fat_rlp[0]).at(1).unwrap();
-		assert_eq!(from_fat_rlp(&mut AccountDBMut::new(db.as_hash_db_mut(), &addr), fat_rlp, H256::zero()).unwrap().0, account);
+		assert_eq!(from_fat_rlp(&mut AccountDBMut::new(db.as_hashdb_mut(), &addr), fat_rlp, H256::zero()).unwrap().0, account);
 	}
 
 	#[test]
@@ -273,7 +272,7 @@ mod tests {
 		let addr = Address::random();
 
 		let account = {
-			let acct_db = AccountDBMut::new(db.as_hash_db_mut(), &addr);
+			let acct_db = AccountDBMut::new(db.as_hashdb_mut(), &addr);
 			let mut root = KECCAK_NULL_RLP;
 			fill_storage(acct_db, &mut root, &mut H256::zero());
 			BasicAccount {
@@ -287,12 +286,12 @@ mod tests {
 		let thin_rlp = ::rlp::encode(&account);
 		assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp).unwrap(), account);
 
-		let fat_rlps = to_fat_rlps(&keccak(addr), &account, &AccountDB::new(db.as_hash_db(), &addr), &mut Default::default(), 500, 1000).unwrap();
+		let fat_rlps = to_fat_rlps(&keccak(addr), &account, &AccountDB::new(db.as_hashdb(), &addr), &mut Default::default(), 500, 1000).unwrap();
 		let mut root = KECCAK_NULL_RLP;
 		let mut restored_account = None;
 		for rlp in fat_rlps {
 			let fat_rlp = Rlp::new(&rlp).at(1).unwrap();
-			restored_account = Some(from_fat_rlp(&mut AccountDBMut::new(db.as_hash_db_mut(), &addr), fat_rlp, root).unwrap().0);
+			restored_account = Some(from_fat_rlp(&mut AccountDBMut::new(db.as_hashdb_mut(), &addr), fat_rlp, root).unwrap().0);
 			root = restored_account.as_ref().unwrap().storage_root.clone();
 		}
 		assert_eq!(restored_account, Some(account));
@@ -306,12 +305,12 @@ mod tests {
 		let addr2 = Address::random();
 
 		let code_hash = {
-			let mut acct_db = AccountDBMut::new(db.as_hash_db_mut(), &addr1);
+			let mut acct_db = AccountDBMut::new(db.as_hashdb_mut(), &addr1);
 			acct_db.insert(b"this is definitely code")
 		};
 
 		{
-			let mut acct_db = AccountDBMut::new(db.as_hash_db_mut(), &addr2);
+			let mut acct_db = AccountDBMut::new(db.as_hashdb_mut(), &addr2);
 			acct_db.emplace(code_hash.clone(), DBValue::from_slice(b"this is definitely code"));
 		}
 
@@ -331,18 +330,18 @@ mod tests {
 
 		let mut used_code = HashSet::new();
 
-		let fat_rlp1 = to_fat_rlps(&keccak(&addr1), &account1, &AccountDB::new(db.as_hash_db(), &addr1), &mut used_code, usize::max_value(), usize::max_value()).unwrap();
-		let fat_rlp2 = to_fat_rlps(&keccak(&addr2), &account2, &AccountDB::new(db.as_hash_db(), &addr2), &mut used_code, usize::max_value(), usize::max_value()).unwrap();
+		let fat_rlp1 = to_fat_rlps(&keccak(&addr1), &account1, &AccountDB::new(db.as_hashdb(), &addr1), &mut used_code, usize::max_value(), usize::max_value()).unwrap();
+		let fat_rlp2 = to_fat_rlps(&keccak(&addr2), &account2, &AccountDB::new(db.as_hashdb(), &addr2), &mut used_code, usize::max_value(), usize::max_value()).unwrap();
 		assert_eq!(used_code.len(), 1);
 
 		let fat_rlp1 = Rlp::new(&fat_rlp1[0]).at(1).unwrap();
 		let fat_rlp2 = Rlp::new(&fat_rlp2[0]).at(1).unwrap();
 
-		let (acc, maybe_code) = from_fat_rlp(&mut AccountDBMut::new(db.as_hash_db_mut(), &addr2), fat_rlp2, H256::zero()).unwrap();
+		let (acc, maybe_code) = from_fat_rlp(&mut AccountDBMut::new(db.as_hashdb_mut(), &addr2), fat_rlp2, H256::zero()).unwrap();
 		assert!(maybe_code.is_none());
 		assert_eq!(acc, account2);
 
-		let (acc, maybe_code) = from_fat_rlp(&mut AccountDBMut::new(db.as_hash_db_mut(), &addr1), fat_rlp1, H256::zero()).unwrap();
+		let (acc, maybe_code) = from_fat_rlp(&mut AccountDBMut::new(db.as_hashdb_mut(), &addr1), fat_rlp1, H256::zero()).unwrap();
 		assert_eq!(maybe_code, Some(b"this is definitely code".to_vec()));
 		assert_eq!(acc, account1);
 	}
@@ -350,6 +349,6 @@ mod tests {
 	#[test]
 	fn encoding_empty_acc() {
 		let mut db = get_temp_state_db();
-		assert_eq!(from_fat_rlp(&mut AccountDBMut::new(db.as_hash_db_mut(), &Address::default()), Rlp::new(&::rlp::NULL_RLP), H256::zero()).unwrap(), (ACC_EMPTY, None));
+		assert_eq!(from_fat_rlp(&mut AccountDBMut::new(db.as_hashdb_mut(), &Address::default()), Rlp::new(&::rlp::NULL_RLP), H256::zero()).unwrap(), (ACC_EMPTY, None));
	}
 }
@@ -32,7 +32,7 @@ use types::header::Header;
 use types::ids::BlockId;
 
 use ethereum_types::{H256, U256};
-use hash_db::HashDB;
+use hashdb::HashDB;
 use keccak_hasher::KeccakHasher;
 use snappy;
 use bytes::Bytes;
@@ -322,7 +322,7 @@ impl<'a> StateChunker<'a> {
 /// Returns a list of hashes of chunks created, or any error it may
 /// have encountered.
 pub fn chunk_state<'a>(db: &HashDB<KeccakHasher, DBValue>, root: &H256, writer: &Mutex<SnapshotWriter + 'a>, progress: &'a Progress, part: Option<usize>) -> Result<Vec<H256>, Error> {
-	let account_trie = TrieDB::new(&db, &root)?;
+	let account_trie = TrieDB::new(db, &root)?;
 
 	let mut chunker = StateChunker {
 		hashes: Vec::new(),
@@ -414,7 +414,7 @@ impl StateRebuilder {
 		pairs.resize(rlp.item_count()?, (H256::new(), Vec::new()));
 
 		let status = rebuild_accounts(
-			self.db.as_hash_db_mut(),
+			self.db.as_hashdb_mut(),
 			rlp,
 			&mut pairs,
 			&self.known_code,
@@ -429,7 +429,7 @@ impl StateRebuilder {
 		// patch up all missing code. must be done after collecting all new missing code entries.
 		for (code_hash, code, first_with) in status.new_code {
 			for addr_hash in self.missing_code.remove(&code_hash).unwrap_or_else(Vec::new) {
-				let mut db = AccountDBMut::from_hash(self.db.as_hash_db_mut(), addr_hash);
+				let mut db = AccountDBMut::from_hash(self.db.as_hashdb_mut(), addr_hash);
 				db.emplace(code_hash, DBValue::from_slice(&code));
 			}
 
@@ -441,9 +441,9 @@ impl StateRebuilder {
 		// batch trie writes
 		{
 			let mut account_trie = if self.state_root != KECCAK_NULL_RLP {
-				TrieDBMut::from_existing(self.db.as_hash_db_mut(), &mut self.state_root)?
+				TrieDBMut::from_existing(self.db.as_hashdb_mut(), &mut self.state_root)?
 			} else {
-				TrieDBMut::new(self.db.as_hash_db_mut(), &mut self.state_root)
+				TrieDBMut::new(self.db.as_hashdb_mut(), &mut self.state_root)
 			};
 
 			for (hash, thin_rlp) in pairs {
@@ -340,7 +340,7 @@ impl Service {
 	// replace one the client's database with our own.
 	fn replace_client_db(&self) -> Result<(), Error> {
 		let migrated_blocks = self.migrate_blocks()?;
-		info!(target: "snapshot", "Migrated {} ancient blocks", migrated_blocks);
+		trace!(target: "snapshot", "Migrated {} ancient blocks", migrated_blocks);
 
 		let rest_db = self.restoration_db();
 		self.client.restore_db(&*rest_db.to_string_lossy())?;
@@ -424,7 +424,7 @@ impl Service {
 			}
 
 			if block_number % 10_000 == 0 {
-				info!(target: "snapshot", "Block restoration at #{}", block_number);
+				trace!(target: "snapshot", "Block restoration at #{}", block_number);
 			}
 		}
 
@@ -35,7 +35,7 @@ use rand::Rng;
 
 use kvdb::DBValue;
 use ethereum_types::H256;
-use hash_db::HashDB;
+use hashdb::HashDB;
 use keccak_hasher::KeccakHasher;
 use journaldb;
 use trie::{TrieMut, Trie};
@@ -65,7 +65,7 @@ impl StateProducer {
 	pub fn tick<R: Rng>(&mut self, rng: &mut R, db: &mut HashDB<KeccakHasher, DBValue>) {
 		// modify existing accounts.
 		let mut accounts_to_modify: Vec<_> = {
-			let trie = TrieDB::new(&db, &self.state_root).unwrap();
+			let trie = TrieDB::new(&*db, &self.state_root).unwrap();
 			let temp = trie.iter().unwrap() // binding required due to complicated lifetime stuff
 				.filter(|_| rng.gen::<f32>() < ACCOUNT_CHURN)
 				.map(Result::unwrap)
@@ -130,6 +130,15 @@ pub fn fill_storage(mut db: AccountDBMut, root: &mut H256, seed: &mut H256) {
 	}
 }
 
+/// Compare two state dbs.
+pub fn compare_dbs(one: &HashDB<KeccakHasher, DBValue>, two: &HashDB<KeccakHasher, DBValue>) {
+	let keys = one.keys();
+
+	for key in keys.keys() {
+		assert_eq!(one.get(&key).unwrap(), two.get(&key).unwrap());
+	}
+}
+
 /// Take a snapshot from the given client into a temporary file.
 /// Return a snapshot reader for it.
 pub fn snap(client: &Client) -> (Box<SnapshotReader>, TempDir) {
@@ -20,12 +20,12 @@ use std::cell::RefCell;
 use std::sync::Arc;
 use std::str::FromStr;
 
-use accounts::AccountProvider;
+use account_provider::AccountProvider;
 use client::{Client, BlockChainClient, ChainInfo};
 use ethkey::Secret;
 use snapshot::tests::helpers as snapshot_helpers;
 use spec::Spec;
-use test_helpers::generate_dummy_client_with_spec;
+use test_helpers::generate_dummy_client_with_spec_and_accounts;
 use types::transaction::{Transaction, Action, SignedTransaction};
 use tempdir::TempDir;
 
@@ -88,7 +88,8 @@ enum Transition {
 
 // create a chain with the given transitions and some blocks beyond that transition.
 fn make_chain(accounts: Arc<AccountProvider>, blocks_beyond: usize, transitions: Vec<Transition>) -> Arc<Client> {
-	let client = generate_dummy_client_with_spec(spec_fixed_to_contract);
+	let client = generate_dummy_client_with_spec_and_accounts(
+		spec_fixed_to_contract, Some(accounts.clone()));
 
 	let mut cur_signers = vec![*RICH_ADDR];
 	{
@@ -99,14 +100,13 @@ fn make_chain(accounts: Arc<AccountProvider>, blocks_beyond: usize, transitions:
 	{
 		// push a block with given number, signed by one of the signers, with given transactions.
 		let push_block = |signers: &[Address], n, txs: Vec<SignedTransaction>| {
-			use miner::{self, MinerService};
+			use miner::MinerService;
 
 			let idx = n as usize % signers.len();
 			trace!(target: "snapshot", "Pushing block #{}, {} txs, author={}",
 				n, txs.len(), signers[idx]);
 
-			let signer = Box::new((accounts.clone(), signers[idx], PASS.into()));
-			client.miner().set_author(miner::Author::Sealer(signer));
+			client.miner().set_author(signers[idx], Some(PASS.into())).unwrap();
 			client.miner().import_external_transactions(&*client,
 				txs.into_iter().map(Into::into).collect());
 
@@ -184,7 +184,7 @@ fn keep_ancient_blocks() {
 	let start_header = bc.block_header_data(&best_hash).unwrap();
 	let state_root = start_header.state_root();
 	let state_hashes = chunk_state(
-		state_db.as_hash_db(),
+		state_db.as_hashdb(),
 		&state_root,
 		&writer,
 		&Progress::default(),
@@ -24,7 +24,7 @@ use types::basic_account::BasicAccount;
 use snapshot::account;
 use snapshot::{chunk_state, Error as SnapshotError, Progress, StateRebuilder, SNAPSHOT_SUBPARTS};
 use snapshot::io::{PackedReader, PackedWriter, SnapshotReader, SnapshotWriter};
-use super::helpers::StateProducer;
+use super::helpers::{compare_dbs, StateProducer};
 
 use error::{Error, ErrorKind};
 
@@ -32,15 +32,15 @@ use rand::{XorShiftRng, SeedableRng};
 use ethereum_types::H256;
 use journaldb::{self, Algorithm};
 use kvdb_rocksdb::{Database, DatabaseConfig};
+use memorydb::MemoryDB;
 use parking_lot::Mutex;
 use tempdir::TempDir;
 
 #[test]
 fn snap_and_restore() {
-	use hash_db::HashDB;
 	let mut producer = StateProducer::new();
 	let mut rng = XorShiftRng::from_seed([1, 2, 3, 4]);
-	let mut old_db = journaldb::new_memory_db();
+	let mut old_db = MemoryDB::new();
 	let db_cfg = DatabaseConfig::with_columns(::db::NUM_COLUMNS);
 
 	for _ in 0..150 {
@@ -91,11 +91,8 @@ fn snap_and_restore() {
 
 	let new_db = journaldb::new(db, Algorithm::OverlayRecent, ::db::COL_STATE);
 	assert_eq!(new_db.earliest_era(), Some(1000));
-	let keys = old_db.keys();
 
-	for key in keys.keys() {
-		assert_eq!(old_db.get(&key).unwrap(), new_db.as_hash_db().get(&key).unwrap());
-	}
+	compare_dbs(&old_db, new_db.as_hashdb());
 }
 
 #[test]
@@ -103,7 +100,7 @@ fn get_code_from_prev_chunk() {
 	use std::collections::HashSet;
 	use rlp::RlpStream;
 	use ethereum_types::{H256, U256};
-	use hash_db::HashDB;
+	use hashdb::HashDB;
 
 	use account_db::{AccountDBMut, AccountDB};
 
@@ -124,7 +121,7 @@ fn get_code_from_prev_chunk() {
 	let acc: BasicAccount = ::rlp::decode(&thin_rlp).expect("error decoding basic account");
 
 	let mut make_chunk = |acc, hash| {
-		let mut db = journaldb::new_memory_db();
+		let mut db = MemoryDB::new();
 		AccountDBMut::from_hash(&mut db, hash).insert(&code[..]);
 
 		let fat_rlp = account::to_fat_rlps(&hash, &acc, &AccountDB::from_hash(&db, hash), &mut used_code, usize::max_value(), usize::max_value()).unwrap();
@@ -158,7 +155,7 @@ fn get_code_from_prev_chunk() {
 fn checks_flag() {
 	let mut producer = StateProducer::new();
 	let mut rng = XorShiftRng::from_seed([5, 6, 7, 8]);
-	let mut old_db = journaldb::new_memory_db();
+	let mut old_db = MemoryDB::new();
 	let db_cfg = DatabaseConfig::with_columns(::db::NUM_COLUMNS);
 
 	for _ in 0..10 {
@@ -25,6 +25,7 @@ use bytes::Bytes;
 use ethereum_types::{H256, Bloom, U256, Address};
 use ethjson;
 use hash::{KECCAK_NULL_RLP, keccak};
+use memorydb::MemoryDB;
 use parking_lot::RwLock;
 use rlp::{Rlp, RlpStream};
 use rustc_hex::{FromHex, ToHex};
@@ -555,7 +556,7 @@ fn load_from(spec_params: SpecParams, s: ethjson::spec::Spec) -> Result<Spec, Er
 		None => {
 			let _ = s.run_constructors(
 				&Default::default(),
-				BasicBackend(journaldb::new_memory_db()),
+				BasicBackend(MemoryDB::new()),
 			)?;
 		}
 	}
@@ -623,7 +624,7 @@ impl Spec {
 
 		// basic accounts in spec.
 		{
-			let mut t = factories.trie.create(db.as_hash_db_mut(), &mut root);
+			let mut t = factories.trie.create(db.as_hashdb_mut(), &mut root);
 
 			for (address, account) in self.genesis_state.get().iter() {
 				t.insert(&**address, &account.rlp())?;
@@ -634,7 +635,7 @@ impl Spec {
 				db.note_non_null_account(address);
 				account.insert_additional(
 					&mut *factories.accountdb.create(
-						db.as_hash_db_mut(),
+						db.as_hashdb_mut(),
 						keccak(address),
 					),
 					&factories.trie,
@@ -791,7 +792,7 @@ impl Spec {
 		self.genesis_state = s;
 		let _ = self.run_constructors(
 			&Default::default(),
-			BasicBackend(journaldb::new_memory_db()),
+			BasicBackend(MemoryDB::new()),
 		)?;
 
 		Ok(())
@@ -812,7 +813,7 @@ impl Spec {
 
 	/// Ensure that the given state DB has the trie nodes in for the genesis state.
 	pub fn ensure_db_good<T: Backend>(&self, db: T, factories: &Factories) -> Result<T, Error> {
-		if db.as_hash_db().contains(&self.state_root()) {
+		if db.as_hashdb().contains(&self.state_root()) {
 			return Ok(db);
 		}
 
@@ -859,7 +860,7 @@ impl Spec {
 			None,
 		);
 
-		self.ensure_db_good(BasicBackend(db.as_hash_db_mut()), &factories)
+		self.ensure_db_good(BasicBackend(db.as_hashdb_mut()), &factories)
 			.map_err(|e| format!("Unable to initialize genesis state: {}", e))?;
 
 		let call = |a, d| {
@@ -885,7 +886,7 @@ impl Spec {
 		}.fake_sign(from);
 
 		let res = ::state::prove_transaction_virtual(
-			db.as_hash_db_mut(),
+			db.as_hashdb_mut(),
 			*genesis.state_root(),
 			&tx,
 			self.engine.machine(),
@@ -22,7 +22,7 @@ use std::collections::{HashMap, BTreeMap};
|
|||||||
use hash::{KECCAK_EMPTY, KECCAK_NULL_RLP, keccak};
|
use hash::{KECCAK_EMPTY, KECCAK_NULL_RLP, keccak};
|
||||||
use ethereum_types::{H256, U256, Address};
|
use ethereum_types::{H256, U256, Address};
|
||||||
use error::Error;
|
use error::Error;
|
||||||
use hash_db::HashDB;
|
use hashdb::HashDB;
|
||||||
use keccak_hasher::KeccakHasher;
|
use keccak_hasher::KeccakHasher;
|
||||||
use kvdb::DBValue;
|
use kvdb::DBValue;
|
||||||
use bytes::{Bytes, ToPretty};
|
use bytes::{Bytes, ToPretty};
|
||||||
@@ -253,7 +253,7 @@ impl Account {
|
|||||||
}
|
}
|
||||||
|
|
||||||
fn get_and_cache_storage(storage_root: &H256, storage_cache: &mut LruCache<H256, H256>, db: &HashDB<KeccakHasher, DBValue>, key: &H256) -> TrieResult<H256> {
|
fn get_and_cache_storage(storage_root: &H256, storage_cache: &mut LruCache<H256, H256>, db: &HashDB<KeccakHasher, DBValue>, key: &H256) -> TrieResult<H256> {
|
||||||
let db = SecTrieDB::new(&db, storage_root)?;
|
let db = SecTrieDB::new(db, storage_root)?;
|
||||||
let panicky_decoder = |bytes:&[u8]| ::rlp::decode(&bytes).expect("decoding db value failed");
|
let panicky_decoder = |bytes:&[u8]| ::rlp::decode(&bytes).expect("decoding db value failed");
|
||||||
let item: U256 = db.get_with(key, panicky_decoder)?.unwrap_or_else(U256::zero);
|
let item: U256 = db.get_with(key, panicky_decoder)?.unwrap_or_else(U256::zero);
|
||||||
let value: H256 = item.into();
|
let value: H256 = item.into();
|
||||||
@@ -591,7 +591,7 @@ impl Account {
|
|||||||
pub fn prove_storage(&self, db: &HashDB<KeccakHasher, DBValue>, storage_key: H256) -> TrieResult<(Vec<Bytes>, H256)> {
|
pub fn prove_storage(&self, db: &HashDB<KeccakHasher, DBValue>, storage_key: H256) -> TrieResult<(Vec<Bytes>, H256)> {
|
||||||
let mut recorder = Recorder::new();
|
let mut recorder = Recorder::new();
|
||||||
|
|
||||||
let trie = TrieDB::new(&db, &self.storage_root)?;
|
let trie = TrieDB::new(db, &self.storage_root)?;
|
||||||
let item: U256 = {
|
let item: U256 = {
|
||||||
let panicky_decoder = |bytes:&[u8]| ::rlp::decode(bytes).expect("decoding db value failed");
|
let panicky_decoder = |bytes:&[u8]| ::rlp::decode(bytes).expect("decoding db value failed");
|
||||||
let query = (&mut recorder, panicky_decoder);
|
let query = (&mut recorder, panicky_decoder);
|
||||||
@@ -617,7 +617,7 @@ impl fmt::Debug for Account {
|
|||||||
mod tests {
|
mod tests {
|
||||||
use rlp_compress::{compress, decompress, snapshot_swapper};
|
use rlp_compress::{compress, decompress, snapshot_swapper};
|
||||||
use ethereum_types::{H256, Address};
|
use ethereum_types::{H256, Address};
|
||||||
use journaldb::new_memory_db;
|
use memorydb::MemoryDB;
|
||||||
use bytes::Bytes;
|
use bytes::Bytes;
|
||||||
use super::*;
|
use super::*;
|
||||||
use account_db::*;
|
use account_db::*;
|
||||||
@@ -633,7 +633,7 @@ mod tests {
|
|||||||
|
|
||||||
#[test]
|
#[test]
|
||||||
fn storage_at() {
|
fn storage_at() {
|
||||||
let mut db = new_memory_db();
|
let mut db = MemoryDB::new();
|
||||||
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
||||||
let rlp = {
|
let rlp = {
|
||||||
let mut a = Account::new_contract(69.into(), 0.into(), KECCAK_NULL_RLP);
|
let mut a = Account::new_contract(69.into(), 0.into(), KECCAK_NULL_RLP);
|
||||||
@@ -652,7 +652,7 @@ mod tests {
|
|||||||
|
|
||||||
#[test]
|
#[test]
|
||||||
fn note_code() {
|
fn note_code() {
|
||||||
let mut db = new_memory_db();
|
let mut db = MemoryDB::new();
|
||||||
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
||||||
|
|
||||||
let rlp = {
|
let rlp = {
|
||||||
@@ -672,7 +672,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn commit_storage() {
|
fn commit_storage() {
|
||||||
let mut a = Account::new_contract(69.into(), 0.into(), KECCAK_NULL_RLP);
|
let mut a = Account::new_contract(69.into(), 0.into(), KECCAK_NULL_RLP);
|
||||||
let mut db = new_memory_db();
|
let mut db = MemoryDB::new();
|
||||||
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
||||||
a.set_storage(0.into(), 0x1234.into());
|
a.set_storage(0.into(), 0x1234.into());
|
||||||
assert_eq!(a.storage_root(), None);
|
assert_eq!(a.storage_root(), None);
|
||||||
@@ -683,7 +683,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn commit_remove_commit_storage() {
|
fn commit_remove_commit_storage() {
|
||||||
let mut a = Account::new_contract(69.into(), 0.into(), KECCAK_NULL_RLP);
|
let mut a = Account::new_contract(69.into(), 0.into(), KECCAK_NULL_RLP);
|
||||||
let mut db = new_memory_db();
|
let mut db = MemoryDB::new();
|
||||||
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
||||||
a.set_storage(0.into(), 0x1234.into());
|
a.set_storage(0.into(), 0x1234.into());
|
||||||
a.commit_storage(&Default::default(), &mut db).unwrap();
|
a.commit_storage(&Default::default(), &mut db).unwrap();
|
||||||
@@ -697,7 +697,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn commit_code() {
|
fn commit_code() {
|
||||||
let mut a = Account::new_contract(69.into(), 0.into(), KECCAK_NULL_RLP);
|
let mut a = Account::new_contract(69.into(), 0.into(), KECCAK_NULL_RLP);
|
||||||
let mut db = new_memory_db();
|
let mut db = MemoryDB::new();
|
||||||
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
||||||
a.init_code(vec![0x55, 0x44, 0xffu8]);
|
a.init_code(vec![0x55, 0x44, 0xffu8]);
|
||||||
assert_eq!(a.code_filth, Filth::Dirty);
|
assert_eq!(a.code_filth, Filth::Dirty);
|
||||||
@@ -709,7 +709,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn reset_code() {
|
fn reset_code() {
|
||||||
let mut a = Account::new_contract(69.into(), 0.into(), KECCAK_NULL_RLP);
|
let mut a = Account::new_contract(69.into(), 0.into(), KECCAK_NULL_RLP);
|
||||||
let mut db = new_memory_db();
|
let mut db = MemoryDB::new();
|
||||||
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
let mut db = AccountDBMut::new(&mut db, &Address::new());
|
||||||
a.init_code(vec![0x55, 0x44, 0xffu8]);
|
a.init_code(vec![0x55, 0x44, 0xffu8]);
|
||||||
assert_eq!(a.code_filth, Filth::Dirty);
|
assert_eq!(a.code_filth, Filth::Dirty);
|
||||||
|
|||||||
@@ -27,19 +27,18 @@ use std::sync::Arc;
|
|||||||
use state::Account;
|
use state::Account;
|
||||||
use parking_lot::Mutex;
|
use parking_lot::Mutex;
|
||||||
use ethereum_types::{Address, H256};
|
use ethereum_types::{Address, H256};
|
||||||
use memory_db::MemoryDB;
|
use memorydb::MemoryDB;
|
||||||
use hash_db::{AsHashDB, HashDB};
|
use hashdb::{AsHashDB, HashDB};
|
||||||
use kvdb::DBValue;
|
use kvdb::DBValue;
|
||||||
use keccak_hasher::KeccakHasher;
|
use keccak_hasher::KeccakHasher;
|
||||||
use journaldb::AsKeyedHashDB;
|
|
||||||
|
|
||||||
/// State backend. See module docs for more details.
|
/// State backend. See module docs for more details.
|
||||||
pub trait Backend: Send {
|
pub trait Backend: Send {
|
||||||
/// Treat the backend as a read-only hashdb.
|
/// Treat the backend as a read-only hashdb.
|
||||||
fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue>;
|
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue>;
|
||||||
|
|
||||||
/// Treat the backend as a writeable hashdb.
|
/// Treat the backend as a writeable hashdb.
|
||||||
fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue>;
|
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue>;
|
||||||
|
|
||||||
/// Add an account entry to the cache.
|
/// Add an account entry to the cache.
|
||||||
fn add_to_account_cache(&mut self, addr: Address, data: Option<Account>, modified: bool);
|
fn add_to_account_cache(&mut self, addr: Address, data: Option<Account>, modified: bool);
|
||||||
@@ -83,17 +82,14 @@ pub struct ProofCheck(MemoryDB<KeccakHasher, DBValue>);
|
|||||||
impl ProofCheck {
|
impl ProofCheck {
|
||||||
/// Create a new `ProofCheck` backend from the given state items.
|
/// Create a new `ProofCheck` backend from the given state items.
|
||||||
pub fn new(proof: &[DBValue]) -> Self {
|
pub fn new(proof: &[DBValue]) -> Self {
|
||||||
let mut db = journaldb::new_memory_db();
|
let mut db = MemoryDB::<KeccakHasher, DBValue>::new();
|
||||||
for item in proof { db.insert(item); }
|
for item in proof { db.insert(item); }
|
||||||
ProofCheck(db)
|
ProofCheck(db)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
impl journaldb::KeyedHashDB for ProofCheck {
|
|
||||||
fn keys(&self) -> HashMap<H256, i32> { self.0.keys() }
|
|
||||||
}
|
|
||||||
|
|
||||||
impl HashDB<KeccakHasher, DBValue> for ProofCheck {
|
impl HashDB<KeccakHasher, DBValue> for ProofCheck {
|
||||||
|
fn keys(&self) -> HashMap<H256, i32> { self.0.keys() }
|
||||||
fn get(&self, key: &H256) -> Option<DBValue> {
|
fn get(&self, key: &H256) -> Option<DBValue> {
|
||||||
self.0.get(key)
|
self.0.get(key)
|
||||||
}
|
}
|
||||||
@@ -114,13 +110,13 @@ impl HashDB<KeccakHasher, DBValue> for ProofCheck {
|
|||||||
}
|
}
|
||||||
|
|
||||||
impl AsHashDB<KeccakHasher, DBValue> for ProofCheck {
|
impl AsHashDB<KeccakHasher, DBValue> for ProofCheck {
|
||||||
fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue> { self }
|
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
|
||||||
fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
|
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
|
||||||
}
|
}
|
||||||
|
|
||||||
impl Backend for ProofCheck {
|
impl Backend for ProofCheck {
|
||||||
fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue> { self }
|
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
|
||||||
fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
|
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
|
||||||
fn add_to_account_cache(&mut self, _addr: Address, _data: Option<Account>, _modified: bool) {}
|
fn add_to_account_cache(&mut self, _addr: Address, _data: Option<Account>, _modified: bool) {}
|
||||||
fn cache_code(&self, _hash: H256, _code: Arc<Vec<u8>>) {}
|
fn cache_code(&self, _hash: H256, _code: Arc<Vec<u8>>) {}
|
||||||
fn get_cached_account(&self, _addr: &Address) -> Option<Option<Account>> { None }
|
fn get_cached_account(&self, _addr: &Address) -> Option<Option<Account>> { None }
|
||||||
@@ -139,32 +135,26 @@ impl Backend for ProofCheck {
|
|||||||
/// The proof-of-execution can be extracted with `extract_proof`.
|
/// The proof-of-execution can be extracted with `extract_proof`.
|
||||||
///
|
///
|
||||||
/// This doesn't cache anything or rely on the canonical state caches.
|
/// This doesn't cache anything or rely on the canonical state caches.
|
||||||
pub struct Proving<H> {
|
pub struct Proving<H: AsHashDB<KeccakHasher, DBValue>> {
|
||||||
base: H, // state we're proving values from.
|
base: H, // state we're proving values from.
|
||||||
changed: MemoryDB<KeccakHasher, DBValue>, // changed state via insertions.
|
changed: MemoryDB<KeccakHasher, DBValue>, // changed state via insertions.
|
||||||
proof: Mutex<HashSet<DBValue>>,
|
proof: Mutex<HashSet<DBValue>>,
|
||||||
}
|
}
|
||||||
|
|
||||||
impl<AH: AsKeyedHashDB + Send + Sync> AsKeyedHashDB for Proving<AH> {
|
|
||||||
fn as_keyed_hash_db(&self) -> &journaldb::KeyedHashDB { self }
|
|
||||||
}
|
|
||||||
|
|
||||||
impl<AH: AsHashDB<KeccakHasher, DBValue> + Send + Sync> AsHashDB<KeccakHasher, DBValue> for Proving<AH> {
|
impl<AH: AsHashDB<KeccakHasher, DBValue> + Send + Sync> AsHashDB<KeccakHasher, DBValue> for Proving<AH> {
|
||||||
fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue> { self }
|
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
|
||||||
fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
|
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
|
||||||
}
|
|
||||||
|
|
||||||
impl<H: AsKeyedHashDB + Send + Sync> journaldb::KeyedHashDB for Proving<H> {
|
|
||||||
fn keys(&self) -> HashMap<H256, i32> {
|
|
||||||
let mut keys = self.base.as_keyed_hash_db().keys();
|
|
||||||
keys.extend(self.changed.keys());
|
|
||||||
keys
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
impl<H: AsHashDB<KeccakHasher, DBValue> + Send + Sync> HashDB<KeccakHasher, DBValue> for Proving<H> {
|
impl<H: AsHashDB<KeccakHasher, DBValue> + Send + Sync> HashDB<KeccakHasher, DBValue> for Proving<H> {
|
||||||
|
fn keys(&self) -> HashMap<H256, i32> {
|
||||||
|
let mut keys = self.base.as_hashdb().keys();
|
||||||
|
keys.extend(self.changed.keys());
|
||||||
|
keys
|
||||||
|
}
|
||||||
|
|
||||||
fn get(&self, key: &H256) -> Option<DBValue> {
|
fn get(&self, key: &H256) -> Option<DBValue> {
|
||||||
match self.base.as_hash_db().get(key) {
|
match self.base.as_hashdb().get(key) {
|
||||||
Some(val) => {
|
Some(val) => {
|
||||||
self.proof.lock().insert(val.clone());
|
self.proof.lock().insert(val.clone());
|
||||||
Some(val)
|
Some(val)
|
||||||
@@ -194,9 +184,9 @@ impl<H: AsHashDB<KeccakHasher, DBValue> + Send + Sync> HashDB<KeccakHasher, DBVa
|
|||||||
}
|
}
|
||||||
|
|
||||||
impl<H: AsHashDB<KeccakHasher, DBValue> + Send + Sync> Backend for Proving<H> {
|
impl<H: AsHashDB<KeccakHasher, DBValue> + Send + Sync> Backend for Proving<H> {
|
||||||
fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue> { self }
|
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
|
||||||
|
|
||||||
fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
|
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
|
||||||
|
|
||||||
fn add_to_account_cache(&mut self, _: Address, _: Option<Account>, _: bool) { }
|
fn add_to_account_cache(&mut self, _: Address, _: Option<Account>, _: bool) { }
|
||||||
|
|
||||||
@@ -221,7 +211,7 @@ impl<H: AsHashDB<KeccakHasher, DBValue>> Proving<H> {
|
|||||||
pub fn new(base: H) -> Self {
|
pub fn new(base: H) -> Self {
|
||||||
Proving {
|
Proving {
|
||||||
base: base,
|
base: base,
|
||||||
changed: journaldb::new_memory_db(),
|
changed: MemoryDB::<KeccakHasher, DBValue>::new(),
|
||||||
proof: Mutex::new(HashSet::new()),
|
proof: Mutex::new(HashSet::new()),
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -248,12 +238,12 @@ impl<H: AsHashDB<KeccakHasher, DBValue> + Clone> Clone for Proving<H> {
|
|||||||
pub struct Basic<H>(pub H);
|
pub struct Basic<H>(pub H);
|
||||||
|
|
||||||
impl<H: AsHashDB<KeccakHasher, DBValue> + Send + Sync> Backend for Basic<H> {
|
impl<H: AsHashDB<KeccakHasher, DBValue> + Send + Sync> Backend for Basic<H> {
|
||||||
fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue> {
|
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> {
|
||||||
self.0.as_hash_db()
|
self.0.as_hashdb()
|
||||||
}
|
}
|
||||||
|
|
||||||
fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> {
|
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> {
|
||||||
self.0.as_hash_db_mut()
|
self.0.as_hashdb_mut()
|
||||||
}
|
}
|
||||||
|
|
||||||
fn add_to_account_cache(&mut self, _: Address, _: Option<Account>, _: bool) { }
|
fn add_to_account_cache(&mut self, _: Address, _: Option<Account>, _: bool) { }
|
||||||
|
|||||||
@@ -43,7 +43,7 @@ use state_db::StateDB;
|
|||||||
use factory::VmFactory;
|
use factory::VmFactory;
|
||||||
|
|
||||||
use ethereum_types::{H256, U256, Address};
|
use ethereum_types::{H256, U256, Address};
|
||||||
use hash_db::{HashDB, AsHashDB};
|
use hashdb::{HashDB, AsHashDB};
|
||||||
use keccak_hasher::KeccakHasher;
|
use keccak_hasher::KeccakHasher;
|
||||||
use kvdb::DBValue;
|
use kvdb::DBValue;
|
||||||
use bytes::Bytes;
|
use bytes::Bytes;
|
||||||
@@ -366,7 +366,7 @@ impl<B: Backend> State<B> {
|
|||||||
let mut root = H256::new();
|
let mut root = H256::new();
|
||||||
{
|
{
|
||||||
// init trie and reset root to null
|
// init trie and reset root to null
|
||||||
let _ = factories.trie.create(db.as_hash_db_mut(), &mut root);
|
let _ = factories.trie.create(db.as_hashdb_mut(), &mut root);
|
||||||
}
|
}
|
||||||
|
|
||||||
State {
|
State {
|
||||||
@@ -381,7 +381,7 @@ impl<B: Backend> State<B> {
|
|||||||
|
|
||||||
/// Creates new state with existing state root
|
/// Creates new state with existing state root
|
||||||
pub fn from_existing(db: B, root: H256, account_start_nonce: U256, factories: Factories) -> TrieResult<State<B>> {
|
pub fn from_existing(db: B, root: H256, account_start_nonce: U256, factories: Factories) -> TrieResult<State<B>> {
|
||||||
if !db.as_hash_db().contains(&root) {
|
if !db.as_hashdb().contains(&root) {
|
||||||
return Err(Box::new(TrieError::InvalidStateRoot(root)));
|
return Err(Box::new(TrieError::InvalidStateRoot(root)));
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -665,8 +665,8 @@ impl<B: Backend> State<B> {
|
|||||||
let trie_res = self.db.get_cached(address, |acc| match acc {
|
let trie_res = self.db.get_cached(address, |acc| match acc {
|
||||||
None => Ok(H256::new()),
|
None => Ok(H256::new()),
|
||||||
Some(a) => {
|
Some(a) => {
|
||||||
let account_db = self.factories.accountdb.readonly(self.db.as_hash_db(), a.address_hash(address));
|
let account_db = self.factories.accountdb.readonly(self.db.as_hashdb(), a.address_hash(address));
|
||||||
f_at(a, account_db.as_hash_db(), key)
|
f_at(a, account_db.as_hashdb(), key)
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
@@ -677,8 +677,8 @@ impl<B: Backend> State<B> {
|
|||||||
// otherwise cache the account localy and cache storage key there.
|
// otherwise cache the account localy and cache storage key there.
|
||||||
if let Some(ref mut acc) = local_account {
|
if let Some(ref mut acc) = local_account {
|
||||||
if let Some(ref account) = acc.account {
|
if let Some(ref account) = acc.account {
|
||||||
let account_db = self.factories.accountdb.readonly(self.db.as_hash_db(), account.address_hash(address));
|
let account_db = self.factories.accountdb.readonly(self.db.as_hashdb(), account.address_hash(address));
|
||||||
return f_at(account, account_db.as_hash_db(), key)
|
return f_at(account, account_db.as_hashdb(), key)
|
||||||
} else {
|
} else {
|
||||||
return Ok(H256::new())
|
return Ok(H256::new())
|
||||||
}
|
}
|
||||||
@@ -689,13 +689,12 @@ impl<B: Backend> State<B> {
|
|||||||
if self.db.is_known_null(address) { return Ok(H256::zero()) }
|
if self.db.is_known_null(address) { return Ok(H256::zero()) }
|
||||||
|
|
||||||
// account is not found in the global cache, get from the DB and insert into local
|
// account is not found in the global cache, get from the DB and insert into local
|
||||||
let db = &self.db.as_hash_db();
|
let db = self.factories.trie.readonly(self.db.as_hashdb(), &self.root).expect(SEC_TRIE_DB_UNWRAP_STR);
|
||||||
let db = self.factories.trie.readonly(db, &self.root).expect(SEC_TRIE_DB_UNWRAP_STR);
|
|
||||||
let from_rlp = |b: &[u8]| Account::from_rlp(b).expect("decoding db value failed");
|
let from_rlp = |b: &[u8]| Account::from_rlp(b).expect("decoding db value failed");
|
||||||
let maybe_acc = db.get_with(address, from_rlp)?;
|
let maybe_acc = db.get_with(address, from_rlp)?;
|
||||||
let r = maybe_acc.as_ref().map_or(Ok(H256::new()), |a| {
|
let r = maybe_acc.as_ref().map_or(Ok(H256::new()), |a| {
|
||||||
let account_db = self.factories.accountdb.readonly(self.db.as_hash_db(), a.address_hash(address));
|
let account_db = self.factories.accountdb.readonly(self.db.as_hashdb(), a.address_hash(address));
|
||||||
f_at(a, account_db.as_hash_db(), key)
|
f_at(a, account_db.as_hashdb(), key)
|
||||||
});
|
});
|
||||||
self.insert_cache(address, AccountEntry::new_clean(maybe_acc));
|
self.insert_cache(address, AccountEntry::new_clean(maybe_acc));
|
||||||
r
|
r
|
||||||
@@ -888,9 +887,9 @@ impl<B: Backend> State<B> {
|
|||||||
if let Some(ref mut account) = a.account {
|
if let Some(ref mut account) = a.account {
|
||||||
let addr_hash = account.address_hash(address);
|
let addr_hash = account.address_hash(address);
|
||||||
{
|
{
|
||||||
let mut account_db = self.factories.accountdb.create(self.db.as_hash_db_mut(), addr_hash);
|
let mut account_db = self.factories.accountdb.create(self.db.as_hashdb_mut(), addr_hash);
|
||||||
account.commit_storage(&self.factories.trie, account_db.as_hash_db_mut())?;
|
account.commit_storage(&self.factories.trie, account_db.as_hashdb_mut())?;
|
||||||
account.commit_code(account_db.as_hash_db_mut());
|
account.commit_code(account_db.as_hashdb_mut());
|
||||||
}
|
}
|
||||||
if !account.is_empty() {
|
if !account.is_empty() {
|
||||||
self.db.note_non_null_account(address);
|
self.db.note_non_null_account(address);
|
||||||
@@ -899,7 +898,7 @@ impl<B: Backend> State<B> {
|
|||||||
}
|
}
|
||||||
|
|
||||||
{
|
{
|
||||||
let mut trie = self.factories.trie.from_existing(self.db.as_hash_db_mut(), &mut self.root)?;
|
let mut trie = self.factories.trie.from_existing(self.db.as_hashdb_mut(), &mut self.root)?;
|
||||||
for (address, ref mut a) in accounts.iter_mut().filter(|&(_, ref a)| a.is_dirty()) {
|
for (address, ref mut a) in accounts.iter_mut().filter(|&(_, ref a)| a.is_dirty()) {
|
||||||
a.state = AccountState::Committed;
|
a.state = AccountState::Committed;
|
||||||
match a.account {
|
match a.account {
|
||||||
@@ -982,8 +981,7 @@ impl<B: Backend> State<B> {
|
|||||||
|
|
||||||
let mut result = BTreeMap::new();
|
let mut result = BTreeMap::new();
|
||||||
|
|
||||||
let db = &self.db.as_hash_db();
|
let trie = self.factories.trie.readonly(self.db.as_hashdb(), &self.root)?;
|
||||||
let trie = self.factories.trie.readonly(db, &self.root)?;
|
|
||||||
|
|
||||||
// put trie in cache
|
// put trie in cache
|
||||||
for item in trie.iter()? {
|
for item in trie.iter()? {
|
||||||
@@ -1013,11 +1011,10 @@ impl<B: Backend> State<B> {
|
|||||||
fn account_to_pod_account(&self, account: &Account, address: &Address) -> Result<PodAccount, Error> {
|
fn account_to_pod_account(&self, account: &Account, address: &Address) -> Result<PodAccount, Error> {
|
||||||
let mut pod_storage = BTreeMap::new();
|
let mut pod_storage = BTreeMap::new();
|
||||||
let addr_hash = account.address_hash(address);
|
let addr_hash = account.address_hash(address);
|
||||||
let accountdb = self.factories.accountdb.readonly(self.db.as_hash_db(), addr_hash);
|
let accountdb = self.factories.accountdb.readonly(self.db.as_hashdb(), addr_hash);
|
||||||
let root = account.base_storage_root();
|
let root = account.base_storage_root();
|
||||||
|
|
||||||
let accountdb = &accountdb.as_hash_db();
|
let trie = self.factories.trie.readonly(accountdb.as_hashdb(), &root)?;
|
||||||
let trie = self.factories.trie.readonly(accountdb, &root)?;
|
|
||||||
for o_kv in trie.iter()? {
|
for o_kv in trie.iter()? {
|
||||||
if let Ok((key, val)) = o_kv {
|
if let Ok((key, val)) = o_kv {
|
||||||
pod_storage.insert(key[..].into(), rlp::decode::<U256>(&val[..]).expect("Decoded from trie which was encoded from the same type; qed").into());
|
pod_storage.insert(key[..].into(), rlp::decode::<U256>(&val[..]).expect("Decoded from trie which was encoded from the same type; qed").into());
|
||||||
@@ -1137,8 +1134,8 @@ impl<B: Backend> State<B> {
|
|||||||
// check local cache first
|
// check local cache first
|
||||||
if let Some(ref mut maybe_acc) = self.cache.borrow_mut().get_mut(a) {
|
if let Some(ref mut maybe_acc) = self.cache.borrow_mut().get_mut(a) {
|
||||||
if let Some(ref mut account) = maybe_acc.account {
|
if let Some(ref mut account) = maybe_acc.account {
|
||||||
let accountdb = self.factories.accountdb.readonly(self.db.as_hash_db(), account.address_hash(a));
|
let accountdb = self.factories.accountdb.readonly(self.db.as_hashdb(), account.address_hash(a));
|
||||||
if Self::update_account_cache(require, account, &self.db, accountdb.as_hash_db()) {
|
if Self::update_account_cache(require, account, &self.db, accountdb.as_hashdb()) {
|
||||||
return Ok(f(Some(account)));
|
return Ok(f(Some(account)));
|
||||||
} else {
|
} else {
|
||||||
return Err(Box::new(TrieError::IncompleteDatabase(H256::from(a))));
|
return Err(Box::new(TrieError::IncompleteDatabase(H256::from(a))));
|
||||||
@@ -1149,8 +1146,8 @@ impl<B: Backend> State<B> {
|
|||||||
// check global cache
|
// check global cache
|
||||||
let result = self.db.get_cached(a, |mut acc| {
|
let result = self.db.get_cached(a, |mut acc| {
|
||||||
if let Some(ref mut account) = acc {
|
if let Some(ref mut account) = acc {
|
||||||
let accountdb = self.factories.accountdb.readonly(self.db.as_hash_db(), account.address_hash(a));
|
let accountdb = self.factories.accountdb.readonly(self.db.as_hashdb(), account.address_hash(a));
|
||||||
if !Self::update_account_cache(require, account, &self.db, accountdb.as_hash_db()) {
|
if !Self::update_account_cache(require, account, &self.db, accountdb.as_hashdb()) {
|
||||||
return Err(Box::new(TrieError::IncompleteDatabase(H256::from(a))));
|
return Err(Box::new(TrieError::IncompleteDatabase(H256::from(a))));
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -1163,13 +1160,12 @@ impl<B: Backend> State<B> {
|
|||||||
if check_null && self.db.is_known_null(a) { return Ok(f(None)); }
|
if check_null && self.db.is_known_null(a) { return Ok(f(None)); }
|
||||||
|
|
||||||
// not found in the global cache, get from the DB and insert into local
|
// not found in the global cache, get from the DB and insert into local
|
||||||
let db = &self.db.as_hash_db();
|
let db = self.factories.trie.readonly(self.db.as_hashdb(), &self.root)?;
|
||||||
let db = self.factories.trie.readonly(db, &self.root)?;
|
|
||||||
let from_rlp = |b: &[u8]| Account::from_rlp(b).expect("decoding db value failed");
|
let from_rlp = |b: &[u8]| Account::from_rlp(b).expect("decoding db value failed");
|
||||||
let mut maybe_acc = db.get_with(a, from_rlp)?;
|
let mut maybe_acc = db.get_with(a, from_rlp)?;
|
||||||
if let Some(ref mut account) = maybe_acc.as_mut() {
|
if let Some(ref mut account) = maybe_acc.as_mut() {
|
||||||
let accountdb = self.factories.accountdb.readonly(self.db.as_hash_db(), account.address_hash(a));
|
let accountdb = self.factories.accountdb.readonly(self.db.as_hashdb(), account.address_hash(a));
|
||||||
if !Self::update_account_cache(require, account, &self.db, accountdb.as_hash_db()) {
|
if !Self::update_account_cache(require, account, &self.db, accountdb.as_hashdb()) {
|
||||||
return Err(Box::new(TrieError::IncompleteDatabase(H256::from(a))));
|
return Err(Box::new(TrieError::IncompleteDatabase(H256::from(a))));
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -1196,8 +1192,7 @@ impl<B: Backend> State<B> {
|
|||||||
Some(acc) => self.insert_cache(a, AccountEntry::new_clean_cached(acc)),
|
Some(acc) => self.insert_cache(a, AccountEntry::new_clean_cached(acc)),
|
||||||
None => {
|
None => {
|
||||||
let maybe_acc = if !self.db.is_known_null(a) {
|
let maybe_acc = if !self.db.is_known_null(a) {
|
||||||
let db = &self.db.as_hash_db();
|
let db = self.factories.trie.readonly(self.db.as_hashdb(), &self.root)?;
|
||||||
let db = self.factories.trie.readonly(db, &self.root)?;
|
|
||||||
let from_rlp = |b:&[u8]| { Account::from_rlp(b).expect("decoding db value failed") };
|
let from_rlp = |b:&[u8]| { Account::from_rlp(b).expect("decoding db value failed") };
|
||||||
AccountEntry::new_clean(db.get_with(a, from_rlp)?)
|
AccountEntry::new_clean(db.get_with(a, from_rlp)?)
|
||||||
} else {
|
} else {
|
||||||
@@ -1225,9 +1220,9 @@ impl<B: Backend> State<B> {
|
|||||||
|
|
||||||
if require_code {
|
if require_code {
|
||||||
let addr_hash = account.address_hash(a);
|
let addr_hash = account.address_hash(a);
|
||||||
let accountdb = self.factories.accountdb.readonly(self.db.as_hash_db(), addr_hash);
|
let accountdb = self.factories.accountdb.readonly(self.db.as_hashdb(), addr_hash);
|
||||||
|
|
||||||
if !Self::update_account_cache(RequireCache::Code, &mut account, &self.db, accountdb.as_hash_db()) {
|
if !Self::update_account_cache(RequireCache::Code, &mut account, &self.db, accountdb.as_hashdb()) {
|
||||||
return Err(Box::new(TrieError::IncompleteDatabase(H256::from(a))))
|
return Err(Box::new(TrieError::IncompleteDatabase(H256::from(a))))
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -1250,8 +1245,7 @@ impl<B: Backend> State<B> {
|
|||||||
/// `account_key` == keccak(address)
|
/// `account_key` == keccak(address)
|
||||||
 	pub fn prove_account(&self, account_key: H256) -> TrieResult<(Vec<Bytes>, BasicAccount)> {
 		let mut recorder = Recorder::new();
-		let db = &self.db.as_hash_db();
-		let trie = TrieDB::new(db, &self.root)?;
+		let trie = TrieDB::new(self.db.as_hashdb(), &self.root)?;
 		let maybe_account: Option<BasicAccount> = {
 			let panicky_decoder = |bytes: &[u8]| {
 				::rlp::decode(bytes).expect(&format!("prove_account, could not query trie for account key={}", &account_key))
@@ -1277,16 +1271,15 @@ impl<B: Backend> State<B> {
 	pub fn prove_storage(&self, account_key: H256, storage_key: H256) -> TrieResult<(Vec<Bytes>, H256)> {
 		// TODO: probably could look into cache somehow but it's keyed by
 		// address, not keccak(address).
-		let db = &self.db.as_hash_db();
-		let trie = TrieDB::new(db, &self.root)?;
+		let trie = TrieDB::new(self.db.as_hashdb(), &self.root)?;
 		let from_rlp = |b: &[u8]| Account::from_rlp(b).expect("decoding db value failed");
 		let acc = match trie.get_with(&account_key, from_rlp)? {
 			Some(acc) => acc,
 			None => return Ok((Vec::new(), H256::new())),
 		};

-		let account_db = self.factories.accountdb.readonly(self.db.as_hash_db(), account_key);
-		acc.prove_storage(account_db.as_hash_db(), storage_key)
+		let account_db = self.factories.accountdb.readonly(self.db.as_hashdb(), account_key);
+		acc.prove_storage(account_db.as_hashdb(), storage_key)
 	}
 }
@@ -25,7 +25,7 @@ use byteorder::{LittleEndian, ByteOrder};
 use db::COL_ACCOUNT_BLOOM;
 use ethereum_types::{H256, Address};
 use hash::keccak;
-use hash_db::HashDB;
+use hashdb::HashDB;
 use journaldb::JournalDB;
 use keccak_hasher::KeccakHasher;
 use kvdb::{KeyValueDB, DBTransaction, DBValue};
@@ -313,13 +313,13 @@ impl StateDB {
 	}

 	/// Conversion method to interpret self as `HashDB` reference
-	pub fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue> {
-		self.db.as_hash_db()
+	pub fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> {
+		self.db.as_hashdb()
 	}

 	/// Conversion method to interpret self as mutable `HashDB` reference
-	pub fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> {
-		self.db.as_hash_db_mut()
+	pub fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> {
+		self.db.as_hashdb_mut()
 	}

 	/// Clone the database.
@@ -379,7 +379,14 @@ impl StateDB {

 	/// Check if the account can be returned from cache by matching current block parent hash against canonical
 	/// state and filtering out account modified in later blocks.
-	fn is_allowed(addr: &Address, parent_hash: &H256, modifications: &VecDeque<BlockChanges>) -> bool {
+	fn is_allowed(addr: &Address, parent_hash: &Option<H256>, modifications: &VecDeque<BlockChanges>) -> bool {
+		let mut parent = match *parent_hash {
+			None => {
+				trace!("Cache lookup skipped for {:?}: no parent hash", addr);
+				return false;
+			}
+			Some(ref parent) => parent,
+		};
 		if modifications.is_empty() {
 			return true;
 		}
@@ -388,7 +395,6 @@ impl StateDB {
 		// We search for our parent in that list first and then for
 		// all its parent until we hit the canonical block,
 		// checking against all the intermediate modifications.
-		let mut parent = parent_hash;
 		for m in modifications {
 			if &m.hash == parent {
 				if m.is_canon {
@@ -407,10 +413,10 @@ impl StateDB {
 }

 impl state::Backend for StateDB {
-	fn as_hash_db(&self) -> &HashDB<KeccakHasher, DBValue> { self.db.as_hash_db() }
+	fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self.db.as_hashdb() }

-	fn as_hash_db_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> {
-		self.db.as_hash_db_mut()
+	fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> {
+		self.db.as_hashdb_mut()
 	}

 	fn add_to_account_cache(&mut self, addr: Address, data: Option<Account>, modified: bool) {
@@ -428,25 +434,20 @@ impl state::Backend for StateDB {
 	}

 	fn get_cached_account(&self, addr: &Address) -> Option<Option<Account>> {
-		self.parent_hash.as_ref().and_then(|parent_hash| {
-			let mut cache = self.account_cache.lock();
-			if !Self::is_allowed(addr, parent_hash, &cache.modifications) {
-				return None;
-			}
-			cache.accounts.get_mut(addr).map(|a| a.as_ref().map(|a| a.clone_basic()))
-		})
+		let mut cache = self.account_cache.lock();
+		if !Self::is_allowed(addr, &self.parent_hash, &cache.modifications) {
+			return None;
+		}
+		cache.accounts.get_mut(addr).map(|a| a.as_ref().map(|a| a.clone_basic()))
 	}

 	fn get_cached<F, U>(&self, a: &Address, f: F) -> Option<U>
-		where F: FnOnce(Option<&mut Account>) -> U
-	{
-		self.parent_hash.as_ref().and_then(|parent_hash| {
-			let mut cache = self.account_cache.lock();
-			if !Self::is_allowed(a, parent_hash, &cache.modifications) {
-				return None;
-			}
-			cache.accounts.get_mut(a).map(|c| f(c.as_mut()))
-		})
+		where F: FnOnce(Option<&mut Account>) -> U {
+		let mut cache = self.account_cache.lock();
+		if !Self::is_allowed(a, &self.parent_hash, &cache.modifications) {
+			return None;
+		}
+		cache.accounts.get_mut(a).map(|c| f(c.as_mut()))
 	}

 	fn get_cached_code(&self, hash: &H256) -> Option<Arc<Vec<u8>>> {
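The `is_allowed` hunk above shifts `Option` handling between caller and callee: on one side the callee takes `&Option<H256>` and bails out on `None` itself, on the other the caller unwraps with `as_ref().and_then` so the callee receives a plain `&H256`. A minimal sketch of the two shapes, using `u64` and string stand-ins instead of the real `H256`/`Account` types:

```rust
use std::collections::VecDeque;

// Hypothetical stand-in for the real BlockChanges entry; names are illustrative only.
struct BlockChanges { hash: u64, is_canon: bool }

// Callee-side handling: the function unwraps the Option itself.
fn is_allowed_old(parent_hash: &Option<u64>, modifications: &VecDeque<BlockChanges>) -> bool {
    let parent = match *parent_hash {
        None => return false, // no parent hash: cache lookup is skipped
        Some(ref parent) => parent,
    };
    modifications.is_empty() || modifications.iter().any(|m| &m.hash == parent && m.is_canon)
}

// Caller-side handling: the callee takes a plain reference and has no None branch.
fn is_allowed_new(parent_hash: &u64, modifications: &VecDeque<BlockChanges>) -> bool {
    modifications.is_empty() || modifications.iter().any(|m| &m.hash == parent_hash && m.is_canon)
}

// The caller unwraps once with `as_ref().and_then`, mirroring get_cached_account.
fn lookup(parent_hash: &Option<u64>, modifications: &VecDeque<BlockChanges>) -> Option<&'static str> {
    parent_hash.as_ref().and_then(|parent| {
        if !is_allowed_new(parent, modifications) {
            return None;
        }
        Some("cached value")
    })
}

fn main() {
    let mods: VecDeque<BlockChanges> = VecDeque::new();
    assert_eq!(is_allowed_old(&None, &mods), false);
    assert_eq!(lookup(&None, &mods), None);
    assert_eq!(lookup(&Some(1), &mods), Some("cached value"));
    println!("ok");
}
```

Both shapes reject lookups with no parent hash; moving the unwrap to the caller keeps the filtering function free of `Option` plumbing.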
@@ -39,6 +39,7 @@ use types::header::Header;
 use types::view;
 use types::views::BlockView;

+use account_provider::AccountProvider;
 use block::{OpenBlock, Drain};
 use client::{Client, ClientConfig, ChainInfo, ImportBlock, ChainNotify, ChainMessageType, PrepareOpenBlock};
 use factory::Factories;
@@ -108,15 +109,18 @@ pub fn generate_dummy_client_with_data(block_number: u32, txs_per_block: usize, 
 	generate_dummy_client_with_spec_and_data(Spec::new_null, block_number, txs_per_block, tx_gas_prices)
 }

-/// Generates dummy client (not test client) with corresponding spec and accounts
-pub fn generate_dummy_client_with_spec<F>(test_spec: F) -> Arc<Client> where F: Fn()->Spec {
-	generate_dummy_client_with_spec_and_data(test_spec, 0, 0, &[])
-}
-
-/// Generates dummy client (not test client) with corresponding amount of blocks, txs per block and spec
-pub fn generate_dummy_client_with_spec_and_data<F>(test_spec: F, block_number: u32, txs_per_block: usize, tx_gas_prices: &[U256]) -> Arc<Client> where
-	F: Fn() -> Spec
-{
+/// Generates dummy client (not test client) with corresponding amount of blocks, txs per block and spec
+pub fn generate_dummy_client_with_spec_and_data<F>(test_spec: F, block_number: u32, txs_per_block: usize, tx_gas_prices: &[U256]) -> Arc<Client> where F: Fn()->Spec {
+	generate_dummy_client_with_spec_accounts_and_data(test_spec, None, block_number, txs_per_block, tx_gas_prices)
+}
+
+/// Generates dummy client (not test client) with corresponding spec and accounts
+pub fn generate_dummy_client_with_spec_and_accounts<F>(test_spec: F, accounts: Option<Arc<AccountProvider>>) -> Arc<Client> where F: Fn()->Spec {
+	generate_dummy_client_with_spec_accounts_and_data(test_spec, accounts, 0, 0, &[])
+}
+
+/// Generates dummy client (not test client) with corresponding blocks, accounts and spec
+pub fn generate_dummy_client_with_spec_accounts_and_data<F>(test_spec: F, accounts: Option<Arc<AccountProvider>>, block_number: u32, txs_per_block: usize, tx_gas_prices: &[U256]) -> Arc<Client> where F: Fn()->Spec {
 	let test_spec = test_spec();
 	let client_db = new_db();

@@ -124,7 +128,7 @@ pub fn generate_dummy_client_with_spec_and_data<F>(test_spec: F, block_number: u
 		ClientConfig::default(),
 		&test_spec,
 		client_db,
-		Arc::new(Miner::new_for_tests(&test_spec, None)),
+		Arc::new(Miner::new_for_tests(&test_spec, accounts)),
 		IoChannel::disconnected(),
 	).unwrap();
 	let test_engine = &*test_spec.engine;
@@ -106,7 +106,7 @@ impl Filter {

 		let to_matches = match trace.result {
 			Res::Create(ref create_result) => self.to_address.matches(&create_result.address),
-			_ => self.to_address.matches_all(),
+			_ => false
 		};

 		from_matches && to_matches
@@ -385,44 +385,4 @@ mod tests {
 		assert!(f1.matches(&trace));
 		assert!(f2.matches(&trace));
 	}

-	#[test]
-	fn filter_match_failed_contract_creation_fix_9822() {
-
-		let f0 = Filter {
-			range: (0..0),
-			from_address: vec![1.into()].into(),
-			to_address: vec![].into(),
-		};
-
-		let f1 = Filter {
-			range: (0..0),
-			from_address: vec![].into(),
-			to_address: vec![].into(),
-		};
-
-		let f2 = Filter {
-			range: (0..0),
-			from_address: vec![].into(),
-			to_address: vec![2.into()].into(),
-		};
-
-		let trace = FlatTrace {
-			action: Action::Create(Create {
-				from: 1.into(),
-				gas: 4.into(),
-				init: vec![0x5],
-				value: 3.into(),
-			}),
-			result: Res::FailedCall(TraceError::BadInstruction),
-			trace_address: vec![].into_iter().collect(),
-			subtraces: 0
-		};
-
-		assert!(f0.matches(&trace));
-		assert!(f1.matches(&trace));
-		assert!(!f2.matches(&trace));
-	}
 }
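The one-line difference in `to_matches` is the substance of the `fix_9822` test above: with `_ => false`, a wildcard `to_address` filter can never match a trace whose result is a failed contract creation, while falling back to `matches_all()` preserves wildcard semantics. A simplified sketch with stand-in types (not the real trace module):

```rust
// Illustrative stand-in for the address filter: an empty list is a wildcard.
struct AddressFilter(Vec<u32>);

impl AddressFilter {
    fn matches_all(&self) -> bool { self.0.is_empty() }
    fn matches(&self, addr: &u32) -> bool { self.matches_all() || self.0.contains(addr) }
}

enum Res { Create(u32), Failed }

// With `_ => false`, non-Create results (e.g. a failed creation) never match,
// even against a wildcard filter.
fn to_matches_strict(to_address: &AddressFilter, result: &Res) -> bool {
    match *result {
        Res::Create(ref addr) => to_address.matches(addr),
        _ => false,
    }
}

// With `_ => matches_all()`, a wildcard filter still matches failed creations,
// while a specific to_address correctly does not.
fn to_matches_wildcard(to_address: &AddressFilter, result: &Res) -> bool {
    match *result {
        Res::Create(ref addr) => to_address.matches(addr),
        _ => to_address.matches_all(),
    }
}

fn main() {
    let wildcard = AddressFilter(Vec::new());
    let specific = AddressFilter(vec![2]);
    assert!(!to_matches_strict(&wildcard, &Res::Failed));
    assert!(to_matches_wildcard(&wildcard, &Res::Failed));
    assert!(!to_matches_wildcard(&specific, &Res::Failed));
    assert!(to_matches_wildcard(&specific, &Res::Create(2)));
    println!("ok");
}
```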
@@ -9,17 +9,16 @@ authors = ["Parity Technologies <admin@parity.io>"]

 [dependencies]
 common-types = { path = "../types" }
-enum_primitive = "0.1.1"
+env_logger = "0.5"
 ethcore = { path = ".." }
 ethcore-io = { path = "../../util/io" }
 ethcore-light = { path = "../light" }
 ethcore-network = { path = "../../util/network" }
 ethcore-network-devp2p = { path = "../../util/network-devp2p" }
 ethereum-types = "0.4"
-ethkey = { path = "../../accounts/ethkey" }
 ethstore = { path = "../../accounts/ethstore" }
 fastmap = { path = "../../util/fastmap" }
-hash-db = "0.11.0"
+hashdb = "0.3.0"
 heapsize = "0.4"
 keccak-hash = "0.1"
 keccak-hasher = { path = "../../util/keccak-hasher" }
@@ -35,8 +34,9 @@ triehash-ethereum = {version = "0.2", path = "../../util/triehash-ethereum" }

 [dev-dependencies]
 env_logger = "0.5"
-ethcore = { path = "..", features = ["test-helpers"] }
 ethcore-io = { path = "../../util/io", features = ["mio"] }
-ethcore-private-tx = { path = "../private-tx" }
+ethkey = { path = "../../accounts/ethkey" }
 kvdb-memorydb = "0.1"
+ethcore-private-tx = { path = "../private-tx" }
+ethcore = { path = "..", features = ["test-helpers"] }
 rustc-hex = "1.0"
@@ -17,19 +17,18 @@
 use std::sync::{Arc, mpsc, atomic};
 use std::collections::{HashMap, BTreeMap};
 use std::io;
-use std::ops::RangeInclusive;
+use std::ops::Range;
 use std::time::Duration;
 use bytes::Bytes;
 use devp2p::NetworkService;
 use network::{NetworkProtocolHandler, NetworkContext, PeerId, ProtocolId,
 	NetworkConfiguration as BasicNetworkConfiguration, NonReservedPeerMode, Error, ErrorKind,
 	ConnectionFilter};
-use network::client_version::ClientVersion;

 use types::pruning_info::PruningInfo;
 use ethereum_types::{H256, H512, U256};
 use io::{TimerToken};
-use ethkey::Secret;
+use ethstore::ethkey::Secret;
 use ethcore::client::{BlockChainClient, ChainNotify, NewBlocks, ChainMessageType};
 use ethcore::snapshot::SnapshotService;
 use types::BlockNumber;
@@ -39,8 +38,8 @@ use std::net::{SocketAddr, AddrParseError};
 use std::str::FromStr;
 use parking_lot::{RwLock, Mutex};
 use chain::{ETH_PROTOCOL_VERSION_63, ETH_PROTOCOL_VERSION_62,
-	PAR_PROTOCOL_VERSION_1, PAR_PROTOCOL_VERSION_2, PAR_PROTOCOL_VERSION_3};
-use chain::sync_packet::SyncPacket::{PrivateTransactionPacket, SignedPrivateTransactionPacket};
+	PAR_PROTOCOL_VERSION_1, PAR_PROTOCOL_VERSION_2, PAR_PROTOCOL_VERSION_3,
+	PRIVATE_TRANSACTION_PACKET, SIGNED_PRIVATE_TRANSACTION_PACKET};
 use light::client::AsLightClient;
 use light::Provider;
 use light::net::{
@@ -51,8 +50,6 @@ use network::IpFilter;
 use private_tx::PrivateTxHandler;
 use types::transaction::UnverifiedTransaction;

-use super::light_sync::SyncInfo;
-
 /// Parity sync protocol
 pub const WARP_SYNC_PROTOCOL_ID: ProtocolId = *b"par";
 /// Ethereum sync protocol
@@ -161,7 +158,7 @@ pub struct PeerInfo {
 	/// Public node id
 	pub id: Option<String>,
 	/// Node client ID
-	pub client_version: ClientVersion,
+	pub client_version: String,
 	/// Capabilities
 	pub capabilities: Vec<String>,
 	/// Remote endpoint address
@@ -579,9 +576,9 @@ impl ChainNotify for EthSync {
 			match message_type {
 				ChainMessageType::Consensus(message) => self.eth_handler.sync.write().propagate_consensus_packet(&mut sync_io, message),
 				ChainMessageType::PrivateTransaction(transaction_hash, message) =>
-					self.eth_handler.sync.write().propagate_private_transaction(&mut sync_io, transaction_hash, PrivateTransactionPacket, message),
+					self.eth_handler.sync.write().propagate_private_transaction(&mut sync_io, transaction_hash, PRIVATE_TRANSACTION_PACKET, message),
 				ChainMessageType::SignedPrivateTransaction(transaction_hash, message) =>
-					self.eth_handler.sync.write().propagate_private_transaction(&mut sync_io, transaction_hash, SignedPrivateTransactionPacket, message),
+					self.eth_handler.sync.write().propagate_private_transaction(&mut sync_io, transaction_hash, SIGNED_PRIVATE_TRANSACTION_PACKET, message),
 			}
 		});
 	}
@@ -618,7 +615,9 @@ pub trait ManageNetwork : Send + Sync {
 	/// Stop network
 	fn stop_network(&self);
 	/// Returns the minimum and maximum peers.
-	fn num_peers_range(&self) -> RangeInclusive<u32>;
+	/// Note that `range.end` is *exclusive*.
+	// TODO: Range should be changed to RangeInclusive once stable (https://github.com/rust-lang/rust/pull/50758)
+	fn num_peers_range(&self) -> Range<u32>;
 	/// Get network context for protocol.
 	fn with_proto_context(&self, proto: ProtocolId, f: &mut FnMut(&NetworkContext));
 }
@@ -657,7 +656,7 @@ impl ManageNetwork for EthSync {
 		self.stop();
 	}

-	fn num_peers_range(&self) -> RangeInclusive<u32> {
+	fn num_peers_range(&self) -> Range<u32> {
 		self.network.num_peers_range()
 	}

@@ -806,24 +805,6 @@ pub trait LightSyncProvider {
 	fn transactions_stats(&self) -> BTreeMap<H256, TransactionStats>;
 }

-/// Wrapper around `light_sync::SyncInfo` to expose those methods without the concrete type `LightSync`
-pub trait LightSyncInfo: Send + Sync {
-	/// Get the highest block advertised on the network.
-	fn highest_block(&self) -> Option<u64>;
-
-	/// Get the block number at the time of sync start.
-	fn start_block(&self) -> u64;
-
-	/// Whether major sync is underway.
-	fn is_major_importing(&self) -> bool;
-}
-
-/// Execute a closure with a protocol context.
-pub trait LightNetworkDispatcher {
-	/// Execute a closure with a protocol context.
-	fn with_context<F, T>(&self, f: F) -> Option<T> where F: FnOnce(&::light::net::BasicContext) -> T;
-}
-
 /// Configuration for the light sync.
 pub struct LightSyncParams<L> {
 	/// Network configuration.
@@ -843,7 +824,7 @@ pub struct LightSyncParams<L> {
 /// Service for light synchronization.
 pub struct LightSync {
 	proto: Arc<LightProtocol>,
-	sync: Arc<SyncInfo + Sync + Send>,
+	sync: Arc<::light_sync::SyncInfo + Sync + Send>,
 	attached_protos: Vec<AttachedProtocol>,
 	network: NetworkService,
 	subprotocol_name: [u8; 3],
@@ -894,6 +875,15 @@ impl LightSync {
 		})
 	}

+	/// Execute a closure with a protocol context.
+	pub fn with_context<F, T>(&self, f: F) -> Option<T>
+		where F: FnOnce(&::light::net::BasicContext) -> T
+	{
+		self.network.with_context_eval(
+			self.subprotocol_name,
+			move |ctx| self.proto.with_context(&ctx, f),
+		)
+	}
 }

 impl ::std::ops::Deref for LightSync {
@@ -902,16 +892,6 @@ impl ::std::ops::Deref for LightSync {
 	fn deref(&self) -> &Self::Target { &*self.sync }
 }

-
-impl LightNetworkDispatcher for LightSync {
-	fn with_context<F, T>(&self, f: F) -> Option<T> where F: FnOnce(&::light::net::BasicContext) -> T {
-		self.network.with_context_eval(
-			self.subprotocol_name,
-			move |ctx| self.proto.with_context(&ctx, f),
-		)
-	}
-}
-
 impl ManageNetwork for LightSync {
 	fn accept_unreserved_peers(&self) {
 		self.network.set_non_reserved_mode(NonReservedPeerMode::Accept);
@@ -955,7 +935,7 @@ impl ManageNetwork for LightSync {
 		self.network.stop();
 	}

-	fn num_peers_range(&self) -> RangeInclusive<u32> {
+	fn num_peers_range(&self) -> Range<u32> {
 		self.network.num_peers_range()
 	}

@@ -968,12 +948,12 @@ impl LightSyncProvider for LightSync {
 	fn peer_numbers(&self) -> PeerNumbers {
 		let (connected, active) = self.proto.peer_count();
 		let peers_range = self.num_peers_range();
-		debug_assert!(peers_range.end() >= peers_range.start());
+		debug_assert!(peers_range.end > peers_range.start);
 		PeerNumbers {
 			connected: connected,
 			active: active,
-			max: *peers_range.end() as usize,
-			min: *peers_range.start() as usize,
+			max: peers_range.end as usize - 1,
+			min: peers_range.start as usize,
 		}
 	}

@@ -1012,17 +992,3 @@ impl LightSyncProvider for LightSync {
 		Default::default() // TODO
 	}
 }

-impl LightSyncInfo for LightSync {
-	fn highest_block(&self) -> Option<u64> {
-		(*self.sync).highest_block()
-	}
-
-	fn start_block(&self) -> u64 {
-		(*self.sync).start_block()
-	}
-
-	fn is_major_importing(&self) -> bool {
-		(*self.sync).is_major_importing()
-	}
-}
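The `num_peers_range` change hinges on the end semantics of the two range types: `Range<u32>` has an exclusive `end` field, so the maximum peer count must be computed as `end - 1`, while `RangeInclusive<u32>` (stable since Rust 1.26) includes `end()` in the range. A small sketch of why `max` is computed differently on each side of the diff:

```rust
use std::ops::{Range, RangeInclusive};

// Half-open range: `end` is exclusive, so the largest usable value is `end - 1`.
fn max_peers_exclusive(r: &Range<u32>) -> usize {
    r.end as usize - 1
}

// Inclusive range: `end()` is part of the range and can be used directly.
fn max_peers_inclusive(r: &RangeInclusive<u32>) -> usize {
    *r.end() as usize
}

fn main() {
    // The same "between 25 and 50 peers" policy expressed both ways:
    let half_open: Range<u32> = 25..51;
    let inclusive: RangeInclusive<u32> = 25..=50;
    assert_eq!(max_peers_exclusive(&half_open), 50);
    assert_eq!(max_peers_inclusive(&inclusive), 50);
    println!("ok");
}
```

This also explains the flipped `debug_assert!`: an inclusive range is valid when `end() >= start()`, whereas a non-empty half-open range requires `end > start`.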
@@ -29,13 +29,10 @@ use ethcore::error::{ImportErrorKind, QueueErrorKind, BlockError, Error as Ethco
 use sync_io::SyncIo;
 use blocks::{BlockCollection, SyncBody, SyncHeader};
 use chain::BlockSet;
-use network::PeerId;
-use network::client_version::ClientCapabilities;

 const MAX_HEADERS_TO_REQUEST: usize = 128;
-const MAX_BODIES_TO_REQUEST_LARGE: usize = 128;
-const MAX_BODIES_TO_REQUEST_SMALL: usize = 32; // Size request for parity clients prior to 2.4.0
-const MAX_RECEPITS_TO_REQUEST: usize = 256;
+const MAX_BODIES_TO_REQUEST: usize = 32;
+const MAX_RECEPITS_TO_REQUEST: usize = 128;
 const SUBCHAIN_SIZE: u64 = 256;
 const MAX_ROUND_PARENTS: usize = 16;
 const MAX_PARALLEL_SUBCHAIN_DOWNLOAD: usize = 5;
@@ -467,12 +464,12 @@ impl BlockDownloader {
 	}

 	/// Find some headers or blocks to download for a peer.
-	pub fn request_blocks(&mut self, peer_id: PeerId, io: &mut SyncIo, num_active_peers: usize) -> Option<BlockRequest> {
+	pub fn request_blocks(&mut self, io: &mut SyncIo, num_active_peers: usize) -> Option<BlockRequest> {
 		match self.state {
 			State::Idle => {
 				self.start_sync_round(io);
 				if self.state == State::ChainHead {
-					return self.request_blocks(peer_id, io, num_active_peers);
+					return self.request_blocks(io, num_active_peers);
 				}
 			},
 			State::ChainHead => {
@@ -490,15 +487,7 @@ impl BlockDownloader {
 			},
 			State::Blocks => {
 				// check to see if we need to download any block bodies first
-				let client_version = io.peer_version(peer_id);
-
-				let number_of_bodies_to_request = if client_version.can_handle_large_requests() {
-					MAX_BODIES_TO_REQUEST_LARGE
-				} else {
-					MAX_BODIES_TO_REQUEST_SMALL
-				};
-
-				let needed_bodies = self.blocks.needed_bodies(number_of_bodies_to_request, false);
+				let needed_bodies = self.blocks.needed_bodies(MAX_BODIES_TO_REQUEST, false);
 				if !needed_bodies.is_empty() {
 					return Some(BlockRequest::Bodies {
 						hashes: needed_bodies,
|
|||||||
use api::WARP_SYNC_PROTOCOL_ID;
|
use api::WARP_SYNC_PROTOCOL_ID;
|
||||||
use block_sync::{BlockDownloaderImportError as DownloaderImportError, DownloadAction};
|
use block_sync::{BlockDownloaderImportError as DownloaderImportError, DownloadAction};
|
||||||
use bytes::Bytes;
|
use bytes::Bytes;
|
||||||
use enum_primitive::FromPrimitive;
|
|
||||||
use ethcore::error::{Error as EthcoreError, ErrorKind as EthcoreErrorKind, ImportErrorKind, BlockError};
|
use ethcore::error::{Error as EthcoreError, ErrorKind as EthcoreErrorKind, ImportErrorKind, BlockError};
|
||||||
use ethcore::snapshot::{ManifestData, RestorationStatus};
|
use ethcore::snapshot::{ManifestData, RestorationStatus};
|
||||||
use ethcore::verification::queue::kind::blocks::Unverified;
|
use ethcore::verification::queue::kind::blocks::Unverified;
|
||||||
use ethereum_types::{H256, U256};
|
use ethereum_types::{H256, U256};
|
||||||
use hash::keccak;
|
use hash::keccak;
|
||||||
use network::PeerId;
|
use network::PeerId;
|
||||||
use network::client_version::ClientVersion;
|
|
||||||
use rlp::Rlp;
|
use rlp::Rlp;
|
||||||
use snapshot::ChunkType;
|
use snapshot::ChunkType;
|
||||||
use std::time::Instant;
|
use std::time::Instant;
|
||||||
@@ -34,20 +32,6 @@ use types::BlockNumber;
|
|||||||
use types::block_status::BlockStatus;
|
use types::block_status::BlockStatus;
|
||||||
use types::ids::BlockId;
|
use types::ids::BlockId;
|
||||||
|
|
||||||
use super::sync_packet::{PacketInfo, SyncPacket};
|
|
||||||
use super::sync_packet::SyncPacket::{
|
|
||||||
StatusPacket,
|
|
||||||
NewBlockHashesPacket,
|
|
||||||
BlockHeadersPacket,
|
|
||||||
BlockBodiesPacket,
|
|
||||||
NewBlockPacket,
|
|
||||||
ReceiptsPacket,
|
|
||||||
SnapshotManifestPacket,
|
|
||||||
SnapshotDataPacket,
|
|
||||||
PrivateTransactionPacket,
|
|
||||||
SignedPrivateTransactionPacket,
|
|
||||||
};
|
|
||||||
|
|
||||||
use super::{
|
use super::{
|
||||||
BlockSet,
|
BlockSet,
|
||||||
ChainSync,
|
ChainSync,
|
||||||
@@ -63,6 +47,16 @@ use super::{
|
|||||||
MAX_NEW_HASHES,
|
MAX_NEW_HASHES,
|
||||||
PAR_PROTOCOL_VERSION_1,
|
PAR_PROTOCOL_VERSION_1,
|
||||||
PAR_PROTOCOL_VERSION_3,
|
PAR_PROTOCOL_VERSION_3,
|
||||||
|
BLOCK_BODIES_PACKET,
|
||||||
|
BLOCK_HEADERS_PACKET,
|
||||||
|
NEW_BLOCK_HASHES_PACKET,
|
||||||
|
NEW_BLOCK_PACKET,
|
||||||
|
PRIVATE_TRANSACTION_PACKET,
|
||||||
|
RECEIPTS_PACKET,
|
||||||
|
SIGNED_PRIVATE_TRANSACTION_PACKET,
|
||||||
|
SNAPSHOT_DATA_PACKET,
|
||||||
|
SNAPSHOT_MANIFEST_PACKET,
|
||||||
|
STATUS_PACKET,
|
||||||
};
|
};
|
||||||
|
|
||||||
/// The Chain Sync Handler: handles responses from peers
|
/// The Chain Sync Handler: handles responses from peers
|
||||||
@@ -72,40 +66,36 @@ impl SyncHandler {
|
|||||||
/// Handle incoming packet from peer
|
/// Handle incoming packet from peer
|
||||||
pub fn on_packet(sync: &mut ChainSync, io: &mut SyncIo, peer: PeerId, packet_id: u8, data: &[u8]) {
|
pub fn on_packet(sync: &mut ChainSync, io: &mut SyncIo, peer: PeerId, packet_id: u8, data: &[u8]) {
|
||||||
let rlp = Rlp::new(data);
|
let rlp = Rlp::new(data);
|
||||||
if let Some(packet_id) = SyncPacket::from_u8(packet_id) {
|
let result = match packet_id {
|
-			let result = match packet_id {
-				StatusPacket => SyncHandler::on_peer_status(sync, io, peer, &rlp),
-				BlockHeadersPacket => SyncHandler::on_peer_block_headers(sync, io, peer, &rlp),
-				BlockBodiesPacket => SyncHandler::on_peer_block_bodies(sync, io, peer, &rlp),
-				ReceiptsPacket => SyncHandler::on_peer_block_receipts(sync, io, peer, &rlp),
-				NewBlockPacket => SyncHandler::on_peer_new_block(sync, io, peer, &rlp),
-				NewBlockHashesPacket => SyncHandler::on_peer_new_hashes(sync, io, peer, &rlp),
-				SnapshotManifestPacket => SyncHandler::on_snapshot_manifest(sync, io, peer, &rlp),
-				SnapshotDataPacket => SyncHandler::on_snapshot_data(sync, io, peer, &rlp),
-				PrivateTransactionPacket => SyncHandler::on_private_transaction(sync, io, peer, &rlp),
-				SignedPrivateTransactionPacket => SyncHandler::on_signed_private_transaction(sync, io, peer, &rlp),
-				_ => {
-					debug!(target: "sync", "{}: Unknown packet {}", peer, packet_id.id());
-					Ok(())
-				}
-			};
-
-			match result {
-				Err(DownloaderImportError::Invalid) => {
-					debug!(target:"sync", "{} -> Invalid packet {}", peer, packet_id.id());
-					io.disable_peer(peer);
-					sync.deactivate_peer(io, peer);
-				},
-				Err(DownloaderImportError::Useless) => {
-					sync.deactivate_peer(io, peer);
-				},
-				Ok(()) => {
-					// give a task to the same peer first
-					sync.sync_peer(io, peer, false);
-				},
-			}
-		} else {
-			debug!(target: "sync", "{}: Unknown packet {}", peer, packet_id);
-		}
+		let result = match packet_id {
+			STATUS_PACKET => SyncHandler::on_peer_status(sync, io, peer, &rlp),
+			BLOCK_HEADERS_PACKET => SyncHandler::on_peer_block_headers(sync, io, peer, &rlp),
+			BLOCK_BODIES_PACKET => SyncHandler::on_peer_block_bodies(sync, io, peer, &rlp),
+			RECEIPTS_PACKET => SyncHandler::on_peer_block_receipts(sync, io, peer, &rlp),
+			NEW_BLOCK_PACKET => SyncHandler::on_peer_new_block(sync, io, peer, &rlp),
+			NEW_BLOCK_HASHES_PACKET => SyncHandler::on_peer_new_hashes(sync, io, peer, &rlp),
+			SNAPSHOT_MANIFEST_PACKET => SyncHandler::on_snapshot_manifest(sync, io, peer, &rlp),
+			SNAPSHOT_DATA_PACKET => SyncHandler::on_snapshot_data(sync, io, peer, &rlp),
+			PRIVATE_TRANSACTION_PACKET => SyncHandler::on_private_transaction(sync, io, peer, &rlp),
+			SIGNED_PRIVATE_TRANSACTION_PACKET => SyncHandler::on_signed_private_transaction(sync, io, peer, &rlp),
+			_ => {
+				debug!(target: "sync", "{}: Unknown packet {}", peer, packet_id);
+				Ok(())
+			}
+		};
+
+		match result {
+			Err(DownloaderImportError::Invalid) => {
+				debug!(target:"sync", "{} -> Invalid packet {}", peer, packet_id);
+				io.disable_peer(peer);
+				sync.deactivate_peer(io, peer);
+			},
+			Err(DownloaderImportError::Useless) => {
+				sync.deactivate_peer(io, peer);
+			},
+			Ok(()) => {
+				// give a task to the same peer first
+				sync.sync_peer(io, peer, false);
+			},
+		}
 	}
@@ -117,7 +107,7 @@ impl SyncHandler {
 
 	/// Called by peer when it is disconnecting
 	pub fn on_peer_aborting(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId) {
-		trace!(target: "sync", "== Disconnecting {}: {}", peer_id, io.peer_version(peer_id));
+		trace!(target: "sync", "== Disconnecting {}: {}", peer_id, io.peer_info(peer_id));
 		sync.handshaking_peers.remove(&peer_id);
 		if sync.peers.contains_key(&peer_id) {
 			debug!(target: "sync", "Disconnected {}", peer_id);
@@ -143,7 +133,7 @@ impl SyncHandler {
 
 	/// Called when a new peer is connected
 	pub fn on_peer_connected(sync: &mut ChainSync, io: &mut SyncIo, peer: PeerId) {
-		trace!(target: "sync", "== Connected {}: {}", peer, io.peer_version(peer));
+		trace!(target: "sync", "== Connected {}: {}", peer, io.peer_info(peer));
 		if let Err(e) = sync.send_status(io, peer) {
 			debug!(target:"sync", "Error sending status request: {:?}", e);
 			io.disconnect_peer(peer);
@@ -589,7 +579,6 @@ impl SyncHandler {
 			snapshot_number: if warp_protocol { Some(r.val_at(6)?) } else { None },
 			block_set: None,
 			private_tx_enabled: if private_tx_protocol { r.val_at(7).unwrap_or(false) } else { false },
-			client_version: ClientVersion::from(io.peer_version(peer_id)),
 		};
 
 		trace!(target: "sync", "New peer {} (\
@@ -610,12 +599,12 @@ impl SyncHandler {
 			peer.private_tx_enabled
 		);
 		if io.is_expired() {
-			trace!(target: "sync", "Status packet from expired session {}:{}", peer_id, io.peer_version(peer_id));
+			trace!(target: "sync", "Status packet from expired session {}:{}", peer_id, io.peer_info(peer_id));
 			return Ok(());
 		}
 
 		if sync.peers.contains_key(&peer_id) {
-			debug!(target: "sync", "Unexpected status packet from {}:{}", peer_id, io.peer_version(peer_id));
+			debug!(target: "sync", "Unexpected status packet from {}:{}", peer_id, io.peer_info(peer_id));
 			return Ok(());
 		}
 		let chain_info = io.chain().chain_info();
@@ -644,7 +633,7 @@ impl SyncHandler {
 		// Don't activate peer immediatelly when searching for common block.
 		// Let the current sync round complete first.
 		sync.active_peers.insert(peer_id.clone());
-		debug!(target: "sync", "Connected {}:{}", peer_id, io.peer_version(peer_id));
+		debug!(target: "sync", "Connected {}:{}", peer_id, io.peer_info(peer_id));
 
 		if let Some((fork_block, _)) = sync.fork_block {
 			SyncRequester::request_fork_header(sync, io, peer_id, fork_block);
@@ -88,7 +88,6 @@
 //! All other messages are ignored.
 
 mod handler;
-pub mod sync_packet;
 mod propagator;
 mod requester;
 mod supplier;
@@ -105,7 +104,6 @@ use parking_lot::{Mutex, RwLock, RwLockWriteGuard};
 use bytes::Bytes;
 use rlp::{RlpStream, DecoderError};
 use network::{self, PeerId, PacketId};
-use network::client_version::ClientVersion;
 use ethcore::client::{BlockChainClient, BlockStatus, BlockId, BlockChainInfo, BlockQueueInfo};
 use ethcore::snapshot::{RestorationStatus};
 use sync_io::SyncIo;
@@ -120,12 +118,6 @@ use types::transaction::UnverifiedTransaction;
 use types::BlockNumber;
 
 use self::handler::SyncHandler;
-use self::sync_packet::{PacketInfo, SyncPacket};
-use self::sync_packet::SyncPacket::{
-	NewBlockPacket,
-	StatusPacket,
-};
-
 use self::propagator::SyncPropagator;
 use self::requester::SyncRequester;
 pub(crate) use self::supplier::SyncSupplier;
@@ -161,6 +153,28 @@ const MAX_TRANSACTION_PACKET_SIZE: usize = 5 * 1024 * 1024;
 const SNAPSHOT_RESTORE_THRESHOLD: BlockNumber = 30000;
 const SNAPSHOT_MIN_PEERS: usize = 3;
 
+const STATUS_PACKET: u8 = 0x00;
+const NEW_BLOCK_HASHES_PACKET: u8 = 0x01;
+const TRANSACTIONS_PACKET: u8 = 0x02;
+pub const GET_BLOCK_HEADERS_PACKET: u8 = 0x03;
+pub const BLOCK_HEADERS_PACKET: u8 = 0x04;
+pub const GET_BLOCK_BODIES_PACKET: u8 = 0x05;
+const BLOCK_BODIES_PACKET: u8 = 0x06;
+const NEW_BLOCK_PACKET: u8 = 0x07;
+
+pub const GET_NODE_DATA_PACKET: u8 = 0x0d;
+pub const NODE_DATA_PACKET: u8 = 0x0e;
+pub const GET_RECEIPTS_PACKET: u8 = 0x0f;
+pub const RECEIPTS_PACKET: u8 = 0x10;
+
+pub const GET_SNAPSHOT_MANIFEST_PACKET: u8 = 0x11;
+pub const SNAPSHOT_MANIFEST_PACKET: u8 = 0x12;
+pub const GET_SNAPSHOT_DATA_PACKET: u8 = 0x13;
+pub const SNAPSHOT_DATA_PACKET: u8 = 0x14;
+pub const CONSENSUS_DATA_PACKET: u8 = 0x15;
+pub const PRIVATE_TRANSACTION_PACKET: u8 = 0x16;
+pub const SIGNED_PRIVATE_TRANSACTION_PACKET: u8 = 0x17;
+
 const MAX_SNAPSHOT_CHUNKS_DOWNLOAD_AHEAD: usize = 3;
 
 const WAIT_PEERS_TIMEOUT: Duration = Duration::from_secs(5);
@@ -328,8 +342,6 @@ pub struct PeerInfo {
 	snapshot_number: Option<BlockNumber>,
 	/// Block set requested
 	block_set: Option<BlockSet>,
-	/// Version of the software the peer is running
-	client_version: ClientVersion,
 }
 
 impl PeerInfo {
@@ -469,7 +481,7 @@ impl ChainSyncApi {
 			for peers in sync.get_peers(&chain_info, PeerState::SameBlock).chunks(10) {
 				check_deadline(deadline)?;
 				for peer in peers {
-					SyncPropagator::send_packet(io, *peer, NewBlockPacket, rlp.clone());
+					SyncPropagator::send_packet(io, *peer, NEW_BLOCK_PACKET, rlp.clone());
 					if let Some(ref mut peer) = sync.peers.get_mut(peer) {
 						peer.latest_hash = hash;
 					}
@@ -952,7 +964,7 @@ impl ChainSync {
 		if !have_latest && (higher_difficulty || force || self.state == SyncState::NewBlocks) {
 			// check if got new blocks to download
 			trace!(target: "sync", "Syncing with peer {}, force={}, td={:?}, our td={}, state={:?}", peer_id, force, peer_difficulty, syncing_difficulty, self.state);
-			if let Some(request) = self.new_blocks.request_blocks(peer_id, io, num_active_peers) {
+			if let Some(request) = self.new_blocks.request_blocks(io, num_active_peers) {
 				SyncRequester::request_blocks(self, io, peer_id, request, BlockSet::NewBlocks);
 				if self.state == SyncState::Idle {
 					self.state = SyncState::Blocks;
@@ -965,7 +977,7 @@ impl ChainSync {
 		let equal_or_higher_difficulty = peer_difficulty.map_or(false, |pd| pd >= syncing_difficulty);
 
 		if force || equal_or_higher_difficulty {
-			if let Some(request) = self.old_blocks.as_mut().and_then(|d| d.request_blocks(peer_id, io, num_active_peers)) {
+			if let Some(request) = self.old_blocks.as_mut().and_then(|d| d.request_blocks(io, num_active_peers)) {
 				SyncRequester::request_blocks(self, io, peer_id, request, BlockSet::OldBlocks);
 				return;
 			}
@@ -1131,7 +1143,7 @@ impl ChainSync {
 			}
 		}
 		packet.complete_unbounded_list();
-		io.respond(StatusPacket.id(), packet.out())
+		io.respond(STATUS_PACKET, packet.out())
 	}
 
 	pub fn maintain_peers(&mut self, io: &mut SyncIo) {
@@ -1316,7 +1328,7 @@ impl ChainSync {
 	}
 
 	/// Broadcast private transaction message to peers.
-	pub fn propagate_private_transaction(&mut self, io: &mut SyncIo, transaction_hash: H256, packet_id: SyncPacket, packet: Bytes) {
+	pub fn propagate_private_transaction(&mut self, io: &mut SyncIo, transaction_hash: H256, packet_id: PacketId, packet: Bytes) {
 		SyncPropagator::propagate_private_transaction(self, io, transaction_hash, packet_id, packet);
 	}
 }
@@ -1447,7 +1459,6 @@ pub mod tests {
 			snapshot_hash: None,
 			asking_snapshot_data: None,
 			block_set: None,
-			client_version: ClientVersion::from(""),
 		});
 
 	}
Some files were not shown because too many files have changed in this diff.
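The recurring change in this diff swaps the typed `SyncPacket` enum (matched as `StatusPacket`, `NewBlockPacket`, … and converted to a wire byte via `.id()`) back to bare `u8` constants (`STATUS_PACKET`, `NEW_BLOCK_PACKET`, …). The two styles can be sketched as below; this is a simplified illustration with made-up discriminants for only three packets, not the actual parity-ethereum `sync_packet` module (in particular, the `from_u8` constructor is assumed here from the way the diff's `if let`/`else` dispatch reads):

```rust
/// Typed style (the side of the diff being removed): a fieldless enum
/// carrying the wire id as its discriminant.
#[derive(Clone, Copy, Debug, PartialEq)]
enum SyncPacket {
    StatusPacket = 0x00,
    NewBlockHashesPacket = 0x01,
    NewBlockPacket = 0x07,
}

impl SyncPacket {
    /// Map a raw wire byte to a known packet; unknown ids are rejected
    /// once here instead of in every handler.
    fn from_u8(id: u8) -> Option<SyncPacket> {
        match id {
            0x00 => Some(SyncPacket::StatusPacket),
            0x01 => Some(SyncPacket::NewBlockHashesPacket),
            0x07 => Some(SyncPacket::NewBlockPacket),
            _ => None,
        }
    }

    /// Raw byte written on the wire (what `.id()` calls in the diff do).
    fn id(self) -> u8 {
        self as u8
    }
}

/// Constant style (the side of the diff being restored): the id is just
/// a number, and unknown values fall through to a `_` match arm.
const STATUS_PACKET: u8 = 0x00;

fn main() {
    assert_eq!(SyncPacket::from_u8(0x00), Some(SyncPacket::StatusPacket));
    assert_eq!(SyncPacket::from_u8(0x42), None);
    // Both styles agree on the wire representation.
    assert_eq!(SyncPacket::StatusPacket.id(), STATUS_PACKET);
}
```

The trade-off visible in the diff: the enum centralizes unknown-packet handling in one `from_u8` check, while the constants keep function signatures (`packet_id: PacketId`) compatible with the rest of the networking layer without a conversion step.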