Compare commits

...

15 Commits

Author SHA1 Message Date
Thibaut Sardan
b47e064f8e
Backports for stable 2.1.10 (#10046)
* bump stable to 2.1.10

* RPC: parity_getBlockReceipts (#9527)

* Block receipts RPC.

* Use lazy evaluation of block receipts (ecrecover).

* Optimize transaction_receipt to prevent performance regression.

* Fix RPC grumbles.

* Add block & transaction receipt tests.

* Fix conversion to block id.
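
For context, a minimal sketch of what a call to the new endpoint might look like over JSON-RPC; the parameter shape (a single block identifier) is an assumption for illustration, not taken from this changeset.

```rust
// Hypothetical request body for parity_getBlockReceipts; assumes the method
// takes one block identifier (number, hash or tag) like other block RPCs.
use serde_json::json; // requires the serde_json crate

fn main() {
    let request = json!({
        "jsonrpc": "2.0",
        "method": "parity_getBlockReceipts",
        "params": ["latest"],
        "id": 1
    });
    // POST this to the node's HTTP endpoint (default http://localhost:8545);
    // the result is the list of receipts for every transaction in the block.
    println!("{}", request);
}
```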

* Update a few parity-common dependencies (#9663)

* Update a few parity-common dependencies

* cleanup

* cleanup

* revert update of ethereum/tests

* better reporting of network rlp errors

* Use rlp 0.3.0-beta.1

* fix util function get_dummy_blocks

* Already a Vec

* encode_list returns vec already

* Address grumble

* No need for betas

* Fix double spaces

* Fix empty steps (#9939)

* Don't send empty step twice or empty step then block.

* Perform basic validation of locally sealed blocks.

* Don't include empty step twice.

* Strict empty steps validation (#10041)

* Add two failing tests for strict empty steps.

* Implement strict validation of empty steps.

* ethcore: enable constantinople on ethereum (#10031)

* ethcore: change blockreward to 2e18 for foundation after constantinople

* ethcore: delay diff bomb by 2e6 blocks for foundation after constantinople

* ethcore: enable eip-{145,1014,1052,1283} for foundation after constantinople
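
As a rough sanity check on the numbers above (my arithmetic, not part of the changeset), the values follow EIP-1234, which Constantinople activates:

```rust
// EIP-1234: block reward drops to 2 ETH (2e18 wei) and the difficulty-bomb
// delay grows from Byzantium's 3,000,000 blocks by a further 2,000,000.
fn main() {
    let block_reward_wei: u128 = 2 * 10u128.pow(18);
    assert_eq!(block_reward_wei, 2_000_000_000_000_000_000); // the "2e18" above

    let byzantium_delay: u64 = 3_000_000;
    let constantinople_delay = byzantium_delay + 2_000_000;  // the "2e6" above
    assert_eq!(constantinople_delay, 5_000_000);
}
```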

* Change test miner max memory to malloc reports. (#10024)

* Bump crossbeam. (#10048)

* Revert "Bump crossbeam. (#10048)"

This reverts commit ed1db0c2d3d3b8e6bfde3124fb03b093a264e241.
2018-12-13 17:53:36 +01:00
Afri Schoedon
af1169d2ff
patch stable release pipeline (#10021)
* version: bump stable to 2.1.9

* ci: move future releases to ethereum subdir on s3 (#10017)

* ci: move future releases to ethereum subdir on s3

* ci: redesign s3 bucket logic

* ci: use the releases bucket
2018-12-05 15:51:10 +01:00
Afri Schoedon
3eae1d3640
Stable patch release 2.1.8 (#10001)
* version: bump stable to 2.1.8

* Fix Bloom migration (#9992)

* Fix wrong block number in blooms migration

* Fix wrong `const` type (usize -> u64) 😬

* Fix daemonize (#10000)

* Revert "prevent silent errors in daemon mode, closes #9367 (#9946)"

This reverts commit 52d5278a62.

* deps(daemonize): switch back to crates.io

* move daemonize before creating account provider (#10003)

* move daemonize before creating account provider

* daemonize: add a future-proofing comment
2018-11-30 16:05:29 +01:00
Afri Schoedon
126208cc74
Backports for stable 2.1.7 (#9975)
* version: bump stable to 2.1.7

* Adjust requests costs for light client (#9925)

* PIP Table Cost relative to average peers instead of max peers

* Add tracing in PIP new_cost_table

* Update stat peer_count

* Use number of leeching peers for Light serve costs

* Fix test::light_params_load_share_depends_on_max_peers (wrong type)

* Remove (now) useless test

* Remove `load_share` from LightParams.Config
Prevent div. by 0

* Add LEECHER_COUNT_FACTOR

* PR Grumble: u64 to u32 for f64 casting

* Prevent u32 overflow for avg_peer_count

* Add tests for LightSync::Statistics
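
The cost scaling these commits introduce boils down to one formula, visible in the light protocol diff further down this page; a minimal standalone sketch with the same constants:

```rust
// Costs are derived from a load share spread over the *average* number of
// leeching peers (plus headroom), instead of the configured maximum peers.
const MAX_LIGHTSERV_LOAD: f64 = 0.5;    // max share of capacity spent serving light peers
const LEECHER_COUNT_FACTOR: f64 = 1.25; // headroom for sudden extra connections

fn load_share(avg_leecher_count: f64) -> f64 {
    // the statistics sampler keeps the average >= 1.0, preventing division by zero
    MAX_LIGHTSERV_LOAD / (avg_leecher_count.max(1.0) * LEECHER_COUNT_FACTOR)
}

fn main() {
    // e.g. an average of 10 leeching peers gives each peer ~4% of capacity
    assert!((load_share(10.0) - 0.04).abs() < 1e-9);
}
```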

* Fix empty steps (#9939)

* Don't send empty step twice or empty step then block.

* Perform basic validation of locally sealed blocks.

* Don't include empty step twice.

* prevent silent errors in daemon mode, closes #9367 (#9946)

* Fix light client informant while syncing (#9932)

* Add `is_idle` to LightSync to check importing status

* Use SyncStateWrapper to make sure is_idle gets updates

* Update is_major_import to use verified queue size as well

* Add comment for `is_idle`

* Add Debug to `SyncStateWrapper`

* `fn get` -> `fn into_inner`

*  ci: rearrange pipeline by logic (#9970)

* ci: rearrange pipeline by logic

* ci: rename docs script

* Add readiness check for docker container (#9804)

* Update Dockerfile

Since parity is built for "mission critical use", I thought other operators may see the need for this.

Adding the `curl` and `jq` commands allows for an extremely simple health check to be usable in container orchestrators.

For example, here is a health check for a parity docker container running in Kubernetes.

This can be set up as a readiness probe that would prevent clustered nodes that aren't ready from serving traffic.

```bash
#!/bin/bash

ETH_SYNCING=$(curl -X POST --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' http://localhost:8545 -H 'Content-Type: application/json')
RESULT=$(echo "$ETH_SYNCING | jq -r .result)

if [ "$RESULT" == "false" ]; then
  echo "Parity is ready to start accepting traffic"
  exit 0
else
  echo "Parity is still syncing the blockchain"
  exit 1
fi
```

* add sync check script

* Fix docker script (#9854)


* Dockerfile: change source path of the newly added check_sync.sh (#9869)

* Do not use the home directory as the working dir in docker (#9834)

* Do not create a home directory.

* Re-add -m flag

* fix docker build (#9971)

* bump smallvec to 0.6 in ethcore-light, ethstore and whisper (#9588)

* bump smallvec to 0.6 in ethcore-light, ethstore and whisper

* bump transaction-pool

* Fix test.

* patch cargo to use tokio-proto from git repo

this makes sure we no longer depend on smallvec 0.2.1 which is
affected by https://github.com/servo/rust-smallvec/issues/96

* use patched version of untrusted 0.5.1

* ci: allow audit to fail
2018-11-28 13:14:55 +01:00
Thibaut Sardan
491f17f149
Backport Stable 2.1.6 (#9904)
* bump stable 2.1.6

* ethcore: use Machine::verify_transaction on parent block (#9900)

* ethcore: use Machine::verify_transaction on parent block

also fixes off-by-one activation of transaction permission contract

* ethcore: clarify call to verify_transaction

* fix: Intermittent failing CI due to addr in use (#9885)

Allow OS to set port at runtime

* gitlab-ci: make android release build succeed (#9743)


* use docker cargo config file for android builds

* make android build succeed

* Update light-client hardcoded headers for
foundation: #6692865, ropsten: #4417537, kovan: #9363457

* Remove rust-toolchain file (#9906)

* light-fetch: Differentiate between out-of-gas/manual throw and use required gas from response on failure (#9824)

* fix start_gas, handle OOG exceptions & NotEnoughGas

* Change START_GAS: 50_000 -> 60_000
* When the `OutOfGas` exception is received, try to double the gas until the call succeeds or the block gas limit is reached
* When the `NotEnoughGas` error is received, use the required gas provided in the response

* fix(light-fetch): ensure block_gas_limit is tried

Try the `block_gas_limit` before regarding the execution as an error

* Update rpc/src/v1/helpers/light_fetch.rs

Co-Authored-By: niklasad1 <niklasadolfsson1@gmail.com>
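
A minimal sketch of the retry strategy described above; the names and the outcome enum are hypothetical stand-ins, not the actual light_fetch types.

```rust
// Hypothetical estimation retry loop: start at 60_000 gas, double on an
// out-of-gas/manual throw, jump straight to the reported requirement on
// NotEnoughGas, and give up once the block gas limit is exceeded.
const START_GAS: u64 = 60_000;

enum ExecOutcome {
    Ok,
    OutOfGas,
    NotEnoughGas { required: u64 },
}

// Stand-in for executing the call against the light client.
fn execute(_gas: u64) -> ExecOutcome {
    ExecOutcome::Ok
}

fn estimate(block_gas_limit: u64) -> Option<u64> {
    let mut gas = START_GAS;
    loop {
        match execute(gas) {
            ExecOutcome::Ok => return Some(gas),
            // double the gas, capped at the block gas limit
            ExecOutcome::OutOfGas if gas < block_gas_limit => {
                gas = (gas * 2).min(block_gas_limit);
            }
            // the response tells us exactly how much gas is needed
            ExecOutcome::NotEnoughGas { required }
                if required > gas && required <= block_gas_limit =>
            {
                gas = required;
            }
            _ => return None,
        }
    }
}

fn main() {
    assert_eq!(estimate(8_000_000), Some(START_GAS));
}
```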

* fix #9824 merge artifacts

* simplify cargo audit

* ci: nuke the gitlab caches (#9855)
2018-11-14 14:48:29 +01:00
Afri Schoedon
aad068d2c6
version: mark 2.1.5 stable (#9821)
* version: mark 2.1.5 stable

* ci: remove failing tests for android, windows, and macos (#9788)

* ci: remove failing tests for android, windows, and macos

* ci: restore android build jobs

* Move state root verification before gas used (#9841)

* Classic.json Bootnode Update (#9828)

* fix: Update bootnodes list to only responsive nodes

* feat: Add more bootnodes to classic.json list

* feat: Add retested bootnodes
2018-11-01 15:37:22 +01:00
Afri Schoedon
bee2cb8fb4
Backports: parity beta 2.1.4 (#9787)
* version: bump parity beta to 2.1.4

* ethcore: bump ropsten forkblock checkpoint (#9775)

* ethcore: handle vm exception when estimating gas (#9615)

* removed "rustup" & added new runner tag (#9731)

* removed "rustup" & added new runner tag

* exchanged tag "rust-windows" with "windows"

* revert windows tag change

* sync: retry different peer after empty subchain heads response (#9753)

* If no subchain heads then try a different peer

* Add log when useless chain head

* Restrict ChainHead useless peer to ancient blocks

* sync: replace `limit_reorg` with `block_set` condition

* update jsonrpc-core to a1b2bb742ce16d1168669ffb13ffe856e8131228

* Allow zero chain id in EIP155 signing process (#9792)

* Allow zero chain id in EIP155 signing process

* Rename test

* Fix test failure
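
For reference, the EIP-155 `v` computation this touches; accepting a zero chain id simply means `v` values of 35/36 are allowed (small standalone sketch, not the parity signing code):

```rust
// EIP-155 replay protection: v = recovery_id + chain_id * 2 + 35.
fn eip155_v(recovery_id: u8, chain_id: u64) -> u64 {
    u64::from(recovery_id) + chain_id * 2 + 35
}

fn main() {
    assert_eq!(eip155_v(0, 0), 35); // zero chain id, now accepted
    assert_eq!(eip155_v(1, 0), 36);
    assert_eq!(eip155_v(1, 1), 38); // mainnet (chain id 1), recovery id 1
}
```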

* Insert dev account before unlocking (#9813)
2018-10-28 12:21:33 +01:00
Afri Schoedon
18ddd7c249
Beta release 2.1.3 backports (#9749)
* parity-version: mark 2.1.3 beta as critical

* Use signed 256-bit integer for sstore gas refund substate  (#9746)

* Add signed refund

* Use signed 256-bit integer for sstore gas refund substate

* Fix tests

* Remove signed mod and use i128 directly

* Fix evm test case casting

* Fix jsontests ext signature
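
A minimal illustration of why the counter becomes signed (toy values following the EIP-1283 refund rules referenced in the gasometer diff below; not the actual Substate type):

```rust
// Under EIP-1283 the sstore refund counter is both credited and debited within
// one transaction, so the running total is kept as a signed i128 rather than
// an unsigned U256.
fn main() {
    let mut sstore_refund: i128 = 0;
    sstore_refund += 15_000; // slot cleared: refund granted
    sstore_refund -= 15_000; // slot set to non-zero again: refund removed
    sstore_refund += 4_800;  // slot reset back to its original value
    assert_eq!(sstore_refund, 4_800);
}
```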

* Add --force to cargo audit install script (#9735)

* heads ref not present for branches beta and stable (#9741)

* aura: fix panic on extra_info with unsealed block (#9755)

* aura: fix panic when unsealed block passed to extra_info

* aura: use hex formatting for EmptyStep hashes

* aura: add test for extra_info
2018-10-15 21:44:18 +02:00
André Silva
d35f4c1f0d [beta] More backports for 2.1.2 (#9733)
* produce portable binaries (#9725)

* HF in POA Core (2018-10-22) (#9724)

https://github.com/poanetwork/poa-chain-spec/pull/87

* Use static call and apparent value transfer for block reward contract code (#9603)

* Verify block syncing responses against requests (#9670)

* sync: Validate received BlockHeaders packets against stored request.

* sync: Validate received BlockBodies and BlockReceipts.

* sync: Fix broken tests.

* sync: Unit tests for BlockDownloader::import_headers.

* sync: Unit tests for import_{bodies,receipts}.

* tests: Add missing method doc.

* Fix ancient blocks sync (#9531)

* Log block set in block_sync for easier debugging

* logging macros

* Match no args in sync logging macros

* Add QueueFull error

* Only allow importing headers if the first matches requested

* WIP

* Test for chain head gaps and log

* Calc distance even with 2 heads

* Revert previous commits, preparing simple fix

This reverts commit 5f38aa885b22ebb0e3a1d60120cea69f9f322628.

* Reject headers with no gaps when ChainHead

* Reset block sync download when queue full

* Simplify check for subchain heads

* Add comment to explain subchain heads filter

* Fix is_subchain_heads check and comment

* Prevent premature round completion after restart

This is a problem on mainnet where multiple stale peer requests will
force many rounds to complete quickly, forcing the retraction.

* Reset stale old blocks request after queue full

* Revert "Reject headers with no gaps when ChainHead"

This reverts commit 0eb865539e5dee37ab34f168f5fb643300de5ace.

* Add BlockSet to BlockDownloader logging

Currently it is difficult to debug this because there are two instances,
one for OldBlocks and one for NewBlocks. This adds the BlockSet to all
log messages for easy log filtering.

* Reset OldBlocks download from last enqueued

Previously when the ancient block queue was full it would restart the
download from the last imported block, so the ones still in the queue would be
redownloaded. Keeping the existing downloader instance and just
resetting it will start again from the last enqueued block.

* Ignore expired Body and Receipt requests

* Log when ancient block download being restarted

* Only request old blocks from peers with >= difficulty

https://github.com/paritytech/parity-ethereum/pull/9226 might be too
permissive, causing the retraction behaviour soon after the
fork block. With this change the peer difficulty has to be greater than
or equal to our syncing difficulty, so this should still fix
https://github.com/paritytech/parity-ethereum/issues/9225

* Some logging and clear stalled blocks head

* Revert "Some logging and clear stalled blocks head"

This reverts commit 757641d9b817ae8b63fec684759b0815af9c4d0e.

* Reset stalled header if useless more than once

* Store useless headers in HashSet

* Add sync target to logging macro

* Don't disable useless peer and fix log macro

* Clear useless headers on reset and comments

* Use custom error for collecting blocks

Previously we reused BlockImportError, however only the Invalid case was used,
and this made little sense with the QueueFull error.

* Remove blank line

* Test for reset sync after consecutive useless headers

* Don't reset after consecutive headers when chain head

* Delete commented out imports

* Return DownloadAction from collect_blocks instead of error

* Don't reset after round complete, was causing test hangs

* Add comment explaining reset after useless

* Replace HashSet with counter for useless headers

* Refactor sync reset on bad block/queue full

* Add missing target for log message

* Fix compiler errors and test after merge

* ethcore: revert ethereum tests submodule update

* Add hardcoded headers (#9730)

* add foundation hardcoded header #6486017

* add ropsten hardcoded headers #4202497

* add kovan hardcoded headers #9023489

* gitlab ci: releasable_branches: change variables condition to schedule (#9729)
2018-10-10 18:55:55 +02:00
Afri Schoedon
52fe28a052
Backports for beta 2.1.2 (#9649)
* parity-version: bump beta to 2.1.2

* docs(rpc): push the branch along with tags (#9578)

* docs(rpc): push the branch along with tags

* ci: remove old rpc docs script

* Remove snapcraft clean (#9585)

* Revert " add snapcraft package image (master) (#9584)"

This reverts commit ceaedbbd7f.

* Update package-snap.sh

* Update .gitlab-ci.yml

* ci: fix regex 🙄 (#9597)

* docs(rpc): annotate tag with the provided message (#9601)

* Update ropsten.json (#9602)

* HF in POA Sokol (2018-09-19) (#9607)

https://github.com/poanetwork/poa-chain-spec/pull/86

* fix(network): don't disconnect reserved peers (#9608)

The priority of && and || was borked.

* fix failing node-table tests on mac os, closes #9632 (#9633)

* ethcore-io retries failed work steal (#9651)

* ethcore-io uses newer version of crossbeam && retries failed work steal

* ethcore-io non-mio service uses newer crossbeam

* remove master from releasable branches (#9655)

* remove master from releasable branches

need backporting in beta 
fix https://gitlab.parity.io/parity/parity-ethereum/-/jobs/101065 etc

* add except for snap packages for master

* Test fix for windows cache name... (#9658)

* Test fix for windows cache name...

* Fix variable name.

* fix(light_fetch): avoid race with BlockNumber::Latest (#9665)

* Calculate sha3 instead of sha256 for push-release. (#9673)

* Calculate sha3 instead of sha256 for push-release.

* Add pushes to the script.

* Hardfork the testnets (#9562)

* ethcore: propose hardfork block number 4230000 for ropsten

* ethcore: propose hardfork block number 9000000 for kovan

* ethcore: enable kip-4 and kip-6 on kovan

* ethcore: bump kovan hardfork to block 9.2M

* ethcore: fix ropsten constantinople block number to 4.2M

* ethcore: disable difficulty_test_ropsten until ethereum/tests are updated upstream

* ci: fix push script (#9679)

* ci: fix push script

* Fix copying & running on windows.

* CI: Remove unnecessary pipes (#9681)

* ci: reduce gitlab pipelines significantly

* ci: build pipeline for PR

* ci: remove dead weight

* ci: remove github release script

* ci: remove forever broken aura tests

* ci: add random stuff to the end of the pipes

* ci: add wind and mac to the end of the pipe

* ci: remove snap artifacts

* ci: (re)move dockerfiles

* ci: clarify job names

* ci: add cargo audit job

* ci: make audit script executable

* ci: ignore snap and docker files for rust check

* ci: simplify audit script

* ci: rename misc to optional

* ci: add publish script to releaseable branches

* ci: more verbose cp command for windows build

* ci: fix weird binary checksum logic in push script

* ci: fix regex in push script for windows

* ci: simplify gitlab caching

* docs: align README with ci changes

* ci: specify default cargo target dir

* ci: print verbose environment

* ci: proper naming of scripts

* ci: restore docker files

* ci: use docker hub file

* ci: use cargo home instead of cargo target dir

* ci: touch random rust file to trigger real builds

* ci: set cargo target dir for audit script

* ci: remove temp file

* ci: don't export the cargo target dir in the audit script

* ci: fix windows unbound variable

* docs: fix gitlab badge path

* rename deprecated gitlab ci variables

https://docs.gitlab.com/ee/ci/variables/#9-0-renaming

* ci: fix git compare for nightly builds

* test: skip c++ example for all platforms but linux

* ci: add random rust file to trigger tests

* ci: remove random rust file

* disable cpp lib test for mac, win and beta (#9686)

* cleanup ci merge

* ci: fix tests

* fix bad-block reporting no reason (#9638)

* ethcore: fix detection of major import (#9552)

* sync: set state to idle after sync is completed

* sync: refactor sync reset

* Don't hash the init_code of CREATE. (#9688)

* Docker: run as parity user (#9689)

* Implement CREATE2 gas changes and fix some potential overflowing (#9694)

* Implement CREATE2 gas changes and fix some potential overflowing

* Ignore create2 state tests

* Split CREATE and CREATE2 in gasometer

* Generalize rounding (x + 31) / 32 to to_word_size
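
A worked example of the two changes above, using the standard gas schedule values; the real code uses overflow-checked arithmetic as shown in the gasometer diff below.

```rust
// Rounding a byte length up to 32-byte words, and the extra per-word hashing
// cost CREATE2 pays for its init code (its address is derived from that hash).
fn to_word_size(bytes: u64) -> u64 {
    (bytes + 31) / 32
}

fn create2_gas(create_gas: u64, sha3_word_gas: u64, init_code_len: u64) -> u64 {
    create_gas + sha3_word_gas * to_word_size(init_code_len)
}

fn main() {
    assert_eq!(to_word_size(100), 4); // 100 bytes round up to 4 words
    // with the usual schedule (32_000 create gas, 6 gas per hashed word):
    assert_eq!(create2_gas(32_000, 6, 100), 32_024);
}
```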

* make instantSeal engine backwards compatible, closes #9696 (#9700)

* ethcore: delay ropsten hardfork (#9704)

* fix (light/provider) : Make `read_only executions` read-only (#9591)

* `ExecutionsRequest` from light-clients as read-only

This changes it so all `ExecutionRequests` from light clients are executed
as read-only, which the `virtual` flag == true ensures.

This lets the current transaction always succeed

Note, this only affects `eth_estimateGas` and `eth_call` AFAIK.

* grumbles(revert renaming) : TransactionProof

* grumbles(trace) : remove incorrect trace

* grumbles(state/prove_tx) : explicit `virt`

Remove the boolean flag that determined whether a `state::prove_transaction`
should be executed in a virtual context or not.

Because of that, also rename the function to
`state::prove_transaction_virtual` to make this clearer

* CI: Skip docs job for nightly (#9693)

* ci: force-tag wiki changes

* ci: force-tag wiki changes

* ci: skip docs job for master and nightly

* ci: revert docs job checking for nightly tag

* ci: exclude docs job from nightly builds in gitlab script
2018-10-09 15:04:30 +02:00
gabriel klawitter
8e347b2602 rpc-docs should push the branch (#9611) 2018-09-21 14:38:28 +02:00
gabriel klawitter
a5dcaf7d21 beta: rpc-docs set github token (#9609) 2018-09-20 17:19:42 +03:00
Afri Schoedon
cb09330cb3
Backports for 2.1.1 beta (#9599)
* parity: bump version to 2.1.1 beta

* ci: fix regex 🙄

* docs(rpc): annotate tag with the provided message
2018-09-19 20:31:26 +02:00
Denis S. Soldatov aka General-Beck
363ad10906 add snapcraft package image (#9583)
* add snapcraft package image

* Update .gitlab-ci.yml

* remove snapcraft clean
2018-09-18 15:32:05 +02:00
Afri Schoedon
d147700046
Backports for 2.1.0 beta (#9518)
* parity-version: mark 2.1.0 track beta

* ci: update branch version references

* docker: release master to latest

* Fix checkpointing when creating contract failed (#9514)

* ci: fix json docs generation (#9515)

* fix typo in version string (#9516)

* Update patricia trie to 0.2.2 crates. Default dependencies on minor
version only.

* Putting back ethereum tests to the right commit

* Enable all Constantinople hard fork changes in constantinople_test.json (#9505)

* Enable all Constantinople hard fork changes in constantinople_test.json

* Address grumbles

* Remove EIP-210 activation

* 8m -> 5m

* Temporarily add back eip210 transition so we can get test passed

* Add eip210_test and remove eip210 transition from const_test

* In create, the memory calculation is the same for create2 because the additional parameter was popped before. (#9522)

* deps: bump fs-swap and kvdb-rocksdb

* Multithreaded snapshot creation (#9239)

* Add Progress to Snapshot Secondary chunks creation

* Use half of CPUs to multithread snapshot creation

* Use env var to define number of threads

* info to debug logs

* Add Snapshot threads as CLI option

* Randomize chunks per thread

* Remove randomness, add debugging

* Add warning

* Add tracing

* Use parity-common fix seek branch

* Fix log

* Fix tests

* Fix tests

* PR Grumbles

* PR Grumble II

* Update Cargo.lock

* PR Grumbles

* Default snapshot threads to half number of CPUs

* Fix default snapshot threads // min 1

* correct before_script for nightly build versions (#9543)

- fix gitlab array of strings syntax error
- get proper commit id
- avoid colon in strings

* Remove initial token for WS. (#9545)

* version: mark release critical

* ci: fix rpc docs generation 2 (#9550)

* Improve P2P discovery (#9526)

* Add `target` to Rust traces

* network-devp2p: Don't remove discovery peer in main sync

* network-p2p: Refresh discovery more often

* Update Peer discovery protocol

* Run discovery more often when not enough nodes connected

* Start the first discovery early

* Update fast discovery rate

* Fix tests

* Fix `ping` tests

* Fixing remote Node address; adding PingPong round

* Fix tests: update new +1 PingPong round

* Increase slow Discovery rate
Check in flight FindNode before pings

* Add `deprecated` to deprecated_echo_hash

* Refactor `discovery_round` branching

* net_version caches network_id to avoid redundant acquire of sync read lock (#9544)

* net_version caches network_id to avoid redundant acquire of sync read lock, #8746

* use lower_hex display formatting for net_peerCount rpc method
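
A sketch of the caching idea with hypothetical types (the real change lives in the RPC net module):

```rust
// The network id never changes for the lifetime of the handler, so it is read
// once at construction instead of taking the sync read lock on every call.
use std::sync::{Arc, RwLock};

struct SyncState {
    network_id: u64,
}

struct NetRpc {
    cached_network_id: u64,
    _sync: Arc<RwLock<SyncState>>, // still available for calls that need live data
}

impl NetRpc {
    fn new(sync: Arc<RwLock<SyncState>>) -> Self {
        let cached_network_id = sync.read().unwrap().network_id; // one-time acquire
        NetRpc { cached_network_id, _sync: sync }
    }

    fn net_version(&self) -> String {
        self.cached_network_id.to_string() // no lock on the hot path
    }
}

fn main() {
    let sync = Arc::new(RwLock::new(SyncState { network_id: 1 }));
    let rpc = NetRpc::new(sync);
    assert_eq!(rpc.net_version(), "1");
}
```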

* Increase Gas-floor-target and Gas Cap (#9564)

+ Gas-floor-target increased to 8M by default

+ Gas-cap increased to 10M by default

* Revert to old parity-tokio-ipc.

* Downgrade named pipes.
2018-09-17 19:22:30 +02:00
196 changed files with 4814 additions and 2723 deletions


@ -1,37 +1,22 @@
stages:
- test
- build
- package
- publish
- docs
- optional
image: parity/rust:gitlab-ci
variables:
CI_SERVER_NAME: "GitLab CI"
CARGO_HOME: "${CI_PROJECT_DIR}/cargo"
BUILD_TARGET: ubuntu
BUILD_ARCH: amd64
CARGO_HOME: "${CI_PROJECT_DIR}/.cargo"
CARGO_TARGET: x86_64-unknown-linux-gnu
cache:
key: "${CI_JOB_NAME}"
paths:
- ${CI_PROJECT_DIR}/target/
- ${CI_PROJECT_DIR}/cargo/
.releaseable_branches: # list of git refs for building GitLab artifacts (think "pre-release binaries")
only: &releaseable_branches
- master
- stable
- beta
- tags
.publishable_branches: # list of git refs for publishing builds to the "production" locations
only: &publishable_branches
- nightly # Our nightly builds from schedule, on `master`
- "v2*" # Our version tags
- schedules
.collect_artifacts: &collect_artifacts
artifacts:
@ -41,68 +26,32 @@ cache:
paths:
- artifacts/
.determine_version:
before_script: &determine_version
- >
VERSION="$(sed -r -n '1,/^version/s/^version = "([^"]+)".*$/\1/p' < Cargo.toml)";
if [ "${CI_COMMIT_REF_NAME}" = "nightly" ]; then
COMMIT_REF_SHORT="echo ${CI_COMMIT_REF} | grep -oE '^.{7}')";
DATE_STRING="$(date +%Y%m%d)";
export VERSION="${VERSION}-${COMMIT_REF_SHORT}-${DATE_STRING}";
fi;
export VERSION;
echo "Version: $VERSION"
.determine_version: &determine_version
- VERSION="$(sed -r -n '1,/^version/s/^version = "([^"]+)".*$/\1/p' Cargo.toml)"
- DATE_STR="$(date +%Y%m%d)"
- ID_SHORT="$(echo ${CI_COMMIT_SHA} | cut -c 1-7)"
- test "${CI_COMMIT_REF_NAME}" = "nightly" && VERSION="${VERSION}-${ID_SHORT}-${DATE_STR}"
- export VERSION
- echo "Version = ${VERSION}"
#### stage: test
test-rust-stable: &test
test-linux:
stage: test
variables:
RUN_TESTS: all
script:
- scripts/gitlab/test.sh stable
- scripts/gitlab/test-all.sh stable
tags:
- rust-stable
.optional_test: &optional_test
<<: *test
allow_failure: true
only:
- master
test-rust-beta:
<<: *optional_test
script:
- scripts/gitlab/test.sh beta
test-rust-nightly:
<<: *optional_test
script:
- scripts/gitlab/test.sh nightly
test-lint-rustfmt:
<<: *optional_test
script:
- scripts/gitlab/rustfmt.sh
test-lint-clippy:
<<: *optional_test
script:
- scripts/gitlab/clippy.sh
test-coverage-kcov:
test-audit:
stage: test
only:
- master
script:
- scripts/gitlab/coverage.sh
- scripts/gitlab/cargo-audit.sh
tags:
- shell
- rust-stable
allow_failure: true
#### stage: build
build-linux-ubuntu-amd64: &build
build-linux:
stage: build
only: *releaseable_branches
variables:
@ -112,59 +61,23 @@ build-linux-ubuntu-amd64: &build
<<: *collect_artifacts
tags:
- rust-stable
allow_failure: true
build-linux-ubuntu-i386:
<<: *build
image: parity/rust-i686:gitlab-ci
variables:
CARGO_TARGET: i686-unknown-linux-gnu
tags:
- rust-i686
build-linux-ubuntu-arm64:
<<: *build
image: parity/rust-arm64:gitlab-ci
variables:
CARGO_TARGET: aarch64-unknown-linux-gnu
tags:
- rust-arm
build-linux-ubuntu-armhf:
<<: *build
image: parity/rust-armv7:gitlab-ci
variables:
CARGO_TARGET: armv7-unknown-linux-gnueabihf
tags:
- rust-arm
build-linux-android-armhf:
<<: *build
image: parity/rust-android:gitlab-ci
variables:
CARGO_TARGET: armv7-linux-androideabi
tags:
- rust-arm
build-darwin-macos-x86_64:
<<: *build
build-darwin:
stage: build
only: *releaseable_branches
variables:
CARGO_TARGET: x86_64-apple-darwin
CC: gcc
CXX: g++
script:
- scripts/gitlab/build-unix.sh
tags:
- osx
- rust-osx
<<: *collect_artifacts
build-windows-msvc-x86_64:
build-windows:
stage: build
only: *releaseable_branches
cache:
key: "%CI_JOB_NAME%"
paths:
- "%CI_PROJECT_DIR%/target/"
- "%CI_PROJECT_DIR%/cargo/"
# No cargo caching, since fetch-locking on Windows gets stuck
variables:
CARGO_TARGET: x86_64-pc-windows-msvc
script:
@ -173,131 +86,70 @@ build-windows-msvc-x86_64:
- rust-windows
<<: *collect_artifacts
#### stage: package
package-linux-snap-amd64: &package_snap
stage: package
publish-docker:
stage: publish
only: *releaseable_branches
cache: {}
before_script: *determine_version
variables:
CARGO_TARGET: x86_64-unknown-linux-gnu
dependencies:
- build-linux-ubuntu-amd64
script:
- scripts/gitlab/package-snap.sh
tags:
- rust-stable
<<: *collect_artifacts
package-linux-snap-i386:
<<: *package_snap
variables:
BUILD_ARCH: i386
CARGO_TARGET: i686-unknown-linux-gnu
dependencies:
- build-linux-ubuntu-i386
package-linux-snap-arm64:
<<: *package_snap
variables:
BUILD_ARCH: arm64
CARGO_TARGET: aarch64-unknown-linux-gnu
dependencies:
- build-linux-ubuntu-arm64
package-linux-snap-armhf:
<<: *package_snap
variables:
BUILD_ARCH: armhf
CARGO_TARGET: armv7-unknown-linux-gnueabihf
dependencies:
- build-linux-ubuntu-armhf
#### stage: publish
publish-linux-snap-amd64: &publish_snap
stage: publish
only: *publishable_branches
image: snapcore/snapcraft:stable
cache: {}
before_script: *determine_version
variables:
BUILD_ARCH: amd64
dependencies:
- package-linux-snap-amd64
script:
- scripts/gitlab/publish-snap.sh
tags:
- rust-stable
publish-linux-snap-i386:
<<: *publish_snap
variables:
BUILD_ARCH: i386
dependencies:
- package-linux-snap-i386
publish-linux-snap-arm64:
<<: *publish_snap
variables:
BUILD_ARCH: arm64
dependencies:
- package-linux-snap-arm64
publish-linux-snap-armhf:
<<: *publish_snap
variables:
BUILD_ARCH: armhf
dependencies:
- package-linux-snap-armhf
publish-docker-parity-amd64: &publish_docker
stage: publish
only: *publishable_branches
cache: {}
dependencies:
- build-linux-ubuntu-amd64
- build-linux
tags:
- shell
allow_failure: true
script:
- scripts/gitlab/publish-docker.sh parity
publish-docker-parityevm-amd64:
<<: *publish_docker
script:
- scripts/gitlab/publish-docker.sh parity-evm
publish-github-and-s3:
publish-awss3:
stage: publish
only: *publishable_branches
only: *releaseable_branches
cache: {}
dependencies:
- build-linux-ubuntu-amd64
- build-linux-ubuntu-i386
- build-linux-ubuntu-armhf
- build-linux-ubuntu-arm64
- build-darwin-macos-x86_64
- build-windows-msvc-x86_64
- build-linux
- build-darwin
- build-windows
before_script: *determine_version
script:
- scripts/gitlab/push.sh
- scripts/gitlab/publish-awss3.sh
tags:
- shell
allow_failure: true
####stage: docs
docs-rpc-json:
stage: docs
publish-docs:
stage: publish
only:
- tags
except:
- nightly
cache: {}
script:
- scripts/gitlab/rpc-docs.sh
- scripts/gitlab/publish-docs.sh
tags:
- shell
build-android:
stage: optional
image: parity/rust-android:gitlab-ci
variables:
CARGO_TARGET: armv7-linux-androideabi
script:
- scripts/gitlab/build-unix.sh
tags:
- rust-arm
allow_failure: true
test-beta:
stage: optional
variables:
RUN_TESTS: cargo
script:
- scripts/gitlab/test-all.sh beta
tags:
- rust-beta
allow_failure: true
test-nightly:
stage: optional
variables:
RUN_TESTS: all
script:
- scripts/gitlab/test-all.sh nightly
tags:
- rust-nightly
allow_failure: true

Cargo.lock (generated): 837 changed lines

File diff suppressed because it is too large.


@ -2,7 +2,7 @@
description = "Parity Ethereum client"
name = "parity-ethereum"
# NOTE Make sure to update util/version/Cargo.toml as well
version = "2.1.0"
version = "2.1.10"
license = "GPL-3.0"
authors = ["Parity Technologies <admin@parity.io>"]
@ -46,7 +46,7 @@ ethcore-transaction = { path = "ethcore/transaction" }
ethereum-types = "0.4"
node-filter = { path = "ethcore/node_filter" }
ethkey = { path = "ethkey" }
rlp = { version = "0.2.4", features = ["ethereum"] }
rlp = { version = "0.3.0", features = ["ethereum"] }
rpc-cli = { path = "rpc_cli" }
parity-hash-fetch = { path = "hash-fetch" }
parity-ipfs-api = { path = "ipfs" }
@ -84,7 +84,7 @@ fake-fetch = { path = "util/fake-fetch" }
winapi = { version = "0.3.4", features = ["winsock2", "winuser", "shellapi"] }
[target.'cfg(not(windows))'.dependencies]
daemonize = { git = "https://github.com/paritytech/daemonize" }
daemonize = "0.3"
[features]
miner-debug = ["ethcore/miner-debug"]
@ -140,3 +140,5 @@ members = [
[patch.crates-io]
ring = { git = "https://github.com/paritytech/ring" }
tokio-proto = { git = "https://github.com/tokio-rs/tokio-proto.git", rev = "56c720e" }
untrusted = { git = "https://github.com/paritytech/untrusted", branch = "0.5" }


@ -4,9 +4,7 @@
<p align="center"><strong><a href="https://github.com/paritytech/parity-ethereum/releases/latest">» Download the latest release «</a></strong></p>
<p align="center"><a href="https://gitlab.parity.io/parity/parity/commits/master" target="_blank"><img src="https://gitlab.parity.io/parity/parity/badges/master/build.svg" /></a>
<a href="https://codecov.io/gh/paritytech/parity-ethereum" target="_blank"><img src="https://codecov.io/gh/paritytech/parity-ethereum/branch/master/graph/badge.svg" /></a>
<a href="https://build.snapcraft.io/user/paritytech/parity" target="_blank"><img src="https://build.snapcraft.io/badge/paritytech/parity.svg" /></a>
<p align="center"><a href="https://gitlab.parity.io/parity/parity-ethereum/commits/master" target="_blank"><img src="https://gitlab.parity.io/parity/parity-ethereum/badges/master/build.svg" /></a>
<a href="https://www.gnu.org/licenses/gpl-3.0.en.html" target="_blank"><img src="https://img.shields.io/badge/license-GPL%20v3-green.svg" /></a></p>
**Built for mission-critical use**: Miners, service providers, and exchanges need fast synchronisation and maximum uptime. Parity Ethereum provides the core infrastructure essential for speedy and reliable services.
@ -25,11 +23,11 @@ By default, Parity Ethereum runs a JSON-RPC HTTP server on port `:8545` and a We
If you run into problems while using Parity Ethereum, check out the [wiki for documentation](https://wiki.parity.io/), feel free to [file an issue in this repository](https://github.com/paritytech/parity-ethereum/issues/new), or hop on our [Gitter](https://gitter.im/paritytech/parity) or [Riot](https://riot.im/app/#/group/+parity:matrix.parity.io) chat room to ask a question. We are glad to help! **For security-critical issues**, please refer to the security policy outlined in [SECURITY.md](SECURITY.md).
Parity Ethereum's current beta-release is 2.0. You can download it at [the releases page](https://github.com/paritytech/parity-ethereum/releases) or follow the instructions below to build from source. Please, mind the [CHANGELOG.md](CHANGELOG.md) for a list of all changes between different versions.
Parity Ethereum's current beta-release is 2.1. You can download it at [the releases page](https://github.com/paritytech/parity-ethereum/releases) or follow the instructions below to build from source. Please, mind the [CHANGELOG.md](CHANGELOG.md) for a list of all changes between different versions.
## Build Dependencies
Parity Ethereum requires **Rust version 1.28.x** to build.
Parity Ethereum requires **Rust version 1.29.x** to build.
We recommend installing Rust through [rustup](https://www.rustup.rs/). If you don't already have `rustup`, you can install it like this:
@ -60,26 +58,6 @@ Once you have `rustup` installed, then you need to install:
Make sure that these binaries are in your `PATH`. After that, you should be able to build Parity Ethereum from source.
## Install from the Snapcraft Store
In any of the [supported Linux distros](https://snapcraft.io/docs/core/install):
```bash
sudo snap install parity
```
Alternatively, if you want to contribute testing the upcoming release:
```bash
sudo snap install parity --beta
```
Moreover, to test the latest code from the master branch:
```bash
sudo snap install parity --edge
```
## Build from Source Code
```bash


@ -1,61 +0,0 @@
FROM ubuntu:xenial
LABEL maintainer="Parity Technologies <devops@parity.io>"
RUN apt-get update && \
apt-get install -yq sudo curl file build-essential wget git g++ cmake pkg-config bison flex \
unzip lib32stdc++6 lib32z1 python autotools-dev automake autoconf libtool \
gperf xsltproc docbook-xsl
# Rust & Cargo
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
ENV PATH /root/.cargo/bin:$PATH
RUN rustup toolchain install stable
RUN rustup target add --toolchain stable arm-linux-androideabi
RUN rustup target add --toolchain stable armv7-linux-androideabi
# Android NDK and toolchain
RUN cd /usr/local && \
wget -q https://dl.google.com/android/repository/android-ndk-r16b-linux-x86_64.zip && \
unzip -q android-ndk-r16b-linux-x86_64.zip && \
rm android-ndk-r16b-linux-x86_64.zip
ENV NDK_HOME /usr/local/android-ndk-r16b
RUN /usr/local/android-ndk-r16b/build/tools/make-standalone-toolchain.sh \
--arch=arm --install-dir=/opt/ndk-standalone --stl=libc++ --platform=android-26
ENV PATH $PATH:/opt/ndk-standalone/bin
# Compiling libudev for Android
# This is the most hacky part of the process, as we need to apply a patch and pass specific
# options that the compiler environment doesn't define.
RUN cd /root && \
git clone https://github.com/gentoo/eudev.git
ADD libudev.patch /root
RUN cd /root/eudev && \
git checkout 83d918449f22720d84a341a05e24b6d109e6d3ae && \
./autogen.sh && \
./configure --disable-introspection --disable-programs --disable-hwdb \
--host=arm-linux-androideabi --prefix=/opt/ndk-standalone/sysroot/usr/ \
--enable-shared=false CC=arm-linux-androideabi-clang \
CFLAGS="-D LINE_MAX=2048 -D RLIMIT_NLIMITS=15 -D IPTOS_LOWCOST=2 -std=gnu99" \
CXX=arm-linux-androideabi-clang++ && \
git apply - < /root/libudev.patch && \
make && \
make install
RUN rm -rf /root/eudev
RUN rm /root/libudev.patch
# Rust-related configuration
ADD cargo-config.toml /root/.cargo/config
ENV ARM_LINUX_ANDROIDEABI_OPENSSL_DIR /opt/ndk-standalone/sysroot/usr
ENV ARMV7_LINUX_ANDROIDEABI_OPENSSL_DIR /opt/ndk-standalone/sysroot/usr
ENV CC_arm_linux_androideabi arm-linux-androideabi-clang
ENV CC_armv7_linux_androideabi arm-linux-androideabi-clang
ENV CXX_arm_linux_androideabi arm-linux-androideabi-clang++
ENV CXX_armv7_linux_androideabi arm-linux-androideabi-clang++
ENV AR_arm_linux_androideabi arm-linux-androideabi-ar
ENV AR_armv7_linux_androideabi arm-linux-androideabi-ar
ENV CFLAGS_arm_linux_androideabi -std=gnu11 -fPIC -D OS_ANDROID
ENV CFLAGS_armv7_linux_androideabi -std=gnu11 -fPIC -D OS_ANDROID
ENV CXXFLAGS_arm_linux_androideabi -std=gnu++11 -fPIC -fexceptions -frtti -static-libstdc++ -D OS_ANDROID
ENV CXXFLAGS_armv7_linux_androideabi -std=gnu++11 -fPIC -fexceptions -frtti -static-libstdc++ -D OS_ANDROID
ENV CXXSTDLIB_arm_linux_androideabi ""
ENV CXXSTDLIB_armv7_linux_androideabi ""


@ -1,9 +0,0 @@
[target.armv7-linux-androideabi]
linker = "arm-linux-androideabi-clang"
ar = "arm-linux-androideabi-ar"
rustflags = ["-C", "link-arg=-lc++_static", "-C", "link-arg=-lc++abi", "-C", "link-arg=-landroid_support"]
[target.arm-linux-androideabi]
linker = "arm-linux-androideabi-clang"
ar = "arm-linux-androideabi-ar"
rustflags = ["-C", "link-arg=-lc++_static", "-C", "link-arg=-lc++abi", "-C", "link-arg=-landroid_support"]


@ -1,216 +0,0 @@
diff --git a/src/collect/collect.c b/src/collect/collect.c
index 2cf1f00..b24f26b 100644
--- a/src/collect/collect.c
+++ b/src/collect/collect.c
@@ -84,7 +84,7 @@ static void usage(void)
" invoked for each ID in <idlist>) collect returns 0, the\n"
" number of missing IDs otherwise.\n"
" On error a negative number is returned.\n\n"
- , program_invocation_short_name);
+ , "parity");
}
/*
diff --git a/src/scsi_id/scsi_id.c b/src/scsi_id/scsi_id.c
index 8b76d87..7bf3948 100644
--- a/src/scsi_id/scsi_id.c
+++ b/src/scsi_id/scsi_id.c
@@ -321,7 +321,7 @@ static void help(void) {
" -u --replace-whitespace Replace all whitespace by underscores\n"
" -v --verbose Verbose logging\n"
" -x --export Print values as environment keys\n"
- , program_invocation_short_name);
+ , "parity");
}
diff --git a/src/shared/hashmap.h b/src/shared/hashmap.h
index a03ee58..a7c2005 100644
--- a/src/shared/hashmap.h
+++ b/src/shared/hashmap.h
@@ -98,10 +98,7 @@ extern const struct hash_ops uint64_hash_ops;
#if SIZEOF_DEV_T != 8
unsigned long devt_hash_func(const void *p, const uint8_t hash_key[HASH_KEY_SIZE]) _pure_;
int devt_compare_func(const void *a, const void *b) _pure_;
-extern const struct hash_ops devt_hash_ops = {
- .hash = devt_hash_func,
- .compare = devt_compare_func
-};
+extern const struct hash_ops devt_hash_ops;
#else
#define devt_hash_func uint64_hash_func
#define devt_compare_func uint64_compare_func
diff --git a/src/shared/log.c b/src/shared/log.c
index 4a40996..1496984 100644
--- a/src/shared/log.c
+++ b/src/shared/log.c
@@ -335,7 +335,7 @@ static int write_to_syslog(
IOVEC_SET_STRING(iovec[0], header_priority);
IOVEC_SET_STRING(iovec[1], header_time);
- IOVEC_SET_STRING(iovec[2], program_invocation_short_name);
+ IOVEC_SET_STRING(iovec[2], "parity");
IOVEC_SET_STRING(iovec[3], header_pid);
IOVEC_SET_STRING(iovec[4], buffer);
@@ -383,7 +383,7 @@ static int write_to_kmsg(
char_array_0(header_pid);
IOVEC_SET_STRING(iovec[0], header_priority);
- IOVEC_SET_STRING(iovec[1], program_invocation_short_name);
+ IOVEC_SET_STRING(iovec[1], "parity");
IOVEC_SET_STRING(iovec[2], header_pid);
IOVEC_SET_STRING(iovec[3], buffer);
IOVEC_SET_STRING(iovec[4], "\n");
diff --git a/src/udev/udevadm-control.c b/src/udev/udevadm-control.c
index 6af7163..3271e56 100644
--- a/src/udev/udevadm-control.c
+++ b/src/udev/udevadm-control.c
@@ -41,7 +41,7 @@ static void print_help(void) {
" -p --property=KEY=VALUE Set a global property for all events\n"
" -m --children-max=N Maximum number of children\n"
" --timeout=SECONDS Maximum time to block for a reply\n"
- , program_invocation_short_name);
+ , "parity");
}
static int adm_control(struct udev *udev, int argc, char *argv[]) {
diff --git a/src/udev/udevadm-info.c b/src/udev/udevadm-info.c
index 0aec976..a31ac02 100644
--- a/src/udev/udevadm-info.c
+++ b/src/udev/udevadm-info.c
@@ -279,7 +279,7 @@ static void help(void) {
" -P --export-prefix Export the key name with a prefix\n"
" -e --export-db Export the content of the udev database\n"
" -c --cleanup-db Clean up the udev database\n"
- , program_invocation_short_name);
+ , "parity");
}
static int uinfo(struct udev *udev, int argc, char *argv[]) {
diff --git a/src/udev/udevadm-monitor.c b/src/udev/udevadm-monitor.c
index 15ded09..b58dd08 100644
--- a/src/udev/udevadm-monitor.c
+++ b/src/udev/udevadm-monitor.c
@@ -73,7 +73,7 @@ static void help(void) {
" -u --udev Print udev events\n"
" -s --subsystem-match=SUBSYSTEM[/DEVTYPE] Filter events by subsystem\n"
" -t --tag-match=TAG Filter events by tag\n"
- , program_invocation_short_name);
+ , "parity");
}
static int adm_monitor(struct udev *udev, int argc, char *argv[]) {
diff --git a/src/udev/udevadm-settle.c b/src/udev/udevadm-settle.c
index 33597bc..b36a504 100644
--- a/src/udev/udevadm-settle.c
+++ b/src/udev/udevadm-settle.c
@@ -43,7 +43,7 @@ static void help(void) {
" --version Show package version\n"
" -t --timeout=SECONDS Maximum time to wait for events\n"
" -E --exit-if-exists=FILE Stop waiting if file exists\n"
- , program_invocation_short_name);
+ , "parity");
}
static int adm_settle(struct udev *udev, int argc, char *argv[]) {
diff --git a/src/udev/udevadm-test-builtin.c b/src/udev/udevadm-test-builtin.c
index baaeca9..50ed812 100644
--- a/src/udev/udevadm-test-builtin.c
+++ b/src/udev/udevadm-test-builtin.c
@@ -39,7 +39,7 @@ static void help(struct udev *udev) {
" -h --help Print this message\n"
" --version Print version of the program\n\n"
"Commands:\n"
- , program_invocation_short_name);
+ , "parity");
udev_builtin_list(udev);
}
diff --git a/src/udev/udevadm-test.c b/src/udev/udevadm-test.c
index 47fd924..a855412 100644
--- a/src/udev/udevadm-test.c
+++ b/src/udev/udevadm-test.c
@@ -39,7 +39,7 @@ static void help(void) {
" --version Show package version\n"
" -a --action=ACTION Set action string\n"
" -N --resolve-names=early|late|never When to resolve names\n"
- , program_invocation_short_name);
+ , "parity");
}
static int adm_test(struct udev *udev, int argc, char *argv[]) {
diff --git a/src/udev/udevadm-trigger.c b/src/udev/udevadm-trigger.c
index 4dc756a..67787d3 100644
--- a/src/udev/udevadm-trigger.c
+++ b/src/udev/udevadm-trigger.c
@@ -92,7 +92,7 @@ static void help(void) {
" -y --sysname-match=NAME Trigger devices with this /sys path\n"
" --name-match=NAME Trigger devices with this /dev name\n"
" -b --parent-match=NAME Trigger devices with that parent device\n"
- , program_invocation_short_name);
+ , "parity");
}
static int adm_trigger(struct udev *udev, int argc, char *argv[]) {
diff --git a/src/udev/udevadm.c b/src/udev/udevadm.c
index 3e57cf6..b03dfaa 100644
--- a/src/udev/udevadm.c
+++ b/src/udev/udevadm.c
@@ -62,7 +62,7 @@ static int adm_help(struct udev *udev, int argc, char *argv[]) {
printf("%s [--help] [--version] [--debug] COMMAND [COMMAND OPTIONS]\n\n"
"Send control commands or test the device manager.\n\n"
"Commands:\n"
- , program_invocation_short_name);
+ , "parity");
for (i = 0; i < ELEMENTSOF(udevadm_cmds); i++)
if (udevadm_cmds[i]->help != NULL)
@@ -128,7 +128,7 @@ int main(int argc, char *argv[]) {
goto out;
}
- fprintf(stderr, "%s: missing or unknown command\n", program_invocation_short_name);
+ fprintf(stderr, "%s: missing or unknown command\n", "parity");
rc = 2;
out:
mac_selinux_finish();
diff --git a/src/udev/udevd.c b/src/udev/udevd.c
index cf826c6..4eec0af 100644
--- a/src/udev/udevd.c
+++ b/src/udev/udevd.c
@@ -1041,7 +1041,7 @@ static void help(void) {
" -t --event-timeout=SECONDS Seconds to wait before terminating an event\n"
" -N --resolve-names=early|late|never\n"
" When to resolve users and groups\n"
- , program_invocation_short_name);
+ , "parity");
}
static int parse_argv(int argc, char *argv[]) {
diff --git a/src/v4l_id/v4l_id.c b/src/v4l_id/v4l_id.c
index 1dce0d5..f65badf 100644
--- a/src/v4l_id/v4l_id.c
+++ b/src/v4l_id/v4l_id.c
@@ -49,7 +49,7 @@ int main(int argc, char *argv[]) {
printf("%s [-h,--help] <device file>\n\n"
"Video4Linux device identification.\n\n"
" -h Print this message\n"
- , program_invocation_short_name);
+ , "parity");
return 0;
case '?':
return -EINVAL;
diff --git a/src/shared/path-util.c b/src/shared/path-util.c
index 0744563..7151356 100644
--- a/src/shared/path-util.c
+++ b/src/shared/path-util.c
@@ -109,7 +109,7 @@ char *path_make_absolute_cwd(const char *p) {
if (path_is_absolute(p))
return strdup(p);
- cwd = get_current_dir_name();
+ cwd = getcwd(malloc(128), 128);
if (!cwd)
return NULL;


@ -16,9 +16,9 @@ crossbeam = "0.3"
ethash = { path = "../ethash" }
ethcore-bloom-journal = { path = "../util/bloom" }
parity-bytes = "0.1"
hashdb = "0.2.1"
memorydb = "0.2.1"
patricia-trie = "0.2.1"
hashdb = "0.3.0"
memorydb = "0.3.0"
patricia-trie = "0.3.0"
patricia-trie-ethereum = { path = "../util/patricia-trie-ethereum" }
parity-crypto = "0.1"
error-chain = { version = "0.12", default-features = false }
@ -47,7 +47,7 @@ parity-machine = { path = "../machine" }
parking_lot = "0.6"
rayon = "1.0"
rand = "0.4"
rlp = { version = "0.2.4", features = ["ethereum"] }
rlp = { version = "0.3.0", features = ["ethereum"] }
rlp_compress = { path = "../util/rlp_compress" }
rlp_derive = { path = "../util/rlp_derive" }
kvdb = "0.1"


@ -176,9 +176,8 @@ impl<Gas: evm::CostType> Gasometer<Gas> {
Request::GasMem(default_gas, mem_needed(stack.peek(0), stack.peek(1))?)
},
instructions::SHA3 => {
let w = overflowing!(add_gas_usize(Gas::from_u256(*stack.peek(1))?, 31));
let words = w >> 5;
let gas = Gas::from(schedule.sha3_gas) + (Gas::from(schedule.sha3_word_gas) * words);
let words = overflowing!(to_word_size(Gas::from_u256(*stack.peek(1))?));
let gas = overflowing!(Gas::from(schedule.sha3_gas).overflow_add(overflowing!(Gas::from(schedule.sha3_word_gas).overflow_mul(words))));
Request::GasMem(gas, mem_needed(stack.peek(0), stack.peek(1))?)
},
instructions::CALLDATACOPY | instructions::CODECOPY | instructions::RETURNDATACOPY => {
@ -231,13 +230,24 @@ impl<Gas: evm::CostType> Gasometer<Gas> {
Request::GasMemProvide(gas, mem, Some(requested))
},
instructions::CREATE | instructions::CREATE2 => {
instructions::CREATE => {
let start = stack.peek(1);
let len = stack.peek(2);
let gas = Gas::from(schedule.create_gas);
let mem = match instruction {
instructions::CREATE => mem_needed(stack.peek(1), stack.peek(2))?,
instructions::CREATE2 => mem_needed(stack.peek(2), stack.peek(3))?,
_ => unreachable!("instruction can only be CREATE/CREATE2 checked above; qed"),
};
let mem = mem_needed(start, len)?;
Request::GasMemProvide(gas, mem, None)
},
instructions::CREATE2 => {
let start = stack.peek(1);
let len = stack.peek(2);
let base = Gas::from(schedule.create_gas);
let word = overflowing!(to_word_size(Gas::from_u256(*len)?));
let word_gas = overflowing!(Gas::from(schedule.sha3_word_gas).overflow_mul(word));
let gas = overflowing!(base.overflow_add(word_gas));
let mem = mem_needed(start, len)?;
Request::GasMemProvide(gas, mem, None)
},
@ -287,8 +297,8 @@ impl<Gas: evm::CostType> Gasometer<Gas> {
},
Request::GasMemCopy(gas, mem_size, copy) => {
let (mem_gas_cost, new_mem_gas, new_mem_size) = self.mem_gas_cost(schedule, current_mem_size, &mem_size)?;
let copy = overflowing!(add_gas_usize(copy, 31)) >> 5;
let copy_gas = Gas::from(schedule.copy_gas) * copy;
let copy = overflowing!(to_word_size(copy));
let copy_gas = overflowing!(Gas::from(schedule.copy_gas).overflow_mul(copy));
let gas = overflowing!(gas.overflow_add(copy_gas));
let gas = overflowing!(gas.overflow_add(mem_gas_cost));
@ -315,7 +325,7 @@ impl<Gas: evm::CostType> Gasometer<Gas> {
};
let current_mem_size = Gas::from(current_mem_size);
let req_mem_size_rounded = (overflowing!(mem_size.overflow_add(Gas::from(31 as usize))) >> 5) << 5;
let req_mem_size_rounded = overflowing!(to_word_size(*mem_size)) << 5;
let (mem_gas_cost, new_mem_gas) = if req_mem_size_rounded > current_mem_size {
let new_mem_gas = gas_for_mem(req_mem_size_rounded)?;
@ -347,6 +357,16 @@ fn add_gas_usize<Gas: evm::CostType>(value: Gas, num: usize) -> (Gas, bool) {
value.overflow_add(Gas::from(num))
}
#[inline]
fn to_word_size<Gas: evm::CostType>(value: Gas) -> (Gas, bool) {
let (gas, overflow) = add_gas_usize(value, 31);
if overflow {
return (gas, overflow);
}
(gas >> 5, false)
}
#[inline]
fn calculate_eip1283_sstore_gas<Gas: evm::CostType>(schedule: &Schedule, original: &U256, current: &U256, new: &U256) -> Gas {
Gas::from(
@ -383,7 +403,7 @@ fn calculate_eip1283_sstore_gas<Gas: evm::CostType>(schedule: &Schedule, origina
}
pub fn handle_eip1283_sstore_clears_refund(ext: &mut vm::Ext, original: &U256, current: &U256, new: &U256) {
let sstore_clears_schedule = U256::from(ext.schedule().sstore_refund_gas);
let sstore_clears_schedule = ext.schedule().sstore_refund_gas;
if current == new {
// 1. If current value equals new value (this is a no-op), 200 gas is deducted.
@ -418,11 +438,11 @@ pub fn handle_eip1283_sstore_clears_refund(ext: &mut vm::Ext, original: &U256, c
// 2.2.2. If original value equals new value (this storage slot is reset)
if original.is_zero() {
// 2.2.2.1. If original value is 0, add 19800 gas to refund counter.
let refund = U256::from(ext.schedule().sstore_set_gas - ext.schedule().sload_gas);
let refund = ext.schedule().sstore_set_gas - ext.schedule().sload_gas;
ext.add_sstore_refund(refund);
} else {
// 2.2.2.2. Otherwise, add 4800 gas to refund counter.
let refund = U256::from(ext.schedule().sstore_reset_gas - ext.schedule().sload_gas);
let refund = ext.schedule().sstore_reset_gas - ext.schedule().sload_gas;
ext.add_sstore_refund(refund);
}
}


@ -306,8 +306,7 @@ impl<Cost: CostType> Interpreter<Cost> {
match result {
InstructionResult::JumpToPosition(position) => {
if self.valid_jump_destinations.is_none() {
let code_hash = self.params.code_hash.clone().unwrap_or_else(|| keccak(self.reader.code.as_ref()));
self.valid_jump_destinations = Some(self.cache.jump_destinations(&code_hash, &self.reader.code));
self.valid_jump_destinations = Some(self.cache.jump_destinations(&self.params.code_hash, &self.reader.code));
}
let jump_destinations = self.valid_jump_destinations.as_ref().expect("jump_destinations are initialized on first jump; qed");
let pos = self.verify_jump(position, jump_destinations)?;
@ -634,7 +633,7 @@ impl<Cost: CostType> Interpreter<Cost> {
gasometer::handle_eip1283_sstore_clears_refund(ext, &original_val, &current_val, &val);
} else {
if !current_val.is_zero() && val.is_zero() {
let sstore_clears_schedule = U256::from(ext.schedule().sstore_refund_gas);
let sstore_clears_schedule = ext.schedule().sstore_refund_gas;
ext.add_sstore_refund(sstore_clears_schedule);
}
}


@ -50,17 +50,22 @@ impl SharedCache {
}
/// Get jump destinations bitmap for a contract.
pub fn jump_destinations(&self, code_hash: &H256, code: &[u8]) -> Arc<BitSet> {
if code_hash == &KECCAK_EMPTY {
return Self::find_jump_destinations(code);
}
pub fn jump_destinations(&self, code_hash: &Option<H256>, code: &[u8]) -> Arc<BitSet> {
if let Some(ref code_hash) = code_hash {
if code_hash == &KECCAK_EMPTY {
return Self::find_jump_destinations(code);
}
if let Some(d) = self.jump_destinations.lock().get_mut(code_hash) {
return d.0.clone();
if let Some(d) = self.jump_destinations.lock().get_mut(code_hash) {
return d.0.clone();
}
}
let d = Self::find_jump_destinations(code);
self.jump_destinations.lock().insert(code_hash.clone(), Bits(d.clone()));
if let Some(ref code_hash) = code_hash {
self.jump_destinations.lock().insert(*code_hash, Bits(d.clone()));
}
d
}


@ -716,7 +716,7 @@ fn test_jumps(factory: super::Factory) {
test_finalize(vm.exec(&mut ext)).unwrap()
};
assert_eq!(ext.sstore_clears, U256::from(ext.schedule().sstore_refund_gas));
assert_eq!(ext.sstore_clears, ext.schedule().sstore_refund_gas as i128);
assert_store(&ext, 0, "0000000000000000000000000000000000000000000000000000000000000000"); // 5!
assert_store(&ext, 1, "0000000000000000000000000000000000000000000000000000000000000078"); // 5!
assert_eq!(gas_left, U256::from(54_117));


@ -12,18 +12,18 @@ ethcore = { path = ".."}
parity-bytes = "0.1"
ethcore-transaction = { path = "../transaction" }
ethereum-types = "0.4"
memorydb = "0.2.1"
patricia-trie = "0.2.1"
memorydb = "0.3.0"
patricia-trie = "0.3.0"
patricia-trie-ethereum = { path = "../../util/patricia-trie-ethereum" }
ethcore-network = { path = "../../util/network" }
ethcore-io = { path = "../../util/io" }
hashdb = "0.2.1"
hashdb = "0.3.0"
heapsize = "0.4"
vm = { path = "../vm" }
fastmap = { path = "../../util/fastmap" }
rlp = { version = "0.2.4", features = ["ethereum"] }
rlp = { version = "0.3.0", features = ["ethereum"] }
rlp_derive = { path = "../../util/rlp_derive" }
smallvec = "0.4"
smallvec = "0.6"
futures = "0.1"
rand = "0.4"
itertools = "0.5"


@ -27,6 +27,7 @@ use ethcore::ids::BlockId;
use ethereum_types::{H256, U256};
use hashdb::HashDB;
use keccak_hasher::KeccakHasher;
use kvdb::DBValue;
use memorydb::MemoryDB;
use bytes::Bytes;
use trie::{TrieMut, Trie, Recorder};
@ -52,13 +53,13 @@ pub const SIZE: u64 = 2048;
/// A canonical hash trie. This is generic over any database it can query.
/// See module docs for more details.
#[derive(Debug, Clone)]
pub struct CHT<DB: HashDB<KeccakHasher>> {
pub struct CHT<DB: HashDB<KeccakHasher, DBValue>> {
db: DB,
root: H256, // the root of this CHT.
number: u64,
}
impl<DB: HashDB<KeccakHasher>> CHT<DB> {
impl<DB: HashDB<KeccakHasher, DBValue>> CHT<DB> {
/// Query the root of the CHT.
pub fn root(&self) -> H256 { self.root }
@ -92,10 +93,10 @@ pub struct BlockInfo {
/// Build an in-memory CHT from a closure which provides necessary information
/// about blocks. If the fetcher ever fails to provide the info, the CHT
/// will not be generated.
pub fn build<F>(cht_num: u64, mut fetcher: F) -> Option<CHT<MemoryDB<KeccakHasher>>>
pub fn build<F>(cht_num: u64, mut fetcher: F) -> Option<CHT<MemoryDB<KeccakHasher, DBValue>>>
where F: FnMut(BlockId) -> Option<BlockInfo>
{
let mut db = MemoryDB::<KeccakHasher>::new();
let mut db = MemoryDB::<KeccakHasher, DBValue>::new();
// start from the last block by number and work backwards.
let last_num = start_number(cht_num + 1) - 1;
@ -134,7 +135,7 @@ pub fn compute_root<I>(cht_num: u64, iterable: I) -> Option<H256>
let start_num = start_number(cht_num) as usize;
for (i, (h, td)) in iterable.into_iter().take(SIZE as usize).enumerate() {
v.push((key!(i + start_num).into_vec(), val!(h, td).into_vec()))
v.push((key!(i + start_num), val!(h, td)))
}
if v.len() == SIZE as usize {
@ -149,7 +150,7 @@ pub fn compute_root<I>(cht_num: u64, iterable: I) -> Option<H256>
/// verify the given trie branch and extract the canonical hash and total difficulty.
// TODO: better support for partially-checked queries.
pub fn check_proof(proof: &[Bytes], num: u64, root: H256) -> Option<(H256, U256)> {
let mut db = MemoryDB::<KeccakHasher>::new();
let mut db = MemoryDB::<KeccakHasher, DBValue>::new();
for node in proof { db.insert(&node[..]); }
let res = match TrieDB::new(&db, &root) {


@ -218,7 +218,7 @@ impl HeaderChain {
) -> Result<Self, Error> {
let mut live_epoch_proofs = ::std::collections::HashMap::default();
let genesis = ::rlp::encode(&spec.genesis_header()).into_vec();
let genesis = ::rlp::encode(&spec.genesis_header());
let decoded_header = spec.genesis_header();
let chain = if let Some(current) = db.get(col, CURRENT_KEY)? {


@ -28,7 +28,7 @@ use parking_lot::{Mutex, RwLock};
use provider::Provider;
use request::{Request, NetworkRequests as Requests, Response};
use rlp::{RlpStream, Rlp};
use std::collections::{HashMap, HashSet};
use std::collections::{HashMap, HashSet, VecDeque};
use std::fmt;
use std::ops::{BitOr, BitAnd, Not};
use std::sync::Arc;
@ -38,7 +38,7 @@ use std::time::{Duration, Instant};
use self::request_credits::{Credits, FlowParams};
use self::context::{Ctx, TickCtx};
use self::error::Punishment;
use self::load_timer::{LoadDistribution, NullStore};
use self::load_timer::{LoadDistribution, NullStore, MOVING_SAMPLE_SIZE};
use self::request_set::RequestSet;
use self::id_guard::IdGuard;
@ -70,6 +70,16 @@ const PROPAGATE_TIMEOUT_INTERVAL: Duration = Duration::from_secs(5);
const RECALCULATE_COSTS_TIMEOUT: TimerToken = 3;
const RECALCULATE_COSTS_INTERVAL: Duration = Duration::from_secs(60 * 60);
const STATISTICS_TIMEOUT: TimerToken = 4;
const STATISTICS_INTERVAL: Duration = Duration::from_secs(15);
/// Maximum load share for the light server
pub const MAX_LIGHTSERV_LOAD: f64 = 0.5;
/// Factor to multiply leecher count to cater for
/// extra sudden connections (should be >= 1.0)
pub const LEECHER_COUNT_FACTOR: f64 = 1.25;
// minimum interval between updates.
const UPDATE_INTERVAL: Duration = Duration::from_millis(5000);
@ -256,18 +266,18 @@ pub trait Handler: Send + Sync {
pub struct Config {
/// How many stored seconds of credits peers should be able to accumulate.
pub max_stored_seconds: u64,
/// How much of the total load capacity each peer should be allowed to take.
pub load_share: f64,
/// The network config median peers (used as default peer count)
pub median_peers: f64,
}
impl Default for Config {
fn default() -> Self {
const LOAD_SHARE: f64 = 1.0 / 25.0;
const MEDIAN_PEERS: f64 = 25.0;
const MAX_ACCUMULATED: u64 = 60 * 5; // only charge for 5 minutes.
Config {
max_stored_seconds: MAX_ACCUMULATED,
load_share: LOAD_SHARE,
median_peers: MEDIAN_PEERS,
}
}
}
@ -335,6 +345,42 @@ mod id_guard {
}
}
/// Provides various statistics that could
/// be used to compute costs
pub struct Statistics {
/// Samples of peer count
peer_counts: VecDeque<usize>,
}
impl Statistics {
/// Create a new Statistics instance
pub fn new() -> Self {
Statistics {
peer_counts: VecDeque::with_capacity(MOVING_SAMPLE_SIZE),
}
}
/// Add a new peer_count sample
pub fn add_peer_count(&mut self, peer_count: usize) {
while self.peer_counts.len() >= MOVING_SAMPLE_SIZE {
self.peer_counts.pop_front();
}
self.peer_counts.push_back(peer_count);
}
/// Get the average peer count over the stored samples; always >= 1.0
pub fn avg_peer_count(&self) -> f64 {
let len = self.peer_counts.len();
if len == 0 {
return 1.0;
}
let avg = self.peer_counts.iter()
.fold(0, |sum: u32, &v| sum.saturating_add(v as u32)) as f64
/ len as f64;
avg.max(1.0)
}
}
/// This is an implementation of the light ethereum network protocol, abstracted
/// over a `Provider` of data and a p2p network.
///
@ -359,6 +405,7 @@ pub struct LightProtocol {
req_id: AtomicUsize,
sample_store: Box<SampleStore>,
load_distribution: LoadDistribution,
statistics: RwLock<Statistics>,
}
impl LightProtocol {
@ -369,9 +416,11 @@ impl LightProtocol {
let genesis_hash = provider.chain_info().genesis_hash;
let sample_store = params.sample_store.unwrap_or_else(|| Box::new(NullStore));
let load_distribution = LoadDistribution::load(&*sample_store);
// Default load share relative to median peers
let load_share = MAX_LIGHTSERV_LOAD / params.config.median_peers;
let flow_params = FlowParams::from_request_times(
|kind| load_distribution.expected_time(kind),
params.config.load_share,
load_share,
Duration::from_secs(params.config.max_stored_seconds),
);
@ -389,6 +438,7 @@ impl LightProtocol {
req_id: AtomicUsize::new(0),
sample_store,
load_distribution,
statistics: RwLock::new(Statistics::new()),
}
}
@ -408,6 +458,16 @@ impl LightProtocol {
)
}
/// Get the number of active light peers downloading from the
/// node
pub fn leecher_count(&self) -> usize {
let credit_limit = *self.flow_params.read().limit();
// Count the number of peers that used some credit
self.peers.read().iter()
.filter(|(_, p)| p.lock().local_credits.current() < credit_limit)
.count()
}
/// Make a request to a peer.
///
/// Fails on: nonexistent peer, network error, peer not server,
@ -772,12 +832,16 @@ impl LightProtocol {
fn begin_new_cost_period(&self, io: &IoContext) {
self.load_distribution.end_period(&*self.sample_store);
let avg_peer_count = self.statistics.read().avg_peer_count();
// Load share relative to the average peer count, padded by LEECHER_COUNT_FACTOR
let load_share = MAX_LIGHTSERV_LOAD / (avg_peer_count * LEECHER_COUNT_FACTOR);
let new_params = Arc::new(FlowParams::from_request_times(
|kind| self.load_distribution.expected_time(kind),
self.config.load_share,
load_share,
Duration::from_secs(self.config.max_stored_seconds),
));
*self.flow_params.write() = new_params.clone();
trace!(target: "pip", "New cost period: avg_peers={} ; cost_table:{:?}", avg_peer_count, new_params.cost_table());
let peers = self.peers.read();
let now = Instant::now();
@ -797,6 +861,11 @@ impl LightProtocol {
peer_info.awaiting_acknowledge = Some((now, new_params.clone()));
}
}
fn tick_statistics(&self) {
let leecher_count = self.leecher_count();
self.statistics.write().add_peer_count(leecher_count);
}
}
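Putting the pieces together, an editorial sketch of the recalculation using only the constants and formula visible above (not the actual FlowParams code): each statistics tick samples the current leecher count, and every new cost period converts the resulting average into a per-peer load share of MAX_LIGHTSERV_LOAD / (avg_peer_count * LEECHER_COUNT_FACTOR), so the share shrinks as measured demand grows.

const MAX_LIGHTSERV_LOAD: f64 = 0.5;
const LEECHER_COUNT_FACTOR: f64 = 1.25;

// Per-peer share of server capacity derived from the measured average leecher count.
fn load_share(avg_leechers: f64) -> f64 {
    MAX_LIGHTSERV_LOAD / (avg_leechers.max(1.0) * LEECHER_COUNT_FACTOR)
}

fn main() {
    assert_eq!(load_share(1.0), 0.4);     // a lone leecher may use up to 40% of capacity
    assert_eq!(load_share(25.0), 0.016);  // 25 leechers: 1.6% each
    assert_eq!(load_share(100.0), 0.004); // 100 leechers: 0.4% each
}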
impl LightProtocol {
@ -1099,6 +1168,8 @@ impl NetworkProtocolHandler for LightProtocol {
.expect("Error registering sync timer.");
io.register_timer(RECALCULATE_COSTS_TIMEOUT, RECALCULATE_COSTS_INTERVAL)
.expect("Error registering request timer interval token.");
io.register_timer(STATISTICS_TIMEOUT, STATISTICS_INTERVAL)
.expect("Error registering statistics timer.");
}
fn read(&self, io: &NetworkContext, peer: &PeerId, packet_id: u8, data: &[u8]) {
@ -1119,6 +1190,7 @@ impl NetworkProtocolHandler for LightProtocol {
TICK_TIMEOUT => self.tick_handlers(&io),
PROPAGATE_TIMEOUT => self.propagate_transactions(&io),
RECALCULATE_COSTS_TIMEOUT => self.begin_new_cost_period(&io),
STATISTICS_TIMEOUT => self.tick_statistics(),
_ => warn!(target: "pip", "received timeout on unknown token {}", timer),
}
}

View File

@ -22,9 +22,10 @@ use ethcore::client::{EachBlockWith, TestBlockChainClient};
use ethcore::encoded;
use ethcore::ids::BlockId;
use ethereum_types::{H256, U256, Address};
use net::{LightProtocol, Params, packet, Peer};
use net::{LightProtocol, Params, packet, Peer, Statistics};
use net::context::IoContext;
use net::status::{Capabilities, Status};
use net::load_timer::MOVING_SAMPLE_SIZE;
use network::{PeerId, NodeId};
use provider::Provider;
use request;
@ -150,7 +151,7 @@ impl Provider for TestProvider {
fn storage_proof(&self, req: request::CompleteStorageRequest) -> Option<request::StorageResponse> {
Some(StorageResponse {
proof: vec![::rlp::encode(&req.key_hash).into_vec()],
proof: vec![::rlp::encode(&req.key_hash)],
value: req.key_hash | req.address_hash,
})
}
@ -780,3 +781,34 @@ fn get_transaction_index() {
let expected = Expect::Respond(packet::RESPONSE, response);
proto.handle_packet(&expected, 1, packet::REQUEST, &request_body);
}
#[test]
fn sync_statistics() {
let mut stats = Statistics::new();
// Empty set should return 1.0
assert_eq!(stats.avg_peer_count(), 1.0);
// Average < 1.0 should return 1.0
stats.add_peer_count(0);
assert_eq!(stats.avg_peer_count(), 1.0);
stats = Statistics::new();
const N: f64 = 50.0;
for i in 1..(N as usize + 1) {
stats.add_peer_count(i);
}
// The average of 1..=N is (N * (N + 1) / 2) / N, i.e. (N + 1) / 2 = 25.5 for N = 50
assert_eq!(stats.avg_peer_count(), N * (N + 1.0) / 2.0 / N);
for _ in 1..(MOVING_SAMPLE_SIZE + 1) {
stats.add_peer_count(40);
}
// Test that it returns the average of the last
// `MOVING_SAMPLE_SIZE` values
assert_eq!(stats.avg_peer_count(), 40.0);
}

View File

@ -1156,7 +1156,7 @@ mod tests {
header.set_number(10_000);
header.set_extra_data(b"test_header".to_vec());
let hash = header.hash();
let raw_header = encoded::Header::new(::rlp::encode(&header).into_vec());
let raw_header = encoded::Header::new(::rlp::encode(&header));
let cache = Mutex::new(make_cache());
assert!(HeaderByHash(hash.into()).check_response(&cache, &hash.into(), &[raw_header]).is_ok())
@ -1177,14 +1177,14 @@ mod tests {
headers.reverse(); // because responses are in reverse order
let raw_headers = headers.iter()
.map(|hdr| encoded::Header::new(::rlp::encode(hdr).into_vec()))
.map(|hdr| encoded::Header::new(::rlp::encode(hdr)))
.collect::<Vec<_>>();
let mut invalid_successor = Header::new();
invalid_successor.set_number(11);
invalid_successor.set_parent_hash(headers[1].hash());
let raw_invalid_successor = encoded::Header::new(::rlp::encode(&invalid_successor).into_vec());
let raw_invalid_successor = encoded::Header::new(::rlp::encode(&invalid_successor));
let cache = Mutex::new(make_cache());
@ -1247,10 +1247,10 @@ mod tests {
let mut body_stream = RlpStream::new_list(2);
body_stream.begin_list(0).begin_list(0);
let req = Body(encoded::Header::new(::rlp::encode(&header).into_vec()).into());
let req = Body(encoded::Header::new(::rlp::encode(&header)).into());
let cache = Mutex::new(make_cache());
let response = encoded::Body::new(body_stream.drain().into_vec());
let response = encoded::Body::new(body_stream.drain());
assert!(req.check_response(&cache, &response).is_ok())
}
@ -1270,7 +1270,7 @@ mod tests {
header.set_receipts_root(receipts_root);
let req = BlockReceipts(encoded::Header::new(::rlp::encode(&header).into_vec()).into());
let req = BlockReceipts(encoded::Header::new(::rlp::encode(&header)).into());
let cache = Mutex::new(make_cache());
assert!(req.check_response(&cache, &receipts).is_ok())
@ -1318,7 +1318,7 @@ mod tests {
header.set_state_root(root.clone());
let req = Account {
header: encoded::Header::new(::rlp::encode(&header).into_vec()).into(),
header: encoded::Header::new(::rlp::encode(&header)).into(),
address: addr,
};
@ -1332,7 +1332,7 @@ mod tests {
let code_hash = keccak(&code);
let header = Header::new();
let req = Code {
header: encoded::Header::new(::rlp::encode(&header).into_vec()).into(),
header: encoded::Header::new(::rlp::encode(&header)).into(),
code_hash: code_hash.into(),
};

View File

@ -176,7 +176,7 @@ impl<T: ProvingBlockChainClient + ?Sized> Provider for T {
}
fn block_receipts(&self, req: request::CompleteReceiptsRequest) -> Option<request::ReceiptsResponse> {
BlockChainClient::block_receipts(self, &req.hash)
BlockChainClient::encoded_block_receipts(self, &req.hash)
.map(|x| ::request::ReceiptsResponse { receipts: ::rlp::decode_list(&x) })
}

View File

@ -1676,7 +1676,7 @@ mod tests {
let full_req = Request::Headers(req.clone());
let res = HeadersResponse {
headers: vec![
::ethcore::encoded::Header::new(::rlp::encode(&Header::default()).into_vec())
::ethcore::encoded::Header::new(::rlp::encode(&Header::default()))
]
};
let full_res = Response::Headers(res.clone());

View File

@ -26,10 +26,10 @@ heapsize = "0.4"
keccak-hash = "0.1.2"
log = "0.4"
parking_lot = "0.6"
patricia-trie = "0.2.1"
patricia-trie = "0.3.0"
patricia-trie-ethereum = { path = "../../util/patricia-trie-ethereum" }
rand = "0.3"
rlp = { version = "0.2.4", features = ["ethereum"] }
rlp = { version = "0.3.0", features = ["ethereum"] }
rlp_derive = { path = "../../util/rlp_derive" }
rustc-hex = "1.0"
serde = "1.0"

View File

@ -219,7 +219,7 @@ impl Provider where {
let private_state_hash = self.calculate_state_hash(&private_state, contract_nonce);
trace!(target: "privatetx", "Hashed effective private state for sender: {:?}", private_state_hash);
self.transactions_for_signing.write().add_transaction(private.hash(), signed_transaction, contract_validators, private_state, contract_nonce)?;
self.broadcast_private_transaction(private.hash(), private.rlp_bytes().into_vec());
self.broadcast_private_transaction(private.hash(), private.rlp_bytes());
Ok(Receipt {
hash: tx_hash,
contract_address: Some(contract),
@ -258,13 +258,13 @@ impl Provider where {
match transaction.validator_account {
None => {
trace!(target: "privatetx", "Propagating transaction further");
self.broadcast_private_transaction(private_hash, transaction.private_transaction.rlp_bytes().into_vec());
self.broadcast_private_transaction(private_hash, transaction.private_transaction.rlp_bytes());
return Ok(());
}
Some(validator_account) => {
if !self.validator_accounts.contains(&validator_account) {
trace!(target: "privatetx", "Propagating transaction further");
self.broadcast_private_transaction(private_hash, transaction.private_transaction.rlp_bytes().into_vec());
self.broadcast_private_transaction(private_hash, transaction.private_transaction.rlp_bytes());
return Ok(());
}
let tx_action = transaction.transaction.action.clone();
@ -290,7 +290,7 @@ impl Provider where {
let signed_state = signed_state.expect("Error was checked before");
let signed_private_transaction = SignedPrivateTransaction::new(private_hash, signed_state, None);
trace!(target: "privatetx", "Sending signature for private transaction: {:?}", signed_private_transaction);
self.broadcast_signed_private_transaction(signed_private_transaction.hash(), signed_private_transaction.rlp_bytes().into_vec());
self.broadcast_signed_private_transaction(signed_private_transaction.hash(), signed_private_transaction.rlp_bytes());
} else {
bail!("Incorrect type of action for the transaction");
}
@ -315,7 +315,7 @@ impl Provider where {
let desc = match self.transactions_for_signing.read().get(&private_hash) {
None => {
// Not our transaction, broadcast further to peers
self.broadcast_signed_private_transaction(signed_tx.hash(), signed_tx.rlp_bytes().into_vec());
self.broadcast_signed_private_transaction(signed_tx.hash(), signed_tx.rlp_bytes());
return Ok(());
},
Some(desc) => desc,

View File

@ -3068,17 +3068,18 @@
]
},
"nodes": [
"enode://e809c4a2fec7daed400e5e28564e23693b23b2cc5a019b612505631bbe7b9ccf709c1796d2a3d29ef2b045f210caf51e3c4f5b6d3587d43ad5d6397526fa6179@174.112.32.157:30303",
"enode://6e538e7c1280f0a31ff08b382db5302480f775480b8e68f8febca0ceff81e4b19153c6f8bf60313b93bef2cc34d34e1df41317de0ce613a201d1660a788a03e2@52.206.67.235:30303",
"enode://efd48ad0879eeb7f9cb5e50f33f7bc21e805a72e90361f145baaa22dd75d111e7cd9c93f1b7060dcb30aa1b3e620269336dbf32339fea4c18925a4c15fe642df@18.205.66.229:30303",
"enode://5fbfb426fbb46f8b8c1bd3dd140f5b511da558cd37d60844b525909ab82e13a25ee722293c829e52cb65c2305b1637fa9a2ea4d6634a224d5f400bfe244ac0de@162.243.55.45:30303",
"enode://42d8f29d1db5f4b2947cd5c3d76c6d0d3697e6b9b3430c3d41e46b4bb77655433aeedc25d4b4ea9d8214b6a43008ba67199374a9b53633301bca0cd20c6928ab@104.155.176.151:30303",
"enode://814920f1ec9510aa9ea1c8f79d8b6e6a462045f09caa2ae4055b0f34f7416fca6facd3dd45f1cf1673c0209e0503f02776b8ff94020e98b6679a0dc561b4eba0@104.154.136.117:30303",
"enode://72e445f4e89c0f476d404bc40478b0df83a5b500d2d2e850e08eb1af0cd464ab86db6160d0fde64bd77d5f0d33507ae19035671b3c74fec126d6e28787669740@104.198.71.200:30303",
"enode://39abab9d2a41f53298c0c9dc6bbca57b0840c3ba9dccf42aa27316addc1b7e56ade32a0a9f7f52d6c5db4fe74d8824bcedfeaecf1a4e533cacb71cf8100a9442@144.76.238.49:30303",
"enode://f50e675a34f471af2438b921914b5f06499c7438f3146f6b8936f1faeb50b8a91d0d0c24fb05a66f05865cd58c24da3e664d0def806172ddd0d4c5bdbf37747e@144.76.238.49:30306",
"enode://83b33409349ffa25e150555f7b4f8deebc68f3d34d782129dc3c8ba07b880c209310a4191e1725f2f6bef59bce9452d821111eaa786deab08a7e6551fca41f4f@159.89.223.6:30303",
"enode://5cd218959f8263bc3721d7789070806b0adff1a0ed3f95ec886fb469f9362c7507e3b32b256550b9a7964a23a938e8d42d45a0c34b332bfebc54b29081e83b93@35.187.57.94:30303",
"enode://6dd3ac8147fa82e46837ec8c3223d69ac24bcdbab04b036a3705c14f3a02e968f7f1adfcdb002aacec2db46e625c04bf8b5a1f85bb2d40a479b3cc9d45a444af@104.237.131.102:30303"
"enode://6dd3ac8147fa82e46837ec8c3223d69ac24bcdbab04b036a3705c14f3a02e968f7f1adfcdb002aacec2db46e625c04bf8b5a1f85bb2d40a479b3cc9d45a444af@104.237.131.102:30303",
"enode://b9e893ea9cb4537f4fed154233005ae61b441cd0ecd980136138c304fefac194c25a16b73dac05fc66a4198d0c15dd0f33af99b411882c68a019dfa6bb703b9d@18.130.93.66:30303",
"enode://3fe9705a02487baea45c1ffebfa4d84819f5f1e68a0dbc18031553242a6a08e39499b61e361a52c2a92f9553efd63763f6fdd34692be0d4ba6823bb2fc346009@178.62.238.75:30303",
"enode://d50facc65e46bda6ff594b6e95491efa16e067de41ae96571d9f3cb853d538c44864496fa5e4df10115f02bbbaf47853f932e110a44c89227da7c30e96840596@188.166.163.187:30303",
"enode://a0d5c589dc02d008fe4237da9877a5f1daedee0227ab612677eabe323520f003eb5e311af335de9f7964c2092bbc2b3b7ab1cce5a074d8346959f0868b4e366e@46.101.78.44:30303",
"enode://c071d96b0c0f13006feae3977fb1b3c2f62caedf643df9a3655bc1b60f777f05e69a4e58bf3547bb299210092764c56df1e08380e91265baa845dca8bc0a71da@68.183.99.5:30303",
"enode://83b33409349ffa25e150555f7b4f8deebc68f3d34d782129dc3c8ba07b880c209310a4191e1725f2f6bef59bce9452d821111eaa786deab08a7e6551fca41f4f@206.189.68.191:30303",
"enode://0daae2a30f2c73b0b257746587136efb8e3479496f7ea1e943eeb9a663b72dd04582f699f7010ee02c57fc45d1f09568c20a9050ff937f9139e2973ddd98b87b@159.89.169.103:30303",
"enode://50808461dd73b3d70537e4c1e5fafd1132b3a90f998399af9205f8889987d62096d4e853813562dd43e7270a71c9d9d4e4dd73a534fdb22fbac98c389c1a7362@178.128.55.119:30303",
"enode://5cd218959f8263bc3721d7789070806b0adff1a0ed3f95ec886fb469f9362c7507e3b32b256550b9a7964a23a938e8d42d45a0c34b332bfebc54b29081e83b93@35.187.57.94:30303"
],
"accounts": {
"0000000000000000000000000000000000000001": { "builtin": { "name": "ecrecover", "pricing": { "linear": { "base": 3000, "word": 0 } } } },

View File

@ -1,16 +1,16 @@
{
"name": "Byzantium (Test)",
"name": "Constantinople (test)",
"engine": {
"Ethash": {
"params": {
"minimumDifficulty": "0x020000",
"difficultyBoundDivisor": "0x0800",
"durationLimit": "0x0d",
"blockReward": "0x29A2241AF62C0000",
"blockReward": "0x1BC16D674EC80000",
"homesteadTransition": "0x0",
"eip100bTransition": "0x0",
"difficultyBombDelays": {
"0": 3000000
"0": 5000000
}
}
}
@ -30,11 +30,13 @@
"eip161abcTransition": "0x0",
"eip161dTransition": "0x0",
"eip140Transition": "0x0",
"eip210Transition": "0x0",
"eip211Transition": "0x0",
"eip214Transition": "0x0",
"eip155Transition": "0x0",
"eip658Transition": "0x0",
"eip145Transition": "0x0",
"eip1014Transition": "0x0",
"eip1052Transition": "0x0",
"eip1283Transition": "0x0"
},
"genesis": {

View File

@ -0,0 +1,54 @@
{
"name": "EIP210 (test)",
"engine": {
"Ethash": {
"params": {
"minimumDifficulty": "0x020000",
"difficultyBoundDivisor": "0x0800",
"durationLimit": "0x0d",
"blockReward": "0x4563918244F40000",
"homesteadTransition": "0x0"
}
}
},
"params": {
"gasLimitBoundDivisor": "0x0400",
"registrar" : "0xc6d9d2cd449a754c494264e1809c50e34d64562b",
"accountStartNonce": "0x00",
"maximumExtraDataSize": "0x20",
"minGasLimit": "0x1388",
"networkID" : "0x1",
"maxCodeSize": 24576,
"maxCodeSizeTransition": "0x0",
"eip98Transition": "0xffffffffffffffff",
"eip150Transition": "0x0",
"eip160Transition": "0x0",
"eip161abcTransition": "0x0",
"eip161dTransition": "0x0",
"eip210Transition": "0x0"
},
"genesis": {
"seal": {
"ethereum": {
"nonce": "0x0000000000000042",
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
}
},
"difficulty": "0x400000000",
"author": "0x0000000000000000000000000000000000000000",
"timestamp": "0x00",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"extraData": "0x11bbe8db4e347b4e8c937c1c8370e4b5ed33adb3db69cbdb7a38e1e50b1b82fa",
"gasLimit": "0x1388"
},
"accounts": {
"0000000000000000000000000000000000000001": { "balance": "1", "builtin": { "name": "ecrecover", "pricing": { "linear": { "base": 3000, "word": 0 } } } },
"0000000000000000000000000000000000000002": { "balance": "1", "builtin": { "name": "sha256", "pricing": { "linear": { "base": 60, "word": 12 } } } },
"0000000000000000000000000000000000000003": { "balance": "1", "builtin": { "name": "ripemd160", "pricing": { "linear": { "base": 600, "word": 120 } } } },
"0000000000000000000000000000000000000004": { "balance": "1", "builtin": { "name": "identity", "pricing": { "linear": { "base": 15, "word": 3 } } } },
"0000000000000000000000000000000000000005": { "builtin": { "name": "modexp", "activate_at": "0x00", "pricing": { "modexp": { "divisor": 100 } } } },
"0000000000000000000000000000000000000006": { "builtin": { "name": "alt_bn128_add", "activate_at": "0x00", "pricing": { "linear": { "base": 500, "word": 0 } } } },
"0000000000000000000000000000000000000007": { "builtin": { "name": "alt_bn128_mul", "activate_at": "0x00", "pricing": { "linear": { "base": 2000, "word": 0 } } } },
"0000000000000000000000000000000000000008": { "builtin": { "name": "alt_bn128_pairing", "activate_at": "0x00", "pricing": { "alt_bn128_pairing": { "base": 100000, "pair": 80000 } } } }
}
}

View File

@ -9,7 +9,8 @@
"durationLimit": "0x0d",
"blockReward": {
"0": "0x4563918244F40000",
"4370000": "0x29A2241AF62C0000"
"4370000": "0x29A2241AF62C0000",
"7080000": "0x1BC16D674EC80000"
},
"homesteadTransition": "0x118c30",
"daoHardforkTransition": "0x1d4c00",
@ -134,7 +135,8 @@
],
"eip100bTransition": 4370000,
"difficultyBombDelays": {
"4370000": 3000000
"4370000": 3000000,
"7080000": 2000000
}
}
}
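Taken together, the two hunks above encode the staged mainnet schedule; a small editorial sketch of how the values combine per block height (thresholds copied from this spec, not Parity's actual reward logic):

// Block reward in whole ETH on the foundation chain.
fn block_reward_eth(block: u64) -> u64 {
    match block {
        0..=4_369_999 => 5,         // Frontier/Homestead/DAO era
        4_370_000..=7_079_999 => 3, // Byzantium
        _ => 2,                     // Constantinople, from block 7,080,000
    }
}

// Total difficulty-bomb delay in effect at a given block height (delays accumulate).
fn bomb_delay(block: u64) -> u64 {
    let mut delay = 0;
    if block >= 4_370_000 { delay += 3_000_000; }
    if block >= 7_080_000 { delay += 2_000_000; }
    delay
}

fn main() {
    assert_eq!(block_reward_eth(7_080_000), 2);
    assert_eq!(bomb_delay(7_080_000), 5_000_000);
}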
@ -159,7 +161,11 @@
"eip140Transition": 4370000,
"eip211Transition": 4370000,
"eip214Transition": 4370000,
"eip658Transition": 4370000
"eip658Transition": 4370000,
"eip145Transition": 7080000,
"eip1014Transition": 7080000,
"eip1052Transition": 7080000,
"eip1283Transition": 7080000
},
"genesis": {
"seal": {
@ -177,8 +183,8 @@
"stateRoot": "0xd7f8974fb5ac78d9ac099b9ad5018bedc2ce0a72dad1827a1709da30580f0544"
},
"hardcodedSync": {
"header": "f9020ba0c7139a7f4b14c2e12dbe34aeb92711b37747bf8698ecdd6f2c3b1f5f3840e288a01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d493479452e44f279f4203dcf680395379e5f9990a69f13ca06e817f8a9b206f93a393da76a3ece2a74b98eaecc4dae0cfa8f409455e88ccb4a0d739197170d2bc6bbb24fac0ce695982090702082fe1541bb7634f018dfe87b3a038503b7299fb1c113a78b4a3a5dfd997ef32e7fbf722fc178dfcb21e1af1f7f5b9010000020000000000000008000000000010000000000000000000000000000200000002000000000008000000000000000000000000000000000000000000000000000000000000000001004008000000000000000000000000000080000000000008000000080000000000008000000000000000000000000000040210004000000010000000400000000001040000000200000000000000440008000000040000000000000000000000000010000000000000000000000000000000800000000000100002000000000000000000002000000000000000000000000000000000000080000000100000000000000000040400000000000000004000000000000000870c90e059b181c6835ee8018379fb9583065150845b833d198a7777772e62772e636f6da0171fc2d066507ea10c8c2d7bedd5ccc3f0dfb4d590a3998a614013326c0b213a88b4fd884826187393",
"totalDifficulty": "6255555800944520547241",
"header": "f90210a0b5950d9087e0e1d3dfc18cae9388fc798d9b3932a6efb4fc073081f5561adc8da01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d4934794829bd824b016326a401d083b33d092293333a830a00caa3307575bbbe59bb8f8a89f115ccb864866c2591d8d3ec3bffa6231532e52a024278597d4fddf58a4b55fe6a8bb7bf0e6cf1c6e75158b3d0d0b1bb9bb16647da01ba199a450390854e88ea5504144be61dd681e805aed7a1cc8bea61d5b61194bb901001902100820000000020110010418b02612064240040a4051001001a0024400e2484001141a422e56073887504031028220080010040501800000080080268040050018400100cc4710800419088000005a840a201c09011040086001260040000021004842021800024204400204a8a1180a8a0002164400202000115420602005512a20410088002248c000020e801803c051202030000420810912000401010604000801543084004414c051048082100000221684005c8015a40100040170025040132500014054900482500403088004000821194124a0b20000200229b9601c230180400802020800000408e49314200190121404002080218814a08085870ac84878814c3083662001837a2147837a050f845be9e82d8fe4b883e5bda9e7a59ee4bb99e9b1bca05714f772f5e2fa3a41a8c7435407b579b7c16397cf08eca605eadd9e63db7f6a88315d0b500d2cb0a8",
"totalDifficulty": "7772265011609637471285",
"CHTs": [
"0x0eb474b7721727204978e92e27d31cddff56471911e424a4c8271c35f9c982cc",
"0xe10e94515fb5ffb7ffa9bf50db4a959b3f50c2ff75e0b8bd5f5e038749e52a11",
@ -3216,7 +3222,238 @@
"0xf344c0cf6516f0fa6617e48076726aefbdaaf5a31f67ad8199bc3f6e426bf904",
"0x3f3d2d33f36ba9009e9a72f3f5bbcb5df5392a19fc7afc8d37823aaf52b03477",
"0x346a89411f090d559ff90e670bf0a385b1b09f117fc9ffa18b09d3b6d5d8e45c",
"0x5bc5689e2b4572b8ceea472cc7827e22cbfd018920beebf5c5b25f65f5cd5357"
"0x5bc5689e2b4572b8ceea472cc7827e22cbfd018920beebf5c5b25f65f5cd5357",
"0xda418efcaa0076f77e4d2f0c57fc32fa67179a5631d9df52d56497113d0e87af",
"0x5a8050832e835202695129f6f384652827e61ea5f1be7ff300183201d8bd6b4d",
"0xd9f444c382da42c310bd2f05955187163ae7b224e5efd44ab95af332e197d374",
"0x9ef2c5bad361117eedbc2adcb72a2ef5eba4caf3a99a0cbb2a65a94d185e48ae",
"0x7e3e089bc46b00a4174d90003379c382ab5bd84d092b9c4db3189d2bdc24f00b",
"0x94f50fb12eed909d251fe69adb1a1f214776cb029d487360b55c3a2abb663d7e",
"0xd3e1f4244dea40d0741255db2dae72103e263390e0ccfdefcbb2da59ecc5ec9f",
"0x6808bf0cb7d4b677527de762b6db8ddf74a1b272349f34f44505912bd95d62f3",
"0xbf7672ac474b5b849bc086ff8455216f015c8fc7660436dee153522ef6991c04",
"0xe79d27a369cdd5455ddbc6bd9158cd1870aa895b3c3971d07f1555b95ed02ac3",
"0xfa9e20a36c11b0dfbf7e9c62872a6423f5460dfd18e447481461a41176678262",
"0xaafb6c407910341bedc82c0f260cdef75ce5653f644b93a465cb990247a32986",
"0x5058e655e0c179e6c20f48fbd08c2f34f9341f6c07972ff40f55bfabbc783b12",
"0x28d2e7c852de8602a764ff693b6881af18ddadd67fc7eff481f48ac20ebf32f6",
"0xf82e09e7916f61b5cfdc3dcf193bf9d535f2b33f93a06c90fbdc78b3aac6b7ef",
"0x626f3cca9e1a9e5e123e34485c8697c758ffc32213a727665065dd6abd2babe5",
"0xb7f1c07f673d903daa61dec649eb12286a7a0568ee36ecfb1023ec41427c8dd0",
"0x8d1d42bfe88dbe4c621cf68d380dc57e7768121a815546bb4aab29b7486da9ee",
"0x79835acd7266bce85978f481aa3c58f3bab9106d72892df8579e472dc95c6899",
"0x0911c9c804bbe9be0aebab6c92f5b71a893f72a9d0cd35a51b0e8cd19ab0c02a",
"0x7fd2eff10936d8d12fd9a1c6d27e77cbec4e48253465eb7e65876134ff60c8ec",
"0xc739ad4255415e2831c6996673f3d02dc79f6e6d6822f7dee23bff5b94833c3a",
"0x2559faafbae0852fe5a1c924f0e4f6ccdf4fd22f483148b3672a3e7b3692b669",
"0xca37f0aa3d375dbddc0b426c9564fe68f10b0a4cbbf1ab87f97b27b44878f2fb",
"0x00ba40205d1bd46ad5b5e73cd5b1f3418bd892586d5a4647ac9a6d158f15bd93",
"0xfa6d25c829299535e6b80af81a2416d10ed6903117e73c656b979a5f5abe3ee0",
"0xfd82d8944315cfb228a8fa416c18ff82cbd8869c3babbd3389dca6dd66797785",
"0xd8834cc29788cb40ec901725419df8c031a13e190756a6352696de870eaf4671",
"0xdf6843a52bf55e0f4404e7bdf144bb17d5c47a72ef9482e712090ac9730a7f52",
"0x4c2c562f835966c72985f7cca89a3b1a7b0d4cb04623dc96e337daa35a2f5925",
"0x49d2afd87e83a04059dbf3ef4e2598b8d0c495ab1cf91ed3004e16a608e910c2",
"0x5be64774739c001c239efae1ce9f2a5706cc6e3054ddf24b03c09358f2f4852f",
"0x678f789dc8c409653b36f4d2015338165d3bc6a73f2a77ebfe438676b8412d7a",
"0xf87c8fbf02d8e84cf72680e6b9a8b8be39fbae9f1eb1047c536d77535494a301",
"0xe2428b952d2c6d60d4925f56b3d8227cd6bc608da2c1b20264befd8b1ad89454",
"0x561a95eb50c663462bb8af3aab336bd745b0571746b10fefa791bc11be777763",
"0x6945f40e3499d2769ceecf499c701015d93fddb607720b18dbbd5a6a2aa46639",
"0x9c35b0367a2b82270d64f11c5299336b21b9f454077dcf7af3b2e434677a31b6",
"0x454dc6bb2443509381e478f1836cb36808e2ecd1a9944072056c292b710072f4",
"0x0c80566f34a46477592560a883a9c01fa393f7a2c9dbb28a54e46a5c017e8596",
"0xeff6a1255090509eccfdea2e591516886c91191f1f02eaae4808ac95009086fc",
"0x37cf60888e5ec75841e7f0533feef7200185a1c9f3253073216d83923c864829",
"0xb169ebb9e418809a96529835bc293782e4fc6310dba450afe3e95a7abdf7cc01",
"0xa3c8d5c71ce0477a247f56bfe95272ea07f0b7f10a8526b6e3ff9a8de8faa9ab",
"0x2bf18db4ee84bafbbabaf05d1d4d383c0d5fc91be6ae902334496996eb3a8e48",
"0x6c116f0d5809a2c28351a737ef3dbd1808685e1fd656e37df6b6e524aa82c918",
"0x21e2c8e019c687fdb360c9bdd4e3a5133488cd2e0365aee3b823120734aa6f27",
"0x20c9a1db9de894ab4f576265da25f391b32c0805c3da76fbfdd0aaf300f88a39",
"0x23ef1f43af87be7396449fc1f89d9766c59e8adf2660812293c65a27482ddb8e",
"0x04a82d3a4a5e7f2507688ecdcdc300d7fb97aa8be92a671d7d42c0b60fa4532b",
"0x99e204c42afd6d4040faad384517d99bd0e077b03310d32223234d2251d6a07c",
"0xe342c0c4295665b9e25773fc9998e18c460e723d0a14efdb59c19b27b9c7011b",
"0xb654b1b8ede0d54a605cda54b4635d2b3c2bb8efd01ebd416e52cc87b590d4f3",
"0x5daabc41eeb6de98336411a03ec0323995e81549941cf32b7e15c765d1b7b39e",
"0x5103fc7f0fc6df43fb081b580bb01476f2b1cbde73e4d0f9d1fa6d8427fae789",
"0xe2ecf5daba51d2f7b22106033fc43f956bd1db0c5ad02bd941bd3d2b96ca21c5",
"0xf152bce5c6d1efb7e22cde72d6b8ca37f556ffb686a13770c5fab46e04837c92",
"0x306007d8091caa5baaa78643307f5abf9a5f03996fc072a9016ba6b487b2017c",
"0xf57308d0c02c6b8e2416c070554c7e29911fa84ef4cf2d934e2322ca262e987c",
"0xb234fe7433d7fd71fe0c6dfc834e4bcbf84a261b95760a6c4eb67d222b9ff392",
"0x753059f3405f60da3aa7cd1aa0cbcfa4d5ef4f2a6ed34b853b2c5ab2181fd383",
"0x096c6630e821816d9f4bd83fbe0ccfd223282f34aae5a49f969ba30b98c324c3",
"0xd3eec9dedb057fbc839c474fc99cb54d89f3f47d896e06e758c98f1cd194b61f",
"0x0d44cb2a83b9a3fa18daac280cf08b46cc637d705488fd9400cd7300475d0a1c",
"0x2e37a3036db99c4cb1c135f5ca6b527fa13b2e80ee421805b7be5d8b16983602",
"0x381e0ca505308b7d3a083e60b0f9cb44c89f84942430ec9e4c5571796ab6a8eb",
"0x90b04d35906c6f5a59c266c3bce7c2b63cea1486f714e272592ef9ecab25b0ee",
"0x9cbea70e760f2ee97537d058d57f395886a2c3a6e769ccd3433b797b8716517b",
"0x4e2167846e8d6f0f6495b5f1443f59bea143b63f242e40186fc6429434d1136e",
"0xcaa0512739d000bb9783fceb46d0427098886e2b7f2e1140855f1a91f843d5b3",
"0xc14df4e379e84591f618e60b5953aa6764146c7822aa1f0e3c2287e20753985a",
"0xf4443154c04fd378b2c3812fee84b774b37d6e12778674403fb5c995379df866",
"0x1a501c2733cc138fb6ff3716899e08dbcd4d75edc18af8972e8a749e45eaf67a",
"0xfc8cb80bb0d0fb490f29aae3067641eef72e9225c558e7e299e0796a2086969d",
"0x2b7895550febf03070485c02d521e7ddd80b94b7fe33a60b7d7ea3545b13e7dd",
"0xfc4137c3cccd45050b5770a40b2f38c43c62b70b07d17bb6d762b405f3d753dc",
"0x86ed22bbbb9fc6600112b91601af4fff56d0ecbe9b3099f91d4477cab8e300f5",
"0x2273a60405ffb04bd024d880c79010f18d58e3c8ca0dc82795a0125364679fa6",
"0x00dfbfe7be3bb2116d9a603a01ac428c0088a2c1477810cd5d3be0d1bd86beab",
"0x7acfb03315585c79e2a47dbe847d24cab0785791f6af7f179fea4f9d6ecb0e0f",
"0xbf6a2e20ee1da5eec12b792bbaec2531e20766ba54bac423011c1057215851db",
"0xb5e94d1e3ba7363d1d79fb62dedd0b6c26b0485052dd64a7093d41ad2d41b890",
"0x9b0cc26f08708814960de8f280ac26d8ed5089a19bcbd2d765059306da22c196",
"0x22d8af121d3e395d3cb4f6ee43c06e6292f1b5ffda672d2e40dba69a2885f5ac",
"0x04bc174272a57189d76aa17de0f76806e8481f4903575ed8c4df12b042637e0e",
"0x06ebd2b6ec4b80280969a92726df5f9cb12d4288b60af617b7040876116656d3",
"0x0e9430513e63b5173271c89b1c91af0b4818d5d14a3034e1228c56c94186a109",
"0x8dc5422ba98d9e58112b052a00d4b82b1db32e22dd7ff2d845619899bd47f277",
"0xde513d40bdbb1e4956b468cece598d77134626a900066b92fb2ecd6fcb5f81c2",
"0x90746299ec75af1eb444ad14ac666ee444aa020fac3fb57796516d8772ec8f45",
"0xaa91c30c62b24f943ee1eec7586b682289541c0355c2726e44424da8686ca24d",
"0x76eb68baae9fb7ed126097f93842dcadfe6e7188d61549d9c0922a9b3ef8e80a",
"0x5aa5b4045e7fe71559a6e93f4a89b135eaef38b9a7f3a84e383ab1ff902ceca9",
"0x504b78f8fc3646e9722e96a5e97d99f2560d4fa3337fa5faf1cc8c8a05f3520d",
"0xffd7a5d7c3b21e8144f7678a9ddc039cf85eb32b09000a600c9f12aa7d6083ed",
"0xcbb4010000e96ff0b50b9627dae032bd50782ccbd51af8af7cfcd6cd184675f7",
"0xb96fabbdd02371bf4a6a0dc00e3874cf43d47246e27163c910c141b6759a4249",
"0x7358419f4e994ff296a37f2e88b238b3de6ba73062073c9467dec52a2df64422",
"0xca90be9f190a1fd0548becfa719a6e4763e92de0e4da4283a33b5f7d2886b425",
"0xa629364f7d6329b008d9c6a0262327bcc12953aa515cdb7b8817e7fe1d746d46",
"0xc5167bd8cac1ea6d14f305c9d4fe075e1875d96353e5236473b6daca5ae9b4fe",
"0xaf4ce2490e9504172a4393cf14e691e947c86a0ec7b53416384a5832b213d6c5",
"0xbfa4853ef2eecc5d99a90e1abfef37ca10c1f823c1d0ad59a1bb19339861241f",
"0xbb5a6584cdc7e4d06ec5fc1514233cc42970f6c332c3a9590978dc9908e58c0a",
"0xe69d7a0766db411e504f09a8f39f0583b2869016bbe95f21dba432bbe8b88442",
"0x89cf4caaaf200881779f5fa6da8ae91ff1c962045dd0622b5ca65c830d3a9d4f",
"0x82d66c631f4c4167e5301d896dbdfe24d8245b1de041fc85eaeb6e35117ed9a0",
"0x957907bc93879681d8682a188622f9bf2c7d2595dbe3e2e34bb01711cc4124d1",
"0xccb3a3380550586696abd3ac267e85c7516b2b682b3c48f66aca94d57500f3b3",
"0xaf56d4650406e70748dc860a7879d8d522599081f8e7011056c976b860703e43",
"0x5d96ac1d2dff8a054d880a44f5d45a1bd18aba29085fcd633b0608351ff1876c",
"0xe051736dad8b9f93a8f1c13031c2b63249925e152685a2e7ec188ee089861b20",
"0x0db8987339e1fae41af5f08e6fa15da5fd80de3431b54e82cf8edbdc792f870e",
"0xcc99097678110af2be8dc07da8d642dce928b7d9e2728fe6fef1fe2eaa81a72a",
"0x2428c1f94ca57c7913b011a68281eee9ee4855e4ed2c97e34a370e649b21acb1",
"0x501ee9580c89b1f67c5b3b69ae5fd1f83852a2f9330f53565bcd04d8a7c0b776",
"0x16ae47cfa19e8046f93a579fa2557b17aeca7892fc7a82b6d539930c8b7c95c9",
"0xda62590043ca70c1cdfc7969cdfa853bddbcef0ef62aabb9f372805322511014",
"0x481b4aeaaa60504c94dcfea966840b381db85183c34cd25b4857300b5c189003",
"0x035dcc47f8670a9f648bcb0232e42fd4876243a7a3bf737b88d723ba187929a7",
"0xebe9bf09e3577865aeb341a06f67bb6e607f10b04ed9f9d733492a9d0e9ceb1a",
"0xad5d85b58af6aef7f81bd6b2407c6e4884ec82b2ae2aeaa24e379a3d35902375",
"0x0f0dd63d7c6c284659283825624140b31a3adaf7cdbb2255faca443e52ebfe84",
"0x40079be1c9394e95b4823895ed380d79333ca2085aed2abd0d766f84d21b7b42",
"0x7cc40ed01b436ce225a3f9c5c2bc7f6f81aee40bb83a54bca2fe899b15f3e2b6",
"0x1b6356e1a83ca5b0eefda1fd62fa959b118d2a19a6a90f182a53414b3fc7f9f0",
"0xae4a71712cc96a5b30b45e3b92c339c2e975e4ed683f4d1fcadcdc121ff7c6bf",
"0x226f6d8c71ec32c5eaab6b01c0fc1d00ae95e60b383d09560e90549b79eb1447",
"0xf3dec779841c9384df93bcefbba8700a292b570b29d286a7c9c5a442b4788a20",
"0x63ef48e80efa45383857adcb0f31076393260cbad058d1938345ad13faae50b4",
"0x7ba213077278bd01ff874e760c3a0bfd4e5cf65bb18eed8b0f8076d81e16d05b",
"0x92eace1cff535745aae658d9ad90c372d41ce318a20716ae3b2c0958bdb48b33",
"0x76b2607fae8dbe532d8516c4c33e309f2c0a963bb68035f4361a2859d62d2daf",
"0xfb0c46a588438f4f217e183ef7772b649b5a45a318253496b01676253dedb568",
"0xd6dba12c700a0c5ca59b9d17f628066405de8e4399f92472cccce2e25e70ba4d",
"0x2390984444cbc9de301adaf0853f4f6ed2fb56fefeabce717306cf1e40da9618",
"0xa6992b0db82c50ce623febc7f16c8a316ce262255de11ad1914316b3d986fe66",
"0x96787cf8e0a70ffe5b84151ae8219362ba6d9465223c23c135590fe4cb9ab592",
"0x0b4e7fba24f6bed3790b8fc289e2715d9a3f4235c9316dd9d4f889f710a65ae4",
"0x49585a7af98ad032371df01c458fab933ac399ff4d2d35731a2d86fc136b5eb8",
"0x517947f22d8d56466dbb496f3e98fcd01864176af19be2112f2260e55bddbd58",
"0xbc0aa38cbc79c3557fa81e446071e6d531989d088c6311060bff971e430fa36b",
"0x5a0a88750db7ad3190dcaaf5bde2649e6fd99cc5b05979fff84ae689ed47b51b",
"0x9ad902d6abe59faabfa5fde485c2fe16f6f3e2c41aa8bdaba021e94a3c292dc8",
"0xbd6e7bf8b466940b9f619e3f433fb68c68d344974798996e454c925e2628b1b2",
"0x112b5e9d438eb79aa9d7b4488b6aa9f7edf7599dce805d8781510302da45e9a2",
"0x881255ae2cab543ee6b429e1b201c197cb35051d53f881cb51fc58f9c760e51d",
"0xcf56a4ed6b771e4905e6ba888390526eba8ba3483ead7585dee00c0eaea58cbc",
"0x12705d1609ee34bce0c82bab9da730452db57863371f5b84de84c4c135537237",
"0xa9abc081a75bfcde86ea830483be240cb3bcfa72ae34a16dfc35cb964dd7a457",
"0x65cf0c91f8567d8567ac3f00ddc19da0b916179266c7e42768fd5d7271e91092",
"0x07f15ccc91a47edfe9b67a79d05ed1da1410e3ac16e4a4c2147c3ef0de26ae62",
"0xb44eb7db9e4dbe8901bfac43ad54ef27576a3ac0c78218138715e03b3f9a782e",
"0x9ef66d20aeccdea66ade0fe8ccc0a85dbe9498305509c8919b670ae8e854a450",
"0xb10a7dbd4857f910ae8cca5d1b7a85923adaad65c4b00d1255813efef00c646f",
"0x20410bc64b58a1c221f7b38f293700591692c73a785d94a5548b336bd5db10da",
"0xed7176f398eb75ae6c1d4348d3882aade99573f8369ef00e809b1ff91b5a9191",
"0x35f06785e1f7e362682e969586322b1588624f6025692ca87db21cb199c382f4",
"0x9a6b06141660e2bd106f6d46afd2f3e64d72711ffe6a9c495887bc3932f7934e",
"0xd64e602a976e86d3948d0f4cf0b8f13a63558d5bdea5affb36eeb91007a653b3",
"0x71050553ca1da7234764a5486e4647fdad00652560eb47102c1d0fc039b4b63e",
"0x53ba43f79de58a957e8ac0d998ecb4999c68f423b1769d9e8df2405259dcdaa6",
"0x17b33d269609bd90fac57eaeb298bf489a3b8a985cfd9998d297c7e0628bc84c",
"0x9b47bba7ff6b283e76100e36246c0dcad836ff29a674fbb1c6d2f58e38e29f92",
"0x262f8387628500fa26e2cdca6b5938262c8fa81888daed41a09d880df136167f",
"0xb6c6d0ed2824a9818fa9476c99a810cf33a0c1fd3f2fb6b7a44603f2442525e7",
"0x6f3437e677dcc60cb587f6100eaa2e7ef5c76b57023b63ae61a6a8ed2750512d",
"0x4bb5aad759b0c6390cf4dbc6c32e78f2721c2be3f40ca3d11de880fbe90e456a",
"0xfbeced3e048769a07bfd50010de59cd24f2c534234abc1ee7f3fe9d7e3d9f482",
"0xbe871fdd431974370e8156fb9347377d1bb07990ff60296abbaaf2c38b4059d0",
"0xcbdf566fceb5adc33608a847c77b6f3501cdc4182ba98e04e1f7e8b8ae66ba5b",
"0xa06a5de4aeafdb5981ae02004a38c524116ebe557dda83abdf03141e4dfbd4cd",
"0x73660bb361b94287b85b6c41394bea5447e8e3ccffaba1772a765a8fc0fa9b27",
"0xf80439908d6173fe4fea42aeb07ddea2c644e581373dbe9dbb9d092f12fd2125",
"0x32d935f078bdbb9ae693bef4ab8316e7da701edf42f156568b639193b0ea24bf",
"0x8ef0b9f247b5b9d9e5a813877d03a518d1f06900e2fdf5743feda3537ab3777d",
"0x59f92f14340a77b94839b6ea61389691a6fa5bb2ddc5b2241e873b066d37fc61",
"0x0288bdae302901b701df46263f4b12bdd6bd76436c3ae1626936e1b4bb631ca7",
"0x2fcfd5a54d6bbe07d4063ef64fe4b164bd8cb89a8e03cb5fabee98bfef280e43",
"0xd8d1169b3a9aca48f167cd43e9617d3ecc74658ea61ace251beb370d3c05f86c",
"0xdb2c3ef7b8a8a18ad3a8ce7cb8a6be2e1ef2cd694fb3f1b2e0732f2e27c758cb",
"0xfbffbf26f4756bcf1cd217686597ffe9c1282a70b279c8b01ee5076ec31b1c2c",
"0x2cf1404cc8482698eccde40185740753c47319b1aa30d9204fe53922feaae9b8",
"0x553a9e079d09d3f8251947ab37891027fc3d041246f2e43f01321fc3523b508f",
"0x35954f131f276933c7ef3938f35c6011806c7dd637fa249451379003d740cbaa",
"0x1ee7a658e8a2368391810667c35b8b0e321d381907e2312e7789a3cc7b0341aa",
"0x4cb3c908526bc6ccbdd39d855b3b3649e59eec4322bf48f07598d98240101d0c",
"0x996eb99b0cb27fb5ec07873af17933f377ed2c131f02926bab74341c475a95a9",
"0x68a90de2c8ea439db0417b77a23eb66e7421268335be7e4f593a52b1093681fa",
"0x4325afa26bf545951a73c016d866f1c71a5855ef52202b7ad0c54dc23906936d",
"0xaf5111b606b0e6d699bb4fb61c8a0592af7eb45dd7357b3e896ebc3e798919ad",
"0x4de03498b2d532a8eb01a12edea2a15b8a907d6d23ccfc380507a5d294abce99",
"0x5d642ba55dc046ef1b84fe594ae2e46b844cc6590d580e2591ba9c0c1c942c24",
"0x8477bb69d6d23ae4f45e56c372d66f42ba7417cf5107136081b5ccdd208b1545",
"0x924f641a7cea7cf76ad8920b9ee7c39e74304169571b028fbe40e6ed255a90f4",
"0x4b39dbfae8635f286284009a0213c6d267d5f48485a25f2df855c5cac5e7d6ca",
"0x61827bf3a120a079111665f78ba2e5ad15943492e4ffdfc708e235f44ce92842",
"0xdc0c0dafc340277058a6f8e03bf4e5e733e05fa638e2a8c4ebe4521960bbdc20",
"0x7038c52c4e92d6ed20b19481ed63d432325f94ee2a3ad4c0d5e25fafe5b32bd6",
"0xe1f5a1123679ce45005d53672018e16648fd44c4a2251802f295d29ed876a0db",
"0x173a332438a181f6cf1773b2603556d229254861534f34ad65b2170b0cf47fb5",
"0x22a19ab1d828b5d05057f40f5cf798e27ec068fc584968987b7bbe440f52ac84",
"0x15b3e3b6e1de6b8a407ad7e325c53c4bfa9e73f23c6f3acedcdf68d759f73860",
"0x0347a8e310cbc921daf829002ba0c44399c4dd7c930ac1fca203272949e09dc9",
"0x9babeec2d3003ddb15fdcb89ec7b61c0d7f795145b58f55a17810447d5afe5be",
"0x62c14ff2ed414ede1e3fa7fc2fccbab0b8f284b9f38127823c3dc9eb55a08b44",
"0xb301a7c865abe0f42d9444b5c3b775dc6ab4c99effbe8d54cec29745ff4af01d",
"0x3fabed16363d78f34a13ca6b99cfc592671668d4281c5eb3ab07f4b9017a4ebc",
"0xb14af7c7331f7b5518e04bc31c8c2b63d9d497acac405bc22e1ab2431105b867",
"0xb7b90e8852ae132f858cb98635fd6e1267b141de7da6773ec4092d5c83def42b",
"0x1a5c8bed811fc72de8146516cc4d668fc4557baf48082503a2a5cc0e473b6e08",
"0x0060158b427095081a18fc57c4f56290873188960dc0e4d2b16cdd57b5761100",
"0x2f0dd546f4982d47b6e04ce282b19be8e165b5c89b5f03c1ac8c6d8be2af2b4f",
"0x229d31a50f46afeda23a2535c8b011a827383c981e7e59f67622359e4b307a10",
"0x10da7b2e78954845b506b9672ff93d3a617f187521116d442b364a1178f10f08",
"0x511325c034c772ff3a61b1c8d94e3c13f19cf1e93523a8924666833bc1e5b0ba",
"0x3a8b0251b2f7c99774d0a7863388422723b5405fd1db035ec5e30df90306e251",
"0x885c3b0507dcdd6e1cec187cf3d97ee12ce101ccf63d99c1c7fc77dd57ee0439",
"0xa76f1fdcf546ccb62f86ec7b85bd8eb662f696d7aed2f2ed2b3c769af90897b2",
"0xbec073b15a35de54487b60f92615cd44dbcb5a5fa9d1bcca2ea31437b2e45756",
"0x662013eba4201630ccc4a9b6fb1dac36f439cc6cd2b6f295c0260713d075108e",
"0x81b0148f7e690088330650b01c52831d50f3d30ba2fd2074bca5513c179ad30a",
"0x01614cb8f2f2bd6794a20513fffe99a36eee526b7145405db15f9c2ede980093",
"0xd3d8664a5464c8f681e37265948c4046e373843e602bee66c8583277397db258",
"0xdaf61a9e7157100cce6ea5bc405926fd4190fda7354edaafad9b6e468bb24422",
"0xeb05144e032621f7b1792dacbdc79c044a268442a9a9284ef1b2d1d60cadb29f",
"0xb1916e802e8e3eca2570d1a190d317cf38cf9f6b58119b65ed0d60b9749461ad",
"0x17e103cb26e7c5c7678a523f99f90387490bcca00916190e7e1b38274c52a375",
"0x105b8126ef037364f46a44fdca4fea1ec4c099de5a05918688249fb8844dfea0",
"0x7ca09f536bb9f0d7b3809c086918ba4750722edfd47153a8412fc761eb4783c8",
"0xc988cc2d92dcb85eaff8cf2f70ca49646cc5aff6c8e10b43a0581193137d3b92"
]
},
"nodes": [

View File

@ -43,7 +43,13 @@
"eip211Transition": 5067000,
"eip214Transition": 5067000,
"eip658Transition": 5067000,
"wasmActivationTransition": 6600000
"wasmActivationTransition": 6600000,
"eip145Transition": 9200000,
"eip1014Transition": 9200000,
"eip1052Transition": 9200000,
"eip1283Transition": 9200000,
"kip4Transition": 9200000,
"kip6Transition": 9200000
},
"genesis": {
"seal": {
@ -56,8 +62,8 @@
"gasLimit": "0x5B8D80"
},
"hardcodedSync": {
"header": "f9023ea00861b3771ffb84fce48b8ba3c54a09f81e91ccb38c401261f06d370098889a43a01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d493479400e6d2b931f55a3f1701c7389d592a7778897879a071cc81d58cdd21d1e17f7389e55c530cd9f94cc15bb32af6477320682327dcffa06090021a7c09ae5e75e443410ebdb76de04f1eafb0ab910daae96ee6eec560eaa032510bf257dd03b11f3b4761b94b495a5b5a18cd6eb17c77785e0f46e2ffc882b901000000000000008000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000000000000400000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000090fffffffffffffffffffffffffffffffd8381e001837a12008306697a845b83bd6096d583010b068650617269747986312e32372e30826c698416e0ef58b841117e2088e2835bf2afcd5d48f42b3bf2a1f33435f21f089ead2a6bae7d01c1486e645b460bb3c726a827ff1eb50e0579f3410563bae090fc256cf1d8d594b82100",
"totalDifficulty": "2845866505151538604560067685603735513869853136",
"header": "f90247a0434ba1ebe3bc9d9d12886f616afee0eb785dd00d78aa64502045b249c8a3cc33a01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347940010f94b296a852aaac52ea6c5ac72e03afd032da0d1c1137be913218840c44bdb8f5709a59d8608b05be0dc2401402e11ff1bf2ada0eb50a864e57e19bd4c8499725a6a3841f706ff5d743e64f1d06db903aca62eaba0d4d77727f7253c109e6f048fa5b1583b6c2911ad77f7d59d8947d38a96a67729b901000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000090fffffffffffffffffffffffffffffffe838ee001837a120083013502845bea4f689fde830200088f5061726974792d457468657265756d86312e32392e30826c698416fa93dab841c80b3e34bacdc91e01f6a76c64a920a65edfea86b84e0fd59c6d8dbd7758f2dc4e3cb446de8dc07f3d8cb71375a6315f401cd1ce54c56694e4e01f7e774555e701",
"totalDifficulty": "3135776192732436705400032023148164213445066062",
"CHTs": [
"0xdb9557458495268ddd69409fc1f66631ed5ff9bf6c479be6eabe5d83a460acac",
"0xd413800c22172be6e0b7a36348c90098955991f119ddad32c5b928e8db4deb02",
@ -4214,7 +4220,423 @@
"0x3ef50c81169af169c100f58f3afcb8e2f926d957b2adbaca8787be5d4e8d7233",
"0x8783eaeb56ca2d7fec84e0e272b77271fdfd6c14452a2e1dd83de770c5d99a1a",
"0x861024460895378ba100c5d0c05e62bb6cac8b21ae529ab5cab39eb6c6cabd90",
"0x1c741ed9eda60e5ac585e2f48f06fb988367c2c40a0d8111bb04b260fe44ec6b"
"0x1c741ed9eda60e5ac585e2f48f06fb988367c2c40a0d8111bb04b260fe44ec6b",
"0x6051d77e0596a911bce132c4bc12be2ae5cf29d113dd52a41b3bc166861149ce",
"0x92c049df5ddb238644015d4e039e169614ed1d926de070952f2407912906cb4b",
"0xa897567fc1ee9437f2876deb3de2b11b8fc00aa07340564031573f0351ec556d",
"0x3e54a8e15218db168960d28369003cdb1a76f8db19384e9e2696ae66a6693d6e",
"0xc5db7ade97cf28f8b61f2c63a0773201ba64f37dadc19c03943b6772aa7a1a50",
"0xda784d1bf64b7efd06558b90cd2436f3e61dc0f7a8370ff92516ed062f461091",
"0xa1d10b0a36ec5169d2df740878d051bf4d38ebc5dc04ae5558daaabc2bfa1471",
"0xcc89a8be2ff74a7bb9e967cfea3cac067aa84cc455a2fdd5449577b52a2b4ff3",
"0xbd23a3e6d3198d81d798c2851c36b954fa6f359bc8fc6e04a0b757e3d0ba053a",
"0x74640f825b9d9f95be69763845aaa0269d3a6ed5aaec88bfd9b5c4139ba7ef41",
"0xc01a29e41af3cc0d0a13ea83f131f3e4828ec3e83dd2fdf9739c139938dfc2b3",
"0x832509e705972acc7efe91475e8d76ac00a12750e194847093825e6c4db9e83b",
"0x63139d1224766ada1318613b9ec5894308efa2473e809d9e37c8305c6965f2cc",
"0x76547e54dc59473093c3fcca1166307cc7d0f4f0e8a35d850507bec216b76476",
"0x3a6a14785272391982cfa690762f5b2aeccc1dc0bb13eab6b9fcfd056f40703a",
"0x603e32b52795c04416d800b6a936343aaaa09898fa97cadc2b157eeaaf3bd6f7",
"0xf241102a3d3f3a9fdc5a1a586b16fdce4280c6c6da04290541eb3cb9c28c7325",
"0x6db6de041bcc7c00104a21bbd487a1e1ddd5e4953f7a503aa992d68a8a7bbc43",
"0x8377d795c55eac07c0acca674e775ed7d8eea35867990c8a776f40965c9ddc68",
"0x48d62d562279641043e405f4d7fbd76050d773103871c5de2c8acc25992db502",
"0xa9ef42d314e15c419537e022753ab46d41318f1fa8784e4363494f395eb6d236",
"0x99572f567eb602a1d9839bd23b41562bb3782eccf9a8893e54b002e685ab378c",
"0xd8cf2fa2291efed46c1a36e1b8837be62e86caacc380aa6397792ae8baf8f3a1",
"0xed2e800df1acb7bba5ee6251592c397a604debd7b0bfc28c8b0002dc40faa8bf",
"0x9ccece195d9e67e318f6d2952bca9486d09f4207c6d8be266cc0eebe41290920",
"0xb20580a5c96c25bb59e1bae6ced3ea5cb69d903f64e648bcb38b799141b3cd5b",
"0x1488647e697452306d2744ca6c709007cf75e2e37da3c7c05006211ba0720824",
"0x009c3dfc5494962c77900fb8da67d7bf2a2f4b855c521b9d50c4aafb1e0735bb",
"0x482428835dfff3ee1da335b36ba3aa1969fa35e89150e5b3c1991f28272d14f5",
"0x6a972044f2076e98833b243c9ed18162d96b46823170ef7c20b1a02d8bbd48c5",
"0x676242effe0fae84110c4933beecfe5ff549b439e54ff5a588add229329e5365",
"0x2441bce77589ebf8019fa8ae870a8529479c6eaa0fed7e0fbd3cc7439dbd4a09",
"0x0b20c25d2c6897c1a8dc9ea1364d3c72d33c97b4d70b9176c3f0a1e3b6ce08a7",
"0x685aa4e279118f8326a90c78e9896e40d9baa62144e2425887dcc704106979c3",
"0xabcab60973f6bc9ec3b596452e7434c4dc89c55c8eea925fc0092d1103c6f86b",
"0xcbf44f106f3f2c0050906b5e344ad22f0e0034067d35402d447311d254516dd0",
"0xcaa67796a8ac69283d7b6304181a988992130ad8441d47b4fdaf236686dc1caa",
"0xad06e6db230bd0bfaa0df59d1ae517ced29d5f11b34f76ef9bb6a73407128b59",
"0x93ef56a4951e4e5c19230918b1219c1f07e9356363503c1410e71486ed338f87",
"0xc6fef02b5bdd4909906c40cf5b999fe9e08e4c0d8bfe59d3c9aa99011136f780",
"0xfed633749700ffdeb0921a537a215ae31c25b85e4f80727376e50c247b4c5a38",
"0xc39d8cc15f4331fb7db2c24ee1163bd164e81ad2ebc43271f841fb25d03835c6",
"0xb13962dcb364ee49e2d0a34dd1a555fa8df363041504ef1e987ce78646d64146",
"0x97e0d3047e2151d53cbd1358da627453558362c6a830910b33f241848b20cffc",
"0x587ea98cbd1da50c0af1986f6ee5e676658c06442e893304708db831fec8e804",
"0x0a1d21212d9bd85a1a39e046c897d1dafb496bfd80762beda2fd3eb1cdc72eb9",
"0x46aad83612f04e7a51fd642de742f713601992e58de4daf24148a3e6f3318aa2",
"0xa65a8a9ed4fb28fcab6ee3af5df4647083c2e735fc652568759fe0426e9a294e",
"0xacd7ed5525cad187f053ba98487cc4abf24f76c8c0e97e71a696d553a3a41b7f",
"0xdcbdcfbaee764bb404bfa5261b5037b9c7ce567a3c1aa9f7280071990320da18",
"0xf195aae79a232b2170a98602efaa2efbdebb3c40d2438e63bf0954e4dc779cb0",
"0xfbb2675a62e2e67baf85e56fcd4cdf2bf89ff7905952155d3cfd4e625fb674d9",
"0x5b955473a35f6b0d24fa8be8009734ecee62f6c4bcf0cafc2335f07c51752fb1",
"0x66f37b268338f4ba1e21eef6884aef245bc36935be1f5eb14ee1d23618f00f5b",
"0xaab809ee86773263043201b83bd445d98a634d8a6da4c389b2336f68381dd481",
"0x509fc38118491458e45c7e8ab1d60c687f50d85fc1c0bf104b531a3b352198ea",
"0x20d1e4f38e83b27b77d55281af40e9f96be098fdbb90730170638c88ab7e435a",
"0xb33711864d62709a98f81d9c5f0a301bd5808d0e8ecef1063c97347af754c8c2",
"0xd69fd6c0fea478bb380b948f5b054f91831cf26d304991d40ebdf0b00a97503c",
"0x87157d452bf57e617ac1dd2372438b0777b83f6087d8223008d823652c634882",
"0x9c54b0172ae0223e6b23f7e000cb6887144e615efb02c74596002dc26d43eb5c",
"0x5b0f87baa8e40f0a2bbc1a76afbe0b21b5e8aae1443f0d38c3ac55c5f942db42",
"0xeb68d93e19860fb9fb76847080edc345972e29ab1ffd417ae5727d3cec79c0eb",
"0xd27026033bba2557c79c4babaf669a399fbc72a2a5cc06c707e24eaacee83bce",
"0x420d887bd82cccac29711c52f4d362b6a7d854e694f8d597d208d0a094fbad8e",
"0x02ff085c6c3c47879a91f511ea4c54a214af8160e07dce8e82a6be9e8299e237",
"0x1f0384e0afaf47ba59aff9f224905950768674c48de0fb0312749b16edb0a347",
"0x55cefaac814e132ff335882a366ea6173bc21fa713e93d8ad92260c84cfd2d85",
"0x58a8dd6e036a05a937a7053be916c0e7f719f2a1905186e7586a9d2dafd5a1a4",
"0x8714d03549461e32a467cefdad60a96788c97172db05c18eb9debf6e6a4d39e8",
"0xd141656c1f57c12feed31dbff3817e1d2af4e1b5cf6aa75d1bb29ea2c0a3ae69",
"0x5ec365177e19fca3c1063e65a9342008aff04ba9d03d53837b598b143504b97a",
"0xc620e23ae73d423bf2628a3de70b1a1f915d80173e0c8d1443a44b91400c5a8a",
"0xc72c2356ed53eae5a4a56bc248d9d2f4e9154f1404780b84781f357cbc7ad2d2",
"0xe60bfe30e5a1a9457ccca65810675e129948b474f391ce64d270200be7ea6beb",
"0xf679887baad8f8e497d60b015156f194b94fc30c6cb1f83fbc4575e99b95a8d5",
"0x654463146799fcfa18a74ffe4f2423fa04c8747c16b789dd24da26d0338d381c",
"0xe8b5406278d9e4622d088976af8b5e6b14cc146a9530c862a42fa5566a247355",
"0x8ecb4735132f769663781f96fb531115190e68390c54e33b250db874e90aebaa",
"0xef13bf38c2ba993c9dea5777e5db348339273d0e6dd1f41867d3b258f24ac4d4",
"0xbbbbeedf7276a857c513f4ebce88e3b531c99cf206eacd1c6c29d3cabab45df4",
"0x89cd50cde2de3ef40de7502241b78e664de53dd4a5e2ed85db62c55be0a4d8a4",
"0x0da2cae061e7dff539c7e39b0b9f63af3217f1a51bc597db957b6a3972cf7186",
"0x57aa87a6daab3c65519de7c1c1360ab33b830d46f169d4e0d3c38e7dadef289b",
"0x85fb1241c4110b4f3a6c197450af8ac47bb24d531219f6cefcc079717b208c84",
"0x52194cfba6bd7d5eb8b438054fbaf5fef387cdb8b1a7ebafe44cdcf4da47b1dd",
"0x7d24eb47a1310f7f4244e825847f634fd4a4224f695a3609c5250dc6052de6d4",
"0x38edeacb93b10653624f77dc05063499daa770b74d6b63ebe656be5a3630b7b8",
"0x4e050f7b9d73c1aea3ce60c8eae8e46b55b6d4c1cd1eae22faf982895871dcd1",
"0xf22b284ed4d97b7d3553600388748721a328052daaf92a58ed5403fd4020a496",
"0xc5fcf858d9a9748045fa0ca1271ba5af780a788c51d693815e0490671be3885b",
"0x2342efdb88226e68173ef84060a0d4dc6c8aa9c9431883beef4a5588f3157fae",
"0xe1599bf452eaacb8dcd51ff835a9ef5761dbce83cfc719813d6a10772ca5fdb3",
"0xb754797393b3216778ea6389361ca5951f365ae4e7ed99ed4cd4c9c76ff442d3",
"0x3fb5f9f3754764155296c6ff4c469109512264c603ece7c78c1231942bb8ac35",
"0xbb347d23c7d703cd2801e2763f1a6c375b5cb2a666ba137c4d6442c3f94688dc",
"0xed3806645b55fd7027dfb7f5f796933049ae558d26ca695a01e1b11333f5e453",
"0xeadb7740432ede4f90c0bc490c15fb377b68de0fc1ee3a56e87e21e7771211fe",
"0xa8c0e907e0b544e7fc3116d47e4cdfc8e8688f5cc4a67cdf600f74be6b79775a",
"0xc6f6b94f2fb4c56066e3c722123b8e85f80ce8baa0427b62c5a2ff937702c481",
"0xfff0b94553a7daaee58a7e15daf9845d1a3ad4917d81d4f23dad27d0262b48ec",
"0xb9084676613e1a063c2b491bef1b984acfbd2dce60a8ed970688239524e31962",
"0x196af717eab2cf09b47db13605ca4864cb0c4189d40c9b618d8a7d3f92831d78",
"0x4fac369653dcfe74d86b7422354d68f7580b1ae0ab359a8b8f8be8582590ea7a",
"0x035ff04f84478354706945480266321d31790f5445028f3e964801fd9a16c78c",
"0x63df70a24370a408bffabbe1c7a4c9b9e40be1cb326ab10d63fe54bb9de50d34",
"0x37b5c558d31128595425ca68deddf5ae7539abc6da838837eb1e0457e092d9ea",
"0x41a9ce82ab27afbc84669368c2e75a15e6386b77034ec316795a896ef9de577d",
"0x08bc6cc18842df4130280823f7676f418f4797d3ddfc544e54267e6456cfac68",
"0xeabc09ffceeb35cc4ec18518d4920bea2f43bf746f23b5524fa405bd874e9d34",
"0x40336744dcfe6f312e17eea83f53538f9999864c41bec43576cfcbef68d12e7d",
"0x3e780dc9c8f2b708527f1eebca75d18507e00e226a00e1b1ddd0b715aa8dd561",
"0xfd6a3c50c4a4d6e1a6fe27fa96f6aa2654573cbb9b839ce8e09a75993e2bf8e5",
"0x166c03d381d6ab94666099024adc95de0ecc9818e5ceb49965767682ca0c73fe",
"0x9805810b802a51a3ae18ce44f6b2c68abfccef2119df2430e4693e291059e222",
"0x353ecd1a0922e819ffdcd634385ebbdb674d247c4fe75e2d5437b659c98424a4",
"0xe75fb8682b706ed6596699d6151db4dcb19f6e71a3b6e34aabc3508c919f5c17",
"0x6e9bab64b10a2341f49e81d862ef3322d3117842e3f1aabc8b774c68484a2a31",
"0x82b5e79ee8d72c3613458c975530bcbba359734a4e9f07015686dfc521230329",
"0x86d48d57ccbe1f1986a4043748b1a0d8d76fd56bc74e7c48c6fd742affa0ee11",
"0x46ee65b9c2fa3e69aa1cd6ba5aaafa7f7aef59224098b60e22994996c927c9c4",
"0xb6983761b177e21899799410dc018f1acd3d417fe35943fbe57207f9f799a100",
"0x594057a8386db6e159d43d136c464c5e3980eae75a73900f7a84ba94803fc6c4",
"0x53c78073c4a4c44d17db85be06f38ab47ecbb7f36ffa87b9db707fd2bc87f391",
"0x4a976673044732e3e8a0987fc8f3c36375e3c4fb3722fcde5259af492ec458c7",
"0x1e2fc8db341a4d9e123ae4ff4f4d8096d8afef47c5d2915c665922bd1de3b00c",
"0x565cd8eb410c6e0b4d67b54d37ff42f6189095965896b0f81566ca502bea34e3",
"0x32d030e4ff6b2f5a560cb7525b5e66ab1f34a1e06531f9b81c48b8a257bd5637",
"0x25a91d756023bb9ae538034bd39b6e698d05fb1393d1328c4fc7e5c14209cce1",
"0x8036f74c4cbaff3bd98820ddd84bc093c95e88d357b341154a3189715225d068",
"0x8bc6bc61f7a57a145b8d728f583e027c8630f0c07e003b189f390ffd11d6f150",
"0xfbeb53fa167d067c9b0a2c0710f7f5931484f7dd90b9456c52c578a15f402d9b",
"0x8d355b208a16b8aa3f7bc5d8864dc7d6a1c4917a97c523274b86e82998d60b63",
"0xc94e45da800d7b56456d55a9aa36ddf9df45e9cfeeacb1116b8c51a0cea34ebd",
"0x81761ec04a8d219aedb2f58aee529e876043b0a476e771957bc03fef9f0780de",
"0x29264094c720151f7448cae053a403aa86fc20649bcf383517e214d1677e893e",
"0x9ac97c7eef9b69dcd73ec7144a0cddfbf0973791beed405202fb4c2d932ec59e",
"0x89611f8e2f9e2e1629f83ec14aaec1656876718c05088e5087887c87b8414c39",
"0x67244fcdba97905472631378fa3a228b649880c2efdd57e5a6c95e9b70ad8456",
"0x0d554cfc4df02560c3e76159d1964c69c39f5df9489bba5516f28a32a4be202e",
"0xce9274a36a0f25a13edef679a5b286bf91a9dc5274354bb6f1ce0ac52557e650",
"0x6017e68689d9f6dca78f42b93a224b445c18b70288a6e4c0d6cc295627dbb1b9",
"0x76791c90f887355878d0a4d8c84ec3990a3159933ecc8d868d196b363153bd5f",
"0xacb6a6c9ec937b3d5cedba26ab6d581fc41cf9a58b0867232e2c8c73d9978cc3",
"0x89f17989ef556a0562c2aa5a2a1d71e5132b89d656bead1ef88ad31073b80cd1",
"0x06efcf8dbadaf28ee719a5b9c017a093fde84a7f4b9966fe3052c0b2fe410ea6",
"0xce16909616f1d97e5853818938b4798030259ddd41e3468f35b940ca901d6817",
"0xd3ff9e9ad14a605a94bdf05dd2639b6fbda28ccf7b2b228f064b0de52410df5b",
"0xb3e8ca9ee88d4c3ce347d82e8f22793ba22b7adc350fd694b1b00b0764c584f9",
"0xe690268a4ec089aced00f9654aa95acb7a8d7270d9428205b103c30a08d142eb",
"0xd685e1460799c51f14273361e31b9739e5212fa538fb8dfbb8e81e8b1d329bbe",
"0x664c293680fb7c5a89ff3c31e81ec8d0c30a6274ef44e4e76bdb9bba83f3c0b3",
"0x44027fd23526685d920d37b032f912159e308286eaac018244006690b4191d4e",
"0x7ea934c3d75a9ecb6a2055dcd5feaf2d4c851eaf360a648d5d87ef40fba2fbd0",
"0xfd97fc801315e5be630ccb3dc983c409a58fc1fc307adc1e4a48fc60c89ea40f",
"0x15aa0c3c732a2c6684d521729dfeb93f62e22e155d85d20e5488e2c86b043142",
"0xba235420ac54100da28cd6f30ff64b8594e73c42f45ca8494fb3d3c4d66651a9",
"0x9948e8489cd94bed4b8e90a8bd35e01ffe38e7c077f587c6c1949caa99cc98e0",
"0xcf66ccdfa85655d7d4c94cffd41f499afdfa2bbddcdaac547223e6ac4d1f9cf1",
"0x7e5382881f710530720b420a3f3ac08211565ecc8fead8ec649cea11f9385c3d",
"0x104576fbb1760c16ce85c3e5757832d53bda83d618500ef677a6a192ff14a5fb",
"0x9e4689bb1ee34635e1106e38ca41833d2dbc1cfacb7635ede5761048a8637c7c",
"0xc8c7f7ac271015da443320f4af650fc71ea0914f4c41252a5b7ec76f329d5268",
"0x46a93ae992001a54119c8d27788e3ef8927dee0a9949b22ece0196a90932c1da",
"0xa69467f9944f1a5e3a46718a99d3cb14930cab6d971baa37bb774cc757e55c2b",
"0x33f7272fdbfb91428a1344df5867300e256fc3cc2e439c777c3feae1cb27b781",
"0x0aaa367f4c7f399edc64ac1754f47aa5c28b0fa208238276de6bd9e424021ce3",
"0xf5f363c3bfa4a23bf221951f4b53a77b27613938babe40f0832d05fdfd252233",
"0xec315af99bdfdcb3cab1f1dcaa5b42ef53f4e3fcf4d921578892a5896fa20e9c",
"0xb580a8e51e875446d7096a20801dded1f7e5b5fac9f47e9361dfc9dd80214013",
"0xb877df38d8f4cebdfb89f26868bdb97ef945da187b44e1cbeafc1d4b7059d713",
"0x78613b9d2d6b639a54ecf1d50a56af80560b436fa632ae636cf354d4a6dd4af8",
"0x80a9d0a5e43558f1d24256baa6940c0074fa84d4b8e7e236054943f9ad5fbe2b",
"0x60f79f699ba1a740c9784f2a8f1b652d4e695ad2d230b110472b95914fd35c8d",
"0xae20de288eb7362a36a1ff236faaed6ddaacf58783d098118bc9fe66b8780651",
"0xcd08003531d6094cabdbe4d971a01b41552784c246bd2a2f749ee9947d1394d6",
"0x676720accf739c380f64748390c1acd2f88d454539866f7326a517c9b629b545",
"0x086b71ac681c0ea369c16b22ca49753b2083ec25b46ba659206433eb060d98c3",
"0x78910ab7d67e67da722ad53b669d8c3a312de3cf362c6254c09581088e920acb",
"0x5bc6e98a830c114cb432091679ac5b3efd25c362d6f99585ce3a027dff95e524",
"0x8d0daff5a97327b615d1535fea44fa33610fd645d93035e1e5e2bb49d4dcef24",
"0xbb46662b884bc6676d98ebf3f2a35ff9190339b72d68520fe40100b4eafaa2a2",
"0x9aa8faaf935c95a60ffae0487844860084a963792ae0bb90a831f825339810ac",
"0xfd77b5d6b6b87bfb0ddcad7b0ed3992e5fe897b16db06b118230b2d292e317e9",
"0xc465a3384c694bc50cbe97ce9f3bc364884651a97a491f7f64e65dc319d1c9f0",
"0xc4634431867d7a302be79e83fb50d01df7f3b950aeede21fcb59b883399b06e4",
"0xfd524c29525cb97a89026ff68048ca6e2a9f522791eadd74447a6c278151d7df",
"0xc7df516c295a58cf4cd5614eee3d2f773a412dcd4926eadad7e935ecae6d8907",
"0xfb915abde0108d6e84354e21a513fa564f5201277e060bb916a9153537fba1f7",
"0x1d3c6a780f1b259e096f4a141ab83cb6bd035407421e2468e743daec211e536f",
"0xb2f47534f060c70f61a7c16f920d0e11b957bb3ef912ed9292f35b8ceda2acea",
"0x03e0ebe6e9992f6921362d463b68f91518d91079c001c6bea7b3452879fdc29c",
"0xd9a7de173a1617ad813a554a56d7c7d2f010ac78d7782e524b35b5c676cb72dc",
"0x90d05d99167e53d34a02c5b66ed6920190370656905465f20efe56499aa0ba6b",
"0x17702606dc895aae35aef034fddf8f7235efcc66e5c9d252347063209c2177b0",
"0x3c416492193d81fc03b5c1964989a314e5ee6d689c638c996f6761b4d7acd6be",
"0x3c6c1162ea9b277f831989ea26e14bb23ce4d72bb9c865e354992559266ceb16",
"0x96de93f849613bb2ffc117bf111d4798b9252649f94f21187da324a3fe363833",
"0x91e50fc6e564cb9d6b7aab3a6e93f6b32944d5a781196a9a8b12ac7f6f527565",
"0xdbefa2bb2ee620d75295d0f3103e06b428f955dba1a792421e435051c46f7933",
"0x78f29df98ef7dce9fe7b4414da90fb4df5d99231ab0a3b7a3e70659986580fe4",
"0x56cf56899c2388d55eb1496ccbe62041d14cf655c9dbc53984d86c22ed281acd",
"0x099f52c675171088550a9e93e1ab17f003190fa3388d956724d422e5925c4813",
"0x9913e4ad8405b8a60fa512fb616c544c6cdc415cb1023aad0669d58cc3810161",
"0xbf5d51369b2510bb57f8fc8e9342890e8bb37049079dc79ab97afc0bcbf3cbf2",
"0x3a012d45d250c818b641fb18b71b622f5bdf0b7a541e0d8de54f61e516ee3ce7",
"0x0233833414d2cff3da0326f7baccf1bd522db5fee290ab4fc0a976934a20358e",
"0x38a0978c955f20cdc32e2013a5373efbfc50924e45e9c4c756291a903f4162b4",
"0x4107f33a14052662a0469ddd646ab6659006df131c4b0f6b0e6cfd331b46fea2",
"0x8074fb5054c755c912bc68b1dc22ae40ba13c06912c8af1c12652eb4d84c6503",
"0xf6d151b8f9c26c3a31366d967dd7338e80e8107b9b81da0a98faf16df9cbc91a",
"0xcebb0256d0a8a4b22d2341ec7c48292c3226caf4aeaa2003ee36dde25cff833b",
"0x5fa9ac499a2642b0cb7ca365062c02588f9c555bcdf584f533ee8e8544b9928d",
"0x800c7f04db30247318b8d4c11d575dca66bf615674fbeb9e8c20f387d907c8e2",
"0xb0a43de06c9d48afefd5411d759e3c6293cbea4a7c6d862b119182ea02af81b7",
"0xb6e7ca0075d28959cf87d716fea885e9e3a0062fc7da1b6e06089c808a632b8a",
"0x734c1b19f0b5972b5215f675cf60c68c12cf6d6bda7b5a95ee9a781482e68365",
"0x1995b08fffb20dedbef592ac23a81d87129ceb396e065265dd4a6cb876beaf09",
"0x051082047a6b579684b5444ce5b75bc630277ec06b0087779387b9d7fcd18fec",
"0x4aabfe145c368e6878e2ccbdfbecf2f1db5c9078650696bb3a584c14fe17177f",
"0x42811ab68b304ce30fe896c52b53d861abc3c8b5e4e740fa97b1695db9a6691f",
"0xd90cdb12ad64f86b2aa7afb781c00301f50206b05f1543b111c2b971ed209c94",
"0x385435507c2ef42b5f1760b97497e8a02a4b5ec4926c3cce8569fc0f4be59ce8",
"0x2d7a4908350c9cf022920cc51e0cad9c3c05d1d14a92d72310b52f984c857101",
"0xceea9c58106f4f806a256f64dc04e1c4b53e6cc5eb048f3df7a14f8de3506e96",
"0x7032f864eb3eae8d198c3f8edd9cc2dfe88b9971cd01b33318dcba004f9b044b",
"0x71bfeb4c183b20fba60e225524c809b0864fa14f5c0137accc36649ed0712e5c",
"0xef0ec5a2761c46827110c20e14fc4aecadc2407541ea046de09a58cda3b2e839",
"0x5e6debf328055c9413fc3eeca28583f917b361a5b5bda9af4306929931a4116a",
"0x1aab81df07eab969189333e5b2930fcc1b88a525ec5bc6af6626fdcb202b8f34",
"0xbdbf97e1558711d4872821b9400e03a811c61096bb838d3126b1c2154f8fb776",
"0x7d8aaee482933ffaa97777af3e4bf69ce7d99afb24e546d2e365d445d3d0190d",
"0x9da421621b14164582b2b877090c9a956f3a7c917031bf743a9ce457b6292369",
"0x050d717f0433a72b17a0e9a1340f26aed5bf17f90c08a5b73e675860ac9c24de",
"0x80551d3ff835aaf987b9ec056a73a3890985ef551431daa9d4aca10c81cac7fd",
"0x625a5b5aed2660d32d2fd8c4d1bfc248365a5cddaf9b5695e3f131629739ec60",
"0x7d86bc2dc5914d16b3d0d882a5db0230b4b688cbd8c81098d2efc5080e589646",
"0xfe42cd832cdffe56426031ba7d837c56d86be72b89ca9f5474bd08db80cfe903",
"0x0ca30e1fd3bf3e16a0e295ecbb442757248b2ad47baf88fc37d6c55901e709f6",
"0x2a83ed111b99844e17fb7aa69854525958255ffea04e0bfdc365264e72b349db",
"0x709779de19590b69864f5b9228b3a1c334724e20be006ef5ae38f8c05eb6f37e",
"0xf09c664d0e2e88ad5418d14481fefdc9e9c46158bc5439ffe0bf6d6d5ecb2eda",
"0xf350785dd3617ef73b0a5bf439ce5c49adca0c041b6b5047a664e5e33967ddf7",
"0x8fcf87571154dc4eb0a73c6ee31cc0db5f4e064cf23a255a408b2f2c7cc9c0e9",
"0x75801fc8867ce7c75b3148c6c022d7702143b93d93c1fc2349e3e969d0179cae",
"0xf14c18bf68ae881d3fb07f631340b00557a83860d0ba0efbfe55fe199176aff6",
"0xbe48c727fb6a32242229eaa09146c76522dcf6bed6d1c6fc1bebf86b5e4ccdb4",
"0x8487b971e383272df82cd812a0bf3a2026b85bc3897b4ce9ce48afa00849fe00",
"0x60d18b465172f59c0d71594b5273a90cb41db24a5d4c9fc37020f9d8c467a4a2",
"0xab4e36d9f17c748c87d89c23b667e3f4e3265e77b62dbd9c92659026f8a53d12",
"0x2c711e8eec1be3caef6e16a03dcca83dac3a565c93327c67f4e8ea9f2697d9e9",
"0x21d7a9a3f22a163767075aa693d82a962e4458b074bd8f3485c4ded1e47c1172",
"0x1509cac83867db4b888107974d5c8547dc4aa1db3b8e886291f0f9eb6bd7af58",
"0xa555eb6001982080479c9bdd3da9e8668bf90ca9538f16f27a45ae698ec85fb7",
"0x1d1f4019fa3ea7ec85ee0a411560738791f2d45fe5fe6242d07267183a852b96",
"0x2a8994caa5a27eb3081cc749cda5694da7ed3fbb8a1b4c67d7e92306cbd3c6ab",
"0x8389736ba114022ea97a6b3e755d75e74c3c370e9eb916c0f2d73a46a6f6c396",
"0x6a2e5d08ea64195a4153fc7bf26b7a99dcc9d31d8f58faa07510a1e87fb1ade4",
"0x78b66b38fd9b9cd3a5bc9c91c6f816153c2c28c1055f7ac9ad12ab61f9464850",
"0xb2f77834f7a88c4354763a28080d95be44dcb380d01f09292e679a6ed274b179",
"0x2070456b2bff0c30ae26b2435af82e153094aeea8102a75541e02a39e9ebe717",
"0x4e3bb2fbdc71602a62f0c423aa45398af09cf3ad24f0437027b4590e7056b882",
"0xe877208750e7569ce78215659ef78c1656e98b63ab6cc3e1381d7581afbea99f",
"0x8c9d8132d01b83cffd1a1fd9d19a3d2fb3a58a0a28602018db44f8b6dd5fe1aa",
"0xcece8a371b72873a660f4077f98b04d6a3cde1a150852db3a194a293e6a08b72",
"0xe72c8c7211acf2d67bf904cb3428c6fd69c8bd679d52deb3b2e749d5a3124b86",
"0x504f53693e747fbba7475dff1aad887ad0124d48cc7892ec488fda56e31e0de5",
"0xddb3ce7ac0c7e2d141df857e9e5c083d66f4a62ead742bf6631756a72fb643e3",
"0xdc1c6a45dd8308ea319671732de7578a65a025852a3cbe88aa9c5d770d662990",
"0xb6a9df772bdb5e7cdff569bb9d3078c081ed53e870f9171a5b0369d84fe6ce50",
"0x1b436a7d8c9af357c8175430e1d0d3057096c92bd8bd24fe285fa55673c68b72",
"0xa57ce7fed7521e26a908ead89253bc57939459b4c34fba583eececd0f9104e2a",
"0xb16c4f6bdce8de6657d049e2e41f91a1cef8623671eacb9ab02fff45cee7c0c3",
"0x8d23f3ea29767da41a2090cfceb56a65236a405031b69e614c78fb79c1847651",
"0x33149d4876fa24dea4e872ad1da120e01b14e8f9f8270611fced328a36df25ca",
"0x266c558486a85de2536e8849d49443e7d39797a6ed817ad7217633c28166bf06",
"0x2073a22a1f157025228f8fc82bdaff48d3cb1c9b004ac2cb867c2cbbe7cafd95",
"0xcf643d70f96101c99cedcc2d169b0196c564e7a8b235ae093ad25b1ba8981d01",
"0x1489101bebc1b60e4c0fa37a94bb946f1b51ba284c667e4b2f3270ef9d264d41",
"0xd2eceef739e2601390d872bdbb58f8d67864b000f7fbab07b0354c44279379e8",
"0xf67a02d7e7e6da127015e5137f90e30ab43c2e828e62c3f8fe68ad5a0a3011fb",
"0x01fe94220a07efc2bde2404c584f67e9b06e47eba05e50e71ad59710d9d6a9fd",
"0x509bc80d1a464116b4f234001c3f2d7aac7772cd6ec5fa6ef9606ab8ab273127",
"0xea497db1233691b3b86d8fc80ebffe28e6863f445a0b352a1286412a94ad2f6f",
"0x80074f0e2cb668b6a8e5b454db4ae5cf2482894f3ccb7648cfa26b3365a0a54b",
"0x56c8cd15e5e1a21c6f32f7211d18e1e56c89da688cf11b80c9e55718514eab24",
"0x6705989758f03a8c4ee93d44f53b4114a10337694197186b62ac2220188a6879",
"0x7581fbe2b0ae6e0a61d073fd1700c3111d2e3e8d9342d50800ba05d026c9edd6",
"0x254548091369ad8e0abbd2eb4e31e2c08c8e9dc0256fd0097e50e75831946924",
"0xac3119a5ba40f303a9a70ecdef9b6d4722c6adb6592a0cf52e5a312a1a0819c2",
"0xbdf30bc3cd852b882c5d410e6d4419e0f36ed17b761b58fe58ded980829d1cf8",
"0xebaeb5965563f6fa71006d8c2d285e093a167630c319cee2dfa961c4a583b60b",
"0x6cd6717885bd50263a97574e8a56a3ccbca88229bce595b38a454c1184ea94d5",
"0xc9bba3a4a72f0f08663e3af36e2bac2d4c8422b2e3c3bd29705e5b86e7c42a20",
"0xcc442230a4b6aed5a1eb2b8744a09aefe3cfb9e477816edfa829dfbd3f7c2bc7",
"0x4ee48471e3326144c7a48b15fa5f39691ad9f24b82430ca281eb961b18019e02",
"0x35016779dd877530f99b1d3372a0a77eff7ae831208d2bdfcd0979b838190acd",
"0xe3e9b2a4adc4b4e167376b33b8bf2a00bd64d52f66f4e7b291ade123e8c04402",
"0x5a7d1466e2538b62ceada9577079248788e912e23c4cbc52d7b82a1afbf28ec7",
"0xb8529c5a2b2dbfb72e70165b9c9291e7c02b43f157092a9b59f3b4cad85fb587",
"0x55e90fb666a950f38ae41732269cb69afca8643b75a99bce508a16e06685fc03",
"0x651f45996ed080d8a8dcb78550089331ad67c2df33330d1871dff956463aed3e",
"0x9c6b4a961663c59dde3a4f0fdf7b68b0b5f049719ba0d10d841855cffc7ee166",
"0x15c4d98e0ee4fa1535f83625a20bf1a491852884f8fe608eae6b9bde45986779",
"0xc7891cf34fcc61935a43f37bd7f23b674c785ef3b2718358fb627364f3d8b09b",
"0x8105fb76c9e1281f0f0d7c520dc6cd1d546d1b4b29cfa398eecd7965af57f408",
"0x2a141f1c4e1f1e7be5d10c444dde3d1b5fe1a69f23506d5ed1842957375a1208",
"0x9d2313d34a5a6873f2ae1e2684db00e3b693ba962796fe7e78d26a2c49104471",
"0x84d0245b1537ec1fae89fd39741d3b6427ceda3933f750befe4e2d8ec022fc1b",
"0x93d281ff6bfefc7f94058a0d87cb826b97728c9593ef4876b88c35d26b2163bf",
"0x9018a352b09cb36311d0b54bd83a33dcd5dca5ae7032f1d399fec3a0e5a9cb7a",
"0xf01486ea17dca0bb45c245c9ba76d20d1a2f4eb718b7d07fb5183736e5cd08c5",
"0xf797fec0a8cddb31ace66223e1355211e41211c4bfd33a416c8a1a86e497f630",
"0xf24aef69daf4e2b90b4f3bdedddbabbae86a7fd1182947da7306c8145f82cc88",
"0xdd14e34cd000ddb6ae6426dcd78cabbf6bfbe5b3ecd7553c4989b3891e506257",
"0xc01ece645c7430801ee26b20ad215e221f7de0e060fe004b450b372a6c4438a9",
"0x64c836653383bcb165207b80ee8f9377ad25ef1bcd6cf7ba166cca78419898ed",
"0x6b1cdfa195df272d7e35ccd843191d5725919c0d8b71f1abf4b950a954770503",
"0xebe51f94a9f445d83180c4f075d944fa2f05b5b784b7e3cb4bc1544020bd35cf",
"0x7007f6b2c04420d390fbdb6a9fc21edd3249c1a098f8a9383131b8be50c6f975",
"0x6b7e93d96f74fc2b068104a017fa5fa74836e8be5599865309361ad9305d9bb9",
"0xcccf2dc238c68464f3b0446e06fc48fedf202e366ace5734cdac5a7fe7c43342",
"0xfec77750c277cd6bb0937d25d7f598d07cbffb1117b6232e932a296f32b45ce5",
"0xb04e6883c11ba3f2b7f0301936c3dcb622f6f78b3d3d57e48c6dcba063038d6e",
"0x23fe83ea2f197cb97a3ce5f2afd75652cae993f30bc627e8b9572a0069ae088b",
"0x44cc6bf2cee513c826ecc0c130a32b61b0153129721ad2a70d4503b71c128247",
"0xb1a11be5f9b37faf1901b1405494af32084641dc02685ec33be7f6e82e360a42",
"0x284bdf9a6aa0ac964b0f5e72d0d47c713c919d72fa9e24f42b65d749efb6e2c3",
"0x96a0a0e92ed87946a322c9bca6c0ebf70e51e0e825e2ede85b975349f9bb2caa",
"0xd73becb6a3d9a155d46a96d17412cf015724a79b166d2961d24125d92e355947",
"0xd87f142746611ce081945999f42b8da49cbda03a10bf953ba043417861e27c77",
"0x4af010485b5385579a534693bd21ef3768c8920c5b5dc6f9ddf722cc68fd6550",
"0x65a2f4ec5cf3b37530fe0672418214618393db47ad1185f285b445cf5f53cf26",
"0x1d208df2a2785599565a40d934eb2c0cc1673c1dd43d11685d698122ecf24d6f",
"0x046b7211427f0027ed4b69b06cb43986d3cd53d1579a2e078f6904a06ee6587d",
"0x053fb3ccc35edc1a1623f8652dd8c239045b02b58ffe7d51337950ae43365990",
"0x0721f82b52ff87272e1b0af5f4dc16b90dfaaaf3ba56b91bef555d283c98a8ae",
"0x26e93e4e2d0df49a857bfcede7e6ac234d75b71617b561ce39c3db130449e4fb",
"0x5ed6d53c35a0aa52d9dd0a244f31be00426292c11676737c7283f567ba485dbc",
"0x047add17bd1601a06a32b721165ecb2afb55290e025a828696f234b95c52eae3",
"0x6f9e6884745cad7091646f7767a6b17ebffb6482ee4726f5c1f3b9e02d26b77f",
"0xab29ed5be2938fb19614da621ea9828eb338514309b3d56efc6f19b57b8f4f6a",
"0xd5f14043479e8e37c6545527b2e93a76763a654661af9a25adae2b37dac40672",
"0xc43635c1489c3cf3aa231419bfcf7bcb2455a812e4781e5947bb76cf0219d1e4",
"0x72117c92f8bab83ccc7fbe5fc6cd931c765238ebd67ad8c4fcf1353eb78c8df2",
"0x5deed7df3559ed75d7c2388a6e583067222684741143350f85d236bc5fb4075f",
"0x655bbc0590587c030902e32d88cc9113fcbdd5c3371820868768abaeacecb79c",
"0x1641c875e8e44b1fa71badcbb4f724a71e2f916d9cd212f09e9039ef41b1242f",
"0x40bacdc3f0b53522cdf2aefafa2bbd9c8e9f8fd25b4b33bc19a87d72a5de293d",
"0xc35c6cffaf93170662a6f3d4a0152c988b0f4916a33d037801f6f589a6140d08",
"0xd4e37125c0ca7844e048bac4e4c6069ec24b0493e4547f53679fc8df2a1e0d66",
"0x34c6bcf0e01dd55a1d3a82df0a2edcb2caccbc4be93fdfde32c7718276e59bbd",
"0x78e50d394d9d42ca8aae3f4dfccfb32fc3923073a6643c010fd49ec3897c4fe5",
"0xeba75c237fb213e893083c5ed0e28dde3ec1f5f1b60edfdddc17a54de9cdfaf3",
"0x4f35b9a67fc66492fea331378c479eba921fe0a6f0589b5ac07a44b1ea9b4995",
"0xbb4d09d98b27d672259b608f5075575cca49f87cce0519813c1ba4a614d369ee",
"0x805550ff80be2d3a543d5076961442f70dcd86e82762149c125d2ce371a32f69",
"0x7d81d766cc2d6c66c7bf07be0a416f2a4c0dd2bd94026270a410ea6a051dd4aa",
"0x30edcef101041c0e0acecb8d6c15a28c70b8894e56196cddefb5fb6b39658ceb",
"0xe5e8d21a00ccabb5cb50b4d04ee1dcc217fb9908e72bf0efab69d4ce7a0b7eab",
"0x3793f78e2e582b56fb3d24e3bd4aa69db4741e768795f9681285b0fd34ebd602",
"0x4a9378b4ad76df084739b2a63459ac67abf8eff7499210936fb9e6644265054a",
"0x2e47b62375a267e104ad2f8bfaf5c985fa7c6f4c438739810517e41141a825bd",
"0x73dd638affe08c104b37d53e6cb7884b8a9b4cd4afa4bbec6ef608bab4559dd9",
"0xc566dce73da1cc0d069a032045c44ac4ac86b145efd2cb1c363a2d403fd8d7cd",
"0xa1a04d193c5408d0edf85e771c6976c5dc8de08c87d906b6c9de1efa4b4fcbca",
"0x3245d987f205b01c54448d9a9d76bc7e4174c1021726a68f5feb6748ac373f8d",
"0x1f963274b936dde92281f2acd47d3685261b03845efc7d10ab8b02500c239af0",
"0xa1e6d69b28aea5d24c3dc301dfb0d6df8ee8fe693d1d0f419cde3b4d20d22a89",
"0x31b0711795c77cae9e1b62b87244a5eafa4b1764aa1620d53be7e779db060984",
"0xea141234dd00c54ec7912c80baca591b53a058b81fed1673317fa4ec37c2c715",
"0xc5902f3941d647da304e94062951b09f4d6da5bcff71517c7a35e8b49c6ac569",
"0x54c5677af7e31b84010f8b2941ffed5080e9b4d18b608fcbd230f163a5e04ba5",
"0x1a12c56542a01bc58288a58e8e526f2b2c81a5238c83bbfe4808d206b3b9d0e4",
"0xa398984bab81b7640674893496b2346eaa8257aad4cc8bb52474f7162fb6fa2d",
"0xf454a592314f1b7459d21eeb6ffe54c96ae2e25a7051dab1f8129518d3788cd2",
"0x12964f85bcc410fb40e89f518d749c7dc2be7546a960849dcf560eea2ec3d333",
"0xb176775c2ff94aedef2c099b936468986874d17eb563086215525acbee2fdbd1",
"0x250de6e94d60125bb680e862fcb70823d06e4f27308ea1ecea8a465d5febc860",
"0x731cc6817fec665290d7637c67fdbc72bbebae70a13f9346f47f3b368eb1e4a5",
"0x489e955cc77f16d8aa16bada320127380315eac978f0fd1033baa04911123332",
"0xa88623d9f89221dce46b55ca7248581b4e3ace94a7cccc9fd44a3b03412fa729",
"0x4a65c1808d79577b36dfb5fa74d0deb96e90d0105ee373d552685cc4f73d35d8",
"0x7aa8228e6f8fb98a23e1599e12ecd84c879475a23f10660e763e338a347c9002",
"0xa825d02023ab76478b83a2a61dcc5cf6c05ca741b960043adba3f13d1c99612a",
"0xd3c90e414048510d9594ae7a593e25641b6baf93f9625cc4a3caade094f02e07",
"0x4d631d9024a4e75111189835b4ec0007a6951d710512dd7b91a8d56da95e57b2",
"0x0158c7f49affa675f04aa4ebdf11e88bab8c6dc0abfecc0100d32489b0d59f1a",
"0x4ec96496694ef53e17b11b3122cc45cce3a4fe1a252fe9c0d379b302db1318ac",
"0xeec44b94d9f6c7359047d257ffd5f1f5cee87b3ebd6845dc1fe48be686a9ba69",
"0xe24c7b667382186196e5da605ce75a5cd9c7e4803fb2e268b4de5848737db026",
"0x14a551850f945f02fa2ab84c6ffe3d17930c81f5b15e94ce92342c3646b8cd9a",
"0x7e9a72469a52662bbcacd449dc7df8fa75e5cd1480dac3b635148a34e0f9e375",
"0x2dd4fd0394574c5d8c9272c42175e3a88d8501753bb25a77fa937123d0c2538d",
"0x6f044eb54fd2b8e51d89f6db2ee6e654b144d9e69cddf79b7899e06af5bf39f5",
"0x091d894513fca5d2239f6b5ce73737aa2657c90c2af196b2b893ba0b0205859c",
"0x78b9ca580e3eca0808e60420ead000365eeafa60c6394e055a653a7f536c8b6d",
"0x595cfe325aa600ef0d7de6b2eb717b33c92dfbb1e3ecb9487fe332c878e9f121",
"0xe1dcc22aa44a0243f67dcb0aef7e0eb1ceee269b0faa67633698f8ee29155115",
"0xddd3354d4167161c6b18173ae24fffac2cff12a8bcb3ddf1d964907716685cb3",
"0x55c02c02eb758f8c9aab60a2432d29e73b567e3e3d0148611ce4569fa074c5d9",
"0x5d07d5a7d45fe16a0521c55b3dd898b922c619e5b938535e2f974f11b4edfdc7",
"0xf75100a1a2f8d6daf0dc36569cbf579ed6e52838b163fe50b5deaad8fe7bd07d",
"0xb6391d028fbc88825eabf1dc4566e58112e6d3b7cfe6d387d0fa58df387df78f",
"0x8a58fab34ada4d30eecbad64a001015c91e446cf14e95c9d7a3d894530787b03",
"0xac13b36d0eb0483ab8cf21f49c78f9e32092c74c878ff520ed3a3c0080fdf5fd",
"0x358c1021fbdf028ec82b811cdeb504adc2611a1ab477e8a191f780592d598361",
"0x6a7af25eb6d089ad63712187d1414e8bd04fb6bfbac26e6086b1a3e2d66f6f7e",
"0x6bebf617fe66129dc8c9dadef1a74df05567ed2fc89f4e5c4dd36327d6b0a5e5",
"0xc658d76eb9f3fb91327c64668b08d3e9bfbbb2099bf5d9aeb6a61b8e4f893c39",
"0xfe68ecc66eaa625842068f8b5ad4bdf5cbce0c82ecfafa4bf48b08de49c58595",
"0x26eba0a168f2e5f3ffcb37955839d37432b39f2f447497854d42d802826fb1ee",
"0x8b4f8964062d0f00e40a41d21852a2e22abba1f4cf75f594797e368e4df21f7d",
"0x916993123d91b181d145e2668efed6eae3845fd76d4765df3a04d8adde8c7142",
"0x21b91051b5c5fbd22164a655de8485cfc2a9eff58ed09d6fbbb454560898daed",
"0xb52ca76065bcf43f06335977ab87d6c809dcf4e6cdfc452ac2018b77c64089a5"
]
},
"accounts": {

View File

@@ -15,9 +15,14 @@
},
"772000": {
"safeContract": "0x83451c8bc04d4ee9745ccc58edfab88037bc48cc"
},
"5329160": {
"safeContract": "0xa105Db0e6671C7B5f4f350ff1Af6460E6C696e71"
}
}
}
},
"blockRewardContractAddress": "0x4d0153D434384128D17243409e02fca1B3EE21D6",
"blockRewardContractTransition": 5761140
}
}
},

View File

@@ -18,9 +18,14 @@
},
"509355": {
"safeContract": "0x03048F666359CFD3C74a1A5b9a97848BF71d5038"
},
"4622420": {
"safeContract": "0x4c6a159659CCcb033F4b2e2Be0C16ACC62b89DDB"
}
}
}
},
"blockRewardContractAddress": "0x3145197AD50D7083D0222DE4fCCf67d9BD05C30D",
"blockRewardContractTransition": 4639000
}
}
},

View File

@@ -9,12 +9,14 @@
"durationLimit": "0x0d",
"blockReward": {
"0": "0x4563918244F40000",
"1700000": "0x29A2241AF62C0000"
"1700000": "0x29A2241AF62C0000",
"4230000": "0x1BC16D674EC80000"
},
"homesteadTransition": 0,
"eip100bTransition": 1700000,
"difficultyBombDelays": {
"1700000": 3000000
"1700000": 3000000,
"4230000": 2000000
}
}
}
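For reference, the blockReward values in the hunk above are wei amounts written in hex: 0x4563918244F40000 is 5 ETH, 0x29A2241AF62C0000 is 3 ETH, and the newly added 0x1BC16D674EC80000 at block 4230000 is 2 ETH; the extra difficultyBombDelays entry pushes the bomb back by a further 2,000,000 blocks from that same block. A quick sanity check of the conversion (plain Rust, not part of the diff):

fn main() {
    // blockReward values from the spec hunk above, in wei (hex literals).
    let rewards: [(&str, u128); 3] = [
        ("0x4563918244F40000", 0x4563918244F40000),
        ("0x29A2241AF62C0000", 0x29A2241AF62C0000),
        ("0x1BC16D674EC80000", 0x1BC16D674EC80000),
    ];
    let wei_per_eth: u128 = 1_000_000_000_000_000_000;
    for (hex, wei) in rewards {
        // Each value is an exact multiple, so integer division is exact:
        // prints 5, 3 and 2 ETH respectively.
        println!("{} = {} ETH", hex, wei / wei_per_eth);
    }
}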
@@ -26,8 +28,8 @@
"maximumExtraDataSize": "0x20",
"minGasLimit": "0x1388",
"networkID" : "0x3",
"forkBlock": 3383558,
"forkCanonHash": "0x6b4b80d65951375a70bc1ecf9a270d152dd355454d57869abbae2e42c213e0f3",
"forkBlock": "0x40E80F",
"forkCanonHash": "0x3e12d5c0f8d63fbc5831cc7f7273bd824fa4d0a9a4102d65d99a7ea5604abc00",
"maxCodeSize": 24576,
"maxCodeSizeTransition": 10,
"eip150Transition": 0,
@@ -39,7 +41,11 @@
"eip140Transition": 1700000,
"eip211Transition": 1700000,
"eip214Transition": 1700000,
"eip658Transition": 1700000
"eip658Transition": 1700000,
"eip145Transition": 4230000,
"eip1014Transition": 4230000,
"eip1052Transition": 4230000,
"eip1283Transition": 4230000
},
"genesis": {
"seal": {
@@ -56,8 +62,8 @@
"gasLimit": "0x1000000"
},
"hardcodedSync":{
"header": "f9020fa0a415a8dcd55fe9c93372da415ff6897036e48cd3c1a5ff8ffe119eea1096ecd6a01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d493479443d58f2096e015db88e44346e73d8c59cb1753bda0100f05d66d36782b7c061c724d8d07619cc61053eda41badc8d2cb9292898ebaa00a60317b490365f40f0528e1f559b0f49facb6638c82a9490d368e963647c704a0118b536c3bdabf90273d527dfc26914c7878176fff16cee8fbb150e00ffcdd29b9010000000000000000040000002040000000000000000000000000000000000000000040020000000000010800000000000010000000000200000800000000600400000000080100c000004002080000000045000000008000080000000020000200404010010200010004000000000008020000008000000000000000108010440000800080000400010000080010820008000410800000100000000000000000000240244000000010000000000010010000000000002000000000000000000004000000020000001000002000000000000000013000000100000800008000200000104000000000000080000080200402010000000000000000001020000008008501602a8414833bc8018347b7a983476d40845b838083904d696e656420627920416e74506f6f6ca0f540bc9cfa258b97576bfb9a79518b2c07ed73a98bc6baa61cf7af4b40ad5b6988965658a406315a8a",
"totalDifficulty": "9811143388018700",
"header": "f90205a0cd611d63e443c91cba1dc259c71fdb96bb70a1b78639166f7aa2a09b36843f7da01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347940d85c541489b4c67df516b86898e1ed59c8d639aa039cdbfba9ee1e91d15fead004292a7df0865204ee878daab2375683cc527df2ca0ebf5b5fe0ef82be8bda42e6767ad7881269cefe650c3a899f38f5ed1c05cad8da0578f443c4b0d0bd89e42a855436258f1b19d1a018f6769c5be7aca0679ce6008b90100000000000000000000000000200000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000200000000000000000000000008000000002000000000000000000000000000000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000800000000000000008501856ef4de83436801837a121d830e1038845bea0191864b564f544845a08dcfeee7d79d54276d8bc35765ade21a51d277540a3d7a8e17fc26b6324487e488d70f2a90307e2ae3",
"totalDifficulty": "14549054591321060",
"CHTs": [
"0x614648fc0a459451850bdfe353a932b5ff824e1b568478394f78b3ed5427e37a",
"0x1eae561c582dbb7f4e041998e084e165d0332c915d3a6da367638a8d24f3fafc",
@@ -1971,14 +1977,259 @@
"0xb652952de1bf9e1174e5f6a37b069b437792672a37a9e0159c4f36b6e64306b4",
"0xb72dd6cb5df1b00dbbd84e097e2da79af2ce60559697ab4c93b0a8b85b2ee406",
"0xb96fd4a94ac30c10f757691f7f06f25a4900fe424f4eb7ccf322e2f95249b914",
"0x99fd442599036f161ccef1ae8088c5ef694c1819f5b76d9d2fa8f979935f69f8"
"0x99fd442599036f161ccef1ae8088c5ef694c1819f5b76d9d2fa8f979935f69f8",
"0x3e53574f6ae31a45ef928f9c37bea6c61e6d728a5ade9851567d3167f5ca3314",
"0xd7e3d08c5b71a7ad8338e8b51ec54cb11ad4d643d129a371af07376f8c47c1d4",
"0x1033c8aed4ec46377f75cc9a6b3297e1da0a7d1e74df20bae9fdf6d037afdc28",
"0x924d621544f3301f9e212fbb95872fce9eb4a4172a11693674de733bfc2b0018",
"0x7f61884149ea4def1444a70c022da1c23f31ecc51bb175905b492236a57c7fde",
"0x40c50785bc0665ab4eb3cec95405e17510c571570a5859ead804530dbcbd1387",
"0xf806491cf778f4796c0f73428e6eaf237da8488af99e9b61d72c56fa03e7051c",
"0x7a9670842dcb12c66f11e357a84849cee227ea5a7351e7c6c9370e9ef2560129",
"0x1c974da4e1073157c10deac8b256c8ced77a030e0500b2b8a90b6ca1d32ab4fa",
"0x97ebcc81ba9c1e04865ee4617daa967dec39f65501be21fbbe929db869d57dd8",
"0xa36e4506065d8b9c662697b18ffe50ed2f6ccfe6d07a065bdad048778cc53668",
"0xb9d5566eb0d40bbb03114d333d1d1dc85b0e780ec63229f3b93b2c84af5f9509",
"0xcd16693573724880c3c83834d516be65c35a861b76b43878e28aa7fcbc961361",
"0x4f60ecd7811acc087fc4557fdfaa1a0b522fe30da1cbae5e7740eec3cff04c00",
"0x9e58573b152bf5008e0ea3fc0d64573211916521a62fb08ba0f1b44c5da12e7d",
"0x2c6693cfd7e5bf9a3d8cef9e186b3da25d07af983564ced6238f9191b020f105",
"0x8cc6149caeafef85ec7b2456f33530459c52b30a5365a2a0022b1c308357f7b4",
"0x6f66863bd9909f687523128569cd0894c4cf41e2eddd5cd9c20d30c446b1711b",
"0x402317752053e7b6d7e2d5512d6397112d80ace0873f5f9d32c023a402ec03b3",
"0x2fcd50a79495057908bd34875e3531e6298488f0d06d043fb6fb0b140895d379",
"0x533ba9669dcee2c6e35713c7eca3bca168a326a36b0e39fcde76cbd35ab3d99d",
"0xdc2e86503e8066bc5fac91fe63544e33568a3c744967b9360458101c3d9df096",
"0xf994b38ba312d8bfb00d428b13a088738d93965b525eae81b45b9be344f99fd2",
"0x0721f3f772958d6a58dba638453b8d004e0c76dc8b4cf6d595b712edddcf002f",
"0x3c650c2c7ebbe7879a15882c3157552e8ae1adebea8f0c65a2dda272cc4ed838",
"0x649fe38e87546703245a7adf5925e8c7a07942750e14d39553a56ca3fcbd8c65",
"0xad204bf42d2a444faa864df8e9d023483a6b6daaa8001e00bb5373a45ed064a3",
"0x2c5cdc73d8ddef2e5c0d47358ac180043e7e246e590a7e8ad2b0a3f9b4e9375d",
"0xf38f6c364bbbbe626e849ca9bb9324c54cf0ba8dfc0b2741a3ff87ce7734adbc",
"0x317efc1cea774849d6219d31c8464a15956da4f3810bf15d4353443f79d98e75",
"0xb6796dccdf4d3cab16b5ec9567237cb988ee94131f3262c2a581180b775e76de",
"0x1fde3fdf2303d080d400c43345a424f50f6551a6a06ad50c6e277d49e8034df3",
"0x4d7bc44a3b56f5e69fd3e5e8c0cd8f5f839a775c4ee381b4b1d0a36656cf91cc",
"0x6051b60fdced0c51aa6a1cab2418c8f21c5d174109d514a4c6de758b2056611b",
"0x3c2f7be830078af3c2c6d1557b3da74d1d5bbfd8094f98886a959aa71ce70b15",
"0x8f296b90a0ece0a3dbec19a801072497c5840f9c0491062cd402db00c2b69f2a",
"0x6c14c4697f8291dbdfdbfea5522798e3f8b17204f80d8370e6d379e6ee659e77",
"0x4e98f63afaa50f8a30b0d352eb5fcb5403c635cf54b41545aa8b48465d23fb1d",
"0xad3059433e981ff12cd0d7dbc11a8d92a65cb39c6e936e9c7db5934d45806492",
"0x1cbb21f28ad2d191d6850c97487e5a733306f2f6ba370723fd5ed37cf6c880a0",
"0x82a0010a1b20d383bff0e5d7ba3751bc0d9161a4817554432558c5c2825babb3",
"0x33e54e93443e87c003d582dc51d0b9981ddcaeac4df0993877739651cbf52a58",
"0x1de8bc150f4142cd45b5d0784e5952abd8de7cba9654af959498c0fd0bcac404",
"0x3ee852f48a1a930d671e53c9c8d8c3c38353ee1737c093960c3f841e6c682e94",
"0xa9c6e05ec91e2a2f2f003419063fe033e37e5353c6e233706e29c08693e35eb8",
"0x649f7328064c55c03249d527dadaedcdbb4cb0e939d94c866844192d99469e05",
"0x3a407d00efcd5fe7bb765347b1a3f231b744349269b3aeb44099f4bdd068eb9e",
"0xa1a20af2f7e61082810ce7e7afe6118bc0ad95e9641e6129027f46af28048107",
"0x0d68fc5e58cacb2d16d99a0e9e612d674754ea51cbee2c68a21f4b0aa926688c",
"0x9b3e58144c014343271c9dc90daa8d2f642954b3eda223d64bbb0ac41380e512",
"0xd3de08b676d4f06bbf4322ed4340caab76e6ab7144c97af91c2bc9c749e65b38",
"0x21d626c9c38087aac6262b64f09398be6e4cbf246100d8c2416cab57e9ac1b68",
"0x563a450e35f40279f5946641a823f596ef3ad22a45b8ec280128546aeb0faf14",
"0xadd9c7128e14e670c7d21d6dfa5c09a11dfd237e90709b087e3329d3cd89b5fd",
"0x258cc0f845d8e7438a707f590f55203c6c51302cef4cfbf788b1c7054688da14",
"0x4309676aa14fa8244e0a089c7013b89c9adf57fa952295b8ddb23fc6545c9870",
"0x5db769765dfb41aefc0f40f06f3d005b30ce1f14f04f653e0c5500f651cd61cb",
"0xbef131c9f19572b05d295d7122fd1a09fe4a8afd4e20c5a0f3cd2785b7eb9882",
"0x3f235228ea537332a041ec75cc6cb5663edaa1c2ed1c1700273af73a5d49bf1c",
"0xc081811bb077c6ebe224b560eb6b81f3f813b26789cb06d96110071ffc25fcb4",
"0x912444c19a5e458b79c89969ed5336f2873267baf2fe729b6f218b74d269b797",
"0x5846fc726eb9627e9d070041b92d76191c4b32e315d33ad18121b8acd01634fd",
"0xc899f45494660034d343670856c13a32b230d047434a4d54a772132ddfe3e182",
"0x11a699c18b04e8cdcd96a43b7465f7bd81f8f64d7ebe79dcaf9201cc897f2746",
"0x8e09b134dc8a1735c060175e9688fd001974bf2e3baa5a8e88dc4c87365e0e07",
"0xa086797ebca0a1d446a9289b7eda920b926e1b595c288a9dea31ad225e6de86f",
"0x0cc04369b6036dff78a9856a5173bb2dde380359a8dbe0126e06d6e763a01c36",
"0x4b5efcac86e03d1f67774769b8bcc5d131c181cd4fa297eaa6cea5ec0cdfaa6f",
"0x47272a21a07ad5e14e3f97b237dab7e33344da4db5b2d31bc7cd0cc2d2c9f1db",
"0x9540755fd321d125b73cb6f1884c2f0b2a821d29362194f5029a5e7ba2d3ed44",
"0x229b88922fe52a78090673775f264cd665fe222002d6add2ed29b7ffd98de717",
"0x8fa2d755d5cc0efb01d9fd6f5ae1f7864404ae111d8ba17e23686ea9b6566336",
"0x33a8f2e0775fd19b1302b985bd6c29d4ab5fc63060bcf3df2c3685ab1b19ce67",
"0xf6d6bebb541ef9b84d779c62adb76774bb38a8eba3823e74e0790dc7401bebbc",
"0xa1f421108d49ed23996e55012613fc05e0f86e00f17251b1ff1e0824d35befc7",
"0x2cc572ed83dc6c604bb455ab050c550184a923f4b13815f06d10ef19dffb3c7a",
"0x28220e7d1a9583d68656f03ef4d6fa3e249c71d1b42698f87ba1fc582493e194",
"0xe8aa37b3214abb1bc167fdb6f10119a4019541f31c76b3b3f8c363bb138bd09e",
"0x825189c2c836dda454b457a03ff83d422bf78df1f368434768690fa7f51c57e0",
"0x5dad65d275e69478c81ecaec5b872660205735d9649ac020f65f5ea6ae972dda",
"0x84a1184d8f94fab280e0593479179348f9184d6fe5a2b2ea9697894c42574473",
"0xbef5a05bc7e1fb94465570144499672d95f31fa241b4c510011f6677e2bf72fb",
"0xd08235ebe6d79a8549bcd3d2414cd8afd2a3e2ca22ced226c60aacad1361ff89",
"0xbab5204ad45ec52860023e7474579e7c95397f3c4ac01db7e446e92c19dceef0",
"0x6c81acf2ff161d423a904c457166ff454ef41571d01e73d56bf9ab892790248d",
"0xaf4a603b808e3ddece42e3e123ea02defb9f8ef2546a95c5a617b6ecdb89c306",
"0xeecdbda25b04eb764e322d9a1e5eefad399c9ced8c77b1e4ecfbefcc90bb403d",
"0x9463f4677a2039ca372b61b16d5bcb7c043b26af04aea4d3f0dcdec7bd222070",
"0x27bfd92799b4cf9699d2bfcb158f6727bb986fc0dee780fc1052366ebc4e6364",
"0x63c3faa1a8fc0d531261cd241b1299d4fc13629abb4cd357eeb130505fbddf94",
"0x9a4535b07ff68862f3396b14b88fa07cff7abdd5744775aeeec6868606eb4712",
"0xae59e7c3e0a1df32f6e027da2983d3c55b4ba4d99e85329361561bd7f13ac629",
"0xcc5dc26b9be8fd8432537d967afe12fc668949e4fcf72d97a40f9214975fa57a",
"0x8f11634c83c7a43be8b98335ba617a64c6379f5f92664055c5e1620791134ddb",
"0x14ce2a69d844e6a46aa244c5aca9fb74c127f2151c7c16f4611ca030df365d8b",
"0xb06f220566a5e62570b9e9e49a8b9d5663501ba145b12260fbf9d4a18a4b19e3",
"0x6274f3cf553c45e6ba7ef644d75bf208e08a8c6325e336aefd35dda9cca3c4d2",
"0xd0d685497c2f2b923d0b9f1590a748da8c684a915a470db58c3105c83d8304e7",
"0xf37fab515f96e655f182f0b6e6aa3602f2cd74773329094772151e8c33d1f9a4",
"0xf6efd731481e8553f1d18b5735166499e787009b484b0dfbe4d35e7930f0d837",
"0xc96132b510863e553e08c54e98b5e9c0067f26e421980a6a3bfd4f07480c4396",
"0xdca9d8182c573871b6d6a184cb9819256398080bcb7fd765e6c69cd972a28d8d",
"0xd632ca6f5d45646726ecd2977ffea5c71a867890633f571b359657c0d096f840",
"0xfe3884dbca6bd3b0087466b04e6a5857ad59d7a25021e1d994d059d20005185b",
"0x7f40eb6fb94b05bb43873a98e9d4eb5f7ac90fb8913240bc0909c6be42922b30",
"0x5113a0808666815cfc52b8ed63c649d96f35c365def36ae623f536241b163c3f",
"0x8e6dbacfb5c593d7d7c2650d3d0115c3702cbb55f73011823a202e69ca33cc70",
"0x8f069ac7caa48bce09fe93f4aaef6784d8a6f7a3a09edb82c7512ec18acc3ab9",
"0xa5525e51fd789c59d3b208efffe09abca47cfd6981d36ab44084b86706c69888",
"0xcb4a7e60d5e8b9d22887ef1e8ce339cfcea0ae1fcbfa9adb766ad05d84182de7",
"0x0a14f23f9066ebdb67df31e66f6b8ab1c089025c0ba56ea56d15f73749f47cb7",
"0x0963e3eba12e41d21af7625b8dc487b637b1789a6ac05fb23062e0166942df68",
"0xcb7ec271b2f42cae0027d22b688b19b9288f2b5d9c43bc5b1ea23b35f5542828",
"0x9b97e6f4b2eeee29ecccf9584dc020c8caa3cae51c82f5b58d279eaf0c6ab4e3",
"0xad7f1963ce9993e6172c2ae90c6e1d4d3d3c52e14284fcc1b1e9a56776afb97e",
"0x52ef2ad7bc2921742dcbac9772f13d5c31be938eb1ad6aceb2fa8a163389cefa",
"0x369ead6d900e64ae0b5028df8574e59b67c61dca418c87ce6461eb4c8535fd30",
"0x7e1a18f6199f05f21f9eb5463e9ffd87637d2fd24a23047fe095895c533cb6a5",
"0xe1b8813a95e511aaec9b358d515e624fbc20e551c56328f843ae90b3c895d3a2",
"0xc2ea59f3d1e7bbe115390a4c210142fe9f9dcb1959764450f5b5292ad90e0fcf",
"0x97d235c3f18e6819c08dab4efe66d0f11f0d06f8ffc9686e3f28400e057e6f4a",
"0xea64f817770252b77b08ca2f579b440ec02e833fc88af7c9c96a8e1e07b2cb2c",
"0x185f5fd1f7001b533dc01783c83b7ab0828a4e2f188cc4e26768c515b4c421f6",
"0x0c9de9844e856a1e4340bf54dcaf9dc66b489304765b5c3c6ca20284f5a0dca6",
"0x4dd1d52da1d260d1f0f63bafc4c816b30cea8ec3434e7d4b63a0eff86997254c",
"0x0b3eb94aa246f7c8c871535ae2d3abe5c1b951e76b77510140ef52d5ea2457ee",
"0x27102708eea5d715799642f213049d8ac9abc3b12c76d147ce443dab28af96d8",
"0x81fb3c4e8dc6c658af2901b7aebf7467b9ae045dd0f58fe8d77f8770ac517fb6",
"0xf68dba4eee635d7494bae6fb9f0c44e739b3121d4bc6f6f21b495a716af3cf52",
"0xcf87b723dc473d313bf9ddfa233056036c5658777e831796f1f56647cd040c8d",
"0x49927c2100039ac496d9c16dd12f0a05c9441b8616c69c726fd2e79ae65e130c",
"0x088195c7251f6b9fa2105e77df5620211b8ca783a77f1a98de6752fc442c26c7",
"0x604de480bcb88e908b90451c4142b99b9cbb479167473befca7bea9b4ca427a3",
"0x642fdaf6bc1abbf261a9480fcf9bb48cf03fb80bdd434c6ab63401856c74fa39",
"0xe6b596393fce7a3023a316ac34a1fac18e18779ca5983791866f857f257592e1",
"0x40384a52564fae5df8c3e41827cdf584e38f3f555a54ca749b7b516619071d85",
"0xe52f7c17a4106594563ae7b724e5613e87d8442177f39a36b946b0e335be0e5b",
"0x7726202a7c255a9d7be47af6f3b44c1e379cda61237866554396fb6ec043482c",
"0x665da353f296cde80e3cbcb9c7f93b107d5676e5cd694b0bd33334d74789beb9",
"0xa1157fb181aaa945793b029d3915b37103d050f1f695862d9cda90df4755c189",
"0x449c9daae1c38b3d3f6861b63dc611f147b8a29029927f85c32b5e549df8ee9d",
"0x5739fcf908f960416e5227ff6f95aeb00696f8eb7192968239458a0aebf42533",
"0xfaa6017362d6e64f9ae1b6d11764ddba77cd980261acb5bdbb17b7cdee2d3024",
"0x97fee586909cf8b3cfb2f2dd4e251cb642eb551f1e5d9fa557a21dcb66f430c0",
"0x9934d90e0b4ec5900107784116323b0f76d1d71491cf9c619520478b6eb97ded",
"0x8f5afe4fefb0de810442e27b2bce63e3a424f9e339a0e1f46f6ed26617a4a404",
"0xcd12ed1b75e624f93d9e4fe7dc65407b6d7963196e4082fb459ce354977355a1",
"0x6858ff0c07de87aa2a88498ae948efe6cffade00f3f21086f0a71b8364090846",
"0xaca944e99122e4fe41c62b88e67940332483c8aae2a3a615508f848ceb1c044e",
"0xa8bd3a6b9197bec8849737e6771f563d14d7415707776ce4bed9460588c55c9a",
"0x20e5f18e795844cfcf2326f52ab1e56f4a53a1ce405fd355e0d37de5f6d552ee",
"0x038821cd156d2d9e2aff1ce3525726c42f581ca3e9e2cc75df524a3dbdfe0feb",
"0xb79bd23e10d1e6c7ae749cecaa732966ce3c2e2ea8b640d43638f14d7b78a21c",
"0x2c4d3bcae4a76ffe6924555b320847211367ee37899b187d8fd4358d91264c2e",
"0x0896db7657d2aa8d02166bd4ceba7515b24147fcf0aa34223590fb80db64f848",
"0x95f3b077d9cff92709d46185a7dc5d723f257fb77641e436b0f1bf07947844a1",
"0x6fdd931a93229698b92e6419074bc308a372e0923b35321b725916c1c151cae0",
"0x8c337c36c2027d0f2fb96405f76d3190cdd01237f6a602e07976e09c4a2339e4",
"0x8091a859cf86c1e2c22c870ebc10334f50e05bac2a41153102bf3f698bf5156b",
"0x382435cc836bf1873e489cb4d42d1c1673c5d1b452ba2d16dbd1ca77b7b87a8c",
"0xac5a288b16ed708b00cbe956a14e2a64b321f40851ea25ab1aded7b096822ec6",
"0xee95229b736827a21e0b7154873da3da59e38a1ba3e3b1770de2ed973495ce23",
"0x5168b6e3b5274054fec17b94a82931bc229d0270e5e4c9aeb61539fe8ba4f4a3",
"0x588c6144dfbb33922873946a44f03092bd65d2f89a01b3973002ed38f48917bc",
"0x6c51a964953dc036043b6e8c7fba4a3f6e83295306767bcc3562d9a8cdb04497",
"0x2cfdde11cc2669c5b504c4a82c31ff03e3f0b3aff044519b44bc39405bbf3c9b",
"0xd93200e550400a7179ac7afc394b22b7c710192eadbf429389450e7167191fdb",
"0x5e32f497c606c01c9be8ce1cca5f3cf475f62cf38b2195244a7c93df7d064dde",
"0xb0b74e34fd3983aef31a0584b6a7483ee936622b5fffbdce1411a9db6171c685",
"0x69d0a96a2eca8d6a5c32e700d3f910330930a3348e98e5b276013fcf70c0fe5f",
"0xf49cbb26f2463d38fde553e483de2d4bc6dd85efe2d0e56a2a79810df6af6db9",
"0xe58aa167a26eadf11257f1fcb5da7eef50f9985b590320cc6c015176965ea58e",
"0x582f08d14023a661a9d9dfd1db8324be88fc631cb9136e92f379ad7703584414",
"0xb9c5e584f5e231c2c41b41799a945d60b7ec22487e587ef55e2d9710489c3c00",
"0xd8b70c6657a5bf1fa4b67c4fc046c1eda284d677c610dbacaf0d2d84e4d782dd",
"0x1e29fb536e468abf66b59dd6f48d177a5100a192f1807afc12bb5c97368b8c95",
"0x595ac042acdb51370bf6bfd2b2058dda4781dd7ec330474defb2d5dd00e7f50e",
"0x3d00887c6509b148c5fadde21014aae94924fef363fd5723d80d6a2028df1de5",
"0x9632d419c9fdc7e676b130806cb9c86631aade0adc6a129d3769908ff97e7ef3",
"0x2f91c7ce1158228c8264a6147c6ca396b0d536cdde997a922cde2772b786743f",
"0xf4c731ac2e2612b7e7afcf3ba5911a58a03ef075133923218f54a736b25acb81",
"0x6748a5143ad25f0a461257a47eeab8f057782a438274ced580ca2bd3725d6be3",
"0x05a113b04baf81b6396dd32d59a53908d07a374ebdf17314ec334b3330a16697",
"0xb1bafaef874da14b8a9f883f89a896ec37d9af1b0958f5a3867780278bfe940d",
"0xd93c67d29befe531330c37dfa8b64db7339c8ac35b8cb674d741a414d90d77ef",
"0xb4319281a05f00968367a349e626ec63227412193c30aaface14ac6f7a480024",
"0xf6dce891724a49024535047bee44f889d5228066a931a8f40c18c7efed60f31b",
"0xccdeb308e3402c7dc72588871c23fb4f6fdc60e2a63230f628c5a2951878535b",
"0x90958033b917eea42ffb8bf077ccab8b19c2448113e8a128815ab3441846404b",
"0xed3436553039948094f33cffaff91a7e2b1eab86b638de5c8eb09538c8df0a8e",
"0x0910ba3093e37abaf70ffc0ef6a3b9b82e8c5f7ae497eec14b753f23d0a25343",
"0xdc78d9fceec3add33b92e1a0ee402b80cd87946e0def46f284d26eac46be6bcd",
"0x1d71a82b8d4b89f73747d8073a36acf8496757d35bb452e9cfd74d758774f492",
"0x5399612937722b6fbcb90c1768a92bb35acd4da6168814c1d9deb6156cf35134",
"0xc7156ee3591ab91824b2d4868d28baf11855a63a801b3793f99fdde7867eac7f",
"0xd25b288f3333ea8e33b33f42edf7b64976281e4840599aa49b07da39f895073a",
"0xc6596a87c544e84cfaa2baaa4fb0a93f3fa9d1e6938426ea05a418568a1589b0",
"0xdbf30aa7f1cb3a6d7661934d7ea2ec0665843ea1ec9b05fddd77fdb33fa7d834",
"0xfac2628cb5d70c177e8defbf62b0448297ecb13962638b9a26c5d05203cb2937",
"0xe8c5a65edee74d59e7b9e2fb4779485958d95303fff95bfebc0c1f4bf24ef9bf",
"0x67d21b95f40ca7c991f37981393dcec3218e70d5402d82ecbfcb21ae532872bb",
"0xba19c10601ed47bffb755f791325c50af9229d5c25294f315116fdb3640ca35a",
"0xf4c026f65e31d4606541c4e679d30d7336cfeeb81e9893465eb08c310792c061",
"0x5425ff602e06947bda22cfb687ace85c10896b5d48de144149cba26bac9a35ae",
"0xe11f82e350c209213da7d977e2de5199559ad3793a24a226cb4be367c0aa3c4a",
"0x13fb0194f473a4af296184d3293005dffcd28f1cf4b986bdaacede4a0a096fa6",
"0x3dab80c9fc23caf409e38791f07e107da3b5d0d9d4a64f87164e6d455660591b",
"0x5901803b234ef0445acfcd884baf0777970d3b600b80ab137dc1015a5b790885",
"0x3735e2d0ac017cd2c0570830615ed9eb642183fee5b2c2155e5fefc1d4de2561",
"0xea9030afc249a1e4f6c86f4b19660ea63783824e8d1c3de30e38802f1aea9d12",
"0x9def3e76927de026a1a76f39c242263ffbf94b3ef42fb6e98626b67ec3d8a308",
"0x504886149f29d5eb02412202f04b45f23cc97a82206bdf237c433e83c52e386b",
"0xc84d3b9e84727c2d9279dfebcdf8104762e3b101dd9f39c9482a456f87bcf976",
"0xf4737cd864b198c8049ece5a4f18eea18b48a649997c42504b560188adfb6f2b",
"0x410894a208ff99bd3ec5f925aa7ff6932c4404390df9e337601fa7b7251d09ff",
"0xe6e7e7e2b54bcbe37eb7066b9a3c9b01b632d228c8ed5ef8f1871d9b7235a55b",
"0x8af02fe4657f956a9ede1a9b0a83d67367b43a8f5efdf6bd753d09f428b58960",
"0xdd056794fcb6f5b694934b17ef35577dbec8d04159fe447e0eba02e4ff4e9d96",
"0xef94a3289e9e197b04c17886373adbe66a451efe052fb419846006b5f659b91a",
"0x5558e5612dc3c222d76e7190409fc5d38479e08f6fdf4b7df299e6da968c2811",
"0x5f38cc5046a503d424c4b63cb060956babe0c1de050517d0067eccfa23cb664d",
"0x4c6a04827f46e51378e9ff3b811c878dead62da3ae20ef7d392c57512a09a719",
"0x68fd9806bea1d937063778b245a6788cd992e4fc614d5bce1ffe06e125f67701",
"0x0751a79f86c4220e81bbf6b4692976ed4c315f9b62937b23083f4db2370f04eb",
"0x5651840a2065a3468d97bf824e78455eb5e22b82b45fde7bb69b297c4c40e853",
"0x959d277feeb492ed3393fcd5774f1dde27c6018b07cc4fc93b937ee266ae12ea",
"0xa145f2ed353350a89de97e83433188adbd14107c16c654c131203442c34f2899",
"0xf888fc918eab6f88a5f6c6545d47a32dd558f10d00e86a9949cb3f144b7264e2",
"0x2fef7f5f1c149c87602cbe38766631abc992efaee26dd5b59e5bc361f6901cf4",
"0x5e0735a9be3eb97e790a93502a6f138a8863b0f0c20905bee0ac14d5aff4206a",
"0x9c40cbdab8e769c2c6f9de57c23621c5d8a25adaedd1ba9df17fa1680cd4e63f",
"0x995e6a00f5bc63f0a5d837a6671260771f12995b82da19c72a9546ae7b93f38f",
"0x10f647c754615b475543fca5ed9a4491503867fae619513dad966af63cb7ed2b",
"0x7e45b18bd06e946d20bc3807aa926aa5c0a10555b9dc18f9574136969b8f48d5",
"0x8a419fcb515eef5a139a37241872c095818fd276f77e6388c74b82ae49e6386f",
"0x2cb2d82e278d8ba47830ed3dbd6edf69d2e49721182488ffe0a01698071e7d40",
"0x1e9e5cb7a68e7a2decef900bd250bd31c1f024ddd240799327263c72940c7e59",
"0x3904821c4388b16e9851e3a7318f4177102da38525cd3e5e5708430a243470a1",
"0x7785b9d9ff6dfa45d945aaa393270bf8b62bbde46e941aa068606c173963e518",
"0xbff782fbbadfc752f5c2a239e49ab13200fc35cfba29a5332fe936a0367d336c",
"0x294a3fd73f102c7017c9208857dfb91ec7c323916dd1bb3c9f1fac2c7d952b4b",
"0x3a9fe3685814b6cff7a918e514a9d5375a1fc8268a48d5b78da93dfda115332f",
"0xc033a1836983d485e28e5a0825c953a76d4c7f4a8ec5f7eb0b11baf5ae2beb79",
"0xb088df91127cad5d72f8db96d7ea86dfdef55374bf982222a45df7d2b631ceb6"
]
},
"nodes": [
"enode://6332792c4a00e3e4ee0926ed89e0d27ef985424d97b6a45bf0f23e51f0dcb5e66b875777506458aea7af6f9e4ffb69f43f3778ee73c81ed9d34c51c4b16b0b0f@52.232.243.152:30303",
"enode://94c15d1b9e2fe7ce56e458b9a3b672ef11894ddedd0c6f247e0f1d3487f52b66208fb4aeb8179fce6e3a749ea93ed147c37976d67af557508d199d9594c35f09@192.81.208.223:30303",
"enode://30b7ab30a01c124a6cceca36863ece12c4f5fa68e3ba9b0b51407ccc002eeed3b3102d20a88f1c1d3c3154e2449317b8ef95090e77b312d5cc39354f86d5d606@52.176.7.10:30303",
"enode://865a63255b3bb68023b6bffd5095118fcc13e79dcf014fe4e47e065c350c7cc72af2e53eff895f11ba1bbb6a2b33271c1116ee870f266618eadfc2e78aa7349c@52.176.100.77:30303"
"enode://865a63255b3bb68023b6bffd5095118fcc13e79dcf014fe4e47e065c350c7cc72af2e53eff895f11ba1bbb6a2b33271c1116ee870f266618eadfc2e78aa7349c@52.176.100.77:30303",
"enode://691907d5a7dee24884b791e799183e5db01f4fe0b6e9b795ffaf5cf85a3023a637f2abadc82fc0da168405092df869126377c5f190794cd2d1c067245ae2b1ce@13.125.237.43:30303"
],
"accounts": {
"0000000000000000000000000000000000000000": { "balance": "1" },

View File

@@ -0,0 +1,468 @@
{ "block":
[
{
"reference": "9590",
"failing": "stCreateTest",
"subtests": [
"CreateOOGafterInitCodeReturndata2_d0g1v0_Constantinople"
]
},
{
"reference": "9590",
"failing": "stCreate2",
"subtests": [
"RevertDepthCreateAddressCollision_d0g1v0_Constantinople",
"RevertDepthCreateAddressCollision_d1g1v1_Constantinople",
"CREATE2_Suicide_d5g0v0_Constantinople",
"CREATE2_Suicide_d7g0v0_Constantinople",
"create2collisionSelfdestructedOOG_d2g0v0_Byzantium",
"create2collisionSelfdestructedOOG_d2g0v0_Constantinople",
"create2collisionNonce_d1g0v0_Byzantium",
"create2collisionNonce_d1g0v0_Constantinople",
"CreateMessageRevertedOOGInInit_d0g1v0_Constantinople",
"create2callPrecompiles_d3g0v0_Constantinople",
"create2collisionCode_d1g0v0_Byzantium",
"create2collisionCode_d1g0v0_Constantinople",
"create2collisionStorage_d0g0v0_Byzantium",
"create2collisionStorage_d0g0v0_Constantinople",
"create2callPrecompiles_d4g0v0_Constantinople",
"create2collisionSelfdestructedRevert_d0g0v0_Byzantium",
"create2collisionSelfdestructedRevert_d0g0v0_Constantinople",
"CreateMessageReverted_d0g1v0_Constantinople",
"RevertOpcodeCreate_d0g1v0_Constantinople",
"CREATE2_Suicide_d11g0v0_Constantinople",
"create2checkFieldsInInitcode_d5g0v0_Constantinople",
"create2collisionSelfdestructedOOG_d1g0v0_Byzantium",
"create2collisionSelfdestructedOOG_d1g0v0_Constantinople",
"returndatacopy_following_create_d1g0v0_Constantinople",
"RevertDepthCreate2OOG_d1g1v1_Constantinople",
"create2collisionSelfdestructed_d2g0v0_Byzantium",
"create2collisionSelfdestructed_d2g0v0_Constantinople",
"create2callPrecompiles_d2g0v0_Constantinople",
"create2InitCodes_d2g0v0_Constantinople",
"create2collisionNonce_d2g0v0_Byzantium",
"create2collisionNonce_d2g0v0_Constantinople",
"create2collisionCode_d0g0v0_Byzantium",
"create2collisionCode_d0g0v0_Constantinople",
"CREATE2_Bounds_d0g0v0_Constantinople",
"RevertDepthCreate2OOG_d0g0v0_Constantinople",
"CREATE2_Suicide_d1g0v0_Constantinople",
"CREATE2_Bounds3_d0g1v0_Constantinople",
"create2collisionStorage_d2g0v0_Byzantium",
"create2collisionStorage_d2g0v0_Constantinople",
"RevertDepthCreateAddressCollision_d0g0v1_Constantinople",
"create2callPrecompiles_d5g0v0_Constantinople",
"create2collisionCode2_d0g0v0_Byzantium",
"create2collisionCode2_d0g0v0_Constantinople",
"create2noCash_d0g0v0_Byzantium",
"create2noCash_d0g0v0_Constantinople",
"create2checkFieldsInInitcode_d7g0v0_Constantinople",
"create2SmartInitCode_d1g0v0_Constantinople",
"create2InitCodes_d6g0v0_Constantinople",
"create2noCash_d1g0v0_Byzantium",
"create2noCash_d1g0v0_Constantinople",
"CREATE2_ContractSuicideDuringInit_ThenStoreThenReturn_d0g0v0_Constantinople",
"RevertOpcodeInCreateReturns_d0g0v0_Constantinople",
"create2collisionStorage_d1g0v0_Byzantium",
"create2collisionStorage_d1g0v0_Constantinople",
"create2checkFieldsInInitcode_d3g0v0_Constantinople",
"create2collisionBalance_d0g0v0_Byzantium",
"create2collisionBalance_d0g0v0_Constantinople",
"create2collisionSelfdestructed2_d0g0v0_Constantinople",
"create2InitCodes_d3g0v0_Constantinople",
"create2collisionCode2_d1g0v0_Byzantium",
"create2collisionCode2_d1g0v0_Constantinople",
"create2checkFieldsInInitcode_d1g0v0_Constantinople",
"create2collisionBalance_d1g0v0_Byzantium",
"create2collisionBalance_d1g0v0_Constantinople",
"CREATE2_Bounds3_d0g2v0_Constantinople",
"create2callPrecompiles_d6g0v0_Constantinople",
"Create2Recursive_d0g0v0_Constantinople",
"create2collisionSelfdestructedOOG_d0g0v0_Byzantium",
"create2collisionSelfdestructedOOG_d0g0v0_Constantinople",
"CREATE2_Suicide_d3g0v0_Constantinople",
"returndatacopy_following_create_d0g0v0_Constantinople",
"create2InitCodes_d8g0v0_Constantinople",
"RevertDepthCreate2OOG_d0g0v1_Constantinople",
"create2checkFieldsInInitcode_d2g0v0_Constantinople",
"RevertDepthCreate2OOG_d1g0v1_Constantinople",
"Create2OnDepth1024_d0g0v0_Constantinople",
"create2collisionSelfdestructed2_d1g0v0_Constantinople",
"create2collisionSelfdestructedRevert_d2g0v0_Byzantium",
"create2collisionSelfdestructedRevert_d2g0v0_Constantinople",
"create2callPrecompiles_d0g0v0_Constantinople",
"RevertDepthCreateAddressCollision_d0g1v1_Constantinople",
"create2collisionSelfdestructed_d1g0v0_Byzantium",
"create2collisionSelfdestructed_d1g0v0_Constantinople",
"call_outsize_then_create2_successful_then_returndatasize_d0g0v0_Byzantium",
"call_outsize_then_create2_successful_then_returndatasize_d0g0v0_Constantinople",
"Create2OOGafterInitCodeRevert_d0g0v0_Constantinople",
"Create2OOGafterInitCodeReturndata3_d0g0v0_Constantinople",
"Create2OOGafterInitCodeReturndataSize_d0g0v0_Constantinople",
"create2InitCodes_d7g0v0_Constantinople",
"CREATE2_Suicide_d10g0v0_Constantinople",
"RevertDepthCreate2OOG_d0g1v0_Constantinople",
"create2InitCodes_d5g0v0_Constantinople",
"create2collisionSelfdestructedRevert_d1g0v0_Byzantium",
"create2collisionSelfdestructedRevert_d1g0v0_Constantinople",
"RevertDepthCreate2OOG_d1g1v0_Constantinople",
"create2collisionSelfdestructed_d0g0v0_Byzantium",
"create2collisionSelfdestructed_d0g0v0_Constantinople",
"create2noCash_d2g0v0_Byzantium",
"create2noCash_d2g0v0_Constantinople",
"CREATE2_Bounds3_d0g0v0_Constantinople",
"create2collisionNonce_d0g0v0_Byzantium",
"create2collisionNonce_d0g0v0_Constantinople",
"CREATE2_Suicide_d2g0v0_Constantinople",
"Create2OOGafterInitCode_d0g0v0_Constantinople",
"call_then_create2_successful_then_returndatasize_d0g0v0_Byzantium",
"call_then_create2_successful_then_returndatasize_d0g0v0_Constantinople",
"create2collisionBalance_d2g0v0_Byzantium",
"create2collisionBalance_d2g0v0_Constantinople",
"create2checkFieldsInInitcode_d6g0v0_Constantinople",
"RevertDepthCreate2OOG_d0g1v1_Constantinople",
"returndatacopy_afterFailing_create_d0g0v0_Constantinople",
"returndatacopy_following_revert_in_create_d0g0v0_Constantinople",
"CREATE2_Suicide_d9g0v0_Constantinople",
"create2callPrecompiles_d7g0v0_Constantinople",
"RevertDepthCreateAddressCollision_d1g0v1_Constantinople",
"create2InitCodes_d1g0v0_Constantinople",
"CREATE2_Bounds_d0g1v0_Constantinople",
"Create2OOGafterInitCodeReturndata_d0g0v0_Constantinople",
"create2checkFieldsInInitcode_d4g0v0_Constantinople",
"CreateMessageRevertedOOGInInit_d0g0v0_Constantinople",
"RevertDepthCreateAddressCollision_d1g1v0_Constantinople",
"returndatacopy_following_successful_create_d0g0v0_Constantinople",
"create2checkFieldsInInitcode_d0g0v0_Constantinople",
"CreateMessageReverted_d0g0v0_Constantinople",
"create2SmartInitCode_d0g0v0_Constantinople",
"CREATE2_Bounds2_d0g0v0_Constantinople",
"returndatasize_following_successful_create_d0g0v0_Constantinople",
"CREATE2_Bounds2_d0g1v0_Constantinople",
"returndatacopy_0_0_following_successful_create_d0g0v0_Constantinople",
"RevertDepthCreateAddressCollision_d0g0v0_Constantinople",
"CREATE2_Suicide_d0g0v0_Constantinople",
"create2InitCodes_d0g0v0_Constantinople",
"Create2OnDepth1023_d0g0v0_Constantinople",
"create2InitCodes_d4g0v0_Constantinople",
"Create2OOGafterInitCodeReturndata2_d0g0v0_Constantinople",
"create2collisionBalance_d3g0v0_Byzantium",
"create2collisionBalance_d3g0v0_Constantinople",
"CREATE2_Suicide_d4g0v0_Constantinople",
"Create2OOGafterInitCode_d0g1v0_Constantinople",
"RevertDepthCreateAddressCollision_d1g0v0_Constantinople",
"Create2OOGafterInitCodeRevert2_d0g0v0_Constantinople",
"Create2OOGafterInitCodeReturndata_d0g1v0_Constantinople",
"Create2Recursive_d0g1v0_Constantinople",
"create2collisionCode_d2g0v0_Byzantium",
"create2collisionCode_d2g0v0_Constantinople",
"CREATE2_Suicide_d6g0v0_Constantinople",
"CREATE2_Suicide_d8g0v0_Constantinople",
"RevertOpcodeCreate_d0g0v0_Constantinople",
"Create2OOGafterInitCodeReturndata2_d0g1v0_Constantinople",
"create2callPrecompiles_d1g0v0_Constantinople",
"RevertInCreateInInit_d0g0v0_Constantinople",
"RevertDepthCreate2OOG_d1g0v0_Constantinople"
]
},
{
"reference": "9590",
"failing": "bcStateTest",
"subtests": [
"suicideStorageCheck_Byzantium",
"suicideStorageCheck_Constantinople",
"suicideStorageCheckVCreate2_Byzantium",
"suicideStorageCheckVCreate2_Constantinople",
"create2collisionwithSelfdestructSameBlock_Constantinople",
"blockhashNonConstArg_Constantinople",
"suicideThenCheckBalance_Constantinople",
"suicideThenCheckBalance_Homestead",
"suicideStorageCheckVCreate_Byzantium",
"suicideStorageCheckVCreate_Constantinople"
]
},
{
"reference": "9590",
"failing": "stEIP158Specific",
"subtests": [
"callToEmptyThenCallError_d0g0v0_Byzantium",
"callToEmptyThenCallError_d0g0v0_Constantinople",
"callToEmptyThenCallError_d0g0v0_EIP158"
]
},
{
"reference": "9590",
"failing": "stPreCompiledContracts",
"subtests": [
"identity_to_smaller_d0g0v0_Constantinople",
"identity_to_bigger_d0g0v0_Constantinople"
]
},
{
"reference": "9590",
"failing": "stReturnDataTest",
"subtests": [
"modexp_modsize0_returndatasize_d0g1v0_Constantinople",
"modexp_modsize0_returndatasize_d0g2v0_Constantinople",
"modexp_modsize0_returndatasize_d0g3v0_Constantinople"
]
},
{
"reference": "9590",
"failing": "stSpecialTest",
"subtests": [
"push32withoutByte_d0g0v0_Constantinople"
]
}
],
"state":
[
{
"reference": "9590",
"failing": "stCreateTest",
"subtests": {
"CreateOOGafterInitCodeReturndata2": {
"subnumbers": ["2"],
"chain": "Constantinople (test)"
}
}
},
{
"reference": "9590",
"failing": "stCreate2Test",
"subtests": {
"RevertInCreateInInit": {
"subnumbers": ["1"],
"chain": "Constantinople (test)"
}
}
},
{
"reference": "9590",
"failing": "stEIP150Specific",
"subtests": {
"NewGasPriceForCodes": {
"subnumbers": ["1"],
"chain": "Constantinople (test)"
}
}
},
{
"reference": "9590",
"failing": "stInitCodeTest",
"subtests": {
"OutOfGasContractCreation": {
"subnumbers": ["4"],
"chain": "Constantinople (test)"
}
}
},
{
"reference": "9590",
"failing": "stPreCompiledContracts",
"subtests": {
"modexp": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
}
}
},
{
"reference": "9590",
"failing": "stRevertTest",
"subtests": {
"LoopCallsDepthThenRevert3": {
"subnumbers": ["1"],
"chain": "Constantinople (test)"
},
"RevertOpcodeCreate": {
"subnumbers": ["1"],
"chain": "Constantinople (test)"
},
"RevertSubCallStorageOOG2": {
"subnumbers": ["1","3"],
"chain": "Constantinople (test)"
},
"RevertDepthCreateOOG": {
"subnumbers": ["3","4"],
"chain": "Constantinople (test)"
},
"RevertOpcodeMultipleSubCalls": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"RevertOpcodeDirectCall": {
"subnumbers": ["1","2"],
"chain": "Constantinople (test)"
},
"LoopCallsDepthThenRevert2": {
"subnumbers": ["1"],
"chain": "Constantinople (test)"
},
"RevertDepth2": {
"subnumbers": ["1"],
"chain": "Constantinople (test)"
},
"RevertRemoteSubCallStorageOOG2": {
"subnumbers": ["1","2"],
"chain": "Constantinople (test)"
},
"RevertDepthCreateAddressCollision": {
"subnumbers": ["3","4"],
"chain": "Constantinople (test)"
}
}
},
{
"reference": "9590",
"failing": "stStaticCall",
"subtests": {
"static_RevertDepth2": {
"subnumbers": ["1","3"],
"chain": "Constantinople (test)"
},
"static_CheckOpcodes4": {
"subnumbers": ["3"],
"chain": "Constantinople (test)"
},
"static_CheckOpcodes3": {
"subnumbers": ["5","6","7","8"],
"chain": "Constantinople (test)"
},
"static_callBasic": {
"subnumbers": ["1","2"],
"chain": "Constantinople (test)"
},
"static_CheckOpcodes2": {
"subnumbers": ["5","6","7","8"],
"chain": "Constantinople (test)"
},
"static_callCreate": {
"subnumbers": ["2"],
"chain": "Constantinople (test)"
}
}
},
{
"reference": "https://github.com/ethereum/tests/issues/512",
"failing": "stZeroKnowledge",
"subtests": {
"pointAddTrunc": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"pointAdd": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"pointMulAdd": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"pointMulAdd2": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
}
}
},
{
"reference": "9590",
"failing": "stCreate2Test",
"subtests": {
"call_then_create2_successful_then_returndatasize": {
"subnumbers": ["1"],
"chain": "Constantinople (test)"
},
"returndatacopy_afterFailing_create": {
"subnumbers": ["1"],
"chain": "Constantinople (test)"
},
"create2checkFieldsInInitcode": {
"subnumbers": ["1","2","3","5","6","7"],
"chain": "Constantinople (test)"
},
"Create2Recursive": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"create2collisionBalance": {
"subnumbers": ["2","3"],
"chain": "Constantinople (test)"
},
"create2InitCodes": {
"subnumbers": ["1","5","6","7","8","9"],
"chain": "Constantinople (test)"
},
"Create2OOGafterInitCode": {
"subnumbers": ["2"],
"chain": "Constantinople (test)"
},
"CreateMessageRevertedOOGInInit": {
"subnumbers": ["2"],
"chain": "Constantinople (test)"
},
"returndatacopy_following_revert_in_create": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"create2collisionSelfdestructed": {
"subnumbers": ["2"],
"chain": "Constantinople (test)"
},
"returndatacopy_0_0_following_successful_create": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"Create2OnDepth1023": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"Create2OOGafterInitCodeReturndata2": {
"subnumbers": ["2"],
"chain": "Constantinople (test)"
},
"RevertOpcodeInCreateReturns": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"CREATE2_ContractSuicideDuringInit_ThenStoreThenReturn": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"returndatasize_following_successful_create": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"call_outsize_then_create2_successful_then_returndatasize": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"CreateMessageReverted": {
"subnumbers": ["2"],
"chain": "Constantinople (test)"
},
"CREATE2_Suicide": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"Create2OOGafterInitCodeRevert": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"Create2OnDepth1024": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
},
"create2collisionStorage": {
"subnumbers": ["2","3"],
"chain": "Constantinople (test)"
},
"create2callPrecompiles": {
"subnumbers": ["*"],
"chain": "Constantinople (test)"
}
}
}
]
}
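The file added above is a skip list for the consensus-test runner: entries under "block" name whole blockchain-test subtests to skip, entries under "state" map a state-test name to the specific subnumbers and test chain to skip ("*" meaning all), and "reference" carries a tracking reference (an issue number or URL). A rough sketch of how this shape could be deserialized — the type names are illustrative, not the actual ethcore code, and it assumes serde (with the derive feature) and serde_json as dependencies:

use std::collections::BTreeMap;
use serde::Deserialize;

#[derive(Deserialize)]
struct SkipFile {
    block: Vec<BlockSkip>, // e.g. stCreateTest, stCreate2, bcStateTest ...
    state: Vec<StateSkip>, // e.g. stRevertTest, stStaticCall, stZeroKnowledge ...
}

#[derive(Deserialize)]
struct BlockSkip {
    reference: String,     // tracking issue number or URL
    failing: String,       // name of the failing suite
    subtests: Vec<String>, // whole subtests skipped by name
}

#[derive(Deserialize)]
struct StateSkip {
    reference: String,
    failing: String,
    subtests: BTreeMap<String, StateSubtest>, // test name -> what to skip
}

#[derive(Deserialize)]
struct StateSubtest {
    subnumbers: Vec<String>, // "*" means every subnumber
    chain: String,           // e.g. "Constantinople (test)"
}

fn main() {
    let raw = r#"{ "block": [], "state": [] }"#;
    let parsed: SkipFile = serde_json::from_str(raw).expect("valid skip file");
    assert!(parsed.block.is_empty() && parsed.state.is_empty());
}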

View File

@@ -58,7 +58,7 @@ impl Default for Factory {
impl Factory {
/// Create a read-only accountdb.
/// This will panic when write operations are called.
pub fn readonly<'db>(&self, db: &'db HashDB<KeccakHasher>, address_hash: H256) -> Box<HashDB<KeccakHasher> + 'db> {
pub fn readonly<'db>(&self, db: &'db HashDB<KeccakHasher, DBValue>, address_hash: H256) -> Box<HashDB<KeccakHasher, DBValue> + 'db> {
match *self {
Factory::Mangled => Box::new(AccountDB::from_hash(db, address_hash)),
Factory::Plain => Box::new(Wrapping(db)),
@@ -66,7 +66,7 @@ impl Factory {
}
/// Create a new mutable hashdb.
pub fn create<'db>(&self, db: &'db mut HashDB<KeccakHasher>, address_hash: H256) -> Box<HashDB<KeccakHasher> + 'db> {
pub fn create<'db>(&self, db: &'db mut HashDB<KeccakHasher, DBValue>, address_hash: H256) -> Box<HashDB<KeccakHasher, DBValue> + 'db> {
match *self {
Factory::Mangled => Box::new(AccountDBMut::from_hash(db, address_hash)),
Factory::Plain => Box::new(WrappingMut(db)),
@@ -78,19 +78,19 @@ impl Factory {
/// DB backend wrapper for Account trie
/// Transforms trie node keys for the database
pub struct AccountDB<'db> {
db: &'db HashDB<KeccakHasher>,
db: &'db HashDB<KeccakHasher, DBValue>,
address_hash: H256,
}
impl<'db> AccountDB<'db> {
/// Create a new AccountDB from an address.
#[cfg(test)]
pub fn new(db: &'db HashDB<KeccakHasher>, address: &Address) -> Self {
pub fn new(db: &'db HashDB<KeccakHasher, DBValue>, address: &Address) -> Self {
Self::from_hash(db, keccak(address))
}
/// Create a new AcountDB from an address' hash.
pub fn from_hash(db: &'db HashDB<KeccakHasher>, address_hash: H256) -> Self {
pub fn from_hash(db: &'db HashDB<KeccakHasher, DBValue>, address_hash: H256) -> Self {
AccountDB {
db: db,
address_hash: address_hash,
@@ -98,12 +98,12 @@ impl<'db> AccountDB<'db> {
}
}
impl<'db> AsHashDB<KeccakHasher> for AccountDB<'db> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher> { self }
impl<'db> AsHashDB<KeccakHasher, DBValue> for AccountDB<'db> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
}
impl<'db> HashDB<KeccakHasher> for AccountDB<'db> {
impl<'db> HashDB<KeccakHasher, DBValue> for AccountDB<'db> {
fn keys(&self) -> HashMap<H256, i32> {
unimplemented!()
}
@@ -137,19 +137,19 @@ impl<'db> HashDB<KeccakHasher> for AccountDB<'db> {
/// DB backend wrapper for Account trie
pub struct AccountDBMut<'db> {
db: &'db mut HashDB<KeccakHasher>,
db: &'db mut HashDB<KeccakHasher, DBValue>,
address_hash: H256,
}
impl<'db> AccountDBMut<'db> {
/// Create a new AccountDB from an address.
#[cfg(test)]
pub fn new(db: &'db mut HashDB<KeccakHasher>, address: &Address) -> Self {
pub fn new(db: &'db mut HashDB<KeccakHasher, DBValue>, address: &Address) -> Self {
Self::from_hash(db, keccak(address))
}
/// Create a new AcountDB from an address' hash.
pub fn from_hash(db: &'db mut HashDB<KeccakHasher>, address_hash: H256) -> Self {
pub fn from_hash(db: &'db mut HashDB<KeccakHasher, DBValue>, address_hash: H256) -> Self {
AccountDBMut {
db: db,
address_hash: address_hash,
@@ -162,7 +162,7 @@ impl<'db> AccountDBMut<'db> {
}
}
impl<'db> HashDB<KeccakHasher> for AccountDBMut<'db>{
impl<'db> HashDB<KeccakHasher, DBValue> for AccountDBMut<'db>{
fn keys(&self) -> HashMap<H256, i32> {
unimplemented!()
}
@@ -208,19 +208,19 @@ impl<'db> HashDB<KeccakHasher> for AccountDBMut<'db>{
}
}
impl<'db> AsHashDB<KeccakHasher> for AccountDBMut<'db> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher> { self }
impl<'db> AsHashDB<KeccakHasher, DBValue> for AccountDBMut<'db> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
}
struct Wrapping<'db>(&'db HashDB<KeccakHasher>);
struct Wrapping<'db>(&'db HashDB<KeccakHasher, DBValue>);
impl<'db> AsHashDB<KeccakHasher> for Wrapping<'db> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher> { self }
impl<'db> AsHashDB<KeccakHasher, DBValue> for Wrapping<'db> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
}
impl<'db> HashDB<KeccakHasher> for Wrapping<'db> {
impl<'db> HashDB<KeccakHasher, DBValue> for Wrapping<'db> {
fn keys(&self) -> HashMap<H256, i32> {
unimplemented!()
}
@@ -252,13 +252,13 @@ impl<'db> HashDB<KeccakHasher> for Wrapping<'db> {
}
}
struct WrappingMut<'db>(&'db mut HashDB<KeccakHasher>);
impl<'db> AsHashDB<KeccakHasher> for WrappingMut<'db> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher> { self }
struct WrappingMut<'db>(&'db mut HashDB<KeccakHasher, DBValue>);
impl<'db> AsHashDB<KeccakHasher, DBValue> for WrappingMut<'db> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
}
impl<'db> HashDB<KeccakHasher> for WrappingMut<'db>{
impl<'db> HashDB<KeccakHasher, DBValue> for WrappingMut<'db>{
fn keys(&self) -> HashMap<H256, i32> {
unimplemented!()
}
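Every change in the accountdb.rs diff above is the same mechanical substitution: after the parity-common dependency update, HashDB and AsHashDB take a value-type parameter alongside the hasher, so HashDB<KeccakHasher> becomes HashDB<KeccakHasher, DBValue> with no behavioural change. A minimal self-contained sketch of that generic shape (simplified stand-in trait and types, not the real hashdb crate API):

use std::collections::HashMap;

type H256 = [u8; 32];
type DBValue = Vec<u8>;

struct KeccakHasher; // marker type only, for this sketch

// After the update the value type T is a second trait parameter,
// instead of being fixed inside the trait.
trait HashDB<H, T> {
    fn get(&self, key: &H256) -> Option<&T>;
    fn insert(&mut self, key: H256, value: T);
}

// A wrapper such as AccountDB keeps its logic; only its bounds change.
struct MemDB {
    data: HashMap<H256, DBValue>,
}

impl HashDB<KeccakHasher, DBValue> for MemDB {
    fn get(&self, key: &H256) -> Option<&DBValue> {
        self.data.get(key)
    }
    fn insert(&mut self, key: H256, value: DBValue) {
        self.data.insert(key, value);
    }
}

fn main() {
    let mut db = MemDB { data: HashMap::new() };
    db.insert([0u8; 32], b"trie node".to_vec());
    assert!(db.get(&[0u8; 32]).is_some());
}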

View File

@@ -160,11 +160,6 @@ pub trait BlockProvider {
.and_then(|n| body.view().localized_transaction_at(&address.block_hash, n, address.index)))
}
/// Get transaction receipt.
fn transaction_receipt(&self, address: &TransactionAddress) -> Option<Receipt> {
self.block_receipts(&address.block_hash).and_then(|br| br.receipts.into_iter().nth(address.index))
}
/// Get a list of transactions for a given block.
/// Returns None if block does not exist.
fn transactions(&self, hash: &H256) -> Option<Vec<LocalizedTransaction>> {

View File

@@ -51,7 +51,7 @@ impl Block {
#[inline]
pub fn encoded(&self) -> encoded::Block {
encoded::Block::new(encode(self).into_vec())
encoded::Block::new(encode(self))
}
#[inline]

View File

@@ -52,7 +52,7 @@ use encoded;
use engines::{EthEngine, EpochTransition, ForkChoice};
use error::{
ImportErrorKind, BlockImportErrorKind, ExecutionError, CallError, BlockError, ImportResult,
QueueError, QueueErrorKind, Error as EthcoreError
QueueError, QueueErrorKind, Error as EthcoreError, EthcoreResult,
};
use vm::{EnvInfo, LastHashes};
use evm::Schedule;
@ -296,7 +296,6 @@ impl Importer {
continue;
}
let raw = block.bytes.clone();
match self.check_and_lock_block(block, client) {
Ok(closed_block) => {
if self.engine.is_proposal(&header) {
@ -314,8 +313,8 @@ impl Importer {
}
},
Err(err) => {
self.bad_blocks.report(raw, format!("{:?}", err));
invalid_blocks.insert(header.hash());
self.bad_blocks.report(bytes, format!("{:?}", err));
invalid_blocks.insert(hash);
},
}
}
@ -356,7 +355,7 @@ impl Importer {
imported
}
fn check_and_lock_block(&self, block: PreverifiedBlock, client: &Client) -> Result<LockedBlock, ()> {
fn check_and_lock_block(&self, block: PreverifiedBlock, client: &Client) -> EthcoreResult<LockedBlock> {
let engine = &*self.engine;
let header = block.header.clone();
@ -364,7 +363,7 @@ impl Importer {
let best_block_number = client.chain.read().best_block_number();
if client.pruning_info().earliest_state > header.number() {
warn!(target: "client", "Block import failed for #{} ({})\nBlock is ancient (current best block: #{}).", header.number(), header.hash(), best_block_number);
return Err(());
bail!("Block is ancient");
}
// Check if parent is in chain
@ -372,7 +371,7 @@ impl Importer {
Some(h) => h,
None => {
warn!(target: "client", "Block import failed for #{} ({}): Parent not found ({}) ", header.number(), header.hash(), header.parent_hash());
return Err(());
bail!("Parent not found");
}
};
@ -391,13 +390,13 @@ impl Importer {
if let Err(e) = verify_family_result {
warn!(target: "client", "Stage 3 block verification failed for #{} ({})\nError: {:?}", header.number(), header.hash(), e);
return Err(());
bail!(e);
};
let verify_external_result = self.verifier.verify_block_external(&header, engine);
if let Err(e) = verify_external_result {
warn!(target: "client", "Stage 4 block verification failed for #{} ({})\nError: {:?}", header.number(), header.hash(), e);
return Err(());
bail!(e);
};
// Enact Verified Block
@ -417,9 +416,13 @@ impl Importer {
&mut chain.ancestry_with_metadata_iter(*header.parent_hash()),
);
let mut locked_block = enact_result.map_err(|e| {
warn!(target: "client", "Block import failed for #{} ({})\nError: {:?}", header.number(), header.hash(), e);
})?;
let mut locked_block = match enact_result {
Ok(b) => b,
Err(e) => {
warn!(target: "client", "Block import failed for #{} ({})\nError: {:?}", header.number(), header.hash(), e);
bail!(e);
}
};
// Strip receipts for blocks before validate_receipts_transition,
// if the expected receipts root header does not match.
@ -433,7 +436,7 @@ impl Importer {
// Final Verification
if let Err(e) = self.verifier.verify_block_final(&header, locked_block.block().header()) {
warn!(target: "client", "Stage 5 block verification failed for #{} ({})\nError: {:?}", header.number(), header.hash(), e);
return Err(());
bail!(e);
}
Ok(locked_block)
@ -443,7 +446,7 @@ impl Importer {
///
/// The block is guaranteed to be the next best block in the
/// first block sequence. Does no sealing or transaction validation.
fn import_old_block(&self, unverified: Unverified, receipts_bytes: &[u8], db: &KeyValueDB, chain: &BlockChain) -> Result<(), ::error::Error> {
fn import_old_block(&self, unverified: Unverified, receipts_bytes: &[u8], db: &KeyValueDB, chain: &BlockChain) -> EthcoreResult<()> {
let receipts = ::rlp::decode_list(receipts_bytes);
let _import_lock = self.import_lock.lock();
@ -1153,7 +1156,8 @@ impl Client {
},
};
snapshot::take_snapshot(&*self.engine, &self.chain.read(), start_hash, db.as_hashdb(), writer, p)?;
let processing_threads = self.config.snapshot.processing_threads;
snapshot::take_snapshot(&*self.engine, &self.chain.read(), start_hash, db.as_hashdb(), writer, p, processing_threads)?;
Ok(())
}
@ -1491,7 +1495,7 @@ impl Call for Client {
let sender = t.sender();
let options = || TransactOptions::with_tracing().dont_check_nonce();
let cond = |gas| {
let exec = |gas| {
let mut tx = t.as_unsigned().clone();
tx.gas = gas;
let tx = tx.fake_sign(sender);
@ -1499,22 +1503,28 @@ impl Call for Client {
let mut clone = state.clone();
let machine = self.engine.machine();
let schedule = machine.schedule(env_info.number);
Ok(Executive::new(&mut clone, &env_info, &machine, &schedule)
Executive::new(&mut clone, &env_info, &machine, &schedule)
.transact_virtual(&tx, options())
.ok()
.map(|r| r.exception.is_none())
.unwrap_or(false))
};
if !cond(upper)? {
let cond = |gas| exec(gas).unwrap_or(false);
if !cond(upper) {
upper = max_upper;
if !cond(upper)? {
trace!(target: "estimate_gas", "estimate_gas failed with {}", upper);
let err = ExecutionError::Internal(format!("Requires higher than upper limit of {}", upper));
return Err(err.into())
match exec(upper) {
Some(false) => return Err(CallError::Exceptional),
None => {
trace!(target: "estimate_gas", "estimate_gas failed with {}", upper);
let err = ExecutionError::Internal(format!("Requires higher than upper limit of {}", upper));
return Err(err.into())
},
_ => {},
}
}
let lower = t.gas_required(&self.engine.schedule(env_info.number)).into();
if cond(lower)? {
if cond(lower) {
trace!(target: "estimate_gas", "estimate_gas succeeded with {}", lower);
return Ok(lower)
}
@ -1523,12 +1533,12 @@ impl Call for Client {
/// Returns the lowest value between `lower` and `upper` for which `cond` returns true.
/// We assert: `cond(lower) = false`, `cond(upper) = true`
fn binary_chop<F, E>(mut lower: U256, mut upper: U256, mut cond: F) -> Result<U256, E>
where F: FnMut(U256) -> Result<bool, E>
where F: FnMut(U256) -> bool
{
while upper - lower > 1.into() {
let mid = (lower + upper) / 2;
trace!(target: "estimate_gas", "{} .. {} .. {}", lower, mid, upper);
let c = cond(mid)?;
let c = cond(mid);
match c {
true => upper = mid,
false => lower = mid,
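The loop above is a plain binary chop: given a predicate that is false at `lower` and true at `upper`, it converges on the smallest passing value, which is what the gas estimator feeds it once both bounds are established. A minimal standalone sketch over plain integers (illustrative only; the real function operates on `U256` with a closure built from `exec`):

/// Smallest value in (lower, upper] for which `cond` returns true,
/// assuming cond(lower) == false and cond(upper) == true.
fn binary_chop<F: FnMut(u64) -> bool>(mut lower: u64, mut upper: u64, mut cond: F) -> u64 {
    while upper - lower > 1 {
        let mid = lower + (upper - lower) / 2;
        if cond(mid) { upper = mid } else { lower = mid }
    }
    upper
}

fn main() {
    // e.g. find the least gas at which a (mock) execution stops failing
    let needed = 54_321u64;
    assert_eq!(binary_chop(21_000, 1_000_000, |gas| gas >= needed), needed);
}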
@ -1785,26 +1795,49 @@ impl BlockChainClient for Client {
}
fn transaction_receipt(&self, id: TransactionId) -> Option<LocalizedReceipt> {
// NOTE Don't use block_receipts here for performance reasons
let address = self.transaction_address(id)?;
let hash = address.block_hash;
let chain = self.chain.read();
self.transaction_address(id)
.and_then(|address| chain.block_number(&address.block_hash).and_then(|block_number| {
let transaction = chain.block_body(&address.block_hash)
.and_then(|body| body.view().localized_transaction_at(&address.block_hash, block_number, address.index));
let number = chain.block_number(&hash)?;
let body = chain.block_body(&hash)?;
let mut receipts = chain.block_receipts(&hash)?.receipts;
receipts.truncate(address.index + 1);
let previous_receipts = (0..address.index + 1)
.map(|index| {
let mut address = address.clone();
address.index = index;
chain.transaction_receipt(&address)
})
.collect();
match (transaction, previous_receipts) {
(Some(transaction), Some(previous_receipts)) => {
Some(transaction_receipt(self.engine().machine(), transaction, previous_receipts))
},
_ => None,
}
}))
let transaction = body.view().localized_transaction_at(&hash, number, address.index)?;
let receipt = receipts.pop()?;
let gas_used = receipts.last().map_or_else(|| 0.into(), |r| r.gas_used);
let no_of_logs = receipts.into_iter().map(|receipt| receipt.logs.len()).sum::<usize>();
let receipt = transaction_receipt(self.engine().machine(), transaction, receipt, gas_used, no_of_logs);
Some(receipt)
}
fn block_receipts(&self, id: BlockId) -> Option<Vec<LocalizedReceipt>> {
let hash = self.block_hash(id)?;
let chain = self.chain.read();
let receipts = chain.block_receipts(&hash)?;
let number = chain.block_number(&hash)?;
let body = chain.block_body(&hash)?;
let engine = self.engine.clone();
let mut gas_used = 0.into();
let mut no_of_logs = 0;
Some(body
.view()
.localized_transactions(&hash, number)
.into_iter()
.zip(receipts.receipts)
.map(move |(transaction, receipt)| {
let result = transaction_receipt(engine.machine(), transaction, receipt, gas_used, no_of_logs);
gas_used = result.cumulative_gas_used;
no_of_logs += result.logs.len();
result
})
.collect()
)
}
fn tree_route(&self, from: &H256, to: &H256) -> Option<TreeRoute> {
@ -1823,8 +1856,8 @@ impl BlockChainClient for Client {
self.state_db.read().journal_db().state(hash)
}
fn block_receipts(&self, hash: &H256) -> Option<Bytes> {
self.chain.read().block_receipts(hash).map(|receipts| ::rlp::encode(&receipts).into_vec())
fn encoded_block_receipts(&self, hash: &H256) -> Option<Bytes> {
self.chain.read().block_receipts(hash).map(|receipts| ::rlp::encode(&receipts))
}
fn queue_info(&self) -> BlockQueueInfo {
@ -2239,27 +2272,35 @@ impl ScheduleInfo for Client {
}
impl ImportSealedBlock for Client {
fn import_sealed_block(&self, block: SealedBlock) -> ImportResult {
let h = block.header().hash();
fn import_sealed_block(&self, block: SealedBlock) -> EthcoreResult<H256> {
let start = Instant::now();
let header = block.header().clone();
let route = {
// Do a super duper basic verification to detect potential bugs
if let Err(e) = self.engine.verify_block_basic(&header) {
self.importer.bad_blocks.report(
block.rlp_bytes(),
format!("Detected an issue with locally sealed block: {}", e),
);
return Err(e.into());
}
// scope for self.import_lock
let _import_lock = self.importer.import_lock.lock();
trace_time!("import_sealed_block");
let number = block.header().number();
let block_data = block.rlp_bytes();
let header = block.header().clone();
let route = self.importer.commit_block(block, &header, encoded::Block::new(block_data), self);
trace!(target: "client", "Imported sealed block #{} ({})", number, h);
trace!(target: "client", "Imported sealed block #{} ({})", header.number(), header.hash());
self.state_db.write().sync_cache(&route.enacted, &route.retracted, false);
route
};
let h = header.hash();
let route = ChainRoute::from([route].as_ref());
self.importer.miner.chain_new_blocks(
self,
&[h.clone()],
&[h],
&[],
route.enacted(),
route.retracted(),
@ -2267,10 +2308,10 @@ impl ImportSealedBlock for Client {
);
self.notify(|notify| {
notify.new_blocks(
vec![h.clone()],
vec![h],
vec![],
route.clone(),
vec![h.clone()],
vec![h],
vec![],
start.elapsed(),
);
@ -2352,14 +2393,13 @@ impl ProvingBlockChainClient for Client {
env_info.gas_limit = transaction.gas.clone();
let mut jdb = self.state_db.read().journal_db().boxed_clone();
state::prove_transaction(
state::prove_transaction_virtual(
jdb.as_hashdb_mut(),
header.state_root().clone(),
&transaction,
self.engine.machine(),
&env_info,
self.factories.clone(),
false,
)
}
@ -2378,16 +2418,14 @@ impl Drop for Client {
/// Returns `LocalizedReceipt` given `LocalizedTransaction`
/// and a vector of receipts from given block up to transaction index.
fn transaction_receipt(machine: &::machine::EthereumMachine, mut tx: LocalizedTransaction, mut receipts: Vec<Receipt>) -> LocalizedReceipt {
assert_eq!(receipts.len(), tx.transaction_index + 1, "All previous receipts are provided.");
fn transaction_receipt(
machine: &::machine::EthereumMachine,
mut tx: LocalizedTransaction,
receipt: Receipt,
prior_gas_used: U256,
prior_no_of_logs: usize,
) -> LocalizedReceipt {
let sender = tx.sender();
let receipt = receipts.pop().expect("Current receipt is provided; qed");
let prior_gas_used = match tx.transaction_index {
0 => 0.into(),
i => receipts.get(i - 1).expect("All previous receipts are provided; qed").gas_used,
};
let no_of_logs = receipts.into_iter().map(|receipt| receipt.logs.len()).sum::<usize>();
let transaction_hash = tx.hash();
let block_hash = tx.block_hash;
let block_number = tx.block_number;
@ -2416,7 +2454,7 @@ fn transaction_receipt(machine: &::machine::EthereumMachine, mut tx: LocalizedTr
transaction_hash: transaction_hash,
transaction_index: transaction_index,
transaction_log_index: i,
log_index: no_of_logs + i,
log_index: prior_no_of_logs + i,
}).collect(),
log_bloom: receipt.log_bloom,
outcome: receipt.outcome,
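The reworked helper derives per-transaction figures from cumulative ones: a block's receipts carry a running gas counter, so the localized `gas_used` is the receipt's cumulative gas minus `prior_gas_used`, and each log's block-wide `log_index` is `prior_no_of_logs` plus its position within the transaction. A small sketch of that bookkeeping with plain numbers (the 53_000/106_000 figures match the block-receipts test below; the log counts are made up for illustration):

fn main() {
    // cumulative gas recorded in each receipt of a two-transaction block
    let cumulative = [53_000u64, 106_000];
    // number of logs emitted by each transaction (illustrative)
    let logs_per_tx = [2usize, 3];

    let mut prior_gas = 0u64;
    let mut prior_logs = 0usize;
    for (i, (&cum, &n_logs)) in cumulative.iter().zip(logs_per_tx.iter()).enumerate() {
        let gas_used = cum - prior_gas;       // per-transaction gas
        let first_log_index = prior_logs;     // block-wide index of the tx's first log
        println!("tx {}: gas_used={}, log_index={}..{}", i, gas_used, first_log_index, first_log_index + n_logs);
        prior_gas = cum;
        prior_logs += n_logs;
    }
}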
@ -2464,6 +2502,33 @@ mod tests {
assert!(client.tree_route(&genesis, &new_hash).is_none());
}
#[test]
fn should_return_block_receipts() {
use client::{BlockChainClient, BlockId, TransactionId};
use test_helpers::{generate_dummy_client_with_data};
let client = generate_dummy_client_with_data(2, 2, &[1.into(), 1.into()]);
let receipts = client.block_receipts(BlockId::Latest).unwrap();
assert_eq!(receipts.len(), 2);
assert_eq!(receipts[0].transaction_index, 0);
assert_eq!(receipts[0].block_number, 2);
assert_eq!(receipts[0].cumulative_gas_used, 53_000.into());
assert_eq!(receipts[0].gas_used, 53_000.into());
assert_eq!(receipts[1].transaction_index, 1);
assert_eq!(receipts[1].block_number, 2);
assert_eq!(receipts[1].cumulative_gas_used, 106_000.into());
assert_eq!(receipts[1].gas_used, 53_000.into());
let receipt = client.transaction_receipt(TransactionId::Hash(receipts[0].transaction_hash));
assert_eq!(receipt, Some(receipts[0].clone()));
let receipt = client.transaction_receipt(TransactionId::Hash(receipts[1].transaction_hash));
assert_eq!(receipt, Some(receipts[1].clone()));
}
#[test]
fn should_return_correct_log_index() {
use hash::keccak;
@ -2507,20 +2572,15 @@ mod tests {
topics: vec![],
data: vec![],
}];
let receipts = vec![Receipt {
outcome: TransactionOutcome::StateRoot(state_root),
gas_used: 5.into(),
log_bloom: Default::default(),
logs: vec![logs[0].clone()],
}, Receipt {
let receipt = Receipt {
outcome: TransactionOutcome::StateRoot(state_root),
gas_used: gas_used,
log_bloom: Default::default(),
logs: logs.clone(),
}];
};
// when
let receipt = transaction_receipt(&machine, transaction, receipts);
let receipt = transaction_receipt(&machine, transaction, receipt, 5.into(), 1);
// then
assert_eq!(receipt, LocalizedReceipt {

View File

@ -19,6 +19,7 @@ use std::fmt::{Display, Formatter, Error as FmtError};
use verification::{VerifierType, QueueConfig};
use journaldb;
use snapshot::SnapshotConfiguration;
pub use std::time::Duration;
pub use blockchain::Config as BlockChainConfig;
@ -120,6 +121,8 @@ pub struct ClientConfig {
pub check_seal: bool,
/// Maximal number of transactions queued for verification in a separate thread.
pub transaction_verification_queue_size: usize,
/// Snapshot configuration
pub snapshot: SnapshotConfiguration,
}
impl Default for ClientConfig {
@ -144,6 +147,7 @@ impl Default for ClientConfig {
history_mem: 32 * mb,
check_seal: true,
transaction_verification_queue_size: 8192,
snapshot: Default::default(),
}
}
}


@ -118,7 +118,7 @@ pub struct TestBlockChainClient {
}
/// Used for generating test client blocks.
#[derive(Clone)]
#[derive(Clone, Copy)]
pub enum EachBlockWith {
/// Plain block.
Nothing,
@ -242,69 +242,68 @@ impl TestBlockChainClient {
*self.error_on_logs.write() = val;
}
/// Add blocks to test client.
pub fn add_blocks(&self, count: usize, with: EachBlockWith) {
let len = self.numbers.read().len();
for n in len..(len + count) {
let mut header = BlockHeader::new();
header.set_difficulty(From::from(n));
header.set_parent_hash(self.last_hash.read().clone());
header.set_number(n as BlockNumber);
header.set_gas_limit(U256::from(1_000_000));
header.set_extra_data(self.extra_data.clone());
let uncles = match with {
EachBlockWith::Uncle | EachBlockWith::UncleAndTransaction => {
let mut uncles = RlpStream::new_list(1);
let mut uncle_header = BlockHeader::new();
uncle_header.set_difficulty(From::from(n));
uncle_header.set_parent_hash(self.last_hash.read().clone());
uncle_header.set_number(n as BlockNumber);
uncles.append(&uncle_header);
header.set_uncles_hash(keccak(uncles.as_raw()));
uncles
},
_ => RlpStream::new_list(0)
};
let txs = match with {
EachBlockWith::Transaction | EachBlockWith::UncleAndTransaction => {
let mut txs = RlpStream::new_list(1);
let keypair = Random.generate().unwrap();
// Update nonces value
self.nonces.write().insert(keypair.address(), U256::one());
let tx = Transaction {
action: Action::Create,
value: U256::from(100),
data: "3331600055".from_hex().unwrap(),
gas: U256::from(100_000),
gas_price: U256::from(200_000_000_000u64),
nonce: U256::zero()
};
let signed_tx = tx.sign(keypair.secret(), None);
txs.append(&signed_tx);
txs.out()
},
_ => ::rlp::EMPTY_LIST_RLP.to_vec()
};
/// Add a block to test client.
pub fn add_block<F>(&self, with: EachBlockWith, hook: F)
where F: Fn(BlockHeader) -> BlockHeader
{
let n = self.numbers.read().len();
let mut rlp = RlpStream::new_list(3);
rlp.append(&header);
rlp.append_raw(&txs, 1);
rlp.append_raw(uncles.as_raw(), 1);
let unverified = Unverified::from_rlp(rlp.out()).unwrap();
self.import_block(unverified).unwrap();
}
}
let mut header = BlockHeader::new();
header.set_difficulty(From::from(n));
header.set_parent_hash(self.last_hash.read().clone());
header.set_number(n as BlockNumber);
header.set_gas_limit(U256::from(1_000_000));
header.set_extra_data(self.extra_data.clone());
header = hook(header);
let uncles = match with {
EachBlockWith::Uncle | EachBlockWith::UncleAndTransaction => {
let mut uncles = RlpStream::new_list(1);
let mut uncle_header = BlockHeader::new();
uncle_header.set_difficulty(From::from(n));
uncle_header.set_parent_hash(self.last_hash.read().clone());
uncle_header.set_number(n as BlockNumber);
uncles.append(&uncle_header);
header.set_uncles_hash(keccak(uncles.as_raw()));
uncles
},
_ => RlpStream::new_list(0)
};
let txs = match with {
EachBlockWith::Transaction | EachBlockWith::UncleAndTransaction => {
let mut txs = RlpStream::new_list(1);
let keypair = Random.generate().unwrap();
// Update nonces value
self.nonces.write().insert(keypair.address(), U256::one());
let tx = Transaction {
action: Action::Create,
value: U256::from(100),
data: "3331600055".from_hex().unwrap(),
gas: U256::from(100_000),
gas_price: U256::from(200_000_000_000u64),
nonce: U256::zero()
};
let signed_tx = tx.sign(keypair.secret(), None);
txs.append(&signed_tx);
txs.out()
},
_ => ::rlp::EMPTY_LIST_RLP.to_vec()
};
/// Make a bad block by setting invalid extra data.
pub fn corrupt_block(&self, n: BlockNumber) {
let hash = self.block_hash(BlockId::Number(n)).unwrap();
let mut header: BlockHeader = self.block_header(BlockId::Number(n)).unwrap().decode().expect("decoding failed");
header.set_extra_data(b"This extra data is way too long to be considered valid".to_vec());
let mut rlp = RlpStream::new_list(3);
rlp.append(&header);
rlp.append_raw(&::rlp::NULL_RLP, 1);
rlp.append_raw(&::rlp::NULL_RLP, 1);
self.blocks.write().insert(hash, rlp.out());
rlp.append_raw(&txs, 1);
rlp.append_raw(uncles.as_raw(), 1);
let unverified = Unverified::from_rlp(rlp.out()).unwrap();
self.import_block(unverified).unwrap();
}
/// Add a sequence of blocks to test client.
pub fn add_blocks(&self, count: usize, with: EachBlockWith) {
for _ in 0..count {
self.add_block(with, |header| header);
}
}
/// Make a bad block by setting invalid parent hash.
@ -687,6 +686,10 @@ impl BlockChainClient for TestBlockChainClient {
self.receipts.read().get(&id).cloned()
}
fn block_receipts(&self, _id: BlockId) -> Option<Vec<LocalizedReceipt>> {
Some(self.receipts.read().values().cloned().collect())
}
fn logs(&self, filter: Filter) -> Result<Vec<LocalizedLogEntry>, BlockId> {
match self.error_on_logs.read().as_ref() {
Some(id) => return Err(id.clone()),
@ -786,7 +789,7 @@ impl BlockChainClient for TestBlockChainClient {
None
}
fn block_receipts(&self, hash: &H256) -> Option<Bytes> {
fn encoded_block_receipts(&self, hash: &H256) -> Option<Bytes> {
// starts with 'f' ?
if *hash > H256::from("f000000000000000000000000000000000000000000000000000000000000000") {
let receipt = BlockReceipts::new(vec![Receipt::new(


@ -42,7 +42,7 @@ use engines::EthEngine;
use ethereum_types::{H256, U256, Address};
use ethcore_miner::pool::VerifiedTransaction;
use bytes::Bytes;
use hashdb::DBValue;
use kvdb::DBValue;
use types::ids::*;
use types::basic_account::BasicAccount;
@ -281,6 +281,9 @@ pub trait BlockChainClient : Sync + Send + AccountData + BlockChain + CallContra
/// Get transaction receipt with given hash.
fn transaction_receipt(&self, id: TransactionId) -> Option<LocalizedReceipt>;
/// Get localized receipts for all transactions in a given block.
fn block_receipts(&self, id: BlockId) -> Option<Vec<LocalizedReceipt>>;
/// Get a tree route between `from` and `to`.
/// See `BlockChain::tree_route`.
fn tree_route(&self, from: &H256, to: &H256) -> Option<TreeRoute>;
@ -292,7 +295,7 @@ pub trait BlockChainClient : Sync + Send + AccountData + BlockChain + CallContra
fn state_data(&self, hash: &H256) -> Option<Bytes>;
/// Get raw block receipts data by block header hash.
fn block_receipts(&self, hash: &H256) -> Option<Bytes>;
fn encoded_block_receipts(&self, hash: &H256) -> Option<Bytes>;
/// Get block queue information.
fn queue_info(&self) -> BlockQueueInfo;


@ -16,8 +16,8 @@
//! A blockchain engine that supports a non-instant BFT proof-of-authority.
use std::collections::{BTreeMap, HashSet};
use std::fmt;
use std::collections::{BTreeMap, BTreeSet, HashSet};
use std::{cmp, fmt};
use std::iter::FromIterator;
use std::ops::Deref;
use std::sync::atomic::{AtomicUsize, AtomicBool, Ordering as AtomicOrdering};
@ -80,6 +80,8 @@ pub struct AuthorityRoundParams {
pub empty_steps_transition: u64,
/// Number of accepted empty steps.
pub maximum_empty_steps: usize,
/// Transition block to strict empty steps validation.
pub strict_empty_steps_transition: u64,
}
const U16_MAX: usize = ::std::u16::MAX as usize;
@ -109,6 +111,7 @@ impl From<ethjson::spec::AuthorityRoundParams> for AuthorityRoundParams {
maximum_uncle_count: p.maximum_uncle_count.map_or(0, Into::into),
empty_steps_transition: p.empty_steps_transition.map_or(u64::max_value(), |n| ::std::cmp::max(n.into(), 1)),
maximum_empty_steps: p.maximum_empty_steps.map_or(0, Into::into),
strict_empty_steps_transition: p.strict_empty_steps_transition.map_or(0, Into::into),
}
}
}
@ -122,10 +125,10 @@ struct Step {
}
impl Step {
fn load(&self) -> usize { self.inner.load(AtomicOrdering::SeqCst) }
fn load(&self) -> u64 { self.inner.load(AtomicOrdering::SeqCst) as u64 }
fn duration_remaining(&self) -> Duration {
let now = unix_now();
let expected_seconds = (self.load() as u64)
let expected_seconds = self.load()
.checked_add(1)
.and_then(|ctr| ctr.checked_mul(self.duration as u64))
.map(Duration::from_secs);
@ -161,8 +164,8 @@ impl Step {
}
}
fn check_future(&self, given: usize) -> Result<(), Option<OutOfBounds<u64>>> {
const REJECTED_STEP_DRIFT: usize = 4;
fn check_future(&self, given: u64) -> Result<(), Option<OutOfBounds<u64>>> {
const REJECTED_STEP_DRIFT: u64 = 4;
// Verify if the step is correct.
if given <= self.load() {
@ -181,8 +184,8 @@ impl Step {
let d = self.duration as u64;
Err(Some(OutOfBounds {
min: None,
max: Some(d * current as u64),
found: d * given as u64,
max: Some(d * current),
found: d * given,
}))
} else {
Ok(())
@ -191,8 +194,8 @@ impl Step {
}
// Chain scoring: total weight is sqrt(U256::max_value())*height - step
fn calculate_score(parent_step: U256, current_step: U256, current_empty_steps: U256) -> U256 {
U256::from(U128::max_value()) + parent_step - current_step + current_empty_steps
fn calculate_score(parent_step: u64, current_step: u64, current_empty_steps: usize) -> U256 {
U256::from(U128::max_value()) + U256::from(parent_step) - U256::from(current_step) + U256::from(current_empty_steps)
}
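Switching `calculate_score` to plain integer step arguments does not change the rule spelled out in the comment: every block contributes a huge constant (so chain length dominates total difficulty), and within one block the score is higher the fewer steps were skipped since the parent, with one point recovered per included empty-step proof. A toy analogue with a small constant in place of `U128::max_value()`:

// Toy analogue of calculate_score; the constant stands in for U128::max_value().
fn score(parent_step: u64, current_step: u64, empty_steps: u64) -> u64 {
    1_000_000 + parent_step - current_step + empty_steps
}

fn main() {
    // Sealing right after the parent (step 5 on parent step 4) beats sealing later (step 8).
    assert!(score(4, 5, 0) > score(4, 8, 0));
    // Including empty-step proofs for the skipped steps recovers the difference.
    assert_eq!(score(4, 8, 3), score(4, 5, 0));
}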
struct EpochManager {
@ -283,13 +286,26 @@ impl EpochManager {
/// A message broadcast by authorities when it's their turn to seal a block but there are no
/// transactions. Other authorities accumulate these messages and later include them in the seal as
/// proof.
#[derive(Clone, Debug)]
#[derive(Clone, Debug, PartialEq, Eq)]
struct EmptyStep {
signature: H520,
step: usize,
step: u64,
parent_hash: H256,
}
impl PartialOrd for EmptyStep {
fn partial_cmp(&self, other: &Self) -> Option<cmp::Ordering> {
Some(self.cmp(other))
}
}
impl Ord for EmptyStep {
fn cmp(&self, other: &Self) -> cmp::Ordering {
self.step.cmp(&other.step)
.then_with(|| self.parent_hash.cmp(&other.parent_hash))
.then_with(|| self.signature.cmp(&other.signature))
}
}
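Ordering `EmptyStep` by `step` first (then `parent_hash`, then `signature`) is what allows the accumulated messages to live in a `BTreeSet`: inserts deduplicate identical messages, and `empty_steps`/`clear_empty_steps` further down can select everything strictly between two step numbers with a single `range`/`split_off` call instead of scanning a `Vec`. A minimal sketch of the same pattern with a stripped-down struct (fields are illustrative):

use std::collections::BTreeSet;

// Stand-in for EmptyStep: the derive orders by `step` first, like the manual impl above.
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Msg { step: u64, payload: u8 }

fn main() {
    let mut set = BTreeSet::new();
    for &(step, payload) in [(1u64, 7u8), (3, 7), (2, 7), (1, 7)].iter() {
        set.insert(Msg { step, payload }); // the duplicate (1, 7) is dropped
    }

    // everything with parent_step < step < current_step, here the open interval (1, 4)
    let from = Msg { step: 1 + 1, payload: u8::MIN };
    let to = Msg { step: 4, payload: u8::MIN };
    let picked: Vec<u64> = set.range(from..to).map(|m| m.step).collect();
    assert_eq!(picked, vec![2, 3]);
}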
impl EmptyStep {
fn from_sealed(sealed_empty_step: SealedEmptyStep, parent_hash: &H256) -> EmptyStep {
let signature = sealed_empty_step.signature;
@ -321,7 +337,7 @@ impl EmptyStep {
impl fmt::Display for EmptyStep {
fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(f, "({}, {}, {})", self.signature, self.step, self.parent_hash)
write!(f, "({:x}, {}, {:x})", self.signature, self.step, self.parent_hash)
}
}
@ -352,7 +368,7 @@ pub fn empty_step_full_rlp(signature: &H520, empty_step_rlp: &[u8]) -> Vec<u8> {
s.out()
}
pub fn empty_step_rlp(step: usize, parent_hash: &H256) -> Vec<u8> {
pub fn empty_step_rlp(step: u64, parent_hash: &H256) -> Vec<u8> {
let mut s = RlpStream::new_list(2);
s.append(&step).append(parent_hash);
s.out()
@ -364,7 +380,7 @@ pub fn empty_step_rlp(step: usize, parent_hash: &H256) -> Vec<u8> {
/// empty message is included.
struct SealedEmptyStep {
signature: H520,
step: usize,
step: u64,
}
impl Encodable for SealedEmptyStep {
@ -398,7 +414,7 @@ pub struct AuthorityRound {
validators: Box<ValidatorSet>,
validate_score_transition: u64,
validate_step_transition: u64,
empty_steps: Mutex<Vec<EmptyStep>>,
empty_steps: Mutex<BTreeSet<EmptyStep>>,
epoch_manager: Mutex<EpochManager>,
immediate_transitions: bool,
block_reward: U256,
@ -407,6 +423,7 @@ pub struct AuthorityRound {
maximum_uncle_count_transition: u64,
maximum_uncle_count: usize,
empty_steps_transition: u64,
strict_empty_steps_transition: u64,
maximum_empty_steps: usize,
machine: EthereumMachine,
}
@ -493,7 +510,7 @@ fn header_expected_seal_fields(header: &Header, empty_steps_transition: u64) ->
}
}
fn header_step(header: &Header, empty_steps_transition: u64) -> Result<usize, ::rlp::DecoderError> {
fn header_step(header: &Header, empty_steps_transition: u64) -> Result<u64, ::rlp::DecoderError> {
let expected_seal_fields = header_expected_seal_fields(header, empty_steps_transition);
Rlp::new(&header.seal().get(0).expect(
&format!("was either checked with verify_block_basic or is genesis; has {} fields; qed (Make sure the spec file has a correct genesis seal)", expected_seal_fields))).as_val()
@ -532,17 +549,17 @@ fn header_empty_steps_signers(header: &Header, empty_steps_transition: u64) -> R
}
}
fn step_proposer(validators: &ValidatorSet, bh: &H256, step: usize) -> Address {
let proposer = validators.get(bh, step);
fn step_proposer(validators: &ValidatorSet, bh: &H256, step: u64) -> Address {
let proposer = validators.get(bh, step as usize);
trace!(target: "engine", "Fetched proposer for step {}: {}", step, proposer);
proposer
}
fn is_step_proposer(validators: &ValidatorSet, bh: &H256, step: usize, address: &Address) -> bool {
fn is_step_proposer(validators: &ValidatorSet, bh: &H256, step: u64, address: &Address) -> bool {
step_proposer(validators, bh, step) == *address
}
fn verify_timestamp(step: &Step, header_step: usize) -> Result<(), BlockError> {
fn verify_timestamp(step: &Step, header_step: u64) -> Result<(), BlockError> {
match step.check_future(header_step) {
Err(None) => {
trace!(target: "engine", "verify_timestamp: block from the future");
@ -563,7 +580,7 @@ fn verify_external(header: &Header, validators: &ValidatorSet, empty_steps_trans
let header_step = header_step(header, empty_steps_transition)?;
let proposer_signature = header_signature(header, empty_steps_transition)?;
let correct_proposer = validators.get(header.parent_hash(), header_step);
let correct_proposer = validators.get(header.parent_hash(), header_step as usize);
let is_invalid_proposer = *header.author() != correct_proposer || {
let empty_steps_rlp = if header.number() >= empty_steps_transition {
Some(header_empty_steps_raw(header))
@ -633,13 +650,13 @@ impl AuthorityRound {
panic!("authority_round: step duration can't be zero")
}
let should_timeout = our_params.start_step.is_none();
let initial_step = our_params.start_step.unwrap_or_else(|| (unix_now().as_secs() / (our_params.step_duration as u64))) as usize;
let initial_step = our_params.start_step.unwrap_or_else(|| (unix_now().as_secs() / (our_params.step_duration as u64)));
let engine = Arc::new(
AuthorityRound {
transition_service: IoService::<()>::start()?,
step: Arc::new(PermissionedStep {
inner: Step {
inner: AtomicUsize::new(initial_step),
inner: AtomicUsize::new(initial_step as usize),
calibrate: our_params.start_step.is_none(),
duration: our_params.step_duration,
},
@ -650,7 +667,7 @@ impl AuthorityRound {
validators: our_params.validators,
validate_score_transition: our_params.validate_score_transition,
validate_step_transition: our_params.validate_step_transition,
empty_steps: Mutex::new(Vec::new()),
empty_steps: Default::default(),
epoch_manager: Mutex::new(EpochManager::blank()),
immediate_transitions: our_params.immediate_transitions,
block_reward: our_params.block_reward,
@ -660,6 +677,7 @@ impl AuthorityRound {
maximum_uncle_count: our_params.maximum_uncle_count,
empty_steps_transition: our_params.empty_steps_transition,
maximum_empty_steps: our_params.maximum_empty_steps,
strict_empty_steps_transition: our_params.strict_empty_steps_transition,
machine: machine,
});
@ -698,22 +716,41 @@ impl AuthorityRound {
})
}
fn empty_steps(&self, from_step: U256, to_step: U256, parent_hash: H256) -> Vec<EmptyStep> {
self.empty_steps.lock().iter().filter(|e| {
U256::from(e.step) > from_step &&
U256::from(e.step) < to_step &&
e.parent_hash == parent_hash
}).cloned().collect()
fn empty_steps(&self, from_step: u64, to_step: u64, parent_hash: H256) -> Vec<EmptyStep> {
let from = EmptyStep {
step: from_step + 1,
parent_hash,
signature: Default::default(),
};
let to = EmptyStep {
step: to_step,
parent_hash: Default::default(),
signature: Default::default(),
};
if from >= to {
return vec![];
}
self.empty_steps.lock()
.range(from..to)
.filter(|e| e.parent_hash == parent_hash)
.cloned()
.collect()
}
fn clear_empty_steps(&self, step: U256) {
fn clear_empty_steps(&self, step: u64) {
// clear old `empty_steps` messages
self.empty_steps.lock().retain(|e| U256::from(e.step) > step);
let mut empty_steps = self.empty_steps.lock();
*empty_steps = empty_steps.split_off(&EmptyStep {
step: step + 1,
parent_hash: Default::default(),
signature: Default::default(),
});
}
fn handle_empty_step_message(&self, empty_step: EmptyStep) {
let mut empty_steps = self.empty_steps.lock();
empty_steps.push(empty_step);
self.empty_steps.lock().insert(empty_step);
}
fn generate_empty_step(&self, parent_hash: &H256) {
@ -743,7 +780,7 @@ impl AuthorityRound {
}
}
fn report_skipped(&self, header: &Header, current_step: usize, parent_step: usize, validators: &ValidatorSet, set_number: u64) {
fn report_skipped(&self, header: &Header, current_step: u64, parent_step: u64, validators: &ValidatorSet, set_number: u64) {
// we're building on top of the genesis block so don't report any skipped steps
if header.number() == 1 {
return;
@ -829,6 +866,10 @@ impl Engine<EthereumMachine> for AuthorityRound {
/// Additional engine-specific information for the user/developer concerning `header`.
fn extra_info(&self, header: &Header) -> BTreeMap<String, String> {
if header.seal().len() < header_expected_seal_fields(header, self.empty_steps_transition) {
return BTreeMap::default();
}
let step = header_step(header, self.empty_steps_transition).as_ref().map(ToString::to_string).unwrap_or("".into());
let signature = header_signature(header, self.empty_steps_transition).as_ref().map(ToString::to_string).unwrap_or("".into());
@ -869,12 +910,12 @@ impl Engine<EthereumMachine> for AuthorityRound {
let current_step = self.step.inner.load();
let current_empty_steps_len = if header.number() >= self.empty_steps_transition {
self.empty_steps(parent_step.into(), current_step.into(), parent.hash()).len()
self.empty_steps(parent_step, current_step, parent.hash()).len()
} else {
0
};
let score = calculate_score(parent_step.into(), current_step.into(), current_empty_steps_len.into());
let score = calculate_score(parent_step, current_step, current_empty_steps_len);
header.set_difficulty(score);
}
@ -918,8 +959,8 @@ impl Engine<EthereumMachine> for AuthorityRound {
}
let header = block.header();
let parent_step: U256 = header_step(parent, self.empty_steps_transition)
.expect("Header has been verified; qed").into();
let parent_step = header_step(parent, self.empty_steps_transition)
.expect("Header has been verified; qed");
let step = self.step.inner.load();
@ -954,7 +995,7 @@ impl Engine<EthereumMachine> for AuthorityRound {
if is_step_proposer(&*validators, header.parent_hash(), step, header.author()) {
// this is guarded against by `can_propose` unless the block was signed
// on the same step (implies same key) and on a different node.
if parent_step == step.into() {
if parent_step == step {
warn!("Attempted to seal block on the same step as parent. Is this authority sealing with more than one node?");
return Seal::None;
}
@ -966,13 +1007,16 @@ impl Engine<EthereumMachine> for AuthorityRound {
block.transactions().is_empty() &&
empty_steps.len() < self.maximum_empty_steps {
self.generate_empty_step(header.parent_hash());
if self.step.can_propose.compare_and_swap(true, false, AtomicOrdering::SeqCst) {
self.generate_empty_step(header.parent_hash());
}
return Seal::None;
}
let empty_steps_rlp = if header.number() >= self.empty_steps_transition {
let empty_steps: Vec<_> = empty_steps.iter().map(|e| e.sealed()).collect();
Some(::rlp::encode_list(&empty_steps).into_vec())
Some(::rlp::encode_list(&empty_steps))
} else {
None
};
@ -990,12 +1034,12 @@ impl Engine<EthereumMachine> for AuthorityRound {
// report any skipped primaries between the parent block and
// the block we're sealing, unless we have empty steps enabled
if header.number() < self.empty_steps_transition {
self.report_skipped(header, step, u64::from(parent_step) as usize, &*validators, set_number);
self.report_skipped(header, step, parent_step, &*validators, set_number);
}
let mut fields = vec![
encode(&step).into_vec(),
encode(&(&H520::from(signature) as &[u8])).into_vec(),
encode(&step),
encode(&(&H520::from(signature) as &[u8])),
];
if let Some(empty_steps_rlp) = empty_steps_rlp {
@ -1147,8 +1191,11 @@ impl Engine<EthereumMachine> for AuthorityRound {
// reported as there's no way to tell whether the empty step message was never sent or simply not included.
let empty_steps_len = if header.number() >= self.empty_steps_transition {
let validate_empty_steps = || -> Result<usize, Error> {
let strict_empty_steps = header.number() >= self.strict_empty_steps_transition;
let empty_steps = header_empty_steps(header)?;
let empty_steps_len = empty_steps.len();
let mut prev_empty_step = 0;
for empty_step in empty_steps {
if empty_step.step <= parent_step || empty_step.step >= step {
Err(EngineError::InsufficientProof(
@ -1164,7 +1211,20 @@ impl Engine<EthereumMachine> for AuthorityRound {
Err(EngineError::InsufficientProof(
format!("invalid empty step proof: {:?}", empty_step)))?;
}
if strict_empty_steps {
if empty_step.step <= prev_empty_step {
Err(EngineError::InsufficientProof(format!(
"{} empty step: {:?}",
if empty_step.step == prev_empty_step { "duplicate" } else { "unordered" },
empty_step
)))?;
}
prev_empty_step = empty_step.step;
}
}
Ok(empty_steps_len)
};
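The strict variant only needs to remember the previous step it saw: within one seal the empty-step numbers must be strictly increasing, which rejects both duplicates and out-of-order entries (exactly the two cases the new tests below exercise). The invariant in isolation:

/// True iff a sequence of empty-step numbers is strictly increasing.
fn strictly_increasing(steps: &[u64]) -> bool {
    steps.windows(2).all(|w| w[0] < w[1])
}

fn main() {
    assert!(strictly_increasing(&[2, 3, 5]));
    assert!(!strictly_increasing(&[2, 2, 3])); // duplicate step
    assert!(!strictly_increasing(&[3, 2]));    // unordered steps
}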
@ -1403,10 +1463,12 @@ impl Engine<EthereumMachine> for AuthorityRound {
#[cfg(test)]
mod tests {
use std::collections::BTreeMap;
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering as AtomicOrdering};
use hash::keccak;
use ethereum_types::{Address, H520, H256, U256};
use ethkey::Signature;
use header::Header;
use rlp::encode;
use block::*;
@ -1418,10 +1480,40 @@ mod tests {
use spec::Spec;
use transaction::{Action, Transaction};
use engines::{Seal, Engine, EngineError, EthEngine};
use engines::validator_set::TestSet;
use engines::validator_set::{TestSet, SimpleList};
use error::{Error, ErrorKind};
use super::{AuthorityRoundParams, AuthorityRound, EmptyStep, SealedEmptyStep, calculate_score};
fn aura<F>(f: F) -> Arc<AuthorityRound> where
F: FnOnce(&mut AuthorityRoundParams),
{
let mut params = AuthorityRoundParams {
step_duration: 1,
start_step: Some(1),
validators: Box::new(TestSet::default()),
validate_score_transition: 0,
validate_step_transition: 0,
immediate_transitions: true,
maximum_uncle_count_transition: 0,
maximum_uncle_count: 0,
empty_steps_transition: u64::max_value(),
maximum_empty_steps: 0,
block_reward: Default::default(),
block_reward_contract_transition: 0,
block_reward_contract: Default::default(),
strict_empty_steps_transition: 0,
};
// mutate aura params
f(&mut params);
// create engine
let mut c_params = ::spec::CommonParams::default();
c_params.gas_limit_bound_divisor = 5.into();
let machine = ::machine::EthereumMachine::regular(c_params, Default::default());
AuthorityRound::new(params, machine).unwrap()
}
#[test]
fn has_valid_metadata() {
let engine = Spec::new_test_round().engine;
@ -1440,7 +1532,7 @@ mod tests {
fn can_do_signature_verification_fail() {
let engine = Spec::new_test_round().engine;
let mut header: Header = Header::default();
header.set_seal(vec![encode(&H520::default()).into_vec()]);
header.set_seal(vec![encode(&H520::default())]);
let verify_result = engine.verify_block_external(&header);
assert!(verify_result.is_err());
@ -1517,7 +1609,7 @@ mod tests {
let tap = AccountProvider::transient_provider();
let addr = tap.insert_account(keccak("0").into(), &"0".into()).unwrap();
let mut parent_header: Header = Header::default();
parent_header.set_seal(vec![encode(&0usize).into_vec()]);
parent_header.set_seal(vec![encode(&0usize)]);
parent_header.set_gas_limit("222222".parse::<U256>().unwrap());
let mut header: Header = Header::default();
header.set_number(1);
@ -1528,14 +1620,14 @@ mod tests {
// Two validators.
// Spec starts with step 2.
header.set_difficulty(calculate_score(U256::from(0), U256::from(2), U256::zero()));
header.set_difficulty(calculate_score(0, 2, 0));
let signature = tap.sign(addr, Some("0".into()), header.bare_hash()).unwrap();
header.set_seal(vec![encode(&2usize).into_vec(), encode(&(&*signature as &[u8])).into_vec()]);
header.set_seal(vec![encode(&2usize), encode(&(&*signature as &[u8]))]);
assert!(engine.verify_block_family(&header, &parent_header).is_ok());
assert!(engine.verify_block_external(&header).is_err());
header.set_difficulty(calculate_score(U256::from(0), U256::from(1), U256::zero()));
header.set_difficulty(calculate_score(0, 1, 0));
let signature = tap.sign(addr, Some("0".into()), header.bare_hash()).unwrap();
header.set_seal(vec![encode(&1usize).into_vec(), encode(&(&*signature as &[u8])).into_vec()]);
header.set_seal(vec![encode(&1usize), encode(&(&*signature as &[u8]))]);
assert!(engine.verify_block_family(&header, &parent_header).is_ok());
assert!(engine.verify_block_external(&header).is_ok());
}
@ -1546,7 +1638,7 @@ mod tests {
let addr = tap.insert_account(keccak("0").into(), &"0".into()).unwrap();
let mut parent_header: Header = Header::default();
parent_header.set_seal(vec![encode(&0usize).into_vec()]);
parent_header.set_seal(vec![encode(&0usize)]);
parent_header.set_gas_limit("222222".parse::<U256>().unwrap());
let mut header: Header = Header::default();
header.set_number(1);
@ -1557,12 +1649,12 @@ mod tests {
// Two validators.
// Spec starts with step 2.
header.set_difficulty(calculate_score(U256::from(0), U256::from(1), U256::zero()));
header.set_difficulty(calculate_score(0, 1, 0));
let signature = tap.sign(addr, Some("0".into()), header.bare_hash()).unwrap();
header.set_seal(vec![encode(&1usize).into_vec(), encode(&(&*signature as &[u8])).into_vec()]);
header.set_seal(vec![encode(&1usize), encode(&(&*signature as &[u8]))]);
assert!(engine.verify_block_family(&header, &parent_header).is_ok());
assert!(engine.verify_block_external(&header).is_ok());
header.set_seal(vec![encode(&5usize).into_vec(), encode(&(&*signature as &[u8])).into_vec()]);
header.set_seal(vec![encode(&5usize), encode(&(&*signature as &[u8]))]);
assert!(engine.verify_block_basic(&header).is_err());
}
@ -1572,7 +1664,7 @@ mod tests {
let addr = tap.insert_account(keccak("0").into(), &"0".into()).unwrap();
let mut parent_header: Header = Header::default();
parent_header.set_seal(vec![encode(&4usize).into_vec()]);
parent_header.set_seal(vec![encode(&4usize)]);
parent_header.set_gas_limit("222222".parse::<U256>().unwrap());
let mut header: Header = Header::default();
header.set_number(1);
@ -1584,47 +1676,28 @@ mod tests {
let signature = tap.sign(addr, Some("0".into()), header.bare_hash()).unwrap();
// Two validators.
// Spec starts with step 2.
header.set_seal(vec![encode(&5usize).into_vec(), encode(&(&*signature as &[u8])).into_vec()]);
header.set_difficulty(calculate_score(U256::from(4), U256::from(5), U256::zero()));
header.set_seal(vec![encode(&5usize), encode(&(&*signature as &[u8]))]);
header.set_difficulty(calculate_score(4, 5, 0));
assert!(engine.verify_block_family(&header, &parent_header).is_ok());
header.set_seal(vec![encode(&3usize).into_vec(), encode(&(&*signature as &[u8])).into_vec()]);
header.set_difficulty(calculate_score(U256::from(4), U256::from(3), U256::zero()));
header.set_seal(vec![encode(&3usize), encode(&(&*signature as &[u8]))]);
header.set_difficulty(calculate_score(4, 3, 0));
assert!(engine.verify_block_family(&header, &parent_header).is_err());
}
#[test]
fn reports_skipped() {
let last_benign = Arc::new(AtomicUsize::new(0));
let params = AuthorityRoundParams {
step_duration: 1,
start_step: Some(1),
validators: Box::new(TestSet::new(Default::default(), last_benign.clone())),
validate_score_transition: 0,
validate_step_transition: 0,
immediate_transitions: true,
maximum_uncle_count_transition: 0,
maximum_uncle_count: 0,
empty_steps_transition: u64::max_value(),
maximum_empty_steps: 0,
block_reward: Default::default(),
block_reward_contract_transition: 0,
block_reward_contract: Default::default(),
};
let aura = {
let mut c_params = ::spec::CommonParams::default();
c_params.gas_limit_bound_divisor = 5.into();
let machine = ::machine::EthereumMachine::regular(c_params, Default::default());
AuthorityRound::new(params, machine).unwrap()
};
let aura = aura(|p| {
p.validators = Box::new(TestSet::new(Default::default(), last_benign.clone()));
});
let mut parent_header: Header = Header::default();
parent_header.set_seal(vec![encode(&1usize).into_vec()]);
parent_header.set_seal(vec![encode(&1usize)]);
parent_header.set_gas_limit("222222".parse::<U256>().unwrap());
let mut header: Header = Header::default();
header.set_difficulty(calculate_score(U256::from(1), U256::from(3), U256::zero()));
header.set_difficulty(calculate_score(1, 3, 0));
header.set_gas_limit("222222".parse::<U256>().unwrap());
header.set_seal(vec![encode(&3usize).into_vec()]);
header.set_seal(vec![encode(&3usize)]);
// Do not report when signer not present.
assert!(aura.verify_block_family(&header, &parent_header).is_ok());
@ -1645,29 +1718,9 @@ mod tests {
#[test]
fn test_uncles_transition() {
let last_benign = Arc::new(AtomicUsize::new(0));
let params = AuthorityRoundParams {
step_duration: 1,
start_step: Some(1),
validators: Box::new(TestSet::new(Default::default(), last_benign.clone())),
validate_score_transition: 0,
validate_step_transition: 0,
immediate_transitions: true,
maximum_uncle_count_transition: 1,
maximum_uncle_count: 0,
empty_steps_transition: u64::max_value(),
maximum_empty_steps: 0,
block_reward: Default::default(),
block_reward_contract_transition: 0,
block_reward_contract: Default::default(),
};
let aura = {
let mut c_params = ::spec::CommonParams::default();
c_params.gas_limit_bound_divisor = 5.into();
let machine = ::machine::EthereumMachine::regular(c_params, Default::default());
AuthorityRound::new(params, machine).unwrap()
};
let aura = aura(|params| {
params.maximum_uncle_count_transition = 1;
});
assert_eq!(aura.maximum_uncle_count(0), 2);
assert_eq!(aura.maximum_uncle_count(1), 0);
@ -1701,27 +1754,9 @@ mod tests {
#[test]
#[should_panic(expected="authority_round: step duration can't be zero")]
fn test_step_duration_zero() {
let last_benign = Arc::new(AtomicUsize::new(0));
let params = AuthorityRoundParams {
step_duration: 0,
start_step: Some(1),
validators: Box::new(TestSet::new(Default::default(), last_benign.clone())),
validate_score_transition: 0,
validate_step_transition: 0,
immediate_transitions: true,
maximum_uncle_count_transition: 0,
maximum_uncle_count: 0,
empty_steps_transition: u64::max_value(),
maximum_empty_steps: 0,
block_reward: Default::default(),
block_reward_contract_transition: 0,
block_reward_contract: Default::default(),
};
let mut c_params = ::spec::CommonParams::default();
c_params.gas_limit_bound_divisor = 5.into();
let machine = ::machine::EthereumMachine::regular(c_params, Default::default());
AuthorityRound::new(params, machine).unwrap();
aura(|params| {
params.step_duration = 0;
});
}
fn setup_empty_steps() -> (Spec, Arc<AccountProvider>, Vec<Address>) {
@ -1736,19 +1771,36 @@ mod tests {
(spec, tap, accounts)
}
fn empty_step(engine: &EthEngine, step: usize, parent_hash: &H256) -> EmptyStep {
fn empty_step(engine: &EthEngine, step: u64, parent_hash: &H256) -> EmptyStep {
let empty_step_rlp = super::empty_step_rlp(step, parent_hash);
let signature = engine.sign(keccak(&empty_step_rlp)).unwrap().into();
let parent_hash = parent_hash.clone();
EmptyStep { step, signature, parent_hash }
}
fn sealed_empty_step(engine: &EthEngine, step: usize, parent_hash: &H256) -> SealedEmptyStep {
fn sealed_empty_step(engine: &EthEngine, step: u64, parent_hash: &H256) -> SealedEmptyStep {
let empty_step_rlp = super::empty_step_rlp(step, parent_hash);
let signature = engine.sign(keccak(&empty_step_rlp)).unwrap().into();
SealedEmptyStep { signature, step }
}
fn set_empty_steps_seal(header: &mut Header, step: u64, block_signature: &ethkey::Signature, empty_steps: &[SealedEmptyStep]) {
header.set_seal(vec![
encode(&(step as usize)),
encode(&(&**block_signature as &[u8])),
::rlp::encode_list(&empty_steps),
]);
}
fn assert_insufficient_proof<T: ::std::fmt::Debug>(result: Result<T, Error>, contains: &str) {
match result {
Err(Error(ErrorKind::Engine(EngineError::InsufficientProof(ref s)), _)) =>{
assert!(s.contains(contains), "Expected {:?} to contain {:?}", s, contains);
},
e => assert!(false, "Unexpected result: {:?}", e),
}
}
#[test]
fn broadcast_empty_step_message() {
let (spec, tap, accounts) = setup_empty_steps();
@ -1775,10 +1827,15 @@ mod tests {
assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
// spec starts with step 2
let empty_step_rlp = encode(&empty_step(engine, 2, &genesis_header.hash())).into_vec();
let empty_step_rlp = encode(&empty_step(engine, 2, &genesis_header.hash()));
// we've received the message
assert!(notify.messages.read().contains(&empty_step_rlp));
let len = notify.messages.read().len();
// make sure that we don't generate an empty step a second time
assert_eq!(engine.generate_seal(b1.block(), &genesis_header), Seal::None);
assert_eq!(len, notify.messages.read().len());
}
#[test]
@ -1828,8 +1885,8 @@ mod tests {
let empty_step2 = sealed_empty_step(engine, 2, &genesis_header.hash());
let empty_steps = ::rlp::encode_list(&vec![empty_step2]);
assert_eq!(seal[0], encode(&3usize).into_vec());
assert_eq!(seal[2], empty_steps.into_vec());
assert_eq!(seal[0], encode(&3usize));
assert_eq!(seal[2], empty_steps);
}
}
@ -1882,8 +1939,8 @@ mod tests {
let empty_steps = ::rlp::encode_list(&vec![empty_step2, empty_step3]);
assert_eq!(seal[0], encode(&4usize).into_vec());
assert_eq!(seal[2], empty_steps.into_vec());
assert_eq!(seal[0], encode(&4usize));
assert_eq!(seal[2], empty_steps);
}
}
@ -1932,7 +1989,7 @@ mod tests {
let engine = &*spec.engine;
let mut parent_header: Header = Header::default();
parent_header.set_seal(vec![encode(&0usize).into_vec()]);
parent_header.set_seal(vec![encode(&0usize)]);
parent_header.set_gas_limit("222222".parse::<U256>().unwrap());
let mut header: Header = Header::default();
@ -1945,46 +2002,31 @@ mod tests {
// empty step with invalid step
let empty_steps = vec![SealedEmptyStep { signature: 0.into(), step: 2 }];
header.set_seal(vec![
encode(&2usize).into_vec(),
encode(&(&*signature as &[u8])).into_vec(),
::rlp::encode_list(&empty_steps).into_vec(),
]);
set_empty_steps_seal(&mut header, 2, &signature, &empty_steps);
assert!(match engine.verify_block_family(&header, &parent_header) {
Err(Error(ErrorKind::Engine(EngineError::InsufficientProof(ref s)), _))
if s.contains("invalid step") => true,
_ => false,
});
assert_insufficient_proof(
engine.verify_block_family(&header, &parent_header),
"invalid step"
);
// empty step with invalid signature
let empty_steps = vec![SealedEmptyStep { signature: 0.into(), step: 1 }];
header.set_seal(vec![
encode(&2usize).into_vec(),
encode(&(&*signature as &[u8])).into_vec(),
::rlp::encode_list(&empty_steps).into_vec(),
]);
set_empty_steps_seal(&mut header, 2, &signature, &empty_steps);
assert!(match engine.verify_block_family(&header, &parent_header) {
Err(Error(ErrorKind::Engine(EngineError::InsufficientProof(ref s)), _))
if s.contains("invalid empty step proof") => true,
_ => false,
});
assert_insufficient_proof(
engine.verify_block_family(&header, &parent_header),
"invalid empty step proof"
);
// empty step with valid signature from incorrect proposer for step
engine.set_signer(tap.clone(), addr1, "1".into());
let empty_steps = vec![sealed_empty_step(engine, 1, &parent_header.hash())];
header.set_seal(vec![
encode(&2usize).into_vec(),
encode(&(&*signature as &[u8])).into_vec(),
::rlp::encode_list(&empty_steps).into_vec(),
]);
set_empty_steps_seal(&mut header, 2, &signature, &empty_steps);
assert!(match engine.verify_block_family(&header, &parent_header) {
Err(Error(ErrorKind::Engine(EngineError::InsufficientProof(ref s)), _))
if s.contains("invalid empty step proof") => true,
_ => false,
});
assert_insufficient_proof(
engine.verify_block_family(&header, &parent_header),
"invalid empty step proof"
);
// valid empty steps
engine.set_signer(tap.clone(), addr1, "1".into());
@ -1993,13 +2035,9 @@ mod tests {
let empty_step3 = sealed_empty_step(engine, 3, &parent_header.hash());
let empty_steps = vec![empty_step2, empty_step3];
header.set_difficulty(calculate_score(U256::from(0), U256::from(4), U256::from(2)));
header.set_difficulty(calculate_score(0, 4, 2));
let signature = tap.sign(addr1, Some("1".into()), header.bare_hash()).unwrap();
header.set_seal(vec![
encode(&4usize).into_vec(),
encode(&(&*signature as &[u8])).into_vec(),
::rlp::encode_list(&empty_steps).into_vec(),
]);
set_empty_steps_seal(&mut header, 4, &signature, &empty_steps);
assert!(engine.verify_block_family(&header, &parent_header).is_ok());
}
@ -2072,4 +2110,152 @@ mod tests {
addr1_balance + (1000 + 0) + (1000 + 2),
)
}
#[test]
fn extra_info_from_seal() {
let (spec, tap, accounts) = setup_empty_steps();
let engine = &*spec.engine;
let addr1 = accounts[0];
engine.set_signer(tap.clone(), addr1, "1".into());
let mut header: Header = Header::default();
let empty_step = empty_step(engine, 1, &header.parent_hash());
let sealed_empty_step = empty_step.sealed();
header.set_number(2);
header.set_seal(vec![
encode(&2usize).to_vec(),
encode(&H520::default()).to_vec(),
::rlp::encode_list(&vec![sealed_empty_step]).to_vec(),
]);
let info = engine.extra_info(&header);
let mut expected = BTreeMap::default();
expected.insert("step".into(), "2".into());
expected.insert("signature".into(), Signature::from(H520::default()).to_string());
expected.insert("emptySteps".into(), format!("[{}]", empty_step));
assert_eq!(info, expected);
header.set_seal(vec![]);
assert_eq!(
engine.extra_info(&header),
BTreeMap::default(),
);
}
#[test]
fn test_empty_steps() {
let engine = aura(|p| {
p.step_duration = 4;
p.empty_steps_transition = 0;
p.maximum_empty_steps = 0;
});
let parent_hash: H256 = 1.into();
let signature = H520::default();
let step = |step: u64| EmptyStep {
step,
parent_hash,
signature,
};
engine.handle_empty_step_message(step(1));
engine.handle_empty_step_message(step(3));
engine.handle_empty_step_message(step(2));
engine.handle_empty_step_message(step(1));
assert_eq!(engine.empty_steps(0, 4, parent_hash), vec![step(1), step(2), step(3)]);
assert_eq!(engine.empty_steps(2, 3, parent_hash), vec![]);
assert_eq!(engine.empty_steps(2, 4, parent_hash), vec![step(3)]);
engine.clear_empty_steps(2);
assert_eq!(engine.empty_steps(0, 3, parent_hash), vec![]);
assert_eq!(engine.empty_steps(0, 4, parent_hash), vec![step(3)]);
}
#[test]
fn should_reject_duplicate_empty_steps() {
// given
let (_spec, tap, accounts) = setup_empty_steps();
let engine = aura(|p| {
p.validators = Box::new(SimpleList::new(accounts.clone()));
p.step_duration = 4;
p.empty_steps_transition = 0;
p.maximum_empty_steps = 0;
});
let mut parent = Header::default();
parent.set_seal(vec![encode(&0usize)]);
let mut header = Header::default();
header.set_number(parent.number() + 1);
header.set_parent_hash(parent.hash());
header.set_author(accounts[0]);
// when
engine.set_signer(tap.clone(), accounts[1], "0".into());
let empty_steps = vec![
sealed_empty_step(&*engine, 1, &parent.hash()),
sealed_empty_step(&*engine, 1, &parent.hash()),
];
let step = 2;
let signature = tap.sign(accounts[0], Some("1".into()), header.bare_hash()).unwrap();
set_empty_steps_seal(&mut header, step, &signature, &empty_steps);
header.set_difficulty(calculate_score(0, step, empty_steps.len()));
// then
assert_insufficient_proof(
engine.verify_block_family(&header, &parent),
"duplicate empty step"
);
}
#[test]
fn should_reject_empty_steps_out_of_order() {
// given
let (_spec, tap, accounts) = setup_empty_steps();
let engine = aura(|p| {
p.validators = Box::new(SimpleList::new(accounts.clone()));
p.step_duration = 4;
p.empty_steps_transition = 0;
p.maximum_empty_steps = 0;
});
let mut parent = Header::default();
parent.set_seal(vec![encode(&0usize)]);
let mut header = Header::default();
header.set_number(parent.number() + 1);
header.set_parent_hash(parent.hash());
header.set_author(accounts[0]);
// when
engine.set_signer(tap.clone(), accounts[1], "0".into());
let es1 = sealed_empty_step(&*engine, 1, &parent.hash());
engine.set_signer(tap.clone(), accounts[0], "1".into());
let es2 = sealed_empty_step(&*engine, 2, &parent.hash());
let mut empty_steps = vec![es2, es1];
let step = 3;
let signature = tap.sign(accounts[1], Some("0".into()), header.bare_hash()).unwrap();
set_empty_steps_seal(&mut header, step, &signature, &empty_steps);
header.set_difficulty(calculate_score(0, step, empty_steps.len()));
// then make sure it's rejected because of the order
assert_insufficient_proof(
engine.verify_block_family(&header, &parent),
"unordered empty step"
);
// now try to fix the order
empty_steps.reverse();
set_empty_steps_seal(&mut header, step, &signature, &empty_steps);
assert_eq!(engine.verify_block_family(&header, &parent).unwrap(), ());
}
}


@ -110,7 +110,7 @@ impl Engine<EthereumMachine> for BasicAuthority {
if self.validators.contains(header.parent_hash(), author) {
// account should be permanently unlocked, otherwise sealing will fail
if let Ok(signature) = self.sign(header.bare_hash()) {
return Seal::Regular(vec![::rlp::encode(&(&H520::from(signature) as &[u8])).into_vec()]);
return Seal::Regular(vec![::rlp::encode(&(&H520::from(signature) as &[u8]))]);
} else {
trace!(target: "basicauthority", "generate_seal: FAIL: accounts secret key unavailable");
}
@ -234,7 +234,7 @@ mod tests {
fn can_do_signature_verification_fail() {
let engine = new_test_authority().engine;
let mut header: Header = Header::default();
header.set_seal(vec![::rlp::encode(&H520::default()).into_vec()]);
header.set_seal(vec![::rlp::encode(&H520::default())]);
let verify_result = engine.verify_block_external(&header);
assert!(verify_result.is_err());


@ -18,7 +18,7 @@ use engines::{Engine, Seal};
use parity_machine::{Machine, Transactions, TotalScoredHeader};
/// `InstantSeal` params.
#[derive(Debug, PartialEq)]
#[derive(Default, Debug, PartialEq)]
pub struct InstantSealParams {
/// Whether to use millisecond timestamp
pub millisecond_timestamp: bool,
@ -120,7 +120,7 @@ mod tests {
assert!(engine.verify_block_basic(&header).is_ok());
header.set_seal(vec![::rlp::encode(&H520::default()).into_vec()]);
header.set_seal(vec![::rlp::encode(&H520::default())]);
assert!(engine.verify_block_unordered(&header).is_ok());
}


@ -32,7 +32,7 @@ pub mod epoch;
pub use self::authority_round::AuthorityRound;
pub use self::basic_authority::BasicAuthority;
pub use self::epoch::{EpochVerifier, Transition as EpochTransition};
pub use self::instant_seal::InstantSeal;
pub use self::instant_seal::{InstantSeal, InstantSealParams};
pub use self::null_engine::NullEngine;
pub use self::tendermint::Tendermint;
@ -44,7 +44,7 @@ use self::epoch::PendingTransition;
use account_provider::AccountProvider;
use builtin::Builtin;
use vm::{EnvInfo, Schedule, CreateContractAddress};
use vm::{EnvInfo, Schedule, CreateContractAddress, CallType, ActionValue};
use error::Error;
use header::{Header, BlockNumber};
use snapshot::SnapshotComponents;
@ -163,8 +163,10 @@ pub fn default_system_or_code_call<'a>(machine: &'a ::machine::EthereumMachine,
None,
Some(code),
Some(code_hash),
Some(ActionValue::Apparent(U256::zero())),
U256::max_value(),
Some(data),
Some(CallType::StaticCall),
)
},
};

View File

@ -231,7 +231,7 @@ mod tests {
},
block_hash: Some(keccak("1")),
};
let raw_rlp = ::rlp::encode(&message).into_vec();
let raw_rlp = ::rlp::encode(&message);
let rlp = Rlp::new(&raw_rlp);
assert_eq!(Ok(message), rlp.as_val());
@ -268,8 +268,8 @@ mod tests {
fn proposal_message() {
let mut header = Header::default();
let seal = vec![
::rlp::encode(&0u8).into_vec(),
::rlp::encode(&H520::default()).into_vec(),
::rlp::encode(&0u8),
::rlp::encode(&H520::default()),
Vec::new()
];

View File

@ -407,9 +407,9 @@ impl Tendermint {
let precommits = self.votes.round_signatures(vote_step, &bh);
trace!(target: "engine", "Collected seal: {:?}", precommits);
let seal = vec![
::rlp::encode(&vote_step.view).into_vec(),
::rlp::encode(&vote_step.view),
::rlp::NULL_RLP.to_vec(),
::rlp::encode_list(&precommits).into_vec()
::rlp::encode_list(&precommits)
];
self.submit_seal(bh, seal);
self.votes.throw_out_old(&vote_step);
@ -491,8 +491,8 @@ impl Engine<EthereumMachine> for Tendermint {
*self.proposal.write() = bh;
*self.proposal_parent.write() = header.parent_hash().clone();
Seal::Proposal(vec![
::rlp::encode(&view).into_vec(),
::rlp::encode(&signature).into_vec(),
::rlp::encode(&view),
::rlp::encode(&signature),
::rlp::EMPTY_LIST_RLP.to_vec()
])
} else {
@ -520,7 +520,7 @@ impl Engine<EthereumMachine> for Tendermint {
self.broadcast_message(rlp.as_raw().to_vec());
if let Some(double) = self.votes.vote(message.clone(), sender) {
let height = message.vote_step.height as BlockNumber;
self.validators.report_malicious(&sender, height, height, ::rlp::encode(&double).into_vec());
self.validators.report_malicious(&sender, height, height, ::rlp::encode(&double));
return Err(EngineError::DoubleVote(sender));
}
trace!(target: "engine", "Handling a valid {:?} from {}.", message, sender);
@ -693,7 +693,6 @@ impl Engine<EthereumMachine> for Tendermint {
}
fn stop(&self) {
self.step_service.stop()
}
fn is_proposal(&self, header: &Header) -> bool {
@ -824,8 +823,8 @@ mod tests {
let vote_info = message_info_rlp(&VoteStep::new(header.number() as Height, view, Step::Propose), Some(header.bare_hash()));
let signature = tap.sign(*author, None, keccak(vote_info)).unwrap();
vec![
::rlp::encode(&view).into_vec(),
::rlp::encode(&H520::from(signature)).into_vec(),
::rlp::encode(&view),
::rlp::encode(&H520::from(signature)),
::rlp::EMPTY_LIST_RLP.to_vec()
]
}
@ -929,7 +928,7 @@ mod tests {
let signature1 = tap.sign(proposer, None, keccak(&vote_info)).unwrap();
seal[1] = ::rlp::NULL_RLP.to_vec();
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1.clone())]).into_vec();
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1.clone())]);
header.set_seal(seal.clone());
// One good signature is not enough.
@ -941,7 +940,7 @@ mod tests {
let voter = insert_and_unlock(&tap, "0");
let signature0 = tap.sign(voter, None, keccak(&vote_info)).unwrap();
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1.clone()), H520::from(signature0.clone())]).into_vec();
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1.clone()), H520::from(signature0.clone())]);
header.set_seal(seal.clone());
assert!(engine.verify_block_external(&header).is_ok());
@ -949,7 +948,7 @@ mod tests {
let bad_voter = insert_and_unlock(&tap, "101");
let bad_signature = tap.sign(bad_voter, None, keccak(vote_info)).unwrap();
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1), H520::from(bad_signature)]).into_vec();
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1), H520::from(bad_signature)]);
header.set_seal(seal);
// One good and one bad signature.
@ -1085,7 +1084,7 @@ mod tests {
let signature0 = tap.sign(voter, None, keccak(&vote_info)).unwrap();
seal[1] = ::rlp::NULL_RLP.to_vec();
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1.clone())]).into_vec();
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1.clone())]);
header.set_seal(seal.clone());
let epoch_verifier = super::EpochVerifier {
@ -1113,7 +1112,7 @@ mod tests {
_ => panic!(),
}
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1.clone()), H520::from(signature0.clone())]).into_vec();
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1.clone()), H520::from(signature0.clone())]);
header.set_seal(seal.clone());
assert!(epoch_verifier.verify_light(&header).is_ok());
@ -1121,7 +1120,7 @@ mod tests {
let bad_voter = insert_and_unlock(&tap, "101");
let bad_signature = tap.sign(bad_voter, None, keccak(&vote_info)).unwrap();
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1), H520::from(bad_signature)]).into_vec();
seal[2] = ::rlp::encode_list(&vec![H520::from(signature1), H520::from(bad_signature)]);
header.set_seal(seal);
// One good and one bad signature.

View File

@ -174,7 +174,7 @@ mod tests {
// Check a block that is a bit in future, reject it but don't report the validator.
let mut header = Header::default();
let seal = vec![encode(&4u8).into_vec(), encode(&(&H520::default() as &[u8])).into_vec()];
let seal = vec![encode(&4u8), encode(&(&H520::default() as &[u8]))];
header.set_seal(seal);
header.set_author(v1);
header.set_number(2);
@ -185,7 +185,7 @@ mod tests {
// Now create one that is more in future. That one should be rejected and validator should be reported.
let mut header = Header::default();
let seal = vec![encode(&8u8).into_vec(), encode(&(&H520::default() as &[u8])).into_vec()];
let seal = vec![encode(&8u8), encode(&(&H520::default() as &[u8]))];
header.set_seal(seal);
header.set_author(v1);
header.set_number(2);

View File

@ -158,7 +158,7 @@ fn decode_first_proof(rlp: &Rlp) -> Result<(Header, Vec<DBValue>), ::error::Erro
fn encode_proof(header: &Header, receipts: &[Receipt]) -> Bytes {
let mut stream = RlpStream::new_list(2);
stream.append(header).append_list(receipts);
stream.drain().into_vec()
stream.drain()
}
fn decode_proof(rlp: &Rlp) -> Result<(Header, Vec<Receipt>), ::error::Error> {

View File

@ -34,12 +34,18 @@ pub struct TestSet {
last_benign: Arc<AtomicUsize>,
}
impl Default for TestSet {
fn default() -> Self {
TestSet::new(Default::default(), Default::default())
}
}
impl TestSet {
pub fn new(last_malicious: Arc<AtomicUsize>, last_benign: Arc<AtomicUsize>) -> Self {
TestSet {
validator: SimpleList::new(vec![Address::from_str("7d577a597b2742b498cb5cf0c26cdcd726d39e6e").unwrap()]),
last_malicious: last_malicious,
last_benign: last_benign,
last_malicious,
last_benign,
}
}
}

View File

@ -681,7 +681,7 @@ mod tests {
fn can_do_difficulty_verification_fail() {
let engine = test_spec().engine;
let mut header: Header = Header::default();
header.set_seal(vec![rlp::encode(&H256::zero()).into_vec(), rlp::encode(&H64::zero()).into_vec()]);
header.set_seal(vec![rlp::encode(&H256::zero()), rlp::encode(&H64::zero())]);
let verify_result = engine.verify_block_basic(&header);
@ -696,7 +696,7 @@ mod tests {
fn can_do_proof_of_work_verification_fail() {
let engine = test_spec().engine;
let mut header: Header = Header::default();
header.set_seal(vec![rlp::encode(&H256::zero()).into_vec(), rlp::encode(&H64::zero()).into_vec()]);
header.set_seal(vec![rlp::encode(&H256::zero()), rlp::encode(&H64::zero())]);
header.set_difficulty(U256::from_str("ffffffffffffffffffffffffffffffffffffffffffffaaaaaaaaaaaaaaaaaaaa").unwrap());
let verify_result = engine.verify_block_basic(&header);
@ -737,7 +737,7 @@ mod tests {
fn can_do_seal256_verification_fail() {
let engine = test_spec().engine;
let mut header: Header = Header::default();
header.set_seal(vec![rlp::encode(&H256::zero()).into_vec(), rlp::encode(&H64::zero()).into_vec()]);
header.set_seal(vec![rlp::encode(&H256::zero()), rlp::encode(&H64::zero())]);
let verify_result = engine.verify_block_unordered(&header);
match verify_result {
@ -751,7 +751,7 @@ mod tests {
fn can_do_proof_of_work_unordered_verification_fail() {
let engine = test_spec().engine;
let mut header: Header = Header::default();
header.set_seal(vec![rlp::encode(&H256::from("b251bd2e0283d0658f2cadfdc8ca619b5de94eca5742725e2e757dd13ed7503d")).into_vec(), rlp::encode(&H64::zero()).into_vec()]);
header.set_seal(vec![rlp::encode(&H256::from("b251bd2e0283d0658f2cadfdc8ca619b5de94eca5742725e2e757dd13ed7503d")), rlp::encode(&H64::zero())]);
header.set_difficulty(U256::from_str("ffffffffffffffffffffffffffffffffffffffffffffaaaaaaaaaaaaaaaaaaaa").unwrap());
let verify_result = engine.verify_block_unordered(&header);
@ -944,7 +944,7 @@ mod tests {
let tempdir = TempDir::new("").unwrap();
let ethash = Ethash::new(tempdir.path(), ethparams, machine, None);
let mut header = Header::default();
header.set_seal(vec![rlp::encode(&H256::from("b251bd2e0283d0658f2cadfdc8ca619b5de94eca5742725e2e757dd13ed7503d")).into_vec(), rlp::encode(&H64::zero()).into_vec()]);
header.set_seal(vec![rlp::encode(&H256::from("b251bd2e0283d0658f2cadfdc8ca619b5de94eca5742725e2e757dd13ed7503d")), rlp::encode(&H64::zero())]);
let info = ethash.extra_info(&header);
assert_eq!(info["nonce"], "0x0000000000000000");
assert_eq!(info["mixHash"], "0xb251bd2e0283d0658f2cadfdc8ca619b5de94eca5742725e2e757dd13ed7503d");

View File

@ -152,6 +152,9 @@ pub fn new_frontier_test_machine() -> EthereumMachine { load_machine(include_byt
/// Create a new Foundation Homestead-era chain spec as though it never changed from Frontier.
pub fn new_homestead_test_machine() -> EthereumMachine { load_machine(include_bytes!("../../res/ethereum/homestead_test.json")) }
/// Create a new Foundation Homestead-EIP210-era chain spec as though it never changed from Homestead/Frontier.
pub fn new_eip210_test_machine() -> EthereumMachine { load_machine(include_bytes!("../../res/ethereum/eip210_test.json")) }
/// Create a new Foundation Byzantium era spec.
pub fn new_byzantium_test_machine() -> EthereumMachine { load_machine(include_bytes!("../../res/ethereum/byzantium_test.json")) }

View File

@ -627,7 +627,8 @@ impl<'a, B: 'a + StateBackend> Executive<'a, B> {
let schedule = self.schedule;
// refunds from SSTORE nonzero -> zero
let sstore_refunds = substate.sstore_clears_refund;
assert!(substate.sstore_clears_refund >= 0, "On transaction level, sstore clears refund cannot go below zero.");
let sstore_refunds = U256::from(substate.sstore_clears_refund as u64);
// refunds from contract suicides
let suicide_refunds = U256::from(schedule.suicide_refund_gas) * U256::from(substate.suicides.len());
let refunds_bound = sstore_refunds + suicide_refunds;
@ -1653,7 +1654,7 @@ mod tests {
let gas_used = gas - gas_left;
// sstore: 0 -> (1) -> () -> (1 -> 0 -> 1)
assert_eq!(gas_used, U256::from(41860));
assert_eq!(refund, U256::from(19800));
assert_eq!(refund, 19800);
assert_eq!(state.storage_at(&operating_address, &k).unwrap(), H256::from(U256::from(1)));
// Test a call via top-level -> y2 -> x2
@ -1671,7 +1672,7 @@ mod tests {
let gas_used = gas - gas_left;
// sstore: 1 -> (1) -> () -> (0 -> 1 -> 0)
assert_eq!(gas_used, U256::from(11860));
assert_eq!(refund, U256::from(19800));
assert_eq!(refund, 19800);
}
fn wasm_sample_code() -> Arc<Vec<u8>> {

View File

@ -394,12 +394,12 @@ impl<'a, T: 'a, V: 'a, B: 'a> Ext for Externalities<'a, T, V, B>
self.depth
}
fn add_sstore_refund(&mut self, value: U256) {
self.substate.sstore_clears_refund = self.substate.sstore_clears_refund.saturating_add(value);
fn add_sstore_refund(&mut self, value: usize) {
self.substate.sstore_clears_refund += value as i128;
}
fn sub_sstore_refund(&mut self, value: U256) {
self.substate.sstore_clears_refund = self.substate.sstore_clears_refund.saturating_sub(value);
fn sub_sstore_refund(&mut self, value: usize) {
self.substate.sstore_clears_refund -= value as i128;
}
fn trace_next_instruction(&mut self, pc: usize, instruction: u8, current_gas: U256) -> bool {

View File

@ -449,7 +449,7 @@ mod tests {
let header_rlp = "f901f9a0d405da4e66f1445d455195229624e133f5baafe72b5cf7b3c36c12c8146e98b7a01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347948888f1f195afa192cfee860698584c030f4c9db1a05fb2b4bfdef7b314451cb138a534d225c922fc0e5fbe25e451142732c3e25c25a088d2ec6b9860aae1a2c3b299f72b6a5d70d7f7ba4722c78f2c49ba96273c2158a007c6fdfa8eea7e86b81f5b0fc0f78f90cc19f4aa60d323151e0cac660199e9a1b90100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008302008003832fefba82524d84568e932a80a0a0349d8c3df71f1a48a9df7d03fd5f14aeee7d91332c009ecaff0a71ead405bd88ab4e252a7e8c2a23".from_hex().unwrap();
let header: Header = rlp::decode(&header_rlp).expect("error decoding header");
let encoded_header = rlp::encode(&header).into_vec();
let encoded_header = rlp::encode(&header);
assert_eq!(header_rlp, encoded_header);
}

View File

@ -95,10 +95,11 @@ mod difficulty_test_foundation {
declare_test!{DifficultyTests_difficultyMainNetwork, "BasicTests/difficultyMainNetwork.json"}
}
mod difficulty_test_ropsten {
difficulty_json_test_nopath!(new_ropsten_test);
declare_test!{DifficultyTests_difficultyRopsten, "BasicTests/difficultyRopsten.json"}
}
// Disabling Ropsten diff tests; waiting for upstream ethereum/tests Constantinople update
//mod difficulty_test_ropsten {
// difficulty_json_test_nopath!(new_ropsten_test);
// declare_test!{DifficultyTests_difficultyRopsten, "BasicTests/difficultyRopsten.json"}
//}
mod difficulty_test_frontier {
difficulty_json_test_nopath!(new_frontier_test);

View File

@ -209,11 +209,11 @@ impl<'a, T: 'a, V: 'a, B: 'a> Ext for TestExt<'a, T, V, B>
false
}
fn add_sstore_refund(&mut self, value: U256) {
fn add_sstore_refund(&mut self, value: usize) {
self.ext.add_sstore_refund(value)
}
fn sub_sstore_refund(&mut self, value: U256) {
fn sub_sstore_refund(&mut self, value: usize) {
self.ext.sub_sstore_refund(value)
}
}

View File

@ -20,6 +20,7 @@ use ethtrie::RlpCodec;
use ethereum_types::H256;
use memorydb::MemoryDB;
use keccak_hasher::KeccakHasher;
use kvdb::DBValue;
use super::HookType;
@ -36,7 +37,7 @@ fn test_trie<H: FnMut(&str, HookType)>(json: &[u8], trie: TrieSpec, start_stop_h
for (name, test) in tests.into_iter() {
start_stop_hook(&name, HookType::OnStart);
let mut memdb = MemoryDB::<KeccakHasher>::new();
let mut memdb = MemoryDB::<KeccakHasher, DBValue>::new();
let mut root = H256::default();
let mut t = factory.create(&mut memdb, &mut root);

View File

@ -135,8 +135,10 @@ impl EthereumMachine {
Some(contract_address),
code,
code_hash,
None,
gas,
data
data,
None,
)
}
@ -149,8 +151,10 @@ impl EthereumMachine {
contract_address: Option<Address>,
code: Option<Arc<Vec<u8>>>,
code_hash: Option<H256>,
value: Option<ActionValue>,
gas: U256,
data: Option<Vec<u8>>
data: Option<Vec<u8>>,
call_type: Option<CallType>,
) -> Result<Vec<u8>, Error> {
let env_info = {
let mut env_info = block.env_info();
@ -167,11 +171,11 @@ impl EthereumMachine {
origin: SYSTEM_ADDRESS,
gas,
gas_price: 0.into(),
value: ActionValue::Transfer(0.into()),
value: value.unwrap_or(ActionValue::Transfer(0.into())),
code,
code_hash,
data,
call_type: CallType::Call,
call_type: call_type.unwrap_or(CallType::Call),
params_type: ParamsType::Separate,
};
let schedule = self.schedule(env_info.number);
@ -367,11 +371,11 @@ impl EthereumMachine {
}
/// Does verification of the transaction against the parent state.
pub fn verify_transaction<C: BlockInfo + CallContract>(&self, t: &SignedTransaction, header: &Header, client: &C)
pub fn verify_transaction<C: BlockInfo + CallContract>(&self, t: &SignedTransaction, parent: &Header, client: &C)
-> Result<(), transaction::Error>
{
if let Some(ref filter) = self.tx_filter.as_ref() {
if !filter.transaction_allowed(header.parent_hash(), header.number(), t, client) {
if !filter.transaction_allowed(&parent.hash(), parent.number() + 1, t, client) {
return Err(transaction::Error::NotAllowed.into())
}
}
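The two new `Option` parameters keep existing callers unchanged: passing `None` preserves the old behaviour (a plain CALL transferring an actual zero value), while the block-hash system-or-code call earlier in this diff opts into an apparent-zero static call. A small sketch of the defaulting, with illustrative local enums rather than the vm crate's types:

#[derive(Debug, PartialEq)]
enum CallType { Call, StaticCall }

#[derive(Debug, PartialEq)]
enum ActionValue { Transfer(u64), Apparent(u64) }

// Mirrors the `unwrap_or` defaults in the hunk above.
fn effective(value: Option<ActionValue>, call_type: Option<CallType>) -> (ActionValue, CallType) {
    (value.unwrap_or(ActionValue::Transfer(0)), call_type.unwrap_or(CallType::Call))
}

fn main() {
    // Old callers pass None/None and behave exactly as before.
    assert_eq!(effective(None, None), (ActionValue::Transfer(0), CallType::Call));
    // The blockhash system-or-code call requests an apparent zero value and a static call.
    assert_eq!(
        effective(Some(ActionValue::Apparent(0)), Some(CallType::StaticCall)),
        (ActionValue::Apparent(0), CallType::StaticCall)
    );
}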

View File

@ -50,7 +50,7 @@ use executive::contract_address;
use header::{Header, BlockNumber};
use miner;
use miner::pool_client::{PoolClient, CachedNonceClient, NonceCache};
use receipt::{Receipt, RichReceipt};
use receipt::RichReceipt;
use spec::Spec;
use state::State;
use ethkey::Password;
@ -1039,19 +1039,17 @@ impl miner::MinerService for Miner {
self.transaction_queue.status()
}
fn pending_receipt(&self, best_block: BlockNumber, hash: &H256) -> Option<RichReceipt> {
fn pending_receipts(&self, best_block: BlockNumber) -> Option<Vec<RichReceipt>> {
self.map_existing_pending_block(|pending| {
let txs = pending.transactions();
txs.iter()
.map(|t| t.hash())
.position(|t| t == *hash)
.map(|index| {
let receipts = pending.receipts();
let receipts = pending.receipts();
pending.transactions()
.into_iter()
.enumerate()
.map(|(index, tx)| {
let prev_gas = if index == 0 { Default::default() } else { receipts[index - 1].gas_used };
let tx = &txs[index];
let receipt = &receipts[index];
RichReceipt {
transaction_hash: hash.clone(),
transaction_hash: tx.hash(),
transaction_index: index,
cumulative_gas_used: receipt.gas_used,
gas_used: receipt.gas_used - prev_gas,
@ -1067,15 +1065,7 @@ impl miner::MinerService for Miner {
outcome: receipt.outcome.clone(),
}
})
}, best_block).and_then(|x| x)
}
fn pending_receipts(&self, best_block: BlockNumber) -> Option<BTreeMap<H256, Receipt>> {
self.map_existing_pending_block(|pending| {
let hashes = pending.transactions().iter().map(|t| t.hash());
let receipts = pending.receipts().iter().cloned();
hashes.zip(receipts).collect()
.collect()
}, best_block)
}

View File

@ -44,7 +44,7 @@ use client::{
};
use error::Error;
use header::{BlockNumber, Header};
use receipt::{RichReceipt, Receipt};
use receipt::RichReceipt;
use transaction::{self, UnverifiedTransaction, SignedTransaction, PendingTransaction};
use state::StateInfo;
use ethkey::Password;
@ -95,10 +95,13 @@ pub trait MinerService : Send + Sync {
// Pending block
/// Get a list of all pending receipts from pending block.
fn pending_receipts(&self, best_block: BlockNumber) -> Option<BTreeMap<H256, Receipt>>;
fn pending_receipts(&self, best_block: BlockNumber) -> Option<Vec<RichReceipt>>;
/// Get a particular receipt from pending block.
fn pending_receipt(&self, best_block: BlockNumber, hash: &H256) -> Option<RichReceipt>;
fn pending_receipt(&self, best_block: BlockNumber, hash: &H256) -> Option<RichReceipt> {
let receipts = self.pending_receipts(best_block)?;
receipts.into_iter().find(|r| &r.transaction_hash == hash)
}
/// Get `Some` `clone()` of the current pending block's state or `None` if we're not sealing.
fn pending_state(&self, latest_block_number: BlockNumber) -> Option<Self::State>;
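Making `pending_receipt` a provided method means each `MinerService` implementor only has to supply the receipt list; the single-receipt lookup is derived from it. A simplified sketch of the pattern with toy types, not the real trait:

#[derive(Debug, PartialEq)]
struct RichReceipt { transaction_hash: u64, gas_used: u64 }

trait MinerService {
    fn pending_receipts(&self) -> Option<Vec<RichReceipt>>;

    // Provided method: derive the single-receipt query from the list.
    fn pending_receipt(&self, hash: u64) -> Option<RichReceipt> {
        self.pending_receipts()?.into_iter().find(|r| r.transaction_hash == hash)
    }
}

struct OneReceiptMiner;

impl MinerService for OneReceiptMiner {
    fn pending_receipts(&self) -> Option<Vec<RichReceipt>> {
        Some(vec![RichReceipt { transaction_hash: 7, gas_used: 21_000 }])
    }
}

fn main() {
    assert_eq!(OneReceiptMiner.pending_receipt(7).map(|r| r.gas_used), Some(21_000));
    assert_eq!(OneReceiptMiner.pending_receipt(8), None);
}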

View File

@ -138,7 +138,7 @@ impl JobDispatcher for StratumJobDispatcher {
);
self.with_core_result(|client, miner| {
let seal = vec![encode(&payload.mix_hash).into_vec(), encode(&payload.nonce).into_vec()];
let seal = vec![encode(&payload.mix_hash), encode(&payload.nonce)];
let import = miner.submit_seal(payload.pow_hash, seal)
.and_then(|block| client.import_sealed_block(block));

View File

@ -20,6 +20,7 @@ use itertools::Itertools;
use hash::{keccak};
use ethereum_types::{H256, U256};
use hashdb::HashDB;
use kvdb::DBValue;
use keccak_hasher::KeccakHasher;
use triehash::sec_trie_root;
use bytes::Bytes;
@ -67,7 +68,7 @@ impl PodAccount {
}
/// Place additional data into given hash DB.
pub fn insert_additional(&self, db: &mut HashDB<KeccakHasher>, factory: &TrieFactory<KeccakHasher, RlpCodec>) {
pub fn insert_additional(&self, db: &mut HashDB<KeccakHasher, DBValue>, factory: &TrieFactory<KeccakHasher, RlpCodec>) {
match self.code {
Some(ref c) if !c.is_empty() => { db.insert(c); }
_ => {}

View File

@ -123,7 +123,7 @@ pub fn to_fat_rlps(account_hash: &H256, acc: &BasicAccount, acct_db: &AccountDB,
let stream = ::std::mem::replace(&mut account_stream, RlpStream::new_list(2));
chunks.push(stream.out());
target_chunk_size = max_chunk_size;
leftover = Some(pair.into_vec());
leftover = Some(pair);
break;
}
},

View File

@ -30,7 +30,7 @@ use machine::EthereumMachine;
use ids::BlockId;
use header::Header;
use receipt::Receipt;
use snapshot::{Error, ManifestData};
use snapshot::{Error, ManifestData, Progress};
use itertools::{Position, Itertools};
use rlp::{RlpStream, Rlp};
@ -59,6 +59,7 @@ impl SnapshotComponents for PoaSnapshot {
chain: &BlockChain,
block_at: H256,
sink: &mut ChunkSink,
_progress: &Progress,
preferred_size: usize,
) -> Result<(), Error> {
let number = chain.block_number(&block_at)

View File

@ -22,7 +22,7 @@ use std::sync::Arc;
use blockchain::{BlockChain, BlockChainDB};
use engines::EthEngine;
use snapshot::{Error, ManifestData};
use snapshot::{Error, ManifestData, Progress};
use ethereum_types::H256;
@ -49,6 +49,7 @@ pub trait SnapshotComponents: Send {
chain: &BlockChain,
block_at: H256,
chunk_sink: &mut ChunkSink,
progress: &Progress,
preferred_size: usize,
) -> Result<(), Error>;

View File

@ -28,7 +28,7 @@ use std::sync::Arc;
use blockchain::{BlockChain, BlockChainDB, BlockProvider};
use engines::EthEngine;
use snapshot::{Error, ManifestData};
use snapshot::{Error, ManifestData, Progress};
use snapshot::block::AbridgedBlock;
use ethereum_types::H256;
use kvdb::KeyValueDB;
@ -65,6 +65,7 @@ impl SnapshotComponents for PowSnapshot {
chain: &BlockChain,
block_at: H256,
chunk_sink: &mut ChunkSink,
progress: &Progress,
preferred_size: usize,
) -> Result<(), Error> {
PowWorker {
@ -72,6 +73,7 @@ impl SnapshotComponents for PowSnapshot {
rlps: VecDeque::new(),
current_hash: block_at,
writer: chunk_sink,
progress: progress,
preferred_size: preferred_size,
}.chunk_all(self.blocks)
}
@ -96,6 +98,7 @@ struct PowWorker<'a> {
rlps: VecDeque<Bytes>,
current_hash: H256,
writer: &'a mut ChunkSink<'a>,
progress: &'a Progress,
preferred_size: usize,
}
@ -138,6 +141,7 @@ impl<'a> PowWorker<'a> {
last = self.current_hash;
self.current_hash = block.header_view().parent_hash();
self.progress.blocks.fetch_add(1, Ordering::SeqCst);
}
if loaded_size != 0 {

View File

@ -20,6 +20,7 @@
//! https://wiki.parity.io/Warp-Sync-Snapshot-Format
use std::collections::{HashMap, HashSet};
use std::cmp;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use hash::{keccak, KECCAK_NULL_RLP, KECCAK_EMPTY};
@ -33,16 +34,16 @@ use ids::BlockId;
use ethereum_types::{H256, U256};
use hashdb::HashDB;
use keccak_hasher::KeccakHasher;
use kvdb::DBValue;
use snappy;
use bytes::Bytes;
use parking_lot::Mutex;
use journaldb::{self, Algorithm, JournalDB};
use kvdb::KeyValueDB;
use kvdb::{KeyValueDB, DBValue};
use trie::{Trie, TrieMut};
use ethtrie::{TrieDB, TrieDBMut};
use rlp::{RlpStream, Rlp};
use bloom_journal::Bloom;
use num_cpus;
use self::io::SnapshotWriter;
@ -88,6 +89,28 @@ const MAX_CHUNK_SIZE: usize = PREFERRED_CHUNK_SIZE / 4 * 5;
const MIN_SUPPORTED_STATE_CHUNK_VERSION: u64 = 1;
// current state chunk version.
const STATE_CHUNK_VERSION: u64 = 2;
/// number of snapshot subparts, must be a power of 2 in [1; 256]
const SNAPSHOT_SUBPARTS: usize = 16;
/// Maximum number of snapshot subparts (must be a multiple of `SNAPSHOT_SUBPARTS`)
const MAX_SNAPSHOT_SUBPARTS: usize = 256;
/// Configuration for the Snapshot service
#[derive(Debug, Clone, PartialEq)]
pub struct SnapshotConfiguration {
/// If `true`, no periodic snapshots will be created
pub no_periodic: bool,
/// Number of threads for creating snapshots
pub processing_threads: usize,
}
impl Default for SnapshotConfiguration {
fn default() -> Self {
SnapshotConfiguration {
no_periodic: false,
processing_threads: ::std::cmp::max(1, num_cpus::get() / 2),
}
}
}
/// A progress indicator for snapshots.
#[derive(Debug, Default)]
@ -128,9 +151,10 @@ pub fn take_snapshot<W: SnapshotWriter + Send>(
engine: &EthEngine,
chain: &BlockChain,
block_at: H256,
state_db: &HashDB<KeccakHasher>,
state_db: &HashDB<KeccakHasher, DBValue>,
writer: W,
p: &Progress
p: &Progress,
processing_threads: usize,
) -> Result<(), Error> {
let start_header = chain.block_header_data(&block_at)
.ok_or(Error::InvalidStartingBlock(BlockId::Hash(block_at)))?;
@ -142,17 +166,45 @@ pub fn take_snapshot<W: SnapshotWriter + Send>(
let writer = Mutex::new(writer);
let chunker = engine.snapshot_components().ok_or(Error::SnapshotsUnsupported)?;
let snapshot_version = chunker.current_version();
let (state_hashes, block_hashes) = scope(|scope| {
let (state_hashes, block_hashes) = scope(|scope| -> Result<(Vec<H256>, Vec<H256>), Error> {
let writer = &writer;
let block_guard = scope.spawn(move || chunk_secondary(chunker, chain, block_at, writer, p));
let state_res = chunk_state(state_db, &state_root, writer, p);
state_res.and_then(|state_hashes| {
block_guard.join().map(|block_hashes| (state_hashes, block_hashes))
})
// The number of threads must be between 1 and SNAPSHOT_SUBPARTS
assert!(processing_threads >= 1, "Cannot use fewer than 1 thread for creating snapshots");
let num_threads: usize = cmp::min(processing_threads, SNAPSHOT_SUBPARTS);
info!(target: "snapshot", "Using {} threads for Snapshot creation.", num_threads);
let mut state_guards = Vec::with_capacity(num_threads as usize);
for thread_idx in 0..num_threads {
let state_guard = scope.spawn(move || -> Result<Vec<H256>, Error> {
let mut chunk_hashes = Vec::new();
for part in (thread_idx..SNAPSHOT_SUBPARTS).step_by(num_threads) {
debug!(target: "snapshot", "Chunking part {} in thread {}", part, thread_idx);
let mut hashes = chunk_state(state_db, &state_root, writer, p, Some(part))?;
chunk_hashes.append(&mut hashes);
}
Ok(chunk_hashes)
});
state_guards.push(state_guard);
}
let block_hashes = block_guard.join()?;
let mut state_hashes = Vec::new();
for guard in state_guards {
let part_state_hashes = guard.join()?;
state_hashes.extend(part_state_hashes);
}
debug!(target: "snapshot", "Took a snapshot of {} accounts", p.accounts.load(Ordering::SeqCst));
Ok((state_hashes, block_hashes))
})?;
info!("produced {} state chunks and {} block chunks.", state_hashes.len(), block_hashes.len());
info!(target: "snapshot", "produced {} state chunks and {} block chunks.", state_hashes.len(), block_hashes.len());
let manifest_data = ManifestData {
version: snapshot_version,
@ -200,6 +252,7 @@ pub fn chunk_secondary<'a>(mut chunker: Box<SnapshotComponents>, chain: &'a Bloc
chain,
start_hash,
&mut chunk_sink,
progress,
PREFERRED_CHUNK_SIZE,
)?;
}
@ -263,10 +316,12 @@ impl<'a> StateChunker<'a> {
/// Walk the given state database starting from the given root,
/// creating chunks and writing them out.
/// `part` is a number between 0 and 15, which describes which part of
/// the tree should be chunked.
///
/// Returns a list of hashes of chunks created, or any error it may
/// have encountered.
pub fn chunk_state<'a>(db: &HashDB<KeccakHasher>, root: &H256, writer: &Mutex<SnapshotWriter + 'a>, progress: &'a Progress) -> Result<Vec<H256>, Error> {
pub fn chunk_state<'a>(db: &HashDB<KeccakHasher, DBValue>, root: &H256, writer: &Mutex<SnapshotWriter + 'a>, progress: &'a Progress, part: Option<usize>) -> Result<Vec<H256>, Error> {
let account_trie = TrieDB::new(db, &root)?;
let mut chunker = StateChunker {
@ -281,11 +336,33 @@ pub fn chunk_state<'a>(db: &HashDB<KeccakHasher>, root: &H256, writer: &Mutex<Sn
let mut used_code = HashSet::new();
// account_key here is the address' hash.
for item in account_trie.iter()? {
let mut account_iter = account_trie.iter()?;
let mut seek_to = None;
if let Some(part) = part {
assert!(part < 16, "Wrong chunk state part number (must be <16) in snapshot creation.");
let part_offset = MAX_SNAPSHOT_SUBPARTS / SNAPSHOT_SUBPARTS;
let mut seek_from = vec![0; 32];
seek_from[0] = (part * part_offset) as u8;
account_iter.seek(&seek_from)?;
// Set the upper-bound, except for the last part
if part < SNAPSHOT_SUBPARTS - 1 {
seek_to = Some(((part + 1) * part_offset) as u8)
}
}
for item in account_iter {
let (account_key, account_data) = item?;
let account = ::rlp::decode(&*account_data)?;
let account_key_hash = H256::from_slice(&account_key);
if seek_to.map_or(false, |seek_to| account_key[0] >= seek_to) {
break;
}
let account = ::rlp::decode(&*account_data)?;
let account_db = AccountDB::from_hash(db, account_key_hash);
let fat_rlps = account::to_fat_rlps(&account_key_hash, &account, &account_db, &mut used_code, PREFERRED_CHUNK_SIZE - chunker.chunk_size(), PREFERRED_CHUNK_SIZE)?;
@ -416,7 +493,7 @@ struct RebuiltStatus {
// rebuild a set of accounts and their storage.
// returns a status detailing newly-loaded code and accounts missing code.
fn rebuild_accounts(
db: &mut HashDB<KeccakHasher>,
db: &mut HashDB<KeccakHasher, DBValue>,
account_fat_rlps: Rlp,
out_chunk: &mut [(H256, Bytes)],
known_code: &HashMap<H256, H256>,
@ -463,7 +540,7 @@ fn rebuild_accounts(
}
}
::rlp::encode(&acc).into_vec()
::rlp::encode(&acc)
};
*out = (hash, thin_rlp);
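To summarise the chunking scheme introduced in this file: the account trie is split into SNAPSHOT_SUBPARTS = 16 slices, worker threads take slices round-robin, and each slice maps to a first-byte prefix of the hashed account key via MAX_SNAPSHOT_SUBPARTS / SNAPSHOT_SUBPARTS = 16. A worked sketch using the same constants:

const SNAPSHOT_SUBPARTS: usize = 16;
const MAX_SNAPSHOT_SUBPARTS: usize = 256;

// Which subparts a given worker thread chunks (round-robin over the thread count).
fn parts_for_thread(thread_idx: usize, num_threads: usize) -> Vec<usize> {
    (thread_idx..SNAPSHOT_SUBPARTS).step_by(num_threads).collect()
}

// First byte of the 32-byte key the trie iterator seeks to for a given part.
fn seek_prefix(part: usize) -> u8 {
    (part * (MAX_SNAPSHOT_SUBPARTS / SNAPSHOT_SUBPARTS)) as u8
}

fn main() {
    // With 4 worker threads, thread 1 handles parts 1, 5, 9 and 13.
    assert_eq!(parts_for_thread(1, 4), vec![1, 5, 9, 13]);
    // Part 3 starts at account keys whose first byte is 0x30 and stops before 0x40.
    assert_eq!(seek_prefix(3), 0x30);
    assert_eq!(seek_prefix(4), 0x40);
}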

View File

@ -62,7 +62,7 @@ impl StateProducer {
/// Tick the state producer. This alters the state, writing new data into
/// the database.
pub fn tick<R: Rng>(&mut self, rng: &mut R, db: &mut HashDB<KeccakHasher>) {
pub fn tick<R: Rng>(&mut self, rng: &mut R, db: &mut HashDB<KeccakHasher, DBValue>) {
// modify existing accounts.
let mut accounts_to_modify: Vec<_> = {
let trie = TrieDB::new(&*db, &self.state_root).unwrap();
@ -80,7 +80,7 @@ impl StateProducer {
let mut account: BasicAccount = ::rlp::decode(&*account_data).expect("error decoding basic account");
let acct_db = AccountDBMut::from_hash(db, *address_hash);
fill_storage(acct_db, &mut account.storage_root, &mut self.storage_seed);
*account_data = DBValue::from_vec(::rlp::encode(&account).into_vec());
*account_data = DBValue::from_vec(::rlp::encode(&account));
}
// sweep again to alter account trie.
@ -131,7 +131,7 @@ pub fn fill_storage(mut db: AccountDBMut, root: &mut H256, seed: &mut H256) {
}
/// Compare two state dbs.
pub fn compare_dbs(one: &HashDB<KeccakHasher>, two: &HashDB<KeccakHasher>) {
pub fn compare_dbs(one: &HashDB<KeccakHasher, DBValue>, two: &HashDB<KeccakHasher, DBValue>) {
let keys = one.keys();
for key in keys.keys() {

View File

@ -22,7 +22,7 @@ use hash::{KECCAK_NULL_RLP, keccak};
use basic_account::BasicAccount;
use snapshot::account;
use snapshot::{chunk_state, Error as SnapshotError, Progress, StateRebuilder};
use snapshot::{chunk_state, Error as SnapshotError, Progress, StateRebuilder, SNAPSHOT_SUBPARTS};
use snapshot::io::{PackedReader, PackedWriter, SnapshotReader, SnapshotWriter};
use super::helpers::{compare_dbs, StateProducer};
@ -53,7 +53,11 @@ fn snap_and_restore() {
let state_root = producer.state_root();
let writer = Mutex::new(PackedWriter::new(&snap_file).unwrap());
let state_hashes = chunk_state(&old_db, &state_root, &writer, &Progress::default()).unwrap();
let mut state_hashes = Vec::new();
for part in 0..SNAPSHOT_SUBPARTS {
let mut hashes = chunk_state(&old_db, &state_root, &writer, &Progress::default(), Some(part)).unwrap();
state_hashes.append(&mut hashes);
}
writer.into_inner().finish(::snapshot::ManifestData {
version: 2,
@ -164,7 +168,7 @@ fn checks_flag() {
let state_root = producer.state_root();
let writer = Mutex::new(PackedWriter::new(&snap_file).unwrap());
let state_hashes = chunk_state(&old_db, &state_root, &writer, &Progress::default()).unwrap();
let state_hashes = chunk_state(&old_db, &state_root, &writer, &Progress::default(), None).unwrap();
writer.into_inner().finish(::snapshot::ManifestData {
version: 2,

View File

@ -33,7 +33,10 @@ use vm::{EnvInfo, CallType, ActionValue, ActionParams, ParamsType};
use builtin::Builtin;
use encoded;
use engines::{EthEngine, NullEngine, InstantSeal, BasicAuthority, AuthorityRound, Tendermint, DEFAULT_BLOCKHASH_CONTRACT};
use engines::{
EthEngine, NullEngine, InstantSeal, InstantSealParams, BasicAuthority,
AuthorityRound, Tendermint, DEFAULT_BLOCKHASH_CONTRACT
};
use error::Error;
use executive::Executive;
use factory::Factories;
@ -596,7 +599,8 @@ impl Spec {
match engine_spec {
ethjson::spec::Engine::Null(null) => Arc::new(NullEngine::new(null.params.into(), machine)),
ethjson::spec::Engine::Ethash(ethash) => Arc::new(::ethereum::Ethash::new(spec_params.cache_dir, ethash.params.into(), machine, spec_params.optimization_setting)),
ethjson::spec::Engine::InstantSeal(instant_seal) => Arc::new(InstantSeal::new(instant_seal.params.into(), machine)),
ethjson::spec::Engine::InstantSeal(Some(instant_seal)) => Arc::new(InstantSeal::new(instant_seal.params.into(), machine)),
ethjson::spec::Engine::InstantSeal(None) => Arc::new(InstantSeal::new(InstantSealParams::default(), machine)),
ethjson::spec::Engine::BasicAuthority(basic_authority) => Arc::new(BasicAuthority::new(basic_authority.params.into(), machine)),
ethjson::spec::Engine::AuthorityRound(authority_round) => AuthorityRound::new(authority_round.params.into(), machine)
.expect("Failed to start AuthorityRound consensus engine."),
@ -873,14 +877,13 @@ impl Spec {
data: d,
}.fake_sign(from);
let res = ::state::prove_transaction(
let res = ::state::prove_transaction_virtual(
db.as_hashdb_mut(),
*genesis.state_root(),
&tx,
self.engine.machine(),
&env_info,
factories.clone(),
true,
);
res.map(|(out, proof)| {

View File

@ -213,7 +213,7 @@ impl Account {
/// Get (and cache) the contents of the trie's storage at `key`.
/// Takes modified storage into account.
pub fn storage_at(&self, db: &HashDB<KeccakHasher>, key: &H256) -> TrieResult<H256> {
pub fn storage_at(&self, db: &HashDB<KeccakHasher, DBValue>, key: &H256) -> TrieResult<H256> {
if let Some(value) = self.cached_storage_at(key) {
return Ok(value);
}
@ -226,7 +226,7 @@ impl Account {
/// Get (and cache) the contents of the trie's storage at `key`.
/// Does not take modified storage into account.
pub fn original_storage_at(&self, db: &HashDB<KeccakHasher>, key: &H256) -> TrieResult<H256> {
pub fn original_storage_at(&self, db: &HashDB<KeccakHasher, DBValue>, key: &H256) -> TrieResult<H256> {
if let Some(value) = self.cached_original_storage_at(key) {
return Ok(value);
}
@ -248,7 +248,7 @@ impl Account {
}
}
fn get_and_cache_storage(storage_root: &H256, storage_cache: &mut LruCache<H256, H256>, db: &HashDB<KeccakHasher>, key: &H256) -> TrieResult<H256> {
fn get_and_cache_storage(storage_root: &H256, storage_cache: &mut LruCache<H256, H256>, db: &HashDB<KeccakHasher, DBValue>, key: &H256) -> TrieResult<H256> {
let db = SecTrieDB::new(db, storage_root)?;
let panicky_decoder = |bytes:&[u8]| ::rlp::decode(&bytes).expect("decoding db value failed");
let item: U256 = db.get_with(key, panicky_decoder)?.unwrap_or_else(U256::zero);
@ -354,7 +354,7 @@ impl Account {
/// Provide a database to get `code_hash`. Should not be called if it is a contract without code. Returns the cached code, if successful.
#[must_use]
pub fn cache_code(&mut self, db: &HashDB<KeccakHasher>) -> Option<Arc<Bytes>> {
pub fn cache_code(&mut self, db: &HashDB<KeccakHasher, DBValue>) -> Option<Arc<Bytes>> {
// TODO: fill out self.code_cache;
trace!("Account::cache_code: ic={}; self.code_hash={:?}, self.code_cache={}", self.is_cached(), self.code_hash, self.code_cache.pretty());
@ -384,7 +384,7 @@ impl Account {
/// Provide a database to get `code_size`. Should not be called if it is a contract without code. Returns whether
/// the cache succeeds.
#[must_use]
pub fn cache_code_size(&mut self, db: &HashDB<KeccakHasher>) -> bool {
pub fn cache_code_size(&mut self, db: &HashDB<KeccakHasher, DBValue>) -> bool {
// TODO: fill out self.code_cache;
trace!("Account::cache_code_size: ic={}; self.code_hash={:?}, self.code_cache={}", self.is_cached(), self.code_hash, self.code_cache.pretty());
self.code_size.is_some() ||
@ -473,7 +473,7 @@ impl Account {
}
/// Commit the `storage_changes` to the backing DB and update `storage_root`.
pub fn commit_storage(&mut self, trie_factory: &TrieFactory, db: &mut HashDB<KeccakHasher>) -> TrieResult<()> {
pub fn commit_storage(&mut self, trie_factory: &TrieFactory, db: &mut HashDB<KeccakHasher, DBValue>) -> TrieResult<()> {
let mut t = trie_factory.from_existing(db, &mut self.storage_root)?;
for (k, v) in self.storage_changes.drain() {
// cast key and value to trait type,
@ -490,7 +490,7 @@ impl Account {
}
/// Commit any unsaved code. `code_hash` will always return the hash of the `code_cache` after this.
pub fn commit_code(&mut self, db: &mut HashDB<KeccakHasher>) {
pub fn commit_code(&mut self, db: &mut HashDB<KeccakHasher, DBValue>) {
trace!("Commiting code of {:?} - {:?}, {:?}", self, self.code_filth == Filth::Dirty, self.code_cache.is_empty());
match (self.code_filth == Filth::Dirty, self.code_cache.is_empty()) {
(true, true) => {
@ -579,7 +579,7 @@ impl Account {
/// trie.
/// `storage_key` is the hash of the desired storage key, meaning
/// this will only work correctly under a secure trie.
pub fn prove_storage(&self, db: &HashDB<KeccakHasher>, storage_key: H256) -> TrieResult<(Vec<Bytes>, H256)> {
pub fn prove_storage(&self, db: &HashDB<KeccakHasher, DBValue>, storage_key: H256) -> TrieResult<(Vec<Bytes>, H256)> {
let mut recorder = Recorder::new();
let trie = TrieDB::new(db, &self.storage_root)?;

View File

@ -28,16 +28,17 @@ use state::Account;
use parking_lot::Mutex;
use ethereum_types::{Address, H256};
use memorydb::MemoryDB;
use hashdb::{AsHashDB, HashDB, DBValue};
use hashdb::{AsHashDB, HashDB};
use kvdb::DBValue;
use keccak_hasher::KeccakHasher;
/// State backend. See module docs for more details.
pub trait Backend: Send {
/// Treat the backend as a read-only hashdb.
fn as_hashdb(&self) -> &HashDB<KeccakHasher>;
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue>;
/// Treat the backend as a writeable hashdb.
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher>;
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue>;
/// Add an account entry to the cache.
fn add_to_account_cache(&mut self, addr: Address, data: Option<Account>, modified: bool);
@ -76,18 +77,18 @@ pub trait Backend: Send {
// TODO: when account lookup moved into backends, this won't rely as tenuously on intended
// usage.
#[derive(Clone, PartialEq)]
pub struct ProofCheck(MemoryDB<KeccakHasher>);
pub struct ProofCheck(MemoryDB<KeccakHasher, DBValue>);
impl ProofCheck {
/// Create a new `ProofCheck` backend from the given state items.
pub fn new(proof: &[DBValue]) -> Self {
let mut db = MemoryDB::<KeccakHasher>::new();
let mut db = MemoryDB::<KeccakHasher, DBValue>::new();
for item in proof { db.insert(item); }
ProofCheck(db)
}
}
impl HashDB<KeccakHasher> for ProofCheck {
impl HashDB<KeccakHasher, DBValue> for ProofCheck {
fn keys(&self) -> HashMap<H256, i32> { self.0.keys() }
fn get(&self, key: &H256) -> Option<DBValue> {
self.0.get(key)
@ -108,14 +109,14 @@ impl HashDB<KeccakHasher> for ProofCheck {
fn remove(&mut self, _key: &H256) { }
}
impl AsHashDB<KeccakHasher> for ProofCheck {
fn as_hashdb(&self) -> &HashDB<KeccakHasher> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher> { self }
impl AsHashDB<KeccakHasher, DBValue> for ProofCheck {
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
}
impl Backend for ProofCheck {
fn as_hashdb(&self) -> &HashDB<KeccakHasher> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher> { self }
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
fn add_to_account_cache(&mut self, _addr: Address, _data: Option<Account>, _modified: bool) {}
fn cache_code(&self, _hash: H256, _code: Arc<Vec<u8>>) {}
fn get_cached_account(&self, _addr: &Address) -> Option<Option<Account>> { None }
@ -134,18 +135,18 @@ impl Backend for ProofCheck {
/// The proof-of-execution can be extracted with `extract_proof`.
///
/// This doesn't cache anything or rely on the canonical state caches.
pub struct Proving<H: AsHashDB<KeccakHasher>> {
pub struct Proving<H: AsHashDB<KeccakHasher, DBValue>> {
base: H, // state we're proving values from.
changed: MemoryDB<KeccakHasher>, // changed state via insertions.
changed: MemoryDB<KeccakHasher, DBValue>, // changed state via insertions.
proof: Mutex<HashSet<DBValue>>,
}
impl<AH: AsHashDB<KeccakHasher> + Send + Sync> AsHashDB<KeccakHasher> for Proving<AH> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher> { self }
impl<AH: AsHashDB<KeccakHasher, DBValue> + Send + Sync> AsHashDB<KeccakHasher, DBValue> for Proving<AH> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
}
impl<H: AsHashDB<KeccakHasher> + Send + Sync> HashDB<KeccakHasher> for Proving<H> {
impl<H: AsHashDB<KeccakHasher, DBValue> + Send + Sync> HashDB<KeccakHasher, DBValue> for Proving<H> {
fn keys(&self) -> HashMap<H256, i32> {
let mut keys = self.base.as_hashdb().keys();
keys.extend(self.changed.keys());
@ -182,10 +183,10 @@ impl<H: AsHashDB<KeccakHasher> + Send + Sync> HashDB<KeccakHasher> for Proving<H
}
}
impl<H: AsHashDB<KeccakHasher> + Send + Sync> Backend for Proving<H> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher> { self }
impl<H: AsHashDB<KeccakHasher, DBValue> + Send + Sync> Backend for Proving<H> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher> { self }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> { self }
fn add_to_account_cache(&mut self, _: Address, _: Option<Account>, _: bool) { }
@ -204,13 +205,13 @@ impl<H: AsHashDB<KeccakHasher> + Send + Sync> Backend for Proving<H> {
fn is_known_null(&self, _: &Address) -> bool { false }
}
impl<H: AsHashDB<KeccakHasher>> Proving<H> {
impl<H: AsHashDB<KeccakHasher, DBValue>> Proving<H> {
/// Create a new `Proving` over a base database.
/// This will store all values ever fetched from that base.
pub fn new(base: H) -> Self {
Proving {
base: base,
changed: MemoryDB::<KeccakHasher>::new(),
changed: MemoryDB::<KeccakHasher, DBValue>::new(),
proof: Mutex::new(HashSet::new()),
}
}
@ -222,7 +223,7 @@ impl<H: AsHashDB<KeccakHasher>> Proving<H> {
}
}
impl<H: AsHashDB<KeccakHasher> + Clone> Clone for Proving<H> {
impl<H: AsHashDB<KeccakHasher, DBValue> + Clone> Clone for Proving<H> {
fn clone(&self) -> Self {
Proving {
base: self.base.clone(),
@ -236,12 +237,12 @@ impl<H: AsHashDB<KeccakHasher> + Clone> Clone for Proving<H> {
/// it. Doesn't cache anything.
pub struct Basic<H>(pub H);
impl<H: AsHashDB<KeccakHasher> + Send + Sync> Backend for Basic<H> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher> {
impl<H: AsHashDB<KeccakHasher, DBValue> + Send + Sync> Backend for Basic<H> {
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> {
self.0.as_hashdb()
}
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher> {
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> {
self.0.as_hashdb_mut()
}

View File

@ -224,17 +224,16 @@ pub fn check_proof(
}
}
/// Prove a transaction on the given state.
/// Prove a `virtual` transaction on the given state.
/// Returns `None` when the transaction could not be proved,
/// and a proof otherwise.
pub fn prove_transaction<H: AsHashDB<KeccakHasher> + Send + Sync>(
pub fn prove_transaction_virtual<H: AsHashDB<KeccakHasher, DBValue> + Send + Sync>(
db: H,
root: H256,
transaction: &SignedTransaction,
machine: &Machine,
env_info: &EnvInfo,
factories: Factories,
virt: bool,
) -> Option<(Bytes, Vec<DBValue>)> {
use self::backend::Proving;
@ -252,7 +251,7 @@ pub fn prove_transaction<H: AsHashDB<KeccakHasher> + Send + Sync>(
};
let options = TransactOptions::with_no_tracing().dont_check_nonce().save_output_from_contract();
match state.execute(env_info, machine, transaction, options, virt) {
match state.execute(env_info, machine, transaction, options, true) {
Err(ExecutionError::Internal(_)) => None,
Err(e) => {
trace!(target: "state", "Proved call failed: {}", e);
@ -619,7 +618,7 @@ impl<B: Backend> State<B> {
&self, address: &Address, key: &H256, f_cached_at: FCachedStorageAt, f_at: FStorageAt,
) -> TrieResult<H256> where
FCachedStorageAt: Fn(&Account, &H256) -> Option<H256>,
FStorageAt: Fn(&Account, &HashDB<KeccakHasher>, &H256) -> TrieResult<H256>
FStorageAt: Fn(&Account, &HashDB<KeccakHasher, DBValue>, &H256) -> TrieResult<H256>
{
// Storage key search and update works like this:
// 1. If there's an entry for the account in the local cache check for the key and return it if found.
@ -1015,7 +1014,7 @@ impl<B: Backend> State<B> {
/// Load required account data from the databases. Returns whether the cache succeeds.
#[must_use]
fn update_account_cache(require: RequireCache, account: &mut Account, state_db: &B, db: &HashDB<KeccakHasher>) -> bool {
fn update_account_cache(require: RequireCache, account: &mut Account, state_db: &B, db: &HashDB<KeccakHasher, DBValue>) -> bool {
if let RequireCache::None = require {
return true;
}

View File

@ -16,7 +16,7 @@
//! Execution environment substate.
use std::collections::HashSet;
use ethereum_types::{U256, Address};
use ethereum_types::Address;
use log_entry::LogEntry;
use evm::{Schedule, CleanDustMode};
use super::CleanupMode;
@ -35,7 +35,7 @@ pub struct Substate {
pub logs: Vec<LogEntry>,
/// Refund counter of SSTORE.
pub sstore_clears_refund: U256,
pub sstore_clears_refund: i128,
/// Created contracts.
pub contracts_created: Vec<Address>,
@ -52,7 +52,7 @@ impl Substate {
self.suicides.extend(s.suicides);
self.touched.extend(s.touched);
self.logs.extend(s.logs);
self.sstore_clears_refund = self.sstore_clears_refund + s.sstore_clears_refund;
self.sstore_clears_refund += s.sstore_clears_refund;
self.contracts_created.extend(s.contracts_created);
}
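The refund counter switches from an unsigned U256 to a signed i128 because EIP-1283 net gas metering can subtract refunds mid-call; only at the transaction level is the total asserted to be non-negative (see the Executive hunk earlier in this diff). A minimal sketch of that invariant, with illustrative refund amounts:

#[derive(Default)]
struct Substate { sstore_clears_refund: i128 }

impl Substate {
    fn add_sstore_refund(&mut self, value: usize) { self.sstore_clears_refund += value as i128; }
    fn sub_sstore_refund(&mut self, value: usize) { self.sstore_clears_refund -= value as i128; }
}

fn main() {
    let mut s = Substate::default();
    s.add_sstore_refund(15_000);
    // A later SSTORE in the same call tree may cancel the earlier refund...
    s.sub_sstore_refund(15_000);
    s.add_sstore_refund(200);
    // ...but once the whole transaction has finished, the total must not be negative.
    assert!(s.sstore_clears_refund >= 0, "sstore clears refund cannot go below zero at transaction level");
}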

View File

@ -29,7 +29,7 @@ use hashdb::HashDB;
use keccak_hasher::KeccakHasher;
use header::BlockNumber;
use journaldb::JournalDB;
use kvdb::{KeyValueDB, DBTransaction};
use kvdb::{KeyValueDB, DBTransaction, DBValue};
use lru_cache::LruCache;
use memory_cache::MemoryLruCache;
use parking_lot::Mutex;
@ -312,12 +312,12 @@ impl StateDB {
}
/// Conversion method to interpret self as `HashDB` reference
pub fn as_hashdb(&self) -> &HashDB<KeccakHasher> {
pub fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> {
self.db.as_hashdb()
}
/// Conversion method to interpret self as mutable `HashDB` reference
pub fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher> {
pub fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> {
self.db.as_hashdb_mut()
}
@ -412,9 +412,9 @@ impl StateDB {
}
impl state::Backend for StateDB {
fn as_hashdb(&self) -> &HashDB<KeccakHasher> { self.db.as_hashdb() }
fn as_hashdb(&self) -> &HashDB<KeccakHasher, DBValue> { self.db.as_hashdb() }
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher> {
fn as_hashdb_mut(&mut self) -> &mut HashDB<KeccakHasher, DBValue> {
self.db.as_hashdb_mut()
}

View File

@ -90,7 +90,7 @@ pub fn create_test_block_with_data(header: &Header, transactions: &[SignedTransa
rlp.append(header);
rlp.begin_list(transactions.len());
for t in transactions {
rlp.append_raw(&rlp::encode(t).into_vec(), 1);
rlp.append_raw(&rlp::encode(t), 1);
}
rlp.append_list(&uncles);
rlp.out()

View File

@ -38,7 +38,7 @@ fn test_blockhash_eip210(factory: Factory) {
let test_blockhash_contract = "73fffffffffffffffffffffffffffffffffffffffe33141561007a57600143036020526000356101006020510755600061010060205107141561005057600035610100610100602051050761010001555b6000620100006020510714156100755760003561010062010000602051050761020001555b61014a565b4360003512151561009057600060405260206040f35b610100600035430312156100b357610100600035075460605260206060f3610149565b62010000600035430312156100d157600061010060003507146100d4565b60005b156100f6576101006101006000350507610100015460805260206080f3610148565b630100000060003543031215610116576000620100006000350714610119565b60005b1561013c57610100620100006000350507610200015460a052602060a0f3610147565b600060c052602060c0f35b5b5b5b5b";
let blockhash_contract_code = Arc::new(test_blockhash_contract.from_hex().unwrap());
let blockhash_contract_code_hash = keccak(blockhash_contract_code.as_ref());
let machine = ::ethereum::new_constantinople_test_machine();
let machine = ::ethereum::new_eip210_test_machine();
let mut env_info = EnvInfo::default();
// populate state with 256 last hashes

View File

@ -465,19 +465,19 @@ impl<K: Kind> VerificationQueue<K> {
/// Add a block to the queue.
pub fn import(&self, input: K::Input) -> ImportResult {
let h = input.hash();
let hash = input.hash();
{
if self.processing.read().contains_key(&h) {
if self.processing.read().contains_key(&hash) {
bail!(ErrorKind::Import(ImportErrorKind::AlreadyQueued));
}
let mut bad = self.verification.bad.lock();
if bad.contains(&h) {
if bad.contains(&hash) {
bail!(ErrorKind::Import(ImportErrorKind::KnownBad));
}
if bad.contains(&input.parent_hash()) {
bad.insert(h.clone());
bad.insert(hash);
bail!(ErrorKind::Import(ImportErrorKind::KnownBad));
}
}
@ -486,21 +486,21 @@ impl<K: Kind> VerificationQueue<K> {
Ok(item) => {
self.verification.sizes.unverified.fetch_add(item.heap_size_of_children(), AtomicOrdering::SeqCst);
self.processing.write().insert(h.clone(), item.difficulty());
self.processing.write().insert(hash, item.difficulty());
{
let mut td = self.total_difficulty.write();
*td = *td + item.difficulty();
}
self.verification.unverified.lock().push_back(item);
self.more_to_verify.notify_all();
Ok(h)
Ok(hash)
},
Err(err) => {
match err {
// Don't mark future blocks as bad.
Error(ErrorKind::Block(BlockError::TemporarilyInvalid(_)), _) => {},
_ => {
self.verification.bad.lock().insert(h.clone());
self.verification.bad.lock().insert(hash);
}
}
Err(err)

View File

@ -147,7 +147,9 @@ pub fn verify_block_family<C: BlockInfo + CallContract>(header: &Header, parent:
verify_uncles(params.block, params.block_provider, engine)?;
for tx in &params.block.transactions {
engine.machine().verify_transaction(tx, header, params.client)?;
// transactions are verified against the parent header since the current
// state wasn't available when the tx was created
engine.machine().verify_transaction(tx, parent, params.client)?;
}
Ok(())
@ -245,15 +247,15 @@ fn verify_uncles(block: &PreverifiedBlock, bc: &BlockProvider, engine: &EthEngin
/// Phase 4 verification. Check block information against transaction enactment results,
pub fn verify_block_final(expected: &Header, got: &Header) -> Result<(), Error> {
if expected.state_root() != got.state_root() {
return Err(From::from(BlockError::InvalidStateRoot(Mismatch { expected: expected.state_root().clone(), found: got.state_root().clone() })))
}
if expected.gas_used() != got.gas_used() {
return Err(From::from(BlockError::InvalidGasUsed(Mismatch { expected: expected.gas_used().clone(), found: got.gas_used().clone() })))
}
if expected.log_bloom() != got.log_bloom() {
return Err(From::from(BlockError::InvalidLogBloom(Mismatch { expected: expected.log_bloom().clone(), found: got.log_bloom().clone() })))
}
if expected.state_root() != got.state_root() {
return Err(From::from(BlockError::InvalidStateRoot(Mismatch { expected: expected.state_root().clone(), found: got.state_root().clone() })))
}
if expected.receipts_root() != got.receipts_root() {
return Err(From::from(BlockError::InvalidReceiptsRoot(Mismatch { expected: expected.receipts_root().clone(), found: got.receipts_root().clone() })))
}
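In the verify_block_family hunk above, the transaction filter is now queried with the parent's hash and the parent's number plus one, i.e. the height of the block being verified, because that block's own state does not exist yet. A small worked illustration with a simplified header type, not the ethcore one:

struct Header { number: u64, hash: [u8; 32] }

// The (state hash, block height) the transaction filter should be asked about.
fn filter_query(parent: &Header) -> ([u8; 32], u64) {
    (parent.hash, parent.number + 1)
}

fn main() {
    let parent = Header { number: 41, hash: [0xab; 32] };
    let (hash, height) = filter_query(&parent);
    assert_eq!(height, 42);
    assert_eq!(hash, parent.hash);
}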

View File

@ -38,9 +38,9 @@ impl<'a> BodyView<'a> {
/// ```
/// #[macro_use]
/// extern crate ethcore;
///
///
/// use ethcore::views::{BodyView};
///
///
/// fn main() {
/// let bytes : &[u8] = &[];
/// let body_view = view!(BodyView, bytes);

View File

@ -17,10 +17,10 @@ ethcore-light = { path = "../light" }
ethcore-transaction = { path = "../transaction" }
ethcore = { path = ".." }
ethereum-types = "0.4"
hashdb = "0.2.1"
fastmap = { path = "../../util/fastmap" }
rlp = { version = "0.2.4", features = ["ethereum"] }
rustc-hex = "1.0"
hashdb = "0.3.0"
fastmap = { path = "../../util/fastmap" }
rlp = { version = "0.3.0", features = ["ethereum"] }
keccak-hash = "0.1"
keccak-hasher = { path = "../../util/keccak-hasher" }
triehash-ethereum = {version = "0.2", path = "../../util/triehash-ethereum" }
@ -31,7 +31,6 @@ env_logger = "0.5"
rand = "0.4"
heapsize = "0.4"
semver = "0.9"
smallvec = { version = "0.4", features = ["heapsizeof"] }
parking_lot = "0.6"
trace-time = "0.1"
ipnetwork = "0.12.6"

View File

@ -264,12 +264,10 @@ pub struct EthSync {
fn light_params(
network_id: u64,
max_peers: u32,
median_peers: f64,
pruning_info: PruningInfo,
sample_store: Option<Box<SampleStore>>,
) -> LightParams {
const MAX_LIGHTSERV_LOAD: f64 = 0.5;
let mut light_params = LightParams {
network_id: network_id,
config: Default::default(),
@ -282,9 +280,7 @@ fn light_params(
sample_store: sample_store,
};
let max_peers = ::std::cmp::max(max_peers, 1);
light_params.config.load_share = MAX_LIGHTSERV_LOAD / max_peers as f64;
light_params.config.median_peers = median_peers;
light_params
}
@ -301,9 +297,10 @@ impl EthSync {
.map(|mut p| { p.push("request_timings"); light_net::FileStore(p) })
.map(|store| Box::new(store) as Box<_>);
let median_peers = (params.network_config.min_peers + params.network_config.max_peers) as f64 / 2.0;
let light_params = light_params(
params.config.network_id,
params.network_config.max_peers,
median_peers,
pruning_info,
sample_store,
);
@ -544,7 +541,7 @@ struct TxRelay(Arc<BlockChainClient>);
impl LightHandler for TxRelay {
fn on_transactions(&self, ctx: &EventContext, relay: &[::transaction::UnverifiedTransaction]) {
trace!(target: "pip", "Relaying {} transactions from peer {}", relay.len(), ctx.peer());
self.0.queue_transactions(relay.iter().map(|tx| ::rlp::encode(tx).into_vec()).collect(), ctx.peer())
self.0.queue_transactions(relay.iter().map(|tx| ::rlp::encode(tx)).collect(), ctx.peer())
}
}
@ -940,19 +937,3 @@ impl LightSyncProvider for LightSync {
Default::default() // TODO
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn light_params_load_share_depends_on_max_peers() {
let pruning_info = PruningInfo {
earliest_chain: 0,
earliest_state: 0,
};
let params1 = light_params(0, 10, pruning_info.clone(), None);
let params2 = light_params(0, 20, pruning_info, None);
assert!(params1.config.load_share > params2.config.load_share)
}
}
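The removed load_share heuristic is replaced by a median peer count fed into the PIP cost table; it is simply the midpoint of the configured minimum and maximum peer counts. Worked out for illustrative bounds:

// Midpoint of the configured peer bounds, as computed in EthSync::new above.
fn median_peers(min_peers: u32, max_peers: u32) -> f64 {
    (min_peers + max_peers) as f64 / 2.0
}

fn main() {
    // e.g. a node configured for 25..50 peers prices requests for 37.5 leeching peers.
    assert_eq!(median_peers(25, 50), 37.5);
}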

View File

@ -28,6 +28,7 @@ use ethcore::client::{BlockStatus, BlockId, BlockImportError, BlockImportErrorKi
use ethcore::error::{ImportErrorKind, QueueErrorKind, BlockError};
use sync_io::SyncIo;
use blocks::{BlockCollection, SyncBody, SyncHeader};
use chain::BlockSet;
const MAX_HEADERS_TO_REQUEST: usize = 128;
const MAX_BODIES_TO_REQUEST: usize = 32;
@ -35,6 +36,26 @@ const MAX_RECEPITS_TO_REQUEST: usize = 128;
const SUBCHAIN_SIZE: u64 = 256;
const MAX_ROUND_PARENTS: usize = 16;
const MAX_PARALLEL_SUBCHAIN_DOWNLOAD: usize = 5;
const MAX_USELESS_HEADERS_PER_ROUND: usize = 3;
// logging macros prepend BlockSet context for log filtering
macro_rules! trace_sync {
($self:ident, $fmt:expr, $($arg:tt)+) => {
trace!(target: "sync", concat!("{:?}: ", $fmt), $self.block_set, $($arg)+);
};
($self:ident, $fmt:expr) => {
trace!(target: "sync", concat!("{:?}: ", $fmt), $self.block_set);
};
}
macro_rules! debug_sync {
($self:ident, $fmt:expr, $($arg:tt)+) => {
debug!(target: "sync", concat!("{:?}: ", $fmt), $self.block_set, $($arg)+);
};
($self:ident, $fmt:expr) => {
debug!(target: "sync", concat!("{:?}: ", $fmt), $self.block_set);
};
}
#[derive(Copy, Clone, Eq, PartialEq, Debug)]
/// Downloader state
@ -65,6 +86,7 @@ pub enum BlockRequest {
}
/// Indicates sync action
#[derive(Eq, PartialEq, Debug)]
pub enum DownloadAction {
/// Do nothing
None,
@ -89,15 +111,17 @@ impl From<rlp::DecoderError> for BlockDownloaderImportError {
/// Block downloader strategy.
/// Manages state and block data for a block download process.
pub struct BlockDownloader {
/// Which set of blocks to download
block_set: BlockSet,
/// Downloader state
state: State,
/// Highest block number seen
highest_block: Option<BlockNumber>,
/// Downloaded blocks, holds `H`, `B` and `S`
blocks: BlockCollection,
/// Last impoted block number
/// Last imported block number
last_imported_block: BlockNumber,
/// Last impoted block hash
/// Last imported block hash
last_imported_hash: H256,
/// Number of blocks imported this round
imported_this_round: Option<usize>,
@ -112,15 +136,20 @@ pub struct BlockDownloader {
target_hash: Option<H256>,
/// Probing range for seeking common best block.
retract_step: u64,
/// Whether reorg should be limited.
limit_reorg: bool,
/// consecutive useless headers this round
useless_headers_count: usize,
}
impl BlockDownloader {
/// Create a new instance of syncing strategy. This won't reorganize to before the
/// last kept state.
pub fn new(sync_receipts: bool, start_hash: &H256, start_number: BlockNumber) -> Self {
/// Create a new instance of syncing strategy.
/// For BlockSet::NewBlocks this won't reorganize to before the last kept state.
pub fn new(block_set: BlockSet, start_hash: &H256, start_number: BlockNumber) -> Self {
let sync_receipts = match block_set {
BlockSet::NewBlocks => false,
BlockSet::OldBlocks => true
};
BlockDownloader {
block_set,
state: State::Idle,
highest_block: None,
last_imported_block: start_number,
@ -133,32 +162,14 @@ impl BlockDownloader {
download_receipts: sync_receipts,
target_hash: None,
retract_step: 1,
limit_reorg: true,
}
}
/// Create a new instance of sync with unlimited reorg allowed.
pub fn with_unlimited_reorg(sync_receipts: bool, start_hash: &H256, start_number: BlockNumber) -> Self {
BlockDownloader {
state: State::Idle,
highest_block: None,
last_imported_block: start_number,
last_imported_hash: start_hash.clone(),
last_round_start: start_number,
last_round_start_hash: start_hash.clone(),
blocks: BlockCollection::new(sync_receipts),
imported_this_round: None,
round_parents: VecDeque::new(),
download_receipts: sync_receipts,
target_hash: None,
retract_step: 1,
limit_reorg: false,
useless_headers_count: 0,
}
}
/// Reset sync. Clear all local downloaded data.
pub fn reset(&mut self) {
self.blocks.clear();
self.useless_headers_count = 0;
self.state = State::Idle;
}
@ -220,45 +231,65 @@ impl BlockDownloader {
}
/// Add new block headers.
pub fn import_headers(&mut self, io: &mut SyncIo, r: &Rlp, expected_hash: Option<H256>) -> Result<DownloadAction, BlockDownloaderImportError> {
pub fn import_headers(&mut self, io: &mut SyncIo, r: &Rlp, expected_hash: H256) -> Result<DownloadAction, BlockDownloaderImportError> {
let item_count = r.item_count().unwrap_or(0);
if self.state == State::Idle {
trace!(target: "sync", "Ignored unexpected block headers");
trace_sync!(self, "Ignored unexpected block headers");
return Ok(DownloadAction::None)
}
if item_count == 0 && (self.state == State::Blocks) {
return Err(BlockDownloaderImportError::Invalid);
}
// The request is generated in ::request_blocks.
let (max_count, skip) = if self.state == State::ChainHead {
(SUBCHAIN_SIZE as usize, (MAX_HEADERS_TO_REQUEST - 2) as u64)
} else {
(MAX_HEADERS_TO_REQUEST, 0)
};
if item_count > max_count {
debug!(target: "sync", "Headers response is larger than expected");
return Err(BlockDownloaderImportError::Invalid);
}
let mut headers = Vec::new();
let mut hashes = Vec::new();
let mut valid_response = item_count == 0; //empty response is valid
let mut any_known = false;
let mut last_header = None;
for i in 0..item_count {
let info = SyncHeader::from_rlp(r.at(i)?.as_raw().to_vec())?;
let number = BlockNumber::from(info.header.number());
let hash = info.header.hash();
// Check if any of the headers matches the hash we requested
if !valid_response {
if let Some(expected) = expected_hash {
valid_response = expected == hash;
let valid_response = match last_header {
// First header must match expected hash.
None => expected_hash == hash,
Some((last_number, last_hash)) => {
// Subsequent headers must be spaced by skip interval.
let skip_valid = number == last_number + skip + 1;
// Consecutive headers must be linked by parent hash.
let parent_valid = (number != last_number + 1) || *info.header.parent_hash() == last_hash;
skip_valid && parent_valid
}
}
any_known = any_known || self.blocks.contains_head(&hash);
if self.blocks.contains(&hash) {
trace!(target: "sync", "Skipping existing block header {} ({:?})", number, hash);
continue;
};
// Disable the peer for this syncing round if it gives invalid chain
if !valid_response {
debug!(target: "sync", "Invalid headers response");
return Err(BlockDownloaderImportError::Invalid);
}
if self.highest_block.as_ref().map_or(true, |n| number > *n) {
self.highest_block = Some(number);
last_header = Some((number, hash));
if self.blocks.contains(&hash) {
trace_sync!(self, "Skipping existing block header {} ({:?})", number, hash);
continue;
}
match io.chain().block_status(BlockId::Hash(hash.clone())) {
BlockStatus::InChain | BlockStatus::Queued => {
match self.state {
State::Blocks => trace!(target: "sync", "Header already in chain {} ({})", number, hash),
_ => trace!(target: "sync", "Header already in chain {} ({}), state = {:?}", number, hash, self.state),
State::Blocks => trace_sync!(self, "Header already in chain {} ({})", number, hash),
_ => trace_sync!(self, "Header already in chain {} ({}), state = {:?}", number, hash, self.state),
}
headers.push(info);
hashes.push(hash);
@ -273,53 +304,67 @@ impl BlockDownloader {
}
}
// Disable the peer for this syncing round if it gives invalid chain
if !valid_response {
trace!(target: "sync", "Invalid headers response");
return Err(BlockDownloaderImportError::Invalid);
if let Some((number, _)) = last_header {
if self.highest_block.as_ref().map_or(true, |n| number > *n) {
self.highest_block = Some(number);
}
}
match self.state {
State::ChainHead => {
if !headers.is_empty() {
// TODO: validate heads better. E.g. check that there is enough distance between blocks.
trace!(target: "sync", "Received {} subchain heads, proceeding to download", headers.len());
trace_sync!(self, "Received {} subchain heads, proceeding to download", headers.len());
self.blocks.reset_to(hashes);
self.state = State::Blocks;
return Ok(DownloadAction::Reset);
} else {
trace_sync!(self, "No useful subchain heads received, expected hash {:?}", expected_hash);
let best = io.chain().chain_info().best_block_number;
let oldest_reorg = io.chain().pruning_info().earliest_state;
let last = self.last_imported_block;
if self.limit_reorg && best > last && (last == 0 || last < oldest_reorg) {
trace!(target: "sync", "No common block, disabling peer");
return Err(BlockDownloaderImportError::Invalid);
match self.block_set {
BlockSet::NewBlocks if best > last && (last == 0 || last < oldest_reorg) => {
trace_sync!(self, "No common block, disabling peer");
return Err(BlockDownloaderImportError::Invalid)
},
BlockSet::OldBlocks => {
trace_sync!(self, "Expected some useful headers for downloading OldBlocks. Try a different peer");
return Err(BlockDownloaderImportError::Useless)
},
_ => (),
}
}
},
State::Blocks => {
let count = headers.len();
// At least one of the heades must advance the subchain. Otherwise they are all useless.
if count == 0 || !any_known {
trace!(target: "sync", "No useful headers");
// At least one of the headers must advance the subchain. Otherwise they are all useless.
if count == 0 {
self.useless_headers_count += 1;
trace_sync!(self, "No useful headers ({:?} this round), expected hash {:?}", self.useless_headers_count, expected_hash);
// only reset the download if we have multiple subchain heads, to avoid unnecessary resets
// when we are at the head of the chain, where we may legitimately receive no useful headers
if self.blocks.heads_len() > 1 && self.useless_headers_count >= MAX_USELESS_HEADERS_PER_ROUND {
trace_sync!(self, "Received {:?} useless responses this round. Resetting sync", MAX_USELESS_HEADERS_PER_ROUND);
self.reset();
}
return Err(BlockDownloaderImportError::Useless);
}
self.blocks.insert_headers(headers);
trace!(target: "sync", "Inserted {} headers", count);
trace_sync!(self, "Inserted {} headers", count);
},
_ => trace!(target: "sync", "Unexpected headers({})", headers.len()),
_ => trace_sync!(self, "Unexpected headers({})", headers.len()),
}
Ok(DownloadAction::None)
}
/// Called by peer once it has new block bodies
pub fn import_bodies(&mut self, r: &Rlp) -> Result<(), BlockDownloaderImportError> {
pub fn import_bodies(&mut self, r: &Rlp, expected_hashes: &[H256]) -> Result<(), BlockDownloaderImportError> {
let item_count = r.item_count().unwrap_or(0);
if item_count == 0 {
return Err(BlockDownloaderImportError::Useless);
} else if self.state != State::Blocks {
trace!(target: "sync", "Ignored unexpected block bodies");
trace_sync!(self, "Ignored unexpected block bodies");
} else {
let mut bodies = Vec::with_capacity(item_count);
for i in 0..item_count {
@ -327,8 +372,13 @@ impl BlockDownloader {
bodies.push(body);
}
if self.blocks.insert_bodies(bodies) != item_count {
trace!(target: "sync", "Deactivating peer for giving invalid block bodies");
let hashes = self.blocks.insert_bodies(bodies);
if hashes.len() != item_count {
trace_sync!(self, "Deactivating peer for giving invalid block bodies");
return Err(BlockDownloaderImportError::Invalid);
}
if !all_expected(hashes.as_slice(), expected_hashes, |&a, &b| a == b) {
trace_sync!(self, "Deactivating peer for giving unexpected block bodies");
return Err(BlockDownloaderImportError::Invalid);
}
}
@ -336,25 +386,30 @@ impl BlockDownloader {
}
/// Called by peer once it has new block bodies
pub fn import_receipts(&mut self, _io: &mut SyncIo, r: &Rlp) -> Result<(), BlockDownloaderImportError> {
pub fn import_receipts(&mut self, r: &Rlp, expected_hashes: &[H256]) -> Result<(), BlockDownloaderImportError> {
let item_count = r.item_count().unwrap_or(0);
if item_count == 0 {
return Err(BlockDownloaderImportError::Useless);
}
else if self.state != State::Blocks {
trace!(target: "sync", "Ignored unexpected block receipts");
trace_sync!(self, "Ignored unexpected block receipts");
}
else {
let mut receipts = Vec::with_capacity(item_count);
for i in 0..item_count {
let receipt = r.at(i).map_err(|e| {
trace!(target: "sync", "Error decoding block receipts RLP: {:?}", e);
trace_sync!(self, "Error decoding block receipts RLP: {:?}", e);
BlockDownloaderImportError::Invalid
})?;
receipts.push(receipt.as_raw().to_vec());
}
if self.blocks.insert_receipts(receipts) != item_count {
trace!(target: "sync", "Deactivating peer for giving invalid block receipts");
let hashes = self.blocks.insert_receipts(receipts);
if hashes.len() != item_count {
trace_sync!(self, "Deactivating peer for giving invalid block receipts");
return Err(BlockDownloaderImportError::Invalid);
}
if !all_expected(hashes.as_slice(), expected_hashes, |a, b| a.contains(b)) {
trace_sync!(self, "Deactivating peer for giving unexpected block receipts");
return Err(BlockDownloaderImportError::Invalid);
}
}
@ -363,7 +418,7 @@ impl BlockDownloader {
fn start_sync_round(&mut self, io: &mut SyncIo) {
self.state = State::ChainHead;
trace!(target: "sync", "Starting round (last imported count = {:?}, last started = {}, block = {:?}", self.imported_this_round, self.last_round_start, self.last_imported_block);
trace_sync!(self, "Starting round (last imported count = {:?}, last started = {}, block = {:?}", self.imported_this_round, self.last_round_start, self.last_imported_block);
// Check if need to retract to find the common block. The problem is that the peers still return headers by hash even
// from the non-canonical part of the tree. So we also retract if nothing has been imported last round.
let start = self.last_round_start;
@ -375,12 +430,12 @@ impl BlockDownloader {
if let Some(&(_, p)) = self.round_parents.iter().find(|&&(h, _)| h == start_hash) {
self.last_imported_block = start - 1;
self.last_imported_hash = p.clone();
trace!(target: "sync", "Searching common header from the last round {} ({})", self.last_imported_block, self.last_imported_hash);
trace_sync!(self, "Searching common header from the last round {} ({})", self.last_imported_block, self.last_imported_hash);
} else {
let best = io.chain().chain_info().best_block_number;
let oldest_reorg = io.chain().pruning_info().earliest_state;
if self.limit_reorg && best > start && start < oldest_reorg {
debug!(target: "sync", "Could not revert to previous ancient block, last: {} ({})", start, start_hash);
if self.block_set == BlockSet::NewBlocks && best > start && start < oldest_reorg {
debug_sync!(self, "Could not revert to previous ancient block, last: {} ({})", start, start_hash);
self.reset();
} else {
let n = start - cmp::min(self.retract_step, start);
@ -389,10 +444,10 @@ impl BlockDownloader {
Some(h) => {
self.last_imported_block = n;
self.last_imported_hash = h;
trace!(target: "sync", "Searching common header in the blockchain {} ({})", start, self.last_imported_hash);
trace_sync!(self, "Searching common header in the blockchain {} ({})", start, self.last_imported_hash);
}
None => {
debug!(target: "sync", "Could not revert to previous block, last: {} ({})", start, self.last_imported_hash);
debug_sync!(self, "Could not revert to previous block, last: {} ({})", start, self.last_imported_hash);
self.reset();
}
}
@ -420,7 +475,7 @@ impl BlockDownloader {
State::ChainHead => {
if num_active_peers < MAX_PARALLEL_SUBCHAIN_DOWNLOAD {
// Request subchain headers
trace!(target: "sync", "Starting sync with better chain");
trace_sync!(self, "Starting sync with better chain");
// Request MAX_HEADERS_TO_REQUEST - 2 headers apart so that
// MAX_HEADERS_TO_REQUEST would include headers for neighbouring subchains
return Some(BlockRequest::Headers {
@ -463,8 +518,9 @@ impl BlockDownloader {
}
/// Checks if there are blocks fully downloaded that can be imported into the blockchain and does the import.
pub fn collect_blocks(&mut self, io: &mut SyncIo, allow_out_of_order: bool) -> Result<(), BlockDownloaderImportError> {
let mut bad = false;
/// Returns DownloadAction::Reset if it has imported all the blocks it can and all downloading peers should be reset
pub fn collect_blocks(&mut self, io: &mut SyncIo, allow_out_of_order: bool) -> DownloadAction {
let mut download_action = DownloadAction::None;
let mut imported = HashSet::new();
let blocks = self.blocks.drain();
let count = blocks.len();
@ -478,8 +534,8 @@ impl BlockDownloader {
if self.target_hash.as_ref().map_or(false, |t| t == &h) {
self.state = State::Complete;
trace!(target: "sync", "Sync target reached");
return Ok(());
trace_sync!(self, "Sync target reached");
return download_action;
}
let result = if let Some(receipts) = receipts {
@ -490,15 +546,15 @@ impl BlockDownloader {
match result {
Err(BlockImportError(BlockImportErrorKind::Import(ImportErrorKind::AlreadyInChain), _)) => {
trace!(target: "sync", "Block already in chain {:?}", h);
trace_sync!(self, "Block already in chain {:?}", h);
self.block_imported(&h, number, &parent);
},
Err(BlockImportError(BlockImportErrorKind::Import(ImportErrorKind::AlreadyQueued), _)) => {
trace!(target: "sync", "Block already queued {:?}", h);
trace_sync!(self, "Block already queued {:?}", h);
self.block_imported(&h, number, &parent);
},
Ok(_) => {
trace!(target: "sync", "Block queued {:?}", h);
trace_sync!(self, "Block queued {:?}", h);
imported.insert(h.clone());
self.block_imported(&h, number, &parent);
},
@ -506,37 +562,34 @@ impl BlockDownloader {
break;
},
Err(BlockImportError(BlockImportErrorKind::Block(BlockError::UnknownParent(_)), _)) => {
trace!(target: "sync", "Unknown new block parent, restarting sync");
trace_sync!(self, "Unknown new block parent, restarting sync");
break;
},
Err(BlockImportError(BlockImportErrorKind::Block(BlockError::TemporarilyInvalid(_)), _)) => {
debug!(target: "sync", "Block temporarily invalid, restarting sync");
debug_sync!(self, "Block temporarily invalid: {:?}, restarting sync", h);
break;
},
Err(BlockImportError(BlockImportErrorKind::Queue(QueueErrorKind::Full(limit)), _)) => {
debug!(target: "sync", "Block import queue full ({}), restarting sync", limit);
debug_sync!(self, "Block import queue full ({}), restarting sync", limit);
download_action = DownloadAction::Reset;
break;
},
Err(e) => {
debug!(target: "sync", "Bad block {:?} : {:?}", h, e);
bad = true;
debug_sync!(self, "Bad block {:?} : {:?}", h, e);
download_action = DownloadAction::Reset;
break;
}
}
}
trace!(target: "sync", "Imported {} of {}", imported.len(), count);
trace_sync!(self, "Imported {} of {}", imported.len(), count);
self.imported_this_round = Some(self.imported_this_round.unwrap_or(0) + imported.len());
if bad {
return Err(BlockDownloaderImportError::Invalid);
}
if self.blocks.is_empty() {
// complete sync round
trace!(target: "sync", "Sync round complete");
self.reset();
trace_sync!(self, "Sync round complete");
download_action = DownloadAction::Reset;
}
Ok(())
download_action
}
fn block_imported(&mut self, hash: &H256, number: BlockNumber, parent: &H256) {
@ -549,4 +602,392 @@ impl BlockDownloader {
}
}
//TODO: module tests
// Determines if the first argument matches an ordered subset of the second, according to some predicate.
fn all_expected<A, B, F>(values: &[A], expected_values: &[B], is_expected: F) -> bool
where F: Fn(&A, &B) -> bool
{
let mut expected_iter = expected_values.iter();
values.iter().all(|val1| {
while let Some(val2) = expected_iter.next() {
if is_expected(val1, val2) {
return true;
}
}
false
})
}
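In other words, the received items must form an ordered subsequence of the requested ones under the predicate; matching consumes the expected iterator, so out-of-order or unrequested items fail. A self-contained sketch of that behaviour (the helper is restated here only so the example runs on its own):
fn all_expected<A, B, F>(values: &[A], expected: &[B], is_expected: F) -> bool
    where F: Fn(&A, &B) -> bool
{
    let mut expected_iter = expected.iter();
    // Each value must match some not-yet-consumed expected entry, in order.
    values.iter().all(|v| expected_iter.by_ref().any(|e| is_expected(v, e)))
}
fn main() {
    // Bodies for blocks 2 and 4, requested as part of [1, 2, 3, 4]: accepted.
    assert!(all_expected(&[2, 4], &[1, 2, 3, 4], |a, b| a == b));
    // The same items out of order: rejected, the peer is answering something else.
    assert!(!all_expected(&[4, 2], &[1, 2, 3, 4], |a, b| a == b));
}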
#[cfg(test)]
mod tests {
use super::*;
use ethcore::client::TestBlockChainClient;
use ethcore::header::Header as BlockHeader;
use ethcore::spec::Spec;
use ethkey::{Generator,Random};
use hash::keccak;
use parking_lot::RwLock;
use rlp::{encode_list,RlpStream};
use tests::helpers::TestIo;
use tests::snapshot::TestSnapshotService;
use transaction::{Transaction,SignedTransaction};
use triehash_ethereum::ordered_trie_root;
fn dummy_header(number: u64, parent_hash: H256) -> BlockHeader {
let mut header = BlockHeader::new();
header.set_gas_limit(0.into());
header.set_difficulty((number * 100).into());
header.set_timestamp(number * 10);
header.set_number(number);
header.set_parent_hash(parent_hash);
header.set_state_root(H256::zero());
header
}
fn dummy_signed_tx() -> SignedTransaction {
let keypair = Random.generate().unwrap();
Transaction::default().sign(keypair.secret(), None)
}
fn import_headers(headers: &[BlockHeader], downloader: &mut BlockDownloader, io: &mut SyncIo) -> Result<DownloadAction, BlockDownloaderImportError> {
let mut stream = RlpStream::new();
stream.append_list(headers);
let bytes = stream.out();
let rlp = Rlp::new(&bytes);
let expected_hash = headers.first().unwrap().hash();
downloader.import_headers(io, &rlp, expected_hash)
}
fn import_headers_ok(headers: &[BlockHeader], downloader: &mut BlockDownloader, io: &mut SyncIo) {
let res = import_headers(headers, downloader, io);
assert!(res.is_ok());
}
#[test]
fn import_headers_in_chain_head_state() {
::env_logger::try_init().ok();
let spec = Spec::new_test();
let genesis_hash = spec.genesis_header().hash();
let mut downloader = BlockDownloader::new(BlockSet::NewBlocks, &genesis_hash, 0);
downloader.state = State::ChainHead;
let mut chain = TestBlockChainClient::new();
let snapshot_service = TestSnapshotService::new();
let queue = RwLock::new(VecDeque::new());
let mut io = TestIo::new(&mut chain, &snapshot_service, &queue, None);
// Valid headers sequence.
let valid_headers = [
spec.genesis_header(),
dummy_header(127, H256::random()),
dummy_header(254, H256::random()),
];
let rlp_data = encode_list(&valid_headers);
let valid_rlp = Rlp::new(&rlp_data);
match downloader.import_headers(&mut io, &valid_rlp, genesis_hash) {
Ok(DownloadAction::Reset) => assert_eq!(downloader.state, State::Blocks),
_ => panic!("expected transition to Blocks state"),
};
// Headers are rejected because the expected hash does not match.
let invalid_start_block_headers = [
dummy_header(0, H256::random()),
dummy_header(127, H256::random()),
dummy_header(254, H256::random()),
];
let rlp_data = encode_list(&invalid_start_block_headers);
let invalid_start_block_rlp = Rlp::new(&rlp_data);
match downloader.import_headers(&mut io, &invalid_start_block_rlp, genesis_hash) {
Err(BlockDownloaderImportError::Invalid) => (),
_ => panic!("expected BlockDownloaderImportError"),
};
// Headers are rejected because they are not spaced as expected.
let invalid_skip_headers = [
spec.genesis_header(),
dummy_header(128, H256::random()),
dummy_header(256, H256::random()),
];
let rlp_data = encode_list(&invalid_skip_headers);
let invalid_skip_rlp = Rlp::new(&rlp_data);
match downloader.import_headers(&mut io, &invalid_skip_rlp, genesis_hash) {
Err(BlockDownloaderImportError::Invalid) => (),
_ => panic!("expected BlockDownloaderImportError"),
};
// Invalid because the packet size is too large.
let mut too_many_headers = Vec::with_capacity((SUBCHAIN_SIZE + 1) as usize);
too_many_headers.push(spec.genesis_header());
for i in 1..(SUBCHAIN_SIZE + 1) {
too_many_headers.push(dummy_header((MAX_HEADERS_TO_REQUEST as u64 - 1) * i, H256::random()));
}
let rlp_data = encode_list(&too_many_headers);
let too_many_rlp = Rlp::new(&rlp_data);
match downloader.import_headers(&mut io, &too_many_rlp, genesis_hash) {
Err(BlockDownloaderImportError::Invalid) => (),
_ => panic!("expected BlockDownloaderImportError"),
};
}
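The head numbers 0, 127, 254 in this test follow from the ChainHead request shape: subchain heads are requested MAX_HEADERS_TO_REQUEST - 2 apart, so import_headers expects each head to sit skip + 1 = 127 blocks after the previous one. A small arithmetic sketch (constants mirror the ones above; illustrative only):
const MAX_HEADERS_TO_REQUEST: u64 = 128;
fn main() {
    let skip = MAX_HEADERS_TO_REQUEST - 2; // 126, as used in ChainHead state
    let mut expected = 0u64;
    for &head in &[0u64, 127, 254] {
        // Each subchain head must be exactly skip + 1 blocks after the previous one.
        assert_eq!(head, expected);
        expected += skip + 1;
    }
    // A head at 128 (as in the invalid_skip_headers case) misses the expected 127.
    assert_ne!(128, skip + 1);
}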
#[test]
fn import_headers_in_blocks_state() {
::env_logger::try_init().ok();
let mut chain = TestBlockChainClient::new();
let snapshot_service = TestSnapshotService::new();
let queue = RwLock::new(VecDeque::new());
let mut io = TestIo::new(&mut chain, &snapshot_service, &queue, None);
let mut headers = Vec::with_capacity(3);
let parent_hash = H256::random();
headers.push(dummy_header(127, parent_hash));
let parent_hash = headers[0].hash();
headers.push(dummy_header(128, parent_hash));
let parent_hash = headers[1].hash();
headers.push(dummy_header(129, parent_hash));
let mut downloader = BlockDownloader::new(BlockSet::NewBlocks, &H256::random(), 0);
downloader.state = State::Blocks;
downloader.blocks.reset_to(vec![headers[0].hash()]);
let rlp_data = encode_list(&headers);
let headers_rlp = Rlp::new(&rlp_data);
match downloader.import_headers(&mut io, &headers_rlp, headers[0].hash()) {
Ok(DownloadAction::None) => (),
_ => panic!("expected successful import"),
};
// Invalidate parent_hash link.
headers[2] = dummy_header(129, H256::random());
let rlp_data = encode_list(&headers);
let headers_rlp = Rlp::new(&rlp_data);
match downloader.import_headers(&mut io, &headers_rlp, headers[0].hash()) {
Err(BlockDownloaderImportError::Invalid) => (),
_ => panic!("expected BlockDownloaderImportError"),
};
// Invalidate header sequence by skipping a header.
headers[2] = dummy_header(130, headers[1].hash());
let rlp_data = encode_list(&headers);
let headers_rlp = Rlp::new(&rlp_data);
match downloader.import_headers(&mut io, &headers_rlp, headers[0].hash()) {
Err(BlockDownloaderImportError::Invalid) => (),
_ => panic!("expected BlockDownloaderImportError"),
};
}
#[test]
fn import_bodies() {
::env_logger::try_init().ok();
let mut chain = TestBlockChainClient::new();
let snapshot_service = TestSnapshotService::new();
let queue = RwLock::new(VecDeque::new());
let mut io = TestIo::new(&mut chain, &snapshot_service, &queue, None);
// Import block headers.
let mut headers = Vec::with_capacity(4);
let mut bodies = Vec::with_capacity(4);
let mut parent_hash = H256::zero();
for i in 0..4 {
// Construct the block body
let mut uncles = if i > 0 {
encode_list(&[dummy_header(i - 1, H256::random())])
} else {
::rlp::EMPTY_LIST_RLP.to_vec()
};
let mut txs = encode_list(&[dummy_signed_tx()]);
let tx_root = ordered_trie_root(Rlp::new(&txs).iter().map(|r| r.as_raw()));
let mut rlp = RlpStream::new_list(2);
rlp.append_raw(&txs, 1);
rlp.append_raw(&uncles, 1);
bodies.push(rlp.out());
// Construct the block header
let mut header = dummy_header(i, parent_hash);
header.set_transactions_root(tx_root);
header.set_uncles_hash(keccak(&uncles));
parent_hash = header.hash();
headers.push(header);
}
let mut downloader = BlockDownloader::new(BlockSet::NewBlocks, &headers[0].hash(), 0);
downloader.state = State::Blocks;
downloader.blocks.reset_to(vec![headers[0].hash()]);
// Only import the first three block headers.
let rlp_data = encode_list(&headers[0..3]);
let headers_rlp = Rlp::new(&rlp_data);
assert!(downloader.import_headers(&mut io, &headers_rlp, headers[0].hash()).is_ok());
// Import first body successfully.
let mut rlp_data = RlpStream::new_list(1);
rlp_data.append_raw(&bodies[0], 1);
let bodies_rlp = Rlp::new(rlp_data.as_raw());
assert!(downloader.import_bodies(&bodies_rlp, &[headers[0].hash(), headers[1].hash()]).is_ok());
// Import second body successfully.
let mut rlp_data = RlpStream::new_list(1);
rlp_data.append_raw(&bodies[1], 1);
let bodies_rlp = Rlp::new(rlp_data.as_raw());
assert!(downloader.import_bodies(&bodies_rlp, &[headers[0].hash(), headers[1].hash()]).is_ok());
// Import unexpected third body.
let mut rlp_data = RlpStream::new_list(1);
rlp_data.append_raw(&bodies[2], 1);
let bodies_rlp = Rlp::new(rlp_data.as_raw());
match downloader.import_bodies(&bodies_rlp, &[headers[0].hash(), headers[1].hash()]) {
Err(BlockDownloaderImportError::Invalid) => (),
_ => panic!("expected BlockDownloaderImportError"),
};
}
#[test]
fn import_receipts() {
::env_logger::try_init().ok();
let mut chain = TestBlockChainClient::new();
let snapshot_service = TestSnapshotService::new();
let queue = RwLock::new(VecDeque::new());
let mut io = TestIo::new(&mut chain, &snapshot_service, &queue, None);
// Import block headers.
let mut headers = Vec::with_capacity(4);
let mut receipts = Vec::with_capacity(4);
let mut parent_hash = H256::zero();
for i in 0..4 {
// Construct the receipts. Receipt root for the first two blocks is the same.
//
// The RLP-encoded integers are clearly not receipts, but the BlockDownloader treats
// all receipts as byte blobs, so it does not matter.
let mut receipts_rlp = if i < 2 {
encode_list(&[0u32])
} else {
encode_list(&[i as u32])
};
let receipts_root = ordered_trie_root(Rlp::new(&receipts_rlp).iter().map(|r| r.as_raw()));
receipts.push(receipts_rlp);
// Construct the block header.
let mut header = dummy_header(i, parent_hash);
header.set_receipts_root(receipts_root);
parent_hash = header.hash();
headers.push(header);
}
let mut downloader = BlockDownloader::new(BlockSet::OldBlocks, &headers[0].hash(), 0);
downloader.state = State::Blocks;
downloader.blocks.reset_to(vec![headers[0].hash()]);
// Only import the first three block headers.
let rlp_data = encode_list(&headers[0..3]);
let headers_rlp = Rlp::new(&rlp_data);
assert!(downloader.import_headers(&mut io, &headers_rlp, headers[0].hash()).is_ok());
// Import second and third receipts successfully.
let mut rlp_data = RlpStream::new_list(2);
rlp_data.append_raw(&receipts[1], 1);
rlp_data.append_raw(&receipts[2], 1);
let receipts_rlp = Rlp::new(rlp_data.as_raw());
assert!(downloader.import_receipts(&receipts_rlp, &[headers[1].hash(), headers[2].hash()]).is_ok());
// Import unexpected fourth receipt.
let mut rlp_data = RlpStream::new_list(1);
rlp_data.append_raw(&receipts[3], 1);
let receipts_rlp = Rlp::new(rlp_data.as_raw());
match downloader.import_receipts(&receipts_rlp, &[headers[1].hash(), headers[2].hash()]) {
Err(BlockDownloaderImportError::Invalid) => (),
_ => panic!("expected BlockDownloaderImportError"),
};
}
#[test]
fn reset_after_multiple_sets_of_useless_headers() {
::env_logger::try_init().ok();
let spec = Spec::new_test();
let genesis_hash = spec.genesis_header().hash();
let mut downloader = BlockDownloader::new(BlockSet::NewBlocks, &genesis_hash, 0);
downloader.state = State::ChainHead;
let mut chain = TestBlockChainClient::new();
let snapshot_service = TestSnapshotService::new();
let queue = RwLock::new(VecDeque::new());
let mut io = TestIo::new(&mut chain, &snapshot_service, &queue, None);
let heads = [
spec.genesis_header(),
dummy_header(127, H256::random()),
dummy_header(254, H256::random()),
];
let short_subchain = [dummy_header(1, genesis_hash)];
import_headers_ok(&heads, &mut downloader, &mut io);
import_headers_ok(&short_subchain, &mut downloader, &mut io);
assert_eq!(downloader.state, State::Blocks);
assert!(!downloader.blocks.is_empty());
// simulate receiving useless headers
let head = vec![short_subchain.last().unwrap().clone()];
for _ in 0..MAX_USELESS_HEADERS_PER_ROUND {
let res = import_headers(&head, &mut downloader, &mut io);
assert!(res.is_err());
}
assert_eq!(downloader.state, State::Idle);
assert!(downloader.blocks.is_empty());
}
#[test]
fn dont_reset_after_multiple_sets_of_useless_headers_for_chain_head() {
::env_logger::try_init().ok();
let spec = Spec::new_test();
let genesis_hash = spec.genesis_header().hash();
let mut downloader = BlockDownloader::new(BlockSet::NewBlocks, &genesis_hash, 0);
downloader.state = State::ChainHead;
let mut chain = TestBlockChainClient::new();
let snapshot_service = TestSnapshotService::new();
let queue = RwLock::new(VecDeque::new());
let mut io = TestIo::new(&mut chain, &snapshot_service, &queue, None);
let heads = [
spec.genesis_header()
];
let short_subchain = [dummy_header(1, genesis_hash)];
import_headers_ok(&heads, &mut downloader, &mut io);
import_headers_ok(&short_subchain, &mut downloader, &mut io);
assert_eq!(downloader.state, State::Blocks);
assert!(!downloader.blocks.is_empty());
// simulate receiving useless headers
let head = vec![short_subchain.last().unwrap().clone()];
for _ in 0..MAX_USELESS_HEADERS_PER_ROUND {
let res = import_headers(&head, &mut downloader, &mut io);
assert!(res.is_err());
}
// download shouldn't be reset since this is the chain head for a single subchain.
// this state usually occurs for NewBlocks when it has reached the chain head.
assert_eq!(downloader.state, State::Blocks);
assert!(!downloader.blocks.is_empty());
}
}

View File

@ -15,7 +15,6 @@
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use std::collections::{HashSet, HashMap, hash_map};
use smallvec::SmallVec;
use hash::{keccak, KECCAK_NULL_RLP, KECCAK_EMPTY_LIST_RLP};
use heapsize::HeapSizeOf;
use ethereum_types::H256;
@ -29,8 +28,6 @@ use transaction::UnverifiedTransaction;
known_heap_size!(0, HeaderId);
type SmallHashVec = SmallVec<[H256; 1]>;
#[derive(PartialEq, Debug, Clone)]
pub struct SyncHeader {
pub bytes: Bytes,
@ -157,7 +154,7 @@ pub struct BlockCollection {
/// Used to map body to header.
header_ids: HashMap<HeaderId, H256>,
/// Used to map receipts root to headers.
receipt_ids: HashMap<H256, SmallHashVec>,
receipt_ids: HashMap<H256, Vec<H256>>,
/// First block in `blocks`.
head: Option<H256>,
/// Set of block header hashes being downloaded
@ -215,32 +212,28 @@ impl BlockCollection {
}
/// Insert a collection of block bodies for previously downloaded headers.
pub fn insert_bodies(&mut self, bodies: Vec<SyncBody>) -> usize {
let mut inserted = 0;
for b in bodies {
if let Err(e) = self.insert_body(b) {
trace!(target: "sync", "Ignored invalid body: {:?}", e);
} else {
inserted += 1;
}
}
inserted
pub fn insert_bodies(&mut self, bodies: Vec<SyncBody>) -> Vec<H256> {
bodies.into_iter()
.filter_map(|b| {
self.insert_body(b)
.map_err(|e| trace!(target: "sync", "Ignored invalid body: {:?}", e))
.ok()
})
.collect()
}
/// Insert a collection of block receipts for previously downloaded headers.
pub fn insert_receipts(&mut self, receipts: Vec<Bytes>) -> usize {
pub fn insert_receipts(&mut self, receipts: Vec<Bytes>) -> Vec<Vec<H256>> {
if !self.need_receipts {
return 0;
return Vec::new();
}
let mut inserted = 0;
for r in receipts {
if let Err(e) = self.insert_receipt(r) {
trace!(target: "sync", "Ignored invalid receipt: {:?}", e);
} else {
inserted += 1;
}
}
inserted
receipts.into_iter()
.filter_map(|r| {
self.insert_receipt(r)
.map_err(|e| trace!(target: "sync", "Ignored invalid receipt: {:?}", e))
.ok()
})
.collect()
}
/// Returns a set of block hashes that require a body download. The returned set is marked as being downloaded.
@ -401,9 +394,9 @@ impl BlockCollection {
self.blocks.contains_key(hash)
}
/// Check if collection contains a block header.
pub fn contains_head(&self, hash: &H256) -> bool {
self.heads.contains(hash)
/// Returns the number of heads
pub fn heads_len(&self) -> usize {
self.heads.len()
}
/// Return used heap size.
@ -421,7 +414,7 @@ impl BlockCollection {
self.downloading_headers.contains(hash) || self.downloading_bodies.contains(hash)
}
fn insert_body(&mut self, body: SyncBody) -> Result<(), network::Error> {
fn insert_body(&mut self, body: SyncBody) -> Result<H256, network::Error> {
let header_id = {
let tx_root = ordered_trie_root(Rlp::new(&body.transactions_bytes).iter().map(|r| r.as_raw()));
let uncles = keccak(&body.uncles_bytes);
@ -438,7 +431,7 @@ impl BlockCollection {
Some(ref mut block) => {
trace!(target: "sync", "Got body {}", h);
block.body = Some(body);
Ok(())
Ok(h)
},
None => {
warn!("Got body with no header {}", h);
@ -453,7 +446,7 @@ impl BlockCollection {
}
}
fn insert_receipt(&mut self, r: Bytes) -> Result<(), network::Error> {
fn insert_receipt(&mut self, r: Bytes) -> Result<Vec<H256>, network::Error> {
let receipt_root = {
let receipts = Rlp::new(&r);
ordered_trie_root(receipts.iter().map(|r| r.as_raw()))
@ -461,7 +454,8 @@ impl BlockCollection {
self.downloading_receipts.remove(&receipt_root);
match self.receipt_ids.entry(receipt_root) {
hash_map::Entry::Occupied(entry) => {
for h in entry.remove() {
let block_hashes = entry.remove();
for h in block_hashes.iter() {
match self.blocks.get_mut(&h) {
Some(ref mut block) => {
trace!(target: "sync", "Got receipt {}", h);
@ -473,7 +467,7 @@ impl BlockCollection {
}
}
}
Ok(())
Ok(block_hashes.to_vec())
},
hash_map::Entry::Vacant(_) => {
trace!(target: "sync", "Ignored unknown/stale block receipt {:?}", receipt_root);
@ -522,7 +516,7 @@ impl BlockCollection {
let receipts_stream = RlpStream::new_list(0);
(Some(receipts_stream.out()), receipt_root)
} else {
self.receipt_ids.entry(receipt_root).or_insert_with(|| SmallHashVec::new()).push(hash);
self.receipt_ids.entry(receipt_root).or_insert_with(Vec::new).push(hash);
(None, receipt_root)
}
} else {

View File

@ -28,6 +28,7 @@ use network::PeerId;
use rlp::Rlp;
use snapshot::ChunkType;
use std::cmp;
use std::mem;
use std::collections::HashSet;
use std::time::Instant;
use sync_io::SyncIo;
@ -292,10 +293,19 @@ impl SyncHandler {
let block_set = sync.peers.get(&peer_id)
.and_then(|p| p.block_set)
.unwrap_or(BlockSet::NewBlocks);
if !sync.reset_peer_asking(peer_id, PeerAsking::BlockBodies) {
let allowed = sync.peers.get(&peer_id).map(|p| p.is_allowed()).unwrap_or(false);
if !sync.reset_peer_asking(peer_id, PeerAsking::BlockBodies) || !allowed {
trace!(target: "sync", "{}: Ignored unexpected bodies", peer_id);
return Ok(());
}
let expected_blocks = match sync.peers.get_mut(&peer_id) {
Some(peer) => mem::replace(&mut peer.asking_blocks, Vec::new()),
None => {
trace!(target: "sync", "{}: Ignored unexpected bodies (peer not found)", peer_id);
return Ok(());
}
};
let item_count = r.item_count()?;
trace!(target: "sync", "{} -> BlockBodies ({} entries), set = {:?}", peer_id, item_count, block_set);
if item_count == 0 {
@ -315,7 +325,7 @@ impl SyncHandler {
Some(ref mut blocks) => blocks,
}
};
downloader.import_bodies(r)?;
downloader.import_bodies(r, expected_blocks.as_slice())?;
}
sync.collect_blocks(io, block_set);
Ok(())
@ -368,10 +378,23 @@ impl SyncHandler {
let expected_hash = sync.peers.get(&peer_id).and_then(|p| p.asking_hash);
let allowed = sync.peers.get(&peer_id).map(|p| p.is_allowed()).unwrap_or(false);
let block_set = sync.peers.get(&peer_id).and_then(|p| p.block_set).unwrap_or(BlockSet::NewBlocks);
if !sync.reset_peer_asking(peer_id, PeerAsking::BlockHeaders) || expected_hash.is_none() || !allowed {
trace!(target: "sync", "{}: Ignored unexpected headers, expected_hash = {:?}", peer_id, expected_hash);
if !sync.reset_peer_asking(peer_id, PeerAsking::BlockHeaders) {
debug!(target: "sync", "{}: Ignored unexpected headers", peer_id);
return Ok(());
}
let expected_hash = match expected_hash {
Some(hash) => hash,
None => {
debug!(target: "sync", "{}: Ignored unexpected headers (expected_hash is None)", peer_id);
return Ok(());
}
};
if !allowed {
debug!(target: "sync", "{}: Ignored unexpected headers (peer not allowed)", peer_id);
return Ok(());
}
let item_count = r.item_count()?;
trace!(target: "sync", "{} -> BlockHeaders ({} entries), state = {:?}, set = {:?}", peer_id, item_count, sync.state, block_set);
if (sync.state == SyncState::Idle || sync.state == SyncState::WaitingPeers) && sync.old_blocks.is_none() {
@ -399,12 +422,8 @@ impl SyncHandler {
downloader.import_headers(io, r, expected_hash)?
};
if let DownloadAction::Reset = result {
// mark all outstanding requests as expired
trace!("Resetting downloads for {:?}", block_set);
for (_, ref mut p) in sync.peers.iter_mut().filter(|&(_, ref p)| p.block_set == Some(block_set)) {
p.reset_asking();
}
if result == DownloadAction::Reset {
sync.reset_downloads(block_set);
}
sync.collect_blocks(io, block_set);
@ -415,10 +434,18 @@ impl SyncHandler {
fn on_peer_block_receipts(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), DownloaderImportError> {
sync.clear_peer_download(peer_id);
let block_set = sync.peers.get(&peer_id).and_then(|p| p.block_set).unwrap_or(BlockSet::NewBlocks);
if !sync.reset_peer_asking(peer_id, PeerAsking::BlockReceipts) {
let allowed = sync.peers.get(&peer_id).map(|p| p.is_allowed()).unwrap_or(false);
if !sync.reset_peer_asking(peer_id, PeerAsking::BlockReceipts) || !allowed {
trace!(target: "sync", "{}: Ignored unexpected receipts", peer_id);
return Ok(());
}
let expected_blocks = match sync.peers.get_mut(&peer_id) {
Some(peer) => mem::replace(&mut peer.asking_blocks, Vec::new()),
None => {
trace!(target: "sync", "{}: Ignored unexpected bodies (peer not found)", peer_id);
return Ok(());
}
};
let item_count = r.item_count()?;
trace!(target: "sync", "{} -> BlockReceipts ({} entries)", peer_id, item_count);
if item_count == 0 {
@ -438,7 +465,7 @@ impl SyncHandler {
Some(ref mut blocks) => blocks,
}
};
downloader.import_receipts(io, r)?;
downloader.import_receipts(r, expected_blocks.as_slice())?;
}
sync.collect_blocks(io, block_set);
Ok(())
@ -745,9 +772,7 @@ mod tests {
let block = Rlp::new(&block_data);
let result = SyncHandler::on_peer_new_block(&mut sync, &mut io, 0, &block);
assert!(result.is_ok());
SyncHandler::on_peer_new_block(&mut sync, &mut io, 0, &block).expect("result to be ok");
}
#[test]

View File

@ -109,7 +109,7 @@ use ethcore::client::{BlockChainClient, BlockStatus, BlockId, BlockChainInfo, Bl
use ethcore::snapshot::{RestorationStatus};
use sync_io::SyncIo;
use super::{WarpSync, SyncConfig};
use block_sync::{BlockDownloader, BlockDownloaderImportError as DownloaderImportError};
use block_sync::{BlockDownloader, DownloadAction};
use rand::Rng;
use snapshot::{Snapshot};
use api::{EthProtocolInfo as PeerInfoDigest, WARP_SYNC_PROTOCOL_ID};
@ -429,7 +429,7 @@ impl ChainSync {
peers: HashMap::new(),
handshaking_peers: HashMap::new(),
active_peers: HashSet::new(),
new_blocks: BlockDownloader::new(false, &chain_info.best_block_hash, chain_info.best_block_number),
new_blocks: BlockDownloader::new(BlockSet::NewBlocks, &chain_info.best_block_hash, chain_info.best_block_number),
old_blocks: None,
last_sent_block_number: 0,
network_id: config.network_id,
@ -507,8 +507,9 @@ impl ChainSync {
self.peers.clear();
}
/// Reset sync. Clear all downloaded data but keep the queue
fn reset(&mut self, io: &mut SyncIo) {
/// Reset sync. Clear all downloaded data but keep the queue.
/// Set sync state to the given state or to the initial state if `None` is provided.
fn reset(&mut self, io: &mut SyncIo, state: Option<SyncState>) {
self.new_blocks.reset();
let chain_info = io.chain().chain_info();
for (_, ref mut p) in &mut self.peers {
@ -520,7 +521,7 @@ impl ChainSync {
}
}
}
self.state = ChainSync::get_init_state(self.warp_sync, io.chain());
self.state = state.unwrap_or_else(|| ChainSync::get_init_state(self.warp_sync, io.chain()));
// Reactivate peers only if some progress has been made
// since the last sync round or if starting fresh.
self.active_peers = self.peers.keys().cloned().collect();
@ -534,7 +535,7 @@ impl ChainSync {
io.snapshot_service().abort_restore();
}
self.snapshot.clear();
self.reset(io);
self.reset(io, None);
self.continue_sync(io);
}
@ -637,13 +638,13 @@ impl ChainSync {
pub fn update_targets(&mut self, chain: &BlockChainClient) {
// Do not assume that the block queue/chain still has our last_imported_block
let chain = chain.chain_info();
self.new_blocks = BlockDownloader::new(false, &chain.best_block_hash, chain.best_block_number);
self.new_blocks = BlockDownloader::new(BlockSet::NewBlocks, &chain.best_block_hash, chain.best_block_number);
self.old_blocks = None;
if self.download_old_blocks {
if let (Some(ancient_block_hash), Some(ancient_block_number)) = (chain.ancient_block_hash, chain.ancient_block_number) {
trace!(target: "sync", "Downloading old blocks from {:?} (#{}) till {:?} (#{:?})", ancient_block_hash, ancient_block_number, chain.first_block_hash, chain.first_block_number);
let mut downloader = BlockDownloader::with_unlimited_reorg(true, &ancient_block_hash, ancient_block_number);
let mut downloader = BlockDownloader::new(BlockSet::OldBlocks, &ancient_block_hash, ancient_block_number);
if let Some(hash) = chain.first_block_hash {
trace!(target: "sync", "Downloader target set to {:?}", hash);
downloader.set_target(&hash);
@ -699,7 +700,7 @@ impl ChainSync {
/// Called after all blocks have been downloaded
fn complete_sync(&mut self, io: &mut SyncIo) {
trace!(target: "sync", "Sync complete");
self.reset(io);
self.reset(io, Some(SyncState::Idle));
}
/// Enter waiting state
@ -762,12 +763,10 @@ impl ChainSync {
}
}
// Only ask for old blocks if the peer has a higher difficulty than the last imported old block
let last_imported_old_block_difficulty = self.old_blocks.as_mut().and_then(|d| {
io.chain().block_total_difficulty(BlockId::Number(d.last_imported_block_number()))
});
// Only ask for old blocks if the peer has an equal or higher difficulty
let equal_or_higher_difficulty = peer_difficulty.map_or(false, |pd| pd >= syncing_difficulty);
if force || last_imported_old_block_difficulty.map_or(true, |ld| peer_difficulty.map_or(true, |pd| pd > ld)) {
if force || equal_or_higher_difficulty {
if let Some(request) = self.old_blocks.as_mut().and_then(|d| d.request_blocks(io, num_active_peers)) {
SyncRequester::request_blocks(self, io, peer_id, request, BlockSet::OldBlocks);
return;
@ -775,9 +774,9 @@ impl ChainSync {
} else {
trace!(
target: "sync",
"peer {:?} is not suitable for requesting old blocks, last_imported_old_block_difficulty={:?}, peer_difficulty={:?}",
"peer {:?} is not suitable for requesting old blocks, syncing_difficulty={:?}, peer_difficulty={:?}",
peer_id,
last_imported_old_block_difficulty,
syncing_difficulty,
peer_difficulty
);
self.deactivate_peer(io, peer_id);
@ -855,18 +854,39 @@ impl ChainSync {
fn collect_blocks(&mut self, io: &mut SyncIo, block_set: BlockSet) {
match block_set {
BlockSet::NewBlocks => {
if self.new_blocks.collect_blocks(io, self.state == SyncState::NewBlocks) == Err(DownloaderImportError::Invalid) {
self.restart(io);
if self.new_blocks.collect_blocks(io, self.state == SyncState::NewBlocks) == DownloadAction::Reset {
self.reset_downloads(block_set);
self.new_blocks.reset();
}
},
BlockSet::OldBlocks => {
if self.old_blocks.as_mut().map_or(false, |downloader| { downloader.collect_blocks(io, false) == Err(DownloaderImportError::Invalid) }) {
self.restart(io);
} else if self.old_blocks.as_ref().map_or(false, |downloader| { downloader.is_complete() }) {
let mut is_complete = false;
let mut download_action = DownloadAction::None;
if let Some(downloader) = self.old_blocks.as_mut() {
download_action = downloader.collect_blocks(io, false);
is_complete = downloader.is_complete();
}
if download_action == DownloadAction::Reset {
self.reset_downloads(block_set);
if let Some(downloader) = self.old_blocks.as_mut() {
downloader.reset();
}
}
if is_complete {
trace!(target: "sync", "Background block download is complete");
self.old_blocks = None;
}
}
};
}
/// Mark all outstanding requests as expired
fn reset_downloads(&mut self, block_set: BlockSet) {
trace!(target: "sync", "Resetting downloads for {:?}", block_set);
for (_, ref mut p) in self.peers.iter_mut().filter(|&(_, ref p)| p.block_set == Some(block_set)) {
p.reset_asking();
}
}
@ -1183,7 +1203,7 @@ pub mod tests {
}
pub fn get_dummy_blocks(order: u32, parent_hash: H256) -> Bytes {
let mut rlp = RlpStream::new_list(1);
let mut rlp = RlpStream::new_list(2);
rlp.append_raw(&get_dummy_block(order, parent_hash), 1);
let difficulty: U256 = (100 * order).into();
rlp.append(&difficulty);

View File

@ -200,7 +200,7 @@ impl SyncPropagator {
let appended = packet.append_raw_checked(&transaction.drain(), 1, MAX_TRANSACTION_PACKET_SIZE);
if !appended {
// Maximal packet size reached just proceed with sending
debug!("Transaction packet size limit reached. Sending incomplete set of {}/{} transactions.", pushed, to_send.len());
debug!(target: "sync", "Transaction packet size limit reached. Sending incomplete set of {}/{} transactions.", pushed, to_send.len());
to_send = to_send.into_iter().take(pushed).collect();
break;
}

View File

@ -226,7 +226,7 @@ impl SyncSupplier {
let mut added_receipts = 0usize;
let mut data = Bytes::new();
for i in 0..count {
if let Some(mut receipts_bytes) = io.chain().block_receipts(&rlp.val_at::<H256>(i)?) {
if let Some(mut receipts_bytes) = io.chain().encoded_block_receipts(&rlp.val_at::<H256>(i)?) {
data.append(&mut receipts_bytes);
added_receipts += receipts_bytes.len();
added_headers += 1;

View File

@ -35,7 +35,6 @@ extern crate fastmap;
extern crate rand;
extern crate semver;
extern crate parking_lot;
extern crate smallvec;
extern crate rlp;
extern crate ipnetwork;
extern crate keccak_hash as hash;

View File

@ -34,6 +34,7 @@
use std::collections::{HashMap, HashSet};
use std::mem;
use std::ops::Deref;
use std::sync::Arc;
use std::time::{Instant, Duration};
@ -213,6 +214,44 @@ enum SyncState {
Rounds(SyncRound),
}
/// A wrapper around the SyncState that makes sure to
/// update the given reference to `is_idle`
#[derive(Debug)]
struct SyncStateWrapper {
state: SyncState,
}
impl SyncStateWrapper {
/// Create a new wrapper for SyncState::Idle
pub fn idle() -> Self {
SyncStateWrapper {
state: SyncState::Idle,
}
}
/// Set the new state's value, making sure `is_idle` gets updated
pub fn set(&mut self, state: SyncState, is_idle_handle: &mut bool) {
*is_idle_handle = match state {
SyncState::Idle => true,
_ => false,
};
self.state = state;
}
/// Returns the internal state's value
pub fn into_inner(self) -> SyncState {
self.state
}
}
impl Deref for SyncStateWrapper {
type Target = SyncState;
fn deref(&self) -> &SyncState {
&self.state
}
}
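The intent of the wrapper is that every state transition also refreshes a cheap `is_idle` flag behind its own lock, so `is_major_importing` can be answered without touching the main state mutex (which is what previously risked deadlocks). A condensed sketch of the pattern with std's Mutex standing in for parking_lot (illustrative only; the `LightSync` plumbing around it is omitted):
use std::sync::Mutex;
enum SyncState { Idle, Busy }
struct SyncStateWrapper { state: SyncState }
impl SyncStateWrapper {
    fn idle() -> Self { SyncStateWrapper { state: SyncState::Idle } }
    // Every transition keeps the caller-provided `is_idle` flag in lock-step.
    fn set(&mut self, state: SyncState, is_idle_handle: &mut bool) {
        *is_idle_handle = match state {
            SyncState::Idle => true,
            _ => false,
        };
        self.state = state;
    }
}
fn main() {
    let state = Mutex::new(SyncStateWrapper::idle());
    let is_idle = Mutex::new(true);
    // A transition updates both values under their respective locks.
    state.lock().unwrap().set(SyncState::Busy, &mut is_idle.lock().unwrap());
    // Readers such as is_major_importing only need the cheap flag.
    assert!(!*is_idle.lock().unwrap());
}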
struct ResponseCtx<'a> {
peer: PeerId,
req_id: ReqId,
@ -235,7 +274,9 @@ pub struct LightSync<L: AsLightClient> {
pending_reqs: Mutex<HashMap<ReqId, PendingReq>>, // requests from this handler
client: Arc<L>,
rng: Mutex<OsRng>,
state: Mutex<SyncState>,
state: Mutex<SyncStateWrapper>,
// We duplicate this state tracking to avoid deadlocks in `is_major_importing`.
is_idle: Mutex<bool>,
}
#[derive(Debug, Clone)]
@ -309,16 +350,17 @@ impl<L: AsLightClient + Send + Sync> Handler for LightSync<L> {
if new_best.is_none() {
debug!(target: "sync", "No peers remain. Reverting to idle");
*self.state.lock() = SyncState::Idle;
self.set_state(&mut self.state.lock(), SyncState::Idle);
} else {
let mut state = self.state.lock();
*state = match mem::replace(&mut *state, SyncState::Idle) {
let next_state = match mem::replace(&mut *state, SyncStateWrapper::idle()).into_inner() {
SyncState::Idle => SyncState::Idle,
SyncState::AncestorSearch(search) =>
SyncState::AncestorSearch(search.requests_abandoned(unfulfilled)),
SyncState::Rounds(round) => SyncState::Rounds(round.requests_abandoned(unfulfilled)),
};
self.set_state(&mut state, next_state);
}
self.maintain_sync(ctx.as_basic());
@ -390,12 +432,13 @@ impl<L: AsLightClient + Send + Sync> Handler for LightSync<L> {
data: headers,
};
*state = match mem::replace(&mut *state, SyncState::Idle) {
let next_state = match mem::replace(&mut *state, SyncStateWrapper::idle()).into_inner() {
SyncState::Idle => SyncState::Idle,
SyncState::AncestorSearch(search) =>
SyncState::AncestorSearch(search.process_response(&ctx, &*self.client)),
SyncState::Rounds(round) => SyncState::Rounds(round.process_response(&ctx)),
};
self.set_state(&mut state, next_state);
}
self.maintain_sync(ctx.as_basic());
@ -408,12 +451,18 @@ impl<L: AsLightClient + Send + Sync> Handler for LightSync<L> {
// private helpers
impl<L: AsLightClient> LightSync<L> {
/// Sets the LightSync's state, and updates
/// `is_idle`
fn set_state(&self, state: &mut SyncStateWrapper, next_state: SyncState) {
state.set(next_state, &mut self.is_idle.lock());
}
// Begins a search for the common ancestor and our best block.
// does not lock state, instead has a mutable reference to it passed.
fn begin_search(&self, state: &mut SyncState) {
fn begin_search(&self, state: &mut SyncStateWrapper) {
if let None = *self.best_seen.lock() {
// no peers.
*state = SyncState::Idle;
self.set_state(state, SyncState::Idle);
return;
}
@ -422,7 +471,8 @@ impl<L: AsLightClient> LightSync<L> {
trace!(target: "sync", "Beginning search for common ancestor from {:?}",
(chain_info.best_block_number, chain_info.best_block_hash));
*state = SyncState::AncestorSearch(AncestorSearch::begin(chain_info.best_block_number));
let next_state = SyncState::AncestorSearch(AncestorSearch::begin(chain_info.best_block_number));
self.set_state(state, next_state);
}
// handles request dispatch, block import, state machine transitions, and timeouts.
@ -435,7 +485,7 @@ impl<L: AsLightClient> LightSync<L> {
let chain_info = client.chain_info();
let mut state = self.state.lock();
debug!(target: "sync", "Maintaining sync ({:?})", &*state);
debug!(target: "sync", "Maintaining sync ({:?})", **state);
// drain any pending blocks into the queue.
{
@ -445,11 +495,12 @@ impl<L: AsLightClient> LightSync<L> {
loop {
if client.queue_info().is_full() { break }
*state = match mem::replace(&mut *state, SyncState::Idle) {
let next_state = match mem::replace(&mut *state, SyncStateWrapper::idle()).into_inner() {
SyncState::Rounds(round)
=> SyncState::Rounds(round.drain(&mut sink, Some(DRAIN_AMOUNT))),
other => other,
};
self.set_state(&mut state, next_state);
if sink.is_empty() { break }
trace!(target: "sync", "Drained {} headers to import", sink.len());
@ -483,15 +534,15 @@ impl<L: AsLightClient> LightSync<L> {
let network_score = other.as_ref().map(|target| target.head_td);
trace!(target: "sync", "No target to sync to. Network score: {:?}, Local score: {:?}",
network_score, best_td);
*state = SyncState::Idle;
self.set_state(&mut state, SyncState::Idle);
return;
}
};
match mem::replace(&mut *state, SyncState::Idle) {
match mem::replace(&mut *state, SyncStateWrapper::idle()).into_inner() {
SyncState::Rounds(SyncRound::Abort(reason, remaining)) => {
if remaining.len() > 0 {
*state = SyncState::Rounds(SyncRound::Abort(reason, remaining));
self.set_state(&mut state, SyncState::Rounds(SyncRound::Abort(reason, remaining)));
return;
}
@ -505,7 +556,7 @@ impl<L: AsLightClient> LightSync<L> {
AbortReason::NoResponses => {}
AbortReason::TargetReached => {
debug!(target: "sync", "Sync target reached. Going idle");
*state = SyncState::Idle;
self.set_state(&mut state, SyncState::Idle);
return;
}
}
@ -514,15 +565,15 @@ impl<L: AsLightClient> LightSync<L> {
self.begin_search(&mut state);
}
SyncState::AncestorSearch(AncestorSearch::FoundCommon(num, hash)) => {
*state = SyncState::Rounds(SyncRound::begin((num, hash), sync_target));
self.set_state(&mut state, SyncState::Rounds(SyncRound::begin((num, hash), sync_target)));
}
SyncState::AncestorSearch(AncestorSearch::Genesis) => {
// Same here.
let g_hash = chain_info.genesis_hash;
*state = SyncState::Rounds(SyncRound::begin((0, g_hash), sync_target));
self.set_state(&mut state, SyncState::Rounds(SyncRound::begin((0, g_hash), sync_target)));
}
SyncState::Idle => self.begin_search(&mut state),
other => *state = other, // restore displaced state.
other => self.set_state(&mut state, other), // restore displaced state.
}
}
@ -543,12 +594,13 @@ impl<L: AsLightClient> LightSync<L> {
}
drop(pending_reqs);
*state = match mem::replace(&mut *state, SyncState::Idle) {
let next_state = match mem::replace(&mut *state, SyncStateWrapper::idle()).into_inner() {
SyncState::Idle => SyncState::Idle,
SyncState::AncestorSearch(search) =>
SyncState::AncestorSearch(search.requests_abandoned(&unfulfilled)),
SyncState::Rounds(round) => SyncState::Rounds(round.requests_abandoned(&unfulfilled)),
};
self.set_state(&mut state, next_state);
}
}
@ -605,34 +657,14 @@ impl<L: AsLightClient> LightSync<L> {
None
};
*state = match mem::replace(&mut *state, SyncState::Idle) {
let next_state = match mem::replace(&mut *state, SyncStateWrapper::idle()).into_inner() {
SyncState::Rounds(round) =>
SyncState::Rounds(round.dispatch_requests(dispatcher)),
SyncState::AncestorSearch(search) =>
SyncState::AncestorSearch(search.dispatch_request(dispatcher)),
other => other,
};
}
}
fn is_major_importing_do_wait(&self, wait: bool) -> bool {
const EMPTY_QUEUE: usize = 3;
if self.client.as_light_client().queue_info().unverified_queue_size > EMPTY_QUEUE {
return true;
}
let mg_state = if wait {
self.state.lock()
} else {
if let Some(mg_state) = self.state.try_lock() {
mg_state
} else {
return false;
}
};
match *mg_state {
SyncState::Idle => false,
_ => true,
self.set_state(&mut state, next_state);
}
}
}
@ -651,7 +683,8 @@ impl<L: AsLightClient> LightSync<L> {
pending_reqs: Mutex::new(HashMap::new()),
client: client,
rng: Mutex::new(OsRng::new()?),
state: Mutex::new(SyncState::Idle),
state: Mutex::new(SyncStateWrapper::idle()),
is_idle: Mutex::new(true),
})
}
}
@ -666,9 +699,6 @@ pub trait SyncInfo {
/// Whether major sync is underway.
fn is_major_importing(&self) -> bool;
/// Whether major sync is underway, skipping some synchronization.
fn is_major_importing_no_sync(&self) -> bool;
}
impl<L: AsLightClient> SyncInfo for LightSync<L> {
@ -681,11 +711,13 @@ impl<L: AsLightClient> SyncInfo for LightSync<L> {
}
fn is_major_importing(&self) -> bool {
self.is_major_importing_do_wait(true)
}
const EMPTY_QUEUE: usize = 3;
fn is_major_importing_no_sync(&self) -> bool {
self.is_major_importing_do_wait(false)
let queue_info = self.client.as_light_client().queue_info();
let is_verifying = queue_info.unverified_queue_size + queue_info.verified_queue_size > EMPTY_QUEUE;
let is_syncing = !*self.is_idle.lock();
is_verifying || is_syncing
}
}

View File

@ -179,7 +179,7 @@ mod tests {
parent_hash = Some(header.hash());
encoded::Header::new(::rlp::encode(&header).into_vec())
encoded::Header::new(::rlp::encode(&header))
}).collect();
assert!(verify(&headers, &request).is_ok());
@ -205,7 +205,7 @@ mod tests {
parent_hash = Some(header.hash());
encoded::Header::new(::rlp::encode(&header).into_vec())
encoded::Header::new(::rlp::encode(&header))
}).collect();
assert!(verify(&headers, &request).is_ok());
@ -231,7 +231,7 @@ mod tests {
parent_hash = Some(header.hash());
encoded::Header::new(::rlp::encode(&header).into_vec())
encoded::Header::new(::rlp::encode(&header))
}).collect();
assert_eq!(verify(&headers, &request), Err(BasicError::TooManyHeaders(20, 25)));
@ -250,7 +250,7 @@ mod tests {
let mut header = Header::default();
header.set_number(x);
encoded::Header::new(::rlp::encode(&header).into_vec())
encoded::Header::new(::rlp::encode(&header))
}).collect();
assert_eq!(verify(&headers, &request), Err(BasicError::WrongSkip(5, Some(2))));

View File

@ -225,32 +225,28 @@ fn propagate_blocks() {
#[test]
fn restart_on_malformed_block() {
::env_logger::try_init().ok();
let mut net = TestNet::new(2);
net.peer(1).chain.add_blocks(10, EachBlockWith::Uncle);
net.peer(1).chain.corrupt_block(6);
net.peer(1).chain.add_blocks(5, EachBlockWith::Nothing);
net.peer(1).chain.add_block(EachBlockWith::Nothing, |mut header| {
header.set_extra_data(b"This extra data is way too long to be considered valid".to_vec());
header
});
net.sync_steps(20);
assert_eq!(net.peer(0).chain.chain_info().best_block_number, 5);
// This gets accepted just fine since the TestBlockChainClient performs no validation.
// Probably remove this test?
assert_eq!(net.peer(0).chain.chain_info().best_block_number, 6);
}
#[test]
fn restart_on_broken_chain() {
fn reject_on_broken_chain() {
let mut net = TestNet::new(2);
net.peer(1).chain.add_blocks(10, EachBlockWith::Uncle);
net.peer(1).chain.add_blocks(10, EachBlockWith::Nothing);
net.peer(1).chain.corrupt_block_parent(6);
net.sync_steps(20);
assert_eq!(net.peer(0).chain.chain_info().best_block_number, 5);
}
#[test]
fn high_td_attach() {
let mut net = TestNet::new(2);
net.peer(1).chain.add_blocks(10, EachBlockWith::Uncle);
net.peer(1).chain.corrupt_block_parent(6);
net.sync_steps(20);
assert_eq!(net.peer(0).chain.chain_info().best_block_number, 5);
assert_eq!(net.peer(0).chain.chain_info().best_block_number, 0);
}
#[test]

View File

@ -10,7 +10,7 @@ ethkey = { path = "../../ethkey" }
evm = { path = "../evm" }
heapsize = "0.4"
keccak-hash = "0.1"
rlp = { version = "0.2.4", features = ["ethereum"] }
rlp = { version = "0.3.0", features = ["ethereum"] }
unexpected = { path = "../../util/unexpected" }
ethereum-types = "0.4"

View File

@ -90,8 +90,8 @@ pub mod signature {
match v {
v if v == 27 => 0,
v if v == 28 => 1,
v if v > 36 => ((v - 1) % 2) as u8,
_ => 4
v if v >= 35 => ((v - 1) % 2) as u8,
_ => 4
}
}
}
@ -364,7 +364,7 @@ impl UnverifiedTransaction {
pub fn chain_id(&self) -> Option<u64> {
match self.v {
v if self.is_unsigned() => Some(v),
v if v > 36 => Some((v - 35) / 2),
v if v >= 35 => Some((v - 35) / 2),
_ => None,
}
}
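The guard change from `v > 36` to `v >= 35` matters because EIP-155 encodes v = chain_id * 2 + 35 + recovery_id, so a chain id of 0 legitimately produces v = 35 or 36 — exactly the values the old bound rejected (hence the `signing_eip155_zero_chainid` test below). A small sketch of the arithmetic, with free functions standing in for the methods above (illustrative only):
// Extract the EIP-155 chain id from v; 27/28 are pre-EIP-155 and carry none.
fn chain_id(v: u64) -> Option<u64> {
    match v {
        v if v >= 35 => Some((v - 35) / 2),
        _ => None,
    }
}
// Recover the 0/1 recovery id from v, mirroring the signature helper above.
fn recovery_id(v: u64) -> u8 {
    match v {
        27 => 0,
        28 => 1,
        v if v >= 35 => ((v - 1) % 2) as u8,
        _ => 4,
    }
}
fn main() {
    assert_eq!(chain_id(35), Some(0)); // chain id 0, recovery id 0
    assert_eq!(chain_id(36), Some(0)); // chain id 0, recovery id 1
    assert_eq!(chain_id(37), Some(1)); // mainnet uses v in {37, 38}
    assert_eq!(recovery_id(35), 0);
    assert_eq!(recovery_id(36), 1);
}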
@ -583,6 +583,27 @@ mod tests {
assert_eq!(t.chain_id(), None);
}
#[test]
fn signing_eip155_zero_chainid() {
use ethkey::{Random, Generator};
let key = Random.generate().unwrap();
let t = Transaction {
action: Action::Create,
nonce: U256::from(42),
gas_price: U256::from(3000),
gas: U256::from(50_000),
value: U256::from(1),
data: b"Hello!".to_vec()
};
let hash = t.hash(Some(0));
let sig = ::ethkey::sign(&key.secret(), &hash).unwrap();
let u = t.with_signature(sig, Some(0));
assert!(SignedTransaction::new(u).is_ok());
}
#[test]
fn signing() {
use ethkey::{Random, Generator};
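The relaxed `v >= 35` guards follow from the EIP-155 encoding `v = chain_id * 2 + 35 + recovery_id`: a chain id of 0 produces v values of 35 or 36, which the old `v > 36` guards misclassified (no chain id, invalid recovery id), hence the new `signing_eip155_zero_chainid` test. A standalone sketch of that arithmetic, with the unsigned-transaction branch omitted for brevity:

```rust
/// Recovery id ("standard v") extracted from a transaction's v value.
fn standard_v(v: u64) -> u8 {
    match v {
        27 => 0,
        28 => 1,
        v if v >= 35 => ((v - 1) % 2) as u8, // EIP-155 replay-protected values
        _ => 4,                              // sentinel for invalid values
    }
}

/// Chain id encoded in v, if any (pre-EIP-155 values 27/28 carry none).
fn chain_id(v: u64) -> Option<u64> {
    if v >= 35 { Some((v - 35) / 2) } else { None }
}

fn main() {
    // chain_id 0: v is 35 (recovery id 0) or 36 (recovery id 1).
    assert_eq!((chain_id(35), standard_v(35)), (Some(0), 0));
    assert_eq!((chain_id(36), standard_v(36)), (Some(0), 1));
    // Foundation mainnet (chain_id 1): v is 37 or 38.
    assert_eq!((chain_id(37), standard_v(37)), (Some(1), 0));
    assert_eq!(chain_id(27), None); // legacy, unprotected signature
}
```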

View File

@ -5,7 +5,7 @@ version = "0.1.0"
authors = ["Parity Technologies <admin@parity.io>"]
[dependencies]
rlp = { version = "0.2.4", features = ["ethereum"] }
rlp = { version = "0.3.0", features = ["ethereum"] }
rlp_derive = { path = "../../util/rlp_derive" }
parity-bytes = "0.1"
ethereum-types = "0.4"

View File

@ -7,10 +7,10 @@ authors = ["Parity Technologies <admin@parity.io>"]
byteorder = "1.0"
parity-bytes = "0.1"
ethereum-types = "0.4"
patricia-trie = "0.2.1"
patricia-trie = "0.3.0"
patricia-trie-ethereum = { path = "../../util/patricia-trie-ethereum" }
log = "0.4"
common-types = { path = "../types" }
ethjson = { path = "../../json" }
rlp = { version = "0.2.4", features = ["ethereum"] }
rlp = { version = "0.3.0", features = ["ethereum"] }
keccak-hash = "0.1"

View File

@ -140,10 +140,10 @@ pub trait Ext {
fn depth(&self) -> usize;
/// Increments sstore refunds counter.
fn add_sstore_refund(&mut self, value: U256);
fn add_sstore_refund(&mut self, value: usize);
/// Decrements sstore refunds counter.
fn sub_sstore_refund(&mut self, value: U256);
fn sub_sstore_refund(&mut self, value: usize);
/// Decide if any more operations should be traced. Passthrough for the VM trace.
fn trace_next_instruction(&mut self, _pc: usize, _instruction: u8, _current_gas: U256) -> bool { false }

View File

@ -56,7 +56,7 @@ pub struct FakeExt {
pub store: HashMap<H256, H256>,
pub suicides: HashSet<Address>,
pub calls: HashSet<FakeCall>,
pub sstore_clears: U256,
pub sstore_clears: i128,
pub depth: usize,
pub blockhashes: HashMap<U256, H256>,
pub codes: HashMap<Address, Arc<Bytes>>,
@ -220,12 +220,12 @@ impl Ext for FakeExt {
self.is_static
}
fn add_sstore_refund(&mut self, value: U256) {
self.sstore_clears = self.sstore_clears + value;
fn add_sstore_refund(&mut self, value: usize) {
self.sstore_clears += value as i128;
}
fn sub_sstore_refund(&mut self, value: U256) {
self.sstore_clears = self.sstore_clears - value;
fn sub_sstore_refund(&mut self, value: usize) {
self.sstore_clears -= value as i128;
}
fn trace_next_instruction(&mut self, _pc: usize, _instruction: u8, _gas: U256) -> bool {

View File

@ -285,7 +285,7 @@ impl<'a> Runtime<'a> {
self.ext.set_storage(key, val).map_err(|_| Error::StorageUpdateError)?;
if former_val != H256::zero() && val == H256::zero() {
let sstore_clears_schedule = U256::from(self.schedule().sstore_refund_gas);
let sstore_clears_schedule = self.schedule().sstore_refund_gas;
self.ext.add_sstore_refund(sstore_clears_schedule);
}
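The refund-counter changes in the hunks above fit together: the `Ext` trait now takes plain `usize` refund amounts (matching `schedule().sstore_refund_gas`, which the `Runtime` storage path now passes through unconverted), and the test double tracks them in a signed `i128` so a removal can temporarily outweigh earlier additions, as net gas metering for SSTORE requires. A minimal sketch of that bookkeeping, with hypothetical names mirroring `FakeExt`:

```rust
/// Sketch of a signed refund counter in the style of FakeExt::sstore_clears:
/// amounts arrive as usize, the running total is i128 so subtraction can
/// dip below zero in the middle of execution.
#[derive(Default)]
struct RefundCounter {
    sstore_clears: i128,
}

impl RefundCounter {
    fn add_sstore_refund(&mut self, value: usize) {
        self.sstore_clears += value as i128;
    }

    fn sub_sstore_refund(&mut self, value: usize) {
        self.sstore_clears -= value as i128;
    }
}

fn main() {
    let mut ext = RefundCounter::default();
    ext.add_sstore_refund(15_000); // e.g. a storage slot cleared
    ext.sub_sstore_refund(19_800); // later reversal under net metering
    // The intermediate total may go negative; it would only be clamped
    // when the final refund is applied at the end of the transaction.
    assert_eq!(ext.sstore_clears, -4_800);
}
```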

View File

@ -19,7 +19,7 @@ parking_lot = "0.6"
parity-crypto = "0.1"
ethereum-types = "0.4"
dir = { path = "../util/dir" }
smallvec = "0.4"
smallvec = "0.6"
parity-wordlist = "1.0"
tempdir = "0.3"

View File

@ -11,7 +11,7 @@ parity-bytes = "0.1"
ethereum-types = "0.4"
jsonrpc-core = { git = "https://github.com/paritytech/jsonrpc.git", branch = "parity-1.11" }
jsonrpc-http-server = { git = "https://github.com/paritytech/jsonrpc.git", branch = "parity-1.11" }
rlp = { version = "0.2.4", features = ["ethereum"] }
rlp = { version = "0.3.0", features = ["ethereum"] }
cid = "0.2"
multihash = "0.7"
unicase = "2.0"

View File

@ -80,7 +80,7 @@ impl IpfsHandler {
fn block_list(&self, hash: H256) -> Result<Out> {
let uncles = self.client().find_uncles(&hash).ok_or(Error::BlockNotFound)?;
Ok(Out::OctetStream(rlp::encode_list(&uncles).into_vec()))
Ok(Out::OctetStream(rlp::encode_list(&uncles)))
}
/// Get transaction by hash and return as raw binary.
@ -88,7 +88,7 @@ impl IpfsHandler {
let tx_id = TransactionId::Hash(hash);
let tx = self.client().transaction(tx_id).ok_or(Error::TransactionNotFound)?;
Ok(Out::OctetStream(rlp::encode(&*tx).into_vec()))
Ok(Out::OctetStream(rlp::encode(&*tx)))
}
/// Get state trie node by hash and return as raw binary.
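The IPFS handler picks up the same rlp 0.3 change: `rlp::encode_list` (like `encode`) now returns `Vec<u8>` directly, so the octet-stream responses are built without `.into_vec()`. A tiny sketch of the list form, again with the crate's integer `Encodable` impls standing in for uncle headers:

```rust
extern crate rlp; // assumes rlp = "0.3" in Cargo.toml

fn main() {
    // A list of three single-byte items: payload 0x01 0x02 0x03
    // behind a short-list header of 0xc0 + 3.
    let bytes: Vec<u8> = rlp::encode_list::<u64, _>(&[1u64, 2, 3]);
    assert_eq!(bytes, vec![0xc3, 0x01, 0x02, 0x03]);
}
```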

Some files were not shown because too many files have changed in this diff.