Compare commits


14 Commits
main ... v2.1.9

Author SHA1 Message Date
Afri Schoedon
af1169d2ff
patch stable release pipeline (#10021)
* version: bump stable to 2.1.9

* ci: move future releases to ethereum subdir on s3 (#10017)

* ci: move future releases to ethereum subdir on s3

* ci: redesign s3 bucket logic

* ci: use the releases bucket
2018-12-05 15:51:10 +01:00
Afri Schoedon
3eae1d3640
Stable patch release 2.1.8 (#10001)
* version: bump stable to 2.1.8

* Fix Bloom migration (#9992)

* Fix wrong block number in blooms migration

* Fix wrong `const` type (usize -> u64) 😬

* Fix daemonize (#10000)

* Revert "prevent silent errors in daemon mode, closes #9367 (#9946)"

This reverts commit 52d5278a62.

* deps(daemonize): switch back to crates.io

* move daemonize before creating account provider (#10003)

* move daemonize before creating account provider

* daemonize: add a future-proofing comment
2018-11-30 16:05:29 +01:00
Afri Schoedon
126208cc74
Backports for stable 2.1.7 (#9975)
* version: bump stable to 2.1.7

* Adjust requests costs for light client (#9925)

* PIP Table Cost relative to average peers instead of max peers

* Add tracing in PIP new_cost_table

* Update stat peer_count

* Use number of leeching peers for Light serve costs

* Fix test::light_params_load_share_depends_on_max_peers (wrong type)

* Remove (now) useless test

* Remove `load_share` from LightParams.Config
Prevent div. by 0

* Add LEECHER_COUNT_FACTOR

* PR Grumble: u64 to u32 for f64 casting

* Prevent u32 overflow for avg_peer_count

* Add tests for LightSync::Statistics

* Fix empty steps (#9939)

* Don't send empty step twice or empty step then block.

* Perform basic validation of locally sealed blocks.

* Don't include empty step twice.

* prevent silent errors in daemon mode, closes #9367 (#9946)

* Fix light client informant while syncing (#9932)

* Add `is_idle` to LightSync to check importing status

* Use SyncStateWrapper to make sure is_idle gets updates

* Update is_major_import to use verified queue size as well

* Add comment for `is_idle`

* Add Debug to `SyncStateWrapper`

* `fn get` -> `fn into_inner`

*  ci: rearrange pipeline by logic (#9970)

* ci: rearrange pipeline by logic

* ci: rename docs script

* Add readiness check for docker container (#9804)

* Update Dockerfile

Since parity is built for "mission critical use", I thought other operators may see the need for this.

Adding the `curl` and `jq` commands allows for an extremely simple health check to be usable in container orchestrators.

For example, here is a health check for a parity docker container running in Kubernetes.

This can be set up as a readiness probe that would prevent clustered nodes that aren't ready from serving traffic.

```bash
#!/bin/bash

ETH_SYNCING=$(curl -X POST --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' http://localhost:8545 -H 'Content-Type: application/json')
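# Extract the JSON-RPC "result" field; eth_syncing returns false once the node is fully synced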
RESULT=$(echo "$ETH_SYNCING" | jq -r .result)

if [ "$RESULT" == "false" ]; then
  echo "Parity is ready to start accepting traffic"
  exit 0
else
  echo "Parity is still syncing the blockchain"
  exit 1
fi
```
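As a hedged illustration of the readiness-probe idea mentioned above, the snippet below sketches how such a script could be referenced from a Kubernetes Pod spec. The script path `/check_sync.sh` and the probe timings are assumptions for illustration, not part of the original change.

```yaml
readinessProbe:
  exec:
    # assumes the health-check script is baked into the image at this path
    command: ["/bin/bash", "/check_sync.sh"]
  initialDelaySeconds: 30   # placeholder delay to let the node start up
  periodSeconds: 15         # placeholder interval between sync checks
```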

* add sync check script

* Fix docker script (#9854)


* Dockerfile: change source path of the newly added check_sync.sh (#9869)

* Do not use the home directory as the working dir in docker (#9834)

* Do not create a home directory.

* Re-add -m flag

* fix docker build (#9971)

* bump smallvec to 0.6 in ethcore-light, ethstore and whisper (#9588)

* bump smallvec to 0.6 in ethcore-light, ethstore and whisper

* bump transaction-pool

* Fix test.

* patch cargo to use tokio-proto from git repo

this makes sure we no longer depend on smallvec 0.2.1 which is
affected by https://github.com/servo/rust-smallvec/issues/96

* use patched version of untrusted 0.5.1

* ci: allow audit to fail
2018-11-28 13:14:55 +01:00
Thibaut Sardan
491f17f149
Backport Stable 2.1.6 (#9904)
* bump stable 2.1.6

* ethcore: use Machine::verify_transaction on parent block (#9900)

* ethcore: use Machine::verify_transaction on parent block

also fixes off-by-one activation of transaction permission contract

* ethcore: clarify call to verify_transaction

* fix: Intermittent failing CI due to addr in use (#9885)

Allow OS to set port at runtime

* gitlab-ci: make android release build succeed (#9743)


* use docker cargo config file for android builds

* make android build succeed

* Update light-client hardcoded headers for
foundation: #6692865, ropsten: #4417537, kovan: #9363457

* Remove rust-toolchain file (#9906)

* light-fetch: Differentiate between out-of-gas/manual throw and use required gas from response on failure (#9824)

* fix start_gas, handle OOG exceptions & NotEnoughGas

* Change START_GAS: 50_000 -> 60_000
* When an `OutOfGas` exception is received, try to double the gas until it succeeds or the block gas limit is reached
* When a `NotEnoughBaseGas` error is received, use the required gas provided in the response

* fix(light-fetch): ensure block_gas_limit is tried

Try the `block_gas_limit` before regarding the execution as an error

* Update rpc/src/v1/helpers/light_fetch.rs

Co-Authored-By: niklasad1 <niklasadolfsson1@gmail.com>

* fix #9824 merge artifacts

* simplify cargo audit

* ci: nuke the gitlab caches (#9855)
2018-11-14 14:48:29 +01:00
Afri Schoedon
aad068d2c6
version: mark 2.1.5 stable (#9821)
* version: mark 2.1.5 stable

* ci: remove failing tests for android, windows, and macos (#9788)

* ci: remove failing tests for android, windows, and macos

* ci: restore android build jobs

* Move state root verification before gas used (#9841)

* Classic.json Bootnode Update (#9828)

* fix: Update bootnodes list to only responsive nodes

* feat: Add more bootnodes to classic.json list

* feat: Add retested bootnodes
2018-11-01 15:37:22 +01:00
Afri Schoedon
bee2cb8fb4
Backports: parity beta 2.1.4 (#9787)
* version: bump parity beta to 2.1.4

* ethcore: bump ropsten forkblock checkpoint (#9775)

* ethcore: handle vm exception when estimating gas (#9615)

* removed "rustup" & added new runner tag (#9731)

* removed "rustup" & added new runner tag

* exchanged tag "rust-windows" with "windows"

* revert windows tag change

* sync: retry different peer after empty subchain heads response (#9753)

* If no subchain heads then try a different peer

* Add log when useless chain head

* Restrict ChainHead useless peer to ancient blocks

* sync: replace `limit_reorg` with `block_set` condition

* update jsonrpc-core to a1b2bb742ce16d1168669ffb13ffe856e8131228

* Allow zero chain id in EIP155 signing process (#9792)

* Allow zero chain id in EIP155 signing process

* Rename test

* Fix test failure

* Insert dev account before unlocking (#9813)
2018-10-28 12:21:33 +01:00
Afri Schoedon
18ddd7c249
Beta release 2.1.3 backports (#9749)
* parity-version: mark 2.1.3 beta as critical

* Use signed 256-bit integer for sstore gas refund substate  (#9746)

* Add signed refund

* Use signed 256-bit integer for sstore gas refund substate

* Fix tests

* Remove signed mod and use i128 directly

* Fix evm test case casting

* Fix jsontests ext signature

* Add --force to cargo audit install script (#9735)

* heads ref not present for branches beta and stable (#9741)

* aura: fix panic on extra_info with unsealed block (#9755)

* aura: fix panic when unsealed block passed to extra_info

* aura: use hex formatting for EmptyStep hashes

* aura: add test for extra_info
2018-10-15 21:44:18 +02:00
André Silva
d35f4c1f0d [beta] More backports for 2.1.2 (#9733)
* produce portable binaries (#9725)

* HF in POA Core (2018-10-22) (#9724)

https://github.com/poanetwork/poa-chain-spec/pull/87

* Use static call and apparent value transfer for block reward contract code (#9603)

* Verify block syncing responses against requests (#9670)

* sync: Validate received BlockHeaders packets against stored request.

* sync: Validate received BlockBodies and BlockReceipts.

* sync: Fix broken tests.

* sync: Unit tests for BlockDownloader::import_headers.

* sync: Unit tests for import_{bodies,receipts}.

* tests: Add missing method doc.

* Fix ancient blocks sync (#9531)

* Log block set in block_sync for easier debugging

* logging macros

* Match no args in sync logging macros

* Add QueueFull error

* Only allow importing headers if the first matches requested

* WIP

* Test for chain head gaps and log

* Calc distance even with 2 heads

* Revert previous commits, preparing simple fix

This reverts commit 5f38aa885b22ebb0e3a1d60120cea69f9f322628.

* Reject headers with no gaps when ChainHead

* Reset block sync download when queue full

* Simplify check for subchain heads

* Add comment to explain subchain heads filter

* Fix is_subchain_heads check and comment

* Prevent premature round completion after restart

This is a problem on mainnet where multiple stale peer requests will
force many rounds to complete quickly, forcing the retraction.

* Reset stale old blocks request after queue full

* Revert "Reject headers with no gaps when ChainHead"

This reverts commit 0eb865539e5dee37ab34f168f5fb643300de5ace.

* Add BlockSet to BlockDownloader logging

Currently it is difficult to debug this because there are two instances,
one for OldBlocks and one for NewBlocks. This adds the BlockSet to all
log messages for easy log filtering.

* Reset OldBlocks download from last enqueued

Previously when the ancient block queue was full it would restart the
download from the last imported block, so the ones still in the queue would be
redownloaded. Keeping the existing downloader instance and just
resetting it will start again from the last enqueued block.

* Ignore expired Body and Receipt requests

* Log when ancient block download being restarted

* Only request old blocks from peers with >= difficulty

https://github.com/paritytech/parity-ethereum/pull/9226 might be too
permissive and causing the retraction behaviour soon after the
fork block. With this change the peer difficulty has to be greater than
or equal to our syncing difficulty, so should still fix
https://github.com/paritytech/parity-ethereum/issues/9225

* Some logging and clear stalled blocks head

* Revert "Some logging and clear stalled blocks head"

This reverts commit 757641d9b817ae8b63fec684759b0815af9c4d0e.

* Reset stalled header if useless more than once

* Store useless headers in HashSet

* Add sync target to logging macro

* Don't disable useless peer and fix log macro

* Clear useless headers on reset and comments

* Use custom error for collecting blocks

Previously we reused BlockImportError; however, only the Invalid case was used, and
this made little sense with the QueueFull error.

* Remove blank line

* Test for reset sync after consecutive useless headers

* Don't reset after consecutive headers when chain head

* Delete commented out imports

* Return DownloadAction from collect_blocks instead of error

* Don't reset after round complete, was causing test hangs

* Add comment explaining reset after useless

* Replace HashSet with counter for useless headers

* Refactor sync reset on bad block/queue full

* Add missing target for log message

* Fix compiler errors and test after merge

* ethcore: revert ethereum tests submodule update

* Add hardcoded headers (#9730)

* add foundation hardcoded header #6486017

* add ropsten hardcoded headers #4202497

* add kovan hardcoded headers #9023489

* gitlab ci: releasable_branches: change variables condition to schedule (#9729)
2018-10-10 18:55:55 +02:00
Afri Schoedon
52fe28a052
Backports for beta 2.1.2 (#9649)
* parity-version: bump beta to 2.1.2

* docs(rpc): push the branch along with tags (#9578)

* docs(rpc): push the branch along with tags

* ci: remove old rpc docs script

* Remove snapcraft clean (#9585)

* Revert " add snapcraft package image (master) (#9584)"

This reverts commit ceaedbbd7f.

* Update package-snap.sh

* Update .gitlab-ci.yml

* ci: fix regex 🙄 (#9597)

* docs(rpc): annotate tag with the provided message (#9601)

* Update ropsten.json (#9602)

* HF in POA Sokol (2018-09-19) (#9607)

https://github.com/poanetwork/poa-chain-spec/pull/86

* fix(network): don't disconnect reserved peers (#9608)

The priority of && and || was borked.

* fix failing node-table tests on mac os, closes #9632 (#9633)

* ethcore-io retries failed work steal (#9651)

* ethcore-io uses newer version of crossbeam && retries failed work steal

* ethcore-io non-mio service uses newer crossbeam

* remove master from releasable branches (#9655)

* remove master from releasable branches

needs backporting to beta;
fixes https://gitlab.parity.io/parity/parity-ethereum/-/jobs/101065 etc.

* add except for snap packages for master

* Test fix for windows cache name... (#9658)

* Test fix for windows cache name...

* Fix variable name.

* fix(light_fetch): avoid race with BlockNumber::Latest (#9665)

* Calculate sha3 instead of sha256 for push-release. (#9673)

* Calculate sha3 instead of sha256 for push-release.

* Add pushes to the script.

* Hardfork the testnets (#9562)

* ethcore: propose hardfork block number 4230000 for ropsten

* ethcore: propose hardfork block number 9000000 for kovan

* ethcore: enable kip-4 and kip-6 on kovan

* ethcore: bump kovan hardfork to block 9.2M

* ethcore: fix ropsten constantinople block number to 4.2M

* ethcore: disable difficulty_test_ropsten until ethereum/tests are updated upstream

* ci: fix push script (#9679)

* ci: fix push script

* Fix copying & running on windows.

* CI: Remove unnecessary pipes (#9681)

* ci: reduce gitlab pipelines significantly

* ci: build pipeline for PR

* ci: remove dead weight

* ci: remove github release script

* ci: remove forever broken aura tests

* ci: add random stuff to the end of the pipes

* ci: add wind and mac to the end of the pipe

* ci: remove snap artifacts

* ci: (re)move dockerfiles

* ci: clarify job names

* ci: add cargo audit job

* ci: make audit script executable

* ci: ignore snap and docker files for rust check

* ci: simplify audit script

* ci: rename misc to optional

* ci: add publish script to releaseable branches

* ci: more verbose cp command for windows build

* ci: fix weird binary checksum logic in push script

* ci: fix regex in push script for windows

* ci: simplify gitlab caching

* docs: align README with ci changes

* ci: specify default cargo target dir

* ci: print verbose environment

* ci: proper naming of scripts

* ci: restore docker files

* ci: use docker hub file

* ci: use cargo home instead of cargo target dir

* ci: touch random rust file to trigger real builds

* ci: set cargo target dir for audit script

* ci: remove temp file

* ci: don't export the cargo target dir in the audit script

* ci: fix windows unbound variable

* docs: fix gitlab badge path

* rename deprecated gitlab ci variables

https://docs.gitlab.com/ee/ci/variables/#9-0-renaming

* ci: fix git compare for nightly builds

* test: skip c++ example for all platforms but linux

* ci: add random rust file to trigger tests

* ci: remove random rust file

* disable cpp lib test for mac, win and beta (#9686)

* cleanup ci merge

* ci: fix tests

* fix bad-block reporting no reason (#9638)

* ethcore: fix detection of major import (#9552)

* sync: set state to idle after sync is completed

* sync: refactor sync reset

* Don't hash the init_code of CREATE. (#9688)

* Docker: run as parity user (#9689)

* Implement CREATE2 gas changes and fix some potential overflowing (#9694)

* Implement CREATE2 gas changes and fix some potential overflowing

* Ignore create2 state tests

* Split CREATE and CREATE2 in gasometer

* Generalize rounding (x + 31) / 32 to to_word_size

* make instantSeal engine backwards compatible, closes #9696 (#9700)

* ethcore: delay ropsten hardfork (#9704)

* fix (light/provider) : Make `read_only executions` read-only (#9591)

* `ExecutionsRequest` from light-clients as read-only

This changes all `ExecutionRequests` from light-clients to be executed
as read-only, which the `virtual` flag == true ensures.

This allows the current transaction to always succeed.

Note, this only affects `eth_estimateGas` and `eth_call` AFAIK.

* grumbles(revert renaming) : TransactionProof

* grumbles(trace) : remove incorrect trace

* grumbles(state/prove_tx) : explicit `virt`

Remove the boolean flag that determined whether `state::prove_transaction`
should be executed in a virtual context or not.

Because of that, also rename the function to
`state::prove_transaction_virtual` to make this clearer.

* CI: Skip docs job for nightly (#9693)

* ci: force-tag wiki changes

* ci: force-tag wiki changes

* ci: skip docs job for master and nightly

* ci: revert docs job checking for nightly tag

* ci: exclude docs job from nightly builds in gitlab script
2018-10-09 15:04:30 +02:00
gabriel klawitter
8e347b2602 rpc-docs should push the branch (#9611) 2018-09-21 14:38:28 +02:00
gabriel klawitter
a5dcaf7d21 beta: rpc-docs set github token (#9609) 2018-09-20 17:19:42 +03:00
Afri Schoedon
cb09330cb3
Backports for 2.1.1 beta (#9599)
* parity: bump version to 2.1.1 beta

* ci: fix regex 🙄

* docs(rpc): annotate tag with the provided message
2018-09-19 20:31:26 +02:00
Denis S. Soldatov aka General-Beck
363ad10906 add snapcraft package image (#9583)
* add snapcraft package image

* Update .gitlab-ci.yml

* remove snapcraft clean
2018-09-18 15:32:05 +02:00
Afri Schoedon
d147700046
Backports for 2.1.0 beta (#9518)
* parity-version: mark 2.1.0 track beta

* ci: update branch version references

* docker: release master to latest

* Fix checkpointing when creating contract failed (#9514)

* ci: fix json docs generation (#9515)

* fix typo in version string (#9516)

* Update patricia trie to 0.2.2 crates. Default dependencies on minor
version only.

* Putting back ethereum tests to the right commit

* Enable all Constantinople hard fork changes in constantinople_test.json (#9505)

* Enable all Constantinople hard fork changes in constantinople_test.json

* Address grumbles

* Remove EIP-210 activation

* 8m -> 5m

* Temporarily add back eip210 transition so we can get test passed

* Add eip210_test and remove eip210 transition from const_test

* In create, the memory calculation is the same as for create2 because the additional parameter was popped before. (#9522)

* deps: bump fs-swap and kvdb-rocksdb

* Multithreaded snapshot creation (#9239)

* Add Progress to Snapshot Secondary chunks creation

* Use half of CPUs to multithread snapshot creation

* Use env var to define number of threads

* info to debug logs

* Add Snapshot threads as CLI option

* Randomize chunks per thread

* Remove randomness, add debugging

* Add warning

* Add tracing

* Use parity-common fix seek branch

* Fix log

* Fix tests

* Fix tests

* PR Grumbles

* PR Grumble II

* Update Cargo.lock

* PR Grumbles

* Default snapshot threads to half number of CPUs

* Fix default snapshot threads // min 1

* correct before_script for nightly build versions (#9543)

- fix gitlab array of strings syntax error
- get proper commit id
- avoid colon in strings

* Remove initial token for WS. (#9545)

* version: mark release critical

* ci: fix rpc docs generation 2 (#9550)

* Improve P2P discovery (#9526)

* Add `target` to Rust traces

* network-devp2p: Don't remove discovery peer in main sync

* network-p2p: Refresh discovery more often

* Update Peer discovery protocol

* Run discovery more often when not enough nodes connected

* Start the first discovery early

* Update fast discovery rate

* Fix tests

* Fix `ping` tests

* Fixing remote Node address ; adding PingPong round

* Fix tests: update new +1 PingPong round

* Increase slow Discovery rate
Check in flight FindNode before pings

* Add `deprecated` to deprecated_echo_hash

* Refactor `discovery_round` branching

* net_version caches network_id to avoid redundant acquire of sync read lock (#9544)

* net_version caches network_id to avoid redundant acquire of sync read lock, #8746

* use lower_hex display formatting for net_peerCount rpc method

* Increase Gas-floor-target and Gas Cap (#9564)

+ Gas-floor-target increased to 8M by default

+ Gas-cap increased to 10M by default

* Revert to old parity-tokio-ipc.

* Downgrade named pipes.
2018-09-17 19:22:30 +02:00
1591 changed files with 270632 additions and 227229 deletions


@@ -1,3 +1,3 @@
[target.x86_64-pc-windows-msvc]
# Link the C runtime statically ; https://github.com/openethereum/parity-ethereum/issues/6643
# Link the C runtime statically ; https://github.com/paritytech/parity-ethereum/issues/6643
rustflags = ["-Ctarget-feature=+crt-static"]


@@ -1,2 +0,0 @@
# Reformat the source code
610d9baba4af83b5767c659ca2ccfed337af1056


@@ -2,11 +2,11 @@
## 1. Purpose
A primary goal of OpenEthereum is to be inclusive to the largest number of contributors, with the most varied and diverse backgrounds possible. As such, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, ability, ethnicity, socioeconomic status, and religion (or lack thereof).
A primary goal of Parity is to be inclusive to the largest number of contributors, with the most varied and diverse backgrounds possible. As such, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, ability, ethnicity, socioeconomic status, and religion (or lack thereof).
This code of conduct outlines our expectations for all those who participate in our community, as well as the consequences for unacceptable behavior.
We invite all those who participate in OpenEthereum to help us create safe and positive experiences for everyone.
We invite all those who participate in Parity to help us create safe and positive experiences for everyone.
## 2. Open Source Citizenship
@@ -63,7 +63,7 @@ Additionally, community organizers are available to help community members engag
## 7. Addressing Grievances
If you feel you have been falsely or unfairly accused of violating this Code of Conduct, you should notify OpenEthereum Technologies with a concise description of your grievance. Your grievance will be handled in accordance with our existing governing policies.
If you feel you have been falsely or unfairly accused of violating this Code of Conduct, you should notify Parity Technologies with a concise description of your grievance. Your grievance will be handled in accordance with our existing governing policies.
## 8. Scope
@@ -73,7 +73,7 @@ This code of conduct and its related procedures also applies to unacceptable beh
## 9. Contact info
You can contact OpenEthereum via Email: community@parity.io
You can contact Parity via Email: community@parity.io
## 10. License and attribution


@@ -2,7 +2,7 @@
## Do you have a question?
Check out our [Beginner Introduction](https://openethereum.github.io/Beginner-Introduction), [Configuration](https://openethereum.github.io//Configuring-OpenEthereum), and [FAQ](https://openethereum.github.io/FAQ) articles on our [wiki](https://openethereum.github.io/)!
Check out our [Basic Usage](https://wiki.parity.io/Basic-Usage), [Configuration](https://wiki.parity.io/Configuring-Parity-Ethereum), and [FAQ](https://wiki.parity.io/FAQ) articles on our [wiki](https://wiki.parity.io/)!
See also frequently asked questions [tagged with `parity`](https://ethereum.stackexchange.com/questions/tagged/parity?sort=votes&pageSize=50) on Stack Exchange.
@@ -10,11 +10,11 @@ See also frequently asked questions [tagged with `parity`](https://ethereum.stac
Do **not** open an issue on Github if you think your discovered bug could be a **security-relevant vulnerability**. Please, read our [security policy](../SECURITY.md) instead.
Otherwise, just create a [new issue](https://github.com/openethereum/openethereum/issues/new) in our repository and state:
Otherwise, just create a [new issue](https://github.com/paritytech/parity-ethereum/issues/new) in our repository and state:
- What's your OpenEthereum version?
- What's your Parity Ethereum version?
- What's your operating system and version?
- How did you install OpenEthereum?
- How did you install Parity Ethereum?
- Is your node fully synchronized?
- Did you try turning it off and on again?
@@ -22,44 +22,9 @@ Also, try to include **steps to reproduce** the issue and expand on the **actual
## Contribute!
If you would like to contribute to OpenEthereum, please **fork it**, fix bugs or implement features, and [propose a pull request](https://github.com/openethereum/openethereum/compare).
If you would like to contribute to Parity Ethereum, please **fork it**, fix bugs or implement features, and [propose a pull request](https://github.com/paritytech/parity-ethereum/compare).
### Labels & Milestones
We use [labels](https://github.com/openethereum/openethereum/labels) to manage PRs and issues and communicate the state of a PR. Please familiarize yourself with them. Furthermore, we are organizing issues in [milestones](https://github.com/openethereum/openethereum/milestones). The best way to get started is to pick a ticket from the current milestone tagged [`easy`](https://github.com/openethereum/openethereum/labels/Q2-easy%20%F0%9F%92%83) and get going, or [`mentor`](https://github.com/openethereum/openethereum/labels/Q1-mentor%20%F0%9F%95%BA) and get in contact with the mentor offering their support on that larger task.
### Rules
There are a few basic ground-rules for contributors (including the maintainer(s) of the project):
* **No pushing directly to the master branch**.
* **All modifications** must be made in a **pull-request** to solicit feedback from other contributors.
* Pull-requests cannot be merged before CI runs green and two reviewers have given their approval.
* All changed code should be formatted by running `cargo fmt -- --config=merge_imports=true`
### Recommendations
* **Non-master branch names** *should* be prefixed with a short name moniker, followed by the associated Github Issue ID (if any), and a brief description of the task using the format `<GITHUB_USERNAME>-<ISSUE_ID>-<BRIEF_DESCRIPTION>` (e.g. `gavin-123-readme`). The name moniker helps people to inquire about their unfinished work, and the GitHub Issue ID helps your future self and other developers (particularly those who are onboarding) find out about and understand the original scope of the task, and where it fits into Parity Ethereum [Projects](https://github.com/openethereum/openethereum/projects).
* **Remove stale branches periodically**
### Preparing Pull Requests
* If your PR does not alter any logic (e.g. comments, dependencies, docs), then it may be tagged [`insubstantial`](https://github.com/openethereum/openethereum/pulls?q=is%3Aopen+is%3Apr+label%3A%22A2-insubstantial+%F0%9F%91%B6%22).
* Once a PR is ready for review please add the [`pleasereview`](https://github.com/openethereum/openethereum/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3A%22A0-pleasereview+%F0%9F%A4%93%22+) label.
### Reviewing Pull Requests*:
* At least two reviewers are required to review PRs (even for PRs tagged [`insubstantial`](https://github.com/openethereum/openethereum/pulls?q=is%3Aopen+is%3Apr+label%3A%22A2-insubstantial+%F0%9F%91%B6%22)).
When doing a review, make sure to look for any:
* Buggy behavior.
* Undue maintenance burden.
* Breaking with house coding style.
* Pessimization (i.e. reduction of speed as measured in the project's benchmarks).
* Breaking changes should be carefully reviewed and tagged as such so they end up in the [changelog](../CHANGELOG.md).
* Uselessness (i.e. it does not strictly add a feature or fix a known issue).
Please, refer to the [Coding Guide](https://wiki.parity.io/Coding-guide) in our wiki for more details about hacking on Parity.
## License.


@@ -1,8 +1,6 @@
For questions please use https://discord.io/openethereum; issues are for bugs and feature requests.
_Before filing a new issue, please **provide the following information**._
- **OpenEthereum version (>=3.1.0)**: 0.0.0
- **Parity Ethereum version**: 0.0.0
- **Operating system**: Windows / MacOS / Linux
- **Installation**: homebrew / one-line installer / built from source
- **Fully synchronized**: no / yes


@@ -1,33 +0,0 @@
name: Build and Test Suite on Windows
on:
push:
branches:
- main
- dev
jobs:
build-tests:
name: Test and Build
strategy:
matrix:
platform:
- windows2019 # custom runner
toolchain:
- 1.52.1
runs-on: ${{ matrix.platform }}
steps:
- name: Checkout sources
uses: actions/checkout@main
with:
submodules: true
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: ${{ matrix.toolchain }}
profile: minimal
override: true
- name: Build tests
uses: actions-rs/cargo@v1
with:
command: test
args: --locked --all --release --features "json-tests" --verbose --no-run


@@ -1,40 +0,0 @@
name: Build and Test Suite
on:
pull_request:
push:
branches:
- main
- dev
jobs:
build-tests:
name: Test and Build
strategy:
matrix:
platform:
- ubuntu-16.04
- macos-latest
toolchain:
- 1.52.1
runs-on: ${{ matrix.platform }}
steps:
- name: Checkout sources
uses: actions/checkout@main
with:
submodules: true
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: ${{ matrix.toolchain }}
profile: minimal
override: true
- name: Build tests
uses: actions-rs/cargo@v1
with:
command: test
args: --locked --all --release --features "json-tests" --verbose --no-run
- name: Run tests for ${{ matrix.platform }}
uses: actions-rs/cargo@v1
with:
command: test
args: --locked --all --release --features "json-tests" --verbose


@@ -1,285 +0,0 @@
name: Build Release Suite
on:
push:
tags:
- v*
# Global vars
env:
AWS_REGION: "us-east-1"
AWS_S3_ARTIFACTS_BUCKET: "openethereum-releases"
ACTIONS_ALLOW_UNSECURE_COMMANDS: true
jobs:
build:
name: Build Release
strategy:
matrix:
platform:
- ubuntu-16.04
- macos-latest
toolchain:
- 1.52.1
runs-on: ${{ matrix.platform }}
steps:
- name: Checkout sources
uses: actions/checkout@main
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: ${{ matrix.toolchain }}
profile: minimal
override: true
# ==============================
# Windows Build
# ==============================
# - name: Install LLVM for Windows
# if: matrix.platform == 'windows2019'
# run: choco install llvm
# - name: Build OpenEthereum for Windows
# if: matrix.platform == 'windows2019'
# run: sh scripts/actions/build-windows.sh ${{matrix.platform}}
# - name: Upload Windows build
# uses: actions/upload-artifact@v2
# if: matrix.platform == 'windows2019'
# with:
# name: windows-artifacts
# path: artifacts
# ==============================
# Linux/Macos Build
# ==============================
- name: Build OpenEthereum for ${{matrix.platform}}
if: matrix.platform != 'windows2019'
run: sh scripts/actions/build-linux.sh ${{matrix.platform}}
- name: Upload Linux build
uses: actions/upload-artifact@v2
if: matrix.platform == 'ubuntu-16.04'
with:
name: linux-artifacts
path: artifacts
- name: Upload MacOS build
uses: actions/upload-artifact@v2
if: matrix.platform == 'macos-latest'
with:
name: macos-artifacts
path: artifacts
zip-artifacts-creator:
name: Create zip artifacts
needs: build
runs-on: ubuntu-16.04
steps:
- name: Set env
run: echo "RELEASE_VERSION=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
# ==============================
# Create ZIP files
# ==============================
# - name: Download Windows artifacts
# uses: actions/download-artifact@v2
# with:
# name: windows-artifacts
# path: windows-artifacts
- name: Download Linux artifacts
uses: actions/download-artifact@v2
with:
name: linux-artifacts
path: linux-artifacts
- name: Download MacOS artifacts
uses: actions/download-artifact@v2
with:
name: macos-artifacts
path: macos-artifacts
- name: Display structure of downloaded files
run: ls
- name: Create zip Linux
id: create_zip_linux
run: |
cd linux-artifacts/
zip -rT openethereum-linux-${{ env.RELEASE_VERSION }}.zip *
ls openethereum-linux-${{ env.RELEASE_VERSION }}.zip
cd ..
mv linux-artifacts/openethereum-linux-${{ env.RELEASE_VERSION }}.zip .
echo "Setting outputs..."
echo ::set-output name=LINUX_ARTIFACT::openethereum-linux-${{ env.RELEASE_VERSION }}.zip
echo ::set-output name=LINUX_SHASUM::$(shasum -a 256 openethereum-linux-${{ env.RELEASE_VERSION }}.zip | awk '{print $1}')
- name: Create zip MacOS
id: create_zip_macos
run: |
cd macos-artifacts/
zip -rT openethereum-macos-${{ env.RELEASE_VERSION }}.zip *
ls openethereum-macos-${{ env.RELEASE_VERSION }}.zip
cd ..
mv macos-artifacts/openethereum-macos-${{ env.RELEASE_VERSION }}.zip .
echo "Setting outputs..."
echo ::set-output name=MACOS_ARTIFACT::openethereum-macos-${{ env.RELEASE_VERSION }}.zip
echo ::set-output name=MACOS_SHASUM::$(shasum -a 256 openethereum-macos-${{ env.RELEASE_VERSION }}.zip | awk '{print $1}')
# - name: Create zip Windows
# id: create_zip_windows
# run: |
# cd windows-artifacts/
# zip -rT openethereum-windows-${{ env.RELEASE_VERSION }}.zip *
# ls openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# cd ..
# mv windows-artifacts/openethereum-windows-${{ env.RELEASE_VERSION }}.zip .
# echo "Setting outputs..."
# echo ::set-output name=WINDOWS_ARTIFACT::openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# echo ::set-output name=WINDOWS_SHASUM::$(shasum -a 256 openethereum-windows-${{ env.RELEASE_VERSION }}.zip | awk '{print $1}')
# =======================================================================
# Upload artifacts
# This is required to share artifacts between different jobs
# =======================================================================
- name: Upload artifacts
uses: actions/upload-artifact@v2
with:
name: openethereum-linux-${{ env.RELEASE_VERSION }}.zip
path: openethereum-linux-${{ env.RELEASE_VERSION }}.zip
- name: Upload artifacts
uses: actions/upload-artifact@v2
with:
name: openethereum-macos-${{ env.RELEASE_VERSION }}.zip
path: openethereum-macos-${{ env.RELEASE_VERSION }}.zip
# - name: Upload artifacts
# uses: actions/upload-artifact@v2
# with:
# name: openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# path: openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# =======================================================================
# Upload artifacts to S3
# This is required by some software distribution systems which require
# artifacts to be downloadable, like Brew on MacOS.
# =======================================================================
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION }}
- name: Copy files to S3 with the AWS CLI
run: |
# Deploy zip artifacts to S3 bucket to a directory whose name is the tagged release version.
# Deploy macos binary artifact (if required, add more `aws s3 cp` commands to deploy specific OS versions)
aws s3 cp macos-artifacts/openethereum s3://${{ env.AWS_S3_ARTIFACTS_BUCKET }}/${{ env.RELEASE_VERSION }}/macos/ --region ${{ env.AWS_REGION }}
outputs:
linux-artifact: ${{ steps.create_zip_linux.outputs.LINUX_ARTIFACT }}
linux-shasum: ${{ steps.create_zip_linux.outputs.LINUX_SHASUM }}
macos-artifact: ${{ steps.create_zip_macos.outputs.MACOS_ARTIFACT }}
macos-shasum: ${{ steps.create_zip_macos.outputs.MACOS_SHASUM }}
# windows-artifact: ${{ steps.create_zip_windows.outputs.WINDOWS_ARTIFACT }}
# windows-shasum: ${{ steps.create_zip_windows.outputs.WINDOWS_SHASUM }}
draft-release:
name: Draft Release
needs: zip-artifacts-creator
runs-on: ubuntu-16.04
steps:
- name: Set env
run: echo "RELEASE_VERSION=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
# ==============================
# Download artifacts
# ==============================
- name: Download artifacts
uses: actions/download-artifact@v2
with:
name: openethereum-linux-${{ env.RELEASE_VERSION }}.zip
- name: Download artifacts
uses: actions/download-artifact@v2
with:
name: openethereum-macos-${{ env.RELEASE_VERSION }}.zip
# - name: Download artifacts
# uses: actions/download-artifact@v2
# with:
# name: openethereum-windows-${{ env.RELEASE_VERSION }}.zip
- name: Display structure of downloaded files
run: ls
# ==============================
# Create release draft
# ==============================
- name: Create Release Draft
id: create_release_draft
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # This token is provided by Actions, you do not need to create your own token
with:
tag_name: ${{ github.ref }}
release_name: OpenEthereum ${{ github.ref }}
body: |
This release contains <ADD_TEXT>
| System | Architecture | Binary | Sha256 Checksum |
|:---:|:---:|:---:|:---|
| <img src="https://gist.github.com/5chdn/1fce888fde1d773761f809b607757f76/raw/44c4f0fc63f1ea8e61a9513af5131ef65eaa6c75/apple.png" alt="Apple Icon by Pixel Perfect from https://www.flaticon.com/authors/pixel-perfect" style="width: 32px;"/> | x64 | [${{ needs.zip-artifacts-creator.outputs.macos-artifact }}](https://github.com/openethereum/openethereum/releases/download/${{ env.RELEASE_VERSION }}/${{ needs.zip-artifacts-creator.outputs.macos-artifact }}) | `${{ needs.zip-artifacts-creator.outputs.macos-shasum }}` |
| <img src="https://gist.github.com/5chdn/1fce888fde1d773761f809b607757f76/raw/44c4f0fc63f1ea8e61a9513af5131ef65eaa6c75/linux.png" alt="Linux Icon by Pixel Perfect from https://www.flaticon.com/authors/pixel-perfect" style="width: 32px;"/> | x64 | [${{ needs.zip-artifacts-creator.outputs.linux-artifact }}](https://github.com/openethereum/openethereum/releases/download/${{ env.RELEASE_VERSION }}/${{ needs.zip-artifacts-creator.outputs.linux-artifact }}) | `${{ needs.zip-artifacts-creator.outputs.linux-shasum }}` |
| <img src="https://gist.github.com/5chdn/1fce888fde1d773761f809b607757f76/raw/44c4f0fc63f1ea8e61a9513af5131ef65eaa6c75/windows.png" alt="Windows Icon by Pixel Perfect from https://www.flaticon.com/authors/pixel-perfect" style="width: 32px;"/> | x64 | [${{ needs.zip-artifacts-creator.outputs.windows-artifact }}](https://github.com/openethereum/openethereum/releases/download/${{ env.RELEASE_VERSION }}/${{ needs.zip-artifacts-creator.outputs.windows-artifact }}) | `${{ needs.zip-artifacts-creator.outputs.windows-shasum }}` |
| | | | |
| **System** | **Option** | - | **Resource** |
| <img src="https://gist.github.com/5chdn/1fce888fde1d773761f809b607757f76/raw/44c4f0fc63f1ea8e61a9513af5131ef65eaa6c75/settings.png" alt="Settings Icon by Pixel Perfect from https://www.flaticon.com/authors/pixel-perfect" style="width: 32px;"/> | Docker | - | [hub.docker.com/r/openethereum/openethereum](https://hub.docker.com/r/openethereum/openethereum) |
draft: true
prerelease: true
- name: Upload Release Asset - Linux
id: upload_release_asset_linux
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release_draft.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: ./openethereum-linux-${{ env.RELEASE_VERSION }}.zip
asset_name: openethereum-linux-${{ env.RELEASE_VERSION }}.zip
asset_content_type: application/zip
- name: Upload Release Asset - MacOS
id: upload_release_asset_macos
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release_draft.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: ./openethereum-macos-${{ env.RELEASE_VERSION }}.zip
asset_name: openethereum-macos-${{ env.RELEASE_VERSION }}.zip
asset_content_type: application/zip
# - name: Upload Release Asset - Windows
# id: upload_release_asset_windows
# uses: actions/upload-release-asset@v1
# env:
# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# with:
# upload_url: ${{ steps.create_release_draft.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
# asset_path: ./openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# asset_name: openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# asset_content_type: application/zip


@@ -1,50 +0,0 @@
name: Check
on:
pull_request:
push:
branches:
- main
- dev
jobs:
check:
name: Check
runs-on: ubuntu-16.04
steps:
- name: Checkout sources
uses: actions/checkout@main
with:
submodules: true
- name: Install 1.52.1 toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: 1.52.1
profile: minimal
override: true
- name: Run cargo check 1/3
uses: actions-rs/cargo@v1
with:
command: check
args: --locked --no-default-features --verbose
- name: Run cargo check 2/3
uses: actions-rs/cargo@v1
with:
command: check
args: --locked --manifest-path crates/runtime/io/Cargo.toml --no-default-features --verbose
- name: Run cargo check 3/3
uses: actions-rs/cargo@v1
with:
command: check
args: --locked --manifest-path crates/runtime/io/Cargo.toml --features "mio" --verbose
- name: Run cargo check evmbin
uses: actions-rs/cargo@v1
with:
command: check
args: --locked -p evmbin --verbose
- name: Run cargo check benches
uses: actions-rs/cargo@v1
with:
command: check
args: --locked --all --benches --verbose
- name: Run validate chainspecs
run: ./scripts/actions/validate-chainspecs.sh


@@ -1,29 +0,0 @@
name: Docker Image Nightly Release
# Run "nightly" build on each commit to "dev" branch.
on:
push:
branches:
- dev
jobs:
deploy-docker:
name: Build Release
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@master
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: 1.52.1
profile: minimal
override: true
- name: Deploy to docker hub
uses: elgohr/Publish-Docker-Github-Action@master
with:
name: openethereum/openethereum
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
dockerfile: scripts/docker/alpine/Dockerfile
tags: "nightly"


@@ -1,30 +0,0 @@
name: Docker Image Tag and Latest Release
on:
push:
tags:
- v*
jobs:
deploy-docker:
name: Build Release
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@master
- name: Set env
run: echo "RELEASE_VERSION=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: 1.52.1
profile: minimal
override: true
- name: Deploy to docker hub
uses: elgohr/Publish-Docker-Github-Action@master
with:
name: openethereum/openethereum
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
dockerfile: scripts/docker/alpine/Dockerfile
tags: "latest,${{ env.RELEASE_VERSION }}"


@@ -1,30 +0,0 @@
name: Docker Image Release
on:
push:
branches:
- main
tags:
- v*
jobs:
deploy-docker:
name: Build Release
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@master
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: 1.52.1
profile: minimal
override: true
- name: Deploy to docker hub
uses: elgohr/Publish-Docker-Github-Action@master
with:
name: openethereum/openethereum
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
dockerfile: scripts/docker/alpine/Dockerfile
tag_names: true


@@ -1,20 +0,0 @@
on: [push, pull_request]
name: rustfmt
jobs:
fmt:
name: Rustfmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: 1.52.1
override: true
- run: rustup component add rustfmt
- uses: actions-rs/cargo@v1
with:
command: fmt
args: --all -- --check --config merge_imports=true

.gitignore (vendored), 4 lines changed

@@ -38,10 +38,8 @@ node_modules
# Build artifacts
out/
parity-clib-examples/cpp/build/
.vscode
rls/
/parity.*
# cargo remote artifacts
remote-target

.gitlab-ci.yml (new file), 155 lines changed

@@ -0,0 +1,155 @@
stages:
- test
- build
- publish
- optional
image: parity/rust:gitlab-ci
variables:
CI_SERVER_NAME: "GitLab CI"
CARGO_HOME: "${CI_PROJECT_DIR}/.cargo"
CARGO_TARGET: x86_64-unknown-linux-gnu
.releaseable_branches: # list of git refs for building GitLab artifacts (think "pre-release binaries")
only: &releaseable_branches
- stable
- beta
- tags
- schedules
.collect_artifacts: &collect_artifacts
artifacts:
name: "${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}"
when: on_success
expire_in: 1 mos
paths:
- artifacts/
.determine_version: &determine_version
- VERSION="$(sed -r -n '1,/^version/s/^version = "([^"]+)".*$/\1/p' Cargo.toml)"
- DATE_STR="$(date +%Y%m%d)"
- ID_SHORT="$(echo ${CI_COMMIT_SHA} | cut -c 1-7)"
- test "${CI_COMMIT_REF_NAME}" = "nightly" && VERSION="${VERSION}-${ID_SHORT}-${DATE_STR}"
- export VERSION
- echo "Version = ${VERSION}"
test-linux:
stage: test
variables:
RUN_TESTS: all
script:
- scripts/gitlab/test-all.sh stable
tags:
- rust-stable
test-audit:
stage: test
script:
- scripts/gitlab/cargo-audit.sh
tags:
- rust-stable
allow_failure: true
build-linux:
stage: build
only: *releaseable_branches
variables:
CARGO_TARGET: x86_64-unknown-linux-gnu
script:
- scripts/gitlab/build-unix.sh
<<: *collect_artifacts
tags:
- rust-stable
build-darwin:
stage: build
only: *releaseable_branches
variables:
CARGO_TARGET: x86_64-apple-darwin
CC: gcc
CXX: g++
script:
- scripts/gitlab/build-unix.sh
tags:
- rust-osx
<<: *collect_artifacts
build-windows:
stage: build
only: *releaseable_branches
variables:
CARGO_TARGET: x86_64-pc-windows-msvc
script:
- sh scripts/gitlab/build-windows.sh
tags:
- rust-windows
<<: *collect_artifacts
publish-docker:
stage: publish
only: *releaseable_branches
cache: {}
dependencies:
- build-linux
tags:
- shell
script:
- scripts/gitlab/publish-docker.sh parity
publish-awss3:
stage: publish
only: *releaseable_branches
cache: {}
dependencies:
- build-linux
- build-darwin
- build-windows
before_script: *determine_version
script:
- scripts/gitlab/publish-awss3.sh
tags:
- shell
publish-docs:
stage: publish
only:
- tags
except:
- nightly
cache: {}
script:
- scripts/gitlab/publish-docs.sh
tags:
- shell
build-android:
stage: optional
image: parity/rust-android:gitlab-ci
variables:
CARGO_TARGET: armv7-linux-androideabi
script:
- scripts/gitlab/build-unix.sh
tags:
- rust-arm
allow_failure: true
test-beta:
stage: optional
variables:
RUN_TESTS: cargo
script:
- scripts/gitlab/test-all.sh beta
tags:
- rust-beta
allow_failure: true
test-nightly:
stage: optional
variables:
RUN_TESTS: all
script:
- scripts/gitlab/test-all.sh nightly
tags:
- rust-nightly
allow_failure: true

.gitmodules (vendored), 8 lines changed

@@ -1,3 +1,7 @@
[submodule "crates/ethcore/res/json_tests"]
path = crates/ethcore/res/json_tests
[submodule "ethcore/res/ethereum/tests"]
path = ethcore/res/ethereum/tests
url = https://github.com/ethereum/tests.git
branch = develop
[submodule "ethcore/res/wasm-tests"]
path = ethcore/res/wasm-tests
url = https://github.com/paritytech/wasm-tests


@@ -1,208 +1,379 @@
## OpenEthereum v3.3.3
## Parity-Ethereum [v2.0.1](https://github.com/paritytech/parity-ethereum/releases/tag/v2.0.1) (2018-07-27)
Enhancements:
* Implement eip-3607 (#593)
Parity-Ethereum 2.0.1-beta is a bug-fix release to improve performance and stability.
Bug fixes:
* Add type field for legacy transactions in RPC calls (#580)
* Makes eth_mining return false if the node is not allowed to seal (#581)
* Made nodes data concatenate as RLP sequences instead of bytes (#598)
Note: authorities in PoA networks based on the Aura engine should upgrade their nodes to 1.11.8-stable or 2.0.1-beta, as this release includes a critical fix.
## OpenEthereum v3.3.2
The full list of included changes:
Enhancements:
* London hardfork block: Sokol (24114400)
- Backports to 2.0.1-beta ([#9145](https://github.com/paritytech/parity-ethereum/pull/9145))
- Parity-version: bump beta to 2.0.1
- Ci: update version strings for snaps ([#9160](https://github.com/paritytech/parity-ethereum/pull/9160))
- Be more graceful on Aura difficulty validation ([#9164](https://github.com/paritytech/parity-ethereum/pull/9164))
- Be more graceful on Aura difficulty validation
- Test: rejects_step_backwards
- Test: proposer_switching
- Test: rejects_future_block
- Test: reports_skipped
- Test: verify_empty_seal_steps
- Remove node-health ([#9119](https://github.com/paritytech/parity-ethereum/pull/9119))
- Remove node-health
- Remove ntp_servers
- Add --ntp-servers as legacy instead of removing it
- Add --ntp-servers to deprecated args
- Remove unused stuff
- Remove _legacy_ntp_servers
- Parity: fix UserDefaults json parser ([#9189](https://github.com/paritytech/parity-ethereum/pull/9189))
- Parity: fix UserDefaults json parser
- Parity: use serde_derive for UserDefaults
- Parity: support deserialization of old UserDefault json format
- Parity: make UserDefaults serde backwards compatible
- Parity: tabify indentation in UserDefaults
- Fix bugfix hard fork logic ([#9138](https://github.com/paritytech/parity-ethereum/pull/9138))
- Fix bugfix hard fork logic
- Remove dustProtectionTransition from bugfix category
- Eip-168 is not enabled by default
- Remove unnecessary 'static
- Disable per-sender limit for local transactions. ([#9148](https://github.com/paritytech/parity-ethereum/pull/9148))
- Disable per-sender limit for local transactions.
- Add a missing new line.
- Rpc: fix is_major_importing sync state condition ([#9112](https://github.com/paritytech/parity-ethereum/pull/9112))
- Rpc: fix is_major_importing sync state condition
- Rpc: fix informant printout when waiting for peers
- Fix verification in ethcore-sync collect_blocks ([#9135](https://github.com/paritytech/parity-ethereum/pull/9135))
- Docker: update hub dockerfile ([#9173](https://github.com/paritytech/parity-ethereum/pull/9173))
- Update Dockerfile for hub
- Update to Ubuntu Xenial 16.04
- Fix cmake version
- Docker: fix tab indentation in hub dockerfile
- Rpc: fix broken merge
- Rpc: remove node_health leftover from merge
- Rpc: remove dapps leftover from merge
Bug fixes:
* Fix for maxPriorityFeePerGas overflow
## Parity-Ethereum [v2.0.0](https://github.com/paritytech/parity-ethereum/releases/tag/v2.0.0) "Ethereum" (2018-07-18)
## OpenEthereum v3.3.1
This is the Parity-Ethereum//v2.0.0-beta release, code-named "Ethereum", **YOLO!**
Enhancements:
* Add eth_maxPriorityFeePerGas implementation (#570)
* Add a bootnode for Kovan
Please note, Parity-Ethereum//v2.0.0 comes with some breaking changes that might be interrupting your usual workflows. Please mind them before upgrading:
Bug fixes:
* Fix for modexp overflow in debug mode (#578)
- The Parity client is now called _Parity-Ethereum_ to distinguish it from other software we provide, such as [_Parity-Bitcoin_](https://github.com/paritytech/parity-bitcoin/) and [_Parity-Polkadot_](https://github.com/paritytech/polkadot) ([#9052](https://github.com/paritytech/parity-ethereum/pull/9052)).
- The public node and the user interface (a.k.a. _"Parity Wallet"_) are completely removed from the Parity-Ethereum//v2.0.0 client ([#8758](https://github.com/paritytech/parity-ethereum/pull/8758), [#8783](https://github.com/paritytech/parity-ethereum/pull/8783), [#8641](https://github.com/paritytech/parity-ethereum/pull/8641)). Users interested running a Parity Wallet, check out [the stand-alone UI application](https://github.com/Parity-JS/shell/releases).
- The DApps subsystem was completely removed from the client ([#9017](https://github.com/paritytech/parity-ethereum/pull/9017), [#9107](https://github.com/paritytech/parity-ethereum/pull/9107)). Again, use the standalone wallet if you wish to continue working with them.
- Windows and MacOS versions are not available as installer anymore and the system trays were removed ([#8778](https://github.com/paritytech/parity-ethereum/pull/8778)). If you desire to run Parity-Ethereum on Windows or MacOS, you still can get the binaries from our mirrors. Furthermore, MacOS users are encouraged [to use our homebrew tap](https://github.com/paritytech/homebrew-paritytech/).
- Linux versions are not available as deb-/rpm-packages anymore ([#8887](https://github.com/paritytech/parity-ethereum/pull/8887)). Communities are encouraged to provide their own packages or maintain their own repositories, such as [Arch Linux does](https://www.archlinux.org/packages/community/x86_64/parity/) for instance.
- MD5-checksums are completely replaced by SHA256-checksums ([#8884](https://github.com/paritytech/parity-ethereum/pull/8884)). This is also reflected on our homepage by now.
- Deprecated, removed, or replaced CLI-options are hidden from client `--help` to further discourage their usage ([#8967](https://github.com/paritytech/parity-ethereum/pull/8967)).
## OpenEthereum v3.3.0
Additional noteworthy changes to the client:
Enhancements:
* Add `validateServiceTransactionsTransition` spec option to be able to enable additional checking of zero gas price transactions by block verifier
- Tracing of precompiled contracts when the transfer value is not zero ([#8486](https://github.com/paritytech/parity-ethereum/pull/8486))
- _Parity-Ethereum_ as a library now provides APIs for running full and light nodes and a C interface ([#8412](https://github.com/paritytech/parity-ethereum/pull/8412)). Shared crates are now available in [_Parity-Common_](https://github.com/paritytech/parity-common) ([#9083](https://github.com/paritytech/parity-ethereum/pull/9083)).
- The Morden database and keys are now moved to a `./Morden` subdirectory instead of `./test` which is by default used by Ropsten ([#8621](https://github.com/paritytech/parity-ethereum/pull/8621)).
- Adding support for having an on-chain contract calculating the block rewards ([#8419](https://github.com/paritytech/parity-ethereum/pull/8419)).
- Enforcing warp-only synchronization with `--warp-barrier [blocknumber]` flag ([#8228](https://github.com/paritytech/parity-ethereum/pull/8228)).
- Adding a fork-choice and meta-data framework suitable for implementing Casper ([#8401](https://github.com/paritytech/parity-ethereum/pull/8401)).
- Returning an error if RLP-size of a transaction exceeds a 300kB limit ([#8473](https://github.com/paritytech/parity-ethereum/pull/8473)).
- Warp-sync is now resumable by keeping the downloaded chunks between client restarts. Also, it seeds downloaded snapshots for other nodes ([#8544](https://github.com/paritytech/parity-ethereum/pull/8544)).
- The developer chain `--chain dev` now contains Byzantium features, this breaks existing developer chains ([#8717](https://github.com/paritytech/parity-ethereum/pull/8717)).
- The EIP150, EIP160 and EIP161 forks are now to be specified in common params section of a chain-spec file instead of the Ethash params to enable these features on non-proof-of-work chains ([#8614](https://github.com/paritytech/parity-ethereum/pull/8614)). Please update your chain specs.
- Allowing to disable local-by-default for transactions with new configurations ([#8882](https://github.com/paritytech/parity-ethereum/pull/8882)).
- Never drop local transactions from different senders ([#9002](https://github.com/paritytech/parity-ethereum/pull/9002)).
- Optimize pending transactions filter and fix ethstats reporting of pending transactions ([#9026](https://github.com/paritytech/parity-ethereum/pull/9026)).
- Add separate database directory for light client allowing to run full and light nodes at the same time ([#9064](https://github.com/paritytech/parity-ethereum/pull/9064)).
## OpenEthereum v3.3.0-rc.15
If you are upgrading directly from versions 1.10.9 or earlier, please note important changes to our transaction-queue implementation, namely:
* Revert eip1559BaseFeeMinValue activation on xDai at London hardfork block
- The pool now limits transactions per-sender (see `--tx-queue-per-sender`); local transactions also have to obey that limit. Consider increasing the limit via CLI-flag when running benchmarks or sending a lot of transactions at once.
- In case the pool is full, transactions received over the network, but originating from accounts that you have private keys for might not get accepted to the pool any more with higher priority. Consider running with larger pool size or submitting the transactions directly on the node via `eth_sendRawTransaction`.
## OpenEthereum v3.3.0-rc.14
The full list of included changes:
Enhancements:
* Add eip1559BaseFeeMinValue and eip1559BaseFeeMinValueTransition spec options
* Activate eip1559BaseFeeMinValue on xDai at London hardfork block (19040000), set it to 20 GWei
* Activate eip1559BaseFeeMinValue on POA Core at block 24199500 (November 8, 2021), set it to 10 GWei
* Delay difficulty bomb to June 2022 for Ethereum Mainnet (EIP-4345)
- Backports to 2.0.0-beta ([#9094](https://github.com/paritytech/parity-ethereum/pull/9094))
- Parity-version: betalize 2.0
- Multiple improvements to discovery ping handling ([#8771](https://github.com/paritytech/parity-ethereum/pull/8771))
- Discovery: Only add nodes to routing table after receiving pong.
- Discovery: Refactor packet creation into its own function.
- Discovery: Additional testing for new add_node behavior.
- Discovery: Track expiration of pings to not-yet-in-bucket nodes.
- Discovery: Verify echo hash on pong packets.
- Discovery: Track timeouts on FIND_NODE requests.
- Discovery: Retry failed pings with exponential backoff.
- Use a slice instead of Vec for request_backoff.
- Add separate database directory for light client ([#9064](https://github.com/paritytech/parity-ethereum/pull/9064))
- Add separate default DB path for light client ([#8927](https://github.com/paritytech/parity-ethereum/pull/8927))
- Improve readability
- Revert "Replace `std::env::home_dir` with `dirs::home_dir` ([#9077](https://github.com/paritytech/parity-ethereum/pull/9077))" ([#9097](https://github.com/paritytech/parity-ethereum/pull/9097))
- Revert "Replace `std::env::home_dir` with `dirs::home_dir` ([#9077](https://github.com/paritytech/parity-ethereum/pull/9077))"
- This reverts commit 7e77932.
- Restore some of the changes
- Update parity-common
- Offload cull to IoWorker. ([#9099](https://github.com/paritytech/parity-ethereum/pull/9099))
- Fix work-notify. ([#9104](https://github.com/paritytech/parity-ethereum/pull/9104))
- Update hidapi, fixes [#7542](https://github.com/paritytech/parity-ethereum/issues/7542) ([#9108](https://github.com/paritytech/parity-ethereum/pull/9108))
- Docker: add cmake dependency ([#9111](https://github.com/paritytech/parity-ethereum/pull/9111))
- Update light client hardcoded headers ([#9098](https://github.com/paritytech/parity-ethereum/pull/9098))
- Insert Kovan hardcoded headers until block 7690241
- Insert Ropsten hardcoded headers until block 3612673
- Insert Mainnet hardcoded headers until block 5941249
- Make sure to produce full blocks. ([#9115](https://github.com/paritytech/parity-ethereum/pull/9115))
- Insert ETC (classic) hardcoded headers until block 6170625 ([#9121](https://github.com/paritytech/parity-ethereum/pull/9121))
- Fix verification in ethcore-sync collect_blocks ([#9135](https://github.com/paritytech/parity-ethereum/pull/9135))
- Completely remove all dapps struct from rpc ([#9107](https://github.com/paritytech/parity-ethereum/pull/9107))
- Completely remove all dapps struct from rpc
- Remove unused pub use
- `evm bench` fix broken dependencies ([#9134](https://github.com/paritytech/parity-ethereum/pull/9134))
- `evm bench` use valid dependencies
- Benchmarks of the `evm` used stale versions of a couple of crates that this commit fixes!
- Fix warnings
- Update snapcraft.yaml ([#9132](https://github.com/paritytech/parity-ethereum/pull/9132))
- Parity Ethereum 2.0.0 ([#9052](https://github.com/paritytech/parity-ethereum/pull/9052))
- Don't fetch snapshot chunks at random ([#9088](https://github.com/paritytech/parity-ethereum/pull/9088))
- Remove the dapps system ([#9017](https://github.com/paritytech/parity-ethereum/pull/9017))
- Fix nightly warnings ([#9080](https://github.com/paritytech/parity-ethereum/pull/9080))
- Db: remove wal disabling / fast-and-loose option. ([#8963](https://github.com/paritytech/parity-ethereum/pull/8963))
- Transaction hashes missing in trace_replayBlockTransactions method result [#8725](https://github.com/paritytech/parity-ethereum/issues/8725) ([#8883](https://github.com/paritytech/parity-ethereum/pull/8883))
- Delete crates from parity-ethereum and fetch them from parity-common instead ([#9083](https://github.com/paritytech/parity-ethereum/pull/9083))
- Updater verification ([#8787](https://github.com/paritytech/parity-ethereum/pull/8787))
- Phrasing, precisions and typos in CLI help ([#9060](https://github.com/paritytech/parity-ethereum/pull/9060))
- Some work towards iOS build ([#9045](https://github.com/paritytech/parity-ethereum/pull/9045))
- Clean up deprecated options and add CHECK macro ([#9036](https://github.com/paritytech/parity-ethereum/pull/9036))
- Replace `std::env::home_dir` with `dirs::home_dir` ([#9077](https://github.com/paritytech/parity-ethereum/pull/9077))
- Fix warning in secret-store test ([#9074](https://github.com/paritytech/parity-ethereum/pull/9074))
- Seedhashcompute remove needless `new` impl ([#9063](https://github.com/paritytech/parity-ethereum/pull/9063))
- Remove trait bounds from several structs ([#9055](https://github.com/paritytech/parity-ethereum/pull/9055))
- Docs: add changelog for 1.10.9 stable and 1.11.6 beta ([#9069](https://github.com/paritytech/parity-ethereum/pull/9069))
- Enable test in `miner/pool/test` ([#9072](https://github.com/paritytech/parity-ethereum/pull/9072))
- Fetch: replace futures-timer with tokio-timer ([#9066](https://github.com/paritytech/parity-ethereum/pull/9066))
- Remove util-error ([#9054](https://github.com/paritytech/parity-ethereum/pull/9054))
- Fixes for misbehavior reporting in AuthorityRound ([#8998](https://github.com/paritytech/parity-ethereum/pull/8998))
- A last bunch of txqueue performance optimizations ([#9024](https://github.com/paritytech/parity-ethereum/pull/9024))
- Reduce number of constraints for triedb types ([#9043](https://github.com/paritytech/parity-ethereum/pull/9043))
- Bump fs-swap to 0.2.3 so it is compatible with osx 10.11 again ([#9050](https://github.com/paritytech/parity-ethereum/pull/9050))
- Recursive test ([#9042](https://github.com/paritytech/parity-ethereum/pull/9042))
- Introduce more optional features in ethcore ([#9020](https://github.com/paritytech/parity-ethereum/pull/9020))
- Update ETSC bootnodes ([#9038](https://github.com/paritytech/parity-ethereum/pull/9038))
- Optimize pending transactions filter ([#9026](https://github.com/paritytech/parity-ethereum/pull/9026))
- Eip160/eip161 spec: u64 -> BlockNumber ([#9044](https://github.com/paritytech/parity-ethereum/pull/9044))
- Move the C/C++ example to another directory ([#9032](https://github.com/paritytech/parity-ethereum/pull/9032))
- Bump parking_lot to 0.6 ([#9013](https://github.com/paritytech/parity-ethereum/pull/9013))
- Never drop local transactions from different senders. ([#9002](https://github.com/paritytech/parity-ethereum/pull/9002))
- Precise HTTP or WebSockets for JSON-RPC options ([#9027](https://github.com/paritytech/parity-ethereum/pull/9027))
- Recently rejected cache for transaction queue ([#9005](https://github.com/paritytech/parity-ethereum/pull/9005))
- Make HashDB generic ([#8739](https://github.com/paritytech/parity-ethereum/pull/8739))
- Only return error log for rustls ([#9025](https://github.com/paritytech/parity-ethereum/pull/9025))
- Update Changelogs for 1.10.8 and 1.11.5 ([#9012](https://github.com/paritytech/parity-ethereum/pull/9012))
- Attempt to graceful shutdown in case of panics ([#8999](https://github.com/paritytech/parity-ethereum/pull/8999))
- Simplify kvdb error types ([#8924](https://github.com/paritytech/parity-ethereum/pull/8924))
- Add option for user to set max size limit for RPC requests ([#9010](https://github.com/paritytech/parity-ethereum/pull/9010))
- Bump ntp to 0.5.0 ([#9009](https://github.com/paritytech/parity-ethereum/pull/9009))
- Removed duplicate dependency ([#9021](https://github.com/paritytech/parity-ethereum/pull/9021))
- Minimal effective gas price in the queue ([#8934](https://github.com/paritytech/parity-ethereum/pull/8934))
- Parity: fix db path when migrating to blooms db ([#8975](https://github.com/paritytech/parity-ethereum/pull/8975))
- Preserve the current abort behavior ([#8995](https://github.com/paritytech/parity-ethereum/pull/8995))
- Improve should_replace on NonceAndGasPrice ([#8980](https://github.com/paritytech/parity-ethereum/pull/8980))
- Tentative fix for missing dependency error ([#8973](https://github.com/paritytech/parity-ethereum/pull/8973))
- Refactor evm Instruction to be a c-like enum ([#8914](https://github.com/paritytech/parity-ethereum/pull/8914))
- Fix deadlock in blockchain. ([#8977](https://github.com/paritytech/parity-ethereum/pull/8977))
- Snap: downgrade rust to revision 1.26.2, ref snapcraft/+bug/1778530 ([#8984](https://github.com/paritytech/parity-ethereum/pull/8984))
- Use local parity-dapps-glue instead of crate published at crates.io ([#8983](https://github.com/paritytech/parity-ethereum/pull/8983))
- Parity: omit redundant last imported block number in light sync informant ([#8962](https://github.com/paritytech/parity-ethereum/pull/8962))
- Disable hardware-wallets on platforms that don't support `libusb` ([#8464](https://github.com/paritytech/parity-ethereum/pull/8464))
- Bump error-chain and quick_error versions ([#8972](https://github.com/paritytech/parity-ethereum/pull/8972))
- Evm benchmark utilities ([#8944](https://github.com/paritytech/parity-ethereum/pull/8944))
- Parity: hide legacy options from cli --help ([#8967](https://github.com/paritytech/parity-ethereum/pull/8967))
- Scripts: fix docker build tag on latest using master ([#8952](https://github.com/paritytech/parity-ethereum/pull/8952))
- Add type for passwords. ([#8920](https://github.com/paritytech/parity-ethereum/pull/8920))
- Deps: bump fs-swap ([#8953](https://github.com/paritytech/parity-ethereum/pull/8953))
- Eliminate some more `transmute()` ([#8879](https://github.com/paritytech/parity-ethereum/pull/8879))
- Restrict vault.json permission to owner and use a random suffix for the temp vault.json file ([#8932](https://github.com/paritytech/parity-ethereum/pull/8932))
- Print SS.self_public when starting SS node ([#8949](https://github.com/paritytech/parity-ethereum/pull/8949))
- Scripts: minor improvements ([#8930](https://github.com/paritytech/parity-ethereum/pull/8930))
- Rpc: cap gas limit of local calls ([#8943](https://github.com/paritytech/parity-ethereum/pull/8943))
- Docs: update changelogs ([#8931](https://github.com/paritytech/parity-ethereum/pull/8931))
- Ethcore: fix compilation when using slow-blocks or evm-debug features ([#8936](https://github.com/paritytech/parity-ethereum/pull/8936))
- Fixed blooms dir creation ([#8941](https://github.com/paritytech/parity-ethereum/pull/8941))
- Update hardcoded headers ([#8925](https://github.com/paritytech/parity-ethereum/pull/8925))
- New blooms database ([#8712](https://github.com/paritytech/parity-ethereum/pull/8712))
- Ethstore: retry deduplication of wallet file names until success ([#8910](https://github.com/paritytech/parity-ethereum/pull/8910))
- Update ropsten.json ([#8926](https://github.com/paritytech/parity-ethereum/pull/8926))
- Include node identity in the P2P advertised client version. ([#8830](https://github.com/paritytech/parity-ethereum/pull/8830))
- Allow disabling local-by-default for transactions with new config entry ([#8882](https://github.com/paritytech/parity-ethereum/pull/8882))
- Allow Poll Lifetime to be configured via CLI ([#8885](https://github.com/paritytech/parity-ethereum/pull/8885))
- Cleanup nibbleslice ([#8915](https://github.com/paritytech/parity-ethereum/pull/8915))
- Hardware-wallets `Clean up things I missed in the latest PR` ([#8890](https://github.com/paritytech/parity-ethereum/pull/8890))
- Remove debian/.deb and centos/.rpm packaging scripts ([#8887](https://github.com/paritytech/parity-ethereum/pull/8887))
- Remove a weird emoji in new_social docs ([#8913](https://github.com/paritytech/parity-ethereum/pull/8913))
- Minor fix in chain supplier and light provider ([#8906](https://github.com/paritytech/parity-ethereum/pull/8906))
- Block 0 is valid in queries ([#8891](https://github.com/paritytech/parity-ethereum/pull/8891))
- Fixed osx permissions ([#8901](https://github.com/paritytech/parity-ethereum/pull/8901))
- Atomic create new files with permissions to owner in ethstore ([#8896](https://github.com/paritytech/parity-ethereum/pull/8896))
- Add ETC Cooperative-run load balanced parity node ([#8892](https://github.com/paritytech/parity-ethereum/pull/8892))
- Add support for --chain tobalaba ([#8870](https://github.com/paritytech/parity-ethereum/pull/8870))
- Fix some warns on nightly ([#8889](https://github.com/paritytech/parity-ethereum/pull/8889))
- Add new ovh bootnodes and fix port for foundation bootnode 3.2 ([#8886](https://github.com/paritytech/parity-ethereum/pull/8886))
- Secretstore: service pack 1 ([#8435](https://github.com/paritytech/parity-ethereum/pull/8435))
- Handle removed logs in filter changes and add geth compatibility field ([#8796](https://github.com/paritytech/parity-ethereum/pull/8796))
- Fixed ipc leak, closes [#8774](https://github.com/paritytech/parity-ethereum/issues/8774) ([#8876](https://github.com/paritytech/parity-ethereum/pull/8876))
- Scripts: remove md5 checksums ([#8884](https://github.com/paritytech/parity-ethereum/pull/8884))
- Hardware_wallet/Ledger `Sign messages` + some refactoring ([#8868](https://github.com/paritytech/parity-ethereum/pull/8868))
- Check whether we need resealing in miner and unwrap has_account in account_provider ([#8853](https://github.com/paritytech/parity-ethereum/pull/8853))
- Docker: Fix alpine build ([#8878](https://github.com/paritytech/parity-ethereum/pull/8878))
- Remove mac os installers etc ([#8875](https://github.com/paritytech/parity-ethereum/pull/8875))
- Readme.md: update the list of dependencies ([#8864](https://github.com/paritytech/parity-ethereum/pull/8864))
- Fix concurrent access to signer queue ([#8854](https://github.com/paritytech/parity-ethereum/pull/8854))
- Tx permission contract improvement ([#8400](https://github.com/paritytech/parity-ethereum/pull/8400))
- Limit the number of transactions in pending set ([#8777](https://github.com/paritytech/parity-ethereum/pull/8777))
- Use sealing.enabled to emit eth_mining information ([#8844](https://github.com/paritytech/parity-ethereum/pull/8844))
- Don't allocate in expect_valid_rlp unless necessary ([#8867](https://github.com/paritytech/parity-ethereum/pull/8867))
- Fix Cli Return Code on --help for ethkey, ethstore & whisper ([#8863](https://github.com/paritytech/parity-ethereum/pull/8863))
- Fix subcrate test compile ([#8862](https://github.com/paritytech/parity-ethereum/pull/8862))
- Network-devp2p: downgrade logging to debug, add target ([#8784](https://github.com/paritytech/parity-ethereum/pull/8784))
- Clearing up a comment about the prefix for signing ([#8828](https://github.com/paritytech/parity-ethereum/pull/8828))
- Disable parallel verification and skip verifying already imported txs. ([#8834](https://github.com/paritytech/parity-ethereum/pull/8834))
- Devp2p: Move UDP socket handling from Discovery to Host. ([#8790](https://github.com/paritytech/parity-ethereum/pull/8790))
- Fixed AuthorityRound deadlock on shutdown, closes [#8088](https://github.com/paritytech/parity-ethereum/issues/8088) ([#8803](https://github.com/paritytech/parity-ethereum/pull/8803))
- Specify critical release flag per network ([#8821](https://github.com/paritytech/parity-ethereum/pull/8821))
- Fix `deadlock_detection` feature branch compilation ([#8824](https://github.com/paritytech/parity-ethereum/pull/8824))
- Use system allocator when profiling memory ([#8831](https://github.com/paritytech/parity-ethereum/pull/8831))
- Added from and to to Receipt ([#8756](https://github.com/paritytech/parity-ethereum/pull/8756))
- Ethcore: fix ancient block error msg handling ([#8832](https://github.com/paritytech/parity-ethereum/pull/8832))
- Ci: Fix docker tags ([#8822](https://github.com/paritytech/parity-ethereum/pull/8822))
- Parity: fix indentation in sync logging ([#8794](https://github.com/paritytech/parity-ethereum/pull/8794))
- Removed obsolete IpcMode enum ([#8819](https://github.com/paritytech/parity-ethereum/pull/8819))
- Remove UI related settings from CLI ([#8783](https://github.com/paritytech/parity-ethereum/pull/8783))
- Remove windows tray and installer ([#8778](https://github.com/paritytech/parity-ethereum/pull/8778))
- Docs: add changelogs for 1.10.6 and 1.11.3 ([#8810](https://github.com/paritytech/parity-ethereum/pull/8810))
- Fix ancient blocks queue deadlock ([#8751](https://github.com/paritytech/parity-ethereum/pull/8751))
- Disallow unsigned transactions in case EIP-86 is disabled ([#8802](https://github.com/paritytech/parity-ethereum/pull/8802))
- Fix evmbin compilation ([#8795](https://github.com/paritytech/parity-ethereum/pull/8795))
- Have space between feature cfg flag ([#8791](https://github.com/paritytech/parity-ethereum/pull/8791))
- Rpc: fix address formatting in TransactionRequest Display ([#8786](https://github.com/paritytech/parity-ethereum/pull/8786))
- Conditionally compile ethcore public test helpers ([#8743](https://github.com/paritytech/parity-ethereum/pull/8743))
- Remove Result wrapper from AccountProvider in RPC impls ([#8763](https://github.com/paritytech/parity-ethereum/pull/8763))
- Update `license header` and `scripts` ([#8666](https://github.com/paritytech/parity-ethereum/pull/8666))
- Remove HostTrait altogether ([#8681](https://github.com/paritytech/parity-ethereum/pull/8681))
- Ethcore-sync: fix connection to peers behind chain fork block ([#8710](https://github.com/paritytech/parity-ethereum/pull/8710))
- Remove public node settings from cli ([#8758](https://github.com/paritytech/parity-ethereum/pull/8758))
- Custom Error Messages on ENFILE and EMFILE IO Errors ([#8744](https://github.com/paritytech/parity-ethereum/pull/8744))
- Ci: Fixes for Android Pipeline ([#8745](https://github.com/paritytech/parity-ethereum/pull/8745))
- Remove NetworkService::config() ([#8653](https://github.com/paritytech/parity-ethereum/pull/8653))
- Fix XOR distance calculation in discovery Kademlia impl ([#8589](https://github.com/paritytech/parity-ethereum/pull/8589))
- Print warnings when fetching pending blocks ([#8711](https://github.com/paritytech/parity-ethereum/pull/8711))
- Fix PoW blockchains sealing notifications in chain_new_blocks ([#8656](https://github.com/paritytech/parity-ethereum/pull/8656))
- Remove -k/--insecure option from curl installer ([#8719](https://github.com/paritytech/parity-ethereum/pull/8719))
- Ease tiny-keccak version requirements (1.4.1 -> 1.4) ([#8726](https://github.com/paritytech/parity-ethereum/pull/8726))
- Bump tinykeccak to 1.4 ([#8728](https://github.com/paritytech/parity-ethereum/pull/8728))
- Remove a couple of unnecessary `transmute()` ([#8736](https://github.com/paritytech/parity-ethereum/pull/8736))
- Fix some nits using clippy ([#8731](https://github.com/paritytech/parity-ethereum/pull/8731))
- Add 'interface' option to cli ([#8699](https://github.com/paritytech/parity-ethereum/pull/8699))
- Remove unused function new_pow_test_spec ([#8735](https://github.com/paritytech/parity-ethereum/pull/8735))
- Add a deadlock detection thread ([#8727](https://github.com/paritytech/parity-ethereum/pull/8727))
- Fix local transactions policy. ([#8691](https://github.com/paritytech/parity-ethereum/pull/8691))
- Shutdown the Snapshot Service early ([#8658](https://github.com/paritytech/parity-ethereum/pull/8658))
- Network-devp2p: handle UselessPeer disconnect ([#8686](https://github.com/paritytech/parity-ethereum/pull/8686))
- Fix compilation error on nightly rust ([#8707](https://github.com/paritytech/parity-ethereum/pull/8707))
- Add a test for decoding corrupt data ([#8713](https://github.com/paritytech/parity-ethereum/pull/8713))
- Update dev chain ([#8717](https://github.com/paritytech/parity-ethereum/pull/8717))
- Remove unused imports ([#8722](https://github.com/paritytech/parity-ethereum/pull/8722))
- Implement recursive Debug for Nodes in patricia_trie::TrieDB ([#8697](https://github.com/paritytech/parity-ethereum/pull/8697))
- Parity: trim whitespace when parsing duration strings ([#8692](https://github.com/paritytech/parity-ethereum/pull/8692))
- Set the request index to that of the current request ([#8683](https://github.com/paritytech/parity-ethereum/pull/8683))
- Remove empty file ([#8705](https://github.com/paritytech/parity-ethereum/pull/8705))
- Update mod.rs ([#8695](https://github.com/paritytech/parity-ethereum/pull/8695))
- Use impl Future in the light client RPC helpers ([#8628](https://github.com/paritytech/parity-ethereum/pull/8628))
- Fix cli signer ([#8682](https://github.com/paritytech/parity-ethereum/pull/8682))
- Allow making direct RPC queries from the C API ([#8588](https://github.com/paritytech/parity-ethereum/pull/8588))
- Remove the error when stopping the network ([#8671](https://github.com/paritytech/parity-ethereum/pull/8671))
- Move connection_filter to the network crate ([#8674](https://github.com/paritytech/parity-ethereum/pull/8674))
- Remove HostInfo::client_version() and secret() ([#8677](https://github.com/paritytech/parity-ethereum/pull/8677))
- Refactor EIP150, EIP160 and EIP161 forks to be specified in CommonParams ([#8614](https://github.com/paritytech/parity-ethereum/pull/8614))
- Parity: improve cli help and logging ([#8665](https://github.com/paritytech/parity-ethereum/pull/8665))
- Updated tiny-keccak to 1.4.2 ([#8669](https://github.com/paritytech/parity-ethereum/pull/8669))
- Remove the Keccak C library and use the pure Rust impl ([#8657](https://github.com/paritytech/parity-ethereum/pull/8657))
- Remove HostInfo::next_nonce ([#8644](https://github.com/paritytech/parity-ethereum/pull/8644))
- Fix not downloading old blocks ([#8642](https://github.com/paritytech/parity-ethereum/pull/8642))
- Resumable warp-sync / Seed downloaded snapshots ([#8544](https://github.com/paritytech/parity-ethereum/pull/8544))
- Don't open Browser post-install on Mac ([#8641](https://github.com/paritytech/parity-ethereum/pull/8641))
- Changelog for 1.10.4-stable and 1.11.1-beta ([#8637](https://github.com/paritytech/parity-ethereum/pull/8637))
- Typo ([#8640](https://github.com/paritytech/parity-ethereum/pull/8640))
- Fork choice and metadata framework for Engine ([#8401](https://github.com/paritytech/parity-ethereum/pull/8401))
- Check that the Android build doesn't dep on c++_shared ([#8538](https://github.com/paritytech/parity-ethereum/pull/8538))
- Remove NetworkContext::io_channel() ([#8625](https://github.com/paritytech/parity-ethereum/pull/8625))
- Fix light sync with initial validator-set contract ([#8528](https://github.com/paritytech/parity-ethereum/pull/8528))
- Store morden db and keys in "path/to/parity/data/Morden" (ropsten uses "test", like before) ([#8621](https://github.com/paritytech/parity-ethereum/pull/8621))
- `main.rs` typo ([#8629](https://github.com/paritytech/parity-ethereum/pull/8629))
- Fix BlockReward contract "arithmetic operation overflow" ([#8611](https://github.com/paritytech/parity-ethereum/pull/8611))
- Gitlab test script fixes ([#8573](https://github.com/paritytech/parity-ethereum/pull/8573))
- Remove manually added text to the errors ([#8595](https://github.com/paritytech/parity-ethereum/pull/8595))
- Fix account list double 0x display ([#8596](https://github.com/paritytech/parity-ethereum/pull/8596))
- Typo: wrong indentation in kovan config ([#8610](https://github.com/paritytech/parity-ethereum/pull/8610))
- Fix packet count when talking with PAR2 peers ([#8555](https://github.com/paritytech/parity-ethereum/pull/8555))
- Use full qualified syntax for itertools::Itertools::flatten ([#8606](https://github.com/paritytech/parity-ethereum/pull/8606))
- 2 tiny modifications on snapshot ([#8601](https://github.com/paritytech/parity-ethereum/pull/8601))
- Fix the mio test again ([#8602](https://github.com/paritytech/parity-ethereum/pull/8602))
- Remove inject.js server-side injection for dapps ([#8539](https://github.com/paritytech/parity-ethereum/pull/8539))
- Block_header can fail so return Result ([#8581](https://github.com/paritytech/parity-ethereum/pull/8581))
- Block::decode() returns Result ([#8586](https://github.com/paritytech/parity-ethereum/pull/8586))
- Fix compiler warning ([#8590](https://github.com/paritytech/parity-ethereum/pull/8590))
- Fix Parity UI link ([#8600](https://github.com/paritytech/parity-ethereum/pull/8600))
- Make mio optional in ethcore-io ([#8537](https://github.com/paritytech/parity-ethereum/pull/8537))
- Attempt to fix intermittent test failures ([#8584](https://github.com/paritytech/parity-ethereum/pull/8584))
- Changelog and Readme ([#8591](https://github.com/paritytech/parity-ethereum/pull/8591))
- Added Dockerfile for alpine linux by @andresilva, closes [#3565](https://github.com/paritytech/parity-ethereum/issues/3565) ([#8587](https://github.com/paritytech/parity-ethereum/pull/8587))
- Add whisper CLI to the pipelines ([#8578](https://github.com/paritytech/parity-ethereum/pull/8578))
- Rename `whisper-cli` binary to `whisper` ([#8579](https://github.com/paritytech/parity-ethereum/pull/8579))
- Changelog nit ([#8585](https://github.com/paritytech/parity-ethereum/pull/8585))
- Remove unnecessary cloning in overwrite_with ([#8580](https://github.com/paritytech/parity-ethereum/pull/8580))
- Handle socket address parsing errors ([#8545](https://github.com/paritytech/parity-ethereum/pull/8545))
- Update CHANGELOG for 1.9, 1.10, and 1.11 ([#8556](https://github.com/paritytech/parity-ethereum/pull/8556))
- Decoding headers can fail ([#8570](https://github.com/paritytech/parity-ethereum/pull/8570))
- Refactoring `ethcore-sync` - Fixing warp-sync barrier ([#8543](https://github.com/paritytech/parity-ethereum/pull/8543))
- Remove State::replace_backend ([#8569](https://github.com/paritytech/parity-ethereum/pull/8569))
- Make trace-time publishable. ([#8568](https://github.com/paritytech/parity-ethereum/pull/8568))
- Don't block sync when importing old blocks ([#8530](https://github.com/paritytech/parity-ethereum/pull/8530))
- Trace precompiled contracts when the transfer value is not zero ([#8486](https://github.com/paritytech/parity-ethereum/pull/8486))
- Parity as a library ([#8412](https://github.com/paritytech/parity-ethereum/pull/8412))
- Rlp decode returns Result ([#8527](https://github.com/paritytech/parity-ethereum/pull/8527))
- Node table sorting according to last contact data ([#8541](https://github.com/paritytech/parity-ethereum/pull/8541))
- Keep all enacted blocks notify in order ([#8524](https://github.com/paritytech/parity-ethereum/pull/8524))
- Ethcore, rpc, machine: refactor block reward application and tracing ([#8490](https://github.com/paritytech/parity-ethereum/pull/8490))
- Consolidate crypto functionality in `ethcore-crypto`. ([#8432](https://github.com/paritytech/parity-ethereum/pull/8432))
- Eip 145: Bitwise shifting instructions in EVM ([#8451](https://github.com/paritytech/parity-ethereum/pull/8451))
- Remove expect ([#8536](https://github.com/paritytech/parity-ethereum/pull/8536))
- Don't panic in import_block if invalid rlp ([#8522](https://github.com/paritytech/parity-ethereum/pull/8522))
- Pass on storage keys tracing to handle the case when it is not modified ([#8491](https://github.com/paritytech/parity-ethereum/pull/8491))
- Fetching logs by hash in blockchain database ([#8463](https://github.com/paritytech/parity-ethereum/pull/8463))
- Transaction Pool improvements ([#8470](https://github.com/paritytech/parity-ethereum/pull/8470))
- More changes for Android ([#8421](https://github.com/paritytech/parity-ethereum/pull/8421))
- Enable WebAssembly and Byzantium for Ellaism ([#8520](https://github.com/paritytech/parity-ethereum/pull/8520))
- Secretstore: merge two types of errors into single one + Error::is_non_fatal ([#8357](https://github.com/paritytech/parity-ethereum/pull/8357))
- Hardware Wallet trait ([#8071](https://github.com/paritytech/parity-ethereum/pull/8071))
- Directly return None if tracing is disabled ([#8504](https://github.com/paritytech/parity-ethereum/pull/8504))
- Show imported messages for light client ([#8517](https://github.com/paritytech/parity-ethereum/pull/8517))
- Remove unused dependency `bigint` ([#8505](https://github.com/paritytech/parity-ethereum/pull/8505))
- `duration_ns: u64 -> duration: Duration` ([#8457](https://github.com/paritytech/parity-ethereum/pull/8457))
- Return error if RLP size of transaction exceeds the limit ([#8473](https://github.com/paritytech/parity-ethereum/pull/8473))
- Remove three old warp boot nodes. ([#8497](https://github.com/paritytech/parity-ethereum/pull/8497))
- Update wasmi and pwasm-utils ([#8493](https://github.com/paritytech/parity-ethereum/pull/8493))
- Update hardcodedSync for Ethereum, Kovan, and Ropsten ([#8489](https://github.com/paritytech/parity-ethereum/pull/8489))
- Fix snap builds ([#8483](https://github.com/paritytech/parity-ethereum/pull/8483))
- Bump master to 1.12 ([#8477](https://github.com/paritytech/parity-ethereum/pull/8477))
- Don't require write lock when fetching status. ([#8481](https://github.com/paritytech/parity-ethereum/pull/8481))
- Use rename_all for RichBlock and RichHeader serialization ([#8471](https://github.com/paritytech/parity-ethereum/pull/8471))
## OpenEthereum v3.3.0-rc.13
Enhancements:
* London hardfork block: POA Core (24090200)
## OpenEthereum v3.3.0-rc.12
Enhancements:
* London hardfork block: xDai (19040000)
## OpenEthereum v3.3.0-rc.11
Bug fixes:
* Ignore GetNodeData requests only for non-AuRa chains
## OpenEthereum v3.3.0-rc.10
Enhancements:
* Add eip1559FeeCollector and eip1559FeeCollectorTransition spec options
## OpenEthereum v3.3.0-rc.9
Bug fixes:
* Add service transactions support for EIP-1559
* Fix MinGasPrice config option for POSDAO and EIP-1559
Enhancements:
* min_gas_price becomes min_effective_priority_fee
* Added version 4 of the TxPermission contract
## OpenEthereum v3.3.0-rc.8
Bug fixes:
* Ignore GetNodeData requests (#519)
## OpenEthereum v3.3.0-rc.7
Bug fixes:
* GetPooledTransactions is sent in invalid form (wrong packet id)
## OpenEthereum v3.3.0-rc.6
Enhancements:
* London hardfork block: kovan (26741100) (#502)
## OpenEthereum v3.3.0-rc.4
Enhancements:
* London hardfork block: mainnet (12,965,000) (#475)
* Support for eth/66 protocol version (#465)
* Bump ethereum/tests to v9.0.3
* Add eth_feeHistory (see the example request after this list)
Bug fixes:
* GetNodeData from eth63 is missing (#466)
* Effective gas price not omitting (#477)
* London support in openethereum-evm (#479)
* gasPrice is required field for Transaction object (#481)
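As an illustration of the `eth_feeHistory` addition above, here is a minimal request sketch. It assumes the standard parameter encoding (block count, newest block, reward percentiles) and the default JSON-RPC HTTP port; all values are illustrative:

```bash
# Base fee history and 25th/75th priority-fee percentiles over the last 4 blocks.
curl -X POST --data '{"jsonrpc":"2.0","method":"eth_feeHistory","params":["0x4","latest",[25,75]],"id":1}' \
  http://localhost:8545 -H 'Content-Type: application/json'
```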
## OpenEthereum v3.3.0-rc.3
Bug fixes:
* Add effective_gas_price to eth_getTransactionReceipt #445 (#450) (see the example below)
* Update eth_gasPrice to support EIP-1559 #449 (#458)
* eth_estimateGas returns "Requires higher than upper limit of X" after London Ropsten Hard Fork #459 (#460)
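A sketch of inspecting the new receipt field over JSON-RPC, as referenced above. The transaction hash is a placeholder, the `effectiveGasPrice` field name is assumed from the standard EIP-1559 receipt format, and `jq` is only used to extract the field:

```bash
# Placeholder: hash of an already mined transaction.
TX_HASH="0x..."

curl -s -X POST --data '{"jsonrpc":"2.0","method":"eth_getTransactionReceipt","params":["'"$TX_HASH"'"],"id":1}' \
  http://localhost:8545 -H 'Content-Type: application/json' | jq .result.effectiveGasPrice
```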
## OpenEthereum v3.3.0-rc.2
Enhancements:
* EIP-1559: Fee market change for ETH 1.0 chain
* EIP-3198: BASEFEE opcode
* EIP-3529: Reduction in gas refunds
* EIP-3541: Reject new contracts starting with the 0xEF byte
* Delay difficulty bomb to December 2021 (EIP-3554)
* London hardfork blocks: goerli (5,062,605), rinkeby (8,897,988), ropsten (10,499,401)
* Add chainspecs for aleut and baikal
* Bump ethereum/tests to v9.0.2
## OpenEthereum v3.2.6
Enhancement:
* Berlin hardfork blocks: poacore (21,364,900), poasokol (21,050,600)
## OpenEthereum v3.2.5
Bug fixes:
* Backport: Block sync stopped without any errors. #277 (#286)
* Strict memory order (#306)
Enhancements:
* Executable queue for ancient blocks inclusion (#208)
* Backport AuRa commits for xdai (#330)
* Add Nethermind to clients that accept service transactions (#324)
* Implement the filter argument in parity_pendingTransactions (#295)
* Ethereum-types and various libs upgraded (#315)
* [evmbin] Omit storage output, now for std-json (#311)
* Freeze pruning while creating snapshot (#205)
* AuRa multi block reward (#290)
* Improved metrics. DB read/write. prometheus prefix config (#240)
* Send RLPx auth in EIP-8 format (#287)
* rpc module reverted for RPC JSON api (#284)
* Revert "Remove eth/63 protocol version (#252)"
* Support for eth/65 protocol version (#366)
* Berlin hardfork blocks: kovan (24,770,900), xdai (16,101,500)
* Bump ethereum/tests to v8.0.3
devops:
* Upgrade docker alpine to `v1.13.2` for rust `v1.47`.
* Send SIGTERM instead of SIGHUP to OE daemon (#317)
## OpenEthereum v3.2.4
* Fix for Typed transaction broadcast.
## OpenEthereum v3.2.3
* Hotfix for berlin consensus error.
## OpenEthereum v3.2.2-rc.1
Bug fixes:
* Backport: Block sync stopped without any errors. #277 (#286)
* Strict memory order (#306)
Enhancements:
* Executable queue for ancient blocks inclusion (#208)
* Backport AuRa commits for xdai (#330)
* Add Nethermind to clients that accept service transactions (#324)
* Implement the filter argument in parity_pendingTransactions (#295)
* Ethereum-types and various libs upgraded (#315)
* Bump ethereum/tests to v8.0.2
* [evmbin] Omit storage output, now for std-json (#311)
* Freeze pruning while creating snapshot (#205)
* AuRa multi block reward (#290)
* Improved metrics. DB read/write. prometheus prefix config (#240)
* Send RLPx auth in EIP-8 format (#287)
* rpc module reverted for RPC JSON api (#284)
* Revert "Remove eth/63 protocol version (#252)"
devops:
* Upgrade docker alpine to `v1.13.2` for rust `v1.47`.
* Send SIGTERM instead of SIGHUP to OE daemon (#317)
## OpenEthereum v3.2.1
Hotfix for an issue related to initial sync:
* Initial sync gets stuck. (#318)
## OpenEthereum v3.2.0
Bug fixes:
* Update EWF's chains with Istanbul transition block numbers (#11482) (#254)
* Fix "Supplied instant is later than self" (#169)
* ethcore/snapshot: fix double-lock in Service::feed_chunk (#289)
Enhancements:
* Berlin hardfork blocks: mainnet (12,244,000), goerli (4,460,644), rinkeby (8,290,928) and ropsten (9,812,189)
* yolo3x spec (#241)
* EIP-2930 RPC support
* Remove eth/63 protocol version (#252)
* Snapshot manifest block added to prometheus (#232)
* EIP-1898: Allow default block parameter to be blockHash (see the example below)
* Change ProtocolId to U64
* Update ethereum/tests
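To illustrate the EIP-1898 entry above, a minimal sketch of passing a block hash instead of a block number as the default block parameter. The address and hash are placeholders, and the object form (`blockHash` with optional `requireCanonical`) follows EIP-1898 itself:

```bash
# Query a balance at a block identified by hash (placeholder values).
curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x0000000000000000000000000000000000000000",{"blockHash":"0x...","requireCanonical":false}],"id":1}' \
  http://localhost:8545 -H 'Content-Type: application/json'
```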
## Previous releases
- [CHANGELOG-1.11](docs/CHANGELOG-1.11.md) (_stable_)
- [CHANGELOG-1.10](docs/CHANGELOG-1.10.md) (EOL: 2018-07-18)
- [CHANGELOG-1.9](docs/CHANGELOG-1.9.md) (EOL: 2018-05-09)
- [CHANGELOG-1.8](docs/CHANGELOG-1.8.md) (EOL: 2018-03-22)
- [CHANGELOG-1.7](docs/CHANGELOG-1.7.md) (EOL: 2018-01-25)
- [CHANGELOG-1.6](docs/CHANGELOG-1.6.md) (EOL: 2017-10-15)
- [CHANGELOG-1.5](docs/CHANGELOG-1.5.md) (EOL: 2017-07-28)
- [CHANGELOG-1.4](docs/CHANGELOG-1.4.md) (EOL: 2017-03-13)
- [CHANGELOG-1.3](docs/CHANGELOG-1.3.md) (EOL: 2017-01-19)
- [CHANGELOG-1.2](docs/CHANGELOG-1.2.md) (EOL: 2016-11-07)
- [CHANGELOG-1.1](docs/CHANGELOG-1.1.md) (EOL: 2016-08-12)
- [CHANGELOG-1.0](docs/CHANGELOG-1.0.md) (EOL: 2016-06-24)
- [CHANGELOG-0.9](docs/CHANGELOG-0.9.md) (EOL: 2016-05-02)

Cargo.lock (generated file; diff suppressed because it is too large)

Cargo.toml

@ -1,19 +1,17 @@
[package]
description = "OpenEthereum"
name = "openethereum"
description = "Parity Ethereum client"
name = "parity-ethereum"
# NOTE Make sure to update util/version/Cargo.toml as well
version = "3.3.3"
version = "2.1.9"
license = "GPL-3.0"
authors = [
"OpenEthereum developers",
"Parity Technologies <admin@parity.io>"
]
authors = ["Parity Technologies <admin@parity.io>"]
[dependencies]
blooms-db = { path = "crates/db/blooms-db" }
blooms-db = { path = "util/blooms-db" }
log = "0.4"
env_logger = "0.5"
rustc-hex = "1.0"
docopt = "1.0"
docopt = "0.8"
clap = "2"
term_size = "0.3"
textwrap = "0.9"
@ -22,56 +20,56 @@ number_prefix = "0.2"
rpassword = "1.0"
semver = "0.9"
ansi_term = "0.10"
parking_lot = "0.11.1"
regex = "1.0"
parking_lot = "0.6"
regex = "0.2"
atty = "0.2.8"
toml = "0.4"
serde = "1.0"
serde_json = "1.0"
serde_derive = "1.0"
futures = "0.1"
hyper = { version = "0.12" }
futures-cpupool = "0.1"
fdlimit = "0.1"
ctrlc = { git = "https://github.com/paritytech/rust-ctrlc.git" }
jsonrpc-core = "15.0.0"
jsonrpc-core = { git = "https://github.com/paritytech/jsonrpc.git", branch = "parity-1.11" }
ethcore = { path = "ethcore", features = ["parity"] }
parity-bytes = "0.1"
common-types = { path = "crates/ethcore/types" }
ethcore = { path = "crates/ethcore", features = ["parity"] }
ethcore-accounts = { path = "crates/accounts", optional = true }
ethcore-blockchain = { path = "crates/ethcore/blockchain" }
ethcore-call-contract = { path = "crates/vm/call-contract"}
ethcore-db = { path = "crates/db/db" }
ethcore-io = { path = "crates/runtime/io" }
ethcore-logger = { path = "bin/oe/logger" }
ethcore-miner = { path = "crates/concensus/miner" }
ethcore-network = { path = "crates/net/network" }
ethcore-service = { path = "crates/ethcore/service" }
ethcore-sync = { path = "crates/ethcore/sync" }
ethereum-types = "0.9.2"
ethkey = { path = "crates/accounts/ethkey" }
ethstore = { path = "crates/accounts/ethstore" }
fetch = { path = "crates/net/fetch" }
node-filter = { path = "crates/net/node-filter" }
parity-crypto = { version = "0.6.2", features = [ "publickey" ] }
rlp = { version = "0.4.6" }
cli-signer= { path = "crates/util/cli-signer" }
parity-daemonize = "0.3"
parity-local-store = { path = "crates/concensus/miner/local-store" }
parity-runtime = { path = "crates/runtime/runtime" }
parity-rpc = { path = "crates/rpc" }
parity-version = { path = "crates/util/version" }
ethcore-io = { path = "util/io" }
ethcore-light = { path = "ethcore/light" }
ethcore-logger = { path = "logger" }
ethcore-miner = { path = "miner" }
ethcore-network = { path = "util/network" }
ethcore-private-tx = { path = "ethcore/private-tx" }
ethcore-service = { path = "ethcore/service" }
ethcore-sync = { path = "ethcore/sync" }
ethcore-transaction = { path = "ethcore/transaction" }
ethereum-types = "0.4"
node-filter = { path = "ethcore/node_filter" }
ethkey = { path = "ethkey" }
rlp = { version = "0.2.4", features = ["ethereum"] }
rpc-cli = { path = "rpc_cli" }
parity-hash-fetch = { path = "hash-fetch" }
parity-ipfs-api = { path = "ipfs" }
parity-local-store = { path = "local-store" }
parity-reactor = { path = "util/reactor" }
parity-rpc = { path = "rpc" }
parity-rpc-client = { path = "rpc_client" }
parity-updater = { path = "updater" }
parity-version = { path = "util/version" }
parity-whisper = { path = "whisper" }
parity-path = "0.1"
dir = { path = "crates/util/dir" }
panic_hook = { path = "crates/util/panic-hook" }
keccak-hash = "0.5.0"
migration-rocksdb = { path = "crates/db/migration-rocksdb" }
dir = { path = "util/dir" }
panic_hook = { path = "util/panic_hook" }
keccak-hash = "0.1"
migration-rocksdb = { path = "util/migration-rocksdb" }
kvdb = "0.1"
kvdb-rocksdb = "0.1.3"
journaldb = { path = "crates/db/journaldb" }
stats = { path = "crates/util/stats" }
prometheus = "0.9.0"
journaldb = { path = "util/journaldb" }
mem = { path = "util/mem" }
# ethcore-secretstore = { path = "crates/util/secret-store", optional = true }
ethcore-secretstore = { path = "secret_store", optional = true }
registrar = { path = "registrar" }
[build-dependencies]
rustc_version = "0.2"
@ -80,22 +78,22 @@ rustc_version = "0.2"
pretty_assertions = "0.1"
ipnetwork = "0.12.6"
tempdir = "0.3"
fake-fetch = { path = "crates/net/fake-fetch" }
lazy_static = "1.2.0"
fake-fetch = { path = "util/fake-fetch" }
[target.'cfg(windows)'.dependencies]
winapi = { version = "0.3.4", features = ["winsock2", "winuser", "shellapi"] }
[target.'cfg(not(windows))'.dependencies]
daemonize = "0.3"
[features]
default = ["accounts"]
accounts = ["ethcore-accounts", "parity-rpc/accounts"]
miner-debug = ["ethcore/miner-debug"]
json-tests = ["ethcore/json-tests"]
ci-skip-tests = ["ethcore/ci-skip-tests"]
test-heavy = ["ethcore/test-heavy"]
evm-debug = ["ethcore/evm-debug"]
evm-debug-tests = ["ethcore/evm-debug-tests"]
slow-blocks = ["ethcore/slow-blocks"]
secretstore = ["ethcore-secretstore"]
final = ["parity-version/final"]
deadlock_detection = ["parking_lot/deadlock_detection"]
# to create a memory profile (requires nightly rust), use e.g.
@ -105,29 +103,42 @@ deadlock_detection = ["parking_lot/deadlock_detection"]
# `valgrind --tool=massif /path/to/parity <parity params>`
# and `massif-visualizer` for visualization
memory_profiling = []
# hardcode version number 1.3.7 of parity to force an update
# in order to manually test that parity fall-over to the local version
# in case of invalid or deprecated command line arguments are entered
test-updater = ["parity-updater/test-updater"]
[lib]
path = "bin/oe/lib.rs"
path = "parity/lib.rs"
[[bin]]
path = "bin/oe/main.rs"
name = "openethereum"
path = "parity/main.rs"
name = "parity"
[profile.test]
lto = false
opt-level = 3 # makes tests slower to compile, but faster to run
[profile.dev]
[profile.release]
debug = false
lto = true
[workspace]
# This should only list projects that are not
# in the dependency tree in any other way
# (i.e. pretty much only standalone CLI tools)
members = [
"bin/ethkey",
"bin/ethstore",
"bin/evmbin",
"bin/chainspec"
"chainspec",
"ethcore/wasm/run",
"ethcore/types",
"ethkey/cli",
"ethstore/cli",
"evmbin",
"miner",
"parity-clib",
"whisper",
"whisper/cli",
"util/triehash-ethereum",
"util/keccak-hasher",
"util/patricia-trie-ethereum",
"util/fastmap",
]
[patch.crates-io]
ring = { git = "https://github.com/paritytech/ring" }
tokio-proto = { git = "https://github.com/tokio-rs/tokio-proto.git", rev = "56c720e" }
untrusted = { git = "https://github.com/paritytech/untrusted", branch = "0.5" }

README.md

@ -1,38 +1,13 @@
# OpenEthereum
![Parity Ethereum](docs/logo-parity-ethereum.svg)
Fast and feature-rich multi-network Ethereum client.
## The fastest and most advanced Ethereum client.
[» Download the latest release «](https://github.com/openethereum/openethereum/releases/latest)
<p align="center"><strong><a href="https://github.com/paritytech/parity-ethereum/releases/latest">» Download the latest release «</a></strong></p>
[![GPL licensed][license-badge]][license-url]
[![Build Status][ci-badge]][ci-url]
[![Discord chat][chat-badge]][chat-url]
<p align="center"><a href="https://gitlab.parity.io/parity/parity-ethereum/commits/master" target="_blank"><img src="https://gitlab.parity.io/parity/parity-ethereum/badges/master/build.svg" /></a>
<a href="https://www.gnu.org/licenses/gpl-3.0.en.html" target="_blank"><img src="https://img.shields.io/badge/license-GPL%20v3-green.svg" /></a></p>
[license-badge]: https://img.shields.io/badge/license-GPL_v3-green.svg
[license-url]: LICENSE
[ci-badge]: https://github.com/openethereum/openethereum/workflows/Build%20and%20Test%20Suite/badge.svg
[ci-url]: https://github.com/openethereum/openethereum/actions
[chat-badge]: https://img.shields.io/discord/669192218728202270.svg?logo=discord
[chat-url]: https://discord.io/openethereum
## Table of Contents
1. [Description](#chapter-001)
2. [Technical Overview](#chapter-002)
3. [Building](#chapter-003)<br>
3.1 [Building Dependencies](#chapter-0031)<br>
3.2 [Building from Source Code](#chapter-0032)<br>
3.3 [Starting OpenEthereum](#chapter-0034)
4. [Testing](#chapter-004)
5. [Documentation](#chapter-005)
6. [Toolchain](#chapter-006)
7. [Contributing](#chapter-007)
8. [License](#chapter-008)
## 1. Description <a id="chapter-001"></a>
**Built for mission-critical use**: Miners, service providers, and exchanges need fast synchronisation and maximum uptime. OpenEthereum provides the core infrastructure essential for speedy and reliable services.
**Built for mission-critical use**: Miners, service providers, and exchanges need fast synchronisation and maximum uptime. Parity Ethereum provides the core infrastructure essential for speedy and reliable services.
- Clean, modular codebase for easy customisation
- Advanced CLI-based client
@ -40,21 +15,19 @@ Fast and feature-rich multi-network Ethereum client.
- Synchronise in hours, not days with Warp Sync
- Modular for light integration into your service or product
## 2. Technical Overview <a id="chapter-002"></a>
## Technical Overview
OpenEthereum's goal is to be the fastest, lightest, and most secure Ethereum client. We are developing OpenEthereum using the **Rust programming language**. OpenEthereum is licensed under the GPLv3 and can be used for all your Ethereum needs.
Parity Ethereum's goal is to be the fastest, lightest, and most secure Ethereum client. We are developing Parity Ethereum using the sophisticated and cutting-edge **Rust programming language**. Parity Ethereum is licensed under the GPLv3 and can be used for all your Ethereum needs.
By default, OpenEthereum runs a JSON-RPC HTTP server on port `:8545` and a Web-Sockets server on port `:8546`. This is fully configurable and supports a number of APIs.
By default, Parity Ethereum runs a JSON-RPC HTTP server on port `:8545` and a Web-Sockets server on port `:8546`. This is fully configurable and supports a number of APIs.
If you run into problems while using OpenEthereum, check out the [old wiki for documentation](https://openethereum.github.io/), feel free to [file an issue in this repository](https://github.com/openethereum/openethereum/issues/new), or hop on our [Discord](https://discord.io/openethereum) chat room to ask a question. We are glad to help!
If you run into problems while using Parity Ethereum, check out the [wiki for documentation](https://wiki.parity.io/), feel free to [file an issue in this repository](https://github.com/paritytech/parity-ethereum/issues/new), or hop on our [Gitter](https://gitter.im/paritytech/parity) or [Riot](https://riot.im/app/#/group/+parity:matrix.parity.io) chat room to ask a question. We are glad to help! **For security-critical issues**, please refer to the security policy outlined in [SECURITY.md](SECURITY.md).
You can download OpenEthereum's latest release at [the releases page](https://github.com/openethereum/openethereum/releases) or follow the instructions below to build from source. Read the [CHANGELOG.md](CHANGELOG.md) for a list of all changes between different versions.
Parity Ethereum's current beta-release is 2.1. You can download it at [the releases page](https://github.com/paritytech/parity-ethereum/releases) or follow the instructions below to build from source. Please, mind the [CHANGELOG.md](CHANGELOG.md) for a list of all changes between different versions.
## 3. Building <a id="chapter-003"></a>
## Build Dependencies
### 3.1 Build Dependencies <a id="chapter-0031"></a>
OpenEthereum requires **latest stable Rust version** to build.
Parity Ethereum requires **Rust version 1.29.x** to build.
We recommend installing Rust through [rustup](https://www.rustup.rs/). If you don't already have `rustup`, you can install it like this:
@ -63,7 +36,7 @@ We recommend installing Rust through [rustup](https://www.rustup.rs/). If you do
$ curl https://sh.rustup.rs -sSf | sh
```
OpenEthereum also requires `clang` (>= 9.0), `clang++`, `pkg-config`, `file`, `make`, and `cmake` packages to be installed.
Parity Ethereum also requires `gcc`, `g++`, `libudev-dev`, `pkg-config`, `file`, `make`, and `cmake` packages to be installed.
- OSX:
```bash
@ -72,7 +45,7 @@ We recommend installing Rust through [rustup](https://www.rustup.rs/). If you do
`clang` is required. It comes with Xcode command line tools or can be installed with homebrew.
- Windows:
- Windows
Make sure you have Visual Studio 2015 with C++ support installed. Next, download and run the `rustup` installer from
https://static.rust-lang.org/rustup/dist/x86_64-pc-windows-msvc/rustup-init.exe, start "VS2015 x64 Native Tools Command Prompt", and use the following command to install and set up the `msvc` toolchain:
```bash
@ -83,14 +56,14 @@ Once you have `rustup` installed, then you need to install:
* [Perl](https://www.perl.org)
* [Yasm](https://yasm.tortall.net)
Make sure that these binaries are in your `PATH`. After that, you should be able to build OpenEthereum from source.
Make sure that these binaries are in your `PATH`. After that, you should be able to build Parity Ethereum from source.
### 3.2 Build from Source Code <a id="chapter-0032"></a>
## Build from Source Code
```bash
# download OpenEthereum code
$ git clone https://github.com/openethereum/openethereum
$ cd openethereum
# download Parity Ethereum code
$ git clone https://github.com/paritytech/parity-ethereum
$ cd parity-ethereum
# build in release mode
$ cargo build --release --features final
@ -110,209 +83,73 @@ Note, when compiling a crate and you receive errors, it's in most cases your out
$ cargo clean
```
This always compiles the latest nightly builds. If you want to build stable, do a
This always compiles the latest nightly builds. If you want to build stable or beta, do a
```bash
$ git checkout stable
```
### 3.3 Starting OpenEthereum <a id="chapter-0034"></a>
#### Manually
To start OpenEthereum manually, just run
or
```bash
$ ./target/release/openethereum
$ git checkout beta
```
so OpenEthereum begins syncing the Ethereum blockchain.
## Simple One-Line Installer for Mac and Linux
#### Using `systemd` service file
```bash
bash <(curl https://get.parity.io -L)
```
To start OpenEthereum as a regular user using `systemd` init:
The one-line installer always defaults to the latest beta release. To install a stable release, run:
1. Copy `./scripts/openethereum.service` to your
```bash
bash <(curl https://get.parity.io -L) -r stable
```
## Start Parity Ethereum
### Manually
To start Parity Ethereum manually, just run
```bash
$ ./target/release/parity
```
so Parity Ethereum begins syncing the Ethereum blockchain.
### Using `systemd` service file
To start Parity Ethereum as a regular user using `systemd` init:
1. Copy `./scripts/parity.service` to your
`systemd` user directory (usually `~/.config/systemd/user`).
2. Copy the release binary to your bin folder: `sudo install ./target/release/openethereum /usr/bin/openethereum`
3. To configure OpenEthereum, see [our wiki](https://openethereum.github.io/Configuring-OpenEthereum) for details. A minimal start sketch follows below.
2. To configure Parity Ethereum, write a `/etc/parity/config.toml` config file, see [Configuring Parity Ethereum](https://paritytech.github.io/wiki/Configuring-Parity) for details.
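A minimal start sketch for the steps above, assuming the unit file was copied into the `systemd` user directory under the name `openethereum.service` and the binary was installed as described:

```bash
# Reload user units so systemd picks up the new service file, then start it and follow its logs.
systemctl --user daemon-reload
systemctl --user start openethereum
journalctl --user -u openethereum -f
```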
## 4. Testing <a id="chapter-004"></a>
## Parity Ethereum toolchain
Download the required test files: `git submodule update --init --recursive`. You can run tests with the following commands:
In addition to the Parity Ethereum client, there are additional tools in this repository available:
* **All** packages
```
cargo test --all
```
- [evmbin](https://github.com/paritytech/parity-ethereum/blob/master/evmbin/) - EVM implementation for Parity Ethereum.
- [ethabi](https://github.com/paritytech/ethabi) - Parity Ethereum function calls encoding.
- [ethstore](https://github.com/paritytech/parity-ethereum/blob/master/ethstore/) - Parity Ethereum key management.
- [ethkey](https://github.com/paritytech/parity-ethereum/blob/master/ethkey/) - Parity Ethereum keys generator.
- [whisper](https://github.com/paritytech/parity-ethereum/blob/master/whisper/) - Implementation of Whisper-v2 PoC.
* Specific package
```
cargo test --package <spec>
```
## Join the chat!
Replace `<spec>` with one of the packages from the [package list](#package-list) (e.g. `cargo test --package evmbin`).
Questions? Get in touch with us on Gitter:
[![Gitter: Parity](https://img.shields.io/badge/gitter-parity-4AB495.svg)](https://gitter.im/paritytech/parity)
[![Gitter: Parity.js](https://img.shields.io/badge/gitter-parity.js-4AB495.svg)](https://gitter.im/paritytech/parity.js)
[![Gitter: Parity/Miners](https://img.shields.io/badge/gitter-parity/miners-4AB495.svg)](https://gitter.im/paritytech/parity/miners)
[![Gitter: Parity-PoA](https://img.shields.io/badge/gitter-parity--poa-4AB495.svg)](https://gitter.im/paritytech/parity-poa)
You can show your logs in the test output by passing `--nocapture` (e.g. `cargo test --package evmbin -- --nocapture`)
Alternatively, join our community on Matrix:
[![Riot: +Parity](https://img.shields.io/badge/riot-%2Bparity%3Amatrix.parity.io-orange.svg)](https://riot.im/app/#/group/+parity:matrix.parity.io)
## 5. Documentation <a id="chapter-005"></a>
## Documentation
Be sure to [check out our wiki](https://openethereum.github.io/) for more information.
Official website: https://parity.io
### Viewing documentation for OpenEthereum packages
You can generate documentation for OpenEthereum Rust packages, which automatically opens in your web browser, using [rustdoc with Cargo](https://doc.rust-lang.org/rustdoc/what-is-rustdoc.html#using-rustdoc-with-cargo) (from The Rustdoc Book), by running the following commands:
* **All** packages
```
cargo doc --document-private-items --open
```
* Specific package
```
cargo doc --package <spec> -- --document-private-items --open
```
Use `--document-private-items` to also view private documentation and `--no-deps` to exclude building documentation for dependencies.
Replace `<spec>` with one of the following from the details section below (e.g. `cargo doc --package openethereum --open`):
<a id="package-list"></a>
**Package List**
<details><p>
* OpenEthereum Client Application
```bash
openethereum
```
* OpenEthereum Account Management, Key Management Tool, and Keys Generator
```bash
ethcore-accounts, ethkey-cli, ethstore, ethstore-cli
```
* OpenEthereum Chain Specification
```bash
chainspec
```
* OpenEthereum CLI Signer Tool & RPC Client
```bash
cli-signer parity-rpc-client
```
* OpenEthereum Ethash & ProgPoW Implementations
```bash
ethash
```
* EthCore Library
```bash
ethcore
```
* OpenEthereum Blockchain Database, Test Generator, Configuration,
Caching, Importing Blocks, and Block Information
```bash
ethcore-blockchain
```
* OpenEthereum Contract Calls and Blockchain Service & Registry Information
```bash
ethcore-call-contract
```
* OpenEthereum Database Access & Utilities, Database Cache Manager
```bash
ethcore-db
```
* OpenEthereum Virtual Machine (EVM) Rust Implementation
```bash
evm
```
* OpenEthereum Light Client Implementation
```bash
ethcore-light
```
* Smart Contract based Node Filter, Manage Permissions of Network Connections
```bash
node-filter
```
* OpenEthereum Client & Network Service Creation & Registration with the I/O Subsystem
```bash
ethcore-service
```
* OpenEthereum Blockchain Synchronization
```bash
ethcore-sync
```
* OpenEthereum Common Types
```bash
common-types
```
* OpenEthereum Virtual Machines (VM) Support Library
```bash
vm
```
* OpenEthereum WASM Interpreter
```bash
wasm
```
* OpenEthereum WASM Test Runner
```bash
pwasm-run-test
```
* OpenEthereum EVM Implementation
```bash
evmbin
```
* OpenEthereum JSON Deserialization
```bash
ethjson
```
* OpenEthereum State Machine Generalization for Consensus Engines
```bash
parity-machine
```
* OpenEthereum Miner Interface
```bash
ethcore-miner parity-local-store price-info ethcore-stratum using_queue
```
* OpenEthereum Logger Implementation
```bash
ethcore-logger
```
* OpenEthereum JSON-RPC Servers
```bash
parity-rpc
```
* OpenEthereum Updater Service
```bash
parity-updater parity-hash-fetch
```
* OpenEthereum Core Libraries (`util`)
```bash
accounts-bloom blooms-db dir eip-712 fake-fetch fastmap fetch ethcore-io
journaldb keccak-hasher len-caching-lock memory-cache memzero
migration-rocksdb ethcore-network ethcore-network-devp2p panic_hook
patricia-trie-ethereum registrar rlp_compress stats
time-utils triehash-ethereum unexpected parity-version
```
</p></details>
## 6. Toolchain <a id="chapter-006"></a>
In addition to the OpenEthereum client, there are additional tools in this repository available:
- [evmbin](./bin/evmbin) - OpenEthereum EVM Implementation.
- [ethstore](./crates/accounts/ethstore) - OpenEthereum Key Management.
- [ethkey](./crates/accounts/ethkey) - OpenEthereum Keys Generator.
The following tools are available in a separate repository:
- [ethabi](https://github.com/openethereum/ethabi) - OpenEthereum Encoding of Function Calls. [Docs here](https://crates.io/crates/ethabi)
- [whisper](https://github.com/openethereum/whisper) - OpenEthereum Whisper-v2 PoC Implementation.
## 7. Contributing <a id="chapter-007"></a>
An introduction has been provided in the ["So You Want to be a Core Developer" presentation slides by Hernando Castano](http://tiny.cc/contrib-to-parity-eth). Additional guidelines are provided in [CONTRIBUTING](./.github/CONTRIBUTING.md).
### Contributor Code of Conduct
[CODE_OF_CONDUCT](./.github/CODE_OF_CONDUCT.md)
## 8. License <a id="chapter-008"></a>
[LICENSE](./LICENSE)
Be sure to [check out our wiki](https://wiki.parity.io) for more information.

SECURITY.md (new file)

@ -0,0 +1,80 @@
# Security Policy
Parity Technologies is committed to resolving security vulnerabilities in our software quickly and carefully. We take the necessary steps to minimize risk, provide timely information, and deliver vulnerability fixes and mitigations required to address security issues.
## Reporting a Vulnerability
Security vulnerabilities in Parity software should be reported by email to security@parity.io. If you think your report might be eligible for the Parity Bug Bounty Program, your email should be sent to bugbounty@parity.io.
Your report should include the following:
- your name
- description of the vulnerability
- attack scenario (if any)
- components
- reproduction
- other details
Try to include as much information in your report as you can, including a description of the vulnerability, its potential impact, and steps for reproducing it. Be sure to use a descriptive subject line.
You'll receive a response to your email within two business days indicating the next steps in handling your report. We encourage finders to use encrypted communication channels to protect the confidentiality of vulnerability reports. You can encrypt your report using our public key. This key is available [on MIT's key server](https://pgp.mit.edu/pks/lookup?op=get&search=0x5D0F03018D07DE73) and is reproduced below.
After the initial reply to your report, our team will endeavor to keep you informed of the progress being made towards a fix. These updates will be sent at least every five business days.
Thank you for taking the time to responsibly disclose any vulnerabilities you find.
## Responsible Investigation and Reporting
Responsible investigation and reporting includes, but isn't limited to, the following:
- Don't violate the privacy of other users, destroy data, etc.
- Don't defraud or harm Parity Technologies Ltd or its users during your research; you should make a good faith effort to not interrupt or degrade our services.
- Don't target our physical security measures, or attempt to use social engineering, spam, distributed denial of service (DDoS) attacks, etc.
- Initially report the bug only to us and not to anyone else.
- Give us a reasonable amount of time to fix the bug before disclosing it to anyone else, and give us adequate written warning before disclosing it to anyone else.
- In general, please investigate and report bugs in a way that makes a reasonable, good faith effort not to be disruptive or harmful to us or our users. Otherwise your actions might be interpreted as an attack rather than an effort to be helpful.
## Bug Bounty Program
Our Bug Bounty Program allows us to recognise and reward members of the Parity community for helping us find and address significant bugs, in accordance with the terms of the Parity Bug Bounty Program. A detailed description of eligibility, rewards, legal information, and terms & conditions for contributors can be found on [our website](https://paritytech.io/bug-bounty.html).
## Plaintext PGP Key
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBFlyIAwBCACe0keNPjgYzZ1Oy/8t3zj/Qw9bHHqrzx7FWy8NbXnYBM19NqOZ
DIP7Oe0DvCaf/uruBskCS0iVstHlEFQ2AYe0Ei0REt9lQdy61GylU/DEB3879IG+
6FO0SnFeYeerv1/hFI2K6uv8v7PyyVDiiJSW0I1KIs2OBwJicTKmWxLAeQsRgx9G
yRGalrVk4KP+6pWTA7k3DxmDZKZyfYV/Ej10NtuzmsemwDbv98HKeomp/kgFOfSy
3AZjeCpctlsNqpjUuXa0/HudmH2WLxZ0fz8XeoRh8XM9UudNIecjrDqmAFrt/btQ
/3guvlzhFCdhYPVGsUusKMECk/JG+Xx1/1ZjABEBAAG0LFBhcml0eSBTZWN1cml0
eSBDb250YWN0IDxzZWN1cml0eUBwYXJpdHkuaW8+iQFUBBMBCAA+FiEE2uUVYCjP
N6B8aTiDXQ8DAY0H3nMFAllyIAwCGwMFCQPCZwAFCwkIBwIGFQgJCgsCBBYCAwEC
HgECF4AACgkQXQ8DAY0H3nM60wgAkS3A36Zc+upiaxU7tumcGv+an17j7gin0sif
+0ELSjVfrXInM6ovai+NhUdcLkJ7tCrKS90fvlaELK5Sg9CXBWCTFccKN4A/B7ey
rOg2NPXUecnyBB/XqQgKYH7ujYlOlqBDXMfz6z8Hj6WToxg9PPMGGomyMGh8AWxM
3yRPFs5RKt0VKgN++5N00oly5Y8ri5pgCidDvCLYMGTVDHFKwkuc9w6BlWlu1R1e
/hXFWUFAP1ffTAul3QwyKhjPn2iotCdxXjvt48KaU8DN4iL7aMBN/ZBKqGS7yRdF
D/JbJyaaJ0ZRvFSTSXy/sWY3z1B5mtCPBxco8hqqNfRkCwuZ6LkBDQRZciAMAQgA
8BP8xrwe12TOUTqL/Vrbxv/FLdhKh53J6TrPKvC2TEEKOrTNo5ahRq+XOS5E7G2N
x3b+fq8gR9BzFcldAx0XWUtGs/Wv++ulaSNqTBxj13J3G3WGsUfMKxRgj//piCUD
bCFLQfGZdKk0M1o9QkPVARwwmvCNiNB/l++xGqPtfc44H5jWj3GoGvL2MkShPzrN
yN/bJ+m+R5gtFGdInqa5KXBuxxuW25eDKJ+LzjbgUgeC76wNcfOiQHTdMkcupjdO
bbGFwo10hcbRAOcZEv6//Zrlmk/6nPxEd2hN20St2bSN0+FqfZ267mWEu3ejsgF8
ArdCpv5h4fBvJyNwiTZwIQARAQABiQE8BBgBCAAmFiEE2uUVYCjPN6B8aTiDXQ8D
AY0H3nMFAllyIAwCGwwFCQPCZwAACgkQXQ8DAY0H3nNisggAl4fqhRlA34wIb190
sqXHVxiCuzPaqS6krE9xAa1+gncX485OtcJNqnjugHm2rFE48lv7oasviuPXuInE
/OgVFnXYv9d/Xx2JUeDs+bFTLouCDRY2Unh7KJZasfqnMcCHWcxHx5FvRNZRssaB
WTZVo6sizPurGUtbpYe4/OLFhadBqAE0EUmVRFEUMc1YTnu4eLaRBzoWN4d2UWwi
LN25RSrVSke7LTSFbgn9ntQrQ2smXSR+cdNkkfRCjFcpUaecvFl9HwIqoyVbT4Ym
0hbpbbX/cJdc91tKa+psa29uMeGL/cgL9fAu19yNFRyOTMxjZnvql1X/WE1pLmoP
ETBD1Q==
=K9Qw
-----END PGP PUBLIC KEY BLOCK-----
```
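Once imported, this key can be used to encrypt a report before sending it. A minimal sketch, assuming `gpg` is installed and the report is saved locally as `report.txt` (a hypothetical file name):
```bash
# Fetch the Parity security key from the MIT key server (key ID from the link above)
gpg --keyserver pgp.mit.edu --recv-keys 0x5D0F03018D07DE73

# Encrypt the report for the security contact; attach the resulting report.txt.asc to your email
gpg --armor --encrypt --recipient security@parity.io report.txt
```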


@ -1,51 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
extern crate ethjson;
extern crate serde_json;
use ethjson::spec::Spec;
use std::{env, fs, process};
fn quit(s: &str) -> ! {
println!("{}", s);
process::exit(1);
}
fn main() {
let mut args = env::args();
if args.len() != 2 {
quit(
"You need to specify chainspec.json\n\
\n\
./chainspec <chainspec.json>",
);
}
let path = args.nth(1).expect("args.len() == 2; qed");
let file = match fs::File::open(&path) {
Ok(file) => file,
Err(_) => quit(&format!("{} could not be opened", path)),
};
let spec: Result<Spec, _> = serde_json::from_reader(file);
if let Err(err) = spec {
quit(&format!("{} {}", path, err.to_string()));
}
println!("{} is valid", path);
}


@ -1,493 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
extern crate docopt;
extern crate env_logger;
extern crate ethkey;
extern crate panic_hook;
extern crate parity_crypto as crypto;
extern crate parity_wordlist;
extern crate rustc_hex;
extern crate serde;
extern crate threadpool;
#[macro_use]
extern crate serde_derive;
use std::{env, fmt, io, num::ParseIntError, process, sync};
use crypto::publickey::{
sign, verify_address, verify_public, Error as EthkeyError, Generator, KeyPair, Random,
};
use docopt::Docopt;
use ethkey::{brain_recover, Brain, BrainPrefix, Prefix};
use rustc_hex::{FromHex, FromHexError};
const USAGE: &'static str = r#"
Parity Ethereum keys generator.
Copyright 2015-2019 Parity Technologies (UK) Ltd.
Usage:
ethkey info <secret-or-phrase> [options]
ethkey generate random [options]
ethkey generate prefix <prefix> [options]
ethkey sign <secret> <message>
ethkey verify public <public> <signature> <message>
ethkey verify address <address> <signature> <message>
ethkey recover <address> <known-phrase>
ethkey [-h | --help]
Options:
-h, --help Display this message and exit.
-s, --secret Display only the secret key.
-p, --public Display only the public key.
-a, --address Display only the address.
-b, --brain Use parity brain wallet algorithm. Not recommended.
Commands:
info Display public key and address of the secret.
generate random Generates new random Ethereum key.
generate prefix Random generation, but address must start with a prefix ("vanity address").
sign Sign message using a secret key.
verify Verify signer of the signature by public key or address.
recover Try to find brain phrase matching given address from partial phrase.
"#;
#[derive(Debug, Deserialize)]
struct Args {
cmd_info: bool,
cmd_generate: bool,
cmd_random: bool,
cmd_prefix: bool,
cmd_sign: bool,
cmd_verify: bool,
cmd_public: bool,
cmd_address: bool,
cmd_recover: bool,
arg_prefix: String,
arg_secret: String,
arg_secret_or_phrase: String,
arg_known_phrase: String,
arg_message: String,
arg_public: String,
arg_address: String,
arg_signature: String,
flag_secret: bool,
flag_public: bool,
flag_address: bool,
flag_brain: bool,
}
#[derive(Debug)]
enum Error {
Ethkey(EthkeyError),
FromHex(FromHexError),
ParseInt(ParseIntError),
Docopt(docopt::Error),
Io(io::Error),
}
impl From<EthkeyError> for Error {
fn from(err: EthkeyError) -> Self {
Error::Ethkey(err)
}
}
impl From<FromHexError> for Error {
fn from(err: FromHexError) -> Self {
Error::FromHex(err)
}
}
impl From<ParseIntError> for Error {
fn from(err: ParseIntError) -> Self {
Error::ParseInt(err)
}
}
impl From<docopt::Error> for Error {
fn from(err: docopt::Error) -> Self {
Error::Docopt(err)
}
}
impl From<io::Error> for Error {
fn from(err: io::Error) -> Self {
Error::Io(err)
}
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match *self {
Error::Ethkey(ref e) => write!(f, "{}", e),
Error::FromHex(ref e) => write!(f, "{}", e),
Error::ParseInt(ref e) => write!(f, "{}", e),
Error::Docopt(ref e) => write!(f, "{}", e),
Error::Io(ref e) => write!(f, "{}", e),
}
}
}
enum DisplayMode {
KeyPair,
Secret,
Public,
Address,
}
impl DisplayMode {
fn new(args: &Args) -> Self {
if args.flag_secret {
DisplayMode::Secret
} else if args.flag_public {
DisplayMode::Public
} else if args.flag_address {
DisplayMode::Address
} else {
DisplayMode::KeyPair
}
}
}
fn main() {
panic_hook::set_abort();
env_logger::try_init().expect("Logger initialized only once.");
match execute(env::args()) {
Ok(ok) => println!("{}", ok),
Err(Error::Docopt(ref e)) => e.exit(),
Err(err) => {
eprintln!("{}", err);
process::exit(1);
}
}
}
fn display(result: (KeyPair, Option<String>), mode: DisplayMode) -> String {
let keypair = result.0;
match mode {
DisplayMode::KeyPair => match result.1 {
Some(extra_data) => format!("{}\n{}", extra_data, keypair),
None => format!("{}", keypair),
},
DisplayMode::Secret => format!("{:x}", keypair.secret()),
DisplayMode::Public => format!("{:x}", keypair.public()),
DisplayMode::Address => format!("{:x}", keypair.address()),
}
}
fn execute<S, I>(command: I) -> Result<String, Error>
where
I: IntoIterator<Item = S>,
S: AsRef<str>,
{
let args: Args = Docopt::new(USAGE).and_then(|d| d.argv(command).deserialize())?;
return if args.cmd_info {
let display_mode = DisplayMode::new(&args);
let result = if args.flag_brain {
let phrase = args.arg_secret_or_phrase;
let phrase_info = validate_phrase(&phrase);
let keypair = Brain::new(phrase).generate();
(keypair, Some(phrase_info))
} else {
let secret = args
.arg_secret_or_phrase
.parse()
.map_err(|_| EthkeyError::InvalidSecretKey)?;
(KeyPair::from_secret(secret)?, None)
};
Ok(display(result, display_mode))
} else if args.cmd_generate {
let display_mode = DisplayMode::new(&args);
let result = if args.cmd_random {
if args.flag_brain {
let mut brain = BrainPrefix::new(vec![0], usize::max_value(), BRAIN_WORDS);
let keypair = brain.generate()?;
let phrase = format!("recovery phrase: {}", brain.phrase());
(keypair, Some(phrase))
} else {
(Random.generate(), None)
}
} else if args.cmd_prefix {
let prefix = args.arg_prefix.from_hex()?;
let brain = args.flag_brain;
in_threads(move || {
let iterations = 1024;
let prefix = prefix.clone();
move || {
let prefix = prefix.clone();
let res = if brain {
let mut brain = BrainPrefix::new(prefix, iterations, BRAIN_WORDS);
let result = brain.generate();
let phrase = format!("recovery phrase: {}", brain.phrase());
result.map(|keypair| (keypair, Some(phrase)))
} else {
let result = Prefix::new(prefix, iterations).generate();
result.map(|res| (res, None))
};
Ok(res.map(Some).unwrap_or(None))
}
})?
} else {
return Ok(format!("{}", USAGE));
};
Ok(display(result, display_mode))
} else if args.cmd_sign {
let secret = args
.arg_secret
.parse()
.map_err(|_| EthkeyError::InvalidSecretKey)?;
let message = args
.arg_message
.parse()
.map_err(|_| EthkeyError::InvalidMessage)?;
let signature = sign(&secret, &message)?;
Ok(format!("{}", signature))
} else if args.cmd_verify {
let signature = args
.arg_signature
.parse()
.map_err(|_| EthkeyError::InvalidSignature)?;
let message = args
.arg_message
.parse()
.map_err(|_| EthkeyError::InvalidMessage)?;
let ok = if args.cmd_public {
let public = args
.arg_public
.parse()
.map_err(|_| EthkeyError::InvalidPublicKey)?;
verify_public(&public, &signature, &message)?
} else if args.cmd_address {
let address = args
.arg_address
.parse()
.map_err(|_| EthkeyError::InvalidAddress)?;
verify_address(&address, &signature, &message)?
} else {
return Ok(format!("{}", USAGE));
};
Ok(format!("{}", ok))
} else if args.cmd_recover {
let display_mode = DisplayMode::new(&args);
let known_phrase = args.arg_known_phrase;
let address = args
.arg_address
.parse()
.map_err(|_| EthkeyError::InvalidAddress)?;
let (phrase, keypair) = in_threads(move || {
let mut it =
brain_recover::PhrasesIterator::from_known_phrase(&known_phrase, BRAIN_WORDS);
move || {
let mut i = 0;
while let Some(phrase) = it.next() {
i += 1;
let keypair = Brain::new(phrase.clone()).generate();
if keypair.address() == address {
return Ok(Some((phrase, keypair)));
}
if i >= 1024 {
return Ok(None);
}
}
Err(EthkeyError::Custom("Couldn't find any results.".into()))
}
})?;
Ok(display((keypair, Some(phrase)), display_mode))
} else {
Ok(format!("{}", USAGE))
};
}
const BRAIN_WORDS: usize = 12;
fn validate_phrase(phrase: &str) -> String {
match Brain::validate_phrase(phrase, BRAIN_WORDS) {
Ok(()) => format!("The recovery phrase looks correct.\n"),
Err(err) => format!("The recover phrase was not generated by Parity: {}", err),
}
}
fn in_threads<F, X, O>(prepare: F) -> Result<O, EthkeyError>
where
O: Send + 'static,
X: Send + 'static,
F: Fn() -> X,
X: FnMut() -> Result<Option<O>, EthkeyError>,
{
let pool = threadpool::Builder::new().build();
let (tx, rx) = sync::mpsc::sync_channel(1);
let is_done = sync::Arc::new(sync::atomic::AtomicBool::default());
for _ in 0..pool.max_count() {
let is_done = is_done.clone();
let tx = tx.clone();
let mut task = prepare();
pool.execute(move || {
loop {
if is_done.load(sync::atomic::Ordering::SeqCst) {
return;
}
let res = match task() {
Ok(None) => continue,
Ok(Some(v)) => Ok(v),
Err(err) => Err(err),
};
// We are interested only in the first response.
let _ = tx.send(res);
}
});
}
if let Ok(solution) = rx.recv() {
is_done.store(true, sync::atomic::Ordering::SeqCst);
return solution;
}
Err(EthkeyError::Custom("No results found.".into()))
}
#[cfg(test)]
mod tests {
use super::execute;
#[test]
fn info() {
let command = vec![
"ethkey",
"info",
"17d08f5fe8c77af811caa0c9a187e668ce3b74a99acc3f6d976f075fa8e0be55",
]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected =
"secret: 17d08f5fe8c77af811caa0c9a187e668ce3b74a99acc3f6d976f075fa8e0be55
public: 689268c0ff57a20cd299fa60d3fb374862aff565b20b5f1767906a99e6e09f3ff04ca2b2a5cd22f62941db103c0356df1a8ed20ce322cab2483db67685afd124
address: 26d1ec50b4e62c1d1a40d16e7cacc6a6580757d5".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn brain() {
let command = vec!["ethkey", "info", "--brain", "this is sparta"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected =
"The recover phrase was not generated by Parity: The word 'this' does not come from the dictionary.
secret: aa22b54c0cb43ee30a014afe5ef3664b1cde299feabca46cd3167a85a57c39f2
public: c4c5398da6843632c123f543d714d2d2277716c11ff612b2a2f23c6bda4d6f0327c31cd58c55a9572c3cc141dade0c32747a13b7ef34c241b26c84adbb28fcf4
address: 006e27b6a72e1f34c626762f3c4761547aff1421".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn secret() {
let command = vec!["ethkey", "info", "--brain", "this is sparta", "--secret"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected =
"aa22b54c0cb43ee30a014afe5ef3664b1cde299feabca46cd3167a85a57c39f2".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn public() {
let command = vec!["ethkey", "info", "--brain", "this is sparta", "--public"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "c4c5398da6843632c123f543d714d2d2277716c11ff612b2a2f23c6bda4d6f0327c31cd58c55a9572c3cc141dade0c32747a13b7ef34c241b26c84adbb28fcf4".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn address() {
let command = vec!["ethkey", "info", "-b", "this is sparta", "--address"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "006e27b6a72e1f34c626762f3c4761547aff1421".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn sign() {
let command = vec![
"ethkey",
"sign",
"17d08f5fe8c77af811caa0c9a187e668ce3b74a99acc3f6d976f075fa8e0be55",
"bd50b7370c3f96733b31744c6c45079e7ae6c8d299613246d28ebcef507ec987",
]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "c1878cf60417151c766a712653d26ef350c8c75393458b7a9be715f053215af63dfd3b02c2ae65a8677917a8efa3172acb71cb90196e42106953ea0363c5aaf200".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn verify_valid_public() {
let command = vec!["ethkey", "verify", "public", "689268c0ff57a20cd299fa60d3fb374862aff565b20b5f1767906a99e6e09f3ff04ca2b2a5cd22f62941db103c0356df1a8ed20ce322cab2483db67685afd124", "c1878cf60417151c766a712653d26ef350c8c75393458b7a9be715f053215af63dfd3b02c2ae65a8677917a8efa3172acb71cb90196e42106953ea0363c5aaf200", "bd50b7370c3f96733b31744c6c45079e7ae6c8d299613246d28ebcef507ec987"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "true".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn verify_valid_address() {
let command = vec!["ethkey", "verify", "address", "26d1ec50b4e62c1d1a40d16e7cacc6a6580757d5", "c1878cf60417151c766a712653d26ef350c8c75393458b7a9be715f053215af63dfd3b02c2ae65a8677917a8efa3172acb71cb90196e42106953ea0363c5aaf200", "bd50b7370c3f96733b31744c6c45079e7ae6c8d299613246d28ebcef507ec987"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "true".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn verify_invalid() {
let command = vec!["ethkey", "verify", "public", "689268c0ff57a20cd299fa60d3fb374862aff565b20b5f1767906a99e6e09f3ff04ca2b2a5cd22f62941db103c0356df1a8ed20ce322cab2483db67685afd124", "c1878cf60417151c766a712653d26ef350c8c75393458b7a9be715f053215af63dfd3b02c2ae65a8677917a8efa3172acb71cb90196e42106953ea0363c5aaf200", "bd50b7370c3f96733b31744c6c45079e7ae6c8d299613246d28ebcef507ec986"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "false".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
}


@ -1,25 +0,0 @@
[package]
description = "Parity Ethereum Key Management CLI"
name = "ethstore-cli"
version = "0.1.1"
authors = ["Parity Technologies <admin@parity.io>"]
[dependencies]
docopt = "1.0"
env_logger = "0.5"
num_cpus = "1.6"
rustc-hex = "1.0"
serde = "1.0"
serde_derive = "1.0"
parking_lot = "0.11.1"
ethstore = { path = "../../crates/accounts/ethstore" }
dir = { path = '../../crates/util/dir' }
panic_hook = { path = "../../crates/util/panic-hook" }
[[bin]]
name = "ethstore"
path = "src/main.rs"
doc = false
[dev-dependencies]
tempdir = "0.3.5"


@ -1,66 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use parking_lot::Mutex;
use std::{cmp, collections::VecDeque, sync::Arc, thread};
use ethstore::{ethkey::Password, Error, PresaleWallet};
use num_cpus;
pub fn run(passwords: VecDeque<Password>, wallet_path: &str) -> Result<(), Error> {
let passwords = Arc::new(Mutex::new(passwords));
let mut handles = Vec::new();
for _ in 0..num_cpus::get() {
let passwords = passwords.clone();
let wallet = PresaleWallet::open(&wallet_path)?;
handles.push(thread::spawn(move || {
look_for_password(passwords, wallet);
}));
}
for handle in handles {
handle
.join()
.map_err(|err| Error::Custom(format!("Error finishing thread: {:?}", err)))?;
}
Ok(())
}
fn look_for_password(passwords: Arc<Mutex<VecDeque<Password>>>, wallet: PresaleWallet) {
let mut counter = 0;
while !passwords.lock().is_empty() {
let package = {
let mut passwords = passwords.lock();
let len = passwords.len();
passwords.split_off(cmp::min(len, 32))
};
for pass in package {
counter += 1;
match wallet.decrypt(&pass) {
Ok(_) => {
println!("Found password: {}", pass.as_str());
passwords.lock().clear();
return;
}
_ if counter % 100 == 0 => print!("."),
_ => {}
}
}
}
}


@ -1,363 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
extern crate dir;
extern crate docopt;
extern crate ethstore;
extern crate num_cpus;
extern crate panic_hook;
extern crate parking_lot;
extern crate rustc_hex;
extern crate serde;
extern crate env_logger;
#[macro_use]
extern crate serde_derive;
use std::{collections::VecDeque, env, fmt, fs, io::Read, process};
use docopt::Docopt;
use ethstore::{
accounts_dir::{KeyDirectory, RootDiskDirectory},
ethkey::{Address, Password},
import_accounts, EthStore, PresaleWallet, SecretStore, SecretVaultRef, SimpleSecretStore,
StoreAccountRef,
};
mod crack;
pub const USAGE: &'static str = r#"
Parity Ethereum key management tool.
Copyright 2015-2019 Parity Technologies (UK) Ltd.
Usage:
ethstore insert <secret> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore change-pwd <address> <old-pwd> <new-pwd> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore list [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore import [<password>] [--src DIR] [--dir DIR]
ethstore import-wallet <path> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore find-wallet-pass <path> <password>
ethstore remove <address> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore sign <address> <password> <message> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore public <address> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore list-vaults [--dir DIR]
ethstore create-vault <vault> <password> [--dir DIR]
ethstore change-vault-pwd <vault> <old-pwd> <new-pwd> [--dir DIR]
ethstore move-to-vault <address> <vault> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore move-from-vault <address> <vault> <password> [--dir DIR]
ethstore [-h | --help]
Options:
-h, --help Display this message and exit.
--dir DIR Specify the secret store directory. It may be either
parity, parity-(chain), geth, geth-test
or a path [default: parity].
--vault VAULT Specify vault to use in this operation.
--vault-pwd VAULTPWD Specify vault password to use in this operation. Please note
that this option is required when vault option is set.
Otherwise it is ignored.
--src DIR Specify import source. It may be either
parity, parity-(chain), geth, geth-test
or a path [default: geth].
Commands:
insert Save account with password.
change-pwd Change password.
list List accounts.
import Import accounts from src.
import-wallet Import presale wallet.
find-wallet-pass Tries to open a wallet with list of passwords given.
remove Remove account.
sign Sign message.
public Displays public key for an address.
list-vaults List vaults.
create-vault Create new vault.
change-vault-pwd Change vault password.
move-to-vault Move account to vault from another vault/root directory.
move-from-vault Move account to root directory from given vault.
"#;
#[derive(Debug, Deserialize)]
struct Args {
cmd_insert: bool,
cmd_change_pwd: bool,
cmd_list: bool,
cmd_import: bool,
cmd_import_wallet: bool,
cmd_find_wallet_pass: bool,
cmd_remove: bool,
cmd_sign: bool,
cmd_public: bool,
cmd_list_vaults: bool,
cmd_create_vault: bool,
cmd_change_vault_pwd: bool,
cmd_move_to_vault: bool,
cmd_move_from_vault: bool,
arg_secret: String,
arg_password: String,
arg_old_pwd: String,
arg_new_pwd: String,
arg_address: String,
arg_message: String,
arg_path: String,
arg_vault: String,
flag_src: String,
flag_dir: String,
flag_vault: String,
flag_vault_pwd: String,
}
enum Error {
Ethstore(ethstore::Error),
Docopt(docopt::Error),
}
impl From<ethstore::Error> for Error {
fn from(err: ethstore::Error) -> Self {
Error::Ethstore(err)
}
}
impl From<docopt::Error> for Error {
fn from(err: docopt::Error) -> Self {
Error::Docopt(err)
}
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
Error::Ethstore(ref err) => fmt::Display::fmt(err, f),
Error::Docopt(ref err) => fmt::Display::fmt(err, f),
}
}
}
fn main() {
panic_hook::set_abort();
if env::var("RUST_LOG").is_err() {
env::set_var("RUST_LOG", "warn")
}
env_logger::try_init().expect("Logger initialized only once.");
match execute(env::args()) {
Ok(result) => println!("{}", result),
Err(Error::Docopt(ref e)) => e.exit(),
Err(err) => {
eprintln!("{}", err);
process::exit(1);
}
}
}
fn key_dir(location: &str, password: Option<Password>) -> Result<Box<dyn KeyDirectory>, Error> {
let dir: RootDiskDirectory = match location {
path if path.starts_with("parity") => {
let chain = path.split('-').nth(1).unwrap_or("ethereum");
let mut path = dir::default_data_pathbuf();
path.push("keys");
path.push(chain);
RootDiskDirectory::create(path)?
}
path => RootDiskDirectory::create(path)?,
};
Ok(Box::new(dir.with_password(password)))
}
fn open_args_vault(store: &EthStore, args: &Args) -> Result<SecretVaultRef, Error> {
if args.flag_vault.is_empty() {
return Ok(SecretVaultRef::Root);
}
let vault_pwd = load_password(&args.flag_vault_pwd)?;
store.open_vault(&args.flag_vault, &vault_pwd)?;
Ok(SecretVaultRef::Vault(args.flag_vault.clone()))
}
fn open_args_vault_account(
store: &EthStore,
address: Address,
args: &Args,
) -> Result<StoreAccountRef, Error> {
match open_args_vault(store, args)? {
SecretVaultRef::Root => Ok(StoreAccountRef::root(address)),
SecretVaultRef::Vault(name) => Ok(StoreAccountRef::vault(&name, address)),
}
}
fn format_accounts(accounts: &[Address]) -> String {
accounts
.iter()
.enumerate()
.map(|(i, a)| format!("{:2}: 0x{:x}", i, a))
.collect::<Vec<String>>()
.join("\n")
}
fn format_vaults(vaults: &[String]) -> String {
vaults.join("\n")
}
fn load_password(path: &str) -> Result<Password, Error> {
let mut file = fs::File::open(path).map_err(|e| {
ethstore::Error::Custom(format!("Error opening password file '{}': {}", path, e))
})?;
let mut password = String::new();
file.read_to_string(&mut password).map_err(|e| {
ethstore::Error::Custom(format!("Error reading password file '{}': {}", path, e))
})?;
// drop EOF
let _ = password.pop();
Ok(password.into())
}
fn execute<S, I>(command: I) -> Result<String, Error>
where
I: IntoIterator<Item = S>,
S: AsRef<str>,
{
let args: Args = Docopt::new(USAGE).and_then(|d| d.argv(command).deserialize())?;
let store = EthStore::open(key_dir(&args.flag_dir, None)?)?;
return if args.cmd_insert {
let secret = args
.arg_secret
.parse()
.map_err(|_| ethstore::Error::InvalidSecret)?;
let password = load_password(&args.arg_password)?;
let vault_ref = open_args_vault(&store, &args)?;
let account_ref = store.insert_account(vault_ref, secret, &password)?;
Ok(format!("0x{:x}", account_ref.address))
} else if args.cmd_change_pwd {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let old_pwd = load_password(&args.arg_old_pwd)?;
let new_pwd = load_password(&args.arg_new_pwd)?;
let account_ref = open_args_vault_account(&store, address, &args)?;
let ok = store
.change_password(&account_ref, &old_pwd, &new_pwd)
.is_ok();
Ok(format!("{}", ok))
} else if args.cmd_list {
let vault_ref = open_args_vault(&store, &args)?;
let accounts = store.accounts()?;
let accounts: Vec<_> = accounts
.into_iter()
.filter(|a| &a.vault == &vault_ref)
.map(|a| a.address)
.collect();
Ok(format_accounts(&accounts))
} else if args.cmd_import {
let password = match args.arg_password.as_ref() {
"" => None,
_ => Some(load_password(&args.arg_password)?),
};
let src = key_dir(&args.flag_src, password)?;
let dst = key_dir(&args.flag_dir, None)?;
let accounts = import_accounts(&*src, &*dst)?;
Ok(format_accounts(&accounts))
} else if args.cmd_import_wallet {
let wallet = PresaleWallet::open(&args.arg_path)?;
let password = load_password(&args.arg_password)?;
let kp = wallet.decrypt(&password)?;
let vault_ref = open_args_vault(&store, &args)?;
let account_ref = store.insert_account(vault_ref, kp.secret().clone(), &password)?;
Ok(format!("0x{:x}", account_ref.address))
} else if args.cmd_find_wallet_pass {
let passwords = load_password(&args.arg_password)?;
let passwords = passwords
.as_str()
.lines()
.map(|line| str::to_owned(line).into())
.collect::<VecDeque<_>>();
crack::run(passwords, &args.arg_path)?;
Ok(format!("Password not found."))
} else if args.cmd_remove {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let password = load_password(&args.arg_password)?;
let account_ref = open_args_vault_account(&store, address, &args)?;
let ok = store.remove_account(&account_ref, &password).is_ok();
Ok(format!("{}", ok))
} else if args.cmd_sign {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let message = args
.arg_message
.parse()
.map_err(|_| ethstore::Error::InvalidMessage)?;
let password = load_password(&args.arg_password)?;
let account_ref = open_args_vault_account(&store, address, &args)?;
let signature = store.sign(&account_ref, &password, &message)?;
Ok(format!("0x{}", signature))
} else if args.cmd_public {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let password = load_password(&args.arg_password)?;
let account_ref = open_args_vault_account(&store, address, &args)?;
let public = store.public(&account_ref, &password)?;
Ok(format!("0x{:x}", public))
} else if args.cmd_list_vaults {
let vaults = store.list_vaults()?;
Ok(format_vaults(&vaults))
} else if args.cmd_create_vault {
let password = load_password(&args.arg_password)?;
store.create_vault(&args.arg_vault, &password)?;
Ok("OK".to_owned())
} else if args.cmd_change_vault_pwd {
let old_pwd = load_password(&args.arg_old_pwd)?;
let new_pwd = load_password(&args.arg_new_pwd)?;
store.open_vault(&args.arg_vault, &old_pwd)?;
store.change_vault_password(&args.arg_vault, &new_pwd)?;
Ok("OK".to_owned())
} else if args.cmd_move_to_vault {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let password = load_password(&args.arg_password)?;
let account_ref = open_args_vault_account(&store, address, &args)?;
store.open_vault(&args.arg_vault, &password)?;
store.change_account_vault(SecretVaultRef::Vault(args.arg_vault), account_ref)?;
Ok("OK".to_owned())
} else if args.cmd_move_from_vault {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let password = load_password(&args.arg_password)?;
store.open_vault(&args.arg_vault, &password)?;
store.change_account_vault(
SecretVaultRef::Root,
StoreAccountRef::vault(&args.arg_vault, address),
)?;
Ok("OK".to_owned())
} else {
Ok(format!("{}", USAGE))
};
}


@ -1,33 +0,0 @@
[package]
description = "Parity EVM Implementation"
name = "evmbin"
version = "0.1.0"
authors = ["Parity Technologies <admin@parity.io>"]
[[bin]]
name = "openethereum-evm"
path = "./src/main.rs"
[dependencies]
common-types = { path = "../../crates/ethcore/types", features = ["test-helpers"] }
docopt = "1.0"
env_logger = "0.5"
ethcore = { path = "../../crates/ethcore", features = ["test-helpers", "json-tests", "to-pod-full"] }
ethereum-types = "0.9.2"
ethjson = { path = "../../crates/ethjson" }
evm = { path = "../../crates/vm/evm" }
panic_hook = { path = "../../crates/util/panic-hook" }
parity-bytes = "0.1"
rustc-hex = "1.0"
serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
vm = { path = "../../crates/vm/vm" }
[dev-dependencies]
criterion = "0.3.0"
pretty_assertions = "0.1"
tempdir = "0.3"
[features]
evm-debug = ["ethcore/evm-debug-tests"]


@ -1,60 +0,0 @@
## evmbin
EVM implementation for OpenEthereum.
### Usage
```
EVM implementation for Parity.
Copyright 2015-2020 Parity Technologies (UK) Ltd.
Usage:
openethereum-evm state-test <file> [--json --std-json --std-dump-json --only NAME --chain CHAIN --std-out-only --std-err-only --omit-storage-output --omit-memory-output]
openethereum-evm stats [options]
openethereum-evm stats-jsontests-vm <file>
openethereum-evm [options]
openethereum-evm [-h | --help]
Commands:
state-test Run a state test from a json file.
stats Execute EVM runtime code and return the statistics.
stats-jsontests-vm Execute standard json-tests format VMTests and return
timing statistics in tsv format.
Transaction options:
--code CODE Contract code as hex (without 0x).
--to ADDRESS Recipient address (without 0x).
--from ADDRESS Sender address (without 0x).
--input DATA Input data as hex (without 0x).
--gas GAS Supplied gas as hex (without 0x).
--gas-price WEI Supplied gas price as hex (without 0x).
State test options:
--chain CHAIN Run only from specific chain name (i.e. one of EIP150, EIP158,
Frontier, Homestead, Byzantium, Constantinople,
ConstantinopleFix, Istanbul, EIP158ToByzantiumAt5, FrontierToHomesteadAt5,
HomesteadToDaoAt5, HomesteadToEIP150At5, Berlin, Yolo3).
--only NAME Runs only a single test matching the name.
General options:
--json Display verbose results in JSON.
--std-json Display results in standardized JSON format.
--std-err-only With --std-json redirect to err output only.
--std-out-only With --std-json redirect to out output only.
--omit-storage-output With --std-json omit storage output.
--omit-memory-output With --std-json omit memory output.
--std-dump-json Display results in standardized JSON format
with additional state dump.
Display result state dump in standardized JSON format.
--chain CHAIN Chain spec file path.
-h, --help Display this message and exit.
```
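A couple of illustrative invocations, using the options listed above (the bytecode, gas value, and fixture path are hypothetical and shown only as a sketch):
```bash
# Execute a small bytecode snippet (PUSH1 01, PUSH1 01, ADD, PUSH1 00, SSTORE, STOP) and print statistics
openethereum-evm stats --code 600160010160005500 --gas 10000

# Run a state test fixture with verbose JSON output
openethereum-evm state-test ./my_state_test.json --json
```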
## OpenEthereum toolchain
_This project is a part of the OpenEthereum toolchain._
- [evmbin](https://github.com/openethereum/openethereum/blob/master/evmbin/) - EVM implementation for OpenEthereum
- [ethabi](https://github.com/openethereum/ethabi) - OpenEthereum function calls encoding.
- [ethstore](https://github.com/openethereum/openethereum/blob/master/accounts/ethstore) - OpenEthereum key management.
- [ethkey](https://github.com/openethereum/openethereum/blob/master/accounts/ethkey) - OpenEthereum keys generator.


@ -1,98 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! benchmarking for EVM
//! should be started with:
//! ```bash
//! cargo bench
//! ```
#[macro_use]
extern crate criterion;
extern crate ethcore;
extern crate ethereum_types;
extern crate evm;
extern crate rustc_hex;
extern crate vm;
use criterion::{black_box, Criterion};
use std::sync::Arc;
use ethereum_types::U256;
use evm::Factory;
use rustc_hex::FromHex;
use vm::{tests::FakeExt, ActionParams, Ext};
criterion_group!(
evmbin,
bench_simple_loop_usize,
bench_simple_loop_u256,
bench_rng_usize,
bench_rng_u256
);
criterion_main!(evmbin);
fn bench_simple_loop_usize(c: &mut Criterion) {
simple_loop(U256::from(::std::usize::MAX), c, "simple_loop_usize")
}
fn bench_simple_loop_u256(c: &mut Criterion) {
simple_loop(!U256::zero(), c, "simple_loop_u256")
}
fn simple_loop(gas: U256, c: &mut Criterion, bench_id: &str) {
let code = black_box(
"606060405260005b620042408112156019575b6001016007565b600081905550600680602b6000396000f3606060405200".from_hex().unwrap()
);
c.bench_function(bench_id, move |b| {
b.iter(|| {
let mut params = ActionParams::default();
params.gas = gas;
params.code = Some(Arc::new(code.clone()));
let mut ext = FakeExt::new();
let evm = Factory::default().create(params, ext.schedule(), ext.depth());
let _ = evm.exec(&mut ext);
})
});
}
fn bench_rng_usize(c: &mut Criterion) {
rng(U256::from(::std::usize::MAX), c, "rng_usize")
}
fn bench_rng_u256(c: &mut Criterion) {
rng(!U256::zero(), c, "rng_u256")
}
fn rng(gas: U256, c: &mut Criterion, bench_id: &str) {
let code = black_box(
"6060604052600360056007600b60005b62004240811215607f5767ffe7649d5eca84179490940267f47ed85c4b9a6379019367f8e5dd9a5c994bba9390930267f91d87e4b8b74e55019267ff97f6f3b29cda529290920267f393ada8dd75c938019167fe8d437c45bb3735830267f47d9a7b5428ffec019150600101600f565b838518831882186000555050505050600680609a6000396000f3606060405200".from_hex().unwrap()
);
c.bench_function(bench_id, move |b| {
b.iter(|| {
let mut params = ActionParams::default();
params.gas = gas;
params.code = Some(Arc::new(code.clone()));
let mut ext = FakeExt::new();
let evm = Factory::default().create(params, ext.schedule(), ext.depth());
let _ = evm.exec(&mut ext);
})
});
}


@ -1,40 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Config used by display informants
#[derive(Default, Copy, Clone, Debug)]
pub struct Config {
omit_storage_output: bool,
omit_memory_output: bool,
}
impl Config {
pub fn new(omit_storage_output: bool, omit_memory_output: bool) -> Config {
Config {
omit_storage_output,
omit_memory_output,
}
}
pub fn omit_storage_output(&self) -> bool {
self.omit_storage_output
}
pub fn omit_memory_output(&self) -> bool {
self.omit_memory_output
}
}


@ -1,425 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! JSON VM output.
use std::{collections::HashMap, mem};
use super::config::Config;
use bytes::ToPretty;
use display;
use ethcore::trace;
use ethereum_types::{BigEndianHash, H256, U256};
use info as vm;
/// JSON formatting informant.
#[derive(Default)]
pub struct Informant {
code: Vec<u8>,
depth: usize,
pc: usize,
instruction: u8,
gas_cost: U256,
gas_used: U256,
mem_written: Option<(usize, usize)>,
store_written: Option<(U256, U256)>,
stack: Vec<U256>,
memory: Vec<u8>,
storage: HashMap<H256, H256>,
traces: Vec<String>,
subtraces: Vec<String>,
subinfos: Vec<Informant>,
subdepth: usize,
unmatched: bool,
config: Config,
}
impl Informant {
pub fn new(config: Config) -> Informant {
let mut def = Informant::default();
def.config = config;
def
}
fn with_informant_in_depth<F: Fn(&mut Informant)>(
informant: &mut Informant,
depth: usize,
f: F,
) {
if depth == 0 {
f(informant);
} else {
Self::with_informant_in_depth(
informant
.subinfos
.last_mut()
.expect("prepare/done_trace are not balanced"),
depth - 1,
f,
);
}
}
fn informant_trace(informant: &Informant, gas_used: U256) -> String {
let memory = if informant.config.omit_memory_output() {
"".to_string()
} else {
format!("0x{}", informant.memory.to_hex())
};
let storage = if informant.config.omit_storage_output() {
None
} else {
Some(&informant.storage)
};
let info = ::evm::Instruction::from_u8(informant.instruction).map(|i| i.info());
json!({
"pc": informant.pc,
"op": informant.instruction,
"opName": info.map(|i| i.name).unwrap_or(""),
"gas": format!("{:#x}", gas_used.saturating_add(informant.gas_cost)),
"gasCost": format!("{:#x}", informant.gas_cost),
"memory": memory,
"stack": informant.stack,
"storage": storage,
"depth": informant.depth,
})
.to_string()
}
}
impl vm::Informant for Informant {
type Sink = Config;
fn before_test(&mut self, name: &str, action: &str) {
println!("{}", json!({"action": action, "test": name}));
}
fn set_gas(&mut self, gas: U256) {
self.gas_used = gas;
}
fn clone_sink(&self) -> Self::Sink {
self.config
}
fn finish(result: vm::RunResult<Self::Output>, config: &mut Self::Sink) {
match result {
Ok(success) => {
for trace in success.traces.unwrap_or_else(Vec::new) {
println!("{}", trace);
}
let success_msg = json!({
"output": format!("0x{}", success.output.to_hex()),
"gasUsed": format!("{:#x}", success.gas_used),
"time": display::as_micros(&success.time),
});
println!("{}", success_msg)
}
Err(failure) => {
if !config.omit_storage_output() {
for trace in failure.traces.unwrap_or_else(Vec::new) {
println!("{}", trace);
}
}
let failure_msg = json!({
"error": &failure.error.to_string(),
"gasUsed": format!("{:#x}", failure.gas_used),
"time": display::as_micros(&failure.time),
});
println!("{}", failure_msg)
}
}
}
}
impl trace::VMTracer for Informant {
type Output = Vec<String>;
fn trace_next_instruction(&mut self, pc: usize, instruction: u8, _current_gas: U256) -> bool {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant| {
informant.pc = pc;
informant.instruction = instruction;
informant.unmatched = true;
});
true
}
fn trace_prepare_execute(
&mut self,
pc: usize,
instruction: u8,
gas_cost: U256,
mem_written: Option<(usize, usize)>,
store_written: Option<(U256, U256)>,
) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant| {
informant.pc = pc;
informant.instruction = instruction;
informant.gas_cost = gas_cost;
informant.mem_written = mem_written;
informant.store_written = store_written;
});
}
fn trace_executed(&mut self, gas_used: U256, stack_push: &[U256], mem: &[u8]) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant| {
let store_diff = informant.store_written.clone();
let info = ::evm::Instruction::from_u8(informant.instruction).map(|i| i.info());
let trace = Self::informant_trace(informant, gas_used);
informant.traces.push(trace);
informant.unmatched = false;
informant.gas_used = gas_used;
let len = informant.stack.len();
let info_args = info.map(|i| i.args).unwrap_or(0);
informant
.stack
.truncate(if len > info_args { len - info_args } else { 0 });
informant.stack.extend_from_slice(stack_push);
// TODO [ToDr] Align memory?
if let Some((pos, size)) = informant.mem_written.clone() {
if informant.memory.len() < (pos + size) {
informant.memory.resize(pos + size, 0);
}
informant.memory[pos..(pos + size)].copy_from_slice(&mem[pos..(pos + size)]);
}
if let Some((pos, val)) = store_diff {
informant.storage.insert(
BigEndianHash::from_uint(&pos),
BigEndianHash::from_uint(&val),
);
}
if !informant.subtraces.is_empty() {
informant
.traces
.extend(mem::replace(&mut informant.subtraces, vec![]));
}
});
}
fn prepare_subtrace(&mut self, code: &[u8]) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant| {
let mut vm = Informant::default();
vm.config = informant.config;
vm.depth = informant.depth + 1;
vm.code = code.to_vec();
vm.gas_used = informant.gas_used;
informant.subinfos.push(vm);
});
self.subdepth += 1;
}
fn done_subtrace(&mut self) {
self.subdepth -= 1;
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant| {
if let Some(subtraces) = informant
.subinfos
.pop()
.expect("prepare/done_subtrace are not balanced")
.drain()
{
informant.subtraces.extend(subtraces);
}
});
}
fn drain(mut self) -> Option<Self::Output> {
if self.unmatched {
// print last line with final state:
self.gas_cost = 0.into();
let gas_used = self.gas_used;
let subdepth = self.subdepth;
Self::with_informant_in_depth(&mut self, subdepth, |informant: &mut Informant| {
let trace = Self::informant_trace(informant, gas_used);
informant.traces.push(trace);
});
} else if !self.subtraces.is_empty() {
self.traces
.extend(mem::replace(&mut self.subtraces, vec![]));
}
Some(self.traces)
}
}
#[cfg(test)]
mod tests {
use super::*;
use info::tests::run_test;
use serde_json;
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[serde(rename_all = "camelCase")]
struct TestTrace {
pc: usize,
#[serde(rename = "op")]
instruction: u8,
op_name: String,
#[serde(rename = "gas")]
gas_used: U256,
gas_cost: U256,
memory: String,
stack: Vec<U256>,
storage: Option<HashMap<H256, H256>>,
depth: usize,
}
fn assert_traces_eq(a: &[String], b: &[String]) {
let mut ita = a.iter();
let mut itb = b.iter();
loop {
match (ita.next(), itb.next()) {
(Some(a), Some(b)) => {
// Compare both without worrying about the order of the fields
let actual: TestTrace = serde_json::from_str(a).unwrap();
let expected: TestTrace = serde_json::from_str(b).unwrap();
assert_eq!(actual, expected);
println!("{}", a);
}
(None, None) => return,
e => {
panic!("Traces mismatch: {:?}", e);
}
}
}
}
fn compare_json(traces: Option<Vec<String>>, expected: &str) {
let expected = expected
.split("\n")
.map(|x| x.trim())
.map(|x| x.to_owned())
.filter(|x| !x.is_empty())
.collect::<Vec<_>>();
assert_traces_eq(&traces.unwrap(), &expected);
}
#[test]
fn should_trace_failure() {
run_test(
Informant::default(),
&compare_json,
"60F8d6",
0xffff,
r#"
{"pc":0,"op":96,"opName":"PUSH1","gas":"0xffff","gasCost":"0x3","memory":"0x","stack":[],"storage":{},"depth":1}
{"pc":2,"op":214,"opName":"","gas":"0xfffc","gasCost":"0x0","memory":"0x","stack":["0xf8"],"storage":{},"depth":1}
"#,
);
run_test(
Informant::default(),
&compare_json,
"F8d6",
0xffff,
r#"
{"pc":0,"op":248,"opName":"","gas":"0xffff","gasCost":"0x0","memory":"0x","stack":[],"storage":{},"depth":1}
"#,
);
run_test(
Informant::default(),
&compare_json,
"5A51",
0xfffff,
r#"
{"depth":1,"gas":"0xfffff","gasCost":"0x2","memory":"0x","op":90,"opName":"GAS","pc":0,"stack":[],"storage":{}}
{"depth":1,"gas":"0xffffd","gasCost":"0x0","memory":"0x","op":81,"opName":"MLOAD","pc":1,"stack":["0xffffd"],"storage":{}}
"#,
);
}
#[test]
fn should_trace_create_correctly() {
run_test(
Informant::default(),
&compare_json,
"32343434345830f138343438323439f0",
0xffff,
r#"
{"pc":0,"op":50,"opName":"ORIGIN","gas":"0xffff","gasCost":"0x2","memory":"0x","stack":[],"storage":{},"depth":1}
{"pc":1,"op":52,"opName":"CALLVALUE","gas":"0xfffd","gasCost":"0x2","memory":"0x","stack":["0x0"],"storage":{},"depth":1}
{"pc":2,"op":52,"opName":"CALLVALUE","gas":"0xfffb","gasCost":"0x2","memory":"0x","stack":["0x0","0x0"],"storage":{},"depth":1}
{"pc":3,"op":52,"opName":"CALLVALUE","gas":"0xfff9","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0"],"storage":{},"depth":1}
{"pc":4,"op":52,"opName":"CALLVALUE","gas":"0xfff7","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0"],"storage":{},"depth":1}
{"pc":5,"op":88,"opName":"PC","gas":"0xfff5","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0"],"storage":{},"depth":1}
{"pc":6,"op":48,"opName":"ADDRESS","gas":"0xfff3","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0","0x5"],"storage":{},"depth":1}
{"pc":7,"op":241,"opName":"CALL","gas":"0xfff1","gasCost":"0x61d0","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0","0x5","0x0"],"storage":{},"depth":1}
{"pc":8,"op":56,"opName":"CODESIZE","gas":"0x9e21","gasCost":"0x2","memory":"0x","stack":["0x1"],"storage":{},"depth":1}
{"pc":9,"op":52,"opName":"CALLVALUE","gas":"0x9e1f","gasCost":"0x2","memory":"0x","stack":["0x1","0x10"],"storage":{},"depth":1}
{"pc":10,"op":52,"opName":"CALLVALUE","gas":"0x9e1d","gasCost":"0x2","memory":"0x","stack":["0x1","0x10","0x0"],"storage":{},"depth":1}
{"pc":11,"op":56,"opName":"CODESIZE","gas":"0x9e1b","gasCost":"0x2","memory":"0x","stack":["0x1","0x10","0x0","0x0"],"storage":{},"depth":1}
{"pc":12,"op":50,"opName":"ORIGIN","gas":"0x9e19","gasCost":"0x2","memory":"0x","stack":["0x1","0x10","0x0","0x0","0x10"],"storage":{},"depth":1}
{"pc":13,"op":52,"opName":"CALLVALUE","gas":"0x9e17","gasCost":"0x2","memory":"0x","stack":["0x1","0x10","0x0","0x0","0x10","0x0"],"storage":{},"depth":1}
{"pc":14,"op":57,"opName":"CODECOPY","gas":"0x9e15","gasCost":"0x9","memory":"0x","stack":["0x1","0x10","0x0","0x0","0x10","0x0","0x0"],"storage":{},"depth":1}
{"pc":15,"op":240,"opName":"CREATE","gas":"0x9e0c","gasCost":"0x9e0c","memory":"0x32343434345830f138343438323439f0","stack":["0x1","0x10","0x0","0x0"],"storage":{},"depth":1}
{"pc":0,"op":50,"opName":"ORIGIN","gas":"0x210c","gasCost":"0x2","memory":"0x","stack":[],"storage":{},"depth":2}
{"pc":1,"op":52,"opName":"CALLVALUE","gas":"0x210a","gasCost":"0x2","memory":"0x","stack":["0x0"],"storage":{},"depth":2}
{"pc":2,"op":52,"opName":"CALLVALUE","gas":"0x2108","gasCost":"0x2","memory":"0x","stack":["0x0","0x0"],"storage":{},"depth":2}
{"pc":3,"op":52,"opName":"CALLVALUE","gas":"0x2106","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0"],"storage":{},"depth":2}
{"pc":4,"op":52,"opName":"CALLVALUE","gas":"0x2104","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0"],"storage":{},"depth":2}
{"pc":5,"op":88,"opName":"PC","gas":"0x2102","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0"],"storage":{},"depth":2}
{"pc":6,"op":48,"opName":"ADDRESS","gas":"0x2100","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0","0x5"],"storage":{},"depth":2}
{"pc":7,"op":241,"opName":"CALL","gas":"0x20fe","gasCost":"0x0","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0","0x5","0xbd770416a3345f91e4b34576cb804a576fa48eb1"],"storage":{},"depth":2}
"#,
);
run_test(
Informant::default(),
&compare_json,
"3260D85554",
0xffff,
r#"
{"pc":0,"op":50,"opName":"ORIGIN","gas":"0xffff","gasCost":"0x2","memory":"0x","stack":[],"storage":{},"depth":1}
{"pc":1,"op":96,"opName":"PUSH1","gas":"0xfffd","gasCost":"0x3","memory":"0x","stack":["0x0"],"storage":{},"depth":1}
{"pc":3,"op":85,"opName":"SSTORE","gas":"0xfffa","gasCost":"0x1388","memory":"0x","stack":["0x0","0xd8"],"storage":{},"depth":1}
{"pc":4,"op":84,"opName":"SLOAD","gas":"0xec72","gasCost":"0x0","memory":"0x","stack":[],"storage":{"0x00000000000000000000000000000000000000000000000000000000000000d8":"0x0000000000000000000000000000000000000000000000000000000000000000"},"depth":1}
"#,
);
}
#[test]
fn should_omit_storage_and_memory_flag() {
// should omit storage
run_test(
Informant::new(Config::new(true, true)),
&compare_json,
"3260D85554",
0xffff,
r#"
{"pc":0,"op":50,"opName":"ORIGIN","gas":"0xffff","gasCost":"0x2","memory":"","stack":[],"storage":null,"depth":1}
{"pc":1,"op":96,"opName":"PUSH1","gas":"0xfffd","gasCost":"0x3","memory":"","stack":["0x0"],"storage":null,"depth":1}
{"pc":3,"op":85,"opName":"SSTORE","gas":"0xfffa","gasCost":"0x1388","memory":"","stack":["0x0","0xd8"],"storage":null,"depth":1}
{"pc":4,"op":84,"opName":"SLOAD","gas":"0xec72","gasCost":"0x0","memory":"","stack":[],"storage":null,"depth":1}
"#,
)
}
}


@ -1,74 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Simple VM output.
use super::config::Config;
use bytes::ToPretty;
use ethcore::trace;
use display;
use info as vm;
/// Simple formatting informant.
#[derive(Default)]
pub struct Informant {
config: Config,
}
impl Informant {
pub fn new(config: Config) -> Informant {
Informant { config }
}
}
impl vm::Informant for Informant {
type Sink = Config;
fn before_test(&mut self, name: &str, action: &str) {
println!("Test: {} ({})", name, action);
}
fn clone_sink(&self) -> Self::Sink {
self.config
}
fn finish(result: vm::RunResult<Self::Output>, _sink: &mut Self::Sink) {
match result {
Ok(success) => {
println!("Output: 0x{}", success.output.to_hex());
println!("Gas used: {:x}", success.gas_used);
println!("Time: {}", display::format_time(&success.time));
}
Err(failure) => {
println!("Error: {}", failure.error);
println!("Time: {}", display::format_time(&failure.time));
}
}
}
}
impl trace::VMTracer for Informant {
type Output = ();
fn prepare_subtrace(&mut self, _code: &[u8]) {
Default::default()
}
fn done_subtrace(&mut self) {}
fn drain(self) -> Option<()> {
None
}
}


@ -1,413 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Standardized JSON VM output.
use std::{collections::HashMap, io};
use super::config::Config;
use bytes::ToPretty;
use display;
use ethcore::{pod_state, trace};
use ethereum_types::{BigEndianHash, H256, U256};
use info as vm;
pub trait Writer: io::Write + Send + Sized {
fn clone(&self) -> Self;
fn default() -> Self;
}
impl Writer for io::Stdout {
fn clone(&self) -> Self {
io::stdout()
}
fn default() -> Self {
io::stdout()
}
}
impl Writer for io::Stderr {
fn clone(&self) -> Self {
io::stderr()
}
fn default() -> Self {
io::stderr()
}
}
/// JSON formatting informant.
pub struct Informant<Trace, Out> {
code: Vec<u8>,
instruction: u8,
depth: usize,
stack: Vec<U256>,
storage: HashMap<H256, H256>,
subinfos: Vec<Informant<Trace, Out>>,
subdepth: usize,
trace_sink: Trace,
out_sink: Out,
config: Config,
}
impl Default for Informant<io::Stderr, io::Stdout> {
fn default() -> Self {
Self::new(io::stderr(), io::stdout(), Config::default())
}
}
impl Informant<io::Stdout, io::Stdout> {
/// std json informant using out only.
pub fn out_only(config: Config) -> Self {
Self::new(io::stdout(), io::stdout(), config)
}
}
impl Informant<io::Stderr, io::Stderr> {
/// std json informant using err only.
pub fn err_only(config: Config) -> Self {
Self::new(io::stderr(), io::stderr(), config)
}
}
impl Informant<io::Stderr, io::Stdout> {
pub fn new_default(config: Config) -> Self {
let mut informant = Self::default();
informant.config = config;
informant
}
}
impl<Trace: Writer, Out: Writer> Informant<Trace, Out> {
pub fn new(trace_sink: Trace, out_sink: Out, config: Config) -> Self {
Informant {
code: Default::default(),
instruction: Default::default(),
depth: Default::default(),
stack: Default::default(),
storage: Default::default(),
subinfos: Default::default(),
subdepth: 0,
trace_sink,
out_sink,
config,
}
}
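// Walks down the chain of nested sub-informants to the one sitting at `depth` and applies `f` to it.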
fn with_informant_in_depth<F: Fn(&mut Informant<Trace, Out>)>(
informant: &mut Informant<Trace, Out>,
depth: usize,
f: F,
) {
if depth == 0 {
f(informant);
} else {
Self::with_informant_in_depth(
informant
.subinfos
.last_mut()
.expect("prepare/done_trace are not balanced"),
depth - 1,
f,
);
}
}
fn dump_state_into(
trace_sink: &mut Trace,
root: H256,
end_state: &Option<pod_state::PodState>,
) {
if let Some(ref end_state) = end_state {
let dump_data = json!({
"root": root,
"accounts": end_state,
});
writeln!(trace_sink, "{}", dump_data).expect("The sink must be writeable.");
}
}
}
impl<Trace: Writer, Out: Writer> vm::Informant for Informant<Trace, Out> {
type Sink = (Trace, Out, Config);
fn before_test(&mut self, name: &str, action: &str) {
let out_data = json!({
"action": action,
"test": name,
});
writeln!(&mut self.out_sink, "{}", out_data).expect("The sink must be writeable.");
}
fn set_gas(&mut self, _gas: U256) {}
fn clone_sink(&self) -> Self::Sink {
(
self.trace_sink.clone(),
self.out_sink.clone(),
self.config.clone(),
)
}
fn finish(
result: vm::RunResult<<Self as trace::VMTracer>::Output>,
(ref mut trace_sink, ref mut out_sink, _): &mut Self::Sink,
) {
match result {
Ok(success) => {
let trace_data = json!({"stateRoot": success.state_root});
writeln!(trace_sink, "{}", trace_data).expect("The sink must be writeable.");
Self::dump_state_into(trace_sink, success.state_root, &success.end_state);
let out_data = json!({
"output": format!("0x{}", success.output.to_hex()),
"gasUsed": format!("{:#x}", success.gas_used),
"time": display::as_micros(&success.time),
});
writeln!(out_sink, "{}", out_data).expect("The sink must be writeable.");
}
Err(failure) => {
let out_data = json!({
"error": &failure.error.to_string(),
"gasUsed": format!("{:#x}", failure.gas_used),
"time": display::as_micros(&failure.time),
});
Self::dump_state_into(trace_sink, failure.state_root, &failure.end_state);
writeln!(out_sink, "{}", out_data).expect("The sink must be writeable.");
}
}
}
}
impl<Trace: Writer, Out: Writer> trace::VMTracer for Informant<Trace, Out> {
type Output = ();
fn trace_next_instruction(&mut self, pc: usize, instruction: u8, current_gas: U256) -> bool {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant<Trace, Out>| {
let storage = if informant.config.omit_storage_output() {
None
} else {
Some(&informant.storage)
};
let info = ::evm::Instruction::from_u8(instruction).map(|i| i.info());
informant.instruction = instruction;
let trace_data = json!({
"pc": pc,
"op": instruction,
"opName": info.map(|i| i.name).unwrap_or(""),
"gas": format!("{:#x}", current_gas),
"stack": informant.stack,
"storage": storage,
"depth": informant.depth,
});
writeln!(&mut informant.trace_sink, "{}", trace_data)
.expect("The sink must be writeable.");
});
true
}
fn trace_prepare_execute(
&mut self,
_pc: usize,
_instruction: u8,
_gas_cost: U256,
_mem_written: Option<(usize, usize)>,
store_written: Option<(U256, U256)>,
) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant<Trace, Out>| {
if let Some((pos, val)) = store_written {
informant.storage.insert(
BigEndianHash::from_uint(&pos),
BigEndianHash::from_uint(&val),
);
}
});
}
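// Mirror the EVM stack: drop the arguments consumed by the last instruction and push the values it produced.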
fn trace_executed(&mut self, _gas_used: U256, stack_push: &[U256], _mem: &[u8]) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant<Trace, Out>| {
let info = ::evm::Instruction::from_u8(informant.instruction).map(|i| i.info());
let len = informant.stack.len();
let info_args = info.map(|i| i.args).unwrap_or(0);
informant
.stack
.truncate(if len > info_args { len - info_args } else { 0 });
informant.stack.extend_from_slice(stack_push);
});
}
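// Every sub-call gets its own informant: `prepare_subtrace` pushes one onto the informant at the current depth, `done_subtrace` pops it again.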
fn prepare_subtrace(&mut self, code: &[u8]) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant<Trace, Out>| {
let mut vm = Informant::new(
informant.trace_sink.clone(),
informant.out_sink.clone(),
informant.config,
);
vm.depth = informant.depth + 1;
vm.code = code.to_vec();
informant.subinfos.push(vm);
});
self.subdepth += 1;
}
fn done_subtrace(&mut self) {
self.subdepth -= 1;
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant<Trace, Out>| {
informant.subinfos.pop();
});
}
fn drain(self) -> Option<Self::Output> {
None
}
}
#[cfg(test)]
pub mod tests {
use super::*;
use info::tests::run_test;
use std::sync::{Arc, Mutex};
#[derive(Debug, Clone, Default)]
pub struct TestWriter(pub Arc<Mutex<Vec<u8>>>);
impl Writer for TestWriter {
fn clone(&self) -> Self {
Clone::clone(self)
}
fn default() -> Self {
Default::default()
}
}
impl io::Write for TestWriter {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.0.lock().unwrap().write(buf)
}
fn flush(&mut self) -> io::Result<()> {
self.0.lock().unwrap().flush()
}
}
pub fn informant(config: Config) -> (Informant<TestWriter, TestWriter>, Arc<Mutex<Vec<u8>>>) {
let trace_writer: TestWriter = Default::default();
let out_writer: TestWriter = Default::default();
let res = trace_writer.0.clone();
(Informant::new(trace_writer, out_writer, config), res)
}
#[test]
fn should_trace_failure() {
let (inf, res) = informant(Config::default());
run_test(
inf,
move |_, expected| {
let bytes = res.lock().unwrap();
assert_eq!(expected, &String::from_utf8_lossy(&**bytes))
},
"60F8d6",
0xffff,
r#"{"depth":1,"gas":"0xffff","op":96,"opName":"PUSH1","pc":0,"stack":[],"storage":{}}
{"depth":1,"gas":"0xfffc","op":214,"opName":"","pc":2,"stack":["0xf8"],"storage":{}}
"#,
);
let (inf, res) = informant(Config::default());
run_test(
inf,
move |_, expected| {
let bytes = res.lock().unwrap();
assert_eq!(expected, &String::from_utf8_lossy(&**bytes))
},
"F8d6",
0xffff,
r#"{"depth":1,"gas":"0xffff","op":248,"opName":"","pc":0,"stack":[],"storage":{}}
"#,
);
}
#[test]
fn should_trace_create_correctly() {
let (informant, res) = informant(Config::default());
run_test(
informant,
move |_, expected| {
let bytes = res.lock().unwrap();
assert_eq!(expected, &String::from_utf8_lossy(&**bytes))
},
"32343434345830f138343438323439f0",
0xffff,
r#"{"depth":1,"gas":"0xffff","op":50,"opName":"ORIGIN","pc":0,"stack":[],"storage":{}}
{"depth":1,"gas":"0xfffd","op":52,"opName":"CALLVALUE","pc":1,"stack":["0x0"],"storage":{}}
{"depth":1,"gas":"0xfffb","op":52,"opName":"CALLVALUE","pc":2,"stack":["0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0xfff9","op":52,"opName":"CALLVALUE","pc":3,"stack":["0x0","0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0xfff7","op":52,"opName":"CALLVALUE","pc":4,"stack":["0x0","0x0","0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0xfff5","op":88,"opName":"PC","pc":5,"stack":["0x0","0x0","0x0","0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0xfff3","op":48,"opName":"ADDRESS","pc":6,"stack":["0x0","0x0","0x0","0x0","0x0","0x5"],"storage":{}}
{"depth":1,"gas":"0xfff1","op":241,"opName":"CALL","pc":7,"stack":["0x0","0x0","0x0","0x0","0x0","0x5","0x0"],"storage":{}}
{"depth":1,"gas":"0x9e21","op":56,"opName":"CODESIZE","pc":8,"stack":["0x1"],"storage":{}}
{"depth":1,"gas":"0x9e1f","op":52,"opName":"CALLVALUE","pc":9,"stack":["0x1","0x10"],"storage":{}}
{"depth":1,"gas":"0x9e1d","op":52,"opName":"CALLVALUE","pc":10,"stack":["0x1","0x10","0x0"],"storage":{}}
{"depth":1,"gas":"0x9e1b","op":56,"opName":"CODESIZE","pc":11,"stack":["0x1","0x10","0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0x9e19","op":50,"opName":"ORIGIN","pc":12,"stack":["0x1","0x10","0x0","0x0","0x10"],"storage":{}}
{"depth":1,"gas":"0x9e17","op":52,"opName":"CALLVALUE","pc":13,"stack":["0x1","0x10","0x0","0x0","0x10","0x0"],"storage":{}}
{"depth":1,"gas":"0x9e15","op":57,"opName":"CODECOPY","pc":14,"stack":["0x1","0x10","0x0","0x0","0x10","0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0x9e0c","op":240,"opName":"CREATE","pc":15,"stack":["0x1","0x10","0x0","0x0"],"storage":{}}
{"depth":2,"gas":"0x210c","op":50,"opName":"ORIGIN","pc":0,"stack":[],"storage":{}}
{"depth":2,"gas":"0x210a","op":52,"opName":"CALLVALUE","pc":1,"stack":["0x0"],"storage":{}}
{"depth":2,"gas":"0x2108","op":52,"opName":"CALLVALUE","pc":2,"stack":["0x0","0x0"],"storage":{}}
{"depth":2,"gas":"0x2106","op":52,"opName":"CALLVALUE","pc":3,"stack":["0x0","0x0","0x0"],"storage":{}}
{"depth":2,"gas":"0x2104","op":52,"opName":"CALLVALUE","pc":4,"stack":["0x0","0x0","0x0","0x0"],"storage":{}}
{"depth":2,"gas":"0x2102","op":88,"opName":"PC","pc":5,"stack":["0x0","0x0","0x0","0x0","0x0"],"storage":{}}
{"depth":2,"gas":"0x2100","op":48,"opName":"ADDRESS","pc":6,"stack":["0x0","0x0","0x0","0x0","0x0","0x5"],"storage":{}}
{"depth":2,"gas":"0x20fe","op":241,"opName":"CALL","pc":7,"stack":["0x0","0x0","0x0","0x0","0x0","0x5","0xbd770416a3345f91e4b34576cb804a576fa48eb1"],"storage":{}}
"#,
)
}
#[test]
fn should_omit_storage_and_memory_flag() {
// should omit storage
let (informant, res) = informant(Config::new(true, true));
run_test(
informant,
move |_, expected| {
let bytes = res.lock().unwrap();
assert_eq!(expected, &String::from_utf8_lossy(&**bytes))
},
"3260D85554",
0xffff,
r#"{"depth":1,"gas":"0xffff","op":50,"opName":"ORIGIN","pc":0,"stack":[],"storage":null}
{"depth":1,"gas":"0xfffd","op":96,"opName":"PUSH1","pc":1,"stack":["0x0"],"storage":null}
{"depth":1,"gas":"0xfffa","op":85,"opName":"SSTORE","pc":3,"stack":["0x0","0xd8"],"storage":null}
{"depth":1,"gas":"0xec72","op":84,"opName":"SLOAD","pc":4,"stack":[],"storage":null}
"#,
)
}
}

View File

@ -1,316 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! VM runner.
use ethcore::{
client::{self, EvmTestClient, EvmTestError, TransactErr, TransactSuccess},
pod_state, spec, state, state_db, trace, TrieSpec,
};
use ethereum_types::{H256, U256};
use ethjson;
use std::time::{Duration, Instant};
use types::transaction;
use vm::ActionParams;
/// VM execution informant
pub trait Informant: trace::VMTracer {
/// Sink to use with finish
type Sink;
/// Display a single run init message
fn before_test(&mut self, test: &str, action: &str);
/// Set initial gas.
fn set_gas(&mut self, _gas: U256) {}
/// Clone sink.
fn clone_sink(&self) -> Self::Sink;
/// Display final result.
fn finish(result: RunResult<Self::Output>, sink: &mut Self::Sink);
}
/// Execution finished correctly
#[derive(Debug)]
pub struct Success<T> {
/// State root
pub state_root: H256,
/// Used gas
pub gas_used: U256,
/// Output as bytes
pub output: Vec<u8>,
/// Time Taken
pub time: Duration,
/// Traces
pub traces: Option<T>,
/// Optional end state dump
pub end_state: Option<pod_state::PodState>,
}
/// Execution failed
#[derive(Debug)]
pub struct Failure<T> {
/// State root
pub state_root: H256,
/// Used gas
pub gas_used: U256,
/// Internal error
pub error: EvmTestError,
/// Duration
pub time: Duration,
/// Traces
pub traces: Option<T>,
/// Optional end state dump
pub end_state: Option<pod_state::PodState>,
}
/// EVM Execution result
pub type RunResult<T> = Result<Success<T>, Failure<T>>;
/// Execute given `ActionParams` and return the result.
pub fn run_action<T: Informant>(
spec: &spec::Spec,
mut params: ActionParams,
mut informant: T,
trie_spec: TrieSpec,
) -> RunResult<T::Output> {
informant.set_gas(params.gas);
// If the code is not overridden from the CLI, use the code from the spec file.
if params.code.is_none() {
if let Some(acc) = spec.genesis_state().get().get(&params.code_address) {
params.code = acc.code.clone().map(::std::sync::Arc::new);
params.code_hash = None;
}
}
run(
spec,
trie_spec,
params.gas,
spec.genesis_state(),
|mut client| {
let result = match client.call(params, &mut trace::NoopTracer, &mut informant) {
Ok(r) => (Ok(r.return_data.to_vec()), Some(r.gas_left)),
Err(err) => (Err(err), None),
};
(result.0, H256::zero(), None, result.1, informant.drain())
},
)
}
/// Execute given Transaction and verify resulting state root.
pub fn run_transaction<T: Informant>(
name: &str,
idx: usize,
spec: &ethjson::spec::ForkSpec,
pre_state: &pod_state::PodState,
post_root: H256,
env_info: &client::EnvInfo,
transaction: transaction::SignedTransaction,
mut informant: T,
trie_spec: TrieSpec,
) {
let spec_name = format!("{:?}", spec).to_lowercase();
let spec = match EvmTestClient::spec_from_json(spec) {
Some(spec) => {
informant.before_test(&format!("{}:{}:{}", name, spec_name, idx), "starting");
spec
}
None => {
informant.before_test(
&format!("{}:{}:{}", name, spec_name, idx),
"skipping because of missing spec",
);
return;
}
};
informant.set_gas(env_info.gas_limit);
let mut sink = informant.clone_sink();
let result = run(
&spec,
trie_spec,
transaction.tx().gas,
pre_state,
|mut client| {
let result = client.transact(env_info, transaction, trace::NoopTracer, informant);
match result {
Ok(TransactSuccess {
state_root,
gas_left,
output,
vm_trace,
end_state,
..
}) => {
if state_root != post_root {
(
Err(EvmTestError::PostCondition(format!(
"State root mismatch (got: {:#x}, expected: {:#x})",
state_root, post_root,
))),
state_root,
end_state,
Some(gas_left),
None,
)
} else {
(Ok(output), state_root, end_state, Some(gas_left), vm_trace)
}
}
Err(TransactErr {
state_root,
error,
end_state,
}) => (
Err(EvmTestError::PostCondition(format!(
"Unexpected execution error: {:?}",
error
))),
state_root,
end_state,
None,
None,
),
}
},
);
T::finish(result, &mut sink)
}
fn dump_state(state: &state::State<state_db::StateDB>) -> Option<pod_state::PodState> {
state.to_pod_full().ok()
}
/// Execute VM with given `ActionParams`
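/// The `run` closure returns `(output, state root, optional end-state dump, optional gas left, optional traces)`.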
pub fn run<'a, F, X>(
spec: &'a spec::Spec,
trie_spec: TrieSpec,
initial_gas: U256,
pre_state: &'a pod_state::PodState,
run: F,
) -> RunResult<X>
where
F: FnOnce(
EvmTestClient,
) -> (
Result<Vec<u8>, EvmTestError>,
H256,
Option<pod_state::PodState>,
Option<U256>,
Option<X>,
),
{
let do_dump = trie_spec == TrieSpec::Fat;
let mut test_client =
EvmTestClient::from_pod_state_with_trie(spec, pre_state.clone(), trie_spec).map_err(
|error| Failure {
gas_used: 0.into(),
error,
time: Duration::from_secs(0),
traces: None,
state_root: H256::default(),
end_state: None,
},
)?;
if do_dump {
test_client.set_dump_state_fn(dump_state);
}
let start = Instant::now();
let result = run(test_client);
let time = start.elapsed();
match result {
(Ok(output), state_root, end_state, gas_left, traces) => Ok(Success {
state_root,
gas_used: gas_left
.map(|gas_left| initial_gas - gas_left)
.unwrap_or(initial_gas),
output,
time,
traces,
end_state,
}),
(Err(error), state_root, end_state, gas_left, traces) => Err(Failure {
gas_used: gas_left
.map(|gas_left| initial_gas - gas_left)
.unwrap_or(initial_gas),
error,
time,
traces,
state_root,
end_state,
}),
}
}
#[cfg(test)]
pub mod tests {
use super::*;
use ethereum_types::Address;
use rustc_hex::FromHex;
use std::sync::Arc;
use tempdir::TempDir;
pub fn run_test<T, I, F>(informant: I, compare: F, code: &str, gas: T, expected: &str)
where
T: Into<U256>,
I: Informant,
F: FnOnce(Option<I::Output>, &str),
{
let mut params = ActionParams::default();
params.code = Some(Arc::new(code.from_hex().unwrap()));
params.gas = gas.into();
let tempdir = TempDir::new("").unwrap();
let spec = ::ethcore::ethereum::new_foundation(&tempdir.path());
let result = run_action(&spec, params, informant, TrieSpec::Secure);
match result {
Ok(Success { traces, .. }) => compare(traces, expected),
Err(Failure { traces, .. }) => compare(traces, expected),
}
}
#[test]
fn should_call_account_from_spec() {
use display::{config::Config, std_json::tests::informant};
let (inf, res) = informant(Config::default());
let mut params = ActionParams::default();
params.code_address = Address::from_low_u64_be(0x20);
params.gas = 0xffff.into();
let spec = ::ethcore::ethereum::load(None, include_bytes!("../res/testchain.json"));
let _result = run_action(&spec, params, inf, TrieSpec::Secure);
assert_eq!(
&String::from_utf8_lossy(&**res.lock().unwrap()),
r#"{"depth":1,"gas":"0xffff","op":98,"opName":"PUSH3","pc":0,"stack":[],"storage":{}}
{"depth":1,"gas":"0xfffc","op":96,"opName":"PUSH1","pc":4,"stack":["0xaaaaaa"],"storage":{}}
{"depth":1,"gas":"0xfff9","op":96,"opName":"PUSH1","pc":6,"stack":["0xaaaaaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xfff6","op":80,"opName":"POP","pc":8,"stack":["0xaaaaaa","0xaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xfff4","op":96,"opName":"PUSH1","pc":9,"stack":["0xaaaaaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xfff1","op":96,"opName":"PUSH1","pc":11,"stack":["0xaaaaaa","0xaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xffee","op":96,"opName":"PUSH1","pc":13,"stack":["0xaaaaaa","0xaa","0xaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xffeb","op":96,"opName":"PUSH1","pc":15,"stack":["0xaaaaaa","0xaa","0xaa","0xaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xffe8","op":96,"opName":"PUSH1","pc":17,"stack":["0xaaaaaa","0xaa","0xaa","0xaa","0xaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xffe5","op":96,"opName":"PUSH1","pc":19,"stack":["0xaaaaaa","0xaa","0xaa","0xaa","0xaa","0xaa","0xaa"],"storage":{}}
"#
);
}
}

View File

@ -1,511 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! OpenEthereum EVM interpreter binary.
#![warn(missing_docs)]
extern crate common_types as types;
extern crate ethcore;
extern crate ethjson;
extern crate rustc_hex;
extern crate serde;
#[macro_use]
extern crate serde_derive;
#[macro_use]
extern crate serde_json;
extern crate docopt;
extern crate env_logger;
extern crate ethereum_types;
extern crate evm;
extern crate panic_hook;
extern crate parity_bytes as bytes;
extern crate vm;
#[cfg(test)]
#[macro_use]
extern crate pretty_assertions;
#[cfg(test)]
extern crate tempdir;
use bytes::Bytes;
use docopt::Docopt;
use ethcore::{json_tests, spec, TrieSpec};
use ethereum_types::{Address, U256};
use ethjson::spec::ForkSpec;
use evm::EnvInfo;
use rustc_hex::FromHex;
use std::{fmt, fs, path::PathBuf, sync::Arc};
use vm::{ActionParams, CallType};
mod display;
mod info;
use info::Informant;
const USAGE: &'static str = r#"
EVM implementation for Parity.
Copyright 2015-2020 Parity Technologies (UK) Ltd.
Usage:
openethereum-evm state-test <file> [--json --std-json --std-dump-json --only NAME --chain CHAIN --std-out-only --std-err-only --omit-storage-output --omit-memory-output]
openethereum-evm stats [options]
openethereum-evm stats-jsontests-vm <file>
openethereum-evm [options]
openethereum-evm [-h | --help]
Commands:
state-test Run a state test from a json file.
stats Execute EVM runtime code and return the statistics.
stats-jsontests-vm Execute standard json-tests format VMTests and return
timing statistics in tsv format.
Transaction options:
--code CODE Contract code as hex (without 0x).
--to ADDRESS Recipient address (without 0x).
--from ADDRESS Sender address (without 0x).
--input DATA Input data as hex (without 0x).
--gas GAS Supplied gas as hex (without 0x).
--gas-price WEI Supplied gas price as hex (without 0x).
State test options:
--chain CHAIN Run only from specific chain name (i.e. one of EIP150, EIP158,
Frontier, Homestead, Byzantium, Constantinople,
ConstantinopleFix, Istanbul, EIP158ToByzantiumAt5, FrontierToHomesteadAt5,
HomesteadToDaoAt5, HomesteadToEIP150At5, Berlin, Yolo3).
--only NAME Runs only a single test matching the name.
General options:
--json Display verbose results in JSON.
--std-json Display results in standardized JSON format.
--std-err-only With --std-json redirect to err output only.
--std-out-only With --std-json redirect to out output only.
--omit-storage-output With --std-json omit storage output.
--omit-memory-output With --std-json omit memory output.
--std-dump-json Display results in standardized JSON format
with an additional dump of the resulting state, also in standardized JSON.
--chain CHAIN Chain spec file path.
-h, --help Display this message and exit.
"#;
fn main() {
panic_hook::set_abort();
env_logger::init();
let args: Args = Docopt::new(USAGE)
.and_then(|d| d.deserialize())
.unwrap_or_else(|e| e.exit());
let config = args.config();
if args.cmd_state_test {
run_state_test(args)
} else if args.cmd_stats_jsontests_vm {
run_stats_jsontests_vm(args)
} else if args.flag_json {
run_call(args, display::json::Informant::new(config))
} else if args.flag_std_dump_json || args.flag_std_json {
if args.flag_std_err_only {
run_call(args, display::std_json::Informant::err_only(config))
} else if args.flag_std_out_only {
run_call(args, display::std_json::Informant::out_only(config))
} else {
run_call(args, display::std_json::Informant::new_default(config))
};
} else {
run_call(args, display::simple::Informant::new(config))
}
}
fn run_stats_jsontests_vm(args: Args) {
use json_tests::HookType;
use std::{
collections::HashMap,
time::{Duration, Instant},
};
let file = args.arg_file.expect("FILE (or PATH) is required");
let mut timings: HashMap<String, (Instant, Option<Duration>)> = HashMap::new();
{
let mut record_time = |name: &str, typ: HookType| match typ {
HookType::OnStart => {
timings.insert(name.to_string(), (Instant::now(), None));
}
HookType::OnStop => {
timings.entry(name.to_string()).and_modify(|v| {
v.1 = Some(v.0.elapsed());
});
}
};
for file_path in json_tests::find_json_files_recursive(&file) {
let json_data = std::fs::read(&file_path).unwrap();
json_tests::json_executive_test(&file_path, &json_data, &mut record_time);
}
}
for (name, v) in timings {
println!(
"{}\t{}",
name,
display::as_micros(&v.1.expect("All hooks are called with OnStop; qed"))
);
}
}
fn run_state_test(args: Args) {
use ethjson::state::test::Test;
let config = args.config();
let file = args.arg_file.expect("FILE is required");
let mut file = match fs::File::open(&file) {
Err(err) => die(format!("Unable to open: {:?}: {}", file, err)),
Ok(file) => file,
};
let state_test = match Test::load(&mut file) {
Err(err) => die(format!("Unable to load the test file: {}", err)),
Ok(test) => test,
};
let only_test = args.flag_only.map(|s| s.to_lowercase());
let only_chain = args.flag_chain.map(|s| s.to_lowercase());
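// Case-insensitive filters taken from --only and --chain.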
for (name, test) in state_test {
if let Some(false) = only_test
.as_ref()
.map(|only_test| &name.to_lowercase() == only_test)
{
continue;
}
let multitransaction = test.transaction;
let env_info: EnvInfo = test.env.into();
let pre = test.pre_state.into();
for (spec, states) in test.post_states {
// Hardcode a base fee for the subset of London tests that omit the base fee field in env.
let mut test_env = env_info.clone();
if spec >= ForkSpec::London {
if test_env.base_fee.is_none() {
test_env.base_fee = Some(0x0a.into());
}
}
if let Some(false) = only_chain
.as_ref()
.map(|only_chain| &format!("{:?}", spec).to_lowercase() == only_chain)
{
continue;
}
for (idx, state) in states.into_iter().enumerate() {
let post_root = state.hash.into();
let transaction = multitransaction.select(&state.indexes);
let trie_spec = if args.flag_std_dump_json {
TrieSpec::Fat
} else {
TrieSpec::Secure
};
if args.flag_json {
info::run_transaction(
&name,
idx,
&spec,
&pre,
post_root,
&test_env,
transaction,
display::json::Informant::new(config),
trie_spec,
)
} else if args.flag_std_dump_json || args.flag_std_json {
if args.flag_std_err_only {
info::run_transaction(
&name,
idx,
&spec,
&pre,
post_root,
&test_env,
transaction,
display::std_json::Informant::err_only(config),
trie_spec,
)
} else if args.flag_std_out_only {
info::run_transaction(
&name,
idx,
&spec,
&pre,
post_root,
&test_env,
transaction,
display::std_json::Informant::out_only(config),
trie_spec,
)
} else {
info::run_transaction(
&name,
idx,
&spec,
&pre,
post_root,
&test_env,
transaction,
display::std_json::Informant::new_default(config),
trie_spec,
)
}
} else {
info::run_transaction(
&name,
idx,
&spec,
&pre,
post_root,
&test_env,
transaction,
display::simple::Informant::new(config),
trie_spec,
)
}
}
}
}
}
fn run_call<T: Informant>(args: Args, informant: T) {
let from = arg(args.from(), "--from");
let to = arg(args.to(), "--to");
let code = arg(args.code(), "--code");
let spec = arg(args.spec(), "--chain");
let gas = arg(args.gas(), "--gas");
let gas_price = arg(args.gas_price(), "--gas-price");
let data = arg(args.data(), "--input");
if code.is_none() && to == Address::default() {
die("Either --code or --to is required.");
}
let mut params = ActionParams::default();
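// When EIP-2929 is active from genesis, enable the access list and pre-warm it with the sender, recipient and builtin addresses.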
if spec.engine.params().eip2929_transition == 0 {
params.access_list.enable();
params.access_list.insert_address(from);
params.access_list.insert_address(to);
for (builtin, _) in spec.engine.builtins() {
params.access_list.insert_address(*builtin);
}
}
params.call_type = if code.is_none() {
CallType::Call
} else {
CallType::None
};
params.code_address = to;
params.address = to;
params.sender = from;
params.origin = from;
params.gas = gas;
params.gas_price = gas_price;
params.code = code.map(Arc::new);
params.data = data;
let mut sink = informant.clone_sink();
let result = if args.flag_std_dump_json {
info::run_action(&spec, params, informant, TrieSpec::Fat)
} else {
info::run_action(&spec, params, informant, TrieSpec::Secure)
};
T::finish(result, &mut sink);
}
#[derive(Debug, Deserialize)]
struct Args {
cmd_stats: bool,
cmd_state_test: bool,
cmd_stats_jsontests_vm: bool,
arg_file: Option<PathBuf>,
flag_only: Option<String>,
flag_from: Option<String>,
flag_to: Option<String>,
flag_code: Option<String>,
flag_gas: Option<String>,
flag_gas_price: Option<String>,
flag_input: Option<String>,
flag_chain: Option<String>,
flag_json: bool,
flag_std_json: bool,
flag_std_dump_json: bool,
flag_std_err_only: bool,
flag_std_out_only: bool,
flag_omit_storage_output: bool,
flag_omit_memory_output: bool,
}
impl Args {
pub fn gas(&self) -> Result<U256, String> {
match self.flag_gas {
Some(ref gas) => gas.parse().map_err(to_string),
None => Ok(U256::from(u64::max_value())),
}
}
pub fn gas_price(&self) -> Result<U256, String> {
match self.flag_gas_price {
Some(ref gas_price) => gas_price.parse().map_err(to_string),
None => Ok(U256::zero()),
}
}
pub fn from(&self) -> Result<Address, String> {
match self.flag_from {
Some(ref from) => from.parse().map_err(to_string),
None => Ok(Address::default()),
}
}
pub fn to(&self) -> Result<Address, String> {
match self.flag_to {
Some(ref to) => to.parse().map_err(to_string),
None => Ok(Address::default()),
}
}
pub fn code(&self) -> Result<Option<Bytes>, String> {
match self.flag_code {
Some(ref code) => code.from_hex().map(Some).map_err(to_string),
None => Ok(None),
}
}
pub fn data(&self) -> Result<Option<Bytes>, String> {
match self.flag_input {
Some(ref input) => input.from_hex().map_err(to_string).map(Some),
None => Ok(None),
}
}
pub fn spec(&self) -> Result<spec::Spec, String> {
Ok(match self.flag_chain {
Some(ref spec_name) => {
let fork_spec: Result<ethjson::spec::ForkSpec, _> =
serde_json::from_str(&format!("{:?}", spec_name));
if let Ok(fork_spec) = fork_spec {
ethcore::client::EvmTestClient::spec_from_json(&fork_spec)
.expect("this forkspec is not defined")
} else {
let file = fs::File::open(spec_name).map_err(|e| format!("{}", e))?;
spec::Spec::load(&::std::env::temp_dir(), file)?
}
}
None => ethcore::ethereum::new_foundation(&::std::env::temp_dir()),
})
}
pub fn config(&self) -> display::config::Config {
display::config::Config::new(self.flag_omit_storage_output, self.flag_omit_memory_output)
}
}
fn arg<T>(v: Result<T, String>, param: &str) -> T {
v.unwrap_or_else(|e| die(format!("Invalid {}: {}", param, e)))
}
fn to_string<T: fmt::Display>(msg: T) -> String {
format!("{}", msg)
}
fn die<T: fmt::Display>(msg: T) -> ! {
println!("{}", msg);
::std::process::exit(-1)
}
#[cfg(test)]
mod tests {
use super::{Args, USAGE};
use docopt::Docopt;
use ethereum_types::Address;
fn run<T: AsRef<str>>(args: &[T]) -> Args {
Docopt::new(USAGE)
.and_then(|d| d.argv(args.into_iter()).deserialize())
.unwrap()
}
#[test]
fn should_parse_all_the_options() {
let args = run(&[
"openethereum-evm",
"--json",
"--std-json",
"--std-dump-json",
"--gas",
"1",
"--gas-price",
"2",
"--from",
"0000000000000000000000000000000000000003",
"--to",
"0000000000000000000000000000000000000004",
"--code",
"05",
"--input",
"06",
"--chain",
"./testfile",
"--std-err-only",
"--std-out-only",
]);
assert_eq!(args.flag_json, true);
assert_eq!(args.flag_std_json, true);
assert_eq!(args.flag_std_dump_json, true);
assert_eq!(args.flag_std_err_only, true);
assert_eq!(args.flag_std_out_only, true);
assert_eq!(args.gas(), Ok(1.into()));
assert_eq!(args.gas_price(), Ok(2.into()));
assert_eq!(args.from(), Ok(Address::from_low_u64_be(3)));
assert_eq!(args.to(), Ok(Address::from_low_u64_be(4)));
assert_eq!(args.code(), Ok(Some(vec![05])));
assert_eq!(args.data(), Ok(Some(vec![06])));
assert_eq!(args.flag_chain, Some("./testfile".to_owned()));
}
#[test]
fn should_parse_state_test_command() {
let args = run(&[
"openethereum-evm",
"state-test",
"./file.json",
"--chain",
"homestead",
"--only=add11",
"--json",
"--std-json",
"--std-dump-json",
]);
assert_eq!(args.cmd_state_test, true);
assert!(args.arg_file.is_some());
assert_eq!(args.flag_json, true);
assert_eq!(args.flag_std_json, true);
assert_eq!(args.flag_std_dump_json, true);
assert_eq!(args.flag_chain, Some("homestead".to_owned()));
assert_eq!(args.flag_only, Some("add11".to_owned()));
}
}

View File

@ -1,141 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crate::params::SpecType;
use std::num::NonZeroU32;
#[derive(Debug, PartialEq)]
pub enum AccountCmd {
New(NewAccount),
List(ListAccounts),
Import(ImportAccounts),
}
#[derive(Debug, PartialEq)]
pub struct ListAccounts {
pub path: String,
pub spec: SpecType,
}
#[derive(Debug, PartialEq)]
pub struct NewAccount {
pub iterations: NonZeroU32,
pub path: String,
pub spec: SpecType,
pub password_file: Option<String>,
}
#[derive(Debug, PartialEq)]
pub struct ImportAccounts {
pub from: Vec<String>,
pub to: String,
pub spec: SpecType,
}
#[cfg(not(feature = "accounts"))]
pub fn execute(_cmd: AccountCmd) -> Result<String, String> {
Err("Account management is deprecated. Please see #9997 for alternatives:\nhttps://github.com/openethereum/openethereum/issues/9997".into())
}
#[cfg(feature = "accounts")]
mod command {
use super::*;
use crate::{
accounts::{AccountProvider, AccountProviderSettings},
helpers::{password_from_file, password_prompt},
};
use ethstore::{accounts_dir::RootDiskDirectory, import_account, import_accounts, EthStore};
use std::path::PathBuf;
pub fn execute(cmd: AccountCmd) -> Result<String, String> {
match cmd {
AccountCmd::New(new_cmd) => new(new_cmd),
AccountCmd::List(list_cmd) => list(list_cmd),
AccountCmd::Import(import_cmd) => import(import_cmd),
}
}
fn keys_dir(path: String, spec: SpecType) -> Result<RootDiskDirectory, String> {
let spec = spec.spec(&::std::env::temp_dir())?;
let mut path = PathBuf::from(&path);
path.push(spec.data_dir);
RootDiskDirectory::create(path).map_err(|e| format!("Could not open keys directory: {}", e))
}
fn secret_store(
dir: Box<RootDiskDirectory>,
iterations: Option<NonZeroU32>,
) -> Result<EthStore, String> {
match iterations {
Some(i) => EthStore::open_with_iterations(dir, i),
_ => EthStore::open(dir),
}
.map_err(|e| format!("Could not open keys store: {}", e))
}
fn new(n: NewAccount) -> Result<String, String> {
let password = match n.password_file {
Some(file) => password_from_file(file)?,
None => password_prompt()?,
};
let dir = Box::new(keys_dir(n.path, n.spec)?);
let secret_store = Box::new(secret_store(dir, Some(n.iterations))?);
let acc_provider = AccountProvider::new(secret_store, AccountProviderSettings::default());
let new_account = acc_provider
.new_account(&password)
.map_err(|e| format!("Could not create new account: {}", e))?;
Ok(format!("0x{:x}", new_account))
}
fn list(list_cmd: ListAccounts) -> Result<String, String> {
let dir = Box::new(keys_dir(list_cmd.path, list_cmd.spec)?);
let secret_store = Box::new(secret_store(dir, None)?);
let acc_provider = AccountProvider::new(secret_store, AccountProviderSettings::default());
let accounts = acc_provider.accounts().map_err(|e| format!("{}", e))?;
let result = accounts
.into_iter()
.map(|a| format!("0x{:x}", a))
.collect::<Vec<String>>()
.join("\n");
Ok(result)
}
fn import(i: ImportAccounts) -> Result<String, String> {
let to = keys_dir(i.to, i.spec)?;
let mut imported = 0;
for path in &i.from {
let path = PathBuf::from(path);
if path.is_dir() {
let from = RootDiskDirectory::at(&path);
imported += import_accounts(&from, &to)
.map_err(|e| format!("Importing accounts from {:?} failed: {}", path, e))?
.len();
} else if path.is_file() {
import_account(&path, &to)
.map_err(|e| format!("Importing account from {:?} failed: {}", path, e))?;
imported += 1;
}
}
Ok(format!("{} account(s) imported", imported))
}
}
#[cfg(feature = "accounts")]
pub use self::command::execute;

View File

@ -1,263 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::sync::Arc;
use crypto::publickey;
use dir::Directories;
use ethereum_types::{Address, H160};
use ethkey::Password;
use crate::params::{AccountsConfig, SpecType};
#[cfg(not(feature = "accounts"))]
mod accounts {
use super::*;
/// Dummy AccountProvider
pub struct AccountProvider;
impl ::ethcore::miner::LocalAccounts for AccountProvider {
fn is_local(&self, _address: &Address) -> bool {
false
}
}
pub fn prepare_account_provider(
_spec: &SpecType,
_dirs: &Directories,
_data_dir: &str,
_cfg: AccountsConfig,
_passwords: &[Password],
) -> Result<AccountProvider, String> {
warn!("Note: Your instance of OpenEthereum is running without account support. Some CLI options are ignored.");
Ok(AccountProvider)
}
pub fn miner_local_accounts(_: Arc<AccountProvider>) -> AccountProvider {
AccountProvider
}
pub fn miner_author(
_spec: &SpecType,
_dirs: &Directories,
_account_provider: &Arc<AccountProvider>,
_engine_signer: Address,
_passwords: &[Password],
) -> Result<Option<::ethcore::miner::Author>, String> {
Ok(None)
}
pub fn accounts_list(
_account_provider: Arc<AccountProvider>,
) -> Arc<dyn Fn() -> Vec<Address> + Send + Sync> {
Arc::new(|| vec![])
}
}
#[cfg(feature = "accounts")]
mod accounts {
use super::*;
use crate::{ethereum_types::H256, upgrade::upgrade_key_location};
use std::str::FromStr;
pub use crate::accounts::AccountProvider;
/// Hint appended to error messages when a password is missing or invalid.
const VERIFY_PASSWORD_HINT: &str = "Make sure valid password is present in files passed using `--password` or in the configuration file.";
/// Initialize account provider
pub fn prepare_account_provider(
spec: &SpecType,
dirs: &Directories,
data_dir: &str,
cfg: AccountsConfig,
passwords: &[Password],
) -> Result<AccountProvider, String> {
use crate::accounts::AccountProviderSettings;
use ethstore::{accounts_dir::RootDiskDirectory, EthStore};
let path = dirs.keys_path(data_dir);
upgrade_key_location(&dirs.legacy_keys_path(cfg.testnet), &path);
let dir = Box::new(
RootDiskDirectory::create(&path)
.map_err(|e| format!("Could not open keys directory: {}", e))?,
);
let account_settings = AccountProviderSettings {
unlock_keep_secret: cfg.enable_fast_unlock,
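// Blacklist the well-known development account on everything except test and dev chains.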
blacklisted_accounts: match *spec {
SpecType::Morden
| SpecType::Ropsten
| SpecType::Kovan
| SpecType::Goerli
| SpecType::Sokol
| SpecType::Dev => vec![],
_ => vec![H160::from_str("00a329c0648769a73afac7f9381e08fb43dbea72")
.expect("the string is valid hex; qed")],
},
};
let ethstore = EthStore::open_with_iterations(dir, cfg.iterations)
.map_err(|e| format!("Could not open keys directory: {}", e))?;
if cfg.refresh_time > 0 {
ethstore.set_refresh_time(::std::time::Duration::from_secs(cfg.refresh_time));
}
let account_provider = AccountProvider::new(Box::new(ethstore), account_settings);
// Add development account if running dev chain:
if let SpecType::Dev = *spec {
insert_dev_account(&account_provider);
}
for a in cfg.unlocked_accounts {
// Check if the account exists
if !account_provider.has_account(a) {
return Err(format!(
"Account {} not found for the current chain. {}",
a,
build_create_account_hint(spec, &dirs.keys)
));
}
// Check if any passwords have been read from the password file(s)
if passwords.is_empty() {
return Err(format!(
"No password found to unlock account {}. {}",
a, VERIFY_PASSWORD_HINT
));
}
if !passwords.iter().any(|p| {
account_provider
.unlock_account_permanently(a, (*p).clone())
.is_ok()
}) {
return Err(format!(
"No valid password to unlock account {}. {}",
a, VERIFY_PASSWORD_HINT
));
}
}
Ok(account_provider)
}
pub struct LocalAccounts(Arc<AccountProvider>);
impl ::ethcore::miner::LocalAccounts for LocalAccounts {
fn is_local(&self, address: &Address) -> bool {
self.0.has_account(*address)
}
}
pub fn miner_local_accounts(account_provider: Arc<AccountProvider>) -> LocalAccounts {
LocalAccounts(account_provider)
}
pub fn miner_author(
spec: &SpecType,
dirs: &Directories,
account_provider: &Arc<AccountProvider>,
engine_signer: Address,
passwords: &[Password],
) -> Result<Option<::ethcore::miner::Author>, String> {
use ethcore::engines::EngineSigner;
// Check if engine signer exists
if !account_provider.has_account(engine_signer) {
return Err(format!(
"Consensus signer account not found for the current chain. {}",
build_create_account_hint(spec, &dirs.keys)
));
}
// Check if any passwords have been read from the password file(s)
if passwords.is_empty() {
return Err(format!(
"No password found for the consensus signer {}. {}",
engine_signer, VERIFY_PASSWORD_HINT
));
}
let mut author = None;
for password in passwords {
let signer = parity_rpc::signer::EngineSigner::new(
account_provider.clone(),
engine_signer,
password.clone(),
);
// Sign a dummy message to check that the password and account can be used.
if signer.sign(H256::from_low_u64_be(1)).is_ok() {
author = Some(::ethcore::miner::Author::Sealer(Box::new(signer)));
}
}
if author.is_none() {
return Err(format!(
"No valid password for the consensus signer {}. {}",
engine_signer, VERIFY_PASSWORD_HINT
));
}
Ok(author)
}
pub fn accounts_list(
account_provider: Arc<AccountProvider>,
) -> Arc<dyn Fn() -> Vec<Address> + Send + Sync> {
Arc::new(move || account_provider.accounts().unwrap_or_default())
}
fn insert_dev_account(account_provider: &AccountProvider) {
let secret = publickey::Secret::from_str(
"4d5db4107d237df6a3d58ee5f70ae63d73d7658d4026f2eefd2f204c81682cb7",
)
.expect("Valid account;qed");
let dev_account = publickey::KeyPair::from_secret(secret.clone())
.expect("Valid secret produces valid key;qed");
if !account_provider.has_account(dev_account.address()) {
match account_provider.insert_account(secret, &Password::from(String::new())) {
Err(e) => warn!("Unable to add development account: {}", e),
Ok(address) => {
let _ = account_provider
.set_account_name(address.clone(), "Development Account".into());
let _ = account_provider.set_account_meta(
address,
::serde_json::to_string(
&(vec![
(
"description",
"Never use this account outside of development chain!",
),
("passwordHint", "Password is empty string"),
]
.into_iter()
.collect::<::std::collections::HashMap<_, _>>()),
)
.expect("Serialization of hashmap does not fail."),
);
}
}
}
}
// Construct an error `String` with an adaptive hint on how to create an account.
fn build_create_account_hint(spec: &SpecType, keys: &str) -> String {
format!("You can create an account via RPC, UI or `openethereum account new --chain {} --keys-path {}`.", spec, keys)
}
}
pub use self::accounts::{
accounts_list, miner_author, miner_local_accounts, prepare_account_provider, AccountProvider,
};

View File

@ -1,551 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{fs, io, sync::Arc, time::Instant};
use crate::{
bytes::ToPretty,
cache::CacheConfig,
db,
hash::{keccak, KECCAK_NULL_RLP},
helpers::{execute_upgrades, to_client_config},
informant::{FullNodeInformantData, Informant, MillisecondDuration},
params::{fatdb_switch_to_bool, tracing_switch_to_bool, Pruning, SpecType, Switch},
types::data_format::DataFormat,
user_defaults::UserDefaults,
};
use ansi_term::Colour;
use dir::Directories;
use ethcore::{
client::{
Balance, BlockChainClient, BlockChainReset, BlockId, DatabaseCompactionProfile,
ImportExportBlocks, Mode, Nonce, VMType,
},
miner::Miner,
verification::queue::VerifierSettings,
};
use ethcore_service::ClientService;
use ethereum_types::{Address, H256, U256};
#[derive(Debug, PartialEq)]
pub enum BlockchainCmd {
Kill(KillBlockchain),
Import(ImportBlockchain),
Export(ExportBlockchain),
ExportState(ExportState),
Reset(ResetBlockchain),
}
#[derive(Debug, PartialEq)]
pub struct ResetBlockchain {
pub dirs: Directories,
pub spec: SpecType,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
pub tracing: Switch,
pub fat_db: Switch,
pub compaction: DatabaseCompactionProfile,
pub cache_config: CacheConfig,
pub num: u32,
}
#[derive(Debug, PartialEq)]
pub struct KillBlockchain {
pub spec: SpecType,
pub dirs: Directories,
pub pruning: Pruning,
}
#[derive(Debug, PartialEq)]
pub struct ImportBlockchain {
pub spec: SpecType,
pub cache_config: CacheConfig,
pub dirs: Directories,
pub file_path: Option<String>,
pub format: Option<DataFormat>,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
pub compaction: DatabaseCompactionProfile,
pub tracing: Switch,
pub fat_db: Switch,
pub vm_type: VMType,
pub check_seal: bool,
pub with_color: bool,
pub verifier_settings: VerifierSettings,
pub max_round_blocks_to_import: usize,
}
#[derive(Debug, PartialEq)]
pub struct ExportBlockchain {
pub spec: SpecType,
pub cache_config: CacheConfig,
pub dirs: Directories,
pub file_path: Option<String>,
pub format: Option<DataFormat>,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
pub compaction: DatabaseCompactionProfile,
pub fat_db: Switch,
pub tracing: Switch,
pub from_block: BlockId,
pub to_block: BlockId,
pub check_seal: bool,
pub max_round_blocks_to_import: usize,
}
#[derive(Debug, PartialEq)]
pub struct ExportState {
pub spec: SpecType,
pub cache_config: CacheConfig,
pub dirs: Directories,
pub file_path: Option<String>,
pub format: Option<DataFormat>,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
pub compaction: DatabaseCompactionProfile,
pub fat_db: Switch,
pub tracing: Switch,
pub at: BlockId,
pub storage: bool,
pub code: bool,
pub min_balance: Option<U256>,
pub max_balance: Option<U256>,
pub max_round_blocks_to_import: usize,
}
pub fn execute(cmd: BlockchainCmd) -> Result<(), String> {
match cmd {
BlockchainCmd::Kill(kill_cmd) => kill_db(kill_cmd),
BlockchainCmd::Import(import_cmd) => execute_import(import_cmd),
BlockchainCmd::Export(export_cmd) => execute_export(export_cmd),
BlockchainCmd::ExportState(export_cmd) => execute_export_state(export_cmd),
BlockchainCmd::Reset(reset_cmd) => execute_reset(reset_cmd),
}
}
fn execute_import(cmd: ImportBlockchain) -> Result<(), String> {
let timer = Instant::now();
// load spec file
let spec = cmd.spec.spec(&cmd.dirs.cache)?;
// load genesis hash
let genesis_hash = spec.genesis_header().hash();
// database paths
let db_dirs = cmd.dirs.database(genesis_hash, None, spec.data_dir.clone());
// user defaults path
let user_defaults_path = db_dirs.user_defaults_path();
// load user defaults
let mut user_defaults = UserDefaults::load(&user_defaults_path)?;
// select pruning algorithm
let algorithm = cmd.pruning.to_algorithm(&user_defaults);
// check if tracing is on
let tracing = tracing_switch_to_bool(cmd.tracing, &user_defaults)?;
// check if fatdb is on
let fat_db = fatdb_switch_to_bool(cmd.fat_db, &user_defaults, algorithm)?;
// prepare client and snapshot paths.
let client_path = db_dirs.client_path(algorithm);
let snapshot_path = db_dirs.snapshot_path();
// execute upgrades
execute_upgrades(&cmd.dirs.base, &db_dirs, algorithm, &cmd.compaction)?;
// create dirs used by parity
cmd.dirs.create_dirs(false, false)?;
// prepare client config
let mut client_config = to_client_config(
&cmd.cache_config,
spec.name.to_lowercase(),
Mode::Active,
tracing,
fat_db,
cmd.compaction,
cmd.vm_type,
"".into(),
algorithm,
cmd.pruning_history,
cmd.pruning_memory,
cmd.check_seal,
12,
);
client_config.queue.verifier_settings = cmd.verifier_settings;
let restoration_db_handler = db::restoration_db_handler(&client_path, &client_config);
let client_db = restoration_db_handler
.open(&client_path)
.map_err(|e| format!("Failed to open database {:?}", e))?;
// build client
let service = ClientService::start(
client_config,
&spec,
client_db,
&snapshot_path,
restoration_db_handler,
&cmd.dirs.ipc_path(),
// TODO [ToDr] don't use test miner here
// (actually don't require miner at all)
Arc::new(Miner::new_for_tests(&spec, None)),
)
.map_err(|e| format!("Client service error: {:?}", e))?;
// free up the spec in memory.
drop(spec);
let client = service.client();
let instream: Box<dyn io::Read> = match cmd.file_path {
Some(f) => {
Box::new(fs::File::open(&f).map_err(|_| format!("Cannot open given file: {}", f))?)
}
None => Box::new(io::stdin()),
};
let informant = Arc::new(Informant::new(
FullNodeInformantData {
client: client.clone(),
sync: None,
net: None,
},
None,
None,
cmd.with_color,
));
service
.register_io_handler(informant)
.map_err(|_| "Unable to register informant handler".to_owned())?;
client.import_blocks(instream, cmd.format)?;
// save user defaults
user_defaults.pruning = algorithm;
user_defaults.tracing = tracing;
user_defaults.fat_db = fat_db;
user_defaults.save(&user_defaults_path)?;
let report = client.report();
let ms = timer.elapsed().as_milliseconds();
info!("Import completed in {} seconds, {} blocks, {} blk/s, {} transactions, {} tx/s, {} Mgas, {} Mgas/s",
ms / 1000,
report.blocks_imported,
(report.blocks_imported * 1000) as u64 / ms,
report.transactions_applied,
(report.transactions_applied * 1000) as u64 / ms,
report.gas_processed / 1_000_000,
(report.gas_processed / (ms * 1000)).low_u64(),
);
Ok(())
}
fn start_client(
dirs: Directories,
spec: SpecType,
pruning: Pruning,
pruning_history: u64,
pruning_memory: usize,
tracing: Switch,
fat_db: Switch,
compaction: DatabaseCompactionProfile,
cache_config: CacheConfig,
require_fat_db: bool,
max_round_blocks_to_import: usize,
) -> Result<ClientService, String> {
// load spec file
let spec = spec.spec(&dirs.cache)?;
// load genesis hash
let genesis_hash = spec.genesis_header().hash();
// database paths
let db_dirs = dirs.database(genesis_hash, None, spec.data_dir.clone());
// user defaults path
let user_defaults_path = db_dirs.user_defaults_path();
// load user defaults
let user_defaults = UserDefaults::load(&user_defaults_path)?;
// select pruning algorithm
let algorithm = pruning.to_algorithm(&user_defaults);
// check if tracing is on
let tracing = tracing_switch_to_bool(tracing, &user_defaults)?;
// check if fatdb is on
let fat_db = fatdb_switch_to_bool(fat_db, &user_defaults, algorithm)?;
if !fat_db && require_fat_db {
return Err("This command requires OpenEthereum to be synced with --fat-db on.".to_owned());
}
// prepare client and snapshot paths.
let client_path = db_dirs.client_path(algorithm);
let snapshot_path = db_dirs.snapshot_path();
// execute upgrades
execute_upgrades(&dirs.base, &db_dirs, algorithm, &compaction)?;
// create dirs used by OpenEthereum.
dirs.create_dirs(false, false)?;
// prepare client config
let client_config = to_client_config(
&cache_config,
spec.name.to_lowercase(),
Mode::Active,
tracing,
fat_db,
compaction,
VMType::default(),
"".into(),
algorithm,
pruning_history,
pruning_memory,
true,
max_round_blocks_to_import,
);
let restoration_db_handler = db::restoration_db_handler(&client_path, &client_config);
let client_db = restoration_db_handler
.open(&client_path)
.map_err(|e| format!("Failed to open database {:?}", e))?;
let service = ClientService::start(
client_config,
&spec,
client_db,
&snapshot_path,
restoration_db_handler,
&dirs.ipc_path(),
// It's fine to use test version here,
// since we don't care about miner parameters at all
Arc::new(Miner::new_for_tests(&spec, None)),
)
.map_err(|e| format!("Client service error: {:?}", e))?;
drop(spec);
Ok(service)
}
fn execute_export(cmd: ExportBlockchain) -> Result<(), String> {
let service = start_client(
cmd.dirs,
cmd.spec,
cmd.pruning,
cmd.pruning_history,
cmd.pruning_memory,
cmd.tracing,
cmd.fat_db,
cmd.compaction,
cmd.cache_config,
false,
cmd.max_round_blocks_to_import,
)?;
let client = service.client();
let out: Box<dyn io::Write> = match cmd.file_path {
Some(f) => Box::new(
fs::File::create(&f).map_err(|_| format!("Cannot write to file given: {}", f))?,
),
None => Box::new(io::stdout()),
};
client.export_blocks(out, cmd.from_block, cmd.to_block, cmd.format)?;
info!("Export completed.");
Ok(())
}
fn execute_export_state(cmd: ExportState) -> Result<(), String> {
let service = start_client(
cmd.dirs,
cmd.spec,
cmd.pruning,
cmd.pruning_history,
cmd.pruning_memory,
cmd.tracing,
cmd.fat_db,
cmd.compaction,
cmd.cache_config,
true,
cmd.max_round_blocks_to_import,
)?;
let client = service.client();
let mut out: Box<dyn io::Write> = match cmd.file_path {
Some(f) => Box::new(
fs::File::create(&f).map_err(|_| format!("Cannot write to file given: {}", f))?,
),
None => Box::new(io::stdout()),
};
let mut last: Option<Address> = None;
let at = cmd.at;
let mut i = 0usize;
out.write_fmt(format_args!("{{ \"state\": {{",))
.expect("Couldn't write to stream.");
loop {
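// Page through accounts in batches of 1000, resuming after the last address seen.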
let accounts = client
.list_accounts(at, last.as_ref(), 1000)
.ok_or("Specified block not found")?;
if accounts.is_empty() {
break;
}
for account in accounts.into_iter() {
let balance = client
.balance(&account, at.into())
.unwrap_or_else(U256::zero);
if cmd.min_balance.map_or(false, |m| balance < m)
|| cmd.max_balance.map_or(false, |m| balance > m)
{
last = Some(account);
continue; //filtered out
}
if i != 0 {
out.write(b",").expect("Write error");
}
out.write_fmt(format_args!(
"\n\"0x{:x}\": {{\"balance\": \"{:x}\", \"nonce\": \"{:x}\"",
account,
balance,
client.nonce(&account, at).unwrap_or_else(U256::zero)
))
.expect("Write error");
let code = client
.code(&account, at.into())
.unwrap_or(None)
.unwrap_or_else(Vec::new);
if !code.is_empty() {
out.write_fmt(format_args!(", \"code_hash\": \"0x{:x}\"", keccak(&code)))
.expect("Write error");
if cmd.code {
out.write_fmt(format_args!(", \"code\": \"{}\"", code.to_hex()))
.expect("Write error");
}
}
let storage_root = client.storage_root(&account, at).unwrap_or(KECCAK_NULL_RLP);
if storage_root != KECCAK_NULL_RLP {
out.write_fmt(format_args!(", \"storage_root\": \"0x{:x}\"", storage_root))
.expect("Write error");
if cmd.storage {
out.write_fmt(format_args!(", \"storage\": {{"))
.expect("Write error");
let mut last_storage: Option<H256> = None;
loop {
let keys = client
.list_storage(at, &account, last_storage.as_ref(), 1000)
.ok_or("Specified block not found")?;
if keys.is_empty() {
break;
}
for key in keys.into_iter() {
if last_storage.is_some() {
out.write(b",").expect("Write error");
}
out.write_fmt(format_args!(
"\n\t\"0x{:x}\": \"0x{:x}\"",
key,
client
.storage_at(&account, &key, at.into())
.unwrap_or_else(Default::default)
))
.expect("Write error");
last_storage = Some(key);
}
}
out.write(b"\n}").expect("Write error");
}
}
out.write(b"}").expect("Write error");
i += 1;
if i % 10000 == 0 {
info!("Account #{}", i);
}
last = Some(account);
}
}
out.write_fmt(format_args!("\n}}}}")).expect("Write error");
info!("Export completed.");
Ok(())
}
fn execute_reset(cmd: ResetBlockchain) -> Result<(), String> {
let service = start_client(
cmd.dirs,
cmd.spec,
cmd.pruning,
cmd.pruning_history,
cmd.pruning_memory,
cmd.tracing,
cmd.fat_db,
cmd.compaction,
cmd.cache_config,
false,
0,
)?;
let client = service.client();
client.reset(cmd.num)?;
info!("{}", Colour::Green.bold().paint("Successfully reset db!"));
Ok(())
}
pub fn kill_db(cmd: KillBlockchain) -> Result<(), String> {
let spec = cmd.spec.spec(&cmd.dirs.cache)?;
let genesis_hash = spec.genesis_header().hash();
let db_dirs = cmd.dirs.database(genesis_hash, None, spec.data_dir);
let user_defaults_path = db_dirs.user_defaults_path();
let mut user_defaults = UserDefaults::load(&user_defaults_path)?;
let algorithm = cmd.pruning.to_algorithm(&user_defaults);
let dir = db_dirs.db_path(algorithm);
fs::remove_dir_all(&dir).map_err(|e| format!("Error removing database: {:?}", e))?;
user_defaults.is_first_launch = true;
user_defaults.save(&user_defaults_path)?;
info!("Database deleted.");
Ok(())
}
#[cfg(test)]
mod test {
use super::DataFormat;
#[test]
fn test_data_format_parsing() {
assert_eq!(DataFormat::Binary, "binary".parse().unwrap());
assert_eq!(DataFormat::Binary, "bin".parse().unwrap());
assert_eq!(DataFormat::Hex, "hex".parse().unwrap());
}
}

View File

@ -1,142 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::cmp::max;
const MIN_BC_CACHE_MB: u32 = 4;
const MIN_DB_CACHE_MB: u32 = 8;
const MIN_BLOCK_QUEUE_SIZE_LIMIT_MB: u32 = 16;
const DEFAULT_DB_CACHE_SIZE: u32 = 128;
const DEFAULT_BC_CACHE_SIZE: u32 = 8;
const DEFAULT_BLOCK_QUEUE_SIZE_LIMIT_MB: u32 = 40;
const DEFAULT_TRACE_CACHE_SIZE: u32 = 20;
const DEFAULT_STATE_CACHE_SIZE: u32 = 25;
/// Configuration for application cache sizes.
/// All values are represented in MB.
#[derive(Debug, PartialEq)]
pub struct CacheConfig {
/// Size of the RocksDB cache. Almost all of it goes to the state column.
db: u32,
/// Size of blockchain cache.
blockchain: u32,
/// Size of transaction queue cache.
queue: u32,
/// Size of traces cache.
traces: u32,
/// Size of the state cache.
state: u32,
}
impl Default for CacheConfig {
fn default() -> Self {
CacheConfig::new(
DEFAULT_DB_CACHE_SIZE,
DEFAULT_BC_CACHE_SIZE,
DEFAULT_BLOCK_QUEUE_SIZE_LIMIT_MB,
DEFAULT_STATE_CACHE_SIZE,
)
}
}
impl CacheConfig {
/// Creates a new cache config with cumulative size equal to `total`.
pub fn new_with_total_cache_size(total: u32) -> Self {
CacheConfig {
db: total * 7 / 10,
blockchain: total / 10,
queue: DEFAULT_BLOCK_QUEUE_SIZE_LIMIT_MB,
traces: DEFAULT_TRACE_CACHE_SIZE,
state: total * 2 / 10,
}
}
/// Creates a new cache config with the given details.
pub fn new(db: u32, blockchain: u32, queue: u32, state: u32) -> Self {
CacheConfig {
db: db,
blockchain: blockchain,
queue: queue,
traces: DEFAULT_TRACE_CACHE_SIZE,
state: state,
}
}
/// Size of db cache.
pub fn db_cache_size(&self) -> u32 {
max(MIN_DB_CACHE_MB, self.db)
}
/// Size of block queue size limit
pub fn queue(&self) -> u32 {
max(self.queue, MIN_BLOCK_QUEUE_SIZE_LIMIT_MB)
}
/// Size of the blockchain cache.
pub fn blockchain(&self) -> u32 {
max(self.blockchain, MIN_BC_CACHE_MB)
}
/// Size of the traces cache.
pub fn traces(&self) -> u32 {
self.traces
}
/// Size of the state cache.
pub fn state(&self) -> u32 {
self.state * 3 / 4
}
/// Size of the jump-tables cache.
pub fn jump_tables(&self) -> u32 {
self.state / 4
}
}
#[cfg(test)]
mod tests {
use super::CacheConfig;
#[test]
fn test_cache_config_constructor() {
let config = CacheConfig::new_with_total_cache_size(200);
assert_eq!(config.db, 140);
assert_eq!(config.blockchain(), 20);
assert_eq!(config.queue(), 40);
assert_eq!(config.state(), 30);
assert_eq!(config.jump_tables(), 10);
}
#[test]
fn test_cache_config_db_cache_sizes() {
let config = CacheConfig::new_with_total_cache_size(400);
assert_eq!(config.db, 280);
assert_eq!(config.db_cache_size(), 280);
}
#[test]
fn test_cache_config_default() {
assert_eq!(
CacheConfig::default(),
CacheConfig::new(
super::DEFAULT_DB_CACHE_SIZE,
super::DEFAULT_BC_CACHE_SIZE,
super::DEFAULT_BLOCK_QUEUE_SIZE_LIMIT_MB,
super::DEFAULT_STATE_CACHE_SIZE
)
);
}
}
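
As a quick illustration of the 7/10 : 1/10 : 2/10 split implemented by `new_with_total_cache_size`, here is a minimal sketch (not part of the original test suite; it assumes it sits next to the tests above, and the 1000 MB budget is an arbitrary example):

```rust
#[test]
fn cache_split_example() {
    // Hypothetical budget of 1000 MB, divided as in new_with_total_cache_size.
    let config = CacheConfig::new_with_total_cache_size(1000);
    assert_eq!(config.db_cache_size(), 700); // 7/10 of the total goes to RocksDB
    assert_eq!(config.blockchain(), 100);    // 1/10 goes to the blockchain cache
    assert_eq!(config.state(), 150);         // 3/4 of the 2/10 state share
    assert_eq!(config.jump_tables(), 50);    // the remaining 1/4 of the state share
    assert_eq!(config.queue(), 40);          // fixed DEFAULT_BLOCK_QUEUE_SIZE_LIMIT_MB
}
```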

File diff suppressed because it is too large.


@@ -1,4 +0,0 @@
[rpc]
interface = "all"
apis = ["all"]
hosts = ["all"]


@@ -1,6 +0,0 @@
OpenEthereum Client.
By Wood/Paronyan/Kotewicz/Drwięga/Volf/Greeff
Habermeier/Czaban/Gotchac/Redman/Nikolsky
Schoedon/Tang/Adolfsson/Silva/Palm/Hirsz et al.
Copyright 2015-2020 Parity Technologies (UK) Ltd.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.

File diff suppressed because it is too large.


@@ -1,83 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Blooms migration from rocksdb to blooms-db
use super::{kvdb_rocksdb::DatabaseConfig, open_database};
use ethcore::error::Error;
use ethereum_types::Bloom;
use rlp;
use std::path::Path;
const LOG_BLOOMS_ELEMENTS_PER_INDEX: u64 = 16;
pub fn migrate_blooms<P: AsRef<Path>>(path: P, config: &DatabaseConfig) -> Result<(), Error> {
// init
let db = open_database(&path.as_ref().to_string_lossy(), config)?;
// possible optimization:
// pre-allocate space on disk for faster migration
// iterate over header blooms and insert them in blooms-db
// Some(3) -> COL_EXTRA
// 3u8 -> ExtrasIndex::BlocksBlooms
// 0u8 -> level 0
let blooms_iterator = db
.key_value()
.iter_from_prefix(Some(3), &[3u8, 0u8])
.filter(|(key, _)| key.len() == 6)
.take_while(|(key, _)| key[0] == 3u8 && key[1] == 0u8)
.map(|(key, group)| {
let index = (key[2] as u64) << 24
| (key[3] as u64) << 16
| (key[4] as u64) << 8
| (key[5] as u64);
let number = index * LOG_BLOOMS_ELEMENTS_PER_INDEX;
let blooms = rlp::decode_list::<Bloom>(&group);
(number, blooms)
});
for (number, blooms) in blooms_iterator {
db.blooms().insert_blooms(number, blooms.iter())?;
}
// iterate over trace blooms and insert them in blooms-db
// Some(4) -> COL_TRACE
// 1u8 -> TraceDBIndex::BloomGroups
// 0u8 -> level 0
let trace_blooms_iterator = db
.key_value()
.iter_from_prefix(Some(4), &[1u8, 0u8])
.filter(|(key, _)| key.len() == 6)
.take_while(|(key, _)| key[0] == 1u8 && key[1] == 0u8)
.map(|(key, group)| {
let index = (key[2] as u64)
| (key[3] as u64) << 8
| (key[4] as u64) << 16
| (key[5] as u64) << 24;
let number = index * LOG_BLOOMS_ELEMENTS_PER_INDEX;
let blooms = rlp::decode_list::<Bloom>(&group);
(number, blooms)
});
for (number, blooms) in trace_blooms_iterator {
db.trace_blooms().insert_blooms(number, blooms.iter())?;
}
Ok(())
}
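
To make the key layout above easier to follow, a small standalone sketch of how a 6-byte header-blooms key maps to the first block number of its bloom group (the constant mirrors the one above; the example key is made up):

```rust
const LOG_BLOOMS_ELEMENTS_PER_INDEX: u64 = 16;

// key[0] = ExtrasIndex::BlocksBlooms (3u8), key[1] = level 0,
// key[2..6] = big-endian group index (the trace blooms variant is little-endian).
fn header_bloom_group_start(key: &[u8; 6]) -> u64 {
    let index = (key[2] as u64) << 24
        | (key[3] as u64) << 16
        | (key[4] as u64) << 8
        | (key[5] as u64);
    index * LOG_BLOOMS_ELEMENTS_PER_INDEX
}

fn main() {
    // Group index 2 covers the 16 blooms starting at block 32.
    assert_eq!(header_bloom_group_start(&[3, 0, 0, 0, 0, 2]), 32);
}
```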


@@ -1,40 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::kvdb_rocksdb::{CompactionProfile, DatabaseConfig};
use ethcore::client::{ClientConfig, DatabaseCompactionProfile};
use ethcore_db::NUM_COLUMNS;
use std::path::Path;
pub fn compaction_profile(
profile: &DatabaseCompactionProfile,
db_path: &Path,
) -> CompactionProfile {
match profile {
&DatabaseCompactionProfile::Auto => CompactionProfile::auto(db_path),
&DatabaseCompactionProfile::SSD => CompactionProfile::ssd(),
&DatabaseCompactionProfile::HDD => CompactionProfile::hdd(),
}
}
pub fn client_db_config(client_path: &Path, client_config: &ClientConfig) -> DatabaseConfig {
let mut client_db_config = DatabaseConfig::with_columns(NUM_COLUMNS);
client_db_config.memory_budget = client_config.db_cache_size;
client_db_config.compaction = compaction_profile(&client_config.db_compaction, &client_path);
client_db_config
}
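
A minimal usage sketch, assuming the imports of this module are in scope and using a hypothetical path (not taken from the original code):

```rust
// Hypothetical caller: derive the RocksDB settings for the client database.
fn example_db_config() -> DatabaseConfig {
    let client_config = ClientConfig::default();
    // The resulting config carries NUM_COLUMNS columns, the configured memory
    // budget and a compaction profile matched to the target disk.
    client_db_config(Path::new("/tmp/openethereum/chains/test/db"), &client_config)
}
```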


@@ -1,262 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::{
kvdb_rocksdb::{CompactionProfile, DatabaseConfig},
migration_rocksdb::{ChangeColumns, Config as MigrationConfig, Manager as MigrationManager},
};
use ethcore::{self, client::DatabaseCompactionProfile};
use std::{
fmt::{Display, Error as FmtError, Formatter},
fs,
io::{Error as IoError, ErrorKind, Read, Write},
path::{Path, PathBuf},
};
use super::{blooms::migrate_blooms, helpers};
/// The migration from v10 to v11.
/// Adds a column for node info.
pub const TO_V11: ChangeColumns = ChangeColumns {
pre_columns: Some(6),
post_columns: Some(7),
version: 11,
};
/// The migration from v11 to v12.
/// Adds a column for light chain storage.
pub const TO_V12: ChangeColumns = ChangeColumns {
pre_columns: Some(7),
post_columns: Some(8),
version: 12,
};
/// The database is assumed to be at the default version when no version file is found.
const DEFAULT_VERSION: u32 = 5;
/// Current version of database models.
const CURRENT_VERSION: u32 = 16;
/// Until this version please use upgrade tool.
const USE_MIGRATION_TOOL: u32 = 15;
/// The database version at which blooms-db was introduced.
const BLOOMS_DB_VERSION: u32 = 13;
/// Defines how many items are migrated to the new version of database at once.
const BATCH_SIZE: usize = 1024;
/// Version file name.
const VERSION_FILE_NAME: &'static str = "db_version";
/// Migration-related errors.
#[derive(Debug)]
pub enum Error {
/// Returned when current version cannot be read or guessed.
UnknownDatabaseVersion,
/// Existing DB is newer than the known one.
FutureDBVersion,
/// Migration is not possible.
MigrationImpossible,
/// For old versions use external migration tool
UseMigrationTool,
/// Blooms-db migration error.
BloomsDB(ethcore::error::Error),
/// Migration was completed successfully,
/// but there was a problem with io.
Io(IoError),
}
impl Display for Error {
fn fmt(&self, f: &mut Formatter) -> Result<(), FmtError> {
let out = match *self {
Error::UnknownDatabaseVersion => "Current database version cannot be read".into(),
Error::FutureDBVersion => "Database was created with newer client version. Upgrade your client or delete DB and resync.".into(),
Error::MigrationImpossible => format!("Database migration to version {} is not possible.", CURRENT_VERSION),
Error::BloomsDB(ref err) => format!("blooms-db migration error: {}", err),
Error::UseMigrationTool => "For db versions 15 and lower (v2.5.13=>13, 2.7.2=>14, v3.0.1=>15) please use upgrade db tool to manually upgrade db: https://github.com/openethereum/3.1-db-upgrade-tool".into(),
Error::Io(ref err) => format!("Unexpected io error on DB migration: {}.", err),
};
write!(f, "{}", out)
}
}
impl From<IoError> for Error {
fn from(err: IoError) -> Self {
Error::Io(err)
}
}
/// Returns the version file path.
fn version_file_path(path: &Path) -> PathBuf {
let mut file_path = path.to_owned();
file_path.push(VERSION_FILE_NAME);
file_path
}
/// Reads current database version from the file at given path.
/// If the file does not exist returns `DEFAULT_VERSION`.
fn current_version(path: &Path) -> Result<u32, Error> {
match fs::File::open(version_file_path(path)) {
Err(ref err) if err.kind() == ErrorKind::NotFound => Ok(DEFAULT_VERSION),
Err(_) => Err(Error::UnknownDatabaseVersion),
Ok(mut file) => {
let mut s = String::new();
file.read_to_string(&mut s)
.map_err(|_| Error::UnknownDatabaseVersion)?;
u32::from_str_radix(&s, 10).map_err(|_| Error::UnknownDatabaseVersion)
}
}
}
/// Writes current database version to the file.
/// Creates a new file if the version file does not exist yet.
fn update_version(path: &Path) -> Result<(), Error> {
fs::create_dir_all(path)?;
let mut file = fs::File::create(version_file_path(path))?;
file.write_all(format!("{}", CURRENT_VERSION).as_bytes())?;
Ok(())
}
/// Consolidated database path
fn consolidated_database_path(path: &Path) -> PathBuf {
let mut state_path = path.to_owned();
state_path.push("db");
state_path
}
/// Database backup
fn backup_database_path(path: &Path) -> PathBuf {
let mut backup_path = path.to_owned();
backup_path.pop();
backup_path.push("temp_backup");
backup_path
}
/// Default migration settings.
pub fn default_migration_settings(compaction_profile: &CompactionProfile) -> MigrationConfig {
MigrationConfig {
batch_size: BATCH_SIZE,
compaction_profile: *compaction_profile,
}
}
/// Migrations on the consolidated database.
fn consolidated_database_migrations(
compaction_profile: &CompactionProfile,
) -> Result<MigrationManager, Error> {
let mut manager = MigrationManager::new(default_migration_settings(compaction_profile));
manager
.add_migration(TO_V11)
.map_err(|_| Error::MigrationImpossible)?;
manager
.add_migration(TO_V12)
.map_err(|_| Error::MigrationImpossible)?;
Ok(manager)
}
/// Migrates database at given position with given migration rules.
fn migrate_database(
version: u32,
db_path: &Path,
mut migrations: MigrationManager,
) -> Result<(), Error> {
// check if migration is needed
if !migrations.is_needed(version) {
return Ok(());
}
let backup_path = backup_database_path(&db_path);
// remove the backup dir if it exists
let _ = fs::remove_dir_all(&backup_path);
// migrate old database to the new one
let temp_path = migrations.execute(&db_path, version)?;
// completely in-place migration leads to the paths being equal.
// in that case, no need to shuffle directories.
if temp_path == db_path {
return Ok(());
}
// create backup
fs::rename(&db_path, &backup_path)?;
// replace the old database with the new one
if let Err(err) = fs::rename(&temp_path, &db_path) {
// if something went wrong, bring back backup
fs::rename(&backup_path, &db_path)?;
return Err(err.into());
}
// remove backup
fs::remove_dir_all(&backup_path).map_err(Into::into)
}
fn exists(path: &Path) -> bool {
fs::metadata(path).is_ok()
}
/// Migrates the database.
pub fn migrate(path: &Path, compaction_profile: &DatabaseCompactionProfile) -> Result<(), Error> {
let compaction_profile = helpers::compaction_profile(&compaction_profile, path);
// read version file.
let version = current_version(path)?;
// migrate the databases.
// main db directory may already exist, so let's check if we have the blocks dir
if version > CURRENT_VERSION {
return Err(Error::FutureDBVersion);
}
// We are in the latest version, yay!
if version == CURRENT_VERSION {
return Ok(());
}
if version != DEFAULT_VERSION && version <= USE_MIGRATION_TOOL {
return Err(Error::UseMigrationTool);
}
let db_path = consolidated_database_path(path);
// Further migrations
if version < CURRENT_VERSION && exists(&db_path) {
println!(
"Migrating database from version {} to {}",
version, CURRENT_VERSION
);
migrate_database(
version,
&db_path,
consolidated_database_migrations(&compaction_profile)?,
)?;
if version < BLOOMS_DB_VERSION {
println!("Migrating blooms to blooms-db...");
let db_config = DatabaseConfig {
max_open_files: 64,
memory_budget: None,
compaction: compaction_profile,
columns: ethcore_db::NUM_COLUMNS,
};
migrate_blooms(&db_path, &db_config).map_err(Error::BloomsDB)?;
}
println!("Migration finished");
}
// update version file.
update_version(path)
}
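
The version gate at the top of `migrate` can be read as a small decision table. A hedged restatement (illustrative only, not part of the original module; the constants are the ones defined above):

```rust
fn describe_migration(version: u32) -> &'static str {
    match version {
        v if v > CURRENT_VERSION => "FutureDBVersion: the database is newer than this client",
        v if v == CURRENT_VERSION => "already current, nothing to do",
        v if v != DEFAULT_VERSION && v <= USE_MIGRATION_TOOL => {
            "UseMigrationTool: run the external upgrade tool first"
        }
        _ => "run the consolidated migrations (plus the blooms migration below version 13)",
    }
}
```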


@@ -1,119 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
extern crate ethcore_blockchain;
extern crate kvdb_rocksdb;
extern crate migration_rocksdb;
use self::{
ethcore_blockchain::{BlockChainDB, BlockChainDBHandler},
kvdb_rocksdb::{Database, DatabaseConfig},
};
use blooms_db;
use ethcore::client::ClientConfig;
use ethcore_db::KeyValueDB;
use stats::PrometheusMetrics;
use std::{fs, io, path::Path, sync::Arc};
mod blooms;
mod helpers;
mod migration;
pub use self::migration::migrate;
struct AppDB {
key_value: Arc<dyn KeyValueDB>,
blooms: blooms_db::Database,
trace_blooms: blooms_db::Database,
}
impl BlockChainDB for AppDB {
fn key_value(&self) -> &Arc<dyn KeyValueDB> {
&self.key_value
}
fn blooms(&self) -> &blooms_db::Database {
&self.blooms
}
fn trace_blooms(&self) -> &blooms_db::Database {
&self.trace_blooms
}
}
impl PrometheusMetrics for AppDB {
fn prometheus_metrics(&self, _: &mut stats::PrometheusRegistry) {}
}
/// Open a secret store DB using the given secret store data path. The DB path is one level beneath the data path.
#[cfg(feature = "secretstore")]
pub fn open_secretstore_db(data_path: &str) -> Result<Arc<dyn KeyValueDB>, String> {
use std::path::PathBuf;
let mut db_path = PathBuf::from(data_path);
db_path.push("db");
let db_path = db_path
.to_str()
.ok_or_else(|| "Invalid secretstore path".to_string())?;
Ok(Arc::new(
Database::open_default(&db_path).map_err(|e| format!("Error opening database: {:?}", e))?,
))
}
/// Create a restoration db handler using the config generated by `client_path` and `client_config`.
pub fn restoration_db_handler(
client_path: &Path,
client_config: &ClientConfig,
) -> Box<dyn BlockChainDBHandler> {
let client_db_config = helpers::client_db_config(client_path, client_config);
struct RestorationDBHandler {
config: DatabaseConfig,
}
impl BlockChainDBHandler for RestorationDBHandler {
fn open(&self, db_path: &Path) -> io::Result<Arc<dyn BlockChainDB>> {
open_database(&db_path.to_string_lossy(), &self.config)
}
}
Box::new(RestorationDBHandler {
config: client_db_config,
})
}
pub fn open_database(
client_path: &str,
config: &DatabaseConfig,
) -> io::Result<Arc<dyn BlockChainDB>> {
let path = Path::new(client_path);
let blooms_path = path.join("blooms");
let trace_blooms_path = path.join("trace_blooms");
fs::create_dir_all(&blooms_path)?;
fs::create_dir_all(&trace_blooms_path)?;
let db = Database::open(&config, client_path)?;
let db_with_metrics = ethcore_db::DatabaseWithMetrics::new(db);
let db = AppDB {
key_value: Arc::new(db_with_metrics),
blooms: blooms_db::Database::open(blooms_path)?,
trace_blooms: blooms_db::Database::open(trace_blooms_path)?,
};
Ok(Arc::new(db))
}
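
A short sketch of calling `open_database` directly, the same way `RestorationDBHandler::open` does above (the path is hypothetical):

```rust
fn example_open() -> io::Result<Arc<dyn BlockChainDB>> {
    // NUM_COLUMNS comes from ethcore_db, as in helpers::client_db_config.
    let config = DatabaseConfig::with_columns(ethcore_db::NUM_COLUMNS);
    open_database("/tmp/openethereum/chains/test/db/overlayrecent/db", &config)
}
```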


@@ -1,600 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crate::{
cache::CacheConfig,
db::migrate,
miner::pool::PrioritizationStrategy,
sync::{self, validate_node_url},
upgrade::{upgrade, upgrade_data_paths},
};
use dir::{helpers::replace_home, DatabaseDirectories};
use ethcore::{
client::{BlockId, ClientConfig, DatabaseCompactionProfile, Mode, VMType, VerifierType},
miner::{Penalization, PendingSet},
};
use ethereum_types::{Address, U256};
use ethkey::Password;
use journaldb::Algorithm;
use std::{
collections::HashSet,
fs::File,
io,
io::{BufRead, BufReader, Write},
time::Duration,
};
pub fn to_duration(s: &str) -> Result<Duration, String> {
to_seconds(s).map(Duration::from_secs)
}
fn clean_0x(s: &str) -> &str {
if s.starts_with("0x") {
&s[2..]
} else {
s
}
}
fn to_seconds(s: &str) -> Result<u64, String> {
let bad = |_| {
format!(
"{}: Invalid duration given. See openethereum --help for more information.",
s
)
};
match s {
"twice-daily" => Ok(12 * 60 * 60),
"half-hourly" => Ok(30 * 60),
"1second" | "1 second" | "second" => Ok(1),
"1minute" | "1 minute" | "minute" => Ok(60),
"hourly" | "1hour" | "1 hour" | "hour" => Ok(60 * 60),
"daily" | "1day" | "1 day" | "day" => Ok(24 * 60 * 60),
x if x.ends_with("seconds") => x[0..x.len() - 7].trim().parse().map_err(bad),
x if x.ends_with("minutes") => x[0..x.len() - 7]
.trim()
.parse::<u64>()
.map_err(bad)
.map(|x| x * 60),
x if x.ends_with("hours") => x[0..x.len() - 5]
.trim()
.parse::<u64>()
.map_err(bad)
.map(|x| x * 60 * 60),
x if x.ends_with("days") => x[0..x.len() - 4]
.trim()
.parse::<u64>()
.map_err(bad)
.map(|x| x * 24 * 60 * 60),
x => x.trim().parse().map_err(bad),
}
}
pub fn to_mode(s: &str, timeout: u64, alarm: u64) -> Result<Mode, String> {
match s {
"active" => Ok(Mode::Active),
"passive" => Ok(Mode::Passive(
Duration::from_secs(timeout),
Duration::from_secs(alarm),
)),
"dark" => Ok(Mode::Dark(Duration::from_secs(timeout))),
"offline" => Ok(Mode::Off),
_ => Err(format!(
"{}: Invalid value for --mode. Must be one of active, passive, dark or offline.",
s
)),
}
}
pub fn to_block_id(s: &str) -> Result<BlockId, String> {
if s == "latest" {
Ok(BlockId::Latest)
} else if let Ok(num) = s.parse() {
Ok(BlockId::Number(num))
} else if let Ok(hash) = s.parse() {
Ok(BlockId::Hash(hash))
} else {
Err("Invalid block.".into())
}
}
pub fn to_u256(s: &str) -> Result<U256, String> {
if let Ok(decimal) = U256::from_dec_str(s) {
Ok(decimal)
} else {
clean_0x(s)
.parse()
.map_err(|_| format!("Invalid numeric value: {}", s))
}
}
pub fn to_pending_set(s: &str) -> Result<PendingSet, String> {
match s {
"cheap" => Ok(PendingSet::AlwaysQueue),
"strict" => Ok(PendingSet::AlwaysSealing),
"lenient" => Ok(PendingSet::SealingOrElseQueue),
other => Err(format!("Invalid pending set value: {:?}", other)),
}
}
pub fn to_queue_strategy(s: &str) -> Result<PrioritizationStrategy, String> {
match s {
"gas_price" => Ok(PrioritizationStrategy::GasPriceOnly),
other => Err(format!("Invalid queue strategy: {}", other)),
}
}
pub fn to_queue_penalization(time: Option<u64>) -> Result<Penalization, String> {
Ok(match time {
Some(threshold_ms) => Penalization::Enabled {
offend_threshold: Duration::from_millis(threshold_ms),
},
None => Penalization::Disabled,
})
}
pub fn to_address(s: Option<String>) -> Result<Address, String> {
match s {
Some(ref a) => clean_0x(a)
.parse()
.map_err(|_| format!("Invalid address: {:?}", a)),
None => Ok(Address::default()),
}
}
pub fn to_addresses(s: &Option<String>) -> Result<Vec<Address>, String> {
match *s {
Some(ref adds) if !adds.is_empty() => adds
.split(',')
.map(|a| {
clean_0x(a)
.parse()
.map_err(|_| format!("Invalid address: {:?}", a))
})
.collect(),
_ => Ok(Vec::new()),
}
}
/// Tries to parse string as a price.
pub fn to_price(s: &str) -> Result<f32, String> {
s.parse::<f32>().map_err(|_| {
format!(
"Invalid transaction price {:?} given. Must be a decimal number.",
s
)
})
}
pub fn join_set(set: Option<&HashSet<String>>) -> Option<String> {
set.map(|s| {
s.iter()
.map(|s| s.as_str())
.collect::<Vec<&str>>()
.join(",")
})
}
/// Flush output buffer.
pub fn flush_stdout() {
io::stdout().flush().expect("stdout is flushable; qed");
}
/// Formats and returns parity ipc path.
pub fn parity_ipc_path(base: &str, path: &str, shift: u16) -> String {
let mut path = path.to_owned();
if shift != 0 {
path = path.replace("jsonrpc.ipc", &format!("jsonrpc-{}.ipc", shift));
}
replace_home(base, &path)
}
/// Validates and formats bootnodes option.
pub fn to_bootnodes(bootnodes: &Option<String>) -> Result<Vec<String>, String> {
match *bootnodes {
Some(ref x) if !x.is_empty() => x
.split(',')
.map(|s| match validate_node_url(s).map(Into::into) {
None => Ok(s.to_owned()),
Some(sync::ErrorKind::AddressResolve(_)) => {
Err(format!("Failed to resolve hostname of a boot node: {}", s))
}
Some(_) => Err(format!(
"Invalid node address format given for a boot node: {}",
s
)),
})
.collect(),
Some(_) => Ok(vec![]),
None => Ok(vec![]),
}
}
#[cfg(test)]
pub fn default_network_config() -> crate::sync::NetworkConfiguration {
use super::network::IpFilter;
use crate::sync::NetworkConfiguration;
NetworkConfiguration {
config_path: Some(replace_home(&::dir::default_data_path(), "$BASE/network")),
net_config_path: None,
listen_address: Some("0.0.0.0:30303".into()),
public_address: None,
udp_port: None,
nat_enabled: true,
discovery_enabled: true,
boot_nodes: Vec::new(),
use_secret: None,
max_peers: 50,
min_peers: 25,
snapshot_peers: 0,
max_pending_peers: 64,
ip_filter: IpFilter::default(),
reserved_nodes: Vec::new(),
allow_non_reserved: true,
client_version: ::parity_version::version(),
}
}
pub fn to_client_config(
cache_config: &CacheConfig,
spec_name: String,
mode: Mode,
tracing: bool,
fat_db: bool,
compaction: DatabaseCompactionProfile,
vm_type: VMType,
name: String,
pruning: Algorithm,
pruning_history: u64,
pruning_memory: usize,
check_seal: bool,
max_round_blocks_to_import: usize,
) -> ClientConfig {
let mut client_config = ClientConfig::default();
let mb = 1024 * 1024;
// in bytes
client_config.blockchain.max_cache_size = cache_config.blockchain() as usize * mb;
// in bytes
client_config.blockchain.pref_cache_size = cache_config.blockchain() as usize * 3 / 4 * mb;
// db cache size, in megabytes
client_config.db_cache_size = Some(cache_config.db_cache_size() as usize);
// db queue cache size, in bytes
client_config.queue.max_mem_use = cache_config.queue() as usize * mb;
// in bytes
client_config.tracing.max_cache_size = cache_config.traces() as usize * mb;
// in bytes
client_config.tracing.pref_cache_size = cache_config.traces() as usize * 3 / 4 * mb;
// in bytes
client_config.state_cache_size = cache_config.state() as usize * mb;
// in bytes
client_config.jump_table_size = cache_config.jump_tables() as usize * mb;
// in bytes
client_config.history_mem = pruning_memory * mb;
client_config.mode = mode;
client_config.tracing.enabled = tracing;
client_config.fat_db = fat_db;
client_config.pruning = pruning;
client_config.history = pruning_history;
client_config.db_compaction = compaction;
client_config.vm_type = vm_type;
client_config.name = name;
client_config.verifier_type = if check_seal {
VerifierType::Canon
} else {
VerifierType::CanonNoSeal
};
client_config.spec_name = spec_name;
client_config.max_round_blocks_to_import = max_round_blocks_to_import;
client_config
}
pub fn execute_upgrades(
base_path: &str,
dirs: &DatabaseDirectories,
pruning: Algorithm,
compaction_profile: &DatabaseCompactionProfile,
) -> Result<(), String> {
upgrade_data_paths(base_path, dirs, pruning);
match upgrade(&dirs.path) {
Ok(upgrades_applied) if upgrades_applied > 0 => {
debug!("Executed {} upgrade scripts - ok", upgrades_applied);
}
Err(e) => {
return Err(format!("Error upgrading OpenEthereum data: {:?}", e));
}
_ => {}
}
let client_path = dirs.db_path(pruning);
migrate(&client_path, compaction_profile).map_err(|e| format!("{}", e))
}
/// Prompts user asking for password.
pub fn password_prompt() -> Result<Password, String> {
use rpassword::read_password;
const STDIN_ERROR: &'static str = "Unable to ask for password on non-interactive terminal.";
println!("Please note that password is NOT RECOVERABLE.");
print!("Type password: ");
flush_stdout();
let password = read_password().map_err(|_| STDIN_ERROR.to_owned())?.into();
print!("Repeat password: ");
flush_stdout();
let password_repeat = read_password().map_err(|_| STDIN_ERROR.to_owned())?.into();
if password != password_repeat {
return Err("Passwords do not match!".into());
}
Ok(password)
}
/// Read a password from password file.
pub fn password_from_file(path: String) -> Result<Password, String> {
let passwords = passwords_from_files(&[path])?;
// use only first password from the file
passwords
.get(0)
.map(Password::clone)
.ok_or_else(|| "Password file seems to be empty.".to_owned())
}
/// Reads passwords from files. Treats each line as a separate password.
pub fn passwords_from_files(files: &[String]) -> Result<Vec<Password>, String> {
let passwords = files.iter().map(|filename| {
let file = File::open(filename).map_err(|_| format!("{} Unable to read password file. Ensure it exists and permissions are correct.", filename))?;
let reader = BufReader::new(&file);
let lines = reader.lines()
.filter_map(|l| l.ok())
.map(|pwd| pwd.trim().to_owned().into())
.collect::<Vec<Password>>();
Ok(lines)
}).collect::<Result<Vec<Vec<Password>>, String>>();
Ok(passwords?.into_iter().flat_map(|x| x).collect())
}
#[cfg(test)]
mod tests {
use super::{
join_set, password_from_file, to_address, to_addresses, to_block_id, to_bootnodes,
to_duration, to_mode, to_pending_set, to_price, to_u256,
};
use ethcore::{
client::{BlockId, Mode},
miner::PendingSet,
};
use ethereum_types::U256;
use ethkey::Password;
use std::{collections::HashSet, fs::File, io::Write, time::Duration};
use tempdir::TempDir;
#[test]
fn test_to_duration() {
assert_eq!(
to_duration("twice-daily").unwrap(),
Duration::from_secs(12 * 60 * 60)
);
assert_eq!(
to_duration("half-hourly").unwrap(),
Duration::from_secs(30 * 60)
);
assert_eq!(to_duration("1second").unwrap(), Duration::from_secs(1));
assert_eq!(to_duration("2seconds").unwrap(), Duration::from_secs(2));
assert_eq!(to_duration("15seconds").unwrap(), Duration::from_secs(15));
assert_eq!(to_duration("1minute").unwrap(), Duration::from_secs(1 * 60));
assert_eq!(
to_duration("2minutes").unwrap(),
Duration::from_secs(2 * 60)
);
assert_eq!(
to_duration("15minutes").unwrap(),
Duration::from_secs(15 * 60)
);
assert_eq!(to_duration("hourly").unwrap(), Duration::from_secs(60 * 60));
assert_eq!(
to_duration("daily").unwrap(),
Duration::from_secs(24 * 60 * 60)
);
assert_eq!(
to_duration("1hour").unwrap(),
Duration::from_secs(1 * 60 * 60)
);
assert_eq!(
to_duration("2hours").unwrap(),
Duration::from_secs(2 * 60 * 60)
);
assert_eq!(
to_duration("15hours").unwrap(),
Duration::from_secs(15 * 60 * 60)
);
assert_eq!(
to_duration("1day").unwrap(),
Duration::from_secs(1 * 24 * 60 * 60)
);
assert_eq!(
to_duration("2days").unwrap(),
Duration::from_secs(2 * 24 * 60 * 60)
);
assert_eq!(
to_duration("15days").unwrap(),
Duration::from_secs(15 * 24 * 60 * 60)
);
assert_eq!(
to_duration("15 days").unwrap(),
Duration::from_secs(15 * 24 * 60 * 60)
);
assert_eq!(to_duration("2 seconds").unwrap(), Duration::from_secs(2));
}
#[test]
fn test_to_mode() {
assert_eq!(to_mode("active", 0, 0).unwrap(), Mode::Active);
assert_eq!(
to_mode("passive", 10, 20).unwrap(),
Mode::Passive(Duration::from_secs(10), Duration::from_secs(20))
);
assert_eq!(
to_mode("dark", 20, 30).unwrap(),
Mode::Dark(Duration::from_secs(20))
);
assert!(to_mode("other", 20, 30).is_err());
}
#[test]
fn test_to_block_id() {
assert_eq!(to_block_id("latest").unwrap(), BlockId::Latest);
assert_eq!(to_block_id("0").unwrap(), BlockId::Number(0));
assert_eq!(to_block_id("2").unwrap(), BlockId::Number(2));
assert_eq!(to_block_id("15").unwrap(), BlockId::Number(15));
assert_eq!(
to_block_id("9fc84d84f6a785dc1bd5abacfcf9cbdd3b6afb80c0f799bfb2fd42c44a0c224e")
.unwrap(),
BlockId::Hash(
"9fc84d84f6a785dc1bd5abacfcf9cbdd3b6afb80c0f799bfb2fd42c44a0c224e"
.parse()
.unwrap()
)
);
}
#[test]
fn test_to_u256() {
assert_eq!(to_u256("0").unwrap(), U256::from(0));
assert_eq!(to_u256("11").unwrap(), U256::from(11));
assert_eq!(to_u256("0x11").unwrap(), U256::from(17));
assert!(to_u256("u").is_err())
}
#[test]
fn test_pending_set() {
assert_eq!(to_pending_set("cheap").unwrap(), PendingSet::AlwaysQueue);
assert_eq!(to_pending_set("strict").unwrap(), PendingSet::AlwaysSealing);
assert_eq!(
to_pending_set("lenient").unwrap(),
PendingSet::SealingOrElseQueue
);
assert!(to_pending_set("othe").is_err());
}
#[test]
fn test_to_address() {
assert_eq!(
to_address(Some("0xD9A111feda3f362f55Ef1744347CDC8Dd9964a41".into())).unwrap(),
"D9A111feda3f362f55Ef1744347CDC8Dd9964a41".parse().unwrap()
);
assert_eq!(
to_address(Some("D9A111feda3f362f55Ef1744347CDC8Dd9964a41".into())).unwrap(),
"D9A111feda3f362f55Ef1744347CDC8Dd9964a41".parse().unwrap()
);
assert_eq!(to_address(None).unwrap(), Default::default());
}
#[test]
fn test_to_addresses() {
let addresses = to_addresses(&Some(
"0xD9A111feda3f362f55Ef1744347CDC8Dd9964a41,D9A111feda3f362f55Ef1744347CDC8Dd9964a42"
.into(),
))
.unwrap();
assert_eq!(
addresses,
vec![
"D9A111feda3f362f55Ef1744347CDC8Dd9964a41".parse().unwrap(),
"D9A111feda3f362f55Ef1744347CDC8Dd9964a42".parse().unwrap(),
]
);
}
#[test]
fn test_password() {
let tempdir = TempDir::new("").unwrap();
let path = tempdir.path().join("file");
let mut file = File::create(&path).unwrap();
file.write_all(b"a bc ").unwrap();
assert_eq!(
password_from_file(path.to_str().unwrap().into())
.unwrap()
.as_bytes(),
b"a bc"
);
}
#[test]
fn test_password_multiline() {
let tempdir = TempDir::new("").unwrap();
let path = tempdir.path().join("file");
let mut file = File::create(path.as_path()).unwrap();
file.write_all(
br#" password with trailing whitespace
those passwords should be
ignored
but the first password is trimmed
"#,
)
.unwrap();
assert_eq!(
password_from_file(path.to_str().unwrap().into()).unwrap(),
Password::from("password with trailing whitespace")
);
}
#[test]
fn test_to_price() {
assert_eq!(to_price("1").unwrap(), 1.0);
assert_eq!(to_price("2.3").unwrap(), 2.3);
assert_eq!(to_price("2.33").unwrap(), 2.33);
}
#[test]
fn test_to_bootnodes() {
let one_bootnode = "enode://e731347db0521f3476e6bbbb83375dcd7133a1601425ebd15fd10f3835fd4c304fba6282087ca5a0deeafadf0aa0d4fd56c3323331901c1f38bd181c283e3e35@128.199.55.137:30303";
let two_bootnodes = "enode://e731347db0521f3476e6bbbb83375dcd7133a1601425ebd15fd10f3835fd4c304fba6282087ca5a0deeafadf0aa0d4fd56c3323331901c1f38bd181c283e3e35@128.199.55.137:30303,enode://e731347db0521f3476e6bbbb83375dcd7133a1601425ebd15fd10f3835fd4c304fba6282087ca5a0deeafadf0aa0d4fd56c3323331901c1f38bd181c283e3e35@128.199.55.137:30303";
assert_eq!(to_bootnodes(&Some("".into())), Ok(vec![]));
assert_eq!(to_bootnodes(&None), Ok(vec![]));
assert_eq!(
to_bootnodes(&Some(one_bootnode.into())),
Ok(vec![one_bootnode.into()])
);
assert_eq!(
to_bootnodes(&Some(two_bootnodes.into())),
Ok(vec![one_bootnode.into(), one_bootnode.into()])
);
}
#[test]
fn test_join_set() {
let mut test_set = HashSet::new();
test_set.insert("0x1111111111111111111111111111111111111111".to_string());
test_set.insert("0x0000000000000000000000000000000000000000".to_string());
let res = join_set(Some(&test_set)).unwrap();
assert!(
res == "0x1111111111111111111111111111111111111111,0x0000000000000000000000000000000000000000"
||
res == "0x0000000000000000000000000000000000000000,0x1111111111111111111111111111111111111111"
);
}
}
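
Putting a few of the converters above together, a hedged sketch (the values are arbitrary examples, and the function is assumed to sit alongside the converters in this module):

```rust
fn example_conversions() -> Result<(), String> {
    let reseal = to_duration("15 minutes")?; // -> Duration::from_secs(900)
    let gas_floor = to_u256("0x2fefd8")?;    // hex accepted via clean_0x
    let unlock = to_addresses(&Some("0xD9A111feda3f362f55Ef1744347CDC8Dd9964a41".into()))?;
    let from_block = to_block_id("latest")?; // -> BlockId::Latest
    assert_eq!(reseal, Duration::from_secs(15 * 60));
    assert_eq!(unlock.len(), 1);
    let _ = (gas_floor, from_block);
    Ok(())
}
```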


@@ -1,429 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
extern crate ansi_term;
use self::ansi_term::{
Colour,
Colour::{Blue, Cyan, Green, White, Yellow},
Style,
};
use std::{
sync::{
atomic::{AtomicBool, AtomicUsize, Ordering as AtomicOrdering},
Arc,
},
time::{Duration, Instant},
};
use crate::{
io::{IoContext, IoHandler, TimerToken},
sync::{ManageNetwork, SyncProvider},
types::BlockNumber,
};
use atty;
use ethcore::{
client::{
BlockChainClient, BlockChainInfo, BlockId, BlockInfo, BlockQueueInfo, ChainInfo,
ChainNotify, Client, ClientIoMessage, ClientReport, NewBlocks,
},
snapshot::{service::Service as SnapshotService, RestorationStatus, SnapshotService as SS},
};
use number_prefix::{binary_prefix, Prefixed, Standalone};
use parity_rpc::{informant::RpcStats, is_major_importing_or_waiting};
use parking_lot::{Mutex, RwLock};
/// Format byte counts to standard denominations.
pub fn format_bytes(b: usize) -> String {
match binary_prefix(b as f64) {
Standalone(bytes) => format!("{} bytes", bytes),
Prefixed(prefix, n) => format!("{:.0} {}B", n, prefix),
}
}
/// Something that can be converted to milliseconds.
pub trait MillisecondDuration {
/// Get the value in milliseconds.
fn as_milliseconds(&self) -> u64;
}
impl MillisecondDuration for Duration {
fn as_milliseconds(&self) -> u64 {
self.as_secs() * 1000 + self.subsec_nanos() as u64 / 1_000_000
}
}
#[derive(Default)]
struct CacheSizes {
sizes: ::std::collections::BTreeMap<&'static str, usize>,
}
impl CacheSizes {
fn insert(&mut self, key: &'static str, bytes: usize) {
self.sizes.insert(key, bytes);
}
fn display<F>(&self, style: Style, paint: F) -> String
where
F: Fn(Style, String) -> String,
{
use std::fmt::Write;
let mut buf = String::new();
for (name, &size) in &self.sizes {
write!(buf, " {:>8} {}", paint(style, format_bytes(size)), name)
.expect("writing to string won't fail unless OOM; qed")
}
buf
}
}
pub struct SyncInfo {
last_imported_block_number: BlockNumber,
last_imported_ancient_number: Option<BlockNumber>,
num_peers: usize,
max_peers: u32,
snapshot_sync: bool,
}
pub struct Report {
importing: bool,
chain_info: BlockChainInfo,
client_report: ClientReport,
queue_info: BlockQueueInfo,
cache_sizes: CacheSizes,
sync_info: Option<SyncInfo>,
}
/// Something which can provide data to the informant.
pub trait InformantData: Send + Sync {
/// Whether it executes transactions
fn executes_transactions(&self) -> bool;
/// Whether it is currently importing (also included in `Report`)
fn is_major_importing(&self) -> bool;
/// Generate a report of blockchain status, memory usage, and sync info.
fn report(&self) -> Report;
}
/// Informant data for a full node.
pub struct FullNodeInformantData {
pub client: Arc<Client>,
pub sync: Option<Arc<dyn SyncProvider>>,
pub net: Option<Arc<dyn ManageNetwork>>,
}
impl InformantData for FullNodeInformantData {
fn executes_transactions(&self) -> bool {
true
}
fn is_major_importing(&self) -> bool {
let state = self.sync.as_ref().map(|sync| sync.status().state);
is_major_importing_or_waiting(state, self.client.queue_info(), false)
}
fn report(&self) -> Report {
let (client_report, queue_info, blockchain_cache_info) = (
self.client.report(),
self.client.queue_info(),
self.client.blockchain_cache_info(),
);
let chain_info = self.client.chain_info();
let mut cache_sizes = CacheSizes::default();
cache_sizes.insert("queue", queue_info.mem_used);
cache_sizes.insert("chain", blockchain_cache_info.total());
let importing = self.is_major_importing();
let sync_info = match (self.sync.as_ref(), self.net.as_ref()) {
(Some(sync), Some(net)) => {
let status = sync.status();
let num_peers_range = net.num_peers_range();
debug_assert!(num_peers_range.end() >= num_peers_range.start());
Some(SyncInfo {
last_imported_block_number: status
.last_imported_block_number
.unwrap_or(chain_info.best_block_number),
last_imported_ancient_number: status.last_imported_old_block_number,
num_peers: status.num_peers,
max_peers: status
.current_max_peers(*num_peers_range.start(), *num_peers_range.end()),
snapshot_sync: status.is_snapshot_syncing(),
})
}
_ => None,
};
Report {
importing,
chain_info,
client_report,
queue_info,
cache_sizes,
sync_info,
}
}
}
pub struct Informant<T> {
last_tick: RwLock<Instant>,
with_color: bool,
target: T,
snapshot: Option<Arc<SnapshotService>>,
rpc_stats: Option<Arc<RpcStats>>,
last_import: Mutex<Instant>,
skipped: AtomicUsize,
skipped_txs: AtomicUsize,
in_shutdown: AtomicBool,
last_report: Mutex<ClientReport>,
}
impl<T: InformantData> Informant<T> {
/// Make a new instance, optionally with colored (`with_color`) output.
pub fn new(
target: T,
snapshot: Option<Arc<SnapshotService>>,
rpc_stats: Option<Arc<RpcStats>>,
with_color: bool,
) -> Self {
Informant {
last_tick: RwLock::new(Instant::now()),
with_color: with_color,
target: target,
snapshot: snapshot,
rpc_stats: rpc_stats,
last_import: Mutex::new(Instant::now()),
skipped: AtomicUsize::new(0),
skipped_txs: AtomicUsize::new(0),
in_shutdown: AtomicBool::new(false),
last_report: Mutex::new(Default::default()),
}
}
/// Signal that we're shutting down; no more output necessary.
pub fn shutdown(&self) {
self.in_shutdown
.store(true, ::std::sync::atomic::Ordering::SeqCst);
}
pub fn tick(&self) {
let now = Instant::now();
let elapsed;
{
let last_tick = self.last_tick.read();
if now < *last_tick + Duration::from_millis(1500) {
return;
}
elapsed = now - *last_tick;
}
let (client_report, full_report) = {
let last_report = self.last_report.lock();
let full_report = self.target.report();
let diffed = full_report.client_report.clone() - &*last_report;
(diffed, full_report)
};
let Report {
importing,
chain_info,
queue_info,
cache_sizes,
sync_info,
..
} = full_report;
let rpc_stats = self.rpc_stats.as_ref();
let snapshot_sync = sync_info.as_ref().map_or(false, |s| s.snapshot_sync)
&& self
.snapshot
.as_ref()
.map_or(false, |s| match s.restoration_status() {
RestorationStatus::Ongoing { .. } | RestorationStatus::Initializing { .. } => {
true
}
_ => false,
});
if !importing && !snapshot_sync && elapsed < Duration::from_secs(30) {
return;
}
*self.last_tick.write() = now;
*self.last_report.lock() = full_report.client_report.clone();
let paint = |c: Style, t: String| match self.with_color && atty::is(atty::Stream::Stdout) {
true => format!("{}", c.paint(t)),
false => t,
};
info!(target: "import", "{}{} {} {} {}",
match importing {
true => match snapshot_sync {
false => format!("Syncing {} {} {} {}+{} Qed",
paint(White.bold(), format!("{:>8}", format!("#{}", chain_info.best_block_number))),
paint(White.bold(), format!("{}", chain_info.best_block_hash)),
if self.target.executes_transactions() {
format!("{} blk/s {} tx/s {} Mgas/s",
paint(Yellow.bold(), format!("{:7.2}", (client_report.blocks_imported * 1000) as f64 / elapsed.as_milliseconds() as f64)),
paint(Yellow.bold(), format!("{:6.1}", (client_report.transactions_applied * 1000) as f64 / elapsed.as_milliseconds() as f64)),
paint(Yellow.bold(), format!("{:6.1}", (client_report.gas_processed / 1000).low_u64() as f64 / elapsed.as_milliseconds() as f64))
)
} else {
format!("{} hdr/s",
paint(Yellow.bold(), format!("{:6.1}", (client_report.blocks_imported * 1000) as f64 / elapsed.as_milliseconds() as f64))
)
},
paint(Green.bold(), format!("{:5}", queue_info.unverified_queue_size)),
paint(Green.bold(), format!("{:5}", queue_info.verified_queue_size))
),
true => {
self.snapshot.as_ref().map_or(String::new(), |s|
match s.restoration_status() {
RestorationStatus::Ongoing { state_chunks, block_chunks, state_chunks_done, block_chunks_done, .. } => {
format!("Syncing snapshot {}/{}", state_chunks_done + block_chunks_done, state_chunks + block_chunks)
},
RestorationStatus::Initializing { chunks_done } => {
format!("Snapshot initializing ({} chunks restored)", chunks_done)
},
_ => String::new(),
}
)
},
},
false => String::new(),
},
match chain_info.ancient_block_number {
Some(ancient_number) => format!(" Ancient:#{}", ancient_number),
None => String::new(),
},
match sync_info.as_ref() {
Some(ref sync_info) => format!("{}{}/{} peers",
match importing {
true => format!("{}",
if self.target.executes_transactions() {
paint(Green.bold(), format!("{:>8} ", format!("LI:#{}", sync_info.last_imported_block_number)))
} else {
String::new()
}
),
false => match sync_info.last_imported_ancient_number {
Some(number) => format!("{} ", paint(Yellow.bold(), format!("{:>8}", format!("AB:#{}", number)))),
None => String::new(),
}
},
paint(Cyan.bold(), format!("{:2}", sync_info.num_peers)),
paint(Cyan.bold(), format!("{:2}", sync_info.max_peers)),
),
_ => String::new(),
},
cache_sizes.display(Blue.bold(), &paint),
match rpc_stats {
Some(ref rpc_stats) => format!(
"RPC: {} conn, {} req/s, {} µs",
paint(Blue.bold(), format!("{:2}", rpc_stats.sessions())),
paint(Blue.bold(), format!("{:4}", rpc_stats.requests_rate())),
paint(Blue.bold(), format!("{:4}", rpc_stats.approximated_roundtrip())),
),
_ => String::new(),
},
);
}
}
impl ChainNotify for Informant<FullNodeInformantData> {
// t_nb 11.2 Informant. Prints new block inclusion to console/log.
fn new_blocks(&self, new_blocks: NewBlocks) {
if new_blocks.has_more_blocks_to_import {
return;
}
let mut last_import = self.last_import.lock();
let client = &self.target.client;
let importing = self.target.is_major_importing();
let ripe = Instant::now() > *last_import + Duration::from_secs(1) && !importing;
let txs_imported = new_blocks
.imported
.iter()
.take(
new_blocks
.imported
.len()
.saturating_sub(if ripe { 1 } else { 0 }),
)
.filter_map(|h| client.block(BlockId::Hash(*h)))
.map(|b| b.transactions_count())
.sum();
if ripe {
if let Some(block) = new_blocks
.imported
.last()
.and_then(|h| client.block(BlockId::Hash(*h)))
{
let header_view = block.header_view();
let size = block.rlp().as_raw().len();
let (skipped, skipped_txs) = (
self.skipped.load(AtomicOrdering::Relaxed) + new_blocks.imported.len() - 1,
self.skipped_txs.load(AtomicOrdering::Relaxed) + txs_imported,
);
info!(target: "import", "Imported {} {} ({} txs, {} Mgas, {} ms, {} KiB){}",
Colour::White.bold().paint(format!("#{}", header_view.number())),
Colour::White.bold().paint(format!("{}", header_view.hash())),
Colour::Yellow.bold().paint(format!("{}", block.transactions_count())),
Colour::Yellow.bold().paint(format!("{:.2}", header_view.gas_used().low_u64() as f32 / 1000000f32)),
Colour::Purple.bold().paint(format!("{}", new_blocks.duration.as_milliseconds())),
Colour::Blue.bold().paint(format!("{:.2}", size as f32 / 1024f32)),
if skipped > 0 {
format!(" + another {} block(s) containing {} tx(s)",
Colour::Red.bold().paint(format!("{}", skipped)),
Colour::Red.bold().paint(format!("{}", skipped_txs))
)
} else {
String::new()
}
);
self.skipped.store(0, AtomicOrdering::Relaxed);
self.skipped_txs.store(0, AtomicOrdering::Relaxed);
*last_import = Instant::now();
}
} else {
self.skipped
.fetch_add(new_blocks.imported.len(), AtomicOrdering::Relaxed);
self.skipped_txs
.fetch_add(txs_imported, AtomicOrdering::Relaxed);
}
}
}
const INFO_TIMER: TimerToken = 0;
impl<T: InformantData> IoHandler<ClientIoMessage> for Informant<T> {
fn initialize(&self, io: &IoContext<ClientIoMessage>) {
io.register_timer(INFO_TIMER, Duration::from_secs(5))
.expect("Error registering timer");
}
fn timeout(&self, _io: &IoContext<ClientIoMessage>, timer: TimerToken) {
if timer == INFO_TIMER && !self.in_shutdown.load(AtomicOrdering::SeqCst) {
self.tick();
}
}
}
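
For orientation, hedged examples of what the helpers at the top of this file produce (values chosen arbitrarily; this is not one of the original tests):

```rust
#[test]
fn informant_helpers_examples() {
    assert_eq!(format_bytes(500), "500 bytes"); // below 1 KiB: printed as-is
    assert_eq!(format_bytes(2048), "2 KiB");    // binary prefix, zero decimal places
    assert_eq!(Duration::from_millis(1500).as_milliseconds(), 1500);
}
```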


@@ -1,240 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Ethcore client application.
#![warn(missing_docs)]
extern crate ansi_term;
extern crate docopt;
#[macro_use]
extern crate clap;
extern crate atty;
extern crate dir;
extern crate futures;
extern crate jsonrpc_core;
extern crate num_cpus;
extern crate number_prefix;
extern crate parking_lot;
extern crate regex;
extern crate rlp;
extern crate rpassword;
extern crate rustc_hex;
extern crate semver;
extern crate serde;
extern crate serde_json;
#[macro_use]
extern crate serde_derive;
extern crate toml;
extern crate blooms_db;
extern crate cli_signer;
extern crate common_types as types;
extern crate ethcore;
extern crate ethcore_call_contract as call_contract;
extern crate ethcore_db;
extern crate ethcore_io as io;
extern crate ethcore_logger;
extern crate ethcore_miner as miner;
extern crate ethcore_network as network;
extern crate ethcore_service;
extern crate ethcore_sync as sync;
extern crate ethereum_types;
extern crate ethkey;
extern crate ethstore;
extern crate fetch;
extern crate hyper;
extern crate journaldb;
extern crate keccak_hash as hash;
extern crate kvdb;
extern crate node_filter;
extern crate parity_bytes as bytes;
extern crate parity_crypto as crypto;
extern crate parity_local_store as local_store;
extern crate parity_path as path;
extern crate parity_rpc;
extern crate parity_runtime;
extern crate parity_version;
extern crate prometheus;
extern crate stats;
#[macro_use]
extern crate log as rlog;
#[cfg(feature = "ethcore-accounts")]
extern crate ethcore_accounts as accounts;
#[cfg(feature = "secretstore")]
extern crate ethcore_secretstore;
#[cfg(test)]
#[macro_use]
extern crate pretty_assertions;
#[cfg(test)]
extern crate tempdir;
#[cfg(test)]
#[macro_use]
extern crate lazy_static;
mod account;
mod account_utils;
mod blockchain;
mod cache;
mod cli;
mod configuration;
mod db;
mod helpers;
mod informant;
mod metrics;
mod modules;
mod params;
mod presale;
mod rpc;
mod rpc_apis;
mod run;
mod secretstore;
mod signer;
mod snapshot;
mod upgrade;
mod user_defaults;
use std::{fs::File, io::BufReader, sync::Arc};
use crate::{
cli::Args,
configuration::{Cmd, Execute},
hash::keccak_buffer,
};
#[cfg(feature = "memory_profiling")]
use std::alloc::System;
pub use self::{configuration::Configuration, run::RunningClient};
pub use ethcore_logger::{setup_log, Config as LoggerConfig, RotatingLogger};
pub use parity_rpc::PubSubSession;
#[cfg(feature = "memory_profiling")]
#[global_allocator]
static A: System = System;
fn print_hash_of(maybe_file: Option<String>) -> Result<String, String> {
if let Some(file) = maybe_file {
let mut f =
BufReader::new(File::open(&file).map_err(|_| "Unable to open file".to_owned())?);
let hash = keccak_buffer(&mut f).map_err(|_| "Unable to read from file".to_owned())?;
Ok(format!("{:x}", hash))
} else {
Err("Streaming from standard input not yet supported. Specify a file.".to_owned())
}
}
#[cfg(feature = "deadlock_detection")]
fn run_deadlock_detection_thread() {
use ansi_term::Style;
use parking_lot::deadlock;
use std::{thread, time::Duration};
info!("Starting deadlock detection thread.");
// Create a background thread which checks for deadlocks every 10s
thread::spawn(move || loop {
thread::sleep(Duration::from_secs(10));
let deadlocks = deadlock::check_deadlock();
if deadlocks.is_empty() {
continue;
}
warn!(
"{} {} detected",
deadlocks.len(),
Style::new().bold().paint("deadlock(s)")
);
for (i, threads) in deadlocks.iter().enumerate() {
warn!("{} #{}", Style::new().bold().paint("Deadlock"), i);
for t in threads {
warn!("Thread Id {:#?}", t.thread_id());
warn!("{:#?}", t.backtrace());
}
}
});
}
/// Action that OpenEthereum performed when running `start`.
pub enum ExecutionAction {
/// The execution didn't require starting a node, and thus has finished.
/// Contains the string to print on stdout, if any.
Instant(Option<String>),
/// The client has started running and must be shut down manually by calling `shutdown`.
///
/// If you don't call `shutdown()`, execution will continue in the background.
Running(RunningClient),
}
fn execute(command: Execute, logger: Arc<RotatingLogger>) -> Result<ExecutionAction, String> {
#[cfg(feature = "deadlock_detection")]
run_deadlock_detection_thread();
match command.cmd {
Cmd::Run(run_cmd) => {
let outcome = run::execute(run_cmd, logger)?;
Ok(ExecutionAction::Running(outcome))
}
Cmd::Version => Ok(ExecutionAction::Instant(Some(Args::print_version()))),
Cmd::Hash(maybe_file) => {
print_hash_of(maybe_file).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::Account(account_cmd) => {
account::execute(account_cmd).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::ImportPresaleWallet(presale_cmd) => {
presale::execute(presale_cmd).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::Blockchain(blockchain_cmd) => {
blockchain::execute(blockchain_cmd).map(|_| ExecutionAction::Instant(None))
}
Cmd::SignerToken(ws_conf, logger_config) => {
signer::execute(ws_conf, logger_config).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::SignerSign {
id,
pwfile,
port,
authfile,
} => cli_signer::signer_sign(id, pwfile, port, authfile)
.map(|s| ExecutionAction::Instant(Some(s))),
Cmd::SignerList { port, authfile } => {
cli_signer::signer_list(port, authfile).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::SignerReject { id, port, authfile } => {
cli_signer::signer_reject(id, port, authfile).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::Snapshot(snapshot_cmd) => {
snapshot::execute(snapshot_cmd).map(|s| ExecutionAction::Instant(Some(s)))
}
}
}
/// Starts the OpenEthereum client.
///
/// The first parameter holds the command-line arguments that you would pass when running the
/// openethereum binary.
///
/// On error, returns what to print on stderr.
// FIXME: totally independent logging capability, see https://github.com/openethereum/openethereum/issues/10252
pub fn start(conf: Configuration, logger: Arc<RotatingLogger>) -> Result<ExecutionAction, String> {
execute(conf.into_command()?, logger)
}
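
A hedged sketch of how an embedding binary might drive `start` (argument parsing and logger setup are omitted; the function and its locals are illustrative, not part of the original crate):

```rust
fn run_embedded(conf: Configuration, logger: Arc<RotatingLogger>) -> Result<(), String> {
    match start(conf, logger)? {
        // One-shot commands: print their output, if any, and return.
        ExecutionAction::Instant(output) => {
            if let Some(s) = output {
                println!("{}", s);
            }
            Ok(())
        }
        // A full node is now running in the background; per the docs above it
        // keeps running until `shutdown` is called.
        ExecutionAction::Running(client) => {
            client.shutdown();
            Ok(())
        }
    }
}
```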


@@ -1,191 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Logger for OpenEthereum executables
extern crate ansi_term;
extern crate arrayvec;
extern crate atty;
extern crate env_logger;
extern crate log as rlog;
extern crate parking_lot;
extern crate regex;
extern crate time;
#[macro_use]
extern crate lazy_static;
mod rotating;
use ansi_term::Colour;
use env_logger::{Builder as LogBuilder, Formatter};
use parking_lot::Mutex;
use regex::Regex;
use std::{
env, fs,
io::Write,
sync::{Arc, Weak},
thread,
};
pub use rotating::{init_log, RotatingLogger};
#[derive(Debug, PartialEq, Clone)]
pub struct Config {
pub mode: Option<String>,
pub color: bool,
pub file: Option<String>,
}
impl Default for Config {
fn default() -> Self {
Config {
mode: None,
color: !cfg!(windows),
file: None,
}
}
}
lazy_static! {
static ref ROTATING_LOGGER: Mutex<Weak<RotatingLogger>> = Mutex::new(Default::default());
}
/// Sets up the logger
pub fn setup_log(config: &Config) -> Result<Arc<RotatingLogger>, String> {
use rlog::*;
let mut levels = String::new();
let mut builder = LogBuilder::new();
// Disable info logging by default for some modules:
builder.filter(Some("ws"), LevelFilter::Warn);
builder.filter(Some("hyper"), LevelFilter::Warn);
builder.filter(Some("rustls"), LevelFilter::Error);
// Enable info for others.
builder.filter(None, LevelFilter::Info);
if let Ok(lvl) = env::var("RUST_LOG") {
levels.push_str(&lvl);
levels.push_str(",");
builder.parse(&lvl);
}
if let Some(ref s) = config.mode {
levels.push_str(s);
builder.parse(s);
}
let isatty = atty::is(atty::Stream::Stderr);
let enable_color = config.color && isatty;
let logs = Arc::new(RotatingLogger::new(levels));
let logger = logs.clone();
let mut open_options = fs::OpenOptions::new();
let maybe_file = match config.file.as_ref() {
Some(f) => Some(
open_options
.append(true)
.create(true)
.open(f)
.map_err(|e| format!("Cannot write to log file given: {}, {}", f, e))?,
),
None => None,
};
let format = move |buf: &mut Formatter, record: &Record| {
let timestamp = time::strftime("%Y-%m-%d %H:%M:%S %Z", &time::now()).unwrap();
let with_color = if max_level() <= LevelFilter::Info {
format!(
"{} {}",
Colour::Black.bold().paint(timestamp),
record.args()
)
} else {
let name = thread::current().name().map_or_else(Default::default, |x| {
format!("{}", Colour::Blue.bold().paint(x))
});
format!(
"{} {} {} {} {}",
Colour::Black.bold().paint(timestamp),
name,
record.level(),
record.target(),
record.args()
)
};
let removed_color = kill_color(with_color.as_ref());
let ret = match enable_color {
true => with_color,
false => removed_color.clone(),
};
if let Some(mut file) = maybe_file.as_ref() {
// ignore errors - there's nothing we can do
let _ = file.write_all(removed_color.as_bytes());
let _ = file.write_all(b"\n");
}
logger.append(removed_color);
if !isatty && record.level() <= Level::Info && atty::is(atty::Stream::Stdout) {
// duplicate INFO/WARN output to console
println!("{}", ret);
}
writeln!(buf, "{}", ret)
};
builder.format(format);
builder
.try_init()
.and_then(|_| {
*ROTATING_LOGGER.lock() = Arc::downgrade(&logs);
Ok(logs)
})
// couldn't create new logger - try to fall back on previous logger.
.or_else(|err| {
ROTATING_LOGGER
.lock()
.upgrade()
.ok_or_else(|| format!("{:?}", err))
})
}
fn kill_color(s: &str) -> String {
lazy_static! {
static ref RE: Regex = Regex::new("\x1b\\[[^m]+m").unwrap();
}
RE.replace_all(s, "").to_string()
}
#[test]
fn should_remove_colour() {
let before = "test";
let after = kill_color(&Colour::Red.bold().paint(before));
assert_eq!(after, "test");
}
#[test]
fn should_remove_multiple_colour() {
let t = format!(
"{} {}",
Colour::Red.bold().paint("test"),
Colour::White.normal().paint("again")
);
let after = kill_color(&t);
assert_eq!(after, "test again");
}
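
A minimal usage sketch for `setup_log`, assuming a writable log path; the `mode` string uses the same filter syntax as `RUST_LOG`, and the optional file receives colour-stripped output in append mode:

```rust
let config = Config {
    // env_logger-style filter string, applied on top of RUST_LOG if that is set.
    mode: Some("info,sync=debug".to_owned()),
    // Colourise output (only takes effect when stderr is a TTY).
    color: true,
    // Additionally append plain, colour-stripped lines to this file.
    file: Some("/tmp/openethereum.log".to_owned()),
};
// Returns the in-memory RotatingLogger that is later exposed over RPC.
let rotating = setup_log(&config).expect("logger can only be initialised once");
println!("active log levels: {}", rotating.levels());
```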


@@ -1,121 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Common log helper functions
use arrayvec::ArrayVec;
use env_logger::Builder as LogBuilder;
use rlog::LevelFilter;
use std::env;
use parking_lot::{RwLock, RwLockReadGuard};
lazy_static! {
static ref LOG_DUMMY: () = {
let mut builder = LogBuilder::new();
builder.filter(None, LevelFilter::Info);
if let Ok(log) = env::var("RUST_LOG") {
builder.parse(&log);
}
if !builder.try_init().is_ok() {
println!("logger initialization failed!");
}
};
}
/// Initialize log with default settings
pub fn init_log() {
*LOG_DUMMY
}
const LOG_SIZE: usize = 128;
/// Logger implementation that keeps up to `LOG_SIZE` log elements.
pub struct RotatingLogger {
/// Defined logger levels
levels: String,
/// Logs array. Latest log is always at index 0
logs: RwLock<ArrayVec<[String; LOG_SIZE]>>,
}
impl RotatingLogger {
/// Creates new `RotatingLogger` with given levels.
/// It does not enforce levels - it's just read only.
pub fn new(levels: String) -> Self {
RotatingLogger {
levels: levels,
logs: RwLock::new(ArrayVec::<[_; LOG_SIZE]>::new()),
}
}
/// Append new log entry
pub fn append(&self, log: String) {
let mut logs = self.logs.write();
if logs.is_full() {
logs.pop();
}
logs.insert(0, log);
}
/// Return levels
pub fn levels(&self) -> &str {
&self.levels
}
/// Return logs
pub fn logs(&self) -> RwLockReadGuard<ArrayVec<[String; LOG_SIZE]>> {
self.logs.read()
}
}
#[cfg(test)]
mod test {
use super::RotatingLogger;
fn logger() -> RotatingLogger {
RotatingLogger::new("test".to_owned())
}
#[test]
fn should_return_log_levels() {
// given
let logger = logger();
// when
let levels = logger.levels();
// then
assert_eq!(levels, "test");
}
#[test]
fn should_return_latest_logs() {
// given
let logger = logger();
// when
logger.append("a".to_owned());
logger.append("b".to_owned());
// then
let logs = logger.logs();
assert_eq!(logs[0], "b".to_owned());
assert_eq!(logs[1], "a".to_owned());
assert_eq!(logs.len(), 2);
}
}
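
A small sketch of the ring-buffer behaviour described above: once `LOG_SIZE` entries are held, appending another drops the oldest, and the newest entry always sits at index 0:

```rust
let logger = RotatingLogger::new("info".to_owned());
// Fill the buffer to capacity, then push one extra entry.
for i in 0..=LOG_SIZE {
    logger.append(format!("entry {}", i));
}
let logs = logger.logs();
// Capacity stays capped at LOG_SIZE; "entry 0" has been rotated out.
assert_eq!(logs.len(), LOG_SIZE);
// The most recent entry is always at index 0.
assert_eq!(logs[0], format!("entry {}", LOG_SIZE));
```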


@@ -1,180 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Ethcore client application.
#![warn(missing_docs)]
extern crate ctrlc;
extern crate dir;
extern crate fdlimit;
#[macro_use]
extern crate log;
extern crate ansi_term;
extern crate openethereum;
extern crate panic_hook;
extern crate parity_daemonize;
extern crate parking_lot;
extern crate ethcore_logger;
#[cfg(windows)]
extern crate winapi;
use std::{
io::Write,
process,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
};
use ansi_term::Colour;
use ctrlc::CtrlC;
use ethcore_logger::setup_log;
use fdlimit::raise_fd_limit;
use openethereum::{start, ExecutionAction};
use parity_daemonize::AsHandle;
use parking_lot::{Condvar, Mutex};
#[derive(Debug)]
/// Status used to exit or restart the program.
struct ExitStatus {
/// Whether the program panicked.
panicking: bool,
/// Whether the program should exit.
should_exit: bool,
}
fn main() -> Result<(), i32> {
let conf = {
let args = std::env::args().collect::<Vec<_>>();
openethereum::Configuration::parse_cli(&args).unwrap_or_else(|e| e.exit())
};
let logger = setup_log(&conf.logger_config()).unwrap_or_else(|e| {
eprintln!("{}", e);
process::exit(2)
});
// FIXME: `pid_file` shouldn't need to cloned here
// see: `https://github.com/paritytech/parity-daemonize/pull/13` for more info
let handle = if let Some(pid) = conf.args.arg_daemon_pid_file.clone() {
info!(
"{}",
Colour::Blue.paint("starting in daemon mode").to_string()
);
let _ = std::io::stdout().flush();
match parity_daemonize::daemonize(pid) {
Ok(h) => Some(h),
Err(e) => {
error!("{}", Colour::Red.paint(format!("{}", e)));
return Err(1);
}
}
} else {
None
};
// increase max number of open files
raise_fd_limit();
let exit = Arc::new((
Mutex::new(ExitStatus {
panicking: false,
should_exit: false,
}),
Condvar::new(),
));
// Double panic can happen. So when we lock `ExitStatus` after the main thread is notified, it cannot be locked
// again.
let exiting = Arc::new(AtomicBool::new(false));
trace!(target: "mode", "Not hypervised: not setting exit handlers.");
let exec = start(conf, logger);
match exec {
Ok(result) => match result {
ExecutionAction::Instant(output) => {
if let Some(s) = output {
println!("{}", s);
}
}
ExecutionAction::Running(client) => {
panic_hook::set_with({
let e = exit.clone();
let exiting = exiting.clone();
move |panic_msg| {
warn!("Panic occured, see stderr for details");
eprintln!("{}", panic_msg);
if !exiting.swap(true, Ordering::SeqCst) {
*e.0.lock() = ExitStatus {
panicking: true,
should_exit: true,
};
e.1.notify_all();
}
}
});
CtrlC::set_handler({
let e = exit.clone();
let exiting = exiting.clone();
move || {
if !exiting.swap(true, Ordering::SeqCst) {
*e.0.lock() = ExitStatus {
panicking: false,
should_exit: true,
};
e.1.notify_all();
}
}
});
// so the client has started successfully
// if this is a daemon, detach from the parent process
if let Some(mut handle) = handle {
handle.detach()
}
// Wait for signal
let mut lock = exit.0.lock();
if !lock.should_exit {
let _ = exit.1.wait(&mut lock);
}
client.shutdown();
if lock.panicking {
return Err(1);
}
}
},
Err(err) => {
// error occurred during startup
// if this is a daemon, detach from the parent process
if let Some(mut handle) = handle {
handle.detach_with_msg(format!("{}", Colour::Red.paint(&err)))
}
eprintln!("{}", err);
return Err(1);
}
};
Ok(())
}


@@ -1,117 +0,0 @@
use std::{sync::Arc, time::Instant};
use crate::{futures::Future, rpc, rpc_apis};
use parking_lot::Mutex;
use hyper::{service::service_fn_ok, Body, Method, Request, Response, Server, StatusCode};
use stats::{
prometheus::{self, Encoder},
PrometheusMetrics, PrometheusRegistry,
};
#[derive(Debug, Clone, PartialEq)]
pub struct MetricsConfiguration {
/// Are metrics enabled (default is false)?
pub enabled: bool,
/// Prefix
pub prefix: String,
/// The IP of the network interface used (default is 127.0.0.1).
pub interface: String,
/// The network port (default is 3000).
pub port: u16,
}
impl Default for MetricsConfiguration {
fn default() -> Self {
MetricsConfiguration {
enabled: false,
prefix: "".into(),
interface: "127.0.0.1".into(),
port: 3000,
}
}
}
struct State {
rpc_apis: Arc<rpc_apis::FullDependencies>,
}
fn handle_request(
req: Request<Body>,
conf: Arc<MetricsConfiguration>,
state: Arc<Mutex<State>>,
) -> Response<Body> {
let (parts, _body) = req.into_parts();
match (parts.method, parts.uri.path()) {
(Method::GET, "/metrics") => {
let start = Instant::now();
let mut reg = PrometheusRegistry::new(conf.prefix.clone());
let state = state.lock();
state.rpc_apis.client.prometheus_metrics(&mut reg);
state.rpc_apis.sync.prometheus_metrics(&mut reg);
let elapsed = start.elapsed();
reg.register_gauge(
"metrics_time",
"Time to perform rpc metrics",
elapsed.as_millis() as i64,
);
let mut buffer = vec![];
let encoder = prometheus::TextEncoder::new();
let metric_families = reg.registry().gather();
encoder
.encode(&metric_families, &mut buffer)
.expect("all source of metrics are static; qed");
let text = String::from_utf8(buffer).expect("metrics encoding is ASCII; qed");
Response::new(Body::from(text))
}
(_, _) => {
let mut res = Response::new(Body::from("not found"));
*res.status_mut() = StatusCode::NOT_FOUND;
res
}
}
}
/// Start the prometheus metrics server accessible via GET <host>:<port>/metrics
pub fn start_prometheus_metrics(
conf: &MetricsConfiguration,
deps: &rpc::Dependencies<rpc_apis::FullDependencies>,
) -> Result<(), String> {
if !conf.enabled {
return Ok(());
}
let addr = format!("{}:{}", conf.interface, conf.port);
let addr = addr
.parse()
.map_err(|err| format!("Failed to parse address '{}': {}", addr, err))?;
let state = State {
rpc_apis: deps.apis.clone(),
};
let state = Arc::new(Mutex::new(state));
let conf = Arc::new(conf.to_owned());
let server = Server::bind(&addr)
.serve(move || {
// This is the `Service` that will handle the connection.
// `service_fn_ok` is a helper to convert a function that
// returns a Response into a `Service`.
let state = state.clone();
let conf = conf.clone();
service_fn_ok(move |req: Request<Body>| {
handle_request(req, conf.clone(), state.clone())
})
})
.map_err(|e| eprintln!("server error: {}", e));
info!("Started prometeus metrics at http://{}/metrics", addr);
deps.executor.spawn(server);
Ok(())
}
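
A minimal sketch, assuming the node's `rpc::Dependencies<rpc_apis::FullDependencies>` (`deps` below) have already been assembled during startup as in `run.rs` later in this diff; once enabled, the exporter answers `GET http://<interface>:<port>/metrics` in Prometheus text format:

```rust
let conf = MetricsConfiguration {
    enabled: true,
    // Prefix handed to the Prometheus registry when metrics are gathered.
    prefix: "openethereum".into(),
    interface: "127.0.0.1".into(),
    port: 3000,
};
// Spawns the hyper server on the shared executor; the Err variant is a
// start-up message intended for stderr.
start_prometheus_metrics(&conf, &deps)?;
```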


@@ -1,63 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::sync::{mpsc, Arc};
use crate::{
sync::{self, ConnectionFilter, NetworkConfiguration, Params, SyncConfig},
types::BlockNumber,
};
use ethcore::{client::BlockChainClient, snapshot::SnapshotService};
use std::collections::BTreeSet;
pub use crate::sync::{EthSync, ManageNetwork, SyncProvider};
pub use ethcore::client::ChainNotify;
use ethcore_logger::Config as LogConfig;
pub type SyncModules = (
Arc<dyn SyncProvider>,
Arc<dyn ManageNetwork>,
Arc<dyn ChainNotify>,
mpsc::Sender<sync::PriorityTask>,
);
pub fn sync(
config: SyncConfig,
network_config: NetworkConfiguration,
chain: Arc<dyn BlockChainClient>,
forks: BTreeSet<BlockNumber>,
snapshot_service: Arc<dyn SnapshotService>,
_log_settings: &LogConfig,
connection_filter: Option<Arc<dyn ConnectionFilter>>,
) -> Result<SyncModules, sync::Error> {
let eth_sync = EthSync::new(
Params {
config,
chain,
forks,
snapshot_service,
network_config,
},
connection_filter,
)?;
Ok((
eth_sync.clone() as Arc<dyn SyncProvider>,
eth_sync.clone() as Arc<dyn ManageNetwork>,
eth_sync.clone() as Arc<dyn ChainNotify>,
eth_sync.priority_tasks(),
))
}


@@ -1,542 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{collections::HashSet, fmt, fs, num::NonZeroU32, str, time::Duration};
use crate::{
miner::{
gas_price_calibrator::{GasPriceCalibrator, GasPriceCalibratorOptions},
gas_pricer::GasPricer,
},
user_defaults::UserDefaults,
};
use ethcore::{
client::Mode,
ethereum,
spec::{Spec, SpecParams},
};
use ethereum_types::{Address, U256};
use fetch::Client as FetchClient;
use journaldb::Algorithm;
use parity_runtime::Executor;
use parity_version::version_data;
use crate::configuration;
#[derive(Debug, PartialEq)]
pub enum SpecType {
Foundation,
Poanet,
Xdai,
Volta,
Ewc,
Musicoin,
Ellaism,
Mix,
Callisto,
Morden,
Ropsten,
Kovan,
Rinkeby,
Goerli,
Sokol,
Yolo3,
Dev,
Custom(String),
}
impl Default for SpecType {
fn default() -> Self {
SpecType::Foundation
}
}
impl str::FromStr for SpecType {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let spec = match s {
"eth" | "ethereum" | "foundation" | "mainnet" => SpecType::Foundation,
"poanet" | "poacore" => SpecType::Poanet,
"xdai" => SpecType::Xdai,
"volta" => SpecType::Volta,
"ewc" | "energyweb" => SpecType::Ewc,
"musicoin" => SpecType::Musicoin,
"ellaism" => SpecType::Ellaism,
"mix" => SpecType::Mix,
"callisto" => SpecType::Callisto,
"morden" => SpecType::Morden,
"ropsten" => SpecType::Ropsten,
"kovan" => SpecType::Kovan,
"rinkeby" => SpecType::Rinkeby,
"goerli" | "görli" | "testnet" => SpecType::Goerli,
"sokol" | "poasokol" => SpecType::Sokol,
"yolo3" => SpecType::Yolo3,
"dev" => SpecType::Dev,
other => SpecType::Custom(other.into()),
};
Ok(spec)
}
}
impl fmt::Display for SpecType {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.write_str(match *self {
SpecType::Foundation => "foundation",
SpecType::Poanet => "poanet",
SpecType::Xdai => "xdai",
SpecType::Volta => "volta",
SpecType::Ewc => "energyweb",
SpecType::Musicoin => "musicoin",
SpecType::Ellaism => "ellaism",
SpecType::Mix => "mix",
SpecType::Callisto => "callisto",
SpecType::Morden => "morden",
SpecType::Ropsten => "ropsten",
SpecType::Kovan => "kovan",
SpecType::Rinkeby => "rinkeby",
SpecType::Goerli => "goerli",
SpecType::Sokol => "sokol",
SpecType::Yolo3 => "yolo3",
SpecType::Dev => "dev",
SpecType::Custom(ref custom) => custom,
})
}
}
impl SpecType {
pub fn spec<'a, T: Into<SpecParams<'a>>>(&self, params: T) -> Result<Spec, String> {
let params = params.into();
match *self {
SpecType::Foundation => Ok(ethereum::new_foundation(params)),
SpecType::Poanet => Ok(ethereum::new_poanet(params)),
SpecType::Xdai => Ok(ethereum::new_xdai(params)),
SpecType::Volta => Ok(ethereum::new_volta(params)),
SpecType::Ewc => Ok(ethereum::new_ewc(params)),
SpecType::Musicoin => Ok(ethereum::new_musicoin(params)),
SpecType::Ellaism => Ok(ethereum::new_ellaism(params)),
SpecType::Mix => Ok(ethereum::new_mix(params)),
SpecType::Callisto => Ok(ethereum::new_callisto(params)),
SpecType::Morden => Ok(ethereum::new_morden(params)),
SpecType::Ropsten => Ok(ethereum::new_ropsten(params)),
SpecType::Kovan => Ok(ethereum::new_kovan(params)),
SpecType::Rinkeby => Ok(ethereum::new_rinkeby(params)),
SpecType::Goerli => Ok(ethereum::new_goerli(params)),
SpecType::Sokol => Ok(ethereum::new_sokol(params)),
SpecType::Yolo3 => Ok(ethereum::new_yolo3(params)),
SpecType::Dev => Ok(Spec::new_instant()),
SpecType::Custom(ref filename) => {
let file = fs::File::open(filename).map_err(|e| {
format!("Could not load specification file at {}: {}", filename, e)
})?;
Spec::load(params, file)
}
}
}
pub fn legacy_fork_name(&self) -> Option<String> {
match *self {
SpecType::Musicoin => Some("musicoin".to_owned()),
_ => None,
}
}
}
#[derive(Debug, PartialEq)]
pub enum Pruning {
Specific(Algorithm),
Auto,
}
impl Default for Pruning {
fn default() -> Self {
Pruning::Auto
}
}
impl str::FromStr for Pruning {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"auto" => Ok(Pruning::Auto),
other => other.parse().map(Pruning::Specific),
}
}
}
impl Pruning {
pub fn to_algorithm(&self, user_defaults: &UserDefaults) -> Algorithm {
match *self {
Pruning::Specific(algo) => algo,
Pruning::Auto => user_defaults.pruning,
}
}
}
#[derive(Debug, PartialEq)]
pub struct ResealPolicy {
pub own: bool,
pub external: bool,
}
impl Default for ResealPolicy {
fn default() -> Self {
ResealPolicy {
own: true,
external: true,
}
}
}
impl str::FromStr for ResealPolicy {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let (own, external) = match s {
"none" => (false, false),
"own" => (true, false),
"ext" => (false, true),
"all" => (true, true),
x => return Err(format!("Invalid reseal value: {}", x)),
};
let reseal = ResealPolicy {
own: own,
external: external,
};
Ok(reseal)
}
}
#[derive(Debug, PartialEq)]
pub struct AccountsConfig {
pub iterations: NonZeroU32,
pub refresh_time: u64,
pub testnet: bool,
pub password_files: Vec<String>,
pub unlocked_accounts: Vec<Address>,
pub enable_fast_unlock: bool,
}
impl Default for AccountsConfig {
fn default() -> Self {
AccountsConfig {
iterations: NonZeroU32::new(10240).expect("10240 > 0; qed"),
refresh_time: 5,
testnet: false,
password_files: Vec::new(),
unlocked_accounts: Vec::new(),
enable_fast_unlock: false,
}
}
}
#[derive(Debug, PartialEq)]
pub enum GasPricerConfig {
Fixed(U256),
Calibrated {
usd_per_tx: f32,
recalibration_period: Duration,
api_endpoint: String,
},
}
impl Default for GasPricerConfig {
fn default() -> Self {
GasPricerConfig::Calibrated {
usd_per_tx: 0.0001f32,
recalibration_period: Duration::from_secs(3600),
api_endpoint: configuration::ETHERSCAN_ETH_PRICE_ENDPOINT.to_string(),
}
}
}
impl GasPricerConfig {
pub fn to_gas_pricer(&self, fetch: FetchClient, p: Executor) -> GasPricer {
match *self {
GasPricerConfig::Fixed(u) => GasPricer::Fixed(u),
GasPricerConfig::Calibrated {
usd_per_tx,
recalibration_period,
ref api_endpoint,
} => GasPricer::new_calibrated(GasPriceCalibrator::new(
GasPriceCalibratorOptions {
usd_per_tx: usd_per_tx,
recalibration_period: recalibration_period,
},
fetch,
p,
api_endpoint.clone(),
)),
}
}
}
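
Two configuration sketches contrasting the variants: a fixed gas price given directly in wei, versus calibrated mode, which periodically re-derives the price from a USD-per-transaction target via the configured price-feed endpoint:

```rust
// Always use a flat 4 Gwei gas price.
let fixed = GasPricerConfig::Fixed(U256::from(4_000_000_000u64));

// Recalibrate once an hour so that a typical transaction costs roughly $0.0001.
let calibrated = GasPricerConfig::Calibrated {
    usd_per_tx: 0.0001,
    recalibration_period: Duration::from_secs(3600),
    api_endpoint: configuration::ETHERSCAN_ETH_PRICE_ENDPOINT.to_string(),
};
```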
#[derive(Debug, PartialEq)]
pub struct MinerExtras {
pub author: Address,
pub engine_signer: Address,
pub extra_data: Vec<u8>,
pub gas_range_target: (U256, U256),
pub work_notify: Vec<String>,
pub local_accounts: HashSet<Address>,
}
impl Default for MinerExtras {
fn default() -> Self {
MinerExtras {
author: Default::default(),
engine_signer: Default::default(),
extra_data: version_data(),
gas_range_target: (8_000_000.into(), 10_000_000.into()),
work_notify: Default::default(),
local_accounts: Default::default(),
}
}
}
/// 3-value enum.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Switch {
/// True.
On,
/// False.
Off,
/// Auto.
Auto,
}
impl Default for Switch {
fn default() -> Self {
Switch::Auto
}
}
impl str::FromStr for Switch {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"on" => Ok(Switch::On),
"off" => Ok(Switch::Off),
"auto" => Ok(Switch::Auto),
other => Err(format!("Invalid switch value: {}", other)),
}
}
}
pub fn tracing_switch_to_bool(
switch: Switch,
user_defaults: &UserDefaults,
) -> Result<bool, String> {
match (user_defaults.is_first_launch, switch, user_defaults.tracing) {
(false, Switch::On, false) => Err("TraceDB resync required".into()),
(_, Switch::On, _) => Ok(true),
(_, Switch::Off, _) => Ok(false),
(_, Switch::Auto, def) => Ok(def),
}
}
pub fn fatdb_switch_to_bool(
switch: Switch,
user_defaults: &UserDefaults,
_algorithm: Algorithm,
) -> Result<bool, String> {
let result = match (user_defaults.is_first_launch, switch, user_defaults.fat_db) {
(false, Switch::On, false) => Err("FatDB resync required".into()),
(_, Switch::On, _) => Ok(true),
(_, Switch::Off, _) => Ok(false),
(_, Switch::Auto, def) => Ok(def),
};
result
}
pub fn mode_switch_to_bool(
switch: Option<Mode>,
user_defaults: &UserDefaults,
) -> Result<Mode, String> {
Ok(switch.unwrap_or(user_defaults.mode().clone()))
}
#[cfg(test)]
mod tests {
use super::{tracing_switch_to_bool, Pruning, ResealPolicy, SpecType, Switch};
use crate::user_defaults::UserDefaults;
use journaldb::Algorithm;
#[test]
fn test_spec_type_parsing() {
assert_eq!(SpecType::Foundation, "eth".parse().unwrap());
assert_eq!(SpecType::Foundation, "ethereum".parse().unwrap());
assert_eq!(SpecType::Foundation, "foundation".parse().unwrap());
assert_eq!(SpecType::Foundation, "mainnet".parse().unwrap());
assert_eq!(SpecType::Poanet, "poanet".parse().unwrap());
assert_eq!(SpecType::Poanet, "poacore".parse().unwrap());
assert_eq!(SpecType::Xdai, "xdai".parse().unwrap());
assert_eq!(SpecType::Volta, "volta".parse().unwrap());
assert_eq!(SpecType::Ewc, "ewc".parse().unwrap());
assert_eq!(SpecType::Ewc, "energyweb".parse().unwrap());
assert_eq!(SpecType::Musicoin, "musicoin".parse().unwrap());
assert_eq!(SpecType::Ellaism, "ellaism".parse().unwrap());
assert_eq!(SpecType::Mix, "mix".parse().unwrap());
assert_eq!(SpecType::Callisto, "callisto".parse().unwrap());
assert_eq!(SpecType::Morden, "morden".parse().unwrap());
assert_eq!(SpecType::Ropsten, "ropsten".parse().unwrap());
assert_eq!(SpecType::Kovan, "kovan".parse().unwrap());
assert_eq!(SpecType::Rinkeby, "rinkeby".parse().unwrap());
assert_eq!(SpecType::Goerli, "goerli".parse().unwrap());
assert_eq!(SpecType::Goerli, "görli".parse().unwrap());
assert_eq!(SpecType::Goerli, "testnet".parse().unwrap());
assert_eq!(SpecType::Sokol, "sokol".parse().unwrap());
assert_eq!(SpecType::Sokol, "poasokol".parse().unwrap());
}
#[test]
fn test_spec_type_default() {
assert_eq!(SpecType::Foundation, SpecType::default());
}
#[test]
fn test_spec_type_display() {
assert_eq!(format!("{}", SpecType::Foundation), "foundation");
assert_eq!(format!("{}", SpecType::Poanet), "poanet");
assert_eq!(format!("{}", SpecType::Xdai), "xdai");
assert_eq!(format!("{}", SpecType::Volta), "volta");
assert_eq!(format!("{}", SpecType::Ewc), "energyweb");
assert_eq!(format!("{}", SpecType::Musicoin), "musicoin");
assert_eq!(format!("{}", SpecType::Ellaism), "ellaism");
assert_eq!(format!("{}", SpecType::Mix), "mix");
assert_eq!(format!("{}", SpecType::Callisto), "callisto");
assert_eq!(format!("{}", SpecType::Morden), "morden");
assert_eq!(format!("{}", SpecType::Ropsten), "ropsten");
assert_eq!(format!("{}", SpecType::Kovan), "kovan");
assert_eq!(format!("{}", SpecType::Rinkeby), "rinkeby");
assert_eq!(format!("{}", SpecType::Goerli), "goerli");
assert_eq!(format!("{}", SpecType::Sokol), "sokol");
assert_eq!(format!("{}", SpecType::Dev), "dev");
assert_eq!(format!("{}", SpecType::Custom("foo/bar".into())), "foo/bar");
}
#[test]
fn test_pruning_parsing() {
assert_eq!(Pruning::Auto, "auto".parse().unwrap());
assert_eq!(
Pruning::Specific(Algorithm::Archive),
"archive".parse().unwrap()
);
assert_eq!(
Pruning::Specific(Algorithm::EarlyMerge),
"light".parse().unwrap()
);
assert_eq!(
Pruning::Specific(Algorithm::OverlayRecent),
"fast".parse().unwrap()
);
assert_eq!(
Pruning::Specific(Algorithm::RefCounted),
"basic".parse().unwrap()
);
}
#[test]
fn test_pruning_default() {
assert_eq!(Pruning::Auto, Pruning::default());
}
#[test]
fn test_reseal_policy_parsing() {
let none = ResealPolicy {
own: false,
external: false,
};
let own = ResealPolicy {
own: true,
external: false,
};
let ext = ResealPolicy {
own: false,
external: true,
};
let all = ResealPolicy {
own: true,
external: true,
};
assert_eq!(none, "none".parse().unwrap());
assert_eq!(own, "own".parse().unwrap());
assert_eq!(ext, "ext".parse().unwrap());
assert_eq!(all, "all".parse().unwrap());
}
#[test]
fn test_reseal_policy_default() {
let all = ResealPolicy {
own: true,
external: true,
};
assert_eq!(all, ResealPolicy::default());
}
#[test]
fn test_switch_parsing() {
assert_eq!(Switch::On, "on".parse().unwrap());
assert_eq!(Switch::Off, "off".parse().unwrap());
assert_eq!(Switch::Auto, "auto".parse().unwrap());
}
#[test]
fn test_switch_default() {
assert_eq!(Switch::default(), Switch::Auto);
}
fn user_defaults_with_tracing(first_launch: bool, tracing: bool) -> UserDefaults {
let mut ud = UserDefaults::default();
ud.is_first_launch = first_launch;
ud.tracing = tracing;
ud
}
#[test]
fn test_switch_to_bool() {
assert!(
!tracing_switch_to_bool(Switch::Off, &user_defaults_with_tracing(true, true)).unwrap()
);
assert!(
!tracing_switch_to_bool(Switch::Off, &user_defaults_with_tracing(true, false)).unwrap()
);
assert!(
!tracing_switch_to_bool(Switch::Off, &user_defaults_with_tracing(false, true)).unwrap()
);
assert!(
!tracing_switch_to_bool(Switch::Off, &user_defaults_with_tracing(false, false))
.unwrap()
);
assert!(
tracing_switch_to_bool(Switch::On, &user_defaults_with_tracing(true, true)).unwrap()
);
assert!(
tracing_switch_to_bool(Switch::On, &user_defaults_with_tracing(true, false)).unwrap()
);
assert!(
tracing_switch_to_bool(Switch::On, &user_defaults_with_tracing(false, true)).unwrap()
);
assert!(
tracing_switch_to_bool(Switch::On, &user_defaults_with_tracing(false, false)).is_err()
);
}
}


@@ -1,65 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crate::{
helpers::{password_from_file, password_prompt},
params::SpecType,
};
use crypto::publickey;
use ethkey::Password;
use ethstore::PresaleWallet;
use std::num::NonZeroU32;
#[derive(Debug, PartialEq)]
pub struct ImportWallet {
pub iterations: NonZeroU32,
pub path: String,
pub spec: SpecType,
pub wallet_path: String,
pub password_file: Option<String>,
}
pub fn execute(cmd: ImportWallet) -> Result<String, String> {
let password = match cmd.password_file.clone() {
Some(file) => password_from_file(file)?,
None => password_prompt()?,
};
let wallet = PresaleWallet::open(cmd.wallet_path.clone())
.map_err(|_| "Unable to open presale wallet.")?;
let kp = wallet.decrypt(&password).map_err(|_| "Invalid password.")?;
let address = kp.address();
import_account(&cmd, kp, password);
Ok(format!("{:?}", address))
}
#[cfg(feature = "accounts")]
pub fn import_account(cmd: &ImportWallet, kp: publickey::KeyPair, password: Password) {
use accounts::{AccountProvider, AccountProviderSettings};
use ethstore::{accounts_dir::RootDiskDirectory, EthStore};
let dir = Box::new(RootDiskDirectory::create(cmd.path.clone()).unwrap());
let secret_store = Box::new(EthStore::open_with_iterations(dir, cmd.iterations).unwrap());
let acc_provider = AccountProvider::new(secret_store, AccountProviderSettings::default());
acc_provider
.insert_account(kp.secret().clone(), &password)
.unwrap();
}
#[cfg(not(feature = "accounts"))]
pub fn import_account(_cmd: &ImportWallet, _kp: publickey::KeyPair, _password: Password) {}


@@ -1,364 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{collections::HashSet, io, path::PathBuf, sync::Arc};
use crate::{
helpers::parity_ipc_path,
rpc_apis::{self, ApiSet},
};
use dir::{default_data_path, helpers::replace_home};
use jsonrpc_core::MetaIoHandler;
use parity_rpc::{
self as rpc,
informant::{Middleware, RpcStats},
DomainsValidation, Metadata,
};
use parity_runtime::Executor;
pub use parity_rpc::{HttpServer, IpcServer, RequestMiddleware};
//pub use parity_rpc::ws::Server as WsServer;
pub use parity_rpc::ws::{ws, Server as WsServer};
pub const DAPPS_DOMAIN: &'static str = "web3.site";
#[derive(Debug, Clone, PartialEq)]
pub struct HttpConfiguration {
pub enabled: bool,
pub interface: String,
pub port: u16,
pub apis: ApiSet,
pub cors: Option<Vec<String>>,
pub hosts: Option<Vec<String>>,
pub server_threads: usize,
pub processing_threads: usize,
pub max_payload: usize,
pub keep_alive: bool,
}
impl Default for HttpConfiguration {
fn default() -> Self {
HttpConfiguration {
enabled: true,
interface: "127.0.0.1".into(),
port: 8545,
apis: ApiSet::UnsafeContext,
cors: Some(vec![]),
hosts: Some(vec![]),
server_threads: 1,
processing_threads: 4,
max_payload: 5,
keep_alive: true,
}
}
}
#[derive(Debug, PartialEq)]
pub struct IpcConfiguration {
pub enabled: bool,
pub socket_addr: String,
pub apis: ApiSet,
}
impl Default for IpcConfiguration {
fn default() -> Self {
IpcConfiguration {
enabled: true,
socket_addr: if cfg!(windows) {
r"\\.\pipe\jsonrpc.ipc".into()
} else {
let data_dir = ::dir::default_data_path();
parity_ipc_path(&data_dir, "$BASE/jsonrpc.ipc", 0)
},
apis: ApiSet::IpcContext,
}
}
}
#[derive(Debug, Clone, PartialEq)]
pub struct WsConfiguration {
pub enabled: bool,
pub interface: String,
pub port: u16,
pub apis: ApiSet,
pub max_connections: usize,
pub origins: Option<Vec<String>>,
pub hosts: Option<Vec<String>>,
pub signer_path: PathBuf,
pub support_token_api: bool,
pub max_payload: usize,
}
impl Default for WsConfiguration {
fn default() -> Self {
let data_dir = default_data_path();
WsConfiguration {
enabled: true,
interface: "127.0.0.1".into(),
port: 8546,
apis: ApiSet::UnsafeContext,
max_connections: 100,
origins: Some(vec![
"parity://*".into(),
"chrome-extension://*".into(),
"moz-extension://*".into(),
]),
hosts: Some(Vec::new()),
signer_path: replace_home(&data_dir, "$BASE/signer").into(),
support_token_api: true,
max_payload: 5,
}
}
}
impl WsConfiguration {
pub fn address(&self) -> Option<rpc::Host> {
address(self.enabled, &self.interface, self.port, &self.hosts)
}
}
fn address(
enabled: bool,
bind_iface: &str,
bind_port: u16,
hosts: &Option<Vec<String>>,
) -> Option<rpc::Host> {
if !enabled {
return None;
}
match *hosts {
Some(ref hosts) if !hosts.is_empty() => Some(hosts[0].clone().into()),
_ => Some(format!("{}:{}", bind_iface, bind_port).into()),
}
}
pub struct Dependencies<D: rpc_apis::Dependencies> {
pub apis: Arc<D>,
pub executor: Executor,
pub stats: Arc<RpcStats>,
}
pub fn new_ws<D: rpc_apis::Dependencies>(
conf: WsConfiguration,
deps: &Dependencies<D>,
) -> Result<Option<WsServer>, String> {
if !conf.enabled {
return Ok(None);
}
let domain = DAPPS_DOMAIN;
let url = format!("{}:{}", conf.interface, conf.port);
let addr = url
.parse()
.map_err(|_| format!("Invalid WebSockets listen host/port given: {}", url))?;
let full_handler = setup_apis(rpc_apis::ApiSet::All, deps);
let handler = {
let mut handler = MetaIoHandler::with_middleware((
rpc::WsDispatcher::new(full_handler),
Middleware::new(deps.stats.clone(), deps.apis.activity_notifier()),
));
let apis = conf.apis.list_apis();
deps.apis.extend_with_set(&mut handler, &apis);
handler
};
let allowed_origins = into_domains(with_domain(conf.origins, domain, &None));
let allowed_hosts = into_domains(with_domain(conf.hosts, domain, &Some(url.clone().into())));
let signer_path;
let path = match conf.support_token_api {
true => {
signer_path = crate::signer::codes_path(&conf.signer_path);
Some(signer_path.as_path())
}
false => None,
};
let start_result = rpc::start_ws(
&addr,
handler,
allowed_origins,
allowed_hosts,
conf.max_connections,
rpc::WsExtractor::new(path.clone()),
rpc::WsExtractor::new(path.clone()),
rpc::WsStats::new(deps.stats.clone()),
conf.max_payload,
);
// match start_result {
// Ok(server) => Ok(Some(server)),
// Err(rpc::ws::Error::Io(rpc::ws::ErrorKind::Io(ref err), _)) if err.kind() == io::ErrorKind::AddrInUse => Err(
// format!("WebSockets address {} is already in use, make sure that another instance of an Ethereum client is not running or change the address using the --ws-port and --ws-interface options.", url)
// ),
// Err(e) => Err(format!("WebSockets error: {:?}", e)),
// }
match start_result {
Ok(server) => Ok(Some(server)),
Err(rpc::ws::Error::WsError(ws::Error {
kind: ws::ErrorKind::Io(ref err), ..
})) if err.kind() == io::ErrorKind::AddrInUse => Err(
format!("WebSockets address {} is already in use, make sure that another instance of an Ethereum client is not running or change the address using the --ws-port and --ws-interface options.", url)
),
Err(e) => Err(format!("WebSockets error: {:?}", e)),
}
}
pub fn new_http<D: rpc_apis::Dependencies>(
id: &str,
options: &str,
conf: HttpConfiguration,
deps: &Dependencies<D>,
) -> Result<Option<HttpServer>, String> {
if !conf.enabled {
return Ok(None);
}
let domain = DAPPS_DOMAIN;
let url = format!("{}:{}", conf.interface, conf.port);
let addr = url
.parse()
.map_err(|_| format!("Invalid {} listen host/port given: {}", id, url))?;
let handler = setup_apis(conf.apis, deps);
let cors_domains = into_domains(conf.cors);
let allowed_hosts = into_domains(with_domain(conf.hosts, domain, &Some(url.clone().into())));
let start_result = rpc::start_http(
&addr,
cors_domains,
allowed_hosts,
handler,
rpc::RpcExtractor,
conf.server_threads,
conf.max_payload,
conf.keep_alive,
);
match start_result {
Ok(server) => Ok(Some(server)),
Err(ref err) if err.kind() == io::ErrorKind::AddrInUse => Err(
format!("{} address {} is already in use, make sure that another instance of an Ethereum client is not running or change the address using the --{}-port and --{}-interface options.", id, url, options, options)
),
Err(e) => Err(format!("{} error: {:?}", id, e)),
}
}
pub fn new_ipc<D: rpc_apis::Dependencies>(
conf: IpcConfiguration,
dependencies: &Dependencies<D>,
) -> Result<Option<IpcServer>, String> {
if !conf.enabled {
return Ok(None);
}
let handler = setup_apis(conf.apis, dependencies);
let path = PathBuf::from(&conf.socket_addr);
// Make sure socket file can be created on unix-like OS.
// Windows pipe paths are not on the FS.
if !cfg!(windows) {
if let Some(dir) = path.parent() {
::std::fs::create_dir_all(&dir).map_err(|err| {
format!(
"Unable to create IPC directory at {}: {}",
dir.display(),
err
)
})?;
}
}
match rpc::start_ipc(&conf.socket_addr, handler, rpc::RpcExtractor) {
Ok(server) => Ok(Some(server)),
Err(io_error) => Err(format!("IPC error: {}", io_error)),
}
}
fn into_domains<T: From<String>>(items: Option<Vec<String>>) -> DomainsValidation<T> {
items
.map(|vals| vals.into_iter().map(T::from).collect())
.into()
}
fn with_domain(
items: Option<Vec<String>>,
domain: &str,
dapps_address: &Option<rpc::Host>,
) -> Option<Vec<String>> {
fn extract_port(s: &str) -> Option<u16> {
s.split(':').nth(1).and_then(|s| s.parse().ok())
}
items.map(move |items| {
let mut items = items.into_iter().collect::<HashSet<_>>();
{
let mut add_hosts = |address: &Option<rpc::Host>| {
if let Some(host) = address.clone() {
items.insert(host.to_string());
items.insert(host.replace("127.0.0.1", "localhost"));
items.insert(format!("http://*.{}", domain)); //proxypac
if let Some(port) = extract_port(&*host) {
items.insert(format!("http://*.{}:{}", domain, port));
}
}
};
add_hosts(dapps_address);
}
items.into_iter().collect()
})
}
pub fn setup_apis<D>(
apis: ApiSet,
deps: &Dependencies<D>,
) -> MetaIoHandler<Metadata, Middleware<D::Notifier>>
where
D: rpc_apis::Dependencies,
{
let mut handler = MetaIoHandler::with_middleware(Middleware::new(
deps.stats.clone(),
deps.apis.activity_notifier(),
));
let apis = apis.list_apis();
deps.apis.extend_with_set(&mut handler, &apis);
handler
}
#[cfg(test)]
mod tests {
use super::address;
#[test]
fn should_return_proper_address() {
assert_eq!(address(false, "localhost", 8180, &None), None);
assert_eq!(
address(true, "localhost", 8180, &None),
Some("localhost:8180".into())
);
assert_eq!(
address(true, "localhost", 8180, &Some(vec!["host:443".into()])),
Some("host:443".into())
);
assert_eq!(
address(true, "localhost", 8180, &Some(vec!["host".into()])),
Some("host".into())
);
}
}


@@ -1,646 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{
cmp::PartialEq,
collections::{BTreeMap, HashSet},
str::FromStr,
sync::Arc,
};
pub use parity_rpc::signer::SignerService;
use crate::{
account_utils::{self, AccountProvider},
miner::external::ExternalMiner,
sync::{ManageNetwork, SyncProvider},
};
use ethcore::{client::Client, miner::Miner, snapshot::SnapshotService};
use ethcore_logger::RotatingLogger;
use fetch::Client as FetchClient;
use jsonrpc_core::{self as core, MetaIoHandler};
use parity_rpc::{
dispatch::FullDispatcher,
informant::{ActivityNotifier, ClientNotifier},
Host, Metadata, NetworkSettings,
};
use parity_runtime::Executor;
use parking_lot::Mutex;
#[derive(Debug, PartialEq, Clone, Eq, Hash)]
pub enum Api {
/// Web3 (Safe)
Web3,
/// Net (Safe)
Net,
/// Eth (Safe)
Eth,
/// Eth Pub-Sub (Safe)
EthPubSub,
/// Geth-compatible "personal" API (DEPRECATED; only used in `--geth` mode.)
Personal,
/// Signer - Confirm transactions in Signer (UNSAFE: Passwords, List of transactions)
Signer,
/// Parity - Custom extensions (Safe)
Parity,
/// Traces (Safe)
Traces,
/// Rpc (Safe)
Rpc,
/// Parity PubSub - Generic Publish-Subscriber (Safety depends on other APIs exposed).
ParityPubSub,
/// Parity Accounts extensions (UNSAFE: Passwords, Side Effects (new account))
ParityAccounts,
/// Parity - Set methods (UNSAFE: Side Effects affecting node operation)
ParitySet,
/// SecretStore (UNSAFE: arbitrary hash signing)
SecretStore,
/// Geth-compatible (best-effort) debug API (Potentially UNSAFE)
/// NOTE We don't aim to support all methods, only the ones that are useful.
Debug,
}
impl FromStr for Api {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
use self::Api::*;
match s {
"debug" => Ok(Debug),
"eth" => Ok(Eth),
"net" => Ok(Net),
"parity" => Ok(Parity),
"parity_accounts" => Ok(ParityAccounts),
"parity_pubsub" => Ok(ParityPubSub),
"parity_set" => Ok(ParitySet),
"personal" => Ok(Personal),
"pubsub" => Ok(EthPubSub),
"rpc" => Ok(Rpc),
"secretstore" => Ok(SecretStore),
"signer" => Ok(Signer),
"traces" => Ok(Traces),
"web3" => Ok(Web3),
api => Err(format!("Unknown api: {}", api)),
}
}
}
#[derive(Debug, Clone)]
pub enum ApiSet {
// Unsafe context (like jsonrpc over http)
UnsafeContext,
// All possible APIs (safe context like token-protected WS interface)
All,
// Local "unsafe" context and accounts access
IpcContext,
// APIs for Parity Generic Pub-Sub
PubSub,
// Fixed list of APIs
List(HashSet<Api>),
}
impl Default for ApiSet {
fn default() -> Self {
ApiSet::UnsafeContext
}
}
impl PartialEq for ApiSet {
fn eq(&self, other: &Self) -> bool {
self.list_apis() == other.list_apis()
}
}
impl FromStr for ApiSet {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut apis = HashSet::new();
for api in s.split(',') {
match api {
"all" => {
apis.extend(ApiSet::All.list_apis());
}
"safe" => {
// Safe APIs are those that are safe even in UnsafeContext.
apis.extend(ApiSet::UnsafeContext.list_apis());
}
// Remove the API
api if api.starts_with("-") => {
let api = api[1..].parse()?;
apis.remove(&api);
}
api => {
let api = api.parse()?;
apis.insert(api);
}
}
}
Ok(ApiSet::List(apis))
}
}
fn to_modules(apis: &HashSet<Api>) -> BTreeMap<String, String> {
let mut modules = BTreeMap::new();
for api in apis {
let (name, version) = match *api {
Api::Debug => ("debug", "1.0"),
Api::Eth => ("eth", "1.0"),
Api::EthPubSub => ("pubsub", "1.0"),
Api::Net => ("net", "1.0"),
Api::Parity => ("parity", "1.0"),
Api::ParityAccounts => ("parity_accounts", "1.0"),
Api::ParityPubSub => ("parity_pubsub", "1.0"),
Api::ParitySet => ("parity_set", "1.0"),
Api::Personal => ("personal", "1.0"),
Api::Rpc => ("rpc", "1.0"),
Api::SecretStore => ("secretstore", "1.0"),
Api::Signer => ("signer", "1.0"),
Api::Traces => ("traces", "1.0"),
Api::Web3 => ("web3", "1.0"),
};
modules.insert(name.into(), version.into());
}
modules
}
macro_rules! add_signing_methods {
($namespace:ident, $handler:expr, $deps:expr, $dispatch:expr) => {{
let deps = &$deps;
let (dispatcher, accounts) = $dispatch;
if deps.signer_service.is_enabled() {
$handler.extend_with($namespace::to_delegate(SigningQueueClient::new(
&deps.signer_service,
dispatcher.clone(),
deps.executor.clone(),
accounts,
)))
} else {
$handler.extend_with($namespace::to_delegate(SigningUnsafeClient::new(
accounts,
dispatcher.clone(),
)))
}
}};
}
/// RPC dependencies can be used to initialize RPC endpoints from APIs.
pub trait Dependencies {
type Notifier: ActivityNotifier;
/// Create the activity notifier.
fn activity_notifier(&self) -> Self::Notifier;
/// Extend the given I/O handler with endpoints for each API.
fn extend_with_set<S>(&self, handler: &mut MetaIoHandler<Metadata, S>, apis: &HashSet<Api>)
where
S: core::Middleware<Metadata>;
}
/// RPC dependencies for a full node.
pub struct FullDependencies {
pub signer_service: Arc<SignerService>,
pub client: Arc<Client>,
pub snapshot: Arc<dyn SnapshotService>,
pub sync: Arc<dyn SyncProvider>,
pub net: Arc<dyn ManageNetwork>,
pub accounts: Arc<AccountProvider>,
pub miner: Arc<Miner>,
pub external_miner: Arc<ExternalMiner>,
pub logger: Arc<RotatingLogger>,
pub settings: Arc<NetworkSettings>,
pub net_service: Arc<dyn ManageNetwork>,
pub experimental_rpcs: bool,
pub ws_address: Option<Host>,
pub fetch: FetchClient,
pub executor: Executor,
pub gas_price_percentile: usize,
pub poll_lifetime: u32,
pub allow_missing_blocks: bool,
pub no_ancient_blocks: bool,
}
impl FullDependencies {
fn extend_api<S>(
&self,
handler: &mut MetaIoHandler<Metadata, S>,
apis: &HashSet<Api>,
for_generic_pubsub: bool,
) where
S: core::Middleware<Metadata>,
{
use parity_rpc::v1::*;
let nonces = Arc::new(Mutex::new(dispatch::Reservations::new(
self.executor.clone(),
)));
let dispatcher = FullDispatcher::new(
self.client.clone(),
self.miner.clone(),
nonces.clone(),
self.gas_price_percentile,
);
let account_signer = Arc::new(dispatch::Signer::new(self.accounts.clone())) as _;
let accounts = account_utils::accounts_list(self.accounts.clone());
for api in apis {
match *api {
Api::Debug => {
handler.extend_with(DebugClient::new(self.client.clone()).to_delegate());
}
Api::Web3 => {
handler.extend_with(Web3Client::default().to_delegate());
}
Api::Net => {
handler.extend_with(NetClient::new(&self.sync).to_delegate());
}
Api::Eth => {
let client = EthClient::new(
&self.client,
&self.snapshot,
&self.sync,
&accounts,
&self.miner,
&self.external_miner,
EthClientOptions {
gas_price_percentile: self.gas_price_percentile,
allow_missing_blocks: self.allow_missing_blocks,
allow_experimental_rpcs: self.experimental_rpcs,
no_ancient_blocks: self.no_ancient_blocks,
},
);
handler.extend_with(client.to_delegate());
if !for_generic_pubsub {
let filter_client = EthFilterClient::new(
self.client.clone(),
self.miner.clone(),
self.poll_lifetime,
);
handler.extend_with(filter_client.to_delegate());
add_signing_methods!(
EthSigning,
handler,
self,
(&dispatcher, &account_signer)
);
}
}
Api::EthPubSub => {
if !for_generic_pubsub {
let client =
EthPubSubClient::new(self.client.clone(), self.executor.clone());
let h = client.handler();
self.miner
.add_transactions_listener(Box::new(move |hashes| {
if let Some(h) = h.upgrade() {
h.notify_new_transactions(hashes);
}
}));
if let Some(h) = client.handler().upgrade() {
self.client.add_notify(h);
}
handler.extend_with(client.to_delegate());
}
}
Api::Personal => {
#[cfg(feature = "accounts")]
handler.extend_with(
PersonalClient::new(
&self.accounts,
dispatcher.clone(),
self.experimental_rpcs,
)
.to_delegate(),
);
}
Api::Signer => {
handler.extend_with(
SignerClient::new(
account_signer.clone(),
dispatcher.clone(),
&self.signer_service,
self.executor.clone(),
)
.to_delegate(),
);
}
Api::Parity => {
let signer = match self.signer_service.is_enabled() {
true => Some(self.signer_service.clone()),
false => None,
};
handler.extend_with(
ParityClient::new(
self.client.clone(),
self.miner.clone(),
self.sync.clone(),
self.net_service.clone(),
self.logger.clone(),
self.settings.clone(),
signer,
self.ws_address.clone(),
self.snapshot.clone().into(),
)
.to_delegate(),
);
#[cfg(feature = "accounts")]
handler.extend_with(ParityAccountsInfo::to_delegate(
ParityAccountsClient::new(&self.accounts),
));
if !for_generic_pubsub {
add_signing_methods!(
ParitySigning,
handler,
self,
(&dispatcher, &account_signer)
);
}
}
Api::ParityPubSub => {
if !for_generic_pubsub {
let mut rpc = MetaIoHandler::default();
let apis = ApiSet::List(apis.clone())
.retain(ApiSet::PubSub)
.list_apis();
self.extend_api(&mut rpc, &apis, true);
handler.extend_with(
PubSubClient::new(rpc, self.executor.clone()).to_delegate(),
);
}
}
Api::ParityAccounts => {
#[cfg(feature = "accounts")]
handler.extend_with(ParityAccounts::to_delegate(ParityAccountsClient::new(
&self.accounts,
)));
}
Api::ParitySet => {
handler.extend_with(
ParitySetClient::new(
&self.client,
&self.miner,
&self.net_service,
self.fetch.clone(),
)
.to_delegate(),
);
#[cfg(feature = "accounts")]
handler.extend_with(
ParitySetAccountsClient::new(&self.accounts, &self.miner).to_delegate(),
);
}
Api::Traces => handler.extend_with(TracesClient::new(&self.client).to_delegate()),
Api::Rpc => {
let modules = to_modules(&apis);
handler.extend_with(RpcClient::new(modules).to_delegate());
}
Api::SecretStore => {
#[cfg(feature = "accounts")]
handler.extend_with(SecretStoreClient::new(&self.accounts).to_delegate());
}
}
}
}
}
impl Dependencies for FullDependencies {
type Notifier = ClientNotifier;
fn activity_notifier(&self) -> ClientNotifier {
ClientNotifier {
client: self.client.clone(),
}
}
fn extend_with_set<S>(&self, handler: &mut MetaIoHandler<Metadata, S>, apis: &HashSet<Api>)
where
S: core::Middleware<Metadata>,
{
self.extend_api(handler, apis, false)
}
}
impl ApiSet {
/// Retains only APIs in given set.
pub fn retain(self, set: Self) -> Self {
ApiSet::List(&self.list_apis() & &set.list_apis())
}
pub fn list_apis(&self) -> HashSet<Api> {
let mut public_list: HashSet<Api> = [
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::Rpc,
]
.iter()
.cloned()
.collect();
match *self {
ApiSet::List(ref apis) => apis.clone(),
ApiSet::UnsafeContext => {
public_list.insert(Api::Traces);
public_list.insert(Api::ParityPubSub);
public_list
}
ApiSet::IpcContext => {
public_list.insert(Api::Traces);
public_list.insert(Api::ParityPubSub);
public_list.insert(Api::ParityAccounts);
public_list
}
ApiSet::All => {
public_list.insert(Api::Debug);
public_list.insert(Api::Traces);
public_list.insert(Api::ParityPubSub);
public_list.insert(Api::ParityAccounts);
public_list.insert(Api::ParitySet);
public_list.insert(Api::Signer);
public_list.insert(Api::Personal);
public_list.insert(Api::SecretStore);
public_list
}
ApiSet::PubSub => [
Api::Eth,
Api::Parity,
Api::ParityAccounts,
Api::ParitySet,
Api::Traces,
]
.iter()
.cloned()
.collect(),
}
}
}
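
A brief sketch of how `retain` composes sets, mirroring its use for `ParityPubSub` in `extend_api` above: intersecting `All` with `PubSub` keeps only the APIs exposed through the generic pub-sub handler:

```rust
// Intersection of two predefined sets; the result is always an ApiSet::List.
let pubsub_apis = ApiSet::All.retain(ApiSet::PubSub).list_apis();
assert!(pubsub_apis.contains(&Api::Eth));
// `Web3` is in `All` but not in `PubSub`, so it is filtered out.
assert!(!pubsub_apis.contains(&Api::Web3));
```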
#[cfg(test)]
mod test {
use super::{Api, ApiSet};
#[test]
fn test_api_parsing() {
assert_eq!(Api::Debug, "debug".parse().unwrap());
assert_eq!(Api::Web3, "web3".parse().unwrap());
assert_eq!(Api::Net, "net".parse().unwrap());
assert_eq!(Api::Eth, "eth".parse().unwrap());
assert_eq!(Api::EthPubSub, "pubsub".parse().unwrap());
assert_eq!(Api::Personal, "personal".parse().unwrap());
assert_eq!(Api::Signer, "signer".parse().unwrap());
assert_eq!(Api::Parity, "parity".parse().unwrap());
assert_eq!(Api::ParityAccounts, "parity_accounts".parse().unwrap());
assert_eq!(Api::ParitySet, "parity_set".parse().unwrap());
assert_eq!(Api::Traces, "traces".parse().unwrap());
assert_eq!(Api::Rpc, "rpc".parse().unwrap());
assert_eq!(Api::SecretStore, "secretstore".parse().unwrap());
assert!("rp".parse::<Api>().is_err());
}
#[test]
fn test_api_set_default() {
assert_eq!(ApiSet::UnsafeContext, ApiSet::default());
}
#[test]
fn test_api_set_parsing() {
assert_eq!(
ApiSet::List(vec![Api::Web3, Api::Eth].into_iter().collect()),
"web3,eth".parse().unwrap()
);
}
#[test]
fn test_api_set_unsafe_context() {
let expected = vec![
// make sure this list contains only SAFE methods
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::ParityPubSub,
Api::Traces,
Api::Rpc,
]
.into_iter()
.collect();
assert_eq!(ApiSet::UnsafeContext.list_apis(), expected);
}
#[test]
fn test_api_set_ipc_context() {
let expected = vec![
// safe
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::ParityPubSub,
Api::Traces,
Api::Rpc,
// semi-safe
Api::ParityAccounts,
]
.into_iter()
.collect();
assert_eq!(ApiSet::IpcContext.list_apis(), expected);
}
#[test]
fn test_all_apis() {
assert_eq!(
"all".parse::<ApiSet>().unwrap(),
ApiSet::List(
vec![
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::ParityPubSub,
Api::Traces,
Api::Rpc,
Api::SecretStore,
Api::ParityAccounts,
Api::ParitySet,
Api::Signer,
Api::Personal,
Api::Debug,
]
.into_iter()
.collect()
)
);
}
#[test]
fn test_all_without_personal_apis() {
assert_eq!(
"personal,all,-personal".parse::<ApiSet>().unwrap(),
ApiSet::List(
vec![
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::ParityPubSub,
Api::Traces,
Api::Rpc,
Api::SecretStore,
Api::ParityAccounts,
Api::ParitySet,
Api::Signer,
Api::Debug,
]
.into_iter()
.collect()
)
);
}
#[test]
fn test_safe_parsing() {
assert_eq!(
"safe".parse::<ApiSet>().unwrap(),
ApiSet::List(
vec![
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::ParityPubSub,
Api::Traces,
Api::Rpc,
]
.into_iter()
.collect()
)
);
}
}


@@ -1,751 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{
any::Any,
str::FromStr,
sync::{atomic, Arc, Weak},
thread,
time::{Duration, Instant},
};
use crate::{
account_utils,
cache::CacheConfig,
db,
helpers::{execute_upgrades, passwords_from_files, to_client_config},
informant::{FullNodeInformantData, Informant},
metrics::{start_prometheus_metrics, MetricsConfiguration},
miner::{external::ExternalMiner, work_notify::WorkPoster},
modules,
params::{
fatdb_switch_to_bool, mode_switch_to_bool, tracing_switch_to_bool, AccountsConfig,
GasPricerConfig, MinerExtras, Pruning, SpecType, Switch,
},
rpc, rpc_apis, secretstore, signer,
sync::{self, SyncConfig},
user_defaults::UserDefaults,
};
use ansi_term::Colour;
use dir::{DatabaseDirectories, Directories};
use ethcore::{
client::{BlockChainClient, BlockInfo, Client, DatabaseCompactionProfile, Mode, VMType},
miner::{self, stratum, Miner, MinerOptions, MinerService},
snapshot::{self, SnapshotConfiguration},
verification::queue::VerifierSettings,
};
use ethcore_logger::{Config as LogConfig, RotatingLogger};
use ethcore_service::ClientService;
use ethereum_types::{H256, U64};
use journaldb::Algorithm;
use jsonrpc_core;
use node_filter::NodeFilter;
use parity_rpc::{
informant, is_major_importing, FutureOutput, FutureResponse, FutureResult, Metadata,
NetworkSettings, Origin, PubSubSession,
};
use parity_runtime::Runtime;
use parity_version::version;
// How often we attempt to take a snapshot: only snapshot on block numbers that are multiples of this.
const SNAPSHOT_PERIOD: u64 = 20000;
// Start snapshotting from `tip` - `history`; with this we want to bypass reorgs. Should be smaller than the pruning history.
const SNAPSHOT_HISTORY: u64 = 50;
// Full client number of DNS threads
const FETCH_FULL_NUM_DNS_THREADS: usize = 4;
#[derive(Debug, PartialEq)]
pub struct RunCmd {
pub cache_config: CacheConfig,
pub dirs: Directories,
pub spec: SpecType,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
/// Some if execution should be daemonized. Contains pid_file path.
pub daemon: Option<String>,
pub logger_config: LogConfig,
pub miner_options: MinerOptions,
pub gas_price_percentile: usize,
pub poll_lifetime: u32,
pub ws_conf: rpc::WsConfiguration,
pub http_conf: rpc::HttpConfiguration,
pub ipc_conf: rpc::IpcConfiguration,
pub net_conf: sync::NetworkConfiguration,
pub network_id: Option<u64>,
pub warp_sync: bool,
pub warp_barrier: Option<u64>,
pub acc_conf: AccountsConfig,
pub gas_pricer_conf: GasPricerConfig,
pub miner_extras: MinerExtras,
pub mode: Option<Mode>,
pub tracing: Switch,
pub fat_db: Switch,
pub compaction: DatabaseCompactionProfile,
pub vm_type: VMType,
pub experimental_rpcs: bool,
pub net_settings: NetworkSettings,
pub secretstore_conf: secretstore::Configuration,
pub name: String,
pub custom_bootnodes: bool,
pub stratum: Option<stratum::Options>,
pub snapshot_conf: SnapshotConfiguration,
pub check_seal: bool,
pub allow_missing_blocks: bool,
pub download_old_blocks: bool,
pub verifier_settings: VerifierSettings,
pub no_persistent_txqueue: bool,
pub max_round_blocks_to_import: usize,
pub metrics_conf: MetricsConfiguration,
}
// node info fetcher for the local store.
struct FullNodeInfo {
miner: Option<Arc<Miner>>, // TODO: only TXQ needed, just use that after decoupling.
}
impl crate::local_store::NodeInfo for FullNodeInfo {
fn pending_transactions(&self) -> Vec<crate::types::transaction::PendingTransaction> {
let miner = match self.miner.as_ref() {
Some(m) => m,
None => return Vec::new(),
};
miner
.local_transactions()
.values()
.filter_map(|status| match *status {
crate::miner::pool::local_transactions::Status::Pending(ref tx) => {
Some(tx.pending().clone())
}
_ => None,
})
.collect()
}
}
/// Executes the given run command.
///
/// On error, returns what to print on stderr.
pub fn execute(cmd: RunCmd, logger: Arc<RotatingLogger>) -> Result<RunningClient, String> {
// load spec
let spec = cmd.spec.spec(&cmd.dirs.cache)?;
// load genesis hash
let genesis_hash = spec.genesis_header().hash();
// database paths
let db_dirs = cmd.dirs.database(
genesis_hash,
cmd.spec.legacy_fork_name(),
spec.data_dir.clone(),
);
// user defaults path
let user_defaults_path = db_dirs.user_defaults_path();
// load user defaults
let mut user_defaults = UserDefaults::load(&user_defaults_path)?;
// select pruning algorithm
let algorithm = cmd.pruning.to_algorithm(&user_defaults);
// check if tracing is on
let tracing = tracing_switch_to_bool(cmd.tracing, &user_defaults)?;
// check if fatdb is on
let fat_db = fatdb_switch_to_bool(cmd.fat_db, &user_defaults, algorithm)?;
// get the mode
let mode = mode_switch_to_bool(cmd.mode, &user_defaults)?;
trace!(target: "mode", "mode is {:?}", mode);
let network_enabled = match mode {
Mode::Dark(_) | Mode::Off => false,
_ => true,
};
// prepare client and snapshot paths.
let client_path = db_dirs.client_path(algorithm);
let snapshot_path = db_dirs.snapshot_path();
// execute upgrades
execute_upgrades(&cmd.dirs.base, &db_dirs, algorithm, &cmd.compaction)?;
// create dirs used by parity
cmd.dirs.create_dirs(
cmd.acc_conf.unlocked_accounts.len() == 0,
cmd.secretstore_conf.enabled,
)?;
// print out running parity environment
print_running_environment(&spec.data_dir, &cmd.dirs, &db_dirs);
// display info about used pruning algorithm
info!(
"State DB configuration: {}{}{}",
Colour::White.bold().paint(algorithm.as_str()),
match fat_db {
true => Colour::White.bold().paint(" +Fat").to_string(),
false => "".to_owned(),
},
match tracing {
true => Colour::White.bold().paint(" +Trace").to_string(),
false => "".to_owned(),
}
);
info!(
"Operating mode: {}",
Colour::White.bold().paint(format!("{}", mode))
);
// display warning about using experimental journaldb algorithm
if !algorithm.is_stable() {
warn!(
"Your chosen strategy is {}! You can re-run with --pruning to change.",
Colour::Red.bold().paint("unstable")
);
}
// create sync config
let mut sync_config = SyncConfig::default();
sync_config.network_id = match cmd.network_id {
Some(id) => id,
None => spec.network_id(),
};
if spec.subprotocol_name().len() > 8 {
warn!("Your chain specification's subprotocol length is more then 8. Ignoring.");
} else {
sync_config.subprotocol_name = U64::from(spec.subprotocol_name().as_bytes())
}
sync_config.fork_block = spec.fork_block();
let mut warp_sync = spec.engine.supports_warp() && cmd.warp_sync;
if warp_sync {
// Warn and fall back to normal sync for configurations that are incompatible with warp sync.
if fat_db {
warn!("Warning: Warp Sync is disabled because Fat DB is turned on.");
warp_sync = false;
} else if tracing {
warn!("Warning: Warp Sync is disabled because tracing is turned on.");
warp_sync = false;
} else if algorithm != Algorithm::OverlayRecent {
warn!("Warning: Warp Sync is disabled because of non-default pruning mode.");
warp_sync = false;
}
}
sync_config.warp_sync = match (warp_sync, cmd.warp_barrier) {
(true, Some(block)) => sync::WarpSync::OnlyAndAfter(block),
(true, _) => sync::WarpSync::Enabled,
_ => sync::WarpSync::Disabled,
};
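// Note: judging by the variant names, `WarpSync::OnlyAndAfter(block)` restricts warp restoration
// to snapshots taken at or after the `cmd.warp_barrier` block, while plain `Enabled` accepts
// whatever snapshot peers offer.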
sync_config.download_old_blocks = cmd.download_old_blocks;
sync_config.eip1559_transition = spec.params().eip1559_transition;
let passwords = passwords_from_files(&cmd.acc_conf.password_files)?;
// prepare account provider
let account_provider = Arc::new(account_utils::prepare_account_provider(
&cmd.spec,
&cmd.dirs,
&spec.data_dir,
cmd.acc_conf,
&passwords,
)?);
// spin up event loop
let runtime = Runtime::with_default_thread_count();
// fetch service
let fetch = fetch::Client::new(FETCH_FULL_NUM_DNS_THREADS)
.map_err(|e| format!("Error starting fetch client: {:?}", e))?;
let txpool_size = cmd.miner_options.pool_limits.max_count;
// create miner
let miner = Arc::new(Miner::new(
cmd.miner_options,
cmd.gas_pricer_conf
.to_gas_pricer(fetch.clone(), runtime.executor()),
&spec,
(
cmd.miner_extras.local_accounts,
account_utils::miner_local_accounts(account_provider.clone()),
),
));
miner.set_author(miner::Author::External(cmd.miner_extras.author));
miner.set_gas_range_target(cmd.miner_extras.gas_range_target);
miner.set_extra_data(cmd.miner_extras.extra_data);
if !cmd.miner_extras.work_notify.is_empty() {
miner.add_work_listener(Box::new(WorkPoster::new(
&cmd.miner_extras.work_notify,
fetch.clone(),
runtime.executor(),
)));
}
let engine_signer = cmd.miner_extras.engine_signer;
if engine_signer != Default::default() {
if let Some(author) = account_utils::miner_author(
&cmd.spec,
&cmd.dirs,
&account_provider,
engine_signer,
&passwords,
)? {
miner.set_author(author);
}
}
// create client config
let mut client_config = to_client_config(
&cmd.cache_config,
spec.name.to_lowercase(),
mode.clone(),
tracing,
fat_db,
cmd.compaction,
cmd.vm_type,
cmd.name,
algorithm,
cmd.pruning_history,
cmd.pruning_memory,
cmd.check_seal,
cmd.max_round_blocks_to_import,
);
client_config.queue.verifier_settings = cmd.verifier_settings;
client_config.queue.verifier_settings.bad_hashes = verification_bad_blocks(&cmd.spec);
client_config.transaction_verification_queue_size = ::std::cmp::max(2048, txpool_size / 4);
client_config.snapshot = cmd.snapshot_conf.clone();
// set up bootnodes
let mut net_conf = cmd.net_conf;
if !cmd.custom_bootnodes {
net_conf.boot_nodes = spec.nodes.clone();
}
// set network path.
net_conf.net_config_path = Some(db_dirs.network_path().to_string_lossy().into_owned());
let restoration_db_handler = db::restoration_db_handler(&client_path, &client_config);
let client_db = restoration_db_handler
.open(&client_path)
.map_err(|e| format!("Failed to open database {:?}", e))?;
// create client service.
let service = ClientService::start(
client_config,
&spec,
client_db,
&snapshot_path,
restoration_db_handler,
&cmd.dirs.ipc_path(),
miner.clone(),
)
.map_err(|e| format!("Client service error: {:?}", e))?;
let connection_filter_address = spec.params().node_permission_contract;
// drop the spec to free up genesis state.
let forks = spec.hard_forks.clone();
drop(spec);
// take handle to client
let client = service.client();
// Update the miner's block gas limit and base fee
let base_fee = client
.engine()
.calculate_base_fee(&client.best_block_header());
let allow_non_eoa_sender = client
.engine()
.allow_non_eoa_sender(client.best_block_header().number() + 1);
miner.update_transaction_queue_limits(
*client.best_block_header().gas_limit(),
base_fee,
allow_non_eoa_sender,
);
let connection_filter = connection_filter_address.map(|a| {
Arc::new(NodeFilter::new(
Arc::downgrade(&client) as Weak<dyn BlockChainClient>,
a,
))
});
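// The filter holds only a weak reference to the client, so it can consult the on-chain
// node-permissioning contract at `a` (presumably to decide which peers may connect) without
// keeping the client alive on its own.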
let snapshot_service = service.snapshot_service();
// initialize the local node information store.
let store = {
let db = service.db();
let node_info = FullNodeInfo {
miner: match cmd.no_persistent_txqueue {
true => None,
false => Some(miner.clone()),
},
};
let store = crate::local_store::create(
db.key_value().clone(),
::ethcore_db::COL_NODE_INFO,
node_info,
);
if cmd.no_persistent_txqueue {
info!("Running without a persistent transaction queue.");
if let Err(e) = store.clear() {
warn!("Error clearing persistent transaction queue: {}", e);
}
}
// re-queue pending transactions.
match store.pending_transactions() {
Ok(pending) => {
for pending_tx in pending {
if let Err(e) = miner.import_own_transaction(&*client, pending_tx) {
warn!("Error importing saved transaction: {}", e)
}
}
}
Err(e) => warn!("Error loading cached pending transactions from disk: {}", e),
}
Arc::new(store)
};
// register it as an IO service to update periodically.
service
.register_io_handler(store)
.map_err(|_| "Unable to register local store handler".to_owned())?;
// create external miner
let external_miner = Arc::new(ExternalMiner::default());
// start stratum
if let Some(ref stratum_config) = cmd.stratum {
stratum::Stratum::register(stratum_config, miner.clone(), Arc::downgrade(&client))
.map_err(|e| format!("Stratum start error: {:?}", e))?;
}
// create sync object
let (sync_provider, manage_network, chain_notify, priority_tasks) = modules::sync(
sync_config,
net_conf.clone().into(),
client.clone(),
forks,
snapshot_service.clone(),
&cmd.logger_config,
connection_filter
.clone()
.map(|f| f as Arc<dyn crate::sync::ConnectionFilter + 'static>),
)
.map_err(|e| format!("Sync error: {}", e))?;
service.add_notify(chain_notify.clone());
// Propagate transactions as soon as they are imported.
let tx = ::parking_lot::Mutex::new(priority_tasks);
let is_ready = Arc::new(atomic::AtomicBool::new(true));
miner.add_transactions_listener(Box::new(move |_hashes| {
// we want to have only one PendingTransactions task in the queue.
if is_ready
.compare_exchange(
true,
false,
atomic::Ordering::SeqCst,
atomic::Ordering::SeqCst,
)
.is_ok()
{
let task =
crate::sync::PriorityTask::PropagateTransactions(Instant::now(), is_ready.clone());
// ignore the error because it means we are shutting down
let _ = tx.lock().send(task);
}
}));
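// The `is_ready` clone travels with the `PropagateTransactions` task, presumably so the sync
// layer can flip the flag back to `true` once propagation finishes, letting the next batch of
// imported transactions schedule a fresh task.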
// start network
if network_enabled {
chain_notify.start();
}
// set up dependencies for rpc servers
let rpc_stats = Arc::new(informant::RpcStats::default());
let secret_store = account_provider.clone();
let signer_service = Arc::new(signer::new_service(&cmd.ws_conf, &cmd.logger_config));
let deps_for_rpc_apis = Arc::new(rpc_apis::FullDependencies {
signer_service: signer_service,
snapshot: snapshot_service.clone(),
client: client.clone(),
sync: sync_provider.clone(),
net: manage_network.clone(),
accounts: secret_store,
miner: miner.clone(),
external_miner: external_miner.clone(),
logger: logger.clone(),
settings: Arc::new(cmd.net_settings.clone()),
net_service: manage_network.clone(),
experimental_rpcs: cmd.experimental_rpcs,
ws_address: cmd.ws_conf.address(),
fetch: fetch.clone(),
executor: runtime.executor(),
gas_price_percentile: cmd.gas_price_percentile,
poll_lifetime: cmd.poll_lifetime,
allow_missing_blocks: cmd.allow_missing_blocks,
no_ancient_blocks: !cmd.download_old_blocks,
});
let dependencies = rpc::Dependencies {
apis: deps_for_rpc_apis.clone(),
executor: runtime.executor(),
stats: rpc_stats.clone(),
};
// start rpc servers
let rpc_direct = rpc::setup_apis(rpc_apis::ApiSet::All, &dependencies);
let ws_server = rpc::new_ws(cmd.ws_conf.clone(), &dependencies)?;
let ipc_server = rpc::new_ipc(cmd.ipc_conf, &dependencies)?;
// start the prometheus metrics server
start_prometheus_metrics(&cmd.metrics_conf, &dependencies)?;
let http_server = rpc::new_http(
"HTTP JSON-RPC",
"jsonrpc",
cmd.http_conf.clone(),
&dependencies,
)?;
// secret store key server
let secretstore_deps = secretstore::Dependencies {
client: client.clone(),
sync: sync_provider.clone(),
miner: miner.clone(),
account_provider,
accounts_passwords: &passwords,
};
let secretstore_key_server = secretstore::start(
cmd.secretstore_conf.clone(),
secretstore_deps,
runtime.executor(),
)?;
// the informant
let informant = Arc::new(Informant::new(
FullNodeInformantData {
client: service.client(),
sync: Some(sync_provider.clone()),
net: Some(manage_network.clone()),
},
Some(snapshot_service.clone()),
Some(rpc_stats.clone()),
cmd.logger_config.color,
));
service.add_notify(informant.clone());
service
.register_io_handler(informant.clone())
.map_err(|_| "Unable to register informant handler".to_owned())?;
// save user defaults
user_defaults.is_first_launch = false;
user_defaults.pruning = algorithm;
user_defaults.tracing = tracing;
user_defaults.fat_db = fat_db;
user_defaults.set_mode(mode);
user_defaults.save(&user_defaults_path)?;
// tell client how to save the default mode if it gets changed.
client.on_user_defaults_change(move |mode: Option<Mode>| {
if let Some(mode) = mode {
user_defaults.set_mode(mode);
}
let _ = user_defaults.save(&user_defaults_path); // discard failures - there's nothing we can do
});
// the watcher must be kept alive.
let watcher = match cmd.snapshot_conf.enable {
false => None,
true => {
let sync = sync_provider.clone();
let client = client.clone();
let watcher = Arc::new(snapshot::Watcher::new(
service.client(),
move || is_major_importing(Some(sync.status().state), client.queue_info()),
service.io().channel(),
SNAPSHOT_PERIOD,
SNAPSHOT_HISTORY,
));
service.add_notify(watcher.clone());
Some(watcher)
}
};
Ok(RunningClient {
inner: RunningClientInner::Full {
rpc: rpc_direct,
informant,
client,
client_service: Arc::new(service),
keep_alive: Box::new((
watcher,
ws_server,
http_server,
ipc_server,
secretstore_key_server,
runtime,
)),
},
})
}
/// Set bad block hashes for the verification queue. By rejecting a specific header we can exclude a particular fork of the chain.
fn verification_bad_blocks(spec: &SpecType) -> Vec<H256> {
match *spec {
SpecType::Ropsten => {
vec![
H256::from_str("1eac3d16c642411f13c287e29144c6f58fda859407c8f24c38deb168e1040714")
.expect("Valid hex string"),
]
}
_ => vec![],
}
}
/// Parity client currently executing in background threads.
///
/// Should be destroyed by calling `shutdown()`, otherwise execution will continue in the
/// background.
pub struct RunningClient {
inner: RunningClientInner,
}
enum RunningClientInner {
Full {
rpc:
jsonrpc_core::MetaIoHandler<Metadata, informant::Middleware<informant::ClientNotifier>>,
informant: Arc<Informant<FullNodeInformantData>>,
client: Arc<Client>,
client_service: Arc<ClientService>,
keep_alive: Box<dyn Any>,
},
}
impl RunningClient {
/// Performs an asynchronous RPC query.
// FIXME: [tomaka] This API should be better, with for example a Future
pub fn rpc_query(
&self,
request: &str,
session: Option<Arc<PubSubSession>>,
) -> FutureResult<FutureResponse, FutureOutput> {
let metadata = Metadata {
origin: Origin::CApi,
session,
};
match self.inner {
RunningClientInner::Full { ref rpc, .. } => rpc.handle_request(request, metadata),
}
}
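// Illustrative usage only (not part of the original file): a host embedding the client could
// issue a raw JSON-RPC request through this method, e.g.
//
//     let fut = running_client.rpc_query(
//         r#"{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}"#,
//         None,
//     );
//
// where `running_client` is a hypothetical `RunningClient` value and the returned future
// resolves to the serialized JSON response, if any.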
/// Shuts down the client.
pub fn shutdown(self) {
match self.inner {
RunningClientInner::Full {
rpc,
informant,
client,
client_service,
keep_alive,
} => {
info!("Finishing work, please wait...");
// Create a weak reference to the client so that we can wait on shutdown
// until it is dropped
let weak_client = Arc::downgrade(&client);
// Shutdown and drop the ClientService
client_service.shutdown();
trace!(target: "shutdown", "ClientService shut down");
drop(client_service);
trace!(target: "shutdown", "ClientService dropped");
// drop this stuff as soon as exit detected.
drop(rpc);
trace!(target: "shutdown", "RPC dropped");
drop(keep_alive);
trace!(target: "shutdown", "KeepAlive dropped");
// to make sure timer does not spawn requests while shutdown is in progress
informant.shutdown();
trace!(target: "shutdown", "Informant shut down");
// just Arc is dropping here, to allow other reference release in its default time
drop(informant);
trace!(target: "shutdown", "Informant dropped");
drop(client);
trace!(target: "shutdown", "Client dropped");
// This may help when debugging ref cycles. Requires nightly-only `#![feature(weak_counts)]`
// trace!(target: "shutdown", "Waiting for refs to Client to shutdown, strong_count={:?}, weak_count={:?}", weak_client.strong_count(), weak_client.weak_count());
trace!(target: "shutdown", "Waiting for refs to Client to shutdown");
wait_for_drop(weak_client);
}
}
}
}
fn print_running_environment(data_dir: &str, dirs: &Directories, db_dirs: &DatabaseDirectories) {
info!("Starting {}", Colour::White.bold().paint(version()));
info!(
"Keys path {}",
Colour::White
.bold()
.paint(dirs.keys_path(data_dir).to_string_lossy().into_owned())
);
info!(
"DB path {}",
Colour::White
.bold()
.paint(db_dirs.db_root_path().to_string_lossy().into_owned())
);
}
fn wait_for_drop<T>(w: Weak<T>) {
const SLEEP_DURATION: Duration = Duration::from_secs(1);
const WARN_TIMEOUT: Duration = Duration::from_secs(60);
const MAX_TIMEOUT: Duration = Duration::from_secs(300);
let instant = Instant::now();
let mut warned = false;
while instant.elapsed() < MAX_TIMEOUT {
if w.upgrade().is_none() {
return;
}
if !warned && instant.elapsed() > WARN_TIMEOUT {
warned = true;
warn!("Shutdown is taking longer than expected.");
}
thread::sleep(SLEEP_DURATION);
// When debugging shutdown issues on a nightly build it can help to enable this with the
// `#![feature(weak_counts)]` added to lib.rs (TODO: enable when
// https://github.com/rust-lang/rust/issues/57977 is stable)
// trace!(target: "shutdown", "Waiting for client to drop, strong_count={:?}, weak_count={:?}", w.strong_count(), w.weak_count());
trace!(target: "shutdown", "Waiting for client to drop");
}
warn!("Shutdown timeout reached, exiting uncleanly.");
}


@@ -1,333 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crate::{account_utils::AccountProvider, sync::SyncProvider};
use crypto::publickey::{Public, Secret};
use dir::{default_data_path, helpers::replace_home};
use ethcore::{client::Client, miner::Miner};
use ethereum_types::Address;
use ethkey::Password;
use parity_runtime::Executor;
use std::{collections::BTreeMap, sync::Arc};
/// This node secret key.
#[derive(Debug, PartialEq, Clone)]
pub enum NodeSecretKey {
/// Stored as plain text in configuration file.
Plain(Secret),
/// Stored as account in key store.
#[cfg(feature = "accounts")]
KeyStore(Address),
}
/// Secret store service contract address.
#[derive(Debug, PartialEq, Clone)]
pub enum ContractAddress {
/// Contract address is read from registry.
Registry,
/// Contract address is specified.
Address(Address),
}
#[derive(Debug, PartialEq, Clone)]
/// Secret store configuration
pub struct Configuration {
/// Is secret store functionality enabled?
pub enabled: bool,
/// Is HTTP API enabled?
pub http_enabled: bool,
/// Is auto migrate enabled.
pub auto_migrate_enabled: bool,
/// ACL check contract address.
pub acl_check_contract_address: Option<ContractAddress>,
/// Service contract address.
pub service_contract_address: Option<ContractAddress>,
/// Server key generation service contract address.
pub service_contract_srv_gen_address: Option<ContractAddress>,
/// Server key retrieval service contract address.
pub service_contract_srv_retr_address: Option<ContractAddress>,
/// Document key store service contract address.
pub service_contract_doc_store_address: Option<ContractAddress>,
/// Document key shadow retrieval service contract address.
pub service_contract_doc_sretr_address: Option<ContractAddress>,
/// This node secret.
pub self_secret: Option<NodeSecretKey>,
/// Other nodes IDs + addresses.
pub nodes: BTreeMap<Public, (String, u16)>,
/// Key Server Set contract address. If None, 'nodes' map is used.
pub key_server_set_contract_address: Option<ContractAddress>,
/// Interface to listen to
pub interface: String,
/// Port to listen to
pub port: u16,
/// Interface to listen to
pub http_interface: String,
/// Port to listen to
pub http_port: u16,
/// Data directory path for secret store
pub data_path: String,
/// Administrator public key.
pub admin_public: Option<Public>,
}
/// Secret store dependencies
pub struct Dependencies<'a> {
/// Blockchain client.
pub client: Arc<Client>,
/// Sync provider.
pub sync: Arc<dyn SyncProvider>,
/// Miner service.
pub miner: Arc<Miner>,
/// Account provider.
pub account_provider: Arc<AccountProvider>,
/// Passed accounts passwords.
pub accounts_passwords: &'a [Password],
}
#[cfg(not(feature = "secretstore"))]
mod server {
use super::{Configuration, Dependencies, Executor};
/// Noop key server implementation
pub struct KeyServer;
impl KeyServer {
/// Create new noop key server
pub fn new(
_conf: Configuration,
_deps: Dependencies,
_executor: Executor,
) -> Result<Self, String> {
Ok(KeyServer)
}
}
}
#[cfg(feature = "secretstore")]
mod server {
use super::{Configuration, ContractAddress, Dependencies, Executor, NodeSecretKey};
use ansi_term::Colour::{Red, White};
use db;
use ethcore_secretstore;
use ethkey::KeyPair;
use std::sync::Arc;
fn into_service_contract_address(
address: ContractAddress,
) -> ethcore_secretstore::ContractAddress {
match address {
ContractAddress::Registry => ethcore_secretstore::ContractAddress::Registry,
ContractAddress::Address(address) => {
ethcore_secretstore::ContractAddress::Address(address)
}
}
}
/// Key server
pub struct KeyServer {
_key_server: Box<dyn ethcore_secretstore::KeyServer>,
}
impl KeyServer {
/// Create new key server
pub fn new(
mut conf: Configuration,
deps: Dependencies,
executor: Executor,
) -> Result<Self, String> {
let self_secret: Arc<dyn ethcore_secretstore::NodeKeyPair> =
match conf.self_secret.take() {
Some(NodeSecretKey::Plain(secret)) => {
Arc::new(ethcore_secretstore::PlainNodeKeyPair::new(
KeyPair::from_secret(secret)
.map_err(|e| format!("invalid secret: {}", e))?,
))
}
#[cfg(feature = "accounts")]
Some(NodeSecretKey::KeyStore(account)) => {
// Check if account exists
if !deps.account_provider.has_account(account.clone()) {
return Err(format!(
"Account {} passed as secret store node key is not found",
account
));
}
// Check if any passwords have been read from the password file(s)
if deps.accounts_passwords.is_empty() {
return Err(format!(
"No password found for the secret store node account {}",
account
));
}
// Attempt to sign in the engine signer.
let password = deps
.accounts_passwords
.iter()
.find(|p| {
deps.account_provider
.sign(account.clone(), Some((*p).clone()), Default::default())
.is_ok()
})
.ok_or_else(|| {
format!(
"No valid password for the secret store node account {}",
account
)
})?;
Arc::new(
ethcore_secretstore::KeyStoreNodeKeyPair::new(
deps.account_provider,
account,
password.clone(),
)
.map_err(|e| format!("{}", e))?,
)
}
None => return Err("self secret is required when using secretstore".into()),
};
info!(
"Starting SecretStore node: {}",
White.bold().paint(format!("{:?}", self_secret.public()))
);
if conf.acl_check_contract_address.is_none() {
warn!(
"Running SecretStore with disabled ACL check: {}",
Red.bold().paint("everyone has access to stored keys")
);
}
let key_server_name = format!("{}:{}", conf.interface, conf.port);
let mut cconf = ethcore_secretstore::ServiceConfiguration {
listener_address: if conf.http_enabled {
Some(ethcore_secretstore::NodeAddress {
address: conf.http_interface.clone(),
port: conf.http_port,
})
} else {
None
},
service_contract_address: conf
.service_contract_address
.map(into_service_contract_address),
service_contract_srv_gen_address: conf
.service_contract_srv_gen_address
.map(into_service_contract_address),
service_contract_srv_retr_address: conf
.service_contract_srv_retr_address
.map(into_service_contract_address),
service_contract_doc_store_address: conf
.service_contract_doc_store_address
.map(into_service_contract_address),
service_contract_doc_sretr_address: conf
.service_contract_doc_sretr_address
.map(into_service_contract_address),
acl_check_contract_address: conf
.acl_check_contract_address
.map(into_service_contract_address),
cluster_config: ethcore_secretstore::ClusterConfiguration {
listener_address: ethcore_secretstore::NodeAddress {
address: conf.interface.clone(),
port: conf.port,
},
nodes: conf
.nodes
.into_iter()
.map(|(p, (ip, port))| {
(
p,
ethcore_secretstore::NodeAddress {
address: ip,
port: port,
},
)
})
.collect(),
key_server_set_contract_address: conf
.key_server_set_contract_address
.map(into_service_contract_address),
allow_connecting_to_higher_nodes: true,
admin_public: conf.admin_public,
auto_migrate_enabled: conf.auto_migrate_enabled,
},
};
cconf.cluster_config.nodes.insert(
self_secret.public().clone(),
cconf.cluster_config.listener_address.clone(),
);
let db = db::open_secretstore_db(&conf.data_path)?;
let key_server = ethcore_secretstore::start(
deps.client,
deps.sync,
deps.miner,
self_secret,
cconf,
db,
executor,
)
.map_err(|e| format!("Error starting KeyServer {}: {}", key_server_name, e))?;
Ok(KeyServer {
_key_server: key_server,
})
}
}
}
pub use self::server::KeyServer;
impl Default for Configuration {
fn default() -> Self {
let data_dir = default_data_path();
Configuration {
enabled: true,
http_enabled: true,
auto_migrate_enabled: true,
acl_check_contract_address: Some(ContractAddress::Registry),
service_contract_address: None,
service_contract_srv_gen_address: None,
service_contract_srv_retr_address: None,
service_contract_doc_store_address: None,
service_contract_doc_sretr_address: None,
self_secret: None,
admin_public: None,
nodes: BTreeMap::new(),
key_server_set_contract_address: Some(ContractAddress::Registry),
interface: "127.0.0.1".to_owned(),
port: 8083,
http_interface: "127.0.0.1".to_owned(),
http_port: 8082,
data_path: replace_home(&data_dir, "$BASE/secretstore"),
}
}
}
/// Start secret store-related functionality
pub fn start(
conf: Configuration,
deps: Dependencies,
executor: Executor,
) -> Result<Option<KeyServer>, String> {
if !conf.enabled {
return Ok(None);
}
KeyServer::new(conf, deps, executor).map(|s| Some(s))
}


@@ -1,98 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{
io,
path::{Path, PathBuf},
};
use crate::{path::restrict_permissions_owner, rpc, rpc_apis};
use ansi_term::Colour::White;
use ethcore_logger::Config as LogConfig;
use parity_rpc;
pub const CODES_FILENAME: &'static str = "authcodes";
pub struct NewToken {
pub token: String,
pub message: String,
}
pub fn new_service(
ws_conf: &rpc::WsConfiguration,
logger_config: &LogConfig,
) -> rpc_apis::SignerService {
let logger_config_color = logger_config.color;
let signer_path = ws_conf.signer_path.clone();
let signer_enabled = ws_conf.support_token_api;
rpc_apis::SignerService::new(
move || {
generate_new_token(&signer_path, logger_config_color).map_err(|e| format!("{:?}", e))
},
signer_enabled,
)
}
pub fn codes_path(path: &Path) -> PathBuf {
let mut p = path.to_owned();
p.push(CODES_FILENAME);
let _ = restrict_permissions_owner(&p, true, false);
p
}
pub fn execute(ws_conf: rpc::WsConfiguration, logger_config: LogConfig) -> Result<String, String> {
Ok(generate_token_and_url(&ws_conf, &logger_config)?.message)
}
pub fn generate_token_and_url(
ws_conf: &rpc::WsConfiguration,
logger_config: &LogConfig,
) -> Result<NewToken, String> {
let code = generate_new_token(&ws_conf.signer_path, logger_config.color)
.map_err(|err| format!("Error generating token: {:?}", err))?;
let colored = |s: String| match logger_config.color {
true => format!("{}", White.bold().paint(s)),
false => s,
};
Ok(NewToken {
token: code.clone(),
message: format!(
r#"
Generated token:
{}
"#,
colored(code)
),
})
}
fn generate_new_token(path: &Path, logger_config_color: bool) -> io::Result<String> {
let path = codes_path(path);
let mut codes = parity_rpc::AuthCodes::from_file(&path)?;
codes.clear_garbage();
let code = codes.generate_new()?;
codes.to_file(&path)?;
trace!(
"New key code created: {}",
match logger_config_color {
true => format!("{}", White.bold().paint(&code[..])),
false => format!("{}", &code[..]),
}
);
Ok(code)
}


@@ -1,347 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Snapshot and restoration commands.
use std::{
path::{Path, PathBuf},
sync::Arc,
time::Duration,
};
use crate::{hash::keccak, types::ids::BlockId};
use ethcore::{
client::{DatabaseCompactionProfile, Mode, VMType},
miner::Miner,
snapshot::{
io::{PackedReader, PackedWriter, SnapshotReader},
service::Service as SnapshotService,
Progress, RestorationStatus, SnapshotConfiguration, SnapshotService as SS,
},
};
use ethcore_service::ClientService;
use crate::{
cache::CacheConfig,
db,
helpers::{execute_upgrades, to_client_config},
params::{fatdb_switch_to_bool, tracing_switch_to_bool, Pruning, SpecType, Switch},
user_defaults::UserDefaults,
};
use dir::Directories;
/// Kinds of snapshot commands.
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum Kind {
/// Take a snapshot.
Take,
/// Restore a snapshot.
Restore,
}
/// Command for snapshot creation or restoration.
#[derive(Debug, PartialEq)]
pub struct SnapshotCommand {
pub cache_config: CacheConfig,
pub dirs: Directories,
pub spec: SpecType,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
pub tracing: Switch,
pub fat_db: Switch,
pub compaction: DatabaseCompactionProfile,
pub file_path: Option<String>,
pub kind: Kind,
pub block_at: BlockId,
pub max_round_blocks_to_import: usize,
pub snapshot_conf: SnapshotConfiguration,
}
// helper for reading chunks from an arbitrary reader and feeding them into the
// service.
fn restore_using<R: SnapshotReader>(
snapshot: Arc<SnapshotService>,
reader: &R,
recover: bool,
) -> Result<(), String> {
let manifest = reader.manifest();
info!(
"Restoring to block #{} (0x{:?})",
manifest.block_number, manifest.block_hash
);
snapshot
.init_restore(manifest.clone(), recover)
.map_err(|e| format!("Failed to begin restoration: {}", e))?;
let (num_state, num_blocks) = (manifest.state_hashes.len(), manifest.block_hashes.len());
let informant_handle = snapshot.clone();
::std::thread::spawn(move || {
while let RestorationStatus::Ongoing {
state_chunks_done,
block_chunks_done,
..
} = informant_handle.restoration_status()
{
info!(
"Processed {}/{} state chunks and {}/{} block chunks.",
state_chunks_done, num_state, block_chunks_done, num_blocks
);
::std::thread::sleep(Duration::from_secs(5));
}
});
info!("Restoring state");
for &state_hash in &manifest.state_hashes {
if snapshot.restoration_status() == RestorationStatus::Failed {
return Err("Restoration failed".into());
}
let chunk = reader.chunk(state_hash).map_err(|e| {
format!(
"Encountered error while reading chunk {:?}: {}",
state_hash, e
)
})?;
let hash = keccak(&chunk);
if hash != state_hash {
return Err(format!(
"Mismatched chunk hash. Expected {:?}, got {:?}",
state_hash, hash
));
}
snapshot.feed_state_chunk(state_hash, &chunk);
}
info!("Restoring blocks");
for &block_hash in &manifest.block_hashes {
if snapshot.restoration_status() == RestorationStatus::Failed {
return Err("Restoration failed".into());
}
let chunk = reader.chunk(block_hash).map_err(|e| {
format!(
"Encountered error while reading chunk {:?}: {}",
block_hash, e
)
})?;
let hash = keccak(&chunk);
if hash != block_hash {
return Err(format!(
"Mismatched chunk hash. Expected {:?}, got {:?}",
block_hash, hash
));
}
snapshot.feed_block_chunk(block_hash, &chunk);
}
match snapshot.restoration_status() {
RestorationStatus::Ongoing { .. } => {
Err("Snapshot file is incomplete and missing chunks.".into())
}
RestorationStatus::Initializing { .. } => {
Err("Snapshot restoration is still initializing.".into())
}
RestorationStatus::Failed => Err("Snapshot restoration failed.".into()),
RestorationStatus::Inactive => {
info!("Restoration complete.");
Ok(())
}
}
}
impl SnapshotCommand {
// shared portion of snapshot commands: start the client service
fn start_service(self) -> Result<ClientService, String> {
// load spec file
let spec = self.spec.spec(&self.dirs.cache)?;
// load genesis hash
let genesis_hash = spec.genesis_header().hash();
// database paths
let db_dirs = self
.dirs
.database(genesis_hash, None, spec.data_dir.clone());
// user defaults path
let user_defaults_path = db_dirs.user_defaults_path();
// load user defaults
let user_defaults = UserDefaults::load(&user_defaults_path)?;
// select pruning algorithm
let algorithm = self.pruning.to_algorithm(&user_defaults);
// check if tracing is on
let tracing = tracing_switch_to_bool(self.tracing, &user_defaults)?;
// check if fatdb is on
let fat_db = fatdb_switch_to_bool(self.fat_db, &user_defaults, algorithm)?;
// prepare client and snapshot paths.
let client_path = db_dirs.client_path(algorithm);
let snapshot_path = db_dirs.snapshot_path();
// execute upgrades
execute_upgrades(&self.dirs.base, &db_dirs, algorithm, &self.compaction)?;
// prepare client config
let mut client_config = to_client_config(
&self.cache_config,
spec.name.to_lowercase(),
Mode::Active,
tracing,
fat_db,
self.compaction,
VMType::default(),
"".into(),
algorithm,
self.pruning_history,
self.pruning_memory,
true,
self.max_round_blocks_to_import,
);
client_config.snapshot = self.snapshot_conf;
let restoration_db_handler = db::restoration_db_handler(&client_path, &client_config);
let client_db = restoration_db_handler
.open(&client_path)
.map_err(|e| format!("Failed to open database {:?}", e))?;
let service = ClientService::start(
client_config,
&spec,
client_db,
&snapshot_path,
restoration_db_handler,
&self.dirs.ipc_path(),
// TODO [ToDr] don't use test miner here
// (actually don't require miner at all)
Arc::new(Miner::new_for_tests(&spec, None)),
)
.map_err(|e| format!("Client service error: {:?}", e))?;
Ok(service)
}
/// restore from a snapshot
pub fn restore(self) -> Result<(), String> {
let file = self.file_path.clone();
let service = self.start_service()?;
warn!("Snapshot restoration is experimental and the format may be subject to change.");
warn!(
"On encountering an unexpected error, please ensure that you have a recent snapshot."
);
let snapshot = service.snapshot_service();
if let Some(file) = file {
info!("Attempting to restore from snapshot at '{}'", file);
let reader = PackedReader::new(Path::new(&file))
.map_err(|e| format!("Couldn't open snapshot file: {}", e))
.and_then(|x| x.ok_or("Snapshot file has invalid format.".into()));
let reader = reader?;
restore_using(snapshot, &reader, true)?;
} else {
info!("Attempting to restore from local snapshot.");
// attempting restoration with recovery will lead to deadlock
// as we currently hold a read lock on the service's reader.
match *snapshot.reader() {
Some(ref reader) => restore_using(snapshot.clone(), reader, false)?,
None => return Err("No local snapshot found.".into()),
}
}
Ok(())
}
/// Take a snapshot from the head of the chain.
pub fn take_snapshot(self) -> Result<(), String> {
let file_path = self
.file_path
.clone()
.ok_or("No file path provided.".to_owned())?;
let file_path: PathBuf = file_path.into();
let block_at = self.block_at;
let service = self.start_service()?;
warn!("Snapshots are currently experimental. File formats may be subject to change.");
let writer = PackedWriter::new(&file_path)
.map_err(|e| format!("Failed to open snapshot writer: {}", e))?;
let progress = Arc::new(Progress::default());
let p = progress.clone();
let informant_handle = ::std::thread::spawn(move || {
::std::thread::sleep(Duration::from_secs(5));
let mut last_size = 0;
while !p.done() {
let cur_size = p.size();
if cur_size != last_size {
last_size = cur_size;
let bytes = crate::informant::format_bytes(cur_size as usize);
info!(
"Snapshot: {} accounts {} blocks {}",
p.accounts(),
p.blocks(),
bytes
);
}
::std::thread::sleep(Duration::from_secs(5));
}
});
if let Err(e) = service.client().take_snapshot(writer, block_at, &*progress) {
let _ = ::std::fs::remove_file(&file_path);
return Err(format!(
"Encountered fatal error while creating snapshot: {}",
e
));
}
info!("snapshot creation complete");
assert!(progress.done());
informant_handle
.join()
.map_err(|_| "failed to join logger thread")?;
Ok(())
}
}
/// Execute this snapshot command.
pub fn execute(cmd: SnapshotCommand) -> Result<String, String> {
match cmd.kind {
Kind::Take => cmd.take_snapshot()?,
Kind::Restore => cmd.restore()?,
}
Ok(String::new())
}


@@ -1,246 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Parity upgrade logic
use dir::{default_data_path, helpers::replace_home, home_dir, DatabaseDirectories};
use journaldb::Algorithm;
use semver::{SemVerError, Version};
use std::{
collections::*,
fs::{self, create_dir_all, File},
io,
io::{Read, Write},
path::{Path, PathBuf},
};
#[derive(Debug)]
pub enum Error {
CannotCreateConfigPath(io::Error),
CannotWriteVersionFile(io::Error),
CannotUpdateVersionFile(io::Error),
SemVer(SemVerError),
}
impl From<SemVerError> for Error {
fn from(err: SemVerError) -> Self {
Error::SemVer(err)
}
}
const CURRENT_VERSION: &'static str = env!("CARGO_PKG_VERSION");
#[derive(Hash, PartialEq, Eq)]
struct UpgradeKey {
pub old_version: Version,
pub new_version: Version,
}
type UpgradeList = HashMap<UpgradeKey, fn() -> Result<(), Error>>;
impl UpgradeKey {
// given the following configuration exists:
// ver.lock 1.1 (`previous_version`)
//
// current_version 1.4 (`current_version`)
//
//
// upgrades (set of `UpgradeKey`)
// 1.0 -> 1.1 (u1)
// 1.1 -> 1.2 (u2)
// 1.2 -> 1.3 (u3)
// 1.3 -> 1.4 (u4)
// 1.4 -> 1.5 (u5)
//
// then the following upgrades should be applied:
// u2, u3, u4
fn is_applicable(&self, previous_version: &Version, current_version: &Version) -> bool {
self.old_version >= *previous_version && self.new_version <= *current_version
}
}
// dummy upgrade (remove when the first one is in)
fn dummy_upgrade() -> Result<(), Error> {
Ok(())
}
fn push_upgrades(upgrades: &mut UpgradeList) {
// dummy upgrade (remove when the first one is in)
upgrades.insert(
UpgradeKey {
old_version: Version::new(0, 9, 0),
new_version: Version::new(1, 0, 0),
},
dummy_upgrade,
);
}
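// Illustrative only: a real migration would be registered through the same map, e.g.
//
//     upgrades.insert(
//         UpgradeKey {
//             old_version: Version::new(2, 0, 0),
//             new_version: Version::new(2, 1, 0),
//         },
//         migrate_database_layout, // hypothetical `fn() -> Result<(), Error>`
//     );
//
// and would then be picked up automatically by `upgrade_from_version` below.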
fn upgrade_from_version(previous_version: &Version) -> Result<usize, Error> {
let mut upgrades = HashMap::new();
push_upgrades(&mut upgrades);
let current_version = Version::parse(CURRENT_VERSION)?;
let mut count = 0;
for upgrade_key in upgrades.keys() {
if upgrade_key.is_applicable(previous_version, &current_version) {
let upgrade_script = upgrades[upgrade_key];
upgrade_script()?;
count += 1;
}
}
Ok(count)
}
fn with_locked_version<F>(db_path: &str, script: F) -> Result<usize, Error>
where
F: Fn(&Version) -> Result<usize, Error>,
{
let mut path = PathBuf::from(db_path);
create_dir_all(&path).map_err(Error::CannotCreateConfigPath)?;
path.push("ver.lock");
let version = File::open(&path)
.ok()
.and_then(|ref mut file| {
let mut version_string = String::new();
file.read_to_string(&mut version_string)
.ok()
.and_then(|_| Version::parse(&version_string).ok())
})
.unwrap_or(Version::new(0, 9, 0));
let mut lock = File::create(&path).map_err(Error::CannotWriteVersionFile)?;
let result = script(&version);
let written_version = Version::parse(CURRENT_VERSION)?;
lock.write_all(written_version.to_string().as_bytes())
.map_err(Error::CannotUpdateVersionFile)?;
result
}
pub fn upgrade(db_path: &str) -> Result<usize, Error> {
with_locked_version(db_path, |ver| upgrade_from_version(ver))
}
fn file_exists(path: &Path) -> bool {
match fs::metadata(&path) {
Err(ref e) if e.kind() == io::ErrorKind::NotFound => false,
_ => true,
}
}
#[cfg(any(test, feature = "accounts"))]
pub fn upgrade_key_location(from: &PathBuf, to: &PathBuf) {
match fs::create_dir_all(&to).and_then(|()| fs::read_dir(from)) {
Ok(entries) => {
let files: Vec<_> = entries
.filter_map(|f| {
f.ok().and_then(|f| {
if f.file_type().ok().map_or(false, |f| f.is_file()) {
f.file_name().to_str().map(|s| s.to_owned())
} else {
None
}
})
})
.collect();
let mut num: usize = 0;
for name in files {
let mut from = from.clone();
from.push(&name);
let mut to = to.clone();
to.push(&name);
if !file_exists(&to) {
if let Err(e) = fs::rename(&from, &to) {
debug!("Error upgrading key {:?}: {:?}", from, e);
} else {
num += 1;
}
} else {
debug!("Skipped upgrading key {:?}", from);
}
}
if num > 0 {
info!(
"Moved {} keys from {} to {}",
num,
from.to_string_lossy(),
to.to_string_lossy()
);
}
}
Err(e) => {
debug!("Error moving keys from {:?} to {:?}: {:?}", from, to, e);
}
}
}
fn upgrade_dir_location(source: &PathBuf, dest: &PathBuf) {
if file_exists(&source) {
if !file_exists(&dest) {
let mut parent = dest.clone();
parent.pop();
if let Err(e) = fs::create_dir_all(&parent).and_then(|()| fs::rename(&source, &dest)) {
debug!("Skipped path {:?} -> {:?} :{:?}", source, dest, e);
} else {
info!(
"Moved {} to {}",
source.to_string_lossy(),
dest.to_string_lossy()
);
}
} else {
debug!(
"Skipped upgrading directory {:?}, Destination already exists at {:?}",
source, dest
);
}
}
}
fn upgrade_user_defaults(dirs: &DatabaseDirectories) {
let source = dirs.legacy_user_defaults_path();
let dest = dirs.user_defaults_path();
if file_exists(&source) {
if !file_exists(&dest) {
if let Err(e) = fs::rename(&source, &dest) {
debug!("Skipped upgrading user defaults {:?}:{:?}", dest, e);
}
} else {
debug!(
"Skipped upgrading user defaults {:?}, File exists at {:?}",
source, dest
);
}
}
}
pub fn upgrade_data_paths(base_path: &str, dirs: &DatabaseDirectories, pruning: Algorithm) {
if home_dir().is_none() {
return;
}
let legacy_root_path = replace_home("", "$HOME/.parity");
let default_path = default_data_path();
if legacy_root_path != base_path && base_path == default_path {
upgrade_dir_location(&PathBuf::from(legacy_root_path), &PathBuf::from(&base_path));
}
upgrade_dir_location(&dirs.legacy_version_path(pruning), &dirs.db_path(pruning));
upgrade_dir_location(&dirs.legacy_snapshot_path(), &dirs.snapshot_path());
upgrade_dir_location(&dirs.legacy_network_path(), &dirs.network_path());
upgrade_user_defaults(&dirs);
}


@@ -1,188 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use ethcore::client::Mode as ClientMode;
use journaldb::Algorithm;
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use serde_json::{de::from_reader, ser::to_string};
use std::{fs::File, io::Write, path::Path, time::Duration};
#[derive(Clone)]
pub struct Seconds(Duration);
impl Seconds {
pub fn value(&self) -> u64 {
self.0.as_secs()
}
}
impl From<u64> for Seconds {
fn from(s: u64) -> Seconds {
Seconds(Duration::from_secs(s))
}
}
impl From<Duration> for Seconds {
fn from(d: Duration) -> Seconds {
Seconds(d)
}
}
impl Into<Duration> for Seconds {
fn into(self) -> Duration {
self.0
}
}
impl Serialize for Seconds {
fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
serializer.serialize_u64(self.value())
}
}
impl<'de> Deserialize<'de> for Seconds {
fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
let secs = u64::deserialize(deserializer)?;
Ok(Seconds::from(secs))
}
}
#[derive(Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase", tag = "mode")]
pub enum Mode {
Active,
Passive {
#[serde(rename = "mode.timeout")]
timeout: Seconds,
#[serde(rename = "mode.alarm")]
alarm: Seconds,
},
Dark {
#[serde(rename = "mode.timeout")]
timeout: Seconds,
},
Offline,
}
impl Into<ClientMode> for Mode {
fn into(self) -> ClientMode {
match self {
Mode::Active => ClientMode::Active,
Mode::Passive { timeout, alarm } => ClientMode::Passive(timeout.into(), alarm.into()),
Mode::Dark { timeout } => ClientMode::Dark(timeout.into()),
Mode::Offline => ClientMode::Off,
}
}
}
impl From<ClientMode> for Mode {
fn from(mode: ClientMode) -> Mode {
match mode {
ClientMode::Active => Mode::Active,
ClientMode::Passive(timeout, alarm) => Mode::Passive {
timeout: timeout.into(),
alarm: alarm.into(),
},
ClientMode::Dark(timeout) => Mode::Dark {
timeout: timeout.into(),
},
ClientMode::Off => Mode::Offline,
}
}
}
#[derive(Serialize, Deserialize)]
pub struct UserDefaults {
pub is_first_launch: bool,
#[serde(with = "algorithm_serde")]
pub pruning: Algorithm,
pub tracing: bool,
pub fat_db: bool,
#[serde(flatten)]
mode: Mode,
}
impl UserDefaults {
pub fn mode(&self) -> ClientMode {
self.mode.clone().into()
}
pub fn set_mode(&mut self, mode: ClientMode) {
self.mode = mode.into();
}
}
mod algorithm_serde {
use journaldb::Algorithm;
use serde::{de::Error, Deserialize, Deserializer, Serialize, Serializer};
pub fn serialize<S>(algorithm: &Algorithm, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
algorithm.as_str().serialize(serializer)
}
pub fn deserialize<'de, D>(deserializer: D) -> Result<Algorithm, D::Error>
where
D: Deserializer<'de>,
{
let pruning = String::deserialize(deserializer)?;
pruning
.parse()
.map_err(|_| Error::custom("invalid pruning method"))
}
}
impl Default for UserDefaults {
fn default() -> Self {
UserDefaults {
is_first_launch: true,
pruning: Algorithm::OverlayRecent,
tracing: false,
fat_db: false,
mode: Mode::Active,
}
}
}
impl UserDefaults {
pub fn load<P>(path: P) -> Result<Self, String>
where
P: AsRef<Path>,
{
match File::open(path) {
Ok(file) => match from_reader(file) {
Ok(defaults) => Ok(defaults),
Err(e) => {
warn!("Error loading user defaults file: {:?}", e);
Ok(UserDefaults::default())
}
},
_ => Ok(UserDefaults::default()),
}
}
pub fn save<P>(&self, path: P) -> Result<(), String>
where
P: AsRef<Path>,
{
let mut file: File =
File::create(path).map_err(|_| "Cannot create user defaults file".to_owned())?;
file.write_all(to_string(&self).unwrap().as_bytes())
.map_err(|_| "Failed to save user defaults".to_owned())
}
}
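// Serialized form (illustrative; assumes `Algorithm::as_str()` yields a short name such as
// "fast" and that the node runs in the default active mode): the file would look roughly like
//
//     {"is_first_launch":false,"pruning":"fast","tracing":false,"fat_db":false,"mode":"active"}
//
// with the extra `mode.timeout`/`mode.alarm` keys only appearing for the passive and dark modes.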


@@ -1,9 +1,9 @@
[package]
description = "Parity Ethereum Chain Specification"
name = "chainspec"
version = "0.1.0"
authors = ["Marek Kotewicz <marek@parity.io>"]
[dependencies]
ethjson = { path = "../../crates/ethjson" }
ethjson = { path = "../json" }
serde_json = "1.0"
serde_ignored = "0.0.4"

chainspec/src/main.rs (new file, 64 lines)

@@ -0,0 +1,64 @@
// Copyright 2015-2018 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
extern crate serde_json;
extern crate serde_ignored;
extern crate ethjson;
use std::collections::BTreeSet;
use std::{fs, env, process};
use ethjson::spec::Spec;
fn quit(s: &str) -> ! {
println!("{}", s);
process::exit(1);
}
fn main() {
let mut args = env::args();
if args.len() != 2 {
quit("You need to specify chainspec.json\n\
\n\
./chainspec <chainspec.json>");
}
let path = args.nth(1).expect("args.len() == 2; qed");
let file = match fs::File::open(&path) {
Ok(file) => file,
Err(_) => quit(&format!("{} could not be opened", path)),
};
let mut unused = BTreeSet::new();
let mut deserializer = serde_json::Deserializer::from_reader(file);
let spec: Result<Spec, _> = serde_ignored::deserialize(&mut deserializer, |field| {
unused.insert(field.to_string());
});
if let Err(err) = spec {
quit(&format!("{} {}", path, err.to_string()));
}
if !unused.is_empty() {
let err = unused.into_iter()
.map(|field| format!("{} unexpected field `{}`", path, field))
.collect::<Vec<_>>()
.join("\n");
quit(&err);
}
println!("{} is valid", path);
}
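// Example run (illustrative): `./chainspec foundation.json` prints `foundation.json is valid`
// on success, or one `unexpected field` line per unknown key otherwise.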


@@ -1,23 +0,0 @@
[package]
description = "OpenEthereum Account Management"
homepage = "https://github.com/openethereum/openethereum"
license = "GPL-3.0"
name = "ethcore-accounts"
version = "0.1.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2018"
[dependencies]
common-types = { path = "../ethcore/types" }
ethkey = { path = "ethkey" }
ethstore = { path = "ethstore" }
log = "0.4"
parity-crypto = { version = "0.6.2", features = [ "publickey" ] }
parking_lot = "0.11.1"
serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
[dev-dependencies]
ethereum-types = "0.9.2"
tempdir = "0.3"


@@ -1,69 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use parity_crypto::{
publickey::{KeyPair, Secret},
Keccak256,
};
use parity_wordlist;
/// Simple brainwallet.
pub struct Brain(String);
impl Brain {
pub fn new(s: String) -> Self {
Brain(s)
}
pub fn validate_phrase(phrase: &str, expected_words: usize) -> Result<(), ::WordlistError> {
parity_wordlist::validate_phrase(phrase, expected_words)
}
pub fn generate(&mut self) -> KeyPair {
let seed = self.0.clone();
let mut secret = seed.into_bytes().keccak256();
let mut i = 0;
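// Hash the seed repeatedly (at least 16384 rounds) before considering candidates, then keep
// hashing until the derived key pair's address starts with a zero byte; because the whole
// search is deterministic, the same phrase always yields the same key (see the test below).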
loop {
secret = secret.keccak256();
match i > 16384 {
false => i += 1,
true => {
if let Ok(pair) = Secret::import_key(&secret).and_then(KeyPair::from_secret) {
if pair.address()[0] == 0 {
trace!("Testing: {}, got: {:?}", self.0, pair.address());
return pair;
}
}
}
}
}
}
}
#[cfg(test)]
mod tests {
use Brain;
#[test]
fn test_brain() {
let words = "this is sparta!".to_owned();
let first_keypair = Brain::new(words.clone()).generate();
let second_keypair = Brain::new(words.clone()).generate();
assert_eq!(first_keypair.secret(), second_keypair.secret());
}
}


@@ -1,69 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::Brain;
use parity_crypto::publickey::{Error, KeyPair};
use parity_wordlist as wordlist;
/// Tries to find a brain-seed keypair whose address starts with the given prefix.
pub struct BrainPrefix {
prefix: Vec<u8>,
iterations: usize,
no_of_words: usize,
last_phrase: String,
}
impl BrainPrefix {
pub fn new(prefix: Vec<u8>, iterations: usize, no_of_words: usize) -> Self {
BrainPrefix {
prefix,
iterations,
no_of_words,
last_phrase: String::new(),
}
}
pub fn phrase(&self) -> &str {
&self.last_phrase
}
pub fn generate(&mut self) -> Result<KeyPair, Error> {
for _ in 0..self.iterations {
let phrase = wordlist::random_phrase(self.no_of_words);
let keypair = Brain::new(phrase.clone()).generate();
if keypair.address().as_ref().starts_with(&self.prefix) {
self.last_phrase = phrase;
return Ok(keypair);
}
}
Err(Error::Custom("Could not find keypair".into()))
}
}
#[cfg(test)]
mod tests {
use BrainPrefix;
#[test]
fn prefix_generator() {
let prefix = vec![0x00u8];
let keypair = BrainPrefix::new(prefix.clone(), usize::max_value(), 12)
.generate()
.unwrap();
assert!(keypair.address().as_bytes().starts_with(&prefix));
}
}


@@ -1,177 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::collections::HashSet;
use edit_distance::edit_distance;
use parity_crypto::publickey::Address;
use parity_wordlist;
use super::Brain;
/// Tries to find a phrase for address, given the number
/// of expected words and a partial phrase.
///
/// Returns `None` if phrase couldn't be found.
pub fn brain_recover(
address: &Address,
known_phrase: &str,
expected_words: usize,
) -> Option<String> {
let it = PhrasesIterator::from_known_phrase(known_phrase, expected_words);
for phrase in it {
let keypair = Brain::new(phrase.clone()).generate();
trace!("Testing: {}, got: {:?}", phrase, keypair.address());
if &keypair.address() == address {
return Some(phrase);
}
}
None
}
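// Illustrative usage only (not part of the original file): recovering a 12-word phrase with one
// mistyped word might look like
//
//     let recovered = brain_recover(&address, "thsi is sparta <nine more words>", 12);
//
// where `address` is the expected `Address` and the result is `Some(phrase)` only if a matching
// combination is found before the iterator is exhausted.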
fn generate_substitutions(word: &str) -> Vec<&'static str> {
let mut words = parity_wordlist::WORDS
.iter()
.cloned()
.map(|w| (edit_distance(w, word), w))
.collect::<Vec<_>>();
words.sort_by(|a, b| a.0.cmp(&b.0));
words.into_iter().map(|pair| pair.1).collect()
}
/// Iterator over possible phrases built from the candidate words at each position.
pub struct PhrasesIterator {
words: Vec<Vec<&'static str>>,
combinations: u64,
indexes: Vec<usize>,
has_next: bool,
}
impl PhrasesIterator {
pub fn from_known_phrase(known_phrase: &str, expected_words: usize) -> Self {
let known_words = parity_wordlist::WORDS
.iter()
.cloned()
.collect::<HashSet<_>>();
let mut words = known_phrase
.split(' ')
.map(|word| match known_words.get(word) {
None => {
info!(
"Invalid word '{}', looking for potential substitutions.",
word
);
let substitutions = generate_substitutions(word);
info!("Closest words: {:?}", &substitutions[..10]);
substitutions
}
Some(word) => vec![*word],
})
.collect::<Vec<_>>();
// add missing words
if words.len() < expected_words {
let to_add = expected_words - words.len();
info!("Number of words is insuficcient adding {} more.", to_add);
for _ in 0..to_add {
words.push(parity_wordlist::WORDS.iter().cloned().collect());
}
}
// start searching
PhrasesIterator::new(words)
}
pub fn new(words: Vec<Vec<&'static str>>) -> Self {
let combinations = words.iter().fold(1u64, |acc, x| acc * x.len() as u64);
let indexes = words.iter().map(|_| 0).collect();
info!("Starting to test {} possible combinations.", combinations);
PhrasesIterator {
words,
combinations,
indexes,
has_next: combinations > 0,
}
}
pub fn combinations(&self) -> u64 {
self.combinations
}
fn current(&self) -> String {
let mut s = self.words[0][self.indexes[0]].to_owned();
for i in 1..self.indexes.len() {
s.push(' ');
s.push_str(self.words[i][self.indexes[i]]);
}
s
}
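// Advances the per-word indexes like an odometer: bump the right-most position and carry to the
// left on overflow; returns `false` once every combination has been yielded.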
fn next_index(&mut self) -> bool {
let mut pos = self.indexes.len();
while pos > 0 {
pos -= 1;
self.indexes[pos] += 1;
if self.indexes[pos] >= self.words[pos].len() {
self.indexes[pos] = 0;
} else {
return true;
}
}
false
}
}
impl Iterator for PhrasesIterator {
type Item = String;
fn next(&mut self) -> Option<String> {
if !self.has_next {
return None;
}
let phrase = self.current();
self.has_next = self.next_index();
Some(phrase)
}
}
#[cfg(test)]
mod tests {
use super::PhrasesIterator;
#[test]
fn should_generate_possible_combinations() {
let mut it =
PhrasesIterator::new(vec![vec!["1", "2", "3"], vec!["test"], vec!["a", "b", "c"]]);
assert_eq!(it.combinations(), 9);
assert_eq!(it.next(), Some("1 test a".to_owned()));
assert_eq!(it.next(), Some("1 test b".to_owned()));
assert_eq!(it.next(), Some("1 test c".to_owned()));
assert_eq!(it.next(), Some("2 test a".to_owned()));
assert_eq!(it.next(), Some("2 test b".to_owned()));
assert_eq!(it.next(), Some("2 test c".to_owned()));
assert_eq!(it.next(), Some("3 test a".to_owned()));
assert_eq!(it.next(), Some("3 test b".to_owned()));
assert_eq!(it.next(), Some("3 test c".to_owned()));
assert_eq!(it.next(), None);
}
}
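
The `brain_recover` entry point itself is only exercised indirectly by the test above, so a minimal usage sketch may help; it assumes this crate is consumed under its usual `ethkey` name and that the target address was originally derived with `Brain`.

```rust
// Hypothetical sketch, not part of the diff: recover a brain-wallet phrase
// when one word may be misspelled and the rest are known.
use ethkey::brain_recover;
use parity_crypto::publickey::Address;

fn try_recover(address: &Address, partial_phrase: &str) -> Option<String> {
    // Misspelled words are replaced by their closest wordlist neighbours
    // (by edit distance); missing words, up to the expected 12, are
    // brute-forced from the full wordlist.
    brain_recover::brain_recover(address, partial_phrase, 12)
}
```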

View File

@ -1,88 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crypto::Error as CryptoError;
use std::{error, fmt};
#[derive(Debug)]
/// Crypto error
pub enum Error {
/// Invalid secret key
InvalidSecret,
/// Invalid public key
InvalidPublic,
/// Invalid address
InvalidAddress,
/// Invalid EC signature
InvalidSignature,
/// Invalid AES message
InvalidMessage,
/// IO Error
Io(::std::io::Error),
/// Custom
Custom(String),
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let msg = match *self {
Error::InvalidSecret => "Invalid secret".into(),
Error::InvalidPublic => "Invalid public".into(),
Error::InvalidAddress => "Invalid address".into(),
Error::InvalidSignature => "Invalid EC signature".into(),
Error::InvalidMessage => "Invalid AES message".into(),
Error::Io(ref err) => format!("I/O error: {}", err),
Error::Custom(ref s) => s.clone(),
};
f.write_fmt(format_args!("Crypto error ({})", msg))
}
}
impl error::Error for Error {
fn description(&self) -> &str {
"Crypto error"
}
}
impl Into<String> for Error {
fn into(self) -> String {
format!("{}", self)
}
}
impl From<CryptoError> for Error {
fn from(e: CryptoError) -> Error {
Error::Custom(e.to_string())
}
}
impl From<::secp256k1::Error> for Error {
fn from(e: ::secp256k1::Error) -> Error {
match e {
::secp256k1::Error::InvalidMessage => Error::InvalidMessage,
::secp256k1::Error::InvalidPublicKey => Error::InvalidPublic,
::secp256k1::Error::InvalidSecretKey => Error::InvalidSecret,
_ => Error::InvalidSignature,
}
}
}
impl From<::std::io::Error> for Error {
fn from(err: ::std::io::Error) -> Error {
Error::Io(err)
}
}

View File

@ -1,39 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
// #![warn(missing_docs)]
extern crate edit_distance;
extern crate parity_crypto;
extern crate parity_wordlist;
extern crate serde;
#[macro_use]
extern crate log;
#[macro_use]
extern crate serde_derive;
mod brain;
mod brain_prefix;
mod password;
mod prefix;
pub mod brain_recover;
pub use self::{
brain::Brain, brain_prefix::BrainPrefix, parity_wordlist::Error as WordlistError,
password::Password, prefix::Prefix,
};

View File

@ -1,59 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{fmt, ptr};
#[derive(Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct Password(String);
impl fmt::Debug for Password {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "Password(******)")
}
}
impl Password {
pub fn as_bytes(&self) -> &[u8] {
self.0.as_bytes()
}
pub fn as_str(&self) -> &str {
self.0.as_str()
}
}
// Custom drop impl to zero out memory.
impl Drop for Password {
fn drop(&mut self) {
unsafe {
for byte_ref in self.0.as_mut_vec() {
ptr::write_volatile(byte_ref, 0)
}
}
}
}
impl From<String> for Password {
fn from(s: String) -> Password {
Password(s)
}
}
impl<'a> From<&'a str> for Password {
fn from(s: &'a str) -> Password {
Password::from(String::from(s))
}
}
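
This file ships no tests, so a short sketch of the intended hygiene properties (masked `Debug` output, zero-on-drop) may be useful; the crate path `ethkey` is an assumption.

```rust
// Hypothetical sketch, not part of the diff.
use ethkey::Password;

fn password_hygiene() {
    let pw: Password = "correct horse battery staple".into();
    // The custom Debug impl masks the secret, so this prints `Password(******)`.
    println!("{:?}", pw);
    assert_eq!(pw.as_str(), "correct horse battery staple");
    // When `pw` goes out of scope, Drop overwrites the backing bytes with
    // volatile writes before the memory is freed.
}
```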

View File

@ -1,54 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use parity_crypto::publickey::{Error, Generator, KeyPair, Random};
/// Tries to find keypair with address starting with given prefix.
pub struct Prefix {
prefix: Vec<u8>,
iterations: usize,
}
impl Prefix {
pub fn new(prefix: Vec<u8>, iterations: usize) -> Self {
Prefix { prefix, iterations }
}
pub fn generate(&mut self) -> Result<KeyPair, Error> {
for _ in 0..self.iterations {
let keypair = Random.generate();
if keypair.address().as_ref().starts_with(&self.prefix) {
return Ok(keypair);
}
}
Err(Error::Custom("Could not find keypair".into()))
}
}
#[cfg(test)]
mod tests {
use Prefix;
#[test]
fn prefix_generator() {
let prefix = vec![0xffu8];
let keypair = Prefix::new(prefix.clone(), usize::max_value())
.generate()
.unwrap();
assert!(keypair.address().as_bytes().starts_with(&prefix));
}
}

View File

@ -1,57 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use json;
#[derive(Debug, PartialEq, Clone)]
pub struct Aes128Ctr {
pub iv: [u8; 16],
}
#[derive(Debug, PartialEq, Clone)]
pub enum Cipher {
Aes128Ctr(Aes128Ctr),
}
impl From<json::Aes128Ctr> for Aes128Ctr {
fn from(json: json::Aes128Ctr) -> Self {
Aes128Ctr { iv: json.iv.into() }
}
}
impl Into<json::Aes128Ctr> for Aes128Ctr {
fn into(self) -> json::Aes128Ctr {
json::Aes128Ctr {
iv: From::from(self.iv),
}
}
}
impl From<json::Cipher> for Cipher {
fn from(json: json::Cipher) -> Self {
match json {
json::Cipher::Aes128Ctr(params) => Cipher::Aes128Ctr(From::from(params)),
}
}
}
impl Into<json::Cipher> for Cipher {
fn into(self) -> json::Cipher {
match self {
Cipher::Aes128Ctr(params) => json::Cipher::Aes128Ctr(params.into()),
}
}
}

View File

@ -1,234 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use account::{Aes128Ctr, Cipher, Kdf, Pbkdf2, Prf};
use crypto::{self, publickey::Secret, Keccak256};
use ethkey::Password;
use json;
use random::Random;
use smallvec::SmallVec;
use std::{num::NonZeroU32, str};
use Error;
/// Encrypted data
#[derive(Debug, PartialEq, Clone)]
pub struct Crypto {
/// Encryption parameters
pub cipher: Cipher,
/// Encrypted data buffer
pub ciphertext: Vec<u8>,
/// Key derivation function parameters
pub kdf: Kdf,
/// Message authentication code
pub mac: [u8; 32],
}
impl From<json::Crypto> for Crypto {
fn from(json: json::Crypto) -> Self {
Crypto {
cipher: json.cipher.into(),
ciphertext: json.ciphertext.into(),
kdf: json.kdf.into(),
mac: json.mac.into(),
}
}
}
impl From<Crypto> for json::Crypto {
fn from(c: Crypto) -> Self {
json::Crypto {
cipher: c.cipher.into(),
ciphertext: c.ciphertext.into(),
kdf: c.kdf.into(),
mac: c.mac.into(),
}
}
}
impl str::FromStr for Crypto {
type Err = <json::Crypto as str::FromStr>::Err;
fn from_str(s: &str) -> Result<Self, Self::Err> {
s.parse::<json::Crypto>().map(Into::into)
}
}
impl From<Crypto> for String {
fn from(c: Crypto) -> Self {
json::Crypto::from(c).into()
}
}
impl Crypto {
/// Encrypt account secret
pub fn with_secret(
secret: &Secret,
password: &Password,
iterations: NonZeroU32,
) -> Result<Self, crypto::Error> {
Crypto::with_plain(secret.as_bytes(), password, iterations)
}
/// Encrypt custom plain data
pub fn with_plain(
plain: &[u8],
password: &Password,
iterations: NonZeroU32,
) -> Result<Self, crypto::Error> {
let salt: [u8; 32] = Random::random();
let iv: [u8; 16] = Random::random();
// two parts of derived key
// DK = [ DK[0..15] DK[16..31] ] = [derived_left_bits, derived_right_bits]
let (derived_left_bits, derived_right_bits) =
crypto::derive_key_iterations(password.as_bytes(), &salt, iterations.get());
// preallocated (on-stack in case of `Secret`) buffer to hold cipher
// length = length(plain) since we are using the CTR approach
let plain_len = plain.len();
let mut ciphertext: SmallVec<[u8; 32]> = SmallVec::from_vec(vec![0; plain_len]);
// aes-128-ctr with initial vector of iv
crypto::aes::encrypt_128_ctr(&derived_left_bits, &iv, plain, &mut *ciphertext)?;
// KECCAK(DK[16..31] ++ <ciphertext>), where DK[16..31] - derived_right_bits
let mac = crypto::derive_mac(&derived_right_bits, &*ciphertext).keccak256();
Ok(Crypto {
cipher: Cipher::Aes128Ctr(Aes128Ctr { iv: iv }),
ciphertext: ciphertext.into_vec(),
kdf: Kdf::Pbkdf2(Pbkdf2 {
dklen: crypto::KEY_LENGTH as u32,
salt: salt.to_vec(),
c: iterations,
prf: Prf::HmacSha256,
}),
mac: mac,
})
}
/// Try to decrypt and convert result to account secret
pub fn secret(&self, password: &Password) -> Result<Secret, Error> {
if self.ciphertext.len() > 32 {
return Err(Error::InvalidSecret);
}
let secret = self.do_decrypt(password, 32)?;
Ok(Secret::import_key(&secret)?)
}
/// Try to decrypt and return result as is
pub fn decrypt(&self, password: &Password) -> Result<Vec<u8>, Error> {
let expected_len = self.ciphertext.len();
self.do_decrypt(password, expected_len)
}
fn do_decrypt(&self, password: &Password, expected_len: usize) -> Result<Vec<u8>, Error> {
let (derived_left_bits, derived_right_bits) = match self.kdf {
Kdf::Pbkdf2(ref params) => {
crypto::derive_key_iterations(password.as_bytes(), &params.salt, params.c.get())
}
Kdf::Scrypt(ref params) => crypto::scrypt::derive_key(
password.as_bytes(),
&params.salt,
params.n,
params.p,
params.r,
)?,
};
let mac = crypto::derive_mac(&derived_right_bits, &self.ciphertext).keccak256();
if !crypto::is_equal(&mac, &self.mac) {
return Err(Error::InvalidPassword);
}
let mut plain: SmallVec<[u8; 32]> = SmallVec::from_vec(vec![0; expected_len]);
match self.cipher {
Cipher::Aes128Ctr(ref params) => {
// checked by callers
debug_assert!(expected_len >= self.ciphertext.len());
let from = expected_len - self.ciphertext.len();
crypto::aes::decrypt_128_ctr(
&derived_left_bits,
&params.iv,
&self.ciphertext,
&mut plain[from..],
)?;
Ok(plain.into_iter().collect())
}
}
}
}
#[cfg(test)]
mod tests {
use super::{Crypto, Error, NonZeroU32};
use crypto::publickey::{Generator, Random};
lazy_static! {
static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(10240).expect("10240 > 0; qed");
}
#[test]
fn crypto_with_secret_create() {
let keypair = Random.generate();
let passwd = "this is sparta".into();
let crypto = Crypto::with_secret(keypair.secret(), &passwd, *ITERATIONS).unwrap();
let secret = crypto.secret(&passwd).unwrap();
assert_eq!(keypair.secret(), &secret);
}
#[test]
fn crypto_with_secret_invalid_password() {
let keypair = Random.generate();
let crypto =
Crypto::with_secret(keypair.secret(), &"this is sparta".into(), *ITERATIONS).unwrap();
assert_matches!(
crypto.secret(&"this is sparta!".into()),
Err(Error::InvalidPassword)
)
}
#[test]
fn crypto_with_null_plain_data() {
let original_data = b"";
let passwd = "this is sparta".into();
let crypto = Crypto::with_plain(&original_data[..], &passwd, *ITERATIONS).unwrap();
let decrypted_data = crypto.decrypt(&passwd).unwrap();
assert_eq!(original_data[..], *decrypted_data);
}
#[test]
fn crypto_with_tiny_plain_data() {
let original_data = b"{}";
let passwd = "this is sparta".into();
let crypto = Crypto::with_plain(&original_data[..], &passwd, *ITERATIONS).unwrap();
let decrypted_data = crypto.decrypt(&passwd).unwrap();
assert_eq!(original_data[..], *decrypted_data);
}
#[test]
fn crypto_with_huge_plain_data() {
let original_data: Vec<_> = (1..65536).map(|i| (i % 256) as u8).collect();
let passwd = "this is sparta".into();
let crypto = Crypto::with_plain(&original_data, &passwd, *ITERATIONS).unwrap();
let decrypted_data = crypto.decrypt(&passwd).unwrap();
assert_eq!(&original_data, &decrypted_data);
}
}
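
The `FromStr` impl above is not exercised by these tests; a hedged sketch, written as if appended to this module, shows how a keystore "crypto" JSON object could be parsed and decrypted directly. It assumes the parse error type implements `Debug`.

```rust
// Hypothetical sketch, not part of the diff: parse a "crypto" JSON blob and
// decrypt it with the supplied password.
fn decrypt_json_blob(blob: &str, password: &Password) -> Result<Vec<u8>, Error> {
    // FromStr goes through json::Crypto, then into the in-memory Crypto type.
    let crypto: Crypto = blob
        .parse()
        .map_err(|e| Error::Custom(format!("{:?}", e)))?;
    // A wrong password fails the Keccak-256 MAC check inside decrypt().
    crypto.decrypt(password)
}
```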

View File

@ -1,126 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use json;
use std::num::NonZeroU32;
#[derive(Debug, PartialEq, Clone)]
pub enum Prf {
HmacSha256,
}
#[derive(Debug, PartialEq, Clone)]
pub struct Pbkdf2 {
pub c: NonZeroU32,
pub dklen: u32,
pub prf: Prf,
pub salt: Vec<u8>,
}
#[derive(Debug, PartialEq, Clone)]
pub struct Scrypt {
pub dklen: u32,
pub p: u32,
pub n: u32,
pub r: u32,
pub salt: Vec<u8>,
}
#[derive(Debug, PartialEq, Clone)]
pub enum Kdf {
Pbkdf2(Pbkdf2),
Scrypt(Scrypt),
}
impl From<json::Prf> for Prf {
fn from(json: json::Prf) -> Self {
match json {
json::Prf::HmacSha256 => Prf::HmacSha256,
}
}
}
impl Into<json::Prf> for Prf {
fn into(self) -> json::Prf {
match self {
Prf::HmacSha256 => json::Prf::HmacSha256,
}
}
}
impl From<json::Pbkdf2> for Pbkdf2 {
fn from(json: json::Pbkdf2) -> Self {
Pbkdf2 {
c: json.c,
dklen: json.dklen,
prf: From::from(json.prf),
salt: json.salt.into(),
}
}
}
impl Into<json::Pbkdf2> for Pbkdf2 {
fn into(self) -> json::Pbkdf2 {
json::Pbkdf2 {
c: self.c,
dklen: self.dklen,
prf: self.prf.into(),
salt: From::from(self.salt),
}
}
}
impl From<json::Scrypt> for Scrypt {
fn from(json: json::Scrypt) -> Self {
Scrypt {
dklen: json.dklen,
p: json.p,
n: json.n,
r: json.r,
salt: json.salt.into(),
}
}
}
impl Into<json::Scrypt> for Scrypt {
fn into(self) -> json::Scrypt {
json::Scrypt {
dklen: self.dklen,
p: self.p,
n: self.n,
r: self.r,
salt: From::from(self.salt),
}
}
}
impl From<json::Kdf> for Kdf {
fn from(json: json::Kdf) -> Self {
match json {
json::Kdf::Pbkdf2(params) => Kdf::Pbkdf2(From::from(params)),
json::Kdf::Scrypt(params) => Kdf::Scrypt(From::from(params)),
}
}
}
impl Into<json::Kdf> for Kdf {
fn into(self) -> json::Kdf {
match self {
Kdf::Pbkdf2(params) => json::Kdf::Pbkdf2(params.into()),
Kdf::Scrypt(params) => json::Kdf::Scrypt(params.into()),
}
}
}

View File

@ -1,287 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::crypto::Crypto;
use account::Version;
use crypto::{
self,
publickey::{ecdh::agree, sign, Address, KeyPair, Message, Public, Secret, Signature},
};
use ethkey::Password;
use json;
use std::num::NonZeroU32;
use Error;
/// Account representation.
#[derive(Debug, PartialEq, Clone)]
pub struct SafeAccount {
/// Account ID
pub id: [u8; 16],
/// Account version
pub version: Version,
/// Account address
pub address: Address,
/// Account private key derivation definition.
pub crypto: Crypto,
/// Account filename
pub filename: Option<String>,
/// Account name
pub name: String,
/// Account metadata
pub meta: String,
}
impl Into<json::KeyFile> for SafeAccount {
fn into(self) -> json::KeyFile {
json::KeyFile {
id: From::from(self.id),
version: self.version.into(),
address: Some(self.address.into()),
crypto: self.crypto.into(),
name: Some(self.name.into()),
meta: Some(self.meta.into()),
}
}
}
impl SafeAccount {
/// Create a new account
pub fn create(
keypair: &KeyPair,
id: [u8; 16],
password: &Password,
iterations: NonZeroU32,
name: String,
meta: String,
) -> Result<Self, crypto::Error> {
Ok(SafeAccount {
id: id,
version: Version::V3,
crypto: Crypto::with_secret(keypair.secret(), password, iterations)?,
address: keypair.address(),
filename: None,
name: name,
meta: meta,
})
}
/// Create a new `SafeAccount` from the given `json`; if it was read from a
/// file, the `filename` should be `Some` name. If it is as yet anonymous, then it
/// can be left `None`.
/// In case `password` is provided, we will attempt to read the secret from the keyfile
/// and derive the address from it instead of reading it directly.
/// Providing password is required for `json::KeyFile`s with no address.
pub fn from_file(
json: json::KeyFile,
filename: Option<String>,
password: &Option<Password>,
) -> Result<Self, Error> {
let crypto = Crypto::from(json.crypto);
let address = match (password, &json.address) {
(None, Some(json_address)) => json_address.into(),
(None, None) => Err(Error::Custom(
"This keystore does not contain address. You need to provide password to import it"
.into(),
))?,
(Some(password), json_address) => {
let derived_address = KeyPair::from_secret(
crypto
.secret(&password)
.map_err(|_| Error::InvalidPassword)?,
)?
.address();
match json_address {
Some(json_address) => {
let json_address = json_address.into();
if derived_address != json_address {
warn!("Detected address mismatch when opening an account. Derived: {:?}, in json got: {:?}",
derived_address, json_address);
}
}
_ => {}
}
derived_address
}
};
Ok(SafeAccount {
id: json.id.into(),
version: json.version.into(),
address,
crypto,
filename,
name: json.name.unwrap_or(String::new()),
meta: json.meta.unwrap_or("{}".to_owned()),
})
}
/// Create a new `SafeAccount` from the given vault `json`; if it was read from a
/// file, the `filename` should be `Some` name. If it is as yet anonymous, then it
/// can be left `None`.
pub fn from_vault_file(
password: &Password,
json: json::VaultKeyFile,
filename: Option<String>,
) -> Result<Self, Error> {
let meta_crypto: Crypto = json.metacrypto.into();
let meta_plain = meta_crypto.decrypt(password)?;
let meta_plain =
json::VaultKeyMeta::load(&meta_plain).map_err(|e| Error::Custom(format!("{:?}", e)))?;
SafeAccount::from_file(
json::KeyFile {
id: json.id,
version: json.version,
crypto: json.crypto,
address: Some(meta_plain.address),
name: meta_plain.name,
meta: meta_plain.meta,
},
filename,
&None,
)
}
/// Create a new `VaultKeyFile` from the given `self`
pub fn into_vault_file(
self,
iterations: NonZeroU32,
password: &Password,
) -> Result<json::VaultKeyFile, Error> {
let meta_plain = json::VaultKeyMeta {
address: self.address.into(),
name: Some(self.name),
meta: Some(self.meta),
};
let meta_plain = meta_plain
.write()
.map_err(|e| Error::Custom(format!("{:?}", e)))?;
let meta_crypto = Crypto::with_plain(&meta_plain, password, iterations)?;
Ok(json::VaultKeyFile {
id: self.id.into(),
version: self.version.into(),
crypto: self.crypto.into(),
metacrypto: meta_crypto.into(),
})
}
/// Sign a message.
pub fn sign(&self, password: &Password, message: &Message) -> Result<Signature, Error> {
let secret = self.crypto.secret(password)?;
sign(&secret, message).map_err(From::from)
}
/// Decrypt a message.
pub fn decrypt(
&self,
password: &Password,
shared_mac: &[u8],
message: &[u8],
) -> Result<Vec<u8>, Error> {
let secret = self.crypto.secret(password)?;
crypto::publickey::ecies::decrypt(&secret, shared_mac, message).map_err(From::from)
}
/// Agree on shared key.
pub fn agree(&self, password: &Password, other: &Public) -> Result<Secret, Error> {
let secret = self.crypto.secret(password)?;
agree(&secret, other).map_err(From::from)
}
/// Derive public key.
pub fn public(&self, password: &Password) -> Result<Public, Error> {
let secret = self.crypto.secret(password)?;
Ok(KeyPair::from_secret(secret)?.public().clone())
}
/// Change account's password.
pub fn change_password(
&self,
old_password: &Password,
new_password: &Password,
iterations: NonZeroU32,
) -> Result<Self, Error> {
let secret = self.crypto.secret(old_password)?;
let result = SafeAccount {
id: self.id.clone(),
version: self.version.clone(),
crypto: Crypto::with_secret(&secret, new_password, iterations)?,
address: self.address.clone(),
filename: self.filename.clone(),
name: self.name.clone(),
meta: self.meta.clone(),
};
Ok(result)
}
/// Check if password matches the account.
pub fn check_password(&self, password: &Password) -> bool {
self.crypto.secret(password).is_ok()
}
}
#[cfg(test)]
mod tests {
use super::{NonZeroU32, SafeAccount};
use crypto::publickey::{verify_public, Generator, Random};
lazy_static! {
static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(10240).expect("10240 > 0; qed");
}
#[test]
fn sign_and_verify_public() {
let keypair = Random.generate();
let password = "hello world".into();
let message = [1u8; 32].into();
let account = SafeAccount::create(
&keypair,
[0u8; 16],
&password,
*ITERATIONS,
"Test".to_owned(),
"{}".to_owned(),
);
let signature = account.unwrap().sign(&password, &message).unwrap();
assert!(verify_public(keypair.public(), &signature, &message).unwrap());
}
#[test]
fn change_password() {
let keypair = Random.generate();
let first_password = "hello world".into();
let sec_password = "this is sparta".into();
let message = [1u8; 32].into();
let account = SafeAccount::create(
&keypair,
[0u8; 16],
&first_password,
*ITERATIONS,
"Test".to_owned(),
"{}".to_owned(),
)
.unwrap();
let new_account = account
.change_password(&first_password, &sec_password, *ITERATIONS)
.unwrap();
assert!(account.sign(&first_password, &message).is_ok());
assert!(account.sign(&sec_password, &message).is_err());
assert!(new_account.sign(&first_password, &message).is_err());
assert!(new_account.sign(&sec_password, &message).is_ok());
}
}

View File

@ -1,608 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::{
vault::{VaultDiskDirectory, VAULT_FILE_NAME},
KeyDirectory, VaultKey, VaultKeyDirectory, VaultKeyDirectoryProvider,
};
use ethkey::Password;
use json::{self, Uuid};
use std::{
collections::HashMap,
fs, io,
io::Write,
path::{Path, PathBuf},
};
use time;
use Error;
use SafeAccount;
const IGNORED_FILES: &'static [&'static str] = &[
"thumbs.db",
"address_book.json",
"dapps_policy.json",
"dapps_accounts.json",
"dapps_history.json",
"vault.json",
];
/// Find a unique filename that does not yet exist, using a four-letter random suffix.
pub fn find_unique_filename_using_random_suffix(
parent_path: &Path,
original_filename: &str,
) -> io::Result<String> {
let mut path = parent_path.join(original_filename);
let mut deduped_filename = original_filename.to_string();
if path.exists() {
const MAX_RETRIES: usize = 500;
let mut retries = 0;
while path.exists() {
if retries >= MAX_RETRIES {
return Err(io::Error::new(
io::ErrorKind::Other,
"Exceeded maximum retries when deduplicating filename.",
));
}
let suffix = ::random::random_string(4);
deduped_filename = format!("{}-{}", original_filename, suffix);
path.set_file_name(&deduped_filename);
retries += 1;
}
}
Ok(deduped_filename)
}
/// Create a new file and restrict permissions to owner only. It errors if the file already exists.
#[cfg(unix)]
pub fn create_new_file_with_permissions_to_owner(file_path: &Path) -> io::Result<fs::File> {
use std::os::unix::fs::OpenOptionsExt;
fs::OpenOptions::new()
.write(true)
.create_new(true)
.mode((libc::S_IWUSR | libc::S_IRUSR) as u32)
.open(file_path)
}
/// Create a new file and restrict permissions to owner only. It errors if the file already exists.
#[cfg(not(unix))]
pub fn create_new_file_with_permissions_to_owner(file_path: &Path) -> io::Result<fs::File> {
fs::OpenOptions::new()
.write(true)
.create_new(true)
.open(file_path)
}
/// Create a new file and restrict permissions to owner only. It replaces the existing file if it already exists.
#[cfg(unix)]
pub fn replace_file_with_permissions_to_owner(file_path: &Path) -> io::Result<fs::File> {
use std::os::unix::fs::PermissionsExt;
let file = fs::File::create(file_path)?;
let mut permissions = file.metadata()?.permissions();
permissions.set_mode((libc::S_IWUSR | libc::S_IRUSR) as u32);
file.set_permissions(permissions)?;
Ok(file)
}
/// Create a new file and restrict permissions to owner only. It replaces the existing file if it already exists.
#[cfg(not(unix))]
pub fn replace_file_with_permissions_to_owner(file_path: &Path) -> io::Result<fs::File> {
fs::File::create(file_path)
}
/// Root keys directory implementation
pub type RootDiskDirectory = DiskDirectory<DiskKeyFileManager>;
/// Disk directory key file manager
pub trait KeyFileManager: Send + Sync {
/// Read `SafeAccount` from given key file stream
fn read<T>(&self, filename: Option<String>, reader: T) -> Result<SafeAccount, Error>
where
T: io::Read;
/// Write `SafeAccount` to given key file stream
fn write<T>(&self, account: SafeAccount, writer: &mut T) -> Result<(), Error>
where
T: io::Write;
}
/// Disk-based keys directory implementation
pub struct DiskDirectory<T>
where
T: KeyFileManager,
{
path: PathBuf,
key_manager: T,
}
/// Keys file manager for root keys directory
#[derive(Default)]
pub struct DiskKeyFileManager {
password: Option<Password>,
}
impl RootDiskDirectory {
pub fn create<P>(path: P) -> Result<Self, Error>
where
P: AsRef<Path>,
{
fs::create_dir_all(&path)?;
Ok(Self::at(path))
}
/// Allows reading keyfiles with a given password (needed for keyfiles w/o address)
pub fn with_password(&self, password: Option<Password>) -> Self {
DiskDirectory::new(&self.path, DiskKeyFileManager { password })
}
pub fn at<P>(path: P) -> Self
where
P: AsRef<Path>,
{
DiskDirectory::new(path, DiskKeyFileManager::default())
}
}
impl<T> DiskDirectory<T>
where
T: KeyFileManager,
{
/// Create new disk directory instance
pub fn new<P>(path: P, key_manager: T) -> Self
where
P: AsRef<Path>,
{
DiskDirectory {
path: path.as_ref().to_path_buf(),
key_manager: key_manager,
}
}
fn files(&self) -> Result<Vec<PathBuf>, Error> {
Ok(fs::read_dir(&self.path)?
.flat_map(Result::ok)
.filter(|entry| {
let metadata = entry.metadata().ok();
let file_name = entry.file_name();
let name = file_name.to_string_lossy();
// filter directories
metadata.map_or(false, |m| !m.is_dir()) &&
// hidden files
!name.starts_with(".") &&
// other ignored files
!IGNORED_FILES.contains(&&*name)
})
.map(|entry| entry.path())
.collect::<Vec<PathBuf>>())
}
pub fn files_hash(&self) -> Result<u64, Error> {
use std::{collections::hash_map::DefaultHasher, hash::Hasher};
let mut hasher = DefaultHasher::new();
let files = self.files()?;
for file in files {
hasher.write(file.to_str().unwrap_or("").as_bytes())
}
Ok(hasher.finish())
}
fn last_modification_date(&self) -> Result<u64, Error> {
use std::time::{Duration, UNIX_EPOCH};
let duration = fs::metadata(&self.path)?
.modified()?
.duration_since(UNIX_EPOCH)
.unwrap_or(Duration::default());
let timestamp = duration.as_secs() ^ (duration.subsec_nanos() as u64);
Ok(timestamp)
}
/// all accounts found in keys directory
fn files_content(&self) -> Result<HashMap<PathBuf, SafeAccount>, Error> {
// it's not done using one iterator because
// there is an issue with rustc and it takes too much time to compile
let paths = self.files()?;
Ok(paths
.into_iter()
.filter_map(|path| {
let filename = Some(
path.file_name()
.and_then(|n| n.to_str())
.expect("Keys have valid UTF8 names only.")
.to_owned(),
);
fs::File::open(path.clone())
.map_err(Into::into)
.and_then(|file| self.key_manager.read(filename, file))
.map_err(|err| {
warn!("Invalid key file: {:?} ({})", path, err);
err
})
.map(|account| (path, account))
.ok()
})
.collect())
}
/// Insert account with the given filename. If the filename is a duplicate of any stored account and dedup is set to
/// true, a random suffix is appended to the filename.
pub fn insert_with_filename(
&self,
account: SafeAccount,
mut filename: String,
dedup: bool,
) -> Result<SafeAccount, Error> {
if dedup {
filename = find_unique_filename_using_random_suffix(&self.path, &filename)?;
}
// path to keyfile
let keyfile_path = self.path.join(filename.as_str());
// update account filename
let original_account = account.clone();
let mut account = account;
account.filename = Some(filename);
{
// save the file
let mut file = if dedup {
create_new_file_with_permissions_to_owner(&keyfile_path)?
} else {
replace_file_with_permissions_to_owner(&keyfile_path)?
};
// write key content
self.key_manager
.write(original_account, &mut file)
.map_err(|e| Error::Custom(format!("{:?}", e)))?;
file.flush()?;
file.sync_all()?;
}
Ok(account)
}
/// Get key file manager reference
pub fn key_manager(&self) -> &T {
&self.key_manager
}
}
impl<T> KeyDirectory for DiskDirectory<T>
where
T: KeyFileManager,
{
fn load(&self) -> Result<Vec<SafeAccount>, Error> {
let accounts = self
.files_content()?
.into_iter()
.map(|(_, account)| account)
.collect();
Ok(accounts)
}
fn update(&self, account: SafeAccount) -> Result<SafeAccount, Error> {
// Disk store handles updates correctly iff filename is the same
let filename = account_filename(&account);
self.insert_with_filename(account, filename, false)
}
fn insert(&self, account: SafeAccount) -> Result<SafeAccount, Error> {
let filename = account_filename(&account);
self.insert_with_filename(account, filename, true)
}
fn remove(&self, account: &SafeAccount) -> Result<(), Error> {
// enumerate all entries in keystore
// and find entry with given address
let to_remove = self
.files_content()?
.into_iter()
.find(|&(_, ref acc)| acc.id == account.id && acc.address == account.address);
// remove it
match to_remove {
None => Err(Error::InvalidAccount),
Some((path, _)) => fs::remove_file(path).map_err(From::from),
}
}
fn path(&self) -> Option<&PathBuf> {
Some(&self.path)
}
fn as_vault_provider(&self) -> Option<&dyn VaultKeyDirectoryProvider> {
Some(self)
}
fn unique_repr(&self) -> Result<u64, Error> {
self.last_modification_date()
}
}
impl<T> VaultKeyDirectoryProvider for DiskDirectory<T>
where
T: KeyFileManager,
{
fn create(&self, name: &str, key: VaultKey) -> Result<Box<dyn VaultKeyDirectory>, Error> {
let vault_dir = VaultDiskDirectory::create(&self.path, name, key)?;
Ok(Box::new(vault_dir))
}
fn open(&self, name: &str, key: VaultKey) -> Result<Box<dyn VaultKeyDirectory>, Error> {
let vault_dir = VaultDiskDirectory::at(&self.path, name, key)?;
Ok(Box::new(vault_dir))
}
fn list_vaults(&self) -> Result<Vec<String>, Error> {
Ok(fs::read_dir(&self.path)?
.filter_map(|e| e.ok().map(|e| e.path()))
.filter_map(|path| {
let mut vault_file_path = path.clone();
vault_file_path.push(VAULT_FILE_NAME);
if vault_file_path.is_file() {
path.file_name()
.and_then(|f| f.to_str())
.map(|f| f.to_owned())
} else {
None
}
})
.collect())
}
fn vault_meta(&self, name: &str) -> Result<String, Error> {
VaultDiskDirectory::meta_at(&self.path, name)
}
}
impl KeyFileManager for DiskKeyFileManager {
fn read<T>(&self, filename: Option<String>, reader: T) -> Result<SafeAccount, Error>
where
T: io::Read,
{
let key_file =
json::KeyFile::load(reader).map_err(|e| Error::Custom(format!("{:?}", e)))?;
SafeAccount::from_file(key_file, filename, &self.password)
}
fn write<T>(&self, mut account: SafeAccount, writer: &mut T) -> Result<(), Error>
where
T: io::Write,
{
// when account is moved back to root directory from vault
// => remove vault field from meta
account.meta = json::remove_vault_name_from_json_meta(&account.meta)
.map_err(|err| Error::Custom(format!("{:?}", err)))?;
let key_file: json::KeyFile = account.into();
key_file
.write(writer)
.map_err(|e| Error::Custom(format!("{:?}", e)))
}
}
fn account_filename(account: &SafeAccount) -> String {
// build file path
account.filename.clone().unwrap_or_else(|| {
let timestamp = time::strftime("%Y-%m-%dT%H-%M-%S", &time::now_utc())
.expect("Time-format string is valid.");
format!("UTC--{}Z--{}", timestamp, Uuid::from(account.id))
})
}
#[cfg(test)]
mod test {
extern crate tempdir;
use self::tempdir::TempDir;
use super::{KeyDirectory, RootDiskDirectory, VaultKey};
use account::SafeAccount;
use crypto::publickey::{Generator, Random};
use std::{env, fs, num::NonZeroU32};
lazy_static! {
static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(1024).expect("1024 > 0; qed");
}
#[test]
fn should_create_new_account() {
// given
let mut dir = env::temp_dir();
dir.push("ethstore_should_create_new_account");
let keypair = Random.generate();
let password = "hello world".into();
let directory = RootDiskDirectory::create(dir.clone()).unwrap();
// when
let account = SafeAccount::create(
&keypair,
[0u8; 16],
&password,
*ITERATIONS,
"Test".to_owned(),
"{}".to_owned(),
);
let res = directory.insert(account.unwrap());
// then
assert!(res.is_ok(), "Should save account successfully.");
assert!(
res.unwrap().filename.is_some(),
"Filename has been assigned."
);
// cleanup
let _ = fs::remove_dir_all(dir);
}
#[test]
fn should_handle_duplicate_filenames() {
// given
let mut dir = env::temp_dir();
dir.push("ethstore_should_handle_duplicate_filenames");
let keypair = Random.generate();
let password = "hello world".into();
let directory = RootDiskDirectory::create(dir.clone()).unwrap();
// when
let account = SafeAccount::create(
&keypair,
[0u8; 16],
&password,
*ITERATIONS,
"Test".to_owned(),
"{}".to_owned(),
)
.unwrap();
let filename = "test".to_string();
let dedup = true;
directory
.insert_with_filename(account.clone(), "foo".to_string(), dedup)
.unwrap();
let file1 = directory
.insert_with_filename(account.clone(), filename.clone(), dedup)
.unwrap()
.filename
.unwrap();
let file2 = directory
.insert_with_filename(account.clone(), filename.clone(), dedup)
.unwrap()
.filename
.unwrap();
let file3 = directory
.insert_with_filename(account.clone(), filename.clone(), dedup)
.unwrap()
.filename
.unwrap();
// then
// the first file should have the original name
assert_eq!(file1, filename);
// the following duplicate files should have a suffix appended
assert!(file2 != file3);
assert_eq!(file2.len(), filename.len() + 5);
assert_eq!(file3.len(), filename.len() + 5);
// cleanup
let _ = fs::remove_dir_all(dir);
}
#[test]
fn should_manage_vaults() {
// given
let mut dir = env::temp_dir();
dir.push("should_create_new_vault");
let directory = RootDiskDirectory::create(dir.clone()).unwrap();
let vault_name = "vault";
let password = "password".into();
// then
assert!(directory.as_vault_provider().is_some());
// and when
let before_root_items_count = fs::read_dir(&dir).unwrap().count();
let vault = directory
.as_vault_provider()
.unwrap()
.create(vault_name, VaultKey::new(&password, *ITERATIONS));
// then
assert!(vault.is_ok());
let after_root_items_count = fs::read_dir(&dir).unwrap().count();
assert!(after_root_items_count > before_root_items_count);
// and when
let vault = directory
.as_vault_provider()
.unwrap()
.open(vault_name, VaultKey::new(&password, *ITERATIONS));
// then
assert!(vault.is_ok());
let after_root_items_count2 = fs::read_dir(&dir).unwrap().count();
assert!(after_root_items_count == after_root_items_count2);
// cleanup
let _ = fs::remove_dir_all(dir);
}
#[test]
fn should_list_vaults() {
// given
let temp_path = TempDir::new("").unwrap();
let directory = RootDiskDirectory::create(&temp_path).unwrap();
let vault_provider = directory.as_vault_provider().unwrap();
let iter = NonZeroU32::new(1).expect("1 > 0; qed");
vault_provider
.create("vault1", VaultKey::new(&"password1".into(), iter))
.unwrap();
vault_provider
.create("vault2", VaultKey::new(&"password2".into(), iter))
.unwrap();
// then
let vaults = vault_provider.list_vaults().unwrap();
assert_eq!(vaults.len(), 2);
assert!(vaults.iter().any(|v| &*v == "vault1"));
assert!(vaults.iter().any(|v| &*v == "vault2"));
}
#[test]
fn hash_of_files() {
let temp_path = TempDir::new("").unwrap();
let directory = RootDiskDirectory::create(&temp_path).unwrap();
let hash = directory
.files_hash()
.expect("Files hash should be calculated ok");
assert_eq!(hash, 15130871412783076140);
let keypair = Random.generate();
let password = "test pass".into();
let account = SafeAccount::create(
&keypair,
[0u8; 16],
&password,
*ITERATIONS,
"Test".to_owned(),
"{}".to_owned(),
);
directory
.insert(account.unwrap())
.expect("Account should be inserted ok");
let new_hash = directory
.files_hash()
.expect("New files hash should be calculated ok");
assert!(
new_hash != hash,
"hash of the file list should change once directory content changed"
);
}
}

View File

@ -1,77 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crypto::publickey::Address;
use itertools;
use parking_lot::RwLock;
use std::collections::HashMap;
use super::KeyDirectory;
use Error;
use SafeAccount;
/// Accounts in-memory storage.
#[derive(Default)]
pub struct MemoryDirectory {
accounts: RwLock<HashMap<Address, Vec<SafeAccount>>>,
}
impl KeyDirectory for MemoryDirectory {
fn load(&self) -> Result<Vec<SafeAccount>, Error> {
Ok(itertools::Itertools::flatten(self.accounts.read().values().cloned()).collect())
}
fn update(&self, account: SafeAccount) -> Result<SafeAccount, Error> {
let mut lock = self.accounts.write();
let accounts = lock.entry(account.address.clone()).or_insert_with(Vec::new);
// If the filename is the same we just need to replace the entry
accounts.retain(|acc| acc.filename != account.filename);
accounts.push(account.clone());
Ok(account)
}
fn insert(&self, account: SafeAccount) -> Result<SafeAccount, Error> {
let mut lock = self.accounts.write();
let accounts = lock.entry(account.address.clone()).or_insert_with(Vec::new);
accounts.push(account.clone());
Ok(account)
}
fn remove(&self, account: &SafeAccount) -> Result<(), Error> {
let mut accounts = self.accounts.write();
let is_empty = if let Some(accounts) = accounts.get_mut(&account.address) {
if let Some(position) = accounts.iter().position(|acc| acc == account) {
accounts.remove(position);
}
accounts.is_empty()
} else {
false
};
if is_empty {
accounts.remove(&account.address);
}
Ok(())
}
fn unique_repr(&self) -> Result<u64, Error> {
let mut val = 0u64;
let accounts = self.accounts.read();
for acc in accounts.keys() {
val = val ^ acc.to_low_u64_be()
}
Ok(val)
}
}
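
`MemoryDirectory` has no tests of its own; a minimal sketch, written as if appended to this module, shows it standing in for the disk-backed directory using only the `KeyDirectory` methods defined above. The `SafeAccount` argument is assumed to come from `SafeAccount::create` as elsewhere in this crate.

```rust
// Hypothetical sketch, not part of the diff: exercise insert / load / remove
// without touching the filesystem.
fn memory_roundtrip(account: SafeAccount) -> Result<(), Error> {
    let dir = MemoryDirectory::default();
    let stored = dir.insert(account)?;
    assert_eq!(dir.load()?.len(), 1);
    dir.remove(&stored)?;
    assert!(dir.load()?.is_empty());
    Ok(())
}
```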

View File

@ -1,112 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Accounts Directory
use ethkey::Password;
use std::{num::NonZeroU32, path::PathBuf};
use Error;
use SafeAccount;
mod disk;
mod memory;
mod vault;
/// `VaultKeyDirectory::set_key` error
#[derive(Debug)]
pub enum SetKeyError {
/// Error is fatal and directory is probably in inconsistent state
Fatal(Error),
/// Error is non fatal, directory is reverted to pre-operation state
NonFatalOld(Error),
/// Error is non fatal, directory is consistent with new key
NonFatalNew(Error),
}
/// Vault key
#[derive(Clone, PartialEq, Eq)]
pub struct VaultKey {
/// Vault password
pub password: Password,
/// Number of iterations to produce a derived key from password
pub iterations: NonZeroU32,
}
/// Keys directory
pub trait KeyDirectory: Send + Sync {
/// Read keys from directory
fn load(&self) -> Result<Vec<SafeAccount>, Error>;
/// Insert new key to directory
fn insert(&self, account: SafeAccount) -> Result<SafeAccount, Error>;
/// Update key in the directory
fn update(&self, account: SafeAccount) -> Result<SafeAccount, Error>;
/// Remove key from directory
fn remove(&self, account: &SafeAccount) -> Result<(), Error>;
/// Get directory filesystem path, if available
fn path(&self) -> Option<&PathBuf> {
None
}
/// Return vault provider, if available
fn as_vault_provider(&self) -> Option<&dyn VaultKeyDirectoryProvider> {
None
}
/// Unique representation of directory account collection
fn unique_repr(&self) -> Result<u64, Error>;
}
/// Vaults provider
pub trait VaultKeyDirectoryProvider {
/// Create new vault with given key
fn create(&self, name: &str, key: VaultKey) -> Result<Box<dyn VaultKeyDirectory>, Error>;
/// Open existing vault with given key
fn open(&self, name: &str, key: VaultKey) -> Result<Box<dyn VaultKeyDirectory>, Error>;
/// List all vaults
fn list_vaults(&self) -> Result<Vec<String>, Error>;
/// Get vault meta
fn vault_meta(&self, name: &str) -> Result<String, Error>;
}
/// Vault directory
pub trait VaultKeyDirectory: KeyDirectory {
/// Cast to `KeyDirectory`
fn as_key_directory(&self) -> &dyn KeyDirectory;
/// Vault name
fn name(&self) -> &str;
/// Get vault key
fn key(&self) -> VaultKey;
/// Set new key for vault
fn set_key(&self, key: VaultKey) -> Result<(), SetKeyError>;
/// Get vault meta
fn meta(&self) -> String;
/// Set vault meta
fn set_meta(&self, meta: &str) -> Result<(), Error>;
}
pub use self::{
disk::{DiskKeyFileManager, KeyFileManager, RootDiskDirectory},
memory::MemoryDirectory,
vault::VaultDiskDirectory,
};
impl VaultKey {
/// Create new vault key
pub fn new(password: &Password, iterations: NonZeroU32) -> Self {
VaultKey {
password: password.clone(),
iterations: iterations,
}
}
}
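
Because every backend (disk, memory, vault) funnels through the traits above, callers can stay generic over the storage. A minimal hedged sketch, written as if appended to this module:

```rust
// Hypothetical sketch, not part of the diff: storage-agnostic helpers that
// rely only on the trait surface defined above.
fn account_count(dir: &dyn KeyDirectory) -> Result<usize, Error> {
    Ok(dir.load()?.len())
}

fn open_or_create_vault(
    provider: &dyn VaultKeyDirectoryProvider,
    name: &str,
    key: VaultKey,
) -> Result<Box<dyn VaultKeyDirectory>, Error> {
    // Try an existing vault first; fall back to creating a new one.
    provider
        .open(name, key.clone())
        .or_else(|_| provider.create(name, key))
}
```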

View File

@ -1,527 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::{
super::account::Crypto,
disk::{self, DiskDirectory, KeyFileManager},
KeyDirectory, SetKeyError, VaultKey, VaultKeyDirectory,
};
use crypto::Keccak256;
use json;
use parking_lot::Mutex;
use std::{
fs, io,
path::{Path, PathBuf},
};
use Error;
use SafeAccount;
/// Name of vault metadata file
pub const VAULT_FILE_NAME: &'static str = "vault.json";
/// Name of temporary vault metadata file
pub const VAULT_TEMP_FILE_NAME: &'static str = "vault_temp.json";
/// Vault directory implementation
pub type VaultDiskDirectory = DiskDirectory<VaultKeyFileManager>;
/// Vault key file manager
pub struct VaultKeyFileManager {
name: String,
key: VaultKey,
meta: Mutex<String>,
}
impl VaultDiskDirectory {
/// Create new vault directory with given key
pub fn create<P>(root: P, name: &str, key: VaultKey) -> Result<Self, Error>
where
P: AsRef<Path>,
{
// check that vault directory does not exist
let vault_dir_path = make_vault_dir_path(root, name, true)?;
if vault_dir_path.exists() {
return Err(Error::CreationFailed);
}
// create vault && vault file
let vault_meta = "{}";
fs::create_dir_all(&vault_dir_path)?;
if let Err(err) = create_vault_file(&vault_dir_path, &key, vault_meta) {
let _ = fs::remove_dir_all(&vault_dir_path); // can't do anything with this
return Err(err);
}
Ok(DiskDirectory::new(
vault_dir_path,
VaultKeyFileManager::new(name, key, vault_meta),
))
}
/// Open existing vault directory with given key
pub fn at<P>(root: P, name: &str, key: VaultKey) -> Result<Self, Error>
where
P: AsRef<Path>,
{
// check that vault directory exists
let vault_dir_path = make_vault_dir_path(root, name, true)?;
if !vault_dir_path.is_dir() {
return Err(Error::CreationFailed);
}
// check that passed key matches vault file
let meta = read_vault_file(&vault_dir_path, Some(&key))?;
Ok(DiskDirectory::new(
vault_dir_path,
VaultKeyFileManager::new(name, key, &meta),
))
}
/// Read vault meta without actually opening the vault
pub fn meta_at<P>(root: P, name: &str) -> Result<String, Error>
where
P: AsRef<Path>,
{
// check that vault directory exists
let vault_dir_path = make_vault_dir_path(root, name, true)?;
if !vault_dir_path.is_dir() {
return Err(Error::VaultNotFound);
}
// check that passed key matches vault file
read_vault_file(&vault_dir_path, None)
}
fn create_temp_vault(&self, key: VaultKey) -> Result<VaultDiskDirectory, Error> {
let original_path = self
.path()
.expect("self is instance of DiskDirectory; DiskDirectory always returns path; qed");
let mut path: PathBuf = original_path.clone();
let name = self.name();
path.push(name); // to jump to the next level
let mut index = 0;
loop {
let name = format!("{}_temp_{}", name, index);
path.set_file_name(&name);
if !path.exists() {
return VaultDiskDirectory::create(original_path, &name, key);
}
index += 1;
}
}
fn copy_to_vault(&self, vault: &VaultDiskDirectory) -> Result<(), Error> {
for account in self.load()? {
let filename = account.filename.clone().expect(
"self is instance of DiskDirectory; DiskDirectory fills filename in load; qed",
);
vault.insert_with_filename(account, filename, true)?;
}
Ok(())
}
fn delete(&self) -> Result<(), Error> {
let path = self
.path()
.expect("self is instance of DiskDirectory; DiskDirectory always returns path; qed");
fs::remove_dir_all(path).map_err(Into::into)
}
}
impl VaultKeyDirectory for VaultDiskDirectory {
fn as_key_directory(&self) -> &dyn KeyDirectory {
self
}
fn name(&self) -> &str {
&self.key_manager().name
}
fn key(&self) -> VaultKey {
self.key_manager().key.clone()
}
fn set_key(&self, new_key: VaultKey) -> Result<(), SetKeyError> {
let temp_vault = VaultDiskDirectory::create_temp_vault(self, new_key.clone())
.map_err(|err| SetKeyError::NonFatalOld(err))?;
let mut source_path = temp_vault
.path()
.expect(
"temp_vault is instance of DiskDirectory; DiskDirectory always returns path; qed",
)
.clone();
let mut target_path = self
.path()
.expect("self is instance of DiskDirectory; DiskDirectory always returns path; qed")
.clone();
// preserve meta
temp_vault
.set_meta(&self.meta())
.map_err(SetKeyError::NonFatalOld)?;
// jump to next fs level
source_path.push("next");
target_path.push("next");
let temp_accounts = self
.copy_to_vault(&temp_vault)
.and_then(|_| temp_vault.load())
.map_err(|err| {
// ignore error, as we already processing error
let _ = temp_vault.delete();
SetKeyError::NonFatalOld(err)
})?;
// we can't just delete the temp vault until all files are moved, because
// the original vault content has already been partially replaced
// => if an error or crash happens here, we can't do anything
for temp_account in temp_accounts {
let filename = temp_account.filename.expect(
"self is instance of DiskDirectory; DiskDirectory fills filename in load; qed",
);
source_path.set_file_name(&filename);
target_path.set_file_name(&filename);
fs::rename(&source_path, &target_path).map_err(|err| SetKeyError::Fatal(err.into()))?;
}
source_path.set_file_name(VAULT_FILE_NAME);
target_path.set_file_name(VAULT_FILE_NAME);
fs::rename(source_path, target_path).map_err(|err| SetKeyError::Fatal(err.into()))?;
temp_vault
.delete()
.map_err(|err| SetKeyError::NonFatalNew(err))
}
fn meta(&self) -> String {
self.key_manager().meta.lock().clone()
}
fn set_meta(&self, meta: &str) -> Result<(), Error> {
let key_manager = self.key_manager();
let vault_path = self
.path()
.expect("self is instance of DiskDirectory; DiskDirectory always returns path; qed");
create_vault_file(vault_path, &key_manager.key, meta)?;
*key_manager.meta.lock() = meta.to_owned();
Ok(())
}
}
impl VaultKeyFileManager {
pub fn new(name: &str, key: VaultKey, meta: &str) -> Self {
VaultKeyFileManager {
name: name.into(),
key: key,
meta: Mutex::new(meta.to_owned()),
}
}
}
impl KeyFileManager for VaultKeyFileManager {
fn read<T>(&self, filename: Option<String>, reader: T) -> Result<SafeAccount, Error>
where
T: io::Read,
{
let vault_file =
json::VaultKeyFile::load(reader).map_err(|e| Error::Custom(format!("{:?}", e)))?;
let mut safe_account =
SafeAccount::from_vault_file(&self.key.password, vault_file, filename.clone())?;
safe_account.meta = json::insert_vault_name_to_json_meta(&safe_account.meta, &self.name)
.map_err(|err| Error::Custom(format!("{:?}", err)))?;
Ok(safe_account)
}
fn write<T>(&self, mut account: SafeAccount, writer: &mut T) -> Result<(), Error>
where
T: io::Write,
{
account.meta = json::remove_vault_name_from_json_meta(&account.meta)
.map_err(|err| Error::Custom(format!("{:?}", err)))?;
let vault_file: json::VaultKeyFile =
account.into_vault_file(self.key.iterations, &self.key.password)?;
vault_file
.write(writer)
.map_err(|e| Error::Custom(format!("{:?}", e)))
}
}
/// Makes path to vault directory, checking that vault name is appropriate
fn make_vault_dir_path<P>(root: P, name: &str, check_name: bool) -> Result<PathBuf, Error>
where
P: AsRef<Path>,
{
// check vault name
if check_name && !check_vault_name(name) {
return Err(Error::InvalidVaultName);
}
let mut vault_dir_path: PathBuf = root.as_ref().into();
vault_dir_path.push(name);
Ok(vault_dir_path)
}
/// Every vault must have a unique name => we rely on the filesystem to check this
/// => the vault name must not contain any fs-special characters to avoid directory traversal
/// => we only allow alphanumeric + separator characters in vault names.
fn check_vault_name(name: &str) -> bool {
!name.is_empty()
&& name
.chars()
.all(|c| c.is_alphanumeric() || c.is_whitespace() || c == '-' || c == '_')
}
/// Vault can be empty, but still must be pluggable => we store vault password in separate file
fn create_vault_file<P>(vault_dir_path: P, key: &VaultKey, meta: &str) -> Result<(), Error>
where
P: AsRef<Path>,
{
let password_hash = key.password.as_bytes().keccak256();
let crypto = Crypto::with_plain(&password_hash, &key.password, key.iterations)?;
let vault_file_path = vault_dir_path.as_ref().join(VAULT_FILE_NAME);
let temp_vault_file_name = disk::find_unique_filename_using_random_suffix(
vault_dir_path.as_ref(),
&VAULT_TEMP_FILE_NAME,
)?;
let temp_vault_file_path = vault_dir_path.as_ref().join(&temp_vault_file_name);
// this method is used to rewrite existing vault file
// => write to temporary file first, then rename temporary file to vault file
let mut vault_file = disk::create_new_file_with_permissions_to_owner(&temp_vault_file_path)?;
let vault_file_contents = json::VaultFile {
crypto: crypto.into(),
meta: Some(meta.to_owned()),
};
vault_file_contents
.write(&mut vault_file)
.map_err(|e| Error::Custom(format!("{:?}", e)))?;
drop(vault_file);
fs::rename(&temp_vault_file_path, &vault_file_path)?;
Ok(())
}
/// When vault is opened => we must check that password matches && read metadata
fn read_vault_file<P>(vault_dir_path: P, key: Option<&VaultKey>) -> Result<String, Error>
where
P: AsRef<Path>,
{
let mut vault_file_path: PathBuf = vault_dir_path.as_ref().into();
vault_file_path.push(VAULT_FILE_NAME);
let vault_file = fs::File::open(vault_file_path)?;
let vault_file_contents =
json::VaultFile::load(vault_file).map_err(|e| Error::Custom(format!("{:?}", e)))?;
let vault_file_meta = vault_file_contents.meta.unwrap_or("{}".to_owned());
let vault_file_crypto: Crypto = vault_file_contents.crypto.into();
if let Some(key) = key {
let password_bytes = vault_file_crypto.decrypt(&key.password)?;
let password_hash = key.password.as_bytes().keccak256();
if password_hash != password_bytes.as_slice() {
return Err(Error::InvalidPassword);
}
}
Ok(vault_file_meta)
}
#[cfg(test)]
mod test {
extern crate tempdir;
use self::tempdir::TempDir;
use super::{
check_vault_name, create_vault_file, make_vault_dir_path, read_vault_file,
VaultDiskDirectory, VaultKey, VAULT_FILE_NAME,
};
use std::{fs, io::Write, num::NonZeroU32, path::PathBuf};
lazy_static! {
static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(1024).expect("1024 > 0; qed");
}
#[test]
fn check_vault_name_succeeds() {
assert!(check_vault_name("vault"));
assert!(check_vault_name("vault with spaces"));
assert!(check_vault_name("vault with tabs"));
assert!(check_vault_name("vault_with_underscores"));
assert!(check_vault_name("vault-with-dashes"));
assert!(check_vault_name("vault-with-digits-123"));
assert!(check_vault_name("vault中文名字"));
}
#[test]
fn check_vault_name_fails() {
assert!(!check_vault_name(""));
assert!(!check_vault_name("."));
assert!(!check_vault_name("*"));
assert!(!check_vault_name("../.bash_history"));
assert!(!check_vault_name("/etc/passwd"));
assert!(!check_vault_name("c:\\windows"));
}
#[test]
fn make_vault_dir_path_succeeds() {
use std::path::Path;
assert_eq!(
&make_vault_dir_path("/home/user/parity", "vault", true).unwrap(),
&Path::new("/home/user/parity/vault")
);
assert_eq!(
&make_vault_dir_path("/home/user/parity", "*bad-name*", false).unwrap(),
&Path::new("/home/user/parity/*bad-name*")
);
}
#[test]
fn make_vault_dir_path_fails() {
assert!(make_vault_dir_path("/home/user/parity", "*bad-name*", true).is_err());
}
#[test]
fn create_vault_file_succeeds() {
// given
let temp_path = TempDir::new("").unwrap();
let key = VaultKey::new(&"password".into(), *ITERATIONS);
let mut vault_dir: PathBuf = temp_path.path().into();
vault_dir.push("vault");
fs::create_dir_all(&vault_dir).unwrap();
// when
let result = create_vault_file(&vault_dir, &key, "{}");
// then
assert!(result.is_ok());
let mut vault_file_path = vault_dir.clone();
vault_file_path.push(VAULT_FILE_NAME);
assert!(vault_file_path.exists() && vault_file_path.is_file());
}
#[test]
fn read_vault_file_succeeds() {
// given
let temp_path = TempDir::new("").unwrap();
let key = VaultKey::new(&"password".into(), *ITERATIONS);
let vault_file_contents = r#"{"crypto":{"cipher":"aes-128-ctr","cipherparams":{"iv":"758696c8dc6378ab9b25bb42790da2f5"},"ciphertext":"54eb50683717d41caaeb12ea969f2c159daada5907383f26f327606a37dc7168","kdf":"pbkdf2","kdfparams":{"c":1024,"dklen":32,"prf":"hmac-sha256","salt":"3c320fa566a1a7963ac8df68a19548d27c8f40bf92ef87c84594dcd5bbc402b6"},"mac":"9e5c2314c2a0781962db85611417c614bd6756666b6b1e93840f5b6ed895f003"}}"#;
let dir: PathBuf = temp_path.path().into();
let mut vault_file_path: PathBuf = dir.clone();
vault_file_path.push(VAULT_FILE_NAME);
{
let mut vault_file = fs::File::create(vault_file_path).unwrap();
vault_file
.write_all(vault_file_contents.as_bytes())
.unwrap();
}
// when
let result = read_vault_file(&dir, Some(&key));
// then
assert!(result.is_ok());
}
#[test]
fn read_vault_file_fails() {
// given
let temp_path = TempDir::new("").unwrap();
let key = VaultKey::new(&"password1".into(), *ITERATIONS);
let dir: PathBuf = temp_path.path().into();
let mut vault_file_path: PathBuf = dir.clone();
vault_file_path.push(VAULT_FILE_NAME);
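// no vault file has been written yet, so reading must fail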
// when
let result = read_vault_file(&dir, Some(&key));
// then
assert!(result.is_err());
// and when given
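// this vault file was written with a different password than `key`, so the
// password check in `read_vault_file` must reject it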
let vault_file_contents = r#"{"crypto":{"cipher":"aes-128-ctr","cipherparams":{"iv":"0155e3690be19fbfbecabcd440aa284b"},"ciphertext":"4d6938a1f49b7782","kdf":"pbkdf2","kdfparams":{"c":1024,"dklen":32,"prf":"hmac-sha256","salt":"b6a9338a7ccd39288a86dba73bfecd9101b4f3db9c9830e7c76afdbd4f6872e5"},"mac":"16381463ea11c6eb2239a9f339c2e780516d29d234ce30ac5f166f9080b5a262"}}"#;
{
let mut vault_file = fs::File::create(vault_file_path).unwrap();
vault_file
.write_all(vault_file_contents.as_bytes())
.unwrap();
}
// when
let result = read_vault_file(&dir, Some(&key));
// then
assert!(result.is_err());
}
#[test]
fn vault_directory_can_be_created() {
// given
let temp_path = TempDir::new("").unwrap();
let key = VaultKey::new(&"password".into(), *ITERATIONS);
let dir: PathBuf = temp_path.path().into();
// when
let vault = VaultDiskDirectory::create(&dir, "vault", key.clone());
// then
assert!(vault.is_ok());
// and when
let vault = VaultDiskDirectory::at(&dir, "vault", key);
// then
assert!(vault.is_ok());
}
#[test]
fn vault_directory_cannot_be_created_if_already_exists() {
// given
let temp_path = TempDir::new("").unwrap();
let key = VaultKey::new(&"password".into(), *ITERATIONS);
let dir: PathBuf = temp_path.path().into();
let mut vault_dir = dir.clone();
vault_dir.push("vault");
fs::create_dir_all(&vault_dir).unwrap();
// when
let vault = VaultDiskDirectory::create(&dir, "vault", key);
// then
assert!(vault.is_err());
}
#[test]
fn vault_directory_cannot_be_opened_if_not_exists() {
// given
let temp_path = TempDir::new("").unwrap();
let key = VaultKey::new(&"password".into(), *ITERATIONS);
let dir: PathBuf = temp_path.path().into();
// when
let vault = VaultDiskDirectory::at(&dir, "vault", key);
// then
assert!(vault.is_err());
}
}


@@ -1,116 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crypto::publickey::DerivationError;
use std::{fmt, io::Error as IoError};
/// Account-related errors.
#[derive(Debug)]
pub enum Error {
/// IO error
Io(IoError),
/// Invalid Password
InvalidPassword,
/// Account's secret is invalid.
InvalidSecret,
/// Invalid Vault Crypto meta.
InvalidCryptoMeta,
/// Invalid Account.
InvalidAccount,
/// Invalid Message.
InvalidMessage,
/// Invalid Key File
InvalidKeyFile(String),
/// Vaults are not supported.
VaultsAreNotSupported,
/// Unsupported vault
UnsupportedVault,
/// Invalid vault name
InvalidVaultName,
/// Vault not found
VaultNotFound,
/// Account creation failed.
CreationFailed,
/// `crypto::Error`
EthCrypto(crypto::Error),
/// `crypto::publickey::Error`
EthCryptoPublicKey(crypto::publickey::Error),
/// Derivation error
Derivation(DerivationError),
/// Custom error
Custom(String),
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
let s = match *self {
Error::Io(ref err) => err.to_string(),
Error::InvalidPassword => "Invalid password".into(),
Error::InvalidSecret => "Invalid secret".into(),
Error::InvalidCryptoMeta => "Invalid encrypted metadata".into(),
Error::InvalidAccount => "Invalid account".into(),
Error::InvalidMessage => "Invalid message".into(),
Error::InvalidKeyFile(ref reason) => format!("Invalid key file: {}", reason),
Error::VaultsAreNotSupported => "Vaults are not supported".into(),
Error::UnsupportedVault => "Vault is not supported for this operation".into(),
Error::InvalidVaultName => "Invalid vault name".into(),
Error::VaultNotFound => "Vault not found".into(),
Error::CreationFailed => "Account creation failed".into(),
Error::EthCrypto(ref err) => err.to_string(),
Error::EthCryptoPublicKey(ref err) => err.to_string(),
Error::Derivation(ref err) => format!("Derivation error: {:?}", err),
Error::Custom(ref s) => s.clone(),
};
write!(f, "{}", s)
}
}
impl From<IoError> for Error {
fn from(err: IoError) -> Self {
Error::Io(err)
}
}
impl From<crypto::publickey::Error> for Error {
fn from(err: crypto::publickey::Error) -> Self {
Error::EthCryptoPublicKey(err)
}
}
impl From<crypto::Error> for Error {
fn from(err: crypto::Error) -> Self {
Error::EthCrypto(err)
}
}
impl From<crypto::error::ScryptError> for Error {
fn from(err: crypto::error::ScryptError) -> Self {
Error::EthCrypto(err.into())
}
}
impl From<crypto::error::SymmError> for Error {
fn from(err: crypto::error::SymmError) -> Self {
Error::EthCrypto(err.into())
}
}
impl From<DerivationError> for Error {
fn from(err: DerivationError) -> Self {
Error::Derivation(err)
}
}


@@ -1,42 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! ethkey reexport to make documentation look pretty.
pub use _ethkey::*;
pub use crypto::publickey::Address;
use json;
impl Into<json::H160> for Address {
fn into(self) -> json::H160 {
let a: [u8; 20] = self.into();
From::from(a)
}
}
impl From<json::H160> for Address {
fn from(json: json::H160) -> Self {
let a: [u8; 20] = json.into();
From::from(a)
}
}
impl<'a> From<&'a json::H160> for Address {
fn from(json: &'a json::H160) -> Self {
let mut a = [0u8; 20];
a.copy_from_slice(json);
From::from(a)
}
}

File diff suppressed because it is too large


@@ -1,67 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{collections::HashSet, fs, path::Path};
use accounts_dir::{DiskKeyFileManager, KeyDirectory, KeyFileManager};
use crypto::publickey::Address;
use Error;
/// Import an account from a file.
pub fn import_account(path: &Path, dst: &dyn KeyDirectory) -> Result<Address, Error> {
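// read a single key file and insert it into `dst`, unless an account with the same
// address is already present; the address is returned either way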
let key_manager = DiskKeyFileManager::default();
let existing_accounts = dst
.load()?
.into_iter()
.map(|a| a.address)
.collect::<HashSet<_>>();
let filename = path
.file_name()
.and_then(|n| n.to_str())
.map(|f| f.to_owned());
let account = fs::File::open(&path)
.map_err(Into::into)
.and_then(|file| key_manager.read(filename, file))?;
let address = account.address.clone();
if !existing_accounts.contains(&address) {
dst.insert(account)?;
}
Ok(address)
}
/// Import all accounts from one directory to the other.
pub fn import_accounts(
src: &dyn KeyDirectory,
dst: &dyn KeyDirectory,
) -> Result<Vec<Address>, Error> {
let accounts = src.load()?;
let existing_accounts = dst
.load()?
.into_iter()
.map(|a| a.address)
.collect::<HashSet<_>>();
accounts
.into_iter()
.filter(|a| !existing_accounts.contains(&a.address))
.map(|a| {
let address = a.address.clone();
dst.insert(a)?;
Ok(address)
})
.collect()
}


@@ -1,82 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use rustc_hex::{FromHex, FromHexError, ToHex};
use serde::{de::Error, Deserialize, Deserializer, Serialize, Serializer};
use std::{ops, str};
#[derive(Debug, PartialEq)]
pub struct Bytes(Vec<u8>);
impl ops::Deref for Bytes {
type Target = [u8];
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<'a> Deserialize<'a> for Bytes {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'a>,
{
let s = String::deserialize(deserializer)?;
let data = s
.from_hex()
.map_err(|e| Error::custom(format!("Invalid hex value {}", e)))?;
Ok(Bytes(data))
}
}
impl Serialize for Bytes {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
serializer.serialize_str(&self.0.to_hex())
}
}
impl str::FromStr for Bytes {
type Err = FromHexError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
s.from_hex().map(Bytes)
}
}
impl From<&'static str> for Bytes {
fn from(s: &'static str) -> Self {
s.parse().expect(&format!(
"invalid string literal for {}: '{}'",
stringify!(Self),
s
))
}
}
impl From<Vec<u8>> for Bytes {
fn from(v: Vec<u8>) -> Self {
Bytes(v)
}
}
impl From<Bytes> for Vec<u8> {
fn from(b: Bytes) -> Self {
b.0
}
}


@@ -1,112 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::{Error, H128};
use serde::{
de::{Error as SerdeError, Visitor},
Deserialize, Deserializer, Serialize, Serializer,
};
use std::fmt;
#[derive(Debug, PartialEq)]
pub enum CipherSer {
Aes128Ctr,
}
impl Serialize for CipherSer {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match *self {
CipherSer::Aes128Ctr => serializer.serialize_str("aes-128-ctr"),
}
}
}
impl<'a> Deserialize<'a> for CipherSer {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'a>,
{
deserializer.deserialize_any(CipherSerVisitor)
}
}
struct CipherSerVisitor;
impl<'a> Visitor<'a> for CipherSerVisitor {
type Value = CipherSer;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a valid cipher identifier")
}
fn visit_str<E>(self, value: &str) -> Result<Self::Value, E>
where
E: SerdeError,
{
match value {
"aes-128-ctr" => Ok(CipherSer::Aes128Ctr),
_ => Err(SerdeError::custom(Error::UnsupportedCipher)),
}
}
fn visit_string<E>(self, value: String) -> Result<Self::Value, E>
where
E: SerdeError,
{
self.visit_str(value.as_ref())
}
}
#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub struct Aes128Ctr {
pub iv: H128,
}
#[derive(Debug, PartialEq)]
pub enum CipherSerParams {
Aes128Ctr(Aes128Ctr),
}
impl Serialize for CipherSerParams {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match *self {
CipherSerParams::Aes128Ctr(ref params) => params.serialize(serializer),
}
}
}
impl<'a> Deserialize<'a> for CipherSerParams {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'a>,
{
Aes128Ctr::deserialize(deserializer)
.map(CipherSerParams::Aes128Ctr)
.map_err(|_| Error::InvalidCipherParams)
.map_err(SerdeError::custom)
}
}
#[derive(Debug, PartialEq)]
pub enum Cipher {
Aes128Ctr(Aes128Ctr),
}


@@ -1,212 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::{Bytes, Cipher, CipherSer, CipherSerParams, Kdf, KdfSer, KdfSerParams, H256};
use serde::{
de::{Error, MapAccess, Visitor},
ser::SerializeStruct,
Deserialize, Deserializer, Serialize, Serializer,
};
use serde_json;
use std::{fmt, str};
pub type CipherText = Bytes;
#[derive(Debug, PartialEq)]
pub struct Crypto {
pub cipher: Cipher,
pub ciphertext: CipherText,
pub kdf: Kdf,
pub mac: H256,
}
impl str::FromStr for Crypto {
type Err = serde_json::error::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
serde_json::from_str(s)
}
}
impl From<Crypto> for String {
fn from(c: Crypto) -> Self {
serde_json::to_string(&c)
.expect("serialization cannot fail, cause all crypto keys are strings")
}
}
enum CryptoField {
Cipher,
CipherParams,
CipherText,
Kdf,
KdfParams,
Mac,
Version,
}
impl<'a> Deserialize<'a> for CryptoField {
fn deserialize<D>(deserializer: D) -> Result<CryptoField, D::Error>
where
D: Deserializer<'a>,
{
deserializer.deserialize_any(CryptoFieldVisitor)
}
}
struct CryptoFieldVisitor;
impl<'a> Visitor<'a> for CryptoFieldVisitor {
type Value = CryptoField;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a valid crypto struct description")
}
fn visit_str<E>(self, value: &str) -> Result<Self::Value, E>
where
E: Error,
{
match value {
"cipher" => Ok(CryptoField::Cipher),
"cipherparams" => Ok(CryptoField::CipherParams),
"ciphertext" => Ok(CryptoField::CipherText),
"kdf" => Ok(CryptoField::Kdf),
"kdfparams" => Ok(CryptoField::KdfParams),
"mac" => Ok(CryptoField::Mac),
"version" => Ok(CryptoField::Version),
_ => Err(Error::custom(format!("Unknown field: '{}'", value))),
}
}
}
impl<'a> Deserialize<'a> for Crypto {
fn deserialize<D>(deserializer: D) -> Result<Crypto, D::Error>
where
D: Deserializer<'a>,
{
static FIELDS: &'static [&'static str] = &["id", "version", "crypto", "Crypto", "address"];
deserializer.deserialize_struct("Crypto", FIELDS, CryptoVisitor)
}
}
struct CryptoVisitor;
impl<'a> Visitor<'a> for CryptoVisitor {
type Value = Crypto;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a valid vault crypto object")
}
fn visit_map<V>(self, mut visitor: V) -> Result<Self::Value, V::Error>
where
V: MapAccess<'a>,
{
let mut cipher = None;
let mut cipherparams = None;
let mut ciphertext = None;
let mut kdf = None;
let mut kdfparams = None;
let mut mac = None;
loop {
match visitor.next_key()? {
Some(CryptoField::Cipher) => {
cipher = Some(visitor.next_value()?);
}
Some(CryptoField::CipherParams) => {
cipherparams = Some(visitor.next_value()?);
}
Some(CryptoField::CipherText) => {
ciphertext = Some(visitor.next_value()?);
}
Some(CryptoField::Kdf) => {
kdf = Some(visitor.next_value()?);
}
Some(CryptoField::KdfParams) => {
kdfparams = Some(visitor.next_value()?);
}
Some(CryptoField::Mac) => {
mac = Some(visitor.next_value()?);
}
// skip not required version field (it appears in pyethereum generated keystores)
Some(CryptoField::Version) => visitor.next_value().unwrap_or(()),
None => {
break;
}
}
}
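// cipher/kdf identifiers and their params arrive as separate JSON fields;
// pair each identifier with its matching params and reject mismatched combinations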
let cipher = match (cipher, cipherparams) {
(Some(CipherSer::Aes128Ctr), Some(CipherSerParams::Aes128Ctr(params))) => {
Cipher::Aes128Ctr(params)
}
(None, _) => return Err(V::Error::missing_field("cipher")),
(Some(_), None) => return Err(V::Error::missing_field("cipherparams")),
};
let ciphertext = ciphertext.ok_or_else(|| V::Error::missing_field("ciphertext"))?;
let kdf = match (kdf, kdfparams) {
(Some(KdfSer::Pbkdf2), Some(KdfSerParams::Pbkdf2(params))) => Kdf::Pbkdf2(params),
(Some(KdfSer::Scrypt), Some(KdfSerParams::Scrypt(params))) => Kdf::Scrypt(params),
(Some(_), Some(_)) => return Err(V::Error::custom("Invalid kdfparams")),
(None, _) => return Err(V::Error::missing_field("kdf")),
(Some(_), None) => return Err(V::Error::missing_field("kdfparams")),
};
let mac = mac.ok_or_else(|| V::Error::missing_field("mac"))?;
let result = Crypto {
cipher: cipher,
ciphertext: ciphertext,
kdf: kdf,
mac: mac,
};
Ok(result)
}
}
impl Serialize for Crypto {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut crypto = serializer.serialize_struct("Crypto", 6)?;
match self.cipher {
Cipher::Aes128Ctr(ref params) => {
crypto.serialize_field("cipher", &CipherSer::Aes128Ctr)?;
crypto.serialize_field("cipherparams", params)?;
}
}
crypto.serialize_field("ciphertext", &self.ciphertext)?;
match self.kdf {
Kdf::Pbkdf2(ref params) => {
crypto.serialize_field("kdf", &KdfSer::Pbkdf2)?;
crypto.serialize_field("kdfparams", params)?;
}
Kdf::Scrypt(ref params) => {
crypto.serialize_field("kdf", &KdfSer::Scrypt)?;
crypto.serialize_field("kdfparams", params)?;
}
}
crypto.serialize_field("mac", &self.mac)?;
crypto.end()
}
}


@@ -1,50 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::fmt;
#[derive(Debug, PartialEq)]
pub enum Error {
UnsupportedCipher,
InvalidCipherParams,
UnsupportedKdf,
InvalidUuid,
UnsupportedVersion,
InvalidCiphertext,
InvalidH256,
InvalidPrf,
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match *self {
Error::InvalidUuid => write!(f, "Invalid Uuid"),
Error::UnsupportedVersion => write!(f, "Unsupported version"),
Error::UnsupportedKdf => write!(f, "Unsupported kdf"),
Error::InvalidCiphertext => write!(f, "Invalid ciphertext"),
Error::UnsupportedCipher => write!(f, "Unsupported cipher"),
Error::InvalidCipherParams => write!(f, "Invalid cipher params"),
Error::InvalidH256 => write!(f, "Invalid hash"),
Error::InvalidPrf => write!(f, "Invalid prf"),
}
}
}
impl Into<String> for Error {
fn into(self) -> String {
format!("{}", self)
}
}


@@ -1,135 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::Error;
use rustc_hex::{FromHex, ToHex};
use serde::{
de::{Error as SerdeError, Visitor},
Deserialize, Deserializer, Serialize, Serializer,
};
use std::{fmt, ops, str};
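// generates a fixed-size byte-array newtype with hex-string (de)serialization,
// `FromStr` parsing and conversions to/from `[u8; $size]`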
macro_rules! impl_hash {
($name: ident, $size: expr) => {
pub struct $name([u8; $size]);
impl fmt::Debug for $name {
fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
let self_ref: &[u8] = &self.0;
write!(f, "{:?}", self_ref)
}
}
impl PartialEq for $name {
fn eq(&self, other: &Self) -> bool {
let self_ref: &[u8] = &self.0;
let other_ref: &[u8] = &other.0;
self_ref == other_ref
}
}
impl ops::Deref for $name {
type Target = [u8];
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl Serialize for $name {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
serializer.serialize_str(&self.0.to_hex())
}
}
impl<'a> Deserialize<'a> for $name {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'a>,
{
struct HashVisitor;
impl<'b> Visitor<'b> for HashVisitor {
type Value = $name;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a hex-encoded {}", stringify!($name))
}
fn visit_str<E>(self, value: &str) -> Result<Self::Value, E>
where
E: SerdeError,
{
value.parse().map_err(SerdeError::custom)
}
fn visit_string<E>(self, value: String) -> Result<Self::Value, E>
where
E: SerdeError,
{
self.visit_str(value.as_ref())
}
}
deserializer.deserialize_any(HashVisitor)
}
}
impl str::FromStr for $name {
type Err = Error;
fn from_str(value: &str) -> Result<Self, Self::Err> {
match value.from_hex() {
Ok(ref hex) if hex.len() == $size => {
let mut hash = [0u8; $size];
hash.clone_from_slice(hex);
Ok($name(hash))
}
_ => Err(Error::InvalidH256),
}
}
}
impl From<&'static str> for $name {
fn from(s: &'static str) -> Self {
s.parse().expect(&format!(
"invalid string literal for {}: '{}'",
stringify!($name),
s
))
}
}
impl From<[u8; $size]> for $name {
fn from(bytes: [u8; $size]) -> Self {
$name(bytes)
}
}
impl Into<[u8; $size]> for $name {
fn into(self) -> [u8; $size] {
self.0
}
}
};
}
impl_hash!(H128, 16);
impl_hash!(H160, 20);
impl_hash!(H256, 32);


@@ -1,179 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Universally unique identifier.
use super::Error;
use rustc_hex::{FromHex, ToHex};
use serde::{
de::{Error as SerdeError, Visitor},
Deserialize, Deserializer, Serialize, Serializer,
};
use std::{fmt, str};
/// Universally unique identifier.
#[derive(Debug, PartialEq)]
pub struct Uuid([u8; 16]);
impl From<[u8; 16]> for Uuid {
fn from(uuid: [u8; 16]) -> Self {
Uuid(uuid)
}
}
impl<'a> Into<String> for &'a Uuid {
fn into(self) -> String {
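// hex-encode the 16 bytes as the canonical hyphenated 8-4-4-4-12 form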
let d1 = &self.0[0..4];
let d2 = &self.0[4..6];
let d3 = &self.0[6..8];
let d4 = &self.0[8..10];
let d5 = &self.0[10..16];
[d1, d2, d3, d4, d5]
.iter()
.map(|d| d.to_hex())
.collect::<Vec<String>>()
.join("-")
}
}
impl Into<String> for Uuid {
fn into(self) -> String {
Into::into(&self)
}
}
impl Into<[u8; 16]> for Uuid {
fn into(self) -> [u8; 16] {
self.0
}
}
impl fmt::Display for Uuid {
fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
let s: String = (self as &Uuid).into();
write!(f, "{}", s)
}
}
fn copy_into(from: &str, into: &mut [u8]) -> Result<(), Error> {
let from = from.from_hex().map_err(|_| Error::InvalidUuid)?;
if from.len() != into.len() {
return Err(Error::InvalidUuid);
}
into.copy_from_slice(&from);
Ok(())
}
impl str::FromStr for Uuid {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let parts: Vec<&str> = s.split("-").collect();
if parts.len() != 5 {
return Err(Error::InvalidUuid);
}
let mut uuid = [0u8; 16];
copy_into(parts[0], &mut uuid[0..4])?;
copy_into(parts[1], &mut uuid[4..6])?;
copy_into(parts[2], &mut uuid[6..8])?;
copy_into(parts[3], &mut uuid[8..10])?;
copy_into(parts[4], &mut uuid[10..16])?;
Ok(Uuid(uuid))
}
}
impl From<&'static str> for Uuid {
fn from(s: &'static str) -> Self {
s.parse().expect(&format!(
"invalid string literal for {}: '{}'",
stringify!(Self),
s
))
}
}
impl Serialize for Uuid {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let s: String = self.into();
serializer.serialize_str(&s)
}
}
impl<'a> Deserialize<'a> for Uuid {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'a>,
{
deserializer.deserialize_any(UuidVisitor)
}
}
struct UuidVisitor;
impl<'a> Visitor<'a> for UuidVisitor {
type Value = Uuid;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a valid hex-encoded UUID")
}
fn visit_str<E>(self, value: &str) -> Result<Self::Value, E>
where
E: SerdeError,
{
value.parse().map_err(SerdeError::custom)
}
fn visit_string<E>(self, value: String) -> Result<Self::Value, E>
where
E: SerdeError,
{
self.visit_str(value.as_ref())
}
}
#[cfg(test)]
mod tests {
use super::Uuid;
#[test]
fn uuid_from_str() {
let uuid: Uuid = "3198bc9c-6672-5ab3-d995-4942343ae5b6".into();
assert_eq!(
uuid,
Uuid::from([
0x31, 0x98, 0xbc, 0x9c, 0x66, 0x72, 0x5a, 0xb3, 0xd9, 0x95, 0x49, 0x42, 0x34, 0x3a,
0xe5, 0xb6
])
);
}
#[test]
fn uuid_from_and_to_str() {
let from = "3198bc9c-6672-5ab3-d995-4942343ae5b6";
let uuid: Uuid = from.into();
let to: String = uuid.into();
assert_eq!(from, &to);
}
}


@@ -1,186 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::{Bytes, Error};
use serde::{
de::{Error as SerdeError, Visitor},
Deserialize, Deserializer, Serialize, Serializer,
};
use std::{fmt, num::NonZeroU32};
#[derive(Debug, PartialEq)]
pub enum KdfSer {
Pbkdf2,
Scrypt,
}
impl Serialize for KdfSer {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match *self {
KdfSer::Pbkdf2 => serializer.serialize_str("pbkdf2"),
KdfSer::Scrypt => serializer.serialize_str("scrypt"),
}
}
}
impl<'a> Deserialize<'a> for KdfSer {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'a>,
{
deserializer.deserialize_any(KdfSerVisitor)
}
}
struct KdfSerVisitor;
impl<'a> Visitor<'a> for KdfSerVisitor {
type Value = KdfSer;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a kdf algorithm identifier")
}
fn visit_str<E>(self, value: &str) -> Result<Self::Value, E>
where
E: SerdeError,
{
match value {
"pbkdf2" => Ok(KdfSer::Pbkdf2),
"scrypt" => Ok(KdfSer::Scrypt),
_ => Err(SerdeError::custom(Error::UnsupportedKdf)),
}
}
fn visit_string<E>(self, value: String) -> Result<Self::Value, E>
where
E: SerdeError,
{
self.visit_str(value.as_ref())
}
}
#[derive(Debug, PartialEq)]
pub enum Prf {
HmacSha256,
}
impl Serialize for Prf {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match *self {
Prf::HmacSha256 => serializer.serialize_str("hmac-sha256"),
}
}
}
impl<'a> Deserialize<'a> for Prf {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'a>,
{
deserializer.deserialize_any(PrfVisitor)
}
}
struct PrfVisitor;
impl<'a> Visitor<'a> for PrfVisitor {
type Value = Prf;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a prf algorithm identifier")
}
fn visit_str<E>(self, value: &str) -> Result<Self::Value, E>
where
E: SerdeError,
{
match value {
"hmac-sha256" => Ok(Prf::HmacSha256),
_ => Err(SerdeError::custom(Error::InvalidPrf)),
}
}
fn visit_string<E>(self, value: String) -> Result<Self::Value, E>
where
E: SerdeError,
{
self.visit_str(value.as_ref())
}
}
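// PBKDF2 parameters: `c` = iteration count, `dklen` = derived key length in bytes,
// `prf` = pseudo-random function, `salt` = random salt bytes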
#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub struct Pbkdf2 {
pub c: NonZeroU32,
pub dklen: u32,
pub prf: Prf,
pub salt: Bytes,
}
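// scrypt parameters: `n` = CPU/memory cost, `r` = block size, `p` = parallelization
// factor, `dklen` = derived key length in bytes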
#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub struct Scrypt {
pub dklen: u32,
pub p: u32,
pub n: u32,
pub r: u32,
pub salt: Bytes,
}
#[derive(Debug, PartialEq)]
pub enum KdfSerParams {
Pbkdf2(Pbkdf2),
Scrypt(Scrypt),
}
impl Serialize for KdfSerParams {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match *self {
KdfSerParams::Pbkdf2(ref params) => params.serialize(serializer),
KdfSerParams::Scrypt(ref params) => params.serialize(serializer),
}
}
}
impl<'a> Deserialize<'a> for KdfSerParams {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'a>,
{
use serde_json::{from_value, Value};
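// kdfparams are untagged JSON: try to decode them as Pbkdf2 first, then fall back
// to Scrypt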
let v: Value = Deserialize::deserialize(deserializer)?;
from_value(v.clone())
.map(KdfSerParams::Pbkdf2)
.or_else(|_| from_value(v).map(KdfSerParams::Scrypt))
.map_err(|_| D::Error::custom("Invalid KDF algorithm"))
}
}
#[derive(Debug, PartialEq)]
pub enum Kdf {
Pbkdf2(Pbkdf2),
Scrypt(Scrypt),
}


@@ -1,350 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::{Crypto, Uuid, Version, H160};
use serde::{
de::{DeserializeOwned, Error, MapAccess, Visitor},
Deserialize, Deserializer, Serialize, Serializer,
};
use serde_json;
use std::{
fmt,
io::{Read, Write},
};
/// Public opaque type representing serializable `KeyFile`.
#[derive(Debug, PartialEq)]
pub struct OpaqueKeyFile {
key_file: KeyFile,
}
impl Serialize for OpaqueKeyFile {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
self.key_file.serialize(serializer)
}
}
impl<T> From<T> for OpaqueKeyFile
where
T: Into<KeyFile>,
{
fn from(val: T) -> Self {
OpaqueKeyFile {
key_file: val.into(),
}
}
}
#[derive(Debug, PartialEq, Serialize)]
pub struct KeyFile {
pub id: Uuid,
pub version: Version,
pub crypto: Crypto,
pub address: Option<H160>,
pub name: Option<String>,
pub meta: Option<String>,
}
enum KeyFileField {
Id,
Version,
Crypto,
Address,
Name,
Meta,
}
impl<'a> Deserialize<'a> for KeyFileField {
fn deserialize<D>(deserializer: D) -> Result<KeyFileField, D::Error>
where
D: Deserializer<'a>,
{
deserializer.deserialize_any(KeyFileFieldVisitor)
}
}
struct KeyFileFieldVisitor;
impl<'a> Visitor<'a> for KeyFileFieldVisitor {
type Value = KeyFileField;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a valid key file field")
}
fn visit_str<E>(self, value: &str) -> Result<Self::Value, E>
where
E: Error,
{
match value {
"id" => Ok(KeyFileField::Id),
"version" => Ok(KeyFileField::Version),
"crypto" => Ok(KeyFileField::Crypto),
"Crypto" => Ok(KeyFileField::Crypto),
"address" => Ok(KeyFileField::Address),
"name" => Ok(KeyFileField::Name),
"meta" => Ok(KeyFileField::Meta),
_ => Err(Error::custom(format!("Unknown field: '{}'", value))),
}
}
}
impl<'a> Deserialize<'a> for KeyFile {
fn deserialize<D>(deserializer: D) -> Result<KeyFile, D::Error>
where
D: Deserializer<'a>,
{
static FIELDS: &'static [&'static str] = &["id", "version", "crypto", "Crypto", "address"];
deserializer.deserialize_struct("KeyFile", FIELDS, KeyFileVisitor)
}
}
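/// Treat a missing value or JSON `null` as `None`; anything else is deserialized,
/// falling back to `None` if deserialization fails.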
fn none_if_empty<'a, T>(v: Option<serde_json::Value>) -> Option<T>
where
T: DeserializeOwned,
{
v.and_then(|v| {
if v.is_null() {
None
} else {
serde_json::from_value(v).ok()
}
})
}
struct KeyFileVisitor;
impl<'a> Visitor<'a> for KeyFileVisitor {
type Value = KeyFile;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a valid key object")
}
fn visit_map<V>(self, mut visitor: V) -> Result<Self::Value, V::Error>
where
V: MapAccess<'a>,
{
let mut id = None;
let mut version = None;
let mut crypto = None;
let mut address = None;
let mut name = None;
let mut meta = None;
loop {
match visitor.next_key()? {
Some(KeyFileField::Id) => {
id = Some(visitor.next_value()?);
}
Some(KeyFileField::Version) => {
version = Some(visitor.next_value()?);
}
Some(KeyFileField::Crypto) => {
crypto = Some(visitor.next_value()?);
}
Some(KeyFileField::Address) => {
address = Some(visitor.next_value()?);
}
Some(KeyFileField::Name) => name = none_if_empty(visitor.next_value().ok()),
Some(KeyFileField::Meta) => meta = none_if_empty(visitor.next_value().ok()),
None => {
break;
}
}
}
let id = id.ok_or_else(|| V::Error::missing_field("id"))?;
let version = version.ok_or_else(|| V::Error::missing_field("version"))?;
let crypto = crypto.ok_or_else(|| V::Error::missing_field("crypto"))?;
let result = KeyFile {
id: id,
version: version,
crypto: crypto,
address: address,
name: name,
meta: meta,
};
Ok(result)
}
}
impl KeyFile {
pub fn load<R>(reader: R) -> Result<Self, serde_json::Error>
where
R: Read,
{
serde_json::from_reader(reader)
}
pub fn write<W>(&self, writer: &mut W) -> Result<(), serde_json::Error>
where
W: Write,
{
serde_json::to_writer(writer, self)
}
}
#[cfg(test)]
mod tests {
use json::{Aes128Ctr, Cipher, Crypto, Kdf, KeyFile, Scrypt, Uuid, Version};
use serde_json;
use std::str::FromStr;
#[test]
fn basic_keyfile() {
let json = r#"
{
"address": "6edddfc6349aff20bc6467ccf276c5b52487f7a8",
"crypto": {
"cipher": "aes-128-ctr",
"ciphertext": "7203da0676d141b138cd7f8e1a4365f59cc1aa6978dc5443f364ca943d7cb4bc",
"cipherparams": {
"iv": "b5a7ec855ec9e2c405371356855fec83"
},
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "1e8642fdf1f87172492c1412fc62f8db75d796cdfa9c53c3f2b11e44a2a1b209"
},
"mac": "46325c5d4e8c991ad2683d525c7854da387138b6ca45068985aa4959fa2b8c8f"
},
"id": "8777d9f6-7860-4b9b-88b7-0b57ee6b3a73",
"version": 3,
"name": "Test",
"meta": "{}"
}"#;
let expected = KeyFile {
id: Uuid::from_str("8777d9f6-7860-4b9b-88b7-0b57ee6b3a73").unwrap(),
version: Version::V3,
address: Some("6edddfc6349aff20bc6467ccf276c5b52487f7a8".into()),
crypto: Crypto {
cipher: Cipher::Aes128Ctr(Aes128Ctr {
iv: "b5a7ec855ec9e2c405371356855fec83".into(),
}),
ciphertext: "7203da0676d141b138cd7f8e1a4365f59cc1aa6978dc5443f364ca943d7cb4bc"
.into(),
kdf: Kdf::Scrypt(Scrypt {
n: 262144,
dklen: 32,
p: 1,
r: 8,
salt: "1e8642fdf1f87172492c1412fc62f8db75d796cdfa9c53c3f2b11e44a2a1b209".into(),
}),
mac: "46325c5d4e8c991ad2683d525c7854da387138b6ca45068985aa4959fa2b8c8f".into(),
},
name: Some("Test".to_owned()),
meta: Some("{}".to_owned()),
};
let keyfile: KeyFile = serde_json::from_str(json).unwrap();
assert_eq!(keyfile, expected);
}
#[test]
fn capital_crypto_keyfile() {
let json = r#"
{
"address": "6edddfc6349aff20bc6467ccf276c5b52487f7a8",
"Crypto": {
"cipher": "aes-128-ctr",
"ciphertext": "7203da0676d141b138cd7f8e1a4365f59cc1aa6978dc5443f364ca943d7cb4bc",
"cipherparams": {
"iv": "b5a7ec855ec9e2c405371356855fec83"
},
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "1e8642fdf1f87172492c1412fc62f8db75d796cdfa9c53c3f2b11e44a2a1b209"
},
"mac": "46325c5d4e8c991ad2683d525c7854da387138b6ca45068985aa4959fa2b8c8f"
},
"id": "8777d9f6-7860-4b9b-88b7-0b57ee6b3a73",
"version": 3
}"#;
let expected = KeyFile {
id: "8777d9f6-7860-4b9b-88b7-0b57ee6b3a73".into(),
version: Version::V3,
address: Some("6edddfc6349aff20bc6467ccf276c5b52487f7a8".into()),
crypto: Crypto {
cipher: Cipher::Aes128Ctr(Aes128Ctr {
iv: "b5a7ec855ec9e2c405371356855fec83".into(),
}),
ciphertext: "7203da0676d141b138cd7f8e1a4365f59cc1aa6978dc5443f364ca943d7cb4bc"
.into(),
kdf: Kdf::Scrypt(Scrypt {
n: 262144,
dklen: 32,
p: 1,
r: 8,
salt: "1e8642fdf1f87172492c1412fc62f8db75d796cdfa9c53c3f2b11e44a2a1b209".into(),
}),
mac: "46325c5d4e8c991ad2683d525c7854da387138b6ca45068985aa4959fa2b8c8f".into(),
},
name: None,
meta: None,
};
let keyfile: KeyFile = serde_json::from_str(json).unwrap();
assert_eq!(keyfile, expected);
}
#[test]
fn to_and_from_json() {
let file = KeyFile {
id: "8777d9f6-7860-4b9b-88b7-0b57ee6b3a73".into(),
version: Version::V3,
address: Some("6edddfc6349aff20bc6467ccf276c5b52487f7a8".into()),
crypto: Crypto {
cipher: Cipher::Aes128Ctr(Aes128Ctr {
iv: "b5a7ec855ec9e2c405371356855fec83".into(),
}),
ciphertext: "7203da0676d141b138cd7f8e1a4365f59cc1aa6978dc5443f364ca943d7cb4bc"
.into(),
kdf: Kdf::Scrypt(Scrypt {
n: 262144,
dklen: 32,
p: 1,
r: 8,
salt: "1e8642fdf1f87172492c1412fc62f8db75d796cdfa9c53c3f2b11e44a2a1b209".into(),
}),
mac: "46325c5d4e8c991ad2683d525c7854da387138b6ca45068985aa4959fa2b8c8f".into(),
},
name: Some("Test".to_owned()),
meta: None,
};
let serialized = serde_json::to_string(&file).unwrap();
println!("{}", serialized);
let deserialized = serde_json::from_str(&serialized).unwrap();
assert_eq!(file, deserialized);
}
}


@@ -1,48 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! JSON serialization formats for key files, vaults and pre-sale wallets.
mod bytes;
mod cipher;
mod crypto;
mod error;
mod hash;
mod id;
mod kdf;
mod key_file;
mod presale;
mod vault_file;
mod vault_key_file;
mod version;
pub use self::{
bytes::Bytes,
cipher::{Aes128Ctr, Cipher, CipherSer, CipherSerParams},
crypto::{CipherText, Crypto},
error::Error,
hash::{H128, H160, H256},
id::Uuid,
kdf::{Kdf, KdfSer, KdfSerParams, Pbkdf2, Prf, Scrypt},
key_file::{KeyFile, OpaqueKeyFile},
presale::{Encseed, PresaleWallet},
vault_file::VaultFile,
vault_key_file::{
insert_vault_name_to_json_meta, remove_vault_name_from_json_meta, VaultKeyFile,
VaultKeyMeta,
},
version::Version,
};


@@ -1,105 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::Crypto;
use serde_json;
use std::io::{Read, Write};
/// Vault meta file
#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub struct VaultFile {
/// Vault password, encrypted with vault password
pub crypto: Crypto,
/// Vault metadata string
pub meta: Option<String>,
}
impl VaultFile {
pub fn load<R>(reader: R) -> Result<Self, serde_json::Error>
where
R: Read,
{
serde_json::from_reader(reader)
}
pub fn write<W>(&self, writer: &mut W) -> Result<(), serde_json::Error>
where
W: Write,
{
serde_json::to_writer(writer, self)
}
}
#[cfg(test)]
mod test {
use json::{Aes128Ctr, Cipher, Crypto, Kdf, Pbkdf2, Prf, VaultFile};
use serde_json;
use std::num::NonZeroU32;
lazy_static! {
static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(1024).expect("1024 > 0; qed");
}
#[test]
fn to_and_from_json() {
let file = VaultFile {
crypto: Crypto {
cipher: Cipher::Aes128Ctr(Aes128Ctr {
iv: "0155e3690be19fbfbecabcd440aa284b".into(),
}),
ciphertext: "4d6938a1f49b7782".into(),
kdf: Kdf::Pbkdf2(Pbkdf2 {
c: *ITERATIONS,
dklen: 32,
prf: Prf::HmacSha256,
salt: "b6a9338a7ccd39288a86dba73bfecd9101b4f3db9c9830e7c76afdbd4f6872e5".into(),
}),
mac: "16381463ea11c6eb2239a9f339c2e780516d29d234ce30ac5f166f9080b5a262".into(),
},
meta: Some("{}".into()),
};
let serialized = serde_json::to_string(&file).unwrap();
let deserialized = serde_json::from_str(&serialized).unwrap();
assert_eq!(file, deserialized);
}
#[test]
fn to_and_from_json_no_meta() {
let file = VaultFile {
crypto: Crypto {
cipher: Cipher::Aes128Ctr(Aes128Ctr {
iv: "0155e3690be19fbfbecabcd440aa284b".into(),
}),
ciphertext: "4d6938a1f49b7782".into(),
kdf: Kdf::Pbkdf2(Pbkdf2 {
c: *ITERATIONS,
dklen: 32,
prf: Prf::HmacSha256,
salt: "b6a9338a7ccd39288a86dba73bfecd9101b4f3db9c9830e7c76afdbd4f6872e5".into(),
}),
mac: "16381463ea11c6eb2239a9f339c2e780516d29d234ce30ac5f166f9080b5a262".into(),
},
meta: None,
};
let serialized = serde_json::to_string(&file).unwrap();
let deserialized = serde_json::from_str(&serialized).unwrap();
assert_eq!(file, deserialized);
}
}


@@ -1,205 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::{Crypto, Uuid, Version, H160};
use serde::de::Error;
use serde_json::{self, error, value::Value};
use std::io::{Read, Write};
/// Meta key name for vault field
const VAULT_NAME_META_KEY: &'static str = "vault";
/// Key file as stored in vaults
#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub struct VaultKeyFile {
/// Key id
pub id: Uuid,
/// Key version
pub version: Version,
/// Secret, encrypted with account password
pub crypto: Crypto,
/// Serialized `VaultKeyMeta`, encrypted with vault password
pub metacrypto: Crypto,
}
/// Data, stored in `VaultKeyFile::metacrypto`
#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub struct VaultKeyMeta {
/// Key address
pub address: H160,
/// Key name
pub name: Option<String>,
/// Key metadata
pub meta: Option<String>,
}
/// Insert vault name to the JSON meta field
pub fn insert_vault_name_to_json_meta(
meta: &str,
vault_name: &str,
) -> Result<String, error::Error> {
let mut meta = if meta.is_empty() {
Value::Object(serde_json::Map::new())
} else {
serde_json::from_str(meta)?
};
if let Some(meta_obj) = meta.as_object_mut() {
meta_obj.insert(
VAULT_NAME_META_KEY.to_owned(),
Value::String(vault_name.to_owned()),
);
serde_json::to_string(meta_obj)
} else {
Err(error::Error::custom(
"Meta is expected to be a serialized JSON object",
))
}
}
/// Remove vault name from the JSON meta field
pub fn remove_vault_name_from_json_meta(meta: &str) -> Result<String, error::Error> {
let mut meta = if meta.is_empty() {
Value::Object(serde_json::Map::new())
} else {
serde_json::from_str(meta)?
};
if let Some(meta_obj) = meta.as_object_mut() {
meta_obj.remove(VAULT_NAME_META_KEY);
serde_json::to_string(meta_obj)
} else {
Err(error::Error::custom(
"Meta is expected to be a serialized JSON object",
))
}
}
impl VaultKeyFile {
pub fn load<R>(reader: R) -> Result<Self, serde_json::Error>
where
R: Read,
{
serde_json::from_reader(reader)
}
pub fn write<W>(&self, writer: &mut W) -> Result<(), serde_json::Error>
where
W: Write,
{
serde_json::to_writer(writer, self)
}
}
impl VaultKeyMeta {
pub fn load(bytes: &[u8]) -> Result<Self, serde_json::Error> {
serde_json::from_slice(&bytes)
}
pub fn write(&self) -> Result<Vec<u8>, serde_json::Error> {
let s = serde_json::to_string(self)?;
Ok(s.as_bytes().into())
}
}
#[cfg(test)]
mod test {
use json::{
insert_vault_name_to_json_meta, remove_vault_name_from_json_meta, Aes128Ctr, Cipher,
Crypto, Kdf, Pbkdf2, Prf, VaultKeyFile, Version,
};
use serde_json;
use std::num::NonZeroU32;
lazy_static! {
static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(10240).expect("10240 > 0; qed");
}
#[test]
fn to_and_from_json() {
let file = VaultKeyFile {
id: "08d82c39-88e3-7a71-6abb-89c8f36c3ceb".into(),
version: Version::V3,
crypto: Crypto {
cipher: Cipher::Aes128Ctr(Aes128Ctr {
iv: "fecb968bbc8c7e608a89ebcfe53a41d0".into(),
}),
ciphertext: "4befe0a66d9a4b6fec8e39eb5c90ac5dafdeaab005fff1af665fd1f9af925c91".into(),
kdf: Kdf::Pbkdf2(Pbkdf2 {
c: *ITERATIONS,
dklen: 32,
prf: Prf::HmacSha256,
salt: "f17731e84ecac390546692dbd4ccf6a3a2720dc9652984978381e61c28a471b2".into(),
}),
mac: "7c7c3daafb24cf11eb3079dfb9064a11e92f309a0ee1dd676486bab119e686b7".into(),
},
metacrypto: Crypto {
cipher: Cipher::Aes128Ctr(Aes128Ctr {
iv: "9c353fb3f894fc05946843616c26bb3f".into(),
}),
ciphertext: "fef0d113d7576c1702daf380ad6f4c5408389e57991cae2a174facd74bd549338e1014850bddbab7eb486ff5f5c9c5532800c6a6d4db2be2212cd5cd3769244ab230e1f369e8382a9e6d7c0a".into(),
kdf: Kdf::Pbkdf2(Pbkdf2 {
c: *ITERATIONS,
dklen: 32,
prf: Prf::HmacSha256,
salt: "aca82865174a82249a198814b263f43a631f272cbf7ed329d0f0839d259c652a".into(),
}),
mac: "b7413946bfe459d2801268dc331c04b3a84d92be11ef4dd9a507f895e8d9b5bd".into(),
}
};
let serialized = serde_json::to_string(&file).unwrap();
let deserialized = serde_json::from_str(&serialized).unwrap();
assert_eq!(file, deserialized);
}
#[test]
fn vault_name_inserted_to_json_meta() {
assert_eq!(
insert_vault_name_to_json_meta(r#""#, "MyVault").unwrap(),
r#"{"vault":"MyVault"}"#
);
assert_eq!(
insert_vault_name_to_json_meta(r#"{"tags":["kalabala"]}"#, "MyVault").unwrap(),
r#"{"tags":["kalabala"],"vault":"MyVault"}"#
);
}
#[test]
fn vault_name_not_inserted_to_json_meta() {
assert!(insert_vault_name_to_json_meta(r#"///3533"#, "MyVault").is_err());
assert!(insert_vault_name_to_json_meta(r#""string""#, "MyVault").is_err());
}
#[test]
fn vault_name_removed_from_json_meta() {
assert_eq!(
remove_vault_name_from_json_meta(r#"{"vault":"MyVault"}"#).unwrap(),
r#"{}"#
);
assert_eq!(
remove_vault_name_from_json_meta(r#"{"tags":["kalabala"],"vault":"MyVault"}"#).unwrap(),
r#"{"tags":["kalabala"]}"#
);
}
#[test]
fn vault_name_not_removed_from_json_meta() {
assert!(remove_vault_name_from_json_meta(r#"///3533"#).is_err());
assert!(remove_vault_name_from_json_meta(r#""string""#).is_err());
}
}


@@ -1,67 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::Error;
use serde::{
de::{Error as SerdeError, Visitor},
Deserialize, Deserializer, Serialize, Serializer,
};
use std::fmt;
#[derive(Debug, PartialEq)]
pub enum Version {
V3,
}
impl Serialize for Version {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match *self {
Version::V3 => serializer.serialize_u64(3),
}
}
}
impl<'a> Deserialize<'a> for Version {
fn deserialize<D>(deserializer: D) -> Result<Version, D::Error>
where
D: Deserializer<'a>,
{
deserializer.deserialize_any(VersionVisitor)
}
}
struct VersionVisitor;
impl<'a> Visitor<'a> for VersionVisitor {
type Value = Version;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a valid key version identifier")
}
fn visit_u64<E>(self, value: u64) -> Result<Self::Value, E>
where
E: SerdeError,
{
match value {
3 => Ok(Version::V3),
_ => Err(SerdeError::custom(Error::UnsupportedVersion)),
}
}
}


@@ -1,107 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crypto::{
self, pbkdf2,
publickey::{Address, KeyPair, Secret},
Keccak256,
};
use ethkey::Password;
use json;
use std::{fs, num::NonZeroU32, path::Path};
use Error;
/// Pre-sale wallet.
pub struct PresaleWallet {
iv: [u8; 16],
ciphertext: Vec<u8>,
address: Address,
}
impl From<json::PresaleWallet> for PresaleWallet {
fn from(wallet: json::PresaleWallet) -> Self {
let mut iv = [0u8; 16];
iv.copy_from_slice(&wallet.encseed[..16]);
let mut ciphertext = vec![];
ciphertext.extend_from_slice(&wallet.encseed[16..]);
PresaleWallet {
iv: iv,
ciphertext: ciphertext,
address: Address::from(wallet.address),
}
}
}
impl PresaleWallet {
/// Open a pre-sale wallet.
pub fn open<P>(path: P) -> Result<Self, Error>
where
P: AsRef<Path>,
{
let file = fs::File::open(path)?;
let presale =
json::PresaleWallet::load(file).map_err(|e| Error::InvalidKeyFile(format!("{}", e)))?;
Ok(PresaleWallet::from(presale))
}
/// Decrypt the wallet.
pub fn decrypt(&self, password: &Password) -> Result<KeyPair, Error> {
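// derive the AES key via PBKDF2-SHA256 (2000 iterations) using the password as both
// salt and secret, AES-128-CBC-decrypt the seed, then keccak256 the unpadded plaintext
// to obtain the account secret and check it against the stored address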
let mut derived_key = [0u8; 32];
let salt = pbkdf2::Salt(password.as_bytes());
let sec = pbkdf2::Secret(password.as_bytes());
let iter = NonZeroU32::new(2000).expect("2000 > 0; qed");
pbkdf2::sha256(iter.get(), salt, sec, &mut derived_key);
let mut key = vec![0; self.ciphertext.len()];
let len =
crypto::aes::decrypt_128_cbc(&derived_key[0..16], &self.iv, &self.ciphertext, &mut key)
.map_err(|_| Error::InvalidPassword)?;
let unpadded = &key[..len];
let secret = Secret::import_key(&unpadded.keccak256())?;
if let Ok(kp) = KeyPair::from_secret(secret) {
if kp.address() == self.address {
return Ok(kp);
}
}
Err(Error::InvalidPassword)
}
}
#[cfg(test)]
mod tests {
use super::PresaleWallet;
use json;
#[test]
fn test() {
let json = r#"
{
"encseed": "137103c28caeebbcea5d7f95edb97a289ded151b72159137cb7b2671f394f54cff8c121589dcb373e267225547b3c71cbdb54f6e48ec85cd549f96cf0dedb3bc0a9ac6c79b9c426c5878ca2c9d06ff42a23cb648312fc32ba83649de0928e066",
"ethaddr": "ede84640d1a1d3e06902048e67aa7db8d52c2ce1",
"email": "123@gmail.com",
"btcaddr": "1JvqEc6WLhg6GnyrLBe2ztPAU28KRfuseH"
} "#;
let wallet = json::PresaleWallet::load(json.as_bytes()).unwrap();
let wallet = PresaleWallet::from(wallet);
assert!(wallet.decrypt(&"123".into()).is_ok());
assert!(wallet.decrypt(&"124".into()).is_err());
}
}

Some files were not shown because too many files have changed in this diff