Compare commits

..

54 Commits

Author SHA1 Message Date
5chdn
c3377b6f72
Bump beta to 1.9.3 2018-02-14 15:32:50 +01:00
Afri Schoedon
4bc73dd155
Backport master ci prs 2018-02-14 15:29:18 +01:00
Denis S. Soldatov aka General-Beck
d49233f45b
Update gitlab-test.sh (#7883)
* Update gitlab-test.sh

https://github.com/paritytech/parity/issues/7871

* Update aura-test.sh
2018-02-14 15:26:31 +01:00
Afri Schoedon
6ef563561f
Resolve conflicts 2018-02-14 15:26:27 +01:00
Denis S. Soldatov aka General-Beck
8c4686482f
Update gitlab-build.sh (#7855)
fix build ```version``` after https://github.com/paritytech/parity/pull/7723
2018-02-14 15:24:06 +01:00
Tomasz Drwięga
b23024abfb
Resolve conflicts 2018-02-14 15:23:47 +01:00
Afri Schoedon
0b01fc0f8d
Resolve conflicts 2018-02-14 15:19:55 +01:00
GitLab Build Bot
af70a681d5 [ci skip] js-precompiled 20180212-180349 2018-02-12 18:04:52 +00:00
GitLab Build Bot
7adfb82076 [ci skip] js-precompiled 20180202-122111 2018-02-02 12:22:09 +00:00
GitLab Build Bot
d504ce64e8 [ci skip] js-precompiled 20180202-081050 2018-02-02 08:11:54 +00:00
Afri Schoedon
0feb0bb6e7
Backports beta (#7780)
* Bump beta to 1.9.2

* Update ropsten.json (#7776)
2018-02-01 21:09:42 +01:00
GitLab Build Bot
3b5a8d5d69 [ci skip] js-precompiled 20180201-173714 2018-02-01 17:38:09 +00:00
Denis S. Soldatov aka General-Beck
aca9f13d45
snapcraft push beta 2018-02-01 17:31:13 +03:00
GitLab Build Bot
e09bef98fb [ci skip] js-precompiled 20180201-110702 2018-02-01 11:07:58 +00:00
GitLab Build Bot
ceb590a360 [ci skip] js-precompiled 20180201-094935 2018-02-01 09:50:52 +00:00
5chdn
75c0db2b15
Trigger CI 2018-02-01 09:29:47 +01:00
GitLab Build Bot
70b42345c5 [ci skip] js-precompiled 20180201-070128 2018-02-01 07:02:34 +00:00
André Silva
a42d780d02 [Beta] Backports (#7756)
* Filter-out nodes.json (#7716)

* Filter-out nodes.json

* network: sort node table nodes by failure ratio

* network: fix node table tests

* network: fit node failure percentage into buckets of 5%

* network: consider number of attempts in sorting of node table

* network: fix node table grumbles

* Fix client not being dropped on shutdown (#7695)

* parity: wait for client to drop on shutdown

* parity: fix grumbles in shutdown wait

* parity: increase shutdown timeouts

* Wrap --help output to 120 characters (#7626)

* Update Clap dependency and remove workarounds

* WIP

* Remove line breaks in help messages for now

* Multiple values can only be separated by commas (closes #7428)

* Grumbles; refactor repeating code; add constant

* Use a single Wrapper rather than allocate a new one for each call

* Wrap --help to 120 characters rather than 100 characters
2018-01-31 21:45:23 +01:00
GitLab Build Bot
582fa8ce45 [ci skip] js-precompiled 20180131-171157 2018-01-31 17:13:01 +00:00
Jaco Greeff
73be0fb096 [beta] Token filter balances (throttle) (#7742)
* [beta] Token filter balances (throttle)

* Cleanups

* Remove unused uniq

* Update @parity/shared to 2.2.23

* Remove unused code paths
2018-01-31 14:59:53 +01:00
Afri Schoedon
627d1a4971
Bump beta to 1.9.1 (#7751) 2018-01-31 13:25:11 +01:00
Jaco Greeff
a7807106f5 [beta] Explicitly add branch name (#7754)
* [beta] Explicitly add branch name

* Fix cargo update branch to beta
2018-01-31 12:04:04 +01:00
Tomasz Drwięga
33b39f0725 Revert "revert to #7677 #7679" (#7715)
This reverts commit 568dc33a02.
2018-01-29 11:43:30 +01:00
Denis S. Soldatov aka General-Beck
53ec1141cf
fix permissions 2018-01-25 03:45:03 +03:00
Denis S. Soldatov aka General-Beck
145229d46d
add display of stages in js-release 2018-01-25 03:38:31 +03:00
Denis S. Soldatov aka General-Beck
568dc33a02
revert to #7677 #7679 2018-01-25 03:23:33 +03:00
Denis S. Soldatov aka General-Beck
cf10450108
add display of stages in js-release 2018-01-25 02:16:58 +03:00
GitLab Build Bot
fe779686ca [ci skip] js-precompiled 20180124-230347 2018-01-24 23:04:39 +00:00
Denis S. Soldatov aka General-Beck
58c1dbe322 Update gitlab-test.sh 2018-01-24 23:52:00 +01:00
Denis S. Soldatov aka General-Beck
14b578832d Update gitlab-test.sh 2018-01-24 23:39:03 +01:00
Denis S. Soldatov aka General-Beck
e961398393 Update gitlab-test.sh 2018-01-24 23:25:06 +01:00
Denis S. Soldatov aka General-Beck
0fad2a6d8c Update gitlab-test.sh 2018-01-24 23:12:09 +01:00
Denis S. Soldatov aka General-Beck
f3bcada7b9 Update gitlab-test.sh 2018-01-24 23:09:39 +01:00
Amaury Martiny
b814f1ccbf Add when when too many accounts (#7677) (#7679) 2018-01-24 09:45:08 +01:00
Afri Schoedon
cad91df2b8
Update installer.nsi 2018-01-23 22:37:33 +01:00
Denis S. Soldatov aka General-Beck
50a58e1ae8
fix conditions in gitlab-test (#7676)
* fix conditions in gitlab-test

* Update gitlab-test.sh
2018-01-23 14:55:02 +03:00
Denis S. Soldatov aka General-Beck
1e36fc5d0f
remove cargo cache 2018-01-23 14:42:24 +03:00
Marek Kotewicz
fa6a0a6b60 Backports to beta (#7660)
* Improve handling of RocksDB corruption (#7630)

* kvdb-rocksdb: update rust-rocksdb version

* kvdb-rocksdb: mark corruptions and attempt repair on db open

* kvdb-rocksdb: better corruption detection on open

* kvdb-rocksdb: add corruption_file_name const

* kvdb-rocksdb: rename mark_corruption to check_for_corruption

* Hardening of CSP (#7621)

* Fixed delegatecall's from/to (#7568)

* Fixed delegatecall's from/to, closes #7166

* added tests for delegatecall traces, #7167

* Light client RPCs (#7603)

* Implement registrar.

* Implement eth_getCode

* Don't wait for providers.

* Don't wait for providers.

* Fix linting and wasm tests.

* Problem: AttachedProtocols don't get registered (#7610)

I was investigating issues I am having with Whisper support. I've
enabled Whisper on a custom test network and inserted traces into
Whisper handler implementation (Network<T> and NetworkProtocolHandler
for Network<T>) and I noticed that the handler was never invoked.

After further research on this matter, I found out that
AttachedProtocol's register function does nothing:
https://github.com/paritytech/parity/blob/master/sync/src/api.rs#L172
but there was an implementation originally:
99075ad#diff-5212acb6bcea60e9804ba7b50f6fe6ec and it did the actual
expected logic of registering the protocol in the NetworkService.

However, as of 16d84f8#diff-5212acb6bcea60e9804ba7b50f6fe6ec ("finished
removing ipc") this implementation is gone and only the no-op function
is left.

Which leads me to a conclusion that in fact Whisper's handler never gets
registered in the service and therefore two nodes won't communicate
using it.

Solution: Resurrect original non-empty `AttachedProtocols.register`
implementation

Resolves #7566

* Fix Temporarily Invalid blocks handling (#7613)

* Handle temporarily invalid blocks in sync.

* Fix tests.
2018-01-23 12:32:34 +01:00
Denis S. Soldatov aka General-Beck
a8fc42d282
add docker build for beta (#7671)
* add docker build for beta

* add cargo cache
2018-01-23 06:19:39 +03:00
Denis S. Soldatov aka General-Beck
c6685a7f57
fix snapcraft build for beta (#7670) 2018-01-23 04:12:22 +03:00
Denis S. Soldatov aka General-Beck
736a8c40f0
Update Parity.pkgproj
1.10.0 ->1.9.0
2018-01-23 02:53:18 +03:00
Denis S. Soldatov aka General-Beck
5f74f8c265
update gitlab build from master
Signed-off-by: Denis S. Soldatov aka General-Beck <general.beck@gmail.com>
2018-01-23 01:50:52 +03:00
GitLab Build Bot
97ed569588 [ci skip] js-precompiled 20180119-115500 2018-01-19 11:55:49 +00:00
Jaco Greeff
6766ef988d
Update references to dapp sources (#7634) (#7636)
* Update plugin references

* Update dapp references

* Update console references
2018-01-19 12:19:33 +01:00
GitLab Build Bot
8a87cfb893 [ci skip] js-precompiled 20180118-163407 2018-01-18 16:34:59 +00:00
Jaco Greeff
54aebdcb45
Update tokenreg (#7618) (#7619)
* Update tokenreg

* Add commit hash
2018-01-18 16:54:23 +01:00
GitLab Build Bot
86a6145d76 [ci skip] js-precompiled 20180117-211011 2018-01-17 21:11:06 +00:00
Afri Schoedon
718020b64b [beta] fix cache:key (#7598)
* Bump 1.9 to beta

* Update .gitlab-ci.yml

fix cache:key
2018-01-17 23:36:13 +03:00
Afri Schoedon
8c36a56365
Bump 1.9 to beta (#7533) 2018-01-17 12:28:21 +01:00
GitLab Build Bot
7bccaa5c15 [ci skip] js-precompiled 20180111-130237 2018-01-11 13:03:32 +00:00
Jaco Greeff
98ec46fff6
[beta] Trigger js-precompiled (#7535) 2018-01-11 13:27:01 +01:00
GitLab Build Bot
8dc584ece9 [ci skip] js-precompiled 20180111-094838 2018-01-11 09:50:08 +00:00
André Silva
63d154dad3 kvdb: update rust-rocksdb version (#7512) 2018-01-10 11:23:37 +01:00
Amaury Martiny
0030bb4f1d Update js-api (#7510) 2018-01-09 17:57:52 +01:00
3547 changed files with 418998 additions and 235436 deletions

View File

@@ -1,3 +0,0 @@
[target.x86_64-pc-windows-msvc]
# Link the C runtime statically ; https://github.com/openethereum/parity-ethereum/issues/6643
rustflags = ["-Ctarget-feature=+crt-static"]

View File

@@ -1,2 +0,0 @@
# Reformat the source code
610d9baba4af83b5767c659ca2ccfed337af1056

View File

@@ -2,11 +2,11 @@
## 1. Purpose
A primary goal of OpenEthereum is to be inclusive to the largest number of contributors, with the most varied and diverse backgrounds possible. As such, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, ability, ethnicity, socioeconomic status, and religion (or lack thereof).
A primary goal of Parity is to be inclusive to the largest number of contributors, with the most varied and diverse backgrounds possible. As such, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, ability, ethnicity, socioeconomic status, and religion (or lack thereof).
This code of conduct outlines our expectations for all those who participate in our community, as well as the consequences for unacceptable behavior.
We invite all those who participate in OpenEthereum to help us create safe and positive experiences for everyone.
We invite all those who participate in Parity to help us create safe and positive experiences for everyone.
## 2. Open Source Citizenship
@@ -63,7 +63,7 @@ Additionally, community organizers are available to help community members engag
## 7. Addressing Grievances
If you feel you have been falsely or unfairly accused of violating this Code of Conduct, you should notify OpenEthereum Technologies with a concise description of your grievance. Your grievance will be handled in accordance with our existing governing policies.
If you feel you have been falsely or unfairly accused of violating this Code of Conduct, you should notify Parity Technologies with a concise description of your grievance. Your grievance will be handled in accordance with our existing governing policies.
## 8. Scope
@@ -73,7 +73,7 @@ This code of conduct and its related procedures also applies to unacceptable beh
## 9. Contact info
You can contact OpenEthereum via Email: community@parity.io
You can contact Parity via Email: community@parity.io
## 10. License and attribution

View File

@@ -2,7 +2,7 @@
## Do you have a question?
Check out our [Beginner Introduction](https://openethereum.github.io/Beginner-Introduction), [Configuration](https://openethereum.github.io//Configuring-OpenEthereum), and [FAQ](https://openethereum.github.io/FAQ) articles on our [wiki](https://openethereum.github.io/)!
Check out our [Basic Usage](https://github.com/paritytech/parity/wiki/Basic-Usage), [Configuration](https://github.com/paritytech/parity/wiki/Configuring-Parity), and [FAQ](https://github.com/paritytech/parity/wiki/FAQ) articles on our [wiki](https://github.com/paritytech/parity/wiki)!
See also frequently asked questions [tagged with `parity`](https://ethereum.stackexchange.com/questions/tagged/parity?sort=votes&pageSize=50) on Stack Exchange.
@@ -10,11 +10,11 @@ See also frequently asked questions [tagged with `parity`](https://ethereum.stac
Do **not** open an issue on Github if you think your discovered bug could be a **security-relevant vulnerability**. Please, read our [security policy](../SECURITY.md) instead.
Otherwise, just create a [new issue](https://github.com/openethereum/openethereum/issues/new) in our repository and state:
Otherwise, just create a [new issue](https://github.com/paritytech/parity/issues/new) in our repository and state:
- What's your OpenEthereum version?
- What's your Parity version?
- What's your operating system and version?
- How did you install OpenEthereum?
- How did you install parity?
- Is your node fully synchronized?
- Did you try turning it off and on again?
@@ -22,47 +22,12 @@ Also, try to include **steps to reproduce** the issue and expand on the **actual
## Contribute!
If you would like to contribute to OpenEthereum, please **fork it**, fix bugs or implement features, and [propose a pull request](https://github.com/openethereum/openethereum/compare).
If you would like to contribute to Parity, please **fork it**, fix bugs or implement features, and [propose a pull request](https://github.com/paritytech/parity/compare).
### Labels & Milestones
We use [labels](https://github.com/openethereum/openethereum/labels) to manage PRs and issues and communicate the state of a PR. Please familiarize yourself with them. Furthermore, we organize issues in [milestones](https://github.com/openethereum/openethereum/milestones). The best way to get started is to pick a ticket from the current milestone tagged [`easy`](https://github.com/openethereum/openethereum/labels/Q2-easy%20%F0%9F%92%83) and get going, or [`mentor`](https://github.com/openethereum/openethereum/labels/Q1-mentor%20%F0%9F%95%BA) and get in contact with the mentor offering their support on that larger task.
### Rules
There are a few basic ground-rules for contributors (including the maintainer(s) of the project):
* **No pushing directly to the master branch**.
* **All modifications** must be made in a **pull-request** to solicit feedback from other contributors.
* Pull-requests cannot be merged before CI runs green and two reviewers have given their approval.
* All changed code should be formatted by running `cargo fmt -- --config=merge_imports=true`
### Recommendations
* **Non-master branch names** *should* be prefixed with a short name moniker, followed by the associated Github Issue ID (if any), and a brief description of the task using the format `<GITHUB_USERNAME>-<ISSUE_ID>-<BRIEF_DESCRIPTION>` (e.g. `gavin-123-readme`). The name moniker helps people to inquire about their unfinished work, and the GitHub Issue ID helps your future self and other developers (particularly those who are onboarding) find out about and understand the original scope of the task, and where it fits into Parity Ethereum [Projects](https://github.com/openethereum/openethereum/projects).
* **Remove stale branches periodically**
### Preparing Pull Requests
* If your PR does not alter any logic (e.g. comments, dependencies, docs), then it may be tagged [`insubstantial`](https://github.com/openethereum/openethereum/pulls?q=is%3Aopen+is%3Apr+label%3A%22A2-insubstantial+%F0%9F%91%B6%22).
* Once a PR is ready for review please add the [`pleasereview`](https://github.com/openethereum/openethereum/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3A%22A0-pleasereview+%F0%9F%A4%93%22+) label.
### Reviewing Pull Requests*:
* At least two reviewers are required to review PRs (even for PRs tagged [`insubstantial`](https://github.com/openethereum/openethereum/pulls?q=is%3Aopen+is%3Apr+label%3A%22A2-insubstantial+%F0%9F%91%B6%22)).
When doing a review, make sure to look for any:
* Buggy behavior.
* Undue maintenance burden.
* Breaking with house coding style.
* Pessimization (i.e. reduction of speed as measured in the project's benchmarks).
* Breaking changes should be carefully reviewed and tagged as such so they end up in the [changelog](../CHANGELOG.md).
* Uselessness (i.e. it does not strictly add a feature or fix a known issue).
Please, refer to the [Coding Guide](https://github.com/paritytech/parity/wiki/Coding-guide) in our wiki for more details about hacking on Parity.
## License.
By contributing to Parity Ethereum, you agree that your contributions will be licensed under the [GPLv3 License](../LICENSE).
By contributing to Parity, you agree that your contributions will be licensed under the [GPLv3 License](../LICENSE).
Each contributor has to sign our Contributor License Agreement. The purpose of the CLA is to ensure that the guardian of a project's outputs has the necessary ownership or grants of rights over all contributions to allow them to distribute under the chosen license. You can read and sign our full Contributor License Agreement at [cla.parity.io](https://cla.parity.io) before submitting a pull request.

View File

@@ -1,13 +1,13 @@
For questions please use https://discord.io/openethereum, issues are for bugs and feature requests.
_Before filing a new issue, please **provide the following information**._
- **OpenEthereum version (>=3.1.0)**: 0.0.0
- **Operating system**: Windows / MacOS / Linux
- **Installation**: homebrew / one-line installer / built from source
- **Fully synchronized**: no / yes
- **Network**: ethereum / ropsten / kovan / ...
- **Restarted**: no / yes
> I'm running:
>
> - **Which Parity version?**: 0.0.0
> - **Which operating system?**: Windows / MacOS / Linux
> - **How installed?**: via installer / homebrew / binaries / from source
> - **Are you fully synchronized?**: no / yes
> - **Which network are you connected to?**: ethereum / ropsten / kovan / ...
> - **Did you try to restart the node?**: no / yes
_Your issue description goes here below. Try to include **actual** vs. **expected behavior** and **steps to reproduce** the issue._

View File

@@ -1,33 +0,0 @@
name: Build and Test Suite on Windows
on:
push:
branches:
- main
- dev
jobs:
build-tests:
name: Test and Build
strategy:
matrix:
platform:
- windows2019 # custom runner
toolchain:
- 1.52.1
runs-on: ${{ matrix.platform }}
steps:
- name: Checkout sources
uses: actions/checkout@main
with:
submodules: true
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: ${{ matrix.toolchain }}
profile: minimal
override: true
- name: Build tests
uses: actions-rs/cargo@v1
with:
command: test
args: --locked --all --release --features "json-tests" --verbose --no-run

View File

@@ -1,40 +0,0 @@
name: Build and Test Suite
on:
pull_request:
push:
branches:
- main
- dev
jobs:
build-tests:
name: Test and Build
strategy:
matrix:
platform:
- ubuntu-16.04
- macos-latest
toolchain:
- 1.52.1
runs-on: ${{ matrix.platform }}
steps:
- name: Checkout sources
uses: actions/checkout@main
with:
submodules: true
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: ${{ matrix.toolchain }}
profile: minimal
override: true
- name: Build tests
uses: actions-rs/cargo@v1
with:
command: test
args: --locked --all --release --features "json-tests" --verbose --no-run
- name: Run tests for ${{ matrix.platform }}
uses: actions-rs/cargo@v1
with:
command: test
args: --locked --all --release --features "json-tests" --verbose

View File

@@ -1,285 +0,0 @@
name: Build Release Suite
on:
push:
tags:
- v*
# Global vars
env:
AWS_REGION: "us-east-1"
AWS_S3_ARTIFACTS_BUCKET: "openethereum-releases"
ACTIONS_ALLOW_UNSECURE_COMMANDS: true
jobs:
build:
name: Build Release
strategy:
matrix:
platform:
- ubuntu-16.04
- macos-latest
toolchain:
- 1.52.1
runs-on: ${{ matrix.platform }}
steps:
- name: Checkout sources
uses: actions/checkout@main
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: ${{ matrix.toolchain }}
profile: minimal
override: true
# ==============================
# Windows Build
# ==============================
# - name: Install LLVM for Windows
# if: matrix.platform == 'windows2019'
# run: choco install llvm
# - name: Build OpenEthereum for Windows
# if: matrix.platform == 'windows2019'
# run: sh scripts/actions/build-windows.sh ${{matrix.platform}}
# - name: Upload Windows build
# uses: actions/upload-artifact@v2
# if: matrix.platform == 'windows2019'
# with:
# name: windows-artifacts
# path: artifacts
# ==============================
# Linux/Macos Build
# ==============================
- name: Build OpenEthereum for ${{matrix.platform}}
if: matrix.platform != 'windows2019'
run: sh scripts/actions/build-linux.sh ${{matrix.platform}}
- name: Upload Linux build
uses: actions/upload-artifact@v2
if: matrix.platform == 'ubuntu-16.04'
with:
name: linux-artifacts
path: artifacts
- name: Upload MacOS build
uses: actions/upload-artifact@v2
if: matrix.platform == 'macos-latest'
with:
name: macos-artifacts
path: artifacts
zip-artifacts-creator:
name: Create zip artifacts
needs: build
runs-on: ubuntu-16.04
steps:
- name: Set env
run: echo "RELEASE_VERSION=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
# ==============================
# Create ZIP files
# ==============================
# - name: Download Windows artifacts
# uses: actions/download-artifact@v2
# with:
# name: windows-artifacts
# path: windows-artifacts
- name: Download Linux artifacts
uses: actions/download-artifact@v2
with:
name: linux-artifacts
path: linux-artifacts
- name: Download MacOS artifacts
uses: actions/download-artifact@v2
with:
name: macos-artifacts
path: macos-artifacts
- name: Display structure of downloaded files
run: ls
- name: Create zip Linux
id: create_zip_linux
run: |
cd linux-artifacts/
zip -rT openethereum-linux-${{ env.RELEASE_VERSION }}.zip *
ls openethereum-linux-${{ env.RELEASE_VERSION }}.zip
cd ..
mv linux-artifacts/openethereum-linux-${{ env.RELEASE_VERSION }}.zip .
echo "Setting outputs..."
echo ::set-output name=LINUX_ARTIFACT::openethereum-linux-${{ env.RELEASE_VERSION }}.zip
echo ::set-output name=LINUX_SHASUM::$(shasum -a 256 openethereum-linux-${{ env.RELEASE_VERSION }}.zip | awk '{print $1}')
- name: Create zip MacOS
id: create_zip_macos
run: |
cd macos-artifacts/
zip -rT openethereum-macos-${{ env.RELEASE_VERSION }}.zip *
ls openethereum-macos-${{ env.RELEASE_VERSION }}.zip
cd ..
mv macos-artifacts/openethereum-macos-${{ env.RELEASE_VERSION }}.zip .
echo "Setting outputs..."
echo ::set-output name=MACOS_ARTIFACT::openethereum-macos-${{ env.RELEASE_VERSION }}.zip
echo ::set-output name=MACOS_SHASUM::$(shasum -a 256 openethereum-macos-${{ env.RELEASE_VERSION }}.zip | awk '{print $1}')
# - name: Create zip Windows
# id: create_zip_windows
# run: |
# cd windows-artifacts/
# zip -rT openethereum-windows-${{ env.RELEASE_VERSION }}.zip *
# ls openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# cd ..
# mv windows-artifacts/openethereum-windows-${{ env.RELEASE_VERSION }}.zip .
# echo "Setting outputs..."
# echo ::set-output name=WINDOWS_ARTIFACT::openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# echo ::set-output name=WINDOWS_SHASUM::$(shasum -a 256 openethereum-windows-${{ env.RELEASE_VERSION }}.zip | awk '{print $1}')
# =======================================================================
# Upload artifacts
# This is required to share artifacts between different jobs
# =======================================================================
- name: Upload artifacts
uses: actions/upload-artifact@v2
with:
name: openethereum-linux-${{ env.RELEASE_VERSION }}.zip
path: openethereum-linux-${{ env.RELEASE_VERSION }}.zip
- name: Upload artifacts
uses: actions/upload-artifact@v2
with:
name: openethereum-macos-${{ env.RELEASE_VERSION }}.zip
path: openethereum-macos-${{ env.RELEASE_VERSION }}.zip
# - name: Upload artifacts
# uses: actions/upload-artifact@v2
# with:
# name: openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# path: openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# =======================================================================
# Upload artifacts to S3
# This is required by some software distribution systems which require
# artifacts to be downloadable, like Brew on MacOS.
# =======================================================================
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION }}
- name: Copy files to S3 with the AWS CLI
run: |
# Deploy zip artifacts to S3 bucket to a directory whose name is the tagged release version.
# Deploy macos binary artifact (if required, add more `aws s3 cp` commands to deploy specific OS versions)
aws s3 cp macos-artifacts/openethereum s3://${{ env.AWS_S3_ARTIFACTS_BUCKET }}/${{ env.RELEASE_VERSION }}/macos/ --region ${{ env.AWS_REGION }}
outputs:
linux-artifact: ${{ steps.create_zip_linux.outputs.LINUX_ARTIFACT }}
linux-shasum: ${{ steps.create_zip_linux.outputs.LINUX_SHASUM }}
macos-artifact: ${{ steps.create_zip_macos.outputs.MACOS_ARTIFACT }}
macos-shasum: ${{ steps.create_zip_macos.outputs.MACOS_SHASUM }}
# windows-artifact: ${{ steps.create_zip_windows.outputs.WINDOWS_ARTIFACT }}
# windows-shasum: ${{ steps.create_zip_windows.outputs.WINDOWS_SHASUM }}
draft-release:
name: Draft Release
needs: zip-artifacts-creator
runs-on: ubuntu-16.04
steps:
- name: Set env
run: echo "RELEASE_VERSION=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
# ==============================
# Download artifacts
# ==============================
- name: Download artifacts
uses: actions/download-artifact@v2
with:
name: openethereum-linux-${{ env.RELEASE_VERSION }}.zip
- name: Download artifacts
uses: actions/download-artifact@v2
with:
name: openethereum-macos-${{ env.RELEASE_VERSION }}.zip
# - name: Download artifacts
# uses: actions/download-artifact@v2
# with:
# name: openethereum-windows-${{ env.RELEASE_VERSION }}.zip
- name: Display structure of downloaded files
run: ls
# ==============================
# Create release draft
# ==============================
- name: Create Release Draft
id: create_release_draft
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # This token is provided by Actions, you do not need to create your own token
with:
tag_name: ${{ github.ref }}
release_name: OpenEthereum ${{ github.ref }}
body: |
This release contains <ADD_TEXT>
| System | Architecture | Binary | Sha256 Checksum |
|:---:|:---:|:---:|:---|
| <img src="https://gist.github.com/5chdn/1fce888fde1d773761f809b607757f76/raw/44c4f0fc63f1ea8e61a9513af5131ef65eaa6c75/apple.png" alt="Apple Icon by Pixel Perfect from https://www.flaticon.com/authors/pixel-perfect" style="width: 32px;"/> | x64 | [${{ needs.zip-artifacts-creator.outputs.macos-artifact }}](https://github.com/openethereum/openethereum/releases/download/${{ env.RELEASE_VERSION }}/${{ needs.zip-artifacts-creator.outputs.macos-artifact }}) | `${{ needs.zip-artifacts-creator.outputs.macos-shasum }}` |
| <img src="https://gist.github.com/5chdn/1fce888fde1d773761f809b607757f76/raw/44c4f0fc63f1ea8e61a9513af5131ef65eaa6c75/linux.png" alt="Linux Icon by Pixel Perfect from https://www.flaticon.com/authors/pixel-perfect" style="width: 32px;"/> | x64 | [${{ needs.zip-artifacts-creator.outputs.linux-artifact }}](https://github.com/openethereum/openethereum/releases/download/${{ env.RELEASE_VERSION }}/${{ needs.zip-artifacts-creator.outputs.linux-artifact }}) | `${{ needs.zip-artifacts-creator.outputs.linux-shasum }}` |
| <img src="https://gist.github.com/5chdn/1fce888fde1d773761f809b607757f76/raw/44c4f0fc63f1ea8e61a9513af5131ef65eaa6c75/windows.png" alt="Windows Icon by Pixel Perfect from https://www.flaticon.com/authors/pixel-perfect" style="width: 32px;"/> | x64 | [${{ needs.zip-artifacts-creator.outputs.windows-artifact }}](https://github.com/openethereum/openethereum/releases/download/${{ env.RELEASE_VERSION }}/${{ needs.zip-artifacts-creator.outputs.windows-artifact }}) | `${{ needs.zip-artifacts-creator.outputs.windows-shasum }}` |
| | | | |
| **System** | **Option** | - | **Resource** |
| <img src="https://gist.github.com/5chdn/1fce888fde1d773761f809b607757f76/raw/44c4f0fc63f1ea8e61a9513af5131ef65eaa6c75/settings.png" alt="Settings Icon by Pixel Perfect from https://www.flaticon.com/authors/pixel-perfect" style="width: 32px;"/> | Docker | - | [hub.docker.com/r/openethereum/openethereum](https://hub.docker.com/r/openethereum/openethereum) |
draft: true
prerelease: true
- name: Upload Release Asset - Linux
id: upload_release_asset_linux
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release_draft.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: ./openethereum-linux-${{ env.RELEASE_VERSION }}.zip
asset_name: openethereum-linux-${{ env.RELEASE_VERSION }}.zip
asset_content_type: application/zip
- name: Upload Release Asset - MacOS
id: upload_release_asset_macos
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release_draft.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: ./openethereum-macos-${{ env.RELEASE_VERSION }}.zip
asset_name: openethereum-macos-${{ env.RELEASE_VERSION }}.zip
asset_content_type: application/zip
# - name: Upload Release Asset - Windows
# id: upload_release_asset_windows
# uses: actions/upload-release-asset@v1
# env:
# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# with:
# upload_url: ${{ steps.create_release_draft.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
# asset_path: ./openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# asset_name: openethereum-windows-${{ env.RELEASE_VERSION }}.zip
# asset_content_type: application/zip

View File

@@ -1,50 +0,0 @@
name: Check
on:
pull_request:
push:
branches:
- main
- dev
jobs:
check:
name: Check
runs-on: ubuntu-16.04
steps:
- name: Checkout sources
uses: actions/checkout@main
with:
submodules: true
- name: Install 1.52.1 toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: 1.52.1
profile: minimal
override: true
- name: Run cargo check 1/3
uses: actions-rs/cargo@v1
with:
command: check
args: --locked --no-default-features --verbose
- name: Run cargo check 2/3
uses: actions-rs/cargo@v1
with:
command: check
args: --locked --manifest-path crates/runtime/io/Cargo.toml --no-default-features --verbose
- name: Run cargo check 3/3
uses: actions-rs/cargo@v1
with:
command: check
args: --locked --manifest-path crates/runtime/io/Cargo.toml --features "mio" --verbose
- name: Run cargo check evmbin
uses: actions-rs/cargo@v1
with:
command: check
args: --locked -p evmbin --verbose
- name: Run cargo check benches
uses: actions-rs/cargo@v1
with:
command: check
args: --locked --all --benches --verbose
- name: Run validate chainspecs
run: ./scripts/actions/validate-chainspecs.sh

View File

@@ -1,29 +0,0 @@
name: Docker Image Nightly Release
# Run "nightly" build on each commit to "dev" branch.
on:
push:
branches:
- dev
jobs:
deploy-docker:
name: Build Release
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@master
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: 1.52.1
profile: minimal
override: true
- name: Deploy to docker hub
uses: elgohr/Publish-Docker-Github-Action@master
with:
name: openethereum/openethereum
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
dockerfile: scripts/docker/alpine/Dockerfile
tags: "nightly"

View File

@@ -1,30 +0,0 @@
name: Docker Image Tag and Latest Release
on:
push:
tags:
- v*
jobs:
deploy-docker:
name: Build Release
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@master
- name: Set env
run: echo "RELEASE_VERSION=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: 1.52.1
profile: minimal
override: true
- name: Deploy to docker hub
uses: elgohr/Publish-Docker-Github-Action@master
with:
name: openethereum/openethereum
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
dockerfile: scripts/docker/alpine/Dockerfile
tags: "latest,${{ env.RELEASE_VERSION }}"

View File

@@ -1,30 +0,0 @@
name: Docker Image Release
on:
push:
branches:
- main
tags:
- v*
jobs:
deploy-docker:
name: Build Release
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@master
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: 1.52.1
profile: minimal
override: true
- name: Deploy to docker hub
uses: elgohr/Publish-Docker-Github-Action@master
with:
name: openethereum/openethereum
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
dockerfile: scripts/docker/alpine/Dockerfile
tag_names: true

View File

@@ -1,20 +0,0 @@
on: [push, pull_request]
name: rustfmt
jobs:
fmt:
name: Rustfmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: 1.52.1
override: true
- run: rustup component add rustfmt
- uses: actions-rs/cargo@v1
with:
command: fmt
args: --all -- --check --config merge_imports=true

.gitignore (vendored): 5 changed lines
View File

@@ -40,8 +40,5 @@ node_modules
out/
.vscode
rls/
/parity.*
# cargo remote artifacts
remote-target
/parity.*

.gitlab-ci.yml (new file): 265 changed lines
View File

@@ -0,0 +1,265 @@
stages:
- test
- js-build
- push-release
- build
variables:
RUST_BACKTRACE: "1"
RUSTFLAGS: ""
CARGOFLAGS: ""
CI_SERVER_NAME: "GitLab CI"
LIBSSL: "libssl1.0.0 (>=1.0.0)"
cache:
key: "$CI_BUILD_STAGE-$CI_BUILD_REF_NAME"
paths:
- target
untracked: true
linux-stable:
stage: build
image: parity/rust:gitlab-ci
only:
- beta
- tags
- stable
- triggers
script:
- rustup default stable
# ARGUMENTS: 1. BUILD_PLATFORM (target for binaries) 2. PLATFORM (target for cargo) 3. ARC (architecture) 4. & 5. CC & CXX flags 6. binary identifier
- scripts/gitlab-build.sh x86_64-unknown-linux-gnu x86_64-unknown-linux-gnu amd64 gcc g++ ubuntu
tags:
- rust-stable
artifacts:
paths:
- parity.zip
name: "stable-x86_64-unknown-linux-gnu_parity"
linux-stable-debian:
stage: build
image: parity/rust-debian:gitlab-ci
only:
- beta
- tags
- stable
- triggers
script:
- export LIBSSL="libssl1.1 (>=1.1.0)"
- scripts/gitlab-build.sh x86_64-unknown-debian-gnu x86_64-unknown-linux-gnu amd64 gcc g++ debian
tags:
- rust-debian
artifacts:
paths:
- parity.zip
name: "stable-x86_64-unknown-debian-gnu_parity"
linux-centos:
stage: build
image: parity/rust-centos:gitlab-ci
only:
- beta
- tags
- stable
- triggers
script:
- scripts/gitlab-build.sh x86_64-unknown-centos-gnu x86_64-unknown-linux-gnu x86_64 gcc g++ centos
tags:
- rust-centos
artifacts:
paths:
- parity.zip
name: "x86_64-unknown-centos-gnu_parity"
linux-i686:
stage: build
image: parity/rust-i686:gitlab-ci
only:
- beta
- tags
- stable
- triggers
script:
- scripts/gitlab-build.sh i686-unknown-linux-gnu i686-unknown-linux-gnu i386 gcc g++ ubuntu
tags:
- rust-i686
artifacts:
paths:
- parity.zip
name: "i686-unknown-linux-gnu"
linux-armv7:
stage: build
image: parity/rust-armv7:gitlab-ci
only:
- beta
- tags
- stable
- triggers
script:
- scripts/gitlab-build.sh armv7-unknown-linux-gnueabihf armv7-unknown-linux-gnueabihf armhf arm-linux-gnueabihf-gcc arm-linux-gnueabihf-g++ ubuntu
tags:
- rust-arm
artifacts:
paths:
- parity.zip
name: "armv7_unknown_linux_gnueabihf_parity"
linux-arm:
stage: build
image: parity/rust-arm:gitlab-ci
only:
- beta
- tags
- stable
- triggers
script:
- scripts/gitlab-build.sh arm-unknown-linux-gnueabihf arm-unknown-linux-gnueabihf armhf arm-linux-gnueabihf-gcc arm-linux-gnueabihf-g++ ubuntu
tags:
- rust-arm
artifacts:
paths:
- parity.zip
name: "arm-unknown-linux-gnueabihf_parity"
linux-aarch64:
stage: build
image: parity/rust-arm64:gitlab-ci
only:
- beta
- tags
- stable
- triggers
script:
- scripts/gitlab-build.sh aarch64-unknown-linux-gnu aarch64-unknown-linux-gnu arm64 aarch64-linux-gnu-gcc aarch64-linux-gnu-g++ ubuntu
tags:
- rust-arm
artifacts:
paths:
- parity.zip
name: "aarch64-unknown-linux-gnu_parity"
linux-snap:
stage: build
image: snapcore/snapcraft:stable
only:
- stable
- beta
- tags
- triggers
script:
- scripts/gitlab-build.sh x86_64-unknown-snap-gnu x86_64-unknown-linux-gnu amd64 gcc g++ snap
tags:
- rust-stable
artifacts:
paths:
- parity.zip
name: "stable-x86_64-unknown-snap-gnu_parity"
allow_failure: true
darwin:
stage: build
only:
- beta
- tags
- stable
- triggers
script:
- scripts/gitlab-build.sh x86_64-apple-darwin x86_64-apple-darwin macos gcc g++ macos
tags:
- osx
artifacts:
paths:
- parity.zip
name: "x86_64-apple-darwin_parity"
windows:
cache:
key: "%CI_BUILD_STAGE%-%CI_BUILD_REF_NAME%"
untracked: true
stage: build
only:
- beta
- tags
- stable
- triggers
script:
- sh scripts/gitlab-build.sh x86_64-pc-windows-msvc x86_64-pc-windows-msvc installer "" "" windows
tags:
- rust-windows
artifacts:
paths:
- parity.zip
name: "x86_64-pc-windows-msvc_parity"
docker-build:
stage: build
only:
- tags
- triggers
before_script:
- docker info
script:
- if [ "$CI_BUILD_REF_NAME" == "beta-release" ]; then DOCKER_TAG="latest"; else DOCKER_TAG=$CI_BUILD_REF_NAME; fi
- echo "Tag:" $DOCKER_TAG
- docker login -u $Docker_Hub_User_Parity -p $Docker_Hub_Pass_Parity
- scripts/docker-build.sh $DOCKER_TAG
- docker logout
tags:
- docker
test-coverage:
stage: test
only:
- master
script:
- scripts/gitlab-test.sh test-coverage
tags:
- kcov
allow_failure: true
test-rust-stable:
stage: test
image: parity/rust:gitlab-ci
script:
- scripts/gitlab-test.sh stable
tags:
- rust-stable
test-rust-beta:
stage: test
only:
- triggers
- master
image: parity/rust:gitlab-ci
script:
- scripts/gitlab-test.sh beta
tags:
- rust-beta
allow_failure: true
test-rust-nightly:
stage: test
only:
- triggers
- master
image: parity/rust:gitlab-ci
script:
- scripts/gitlab-test.sh nightly
tags:
- rust
- rust-nightly
allow_failure: true
js-test:
stage: test
image: parity/rust:gitlab-ci
script:
- scripts/gitlab-test.sh js-test
tags:
- rust-stable
js-release:
stage: js-build
only:
- master
- stable
- beta
- tags
- triggers
image: parity/rust:gitlab-ci
script:
- scripts/gitlab-test.sh js-release
tags:
- javascript
push-release:
stage: push-release
only:
- tags
- triggers
image: parity/rust:gitlab-ci
script:
- scripts/gitlab-push-release.sh
tags:
- curl

.gitmodules (vendored): 8 changed lines
View File

@@ -1,3 +1,7 @@
[submodule "crates/ethcore/res/json_tests"]
path = crates/ethcore/res/json_tests
[submodule "ethcore/res/ethereum/tests"]
path = ethcore/res/ethereum/tests
url = https://github.com/ethereum/tests.git
branch = develop
[submodule "ethcore/res/wasm-tests"]
path = ethcore/res/wasm-tests
url = https://github.com/paritytech/wasm-tests

View File

@@ -1,208 +1,371 @@
## OpenEthereum v3.3.3
## Parity [v1.8.5](https://github.com/paritytech/parity/releases/tag/v1.8.5) (2017-12-29)
Enhancements:
* Implement eip-3607 (#593)
Parity 1.8.5 changes the default behavior of JSON-RPC CORS setting, detects same-key engine signers in Aura networks, and updates bootnodes for the Kovan and Foundation networks.
Bug fixes:
* Add type field for legacy transactions in RPC calls (#580)
* Makes eth_mining to return False if not is not allowed to seal (#581)
* Made nodes data concatenate as RLP sequences instead of bytes (#598)
Note: The default value of `--jsonrpc-cors` option has been altered to disallow (potentially malicious) websites from accessing the low-sensitivity RPCs (viewing exposed accounts, proposing transactions for signing). Currently domains need to be whitelisted manually. To bring back previous behaviour run with `--jsonrpc-cors all` or `--jsonrpc-cors http://example.com`.
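For quick reference, a minimal sketch of the CLI usage described in the note above; the `parity` binary name is assumed here, and the flag values are quoted from the note (substitute your own origin for `http://example.com`):

```bash
# Whitelist a single origin explicitly:
parity --jsonrpc-cors http://example.com

# Or restore the pre-1.8.5 permissive behaviour (exposes the low-sensitivity RPCs to any website):
parity --jsonrpc-cors all
```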
## OpenEthereum v3.3.2
The full list of included changes:
Enhancements:
* London hardfork block: Sokol (24114400)
- Beta Backports ([#7297](https://github.com/paritytech/parity/pull/7297))
- New warp enodes ([#7287](https://github.com/paritytech/parity/pull/7287))
- New warp enodes
- Added one more warp enode; replaced spaces with tabs
- Bump beta to 1.8.5
- Update kovan boot nodes
- Detect different node, same-key signing in aura ([#7245](https://github.com/paritytech/parity/pull/7245))
- Detect different node, same-key signing in aura
- Reduce scope of warning
- Fix Cargo.lock
- Updating mainnet bootnodes.
- Update bootnodes ([#7363](https://github.com/paritytech/parity/pull/7363))
- Updating mainnet bootnodes.
- Add additional parity-beta bootnodes.
- Restore old parity bootnodes and update foudation bootnodes
- Fix default CORS. ([#7388](https://github.com/paritytech/parity/pull/7388))
Bug fixes:
* Fix for maxPriorityFeePerGas overflow
## Parity [v1.8.4](https://github.com/paritytech/parity/releases/tag/v1.8.4) (2017-12-12)
## OpenEthereum v3.3.1
Parity 1.8.4 applies fixes for Proof-of-Authority networks and schedules the Kovan-Byzantium hard-fork.
Enhancements:
* Add eth_maxPriorityFeePerGas implementation (#570)
* Add a bootnode for Kovan
- The Kovan testnet will fork on block `5067000` at `Thu Dec 14 2017 05:40:03 UTC`.
- This enables Byzantium features on Kovan.
- This disables uncles on Kovan for stability reasons.
- Proof-of-Authority networks are advised to set `maximumUncleCount` to 0 in a future `maximumUncleCountTransition` for stability reasons.
- See the [Kovan chain spec](https://github.com/paritytech/parity/blob/master/ethcore/res/ethereum/kovan.json) for an example.
- New PoA networks created with Parity will have this feature enabled by default.
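As a rough illustration of the advice in the bullets above, a Proof-of-Authority chain spec could carry the two options mentioned there in its Aura engine parameters. This is only a sketch: the `engine.authorityRound.params` nesting and the transition block number (taken from the Kovan fork block quoted above) are assumptions, and the linked Kovan spec remains the authoritative example.

```json
{
  "engine": {
    "authorityRound": {
      "params": {
        "maximumUncleCountTransition": 5067000,
        "maximumUncleCount": 0
      }
    }
  }
}
```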
Bug fixes:
* Fix for modexp overflow in debug mode (#578)
Furthermore, this release includes the ECIP-1039 Monetary policy rounding specification for Ethereum Classic, reduces the maximum Ethash-block timestamp drift to 15 seconds, and fixes various bugs for WASM and the RPC APIs.
## OpenEthereum v3.3.0
The full list of included changes:
Enhancements:
* Add `validateServiceTransactionsTransition` spec option to be able to enable additional checking of zero gas price transactions by block verifier
- Beta Backports and HF block update ([#7244](https://github.com/paritytech/parity/pull/7244))
- Reduce max block timestamp drift to 15 seconds ([#7240](https://github.com/paritytech/parity/pull/7240))
- Add test for block timestamp validation within allowed drift
- Update kovan HF block number.
- Beta Kovan HF ([#7234](https://github.com/paritytech/parity/pull/7234))
- Kovan HF.
- Bump version.
- Fix aura difficulty race ([#7198](https://github.com/paritytech/parity/pull/7198))
- Fix test key
- Extract out score calculation
- Fix build
- Update kovan HF block number.
- Add missing byzantium builtins.
- Bump installers versions.
- Increase allowed time drift to 10s. ([#7238](https://github.com/paritytech/parity/pull/7238))
- Beta Backports ([#7197](https://github.com/paritytech/parity/pull/7197))
- Maximum uncle count transition ([#7196](https://github.com/paritytech/parity/pull/7196))
- Enable delayed maximum_uncle_count activation.
- Fix tests.
- Defer kovan HF.
- Disable uncles by default ([#7006](https://github.com/paritytech/parity/pull/7006))
- Escape inifinite loop in estimte_gas ([#7075](https://github.com/paritytech/parity/pull/7075))
- ECIP-1039: Monetary policy rounding specification ([#7067](https://github.com/paritytech/parity/pull/7067))
- WASM Remove blockhash error ([#7121](https://github.com/paritytech/parity/pull/7121))
- Remove blockhash error
- Update tests.
- WASM storage_read and storage_write don't return anything ([#7110](https://github.com/paritytech/parity/pull/7110))
- WASM parse payload from panics ([#7097](https://github.com/paritytech/parity/pull/7097))
- Fix no-default-features. ([#7096](https://github.com/paritytech/parity/pull/7096))
## OpenEthereum v3.3.0-rc.15
## Parity [v1.8.3](https://github.com/paritytech/parity/releases/tag/v1.8.3) (2017-11-15)
* Revert eip1559BaseFeeMinValue activation on xDai at London hardfork block
Parity 1.8.3 contains several bug-fixes and removes the ability to deploy built-in multi-signature wallets.
## OpenEthereum v3.3.0-rc.14
The full list of included changes:
Enhancements:
* Add eip1559BaseFeeMinValue and eip1559BaseFeeMinValueTransition spec options
* Activate eip1559BaseFeeMinValue on xDai at London hardfork block (19040000), set it to 20 GWei
* Activate eip1559BaseFeeMinValue on POA Core at block 24199500 (November 8, 2021), set it to 10 GWei
* Delay difficulty bomb to June 2022 for Ethereum Mainnet (EIP-4345)
- Backports to beta ([#7043](https://github.com/paritytech/parity/pull/7043))
- pwasm-std update ([#7018](https://github.com/paritytech/parity/pull/7018))
- Version 1.8.3
- Make CLI arguments parsing more backwards compatible ([#7004](https://github.com/paritytech/parity/pull/7004))
- Skip nonce check for gas estimation ([#6997](https://github.com/paritytech/parity/pull/6997))
- Events in WASM runtime ([#6967](https://github.com/paritytech/parity/pull/6967))
- Return decoded seal fields. ([#6932](https://github.com/paritytech/parity/pull/6932))
- Fix serialization of status in transaction receipts. ([#6926](https://github.com/paritytech/parity/pull/6926))
- Windows fixes ([#6921](https://github.com/paritytech/parity/pull/6921))
- Disallow built-in multi-sig deploy (only watch) ([#7014](https://github.com/paritytech/parity/pull/7014))
- Add hint in ActionParams for splitting code/data ([#6968](https://github.com/paritytech/parity/pull/6968))
- Action params and embedded params handling
- Fix name-spaces
## OpenEthereum v3.3.0-rc.13
## Parity [v1.8.2](https://github.com/paritytech/parity/releases/tag/v1.8.2) (2017-10-26)
Enhancements:
* London hardfork block: POA Core (24090200)
Parity 1.8.2 fixes an important potential consensus issue and a few additional minor issues:
## OpenEthereum v3.3.0-rc.12
- `blockNumber` transaction field is now returned correctly in RPC calls.
- Possible crash when `--force-sealing` option is used.
Enhancements:
* London hardfork block: xDai (19040000)
The full list of included changes:
## OpenEthereum v3.3.0-rc.11
- Beta Backports ([#6891](https://github.com/paritytech/parity/pull/6891))
- Bump to v1.8.2
- Refactor static context check in CREATE. ([#6886](https://github.com/paritytech/parity/pull/6886))
- Refactor static context check in CREATE.
- Fix wasm.
- Fix serialization of non-localized transactions ([#6868](https://github.com/paritytech/parity/pull/6868))
- Fix serialization of non-localized transactions.
- Return proper SignedTransactions representation.
- Allow force sealing and reseal=0 for non-dev chains. ([#6878](https://github.com/paritytech/parity/pull/6878))
Bug fixes:
* Ignore GetNodeData requests only for non-AuRa chains
## Parity [v1.8.1](https://github.com/paritytech/parity/releases/tag/v1.8.1) (2017-10-20)
## OpenEthereum v3.3.0-rc.10
Parity 1.8.1 fixes several bugs with token balances, tweaks snapshot-sync, improves the performance of nodes with huge amounts of accounts and changes the Trezor account derivation path.
Enhancements:
* Add eip1559FeeCollector and eip1559FeeCollectorTransition spec options
**Important Note**: The **Trezor** account derivation path was changed in this release ([#6815](https://github.com/paritytech/parity/pull/6815)) to always use the first account (`m/44'/60'/0'/0/0` instead of `m/44'/60'/0'/0`). This way we enable compatibility with other Ethereum wallets supporting Trezor hardware-wallets. However, **action is required** before upgrading, if you have funds on your Parity Trezor wallet. If you already upgraded to 1.8.1, please downgrade to 1.8.0 first to recover the funds with the following steps:
## OpenEthereum v3.3.0-rc.9
1. Make sure you have 1.8.0-beta and your Trezor plugged in.
2. Create a new standard Parity account. Make sure you have backups of the recovery phrase and don't forget the password.
3. Move your funds from the Trezor hardware-wallet account to the freshly generated Parity account.
4. Upgrade to 1.8.1-beta and plug in your Trezor.
5. Move your funds from your Parity account to the new Trezor account.
6. Keep using Parity as normal.
Bug fixes:
* Add service transactions support for EIP-1559
* Fix MinGasPrice config option for POSDAO and EIP-1559
If you don't want to downgrade or move your funds off your Trezor-device, you can also use the official Trezor application or other wallets allowing to select the derivation path to access the funds.
Enhancements:
* min_gas_price becomes min_effective_priority_fee
* added version 4 for TxPermission contract
The full list of included changes:
## OpenEthereum v3.3.0-rc.8
- Add ECIP1017 to Morden config ([#6845](https://github.com/paritytech/parity/pull/6845))
- Ethstore optimizations ([#6844](https://github.com/paritytech/parity/pull/6844))
- Bumb to v1.8.1 ([#6843](https://github.com/paritytech/parity/pull/6843))
- Backport ([#6837](https://github.com/paritytech/parity/pull/6837))
- Tweaked snapshot sync threshold ([#6829](https://github.com/paritytech/parity/pull/6829))
- Change keypath derivation logic ([#6815](https://github.com/paritytech/parity/pull/6815))
- Refresh cached tokens based on registry info & random balances ([#6824](https://github.com/paritytech/parity/pull/6824))
- Refresh cached tokens based on registry info & random balances ([#6818](https://github.com/paritytech/parity/pull/6818))
- Don't display errored token images
Bug fixes:
* Ignore GetNodeData requests (#519)
## Parity [v1.8.0](https://github.com/paritytech/parity/releases/tag/v1.8.0) (2017-10-15)
## OpenEthereum v3.3.0-rc.7
We are happy to announce our newest Parity 1.8 release. Among others, it enables the following features:
Bug fixes:
* GetPooledTransactions is sent in invalid form (wrong packet id)
- Full Whisper v6 integration
- Trezor hardware-wallet support
- WASM contract support
- PICOPS KYC-certified accounts and vouching for community-dapps
- Light client compatibility for Proof-of-Authority networks
- Transaction permissioning and permissioned p2p-connections
- Full Byzantium-fork compatibility
- Full Musicoin MCIP-3 UBI-fork compatibility
## OpenEthereum v3.3.0-rc.6
Further, users upgrading from 1.7 should acknowledge the following changes:
Enhancements:
* London hardfork block: kovan (26741100) (#502)
- The chain-engine was further abstracted and chain-specs need to be upgraded. [#6134](https://github.com/paritytech/parity/pull/6134) [#6591](https://github.com/paritytech/parity/pull/6591)
- `network_id` was renamed to `chain_id` where applicable. [#6345](https://github.com/paritytech/parity/pull/6345)
- `trace_filter` RPC method now comes with pagination. [#6312](https://github.com/paritytech/parity/pull/6312)
- Added tracing of rewards on closing blocks. [#6194](https://github.com/paritytech/parity/pull/6194)
## OpenEthereum v3.3.0-rc.4
The full list of included changes:
Enhancements:
* London hardfork block: mainnet (12,965,000) (#475)
* Support for eth/66 protocol version (#465)
* Bump ethereum/tests to v9.0.3
* Add eth_feeHistory
- Updated ethabi to fix auto-update ([#6771](https://github.com/paritytech/parity/pull/6771))
- Fixed kovan chain validation ([#6760](https://github.com/paritytech/parity/pull/6760))
- Fixed kovan chain validation
- Fork detection
- Fixed typo
- Bumped fork block number for auto-update ([#6755](https://github.com/paritytech/parity/pull/6755))
- CLI: Reject invalid argument values rather than ignore them ([#6747](https://github.com/paritytech/parity/pull/6747))
- Fixed modexp gas calculation overflow ([#6745](https://github.com/paritytech/parity/pull/6745))
- Backport beta - Fixes Badges ([#6732](https://github.com/paritytech/parity/pull/6732))
- Fix badges not showing up ([#6730](https://github.com/paritytech/parity/pull/6730))
- Always fetch meta data first [badges]
- Bump to v1.8.0 in beta
- Fix tokens and badges ([#6725](https://github.com/paritytech/parity/pull/6725))
- Update new token fetching
- Working Certifications Monitoring
- Update on Certification / Revoke
- Fix none-fetched tokens value display
- Fix tests
- Check vouch status on appId in addition to contentHash ([#6719](https://github.com/paritytech/parity/pull/6719))
- Check vouch status on appId in addition to contentHash
- Simplify var expansion
- Prevent going offline when restoring or taking a snapshot [#6694](https://github.com/paritytech/parity/pull/6694)
- Graceful exit when invalid CLI flags are passed (#6485) [#6711](https://github.com/paritytech/parity/pull/6711)
- Fixed RETURNDATA out of bounds check [#6718](https://github.com/paritytech/parity/pull/6718)
- Display vouched overlay on dapps [#6710](https://github.com/paritytech/parity/pull/6710)
- Fix gas estimation if `from` is not provided. [#6714](https://github.com/paritytech/parity/pull/6714)
- Emulate signer pubsub on public node [#6708](https://github.com/paritytech/parity/pull/6708)
- Removes dependency on rustc_serialize (#5988) [#6705](https://github.com/paritytech/parity/pull/6705)
- Fixed potential modexp exp len overflow [#6686](https://github.com/paritytech/parity/pull/6686)
- Fix asciiToHex for characters < 0x10 [#6702](https://github.com/paritytech/parity/pull/6702)
- Fix address input [#6701](https://github.com/paritytech/parity/pull/6701)
- Allow signer signing display of markdown [#6707](https://github.com/paritytech/parity/pull/6707)
- Fixed build warnings [#6664](https://github.com/paritytech/parity/pull/6664)
- Fix warp sync blockers detection [#6691](https://github.com/paritytech/parity/pull/6691)
- Difficulty tests [#6687](https://github.com/paritytech/parity/pull/6687)
- Separate migrations from util [#6690](https://github.com/paritytech/parity/pull/6690)
- Changelog for 1.7.3 [#6678](https://github.com/paritytech/parity/pull/6678)
- WASM gas schedule [#6638](https://github.com/paritytech/parity/pull/6638)
- Fix wallet view [#6597](https://github.com/paritytech/parity/pull/6597)
- Byzantium fork block number [#6660](https://github.com/paritytech/parity/pull/6660)
- Fixed RETURNDATA size for built-ins [#6652](https://github.com/paritytech/parity/pull/6652)
- Light Client: fetch transactions/receipts by transaction hash [#6641](https://github.com/paritytech/parity/pull/6641)
- Add Musicoin and MCIP-3 UBI hardfork. [#6621](https://github.com/paritytech/parity/pull/6621)
- fix 1.8 backcompat: revert to manual encoding/decoding of transition proofs [#6665](https://github.com/paritytech/parity/pull/6665)
- Tweaked block download timeouts (#6595) [#6655](https://github.com/paritytech/parity/pull/6655)
- Renamed RPC receipt statusCode field to status [#6650](https://github.com/paritytech/parity/pull/6650)
- SecretStore: session level timeout [#6631](https://github.com/paritytech/parity/pull/6631)
- SecretStore: ShareRemove of 'isolated' nodes [#6630](https://github.com/paritytech/parity/pull/6630)
- SecretStore: exclusive sessions [#6624](https://github.com/paritytech/parity/pull/6624)
- Fixed network protocol version negotiation [#6649](https://github.com/paritytech/parity/pull/6649)
- Updated systemd files for linux (Resolves #6592) [#6598](https://github.com/paritytech/parity/pull/6598)
- move additional_params to machine, fixes registry on non-ethash chains [#6646](https://github.com/paritytech/parity/pull/6646)
- Fix Token Transfer in transaction list [#6589](https://github.com/paritytech/parity/pull/6589)
- Update jsonrpc dependencies and rewrite dapps to futures. [#6522](https://github.com/paritytech/parity/pull/6522)
- Balance queries implemented in WASM runtime [#6639](https://github.com/paritytech/parity/pull/6639)
- Don't expose port 80 for parity anymore [#6633](https://github.com/paritytech/parity/pull/6633)
- WASM Runtime refactoring [#6596](https://github.com/paritytech/parity/pull/6596)
- Fix compilation [#6625](https://github.com/paritytech/parity/pull/6625)
- Downgrade futures to suppress warnings. [#6620](https://github.com/paritytech/parity/pull/6620)
- Add pagination for trace_filter rpc method [#6312](https://github.com/paritytech/parity/pull/6312)
- Disallow pasting recovery phrases on first run [#6602](https://github.com/paritytech/parity/pull/6602)
- fix typo: Unkown => Unknown [#6559](https://github.com/paritytech/parity/pull/6559)
- SecretStore: administrative sessions prototypes [#6605](https://github.com/paritytech/parity/pull/6605)
- fix parity.io link 404 [#6617](https://github.com/paritytech/parity/pull/6617)
- SecretStore: add node to existing session poc + discussion [#6480](https://github.com/paritytech/parity/pull/6480)
- Generalize engine trait [#6591](https://github.com/paritytech/parity/pull/6591)
- Add RPC eth_chainId for querying the current blockchain chain ID [#6329](https://github.com/paritytech/parity/pull/6329)
- Debounce sync status. [#6572](https://github.com/paritytech/parity/pull/6572)
- [Public Node] Disable tx scheduling and hardware wallets [#6588](https://github.com/paritytech/parity/pull/6588)
- Use memmap for dag cache [#6193](https://github.com/paritytech/parity/pull/6193)
- Rename Requests to Batch [#6582](https://github.com/paritytech/parity/pull/6582)
- Use host as ws/dapps url if present. [#6566](https://github.com/paritytech/parity/pull/6566)
- Sync progress and error handling fixes [#6560](https://github.com/paritytech/parity/pull/6560)
- Fixed receipt serialization and RPC [#6555](https://github.com/paritytech/parity/pull/6555)
- Fix number of confirmations for transaction [#6552](https://github.com/paritytech/parity/pull/6552)
- Fix #6540 [#6556](https://github.com/paritytech/parity/pull/6556)
- Fix failing hardware tests [#6553](https://github.com/paritytech/parity/pull/6553)
- Required validators >= num owners in Wallet Creation [#6551](https://github.com/paritytech/parity/pull/6551)
- Random cleanups / improvements to a state [#6472](https://github.com/paritytech/parity/pull/6472)
- Changelog for 1.7.2 [#6363](https://github.com/paritytech/parity/pull/6363)
- Ropsten fork [#6533](https://github.com/paritytech/parity/pull/6533)
- Byzantium updates [#5855](https://github.com/paritytech/parity/pull/5855)
- Fix extension detection [#6452](https://github.com/paritytech/parity/pull/6452)
- Downgrade futures to suppress warnings [#6521](https://github.com/paritytech/parity/pull/6521)
- separate trie from util and make its dependencies into libs [#6478](https://github.com/paritytech/parity/pull/6478)
- WASM sha3 test [#6512](https://github.com/paritytech/parity/pull/6512)
- Fix broken JavaScript tests [#6498](https://github.com/paritytech/parity/pull/6498)
- SecretStore: use random key to encrypt channel + session-level nonce [#6470](https://github.com/paritytech/parity/pull/6470)
- Trezor Support [#6403](https://github.com/paritytech/parity/pull/6403)
- Fix compiler warning [#6491](https://github.com/paritytech/parity/pull/6491)
- Fix typo [#6505](https://github.com/paritytech/parity/pull/6505)
- WASM: added math overflow test [#6474](https://github.com/paritytech/parity/pull/6474)
- Fix slow balances [#6471](https://github.com/paritytech/parity/pull/6471)
- WASM runtime update [#6467](https://github.com/paritytech/parity/pull/6467)
- Compatibility with whisper v6 [#6179](https://github.com/paritytech/parity/pull/6179)
- light-poa round 2: allow optional casting of engine client to full client [#6468](https://github.com/paritytech/parity/pull/6468)
- Moved attributes under docs [#6475](https://github.com/paritytech/parity/pull/6475)
- cleanup util dependencies [#6464](https://github.com/paritytech/parity/pull/6464)
- removed redundant earlymergedb trace guards [#6463](https://github.com/paritytech/parity/pull/6463)
- UtilError utilizes error_chain! [#6461](https://github.com/paritytech/parity/pull/6461)
- fixed master [#6465](https://github.com/paritytech/parity/pull/6465)
- Refactor and port CLI from Docopt to Clap (#2066) [#6356](https://github.com/paritytech/parity/pull/6356)
- Add language selector in production [#6317](https://github.com/paritytech/parity/pull/6317)
- eth_call returns output of contract creations [#6420](https://github.com/paritytech/parity/pull/6420)
- Refactor: Don't reexport bigint from util [#6459](https://github.com/paritytech/parity/pull/6459)
- Transaction permissioning [#6441](https://github.com/paritytech/parity/pull/6441)
- Added missing SecretStore tests - signing session [#6411](https://github.com/paritytech/parity/pull/6411)
- Light-client sync for contract-based PoA [#6370](https://github.com/paritytech/parity/pull/6370)
- triehash is separated from util [#6428](https://github.com/paritytech/parity/pull/6428)
- remove re-export of parking_lot in util [#6435](https://github.com/paritytech/parity/pull/6435)
- fix modexp bug: return 0 if base is zero [#6424](https://github.com/paritytech/parity/pull/6424)
- separate semantic_version from util [#6438](https://github.com/paritytech/parity/pull/6438)
- move timer.rs to ethcore [#6437](https://github.com/paritytech/parity/pull/6437)
- remove re-export of ansi_term in util [#6433](https://github.com/paritytech/parity/pull/6433)
- Pub sub blocks [#6139](https://github.com/paritytech/parity/pull/6139)
- replace trait Hashable with fn keccak [#6423](https://github.com/paritytech/parity/pull/6423)
- add more hash backward compatibility test for bloom [#6425](https://github.com/paritytech/parity/pull/6425)
- remove the redundant hasher in Bloom [#6404](https://github.com/paritytech/parity/pull/6404)
- Remove re-export of HeapSizeOf in util (part of #6418) [#6419](https://github.com/paritytech/parity/pull/6419)
- Rewards on closing blocks [#6194](https://github.com/paritytech/parity/pull/6194)
- ensure balances of constructor accounts are kept [#6413](https://github.com/paritytech/parity/pull/6413)
- removed recursion from triedbmut::lookup [#6394](https://github.com/paritytech/parity/pull/6394)
- do not activate genesis epoch in immediate transition validator contract [#6349](https://github.com/paritytech/parity/pull/6349)
- Use git for the snap version [#6271](https://github.com/paritytech/parity/pull/6271)
- Permissioned p2p connections [#6359](https://github.com/paritytech/parity/pull/6359)
- Don't accept transactions above block gas limit. [#6408](https://github.com/paritytech/parity/pull/6408)
- Fix memory tracing. [#6399](https://github.com/paritytech/parity/pull/6399)
- earlydb optimizations [#6393](https://github.com/paritytech/parity/pull/6393)
- Optimized PlainHasher hashing. Trie insertions are >15 faster [#6321](https://github.com/paritytech/parity/pull/6321)
- Trie optimizations [#6389](https://github.com/paritytech/parity/pull/6389)
- small optimizations for triehash [#6392](https://github.com/paritytech/parity/pull/6392)
- Bring back IPFS tests. [#6398](https://github.com/paritytech/parity/pull/6398)
- Running state test using parity-evm [#6355](https://github.com/paritytech/parity/pull/6355)
- Wasm math tests extended [#6354](https://github.com/paritytech/parity/pull/6354)
- Expose health status over RPC [#6274](https://github.com/paritytech/parity/pull/6274)
- fix bloom bitvecjournal storage allocation [#6390](https://github.com/paritytech/parity/pull/6390)
- fixed pending block panic [#6391](https://github.com/paritytech/parity/pull/6391)
- Infoline less opaque for UI/visibility [#6364](https://github.com/paritytech/parity/pull/6364)
- Fix eth_call. [#6365](https://github.com/paritytech/parity/pull/6365)
- updated bigint [#6341](https://github.com/paritytech/parity/pull/6341)
- Optimize trie iter by avoiding redundant copying [#6347](https://github.com/paritytech/parity/pull/6347)
- Only keep a single rocksdb debug log file [#6346](https://github.com/paritytech/parity/pull/6346)
- Tweaked snapshot params [#6344](https://github.com/paritytech/parity/pull/6344)
- Rename network_id to chain_id where applicable. [#6345](https://github.com/paritytech/parity/pull/6345)
- Itertools are no longer reexported from util, optimized triedb iter [#6322](https://github.com/paritytech/parity/pull/6322)
- Better check the created accounts before showing Startup Wizard [#6331](https://github.com/paritytech/parity/pull/6331)
- Better error messages for invalid types in RPC [#6311](https://github.com/paritytech/parity/pull/6311)
- fix panic in parity-evm json tracer [#6338](https://github.com/paritytech/parity/pull/6338)
- WASM math test [#6305](https://github.com/paritytech/parity/pull/6305)
- rlp_derive [#6125](https://github.com/paritytech/parity/pull/6125)
- Fix --chain parsing in parity-evm. [#6314](https://github.com/paritytech/parity/pull/6314)
- Unexpose RPC methods on :8180 [#6295](https://github.com/paritytech/parity/pull/6295)
- Ignore errors from dappsUrl when starting UI. [#6296](https://github.com/paritytech/parity/pull/6296)
- updated bigint with optimized mul and from_big_indian [#6323](https://github.com/paritytech/parity/pull/6323)
- SecretStore: bunch of fixes and improvements [#6168](https://github.com/paritytech/parity/pull/6168)
- Master requires rust 1.19 [#6308](https://github.com/paritytech/parity/pull/6308)
- Add more descriptive error when signing/decrypting using hw wallet. [#6302](https://github.com/paritytech/parity/pull/6302)
- Increase default gas limit for eth_call. [#6299](https://github.com/paritytech/parity/pull/6299)
- rust-toolchain file on master [#6266](https://github.com/paritytech/parity/pull/6266)
- Migrate wasm-tests to updated runtime [#6278](https://github.com/paritytech/parity/pull/6278)
- Extension fixes [#6284](https://github.com/paritytech/parity/pull/6284)
- Fix a hash displayed in tooltip when signing arbitrary data [#6283](https://github.com/paritytech/parity/pull/6283)
- Time should not contribute to overall status. [#6276](https://github.com/paritytech/parity/pull/6276)
- Add --to and --gas-price to evmbin [#6277](https://github.com/paritytech/parity/pull/6277)
- Fix dapps CSP when UI is exposed externally [#6178](https://github.com/paritytech/parity/pull/6178)
- Add warning to web browser and fix links. [#6232](https://github.com/paritytech/parity/pull/6232)
- Update Settings/Proxy view to match entries in proxy.pac [#4771](https://github.com/paritytech/parity/pull/4771)
- Dapp refresh [#5752](https://github.com/paritytech/parity/pull/5752)
- Add support for ConsenSys multisig wallet [#6153](https://github.com/paritytech/parity/pull/6153)
- updated jsonrpc [#6264](https://github.com/paritytech/parity/pull/6264)
- SecretStore: encrypt messages using private key from key store [#6146](https://github.com/paritytech/parity/pull/6146)
- Wasm storage read test [#6255](https://github.com/paritytech/parity/pull/6255)
- propagate stratum submit share error upstream [#6260](https://github.com/paritytech/parity/pull/6260)
- Using multiple NTP servers [#6173](https://github.com/paritytech/parity/pull/6173)
- Add GitHub issue templates. [#6259](https://github.com/paritytech/parity/pull/6259)
- format instant change proofs correctly [#6241](https://github.com/paritytech/parity/pull/6241)
- price-info does not depend on util [#6231](https://github.com/paritytech/parity/pull/6231)
- native-contracts crate does not depend on util any more [#6233](https://github.com/paritytech/parity/pull/6233)
- Bump master to 1.8.0 [#6256](https://github.com/paritytech/parity/pull/6256)
- SecretStore: do not cache ACL contract + on-chain key servers configuration [#6107](https://github.com/paritytech/parity/pull/6107)
- Fix the README badges [#6229](https://github.com/paritytech/parity/pull/6229)
- updated tiny-keccak to 1.3 [#6248](https://github.com/paritytech/parity/pull/6248)
- Small grammatical error [#6244](https://github.com/paritytech/parity/pull/6244)
- Multi-call RPC [#6195](https://github.com/paritytech/parity/pull/6195)
- InstantSeal fix [#6223](https://github.com/paritytech/parity/pull/6223)
- Untrusted RLP length overflow check [#6227](https://github.com/paritytech/parity/pull/6227)
- Chainspec validation [#6197](https://github.com/paritytech/parity/pull/6197)
- Fix cache path when using --base-path [#6212](https://github.com/paritytech/parity/pull/6212)
- removed std reexports from util && fixed broken tests [#6187](https://github.com/paritytech/parity/pull/6187)
- WASM MVP continued [#6132](https://github.com/paritytech/parity/pull/6132)
- Decouple virtual machines [#6184](https://github.com/paritytech/parity/pull/6184)
- Realloc test added [#6177](https://github.com/paritytech/parity/pull/6177)
- Re-enable wallets, fixed forgetting accounts [#6196](https://github.com/paritytech/parity/pull/6196)
- Move more params to the common section. [#6134](https://github.com/paritytech/parity/pull/6134)
- Whisper js [#6161](https://github.com/paritytech/parity/pull/6161)
- typo in uninstaller [#6185](https://github.com/paritytech/parity/pull/6185)
- fix #6052. honor --no-color for signer command [#6100](https://github.com/paritytech/parity/pull/6100)
- Refactor --allow-ips to handle custom ip-ranges [#6144](https://github.com/paritytech/parity/pull/6144)
- Update Changelog for 1.6.10 and 1.7.0 [#6183](https://github.com/paritytech/parity/pull/6183)
- Fix unsoundness in ethash's unsafe code [#6140](https://github.com/paritytech/parity/pull/6140)
Bug fixes:
* GetNodeData from eth63 is missing (#466)
* Effective gas price not omitting (#477)
* London support in openethereum-evm (#479)
* gasPrice is required field for Transaction object (#481)
## OpenEthereum v3.3.0-rc.3
### Previous releases
Bug fixes:
* Add effective_gas_price to eth_getTransactionReceipt #445 (#450)
* Update eth_gasPrice to support EIP-1559 #449 (#458)
* eth_estimateGas returns "Requires higher than upper limit of X" after London Ropsten Hard Fork #459 (#460)
## OpenEthereum v3.3.0-rc.2
Enhancements:
* EIP-1559: Fee market change for ETH 1.0 chain
* EIP-3198: BASEFEE opcode
* EIP-3529: Reduction in gas refunds
* EIP-3541: Reject new contracts starting with the 0xEF byte
* Delay difficulty bomb to December 2021 (EIP-3554)
* London hardfork blocks: goerli (5,062,605), rinkeby (8,897,988), ropsten (10,499,401)
* Add chainspecs for aleut and baikal
* Bump ethereum/tests to v9.0.2
## OpenEthereum v3.2.6
Enhancement:
* Berlin hardfork blocks: poacore (21,364,900), poasokol (21,050,600)
## OpenEthereum v3.2.5
Bug fixes:
* Backport: Block sync stopped without any errors. #277 (#286)
* Strict memory order (#306)
Enhancements:
* Executable queue for ancient blocks inclusion (#208)
* Backport AuRa commits for xdai (#330)
* Add Nethermind to clients that accept service transactions (#324)
* Implement the filter argument in parity_pendingTransactions (#295)
* Ethereum-types and various libs upgraded (#315)
* [evmbin] Omit storage output, now for std-json (#311)
* Freeze pruning while creating snapshot (#205)
* AuRa multi block reward (#290)
* Improved metrics. DB read/write. prometheus prefix config (#240)
* Send RLPx auth in EIP-8 format (#287)
* rpc module reverted for RPC JSON api (#284)
* Revert "Remove eth/63 protocol version (#252)"
* Support for eth/65 protocol version (#366)
* Berlin hardfork blocks: kovan (24,770,900), xdai (16,101,500)
* Bump ethereum/tests to v8.0.3
devops:
* Upgrade docker alpine to `v1.13.2` for rust `v1.47`.
* Send SIGTERM instead of SIGHUP to OE daemon (#317)
## OpenEthereum v3.2.4
* Fix for Typed transaction broadcast.
## OpenEthereum v3.2.3
* Hotfix for berlin consensus error.
## OpenEthereum v3.2.2-rc.1
Bug fixes:
* Backport: Block sync stopped without any errors. #277 (#286)
* Strict memory order (#306)
Enhancements:
* Executable queue for ancient blocks inclusion (#208)
* Backport AuRa commits for xdai (#330)
* Add Nethermind to clients that accept service transactions (#324)
* Implement the filter argument in parity_pendingTransactions (#295)
* Ethereum-types and various libs upgraded (#315)
* Bump ethereum/tests to v8.0.2
* [evmbin] Omit storage output, now for std-json (#311)
* Freeze pruning while creating snapshot (#205)
* AuRa multi block reward (#290)
* Improved metrics. DB read/write. prometheus prefix config (#240)
* Send RLPx auth in EIP-8 format (#287)
* rpc module reverted for RPC JSON api (#284)
* Revert "Remove eth/63 protocol version (#252)"
devops:
* Upgrade docker alpine to `v1.13.2` for rust `v1.47`.
* Send SIGTERM instead of SIGHUP to OE daemon (#317)
## OpenEthereum v3.2.1
Hot fix issue, related to initial sync:
* Initial sync gets stuck. (#318)
## OpenEthereum v3.2.0
Bug fixes:
* Update EWF's chains with Istanbul transition block numbers (#11482) (#254)
* fix Supplied instant is later than self (#169)
* ethcore/snapshot: fix double-lock in Service::feed_chunk (#289)
Enhancements:
* Berlin hardfork blocks: mainnet (12,244,000), goerli (4,460,644), rinkeby (8,290,928) and ropsten (9,812,189)
* yolo3x spec (#241)
* EIP-2930 RPC support
* Remove eth/63 protocol version (#252)
* Snapshot manifest block added to prometheus (#232)
* EIP-1898: Allow default block parameter to be blockHash
* Change ProtocolId to U64
* Update ethereum/tests
- [CHANGELOG-1.7](docs/CHANGELOG-1.7.md)
- [CHANGELOG-1.6](docs/CHANGELOG-1.6.md)
- [CHANGELOG-1.5](docs/CHANGELOG-1.5.md)
- [CHANGELOG-1.4](docs/CHANGELOG-1.4.md)
- [CHANGELOG-1.3](docs/CHANGELOG-1.3.md)
- [CHANGELOG-1.2](docs/CHANGELOG-1.2.md)
- [CHANGELOG-1.1](docs/CHANGELOG-1.1.md)
- [CHANGELOG-1.0](docs/CHANGELOG-1.0.md)
- [CHANGELOG-0.9](docs/CHANGELOG-0.9.md)

Cargo.lock (generated): diff suppressed because it is too large.

Cargo.toml

@@ -1,77 +1,75 @@
[package]
description = "OpenEthereum"
name = "openethereum"
# NOTE Make sure to update util/version/Cargo.toml as well
version = "3.3.3"
description = "Parity Ethereum client"
name = "parity"
version = "1.9.3"
license = "GPL-3.0"
authors = [
"OpenEthereum developers",
"Parity Technologies <admin@parity.io>"
]
authors = ["Parity Technologies <admin@parity.io>"]
build = "build.rs"
[dependencies]
blooms-db = { path = "crates/db/blooms-db" }
log = "0.4"
log = "0.3"
env_logger = "0.4"
rustc-hex = "1.0"
docopt = "1.0"
docopt = "0.8"
clap = "2"
term_size = "0.3"
textwrap = "0.9"
time = "0.1"
num_cpus = "1.2"
number_prefix = "0.2"
rpassword = "1.0"
semver = "0.9"
ansi_term = "0.10"
parking_lot = "0.11.1"
regex = "1.0"
atty = "0.2.8"
semver = "0.6"
ansi_term = "0.9"
parking_lot = "0.4"
regex = "0.2"
isatty = "0.1"
toml = "0.4"
serde = "1.0"
serde_json = "1.0"
serde_derive = "1.0"
app_dirs = "1.1.1"
futures = "0.1"
hyper = { version = "0.12" }
futures-cpupool = "0.1"
fdlimit = "0.1"
ws2_32-sys = "0.2"
ctrlc = { git = "https://github.com/paritytech/rust-ctrlc.git" }
jsonrpc-core = "15.0.0"
parity-bytes = "0.1"
common-types = { path = "crates/ethcore/types" }
ethcore = { path = "crates/ethcore", features = ["parity"] }
ethcore-accounts = { path = "crates/accounts", optional = true }
ethcore-blockchain = { path = "crates/ethcore/blockchain" }
ethcore-call-contract = { path = "crates/vm/call-contract"}
ethcore-db = { path = "crates/db/db" }
ethcore-io = { path = "crates/runtime/io" }
ethcore-logger = { path = "bin/oe/logger" }
ethcore-miner = { path = "crates/concensus/miner" }
ethcore-network = { path = "crates/net/network" }
ethcore-service = { path = "crates/ethcore/service" }
ethcore-sync = { path = "crates/ethcore/sync" }
ethereum-types = "0.9.2"
ethkey = { path = "crates/accounts/ethkey" }
ethstore = { path = "crates/accounts/ethstore" }
fetch = { path = "crates/net/fetch" }
node-filter = { path = "crates/net/node-filter" }
parity-crypto = { version = "0.6.2", features = [ "publickey" ] }
rlp = { version = "0.4.6" }
cli-signer= { path = "crates/util/cli-signer" }
parity-daemonize = "0.3"
parity-local-store = { path = "crates/concensus/miner/local-store" }
parity-runtime = { path = "crates/runtime/runtime" }
parity-rpc = { path = "crates/rpc" }
parity-version = { path = "crates/util/version" }
parity-path = "0.1"
dir = { path = "crates/util/dir" }
panic_hook = { path = "crates/util/panic-hook" }
keccak-hash = "0.5.0"
migration-rocksdb = { path = "crates/db/migration-rocksdb" }
kvdb = "0.1"
kvdb-rocksdb = "0.1.3"
journaldb = { path = "crates/db/journaldb" }
stats = { path = "crates/util/stats" }
prometheus = "0.9.0"
jsonrpc-core = { git = "https://github.com/paritytech/jsonrpc.git", branch = "parity-1.9" }
ethsync = { path = "sync" }
ethcore = { path = "ethcore" }
ethcore-util = { path = "util" }
ethcore-bytes = { path = "util/bytes" }
ethcore-bigint = { path = "util/bigint" }
ethcore-io = { path = "util/io" }
ethcore-devtools = { path = "devtools" }
ethcore-light = { path = "ethcore/light" }
ethcore-logger = { path = "logger" }
ethcore-stratum = { path = "stratum" }
ethcore-network = { path = "util/network" }
node-filter = { path = "ethcore/node_filter" }
ethkey = { path = "ethkey" }
node-health = { path = "dapps/node-health" }
rlp = { path = "util/rlp" }
rpc-cli = { path = "rpc_cli" }
parity-hash-fetch = { path = "hash-fetch" }
parity-ipfs-api = { path = "ipfs" }
parity-local-store = { path = "local-store" }
parity-reactor = { path = "util/reactor" }
parity-rpc = { path = "rpc" }
parity-rpc-client = { path = "rpc_client" }
parity-updater = { path = "updater" }
parity-version = { path = "util/version" }
parity-whisper = { path = "whisper" }
path = { path = "util/path" }
dir = { path = "util/dir" }
panic_hook = { path = "panic_hook" }
keccak-hash = { path = "util/hash" }
migration = { path = "util/migration" }
kvdb = { path = "util/kvdb" }
kvdb-rocksdb = { path = "util/kvdb-rocksdb" }
journaldb = { path = "util/journaldb" }
# ethcore-secretstore = { path = "crates/util/secret-store", optional = true }
parity-dapps = { path = "dapps", optional = true }
ethcore-secretstore = { path = "secret_store", optional = true }
[build-dependencies]
rustc_version = "0.2"
@@ -79,55 +77,54 @@ rustc_version = "0.2"
[dev-dependencies]
pretty_assertions = "0.1"
ipnetwork = "0.12.6"
tempdir = "0.3"
fake-fetch = { path = "crates/net/fake-fetch" }
lazy_static = "1.2.0"
[target.'cfg(windows)'.dependencies]
winapi = { version = "0.3.4", features = ["winsock2", "winuser", "shellapi"] }
winapi = "0.2"
[target.'cfg(not(windows))'.dependencies]
daemonize = "0.2"
[features]
default = ["accounts"]
accounts = ["ethcore-accounts", "parity-rpc/accounts"]
miner-debug = ["ethcore/miner-debug"]
default = ["ui-precompiled"]
ui = [
"ui-enabled",
"parity-dapps/ui",
]
ui-precompiled = [
"ui-enabled",
"parity-dapps/ui-precompiled",
]
ui-enabled = ["dapps"]
dapps = ["parity-dapps"]
jit = ["ethcore/jit"]
json-tests = ["ethcore/json-tests"]
ci-skip-tests = ["ethcore/ci-skip-tests"]
test-heavy = ["ethcore/test-heavy"]
evm-debug = ["ethcore/evm-debug"]
evm-debug-tests = ["ethcore/evm-debug-tests"]
slow-blocks = ["ethcore/slow-blocks"]
secretstore = ["ethcore-secretstore"]
final = ["parity-version/final"]
deadlock_detection = ["parking_lot/deadlock_detection"]
# to create a memory profile (requires nightly rust), use e.g.
# `heaptrack /path/to/parity <parity params>`,
# to visualize a memory profile, use `heaptrack_gui`
# or
# `valgrind --tool=massif /path/to/parity <parity params>`
# and `massif-visualizer` for visualization
memory_profiling = []
[lib]
path = "bin/oe/lib.rs"
[[bin]]
path = "bin/oe/main.rs"
name = "openethereum"
path = "parity/main.rs"
name = "parity"
[profile.test]
lto = false
opt-level = 3 # makes tests slower to compile, but faster to run
[profile.dev]
panic = "abort"
[profile.release]
debug = false
lto = true
lto = false
panic = "abort"
[workspace]
# This should only list projects that are not
# in the dependency tree in any other way
# (i.e. pretty much only standalone CLI tools)
members = [
"bin/ethkey",
"bin/ethstore",
"bin/evmbin",
"bin/chainspec"
"chainspec",
"dapps/js-glue",
"ethcore/wasm/run",
"ethkey/cli",
"ethstore/cli",
"evmbin",
"transaction-pool",
"whisper",
]

README.md

@@ -1,102 +1,102 @@
# OpenEthereum
# [Parity](https://parity.io/) - fast, light, and robust Ethereum client
Fast and feature-rich multi-network Ethereum client.
[![build status](https://gitlab.parity.io/parity/parity/badges/master/build.svg)](https://gitlab.parity.io/parity/parity/commits/master)
[![Snap Status](https://build.snapcraft.io/badge/paritytech/parity.svg)](https://build.snapcraft.io/user/paritytech/parity)
[![GPLv3](https://img.shields.io/badge/license-GPL%20v3-green.svg)](https://www.gnu.org/licenses/gpl-3.0.en.html)
[» Download the latest release «](https://github.com/openethereum/openethereum/releases/latest)
- [Download the latest release here.](https://github.com/paritytech/parity/releases/latest)
[![GPL licensed][license-badge]][license-url]
[![Build Status][ci-badge]][ci-url]
[![Discord chat][chat-badge]][chat-url]
### Join the chat!
[license-badge]: https://img.shields.io/badge/license-GPL_v3-green.svg
[license-url]: LICENSE
[ci-badge]: https://github.com/openethereum/openethereum/workflows/Build%20and%20Test%20Suite/badge.svg
[ci-url]: https://github.com/openethereum/openethereum/actions
[chat-badge]: https://img.shields.io/discord/669192218728202270.svg?logo=discord
[chat-url]: https://discord.io/openethereum
Get in touch with us on Gitter:
[![Gitter: Parity](https://img.shields.io/badge/gitter-parity-4AB495.svg)](https://gitter.im/paritytech/parity)
[![Gitter: Parity.js](https://img.shields.io/badge/gitter-parity.js-4AB495.svg)](https://gitter.im/paritytech/parity.js)
[![Gitter: Parity/Miners](https://img.shields.io/badge/gitter-parity/miners-4AB495.svg)](https://gitter.im/paritytech/parity/miners)
[![Gitter: Parity-PoA](https://img.shields.io/badge/gitter-parity--poa-4AB495.svg)](https://gitter.im/paritytech/parity-poa)
## Table of Contents
Or join our community on Matrix:
[![Riot: +Parity](https://img.shields.io/badge/riot-%2Bparity%3Amatrix.parity.io-orange.svg)](https://riot.im/app/#/group/+parity:matrix.parity.io)
1. [Description](#chapter-001)
2. [Technical Overview](#chapter-002)
3. [Building](#chapter-003)<br>
3.1 [Building Dependencies](#chapter-0031)<br>
3.2 [Building from Source Code](#chapter-0032)<br>
3.3 [Starting OpenEthereum](#chapter-0034)
4. [Testing](#chapter-004)
5. [Documentation](#chapter-005)
6. [Toolchain](#chapter-006)
7. [Contributing](#chapter-007)
8. [License](#chapter-008)
Be sure to check out [our wiki](https://paritytech.github.io/wiki/) and the [internal documentation](https://paritytech.github.io/parity/ethcore/index.html) for more information.
----
## 1. Description <a id="chapter-001"></a>
## About Parity
**Built for mission-critical use**: Miners, service providers, and exchanges need fast synchronisation and maximum uptime. OpenEthereum provides the core infrastructure essential for speedy and reliable services.
Parity's goal is to be the fastest, lightest, and most secure Ethereum client. We are developing Parity using the sophisticated and cutting-edge Rust programming language. Parity is licensed under the GPLv3, and can be used for all your Ethereum needs.
- Clean, modular codebase for easy customisation
- Advanced CLI-based client
- Minimal memory and storage footprint
- Synchronise in hours, not days with Warp Sync
- Modular for light integration into your service or product
Parity comes with a built-in wallet. To access [Parity Wallet](http://web3.site/) simply go to http://web3.site/ (if you don't have access to the internet, but still want to use the service, you can also use http://127.0.0.1:8180/). It includes various functionality allowing you to:
## 2. Technical Overview <a id="chapter-002"></a>
- create and manage your Ethereum accounts;
- manage your Ether and any Ethereum tokens;
- create and register your own tokens;
- and much more.
OpenEthereum's goal is to be the fastest, lightest, and most secure Ethereum client. We are developing OpenEthereum using the **Rust programming language**. OpenEthereum is licensed under the GPLv3 and can be used for all your Ethereum needs.
By default, Parity will also run a JSONRPC server on `127.0.0.1:8545` and a websockets server on `127.0.0.1:8546`. This is fully configurable and supports a number of APIs.
By default, OpenEthereum runs a JSON-RPC HTTP server on port `:8545` and a Web-Sockets server on port `:8546`. This is fully configurable and supports a number of APIs.
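As an illustration (a minimal sketch, not part of the original README, assuming a locally running node with the default ports), the HTTP endpoint speaks standard Ethereum JSON-RPC and can be queried with `curl`:
```bash
# Ask the local node for the latest block number over the default JSON-RPC HTTP port (8545).
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://127.0.0.1:8545
```
The response is a JSON object whose `result` field contains the block number as a hex-encoded quantity.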
If you run into an issue while using Parity, feel free to file one in this repository or hop on our [Gitter](https://gitter.im/paritytech/parity) or [Riot](https://riot.im/app/#/group/+parity:matrix.parity.io) chat room to ask a question. We are glad to help!
If you run into problems while using OpenEthereum, check out the [old wiki for documentation](https://openethereum.github.io/), feel free to [file an issue in this repository](https://github.com/openethereum/openethereum/issues/new), or hop on our [Discord](https://discord.io/openethereum) chat room to ask a question. We are glad to help!
**For security-critical issues**, please refer to the security policy outlined in [SECURITY.MD](SECURITY.md).
You can download OpenEthereum's latest release at [the releases page](https://github.com/openethereum/openethereum/releases) or follow the instructions below to build from source. Read the [CHANGELOG.md](CHANGELOG.md) for a list of all changes between different versions.
Parity's current release is 1.8. You can download it at https://github.com/paritytech/parity/releases or follow the instructions below to build from source.
## 3. Building <a id="chapter-003"></a>
----
### 3.1 Build Dependencies <a id="chapter-0031"></a>
## Build dependencies
OpenEthereum requires **latest stable Rust version** to build.
**Parity requires Rust version 1.21.0 to build**
We recommend installing Rust through [rustup](https://www.rustup.rs/). If you don't already have `rustup`, you can install it like this:
We recommend installing Rust through [rustup](https://www.rustup.rs/). If you don't already have rustup, you can install it like this:
- Linux:
```bash
$ curl https://sh.rustup.rs -sSf | sh
```
```bash
$ curl https://sh.rustup.rs -sSf | sh
```
OpenEthereum also requires `clang` (>= 9.0), `clang++`, `pkg-config`, `file`, `make`, and `cmake` packages to be installed.
Parity also requires `gcc`, `g++`, `libssl-dev`/`openssl`, `libudev-dev` and `pkg-config` packages to be installed.
- OSX:
```bash
$ curl https://sh.rustup.rs -sSf | sh
```
`clang` is required. It comes with Xcode command line tools or can be installed with homebrew.
- Windows
Make sure you have Visual Studio 2015 with C++ support installed. Next, download and run the rustup installer from
https://static.rust-lang.org/rustup/dist/x86_64-pc-windows-msvc/rustup-init.exe, start "VS2015 x64 Native Tools Command Prompt", and use the following command to install and set up the msvc toolchain:
```bash
$ curl https://sh.rustup.rs -sSf | sh
$ rustup default stable-x86_64-pc-windows-msvc
```
`clang` is required. It comes with Xcode command line tools or can be installed with homebrew.
Once you have rustup, install Parity or download and build from source
- Windows:
Make sure you have Visual Studio 2015 with C++ support installed. Next, download and run the `rustup` installer from
https://static.rust-lang.org/rustup/dist/x86_64-pc-windows-msvc/rustup-init.exe, start "VS2015 x64 Native Tools Command Prompt", and use the following command to install and set up the `msvc` toolchain:
```bash
$ rustup default stable-x86_64-pc-windows-msvc
```
----
Once you have `rustup` installed, you need to install:
* [Perl](https://www.perl.org)
* [Yasm](https://yasm.tortall.net)
## Install from the snap store
Make sure that these binaries are in your `PATH`. After that, you should be able to build OpenEthereum from source.
### 3.2 Build from Source Code <a id="chapter-0032"></a>
In any of the [supported Linux distros](https://snapcraft.io/docs/core/install):
```bash
# download OpenEthereum code
$ git clone https://github.com/openethereum/openethereum
$ cd openethereum
# build in release mode
$ cargo build --release --features final
sudo snap install parity --edge
```
This produces an executable in the `./target/release` subdirectory.
(Note that this is currently an experimental and unstable release.)
----
## Build from source
```bash
# download Parity code
$ git clone https://github.com/paritytech/parity
$ cd parity
# build in release mode
$ cargo build --release
```
This will produce an executable in the `./target/release` subdirectory.
Note: if cargo fails to parse manifest try:
@@ -104,215 +104,46 @@ Note: if cargo fails to parse manifest try:
$ ~/.cargo/bin/cargo build --release
```
Note: if you receive errors when compiling a crate, it is most often caused by an outdated version of Rust, or by crates that need to be recompiled. If you are on the latest stable version of Rust, cleaning the repository will most likely solve the issue; try:
Note: When compiling a crate and you receive the following error:
```
error: the crate is compiled with the panic strategy `abort` which is incompatible with this crate's strategy of `unwind`
```
Cleaning the repository will most likely solve the issue, try:
```bash
$ cargo clean
```
This always compiles the latest nightly builds. If you want to build stable, do a
This will always compile the latest nightly builds. If you want to build stable or beta, do a `git checkout stable` or `git checkout beta` first.
----
## Simple one-line installer for Mac and Ubuntu
```bash
$ git checkout stable
bash <(curl https://get.parity.io -Lk)
```
### 3.3 Starting OpenEthereum <a id="chapter-0034"></a>
The one-line installer always defaults to the latest beta release.
#### Manually
## Start Parity
To start OpenEthereum manually, just run
### Manually
To start Parity manually, just run
```bash
$ ./target/release/openethereum
$ ./target/release/parity
```
so OpenEthereum begins syncing the Ethereum blockchain.
and Parity will begin syncing the Ethereum blockchain.
#### Using `systemd` service file
### Using systemd service file
To start OpenEthereum as a regular user using `systemd` init:
To start Parity as a regular user using systemd init:
1. Copy `./scripts/openethereum.service` to your
`systemd` user directory (usually `~/.config/systemd/user`).
2. Copy the release binary to your bin folder, e.g. `sudo install ./target/release/openethereum /usr/bin/openethereum`.
3. To configure OpenEthereum, see [our wiki](https://openethereum.github.io/Configuring-OpenEthereum) for details.
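As a rough sketch (these exact commands are not part of the original instructions and assume the unit was installed as a user service named `openethereum.service`), the service can then be enabled and started with `systemctl`:
```bash
# Reload user units so systemd picks up the newly copied service file.
systemctl --user daemon-reload
# Start the client now and have it start automatically on login.
systemctl --user enable --now openethereum
# Follow the client's logs.
journalctl --user -u openethereum -f
```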
## 4. Testing <a id="chapter-004"></a>
Download the required test files: `git submodule update --init --recursive`. You can run tests with the following commands:
* **All** packages
```
cargo test --all
```
* Specific package
```
cargo test --package <spec>
```
Replace `<spec>` with one of the packages from the [package list](#package-list) (e.g. `cargo test --package evmbin`).
You can show logs in the test output by passing `--nocapture` (e.g. `cargo test --package evmbin -- --nocapture`).
## 5. Documentation <a id="chapter-005"></a>
Be sure to [check out our wiki](https://openethereum.github.io/) for more information.
### Viewing documentation for OpenEthereum packages
You can generate documentation for OpenEthereum Rust packages that automatically opens in your web browser using [rustdoc with Cargo](https://doc.rust-lang.org/rustdoc/what-is-rustdoc.html#using-rustdoc-with-cargo) (from The Rustdoc Book) by running the following commands:
* **All** packages
```
cargo doc --document-private-items --open
```
* Specific package
```
cargo doc --package <spec> -- --document-private-items --open
```
Use `--document-private-items` to also view private documentation and `--no-deps` to exclude building documentation for dependencies.
Replace `<spec>` with one of the packages from the details section below (e.g. `cargo doc --package openethereum --open`):
<a id="package-list"></a>
**Package List**
<details><p>
* OpenEthereum Client Application
```bash
openethereum
```
* OpenEthereum Account Management, Key Management Tool, and Keys Generator
```bash
ethcore-accounts, ethkey-cli, ethstore, ethstore-cli
```
* OpenEthereum Chain Specification
```bash
chainspec
```
* OpenEthereum CLI Signer Tool & RPC Client
```bash
cli-signer parity-rpc-client
```
* OpenEthereum Ethash & ProgPoW Implementations
```bash
ethash
```
* EthCore Library
```bash
ethcore
```
* OpenEthereum Blockchain Database, Test Generator, Configuration,
Caching, Importing Blocks, and Block Information
```bash
ethcore-blockchain
```
* OpenEthereum Contract Calls and Blockchain Service & Registry Information
```bash
ethcore-call-contract
```
* OpenEthereum Database Access & Utilities, Database Cache Manager
```bash
ethcore-db
```
* OpenEthereum Virtual Machine (EVM) Rust Implementation
```bash
evm
```
* OpenEthereum Light Client Implementation
```bash
ethcore-light
```
* Smart Contract based Node Filter, Manage Permissions of Network Connections
```bash
node-filter
```
* OpenEthereum Client & Network Service Creation & Registration with the I/O Subsystem
```bash
ethcore-service
```
* OpenEthereum Blockchain Synchronization
```bash
ethcore-sync
```
* OpenEthereum Common Types
```bash
common-types
```
* OpenEthereum Virtual Machines (VM) Support Library
```bash
vm
```
* OpenEthereum WASM Interpreter
```bash
wasm
```
* OpenEthereum WASM Test Runner
```bash
pwasm-run-test
```
* OpenEthereum EVM Implementation
```bash
evmbin
```
* OpenEthereum JSON Deserialization
```bash
ethjson
```
* OpenEthereum State Machine Generalization for Consensus Engines
```bash
parity-machine
```
* OpenEthereum Miner Interface
```bash
ethcore-miner parity-local-store price-info ethcore-stratum using_queue
```
* OpenEthereum Logger Implementation
```bash
ethcore-logger
```
* OpenEthereum JSON-RPC Servers
```bash
parity-rpc
```
* OpenEthereum Updater Service
```bash
parity-updater parity-hash-fetch
```
* OpenEthereum Core Libraries (`util`)
```bash
accounts-bloom blooms-db dir eip-712 fake-fetch fastmap fetch ethcore-io
journaldb keccak-hasher len-caching-lock memory-cache memzero
migration-rocksdb ethcore-network ethcore-network-devp2p panic_hook
patricia-trie-ethereum registrar rlp_compress stats
time-utils triehash-ethereum unexpected parity-version
```
</p></details>
## 6. Toolchain <a id="chapter-006"></a>
In addition to the OpenEthereum client, the following tools are available in this repository:
- [evmbin](./bin/evmbin) - OpenEthereum EVM Implementation.
- [ethstore](./crates/accounts/ethstore) - OpenEthereum Key Management.
- [ethkey](./crates/accounts/ethkey) - OpenEthereum Keys Generator.
The following tools are available in a separate repository:
- [ethabi](https://github.com/openethereum/ethabi) - OpenEthereum Encoding of Function Calls. [Docs here](https://crates.io/crates/ethabi)
- [whisper](https://github.com/openethereum/whisper) - OpenEthereum Whisper-v2 PoC Implementation.
## 7. Contributing <a id="chapter-007"></a>
An introduction has been provided in the ["So You Want to be a Core Developer" presentation slides by Hernando Castano](http://tiny.cc/contrib-to-parity-eth). Additional guidelines are provided in [CONTRIBUTING](./.github/CONTRIBUTING.md).
### Contributor Code of Conduct
[CODE_OF_CONDUCT](./.github/CODE_OF_CONDUCT.md)
## 8. License <a id="chapter-008"></a>
[LICENSE](./LICENSE)
1. Copy `./scripts/parity.service` to your
systemd user directory (usually `~/.config/systemd/user`).
2. To configure Parity, write a `/etc/parity/config.toml` config file, see [Configuring Parity](https://github.com/paritytech/parity/wiki/Configuring-Parity) for details.

SECURITY.md (new file)

@@ -0,0 +1,54 @@
# Security Policy
For security inquiries or vulnerability reports, please send a message to security@parity.io.
Please use a descriptive subject line so we can identify the report as such.
If you send a report, we will respond to the e-mail within 48 hours, and provide regular updates from that time onwards.
If you would like to encrypt your report, please use the PGP key provided below.
It is also reproduced [on MIT's key server](https://pgp.mit.edu/pks/lookup?op=get&search=0x5D0F03018D07DE73)
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBFlyIAwBCACe0keNPjgYzZ1Oy/8t3zj/Qw9bHHqrzx7FWy8NbXnYBM19NqOZ
DIP7Oe0DvCaf/uruBskCS0iVstHlEFQ2AYe0Ei0REt9lQdy61GylU/DEB3879IG+
6FO0SnFeYeerv1/hFI2K6uv8v7PyyVDiiJSW0I1KIs2OBwJicTKmWxLAeQsRgx9G
yRGalrVk4KP+6pWTA7k3DxmDZKZyfYV/Ej10NtuzmsemwDbv98HKeomp/kgFOfSy
3AZjeCpctlsNqpjUuXa0/HudmH2WLxZ0fz8XeoRh8XM9UudNIecjrDqmAFrt/btQ
/3guvlzhFCdhYPVGsUusKMECk/JG+Xx1/1ZjABEBAAG0LFBhcml0eSBTZWN1cml0
eSBDb250YWN0IDxzZWN1cml0eUBwYXJpdHkuaW8+iQFUBBMBCAA+FiEE2uUVYCjP
N6B8aTiDXQ8DAY0H3nMFAllyIAwCGwMFCQPCZwAFCwkIBwIGFQgJCgsCBBYCAwEC
HgECF4AACgkQXQ8DAY0H3nM60wgAkS3A36Zc+upiaxU7tumcGv+an17j7gin0sif
+0ELSjVfrXInM6ovai+NhUdcLkJ7tCrKS90fvlaELK5Sg9CXBWCTFccKN4A/B7ey
rOg2NPXUecnyBB/XqQgKYH7ujYlOlqBDXMfz6z8Hj6WToxg9PPMGGomyMGh8AWxM
3yRPFs5RKt0VKgN++5N00oly5Y8ri5pgCidDvCLYMGTVDHFKwkuc9w6BlWlu1R1e
/hXFWUFAP1ffTAul3QwyKhjPn2iotCdxXjvt48KaU8DN4iL7aMBN/ZBKqGS7yRdF
D/JbJyaaJ0ZRvFSTSXy/sWY3z1B5mtCPBxco8hqqNfRkCwuZ6LkBDQRZciAMAQgA
8BP8xrwe12TOUTqL/Vrbxv/FLdhKh53J6TrPKvC2TEEKOrTNo5ahRq+XOS5E7G2N
x3b+fq8gR9BzFcldAx0XWUtGs/Wv++ulaSNqTBxj13J3G3WGsUfMKxRgj//piCUD
bCFLQfGZdKk0M1o9QkPVARwwmvCNiNB/l++xGqPtfc44H5jWj3GoGvL2MkShPzrN
yN/bJ+m+R5gtFGdInqa5KXBuxxuW25eDKJ+LzjbgUgeC76wNcfOiQHTdMkcupjdO
bbGFwo10hcbRAOcZEv6//Zrlmk/6nPxEd2hN20St2bSN0+FqfZ267mWEu3ejsgF8
ArdCpv5h4fBvJyNwiTZwIQARAQABiQE8BBgBCAAmFiEE2uUVYCjPN6B8aTiDXQ8D
AY0H3nMFAllyIAwCGwwFCQPCZwAACgkQXQ8DAY0H3nNisggAl4fqhRlA34wIb190
sqXHVxiCuzPaqS6krE9xAa1+gncX485OtcJNqnjugHm2rFE48lv7oasviuPXuInE
/OgVFnXYv9d/Xx2JUeDs+bFTLouCDRY2Unh7KJZasfqnMcCHWcxHx5FvRNZRssaB
WTZVo6sizPurGUtbpYe4/OLFhadBqAE0EUmVRFEUMc1YTnu4eLaRBzoWN4d2UWwi
LN25RSrVSke7LTSFbgn9ntQrQ2smXSR+cdNkkfRCjFcpUaecvFl9HwIqoyVbT4Ym
0hbpbbX/cJdc91tKa+psa29uMeGL/cgL9fAu19yNFRyOTMxjZnvql1X/WE1pLmoP
ETBD1Q==
=K9Qw
-----END PGP PUBLIC KEY BLOCK-----
```
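For example (a minimal sketch, assuming the key block above has been saved to a hypothetical file named `parity-security.asc` and the report to `report.txt`), a report can be encrypted with GnuPG as follows:
```bash
# Import the published security key.
gpg --import parity-security.asc
# Encrypt the report for the security contact; attach the ASCII-armored output to the e-mail.
gpg --encrypt --armor --recipient security@parity.io report.txt
```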
Important Legal Information:
Your submission might be eligible for a bug bounty. The bug bounty program is an experimental and discretionary rewards program for the Parity community to reward those who are helping to improve the Parity software. Rewards are at the sole discretion of Parity Technologies Ltd.
We are not able to issue rewards to individuals who are on sanctions lists or who are in countries on sanctions lists (e.g. North Korea, Iran, etc).
You are responsible for all taxes. All rewards are subject to applicable law.
Finally, your testing must not violate any law or compromise any data that is not yours.


@@ -1,9 +0,0 @@
[package]
description = "Parity Ethereum Chain Specification"
name = "chainspec"
version = "0.1.0"
authors = ["Marek Kotewicz <marek@parity.io>"]
[dependencies]
ethjson = { path = "../../crates/ethjson" }
serde_json = "1.0"


@@ -1,51 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
extern crate ethjson;
extern crate serde_json;
use ethjson::spec::Spec;
use std::{env, fs, process};
fn quit(s: &str) -> ! {
println!("{}", s);
process::exit(1);
}
fn main() {
let mut args = env::args();
if args.len() != 2 {
quit(
"You need to specify chainspec.json\n\
\n\
./chainspec <chainspec.json>",
);
}
let path = args.nth(1).expect("args.len() == 2; qed");
let file = match fs::File::open(&path) {
Ok(file) => file,
Err(_) => quit(&format!("{} could not be opened", path)),
};
let spec: Result<Spec, _> = serde_json::from_reader(file);
if let Err(err) = spec {
quit(&format!("{} {}", path, err.to_string()));
}
println!("{} is valid", path);
}
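For reference (a hedged sketch based on the usage message in the code above; `mychain.json` is a hypothetical path), invoking the validator looks like this:
```bash
# Prints "mychain.json is valid" on success, or the deserialization error otherwise.
./chainspec mychain.json
```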


@@ -1,22 +0,0 @@
[package]
description = "Parity Ethereum Keys Generator CLI"
name = "ethkey-cli"
version = "0.1.0"
authors = ["Parity Technologies <admin@parity.io>"]
[dependencies]
docopt = "1.0"
env_logger = "0.5"
ethkey = { path = "../../crates/accounts/ethkey" }
panic_hook = { path = "../../crates/util/panic-hook" }
parity-crypto = { version = "0.6.2", features = [ "publickey" ] }
parity-wordlist="1.3"
rustc-hex = "1.0"
serde = "1.0"
serde_derive = "1.0"
threadpool = "1.7"
[[bin]]
name = "ethkey"
path = "src/main.rs"
doc = false


@@ -1,493 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
extern crate docopt;
extern crate env_logger;
extern crate ethkey;
extern crate panic_hook;
extern crate parity_crypto as crypto;
extern crate parity_wordlist;
extern crate rustc_hex;
extern crate serde;
extern crate threadpool;
#[macro_use]
extern crate serde_derive;
use std::{env, fmt, io, num::ParseIntError, process, sync};
use crypto::publickey::{
sign, verify_address, verify_public, Error as EthkeyError, Generator, KeyPair, Random,
};
use docopt::Docopt;
use ethkey::{brain_recover, Brain, BrainPrefix, Prefix};
use rustc_hex::{FromHex, FromHexError};
const USAGE: &'static str = r#"
Parity Ethereum keys generator.
Copyright 2015-2019 Parity Technologies (UK) Ltd.
Usage:
ethkey info <secret-or-phrase> [options]
ethkey generate random [options]
ethkey generate prefix <prefix> [options]
ethkey sign <secret> <message>
ethkey verify public <public> <signature> <message>
ethkey verify address <address> <signature> <message>
ethkey recover <address> <known-phrase>
ethkey [-h | --help]
Options:
-h, --help Display this message and exit.
-s, --secret Display only the secret key.
-p, --public Display only the public key.
-a, --address Display only the address.
-b, --brain Use parity brain wallet algorithm. Not recommended.
Commands:
info Display public key and address of the secret.
generate random Generates new random Ethereum key.
generate prefix Random generation, but address must start with a prefix ("vanity address").
sign Sign message using a secret key.
verify Verify signer of the signature by public key or address.
recover Try to find brain phrase matching given address from partial phrase.
"#;
#[derive(Debug, Deserialize)]
struct Args {
cmd_info: bool,
cmd_generate: bool,
cmd_random: bool,
cmd_prefix: bool,
cmd_sign: bool,
cmd_verify: bool,
cmd_public: bool,
cmd_address: bool,
cmd_recover: bool,
arg_prefix: String,
arg_secret: String,
arg_secret_or_phrase: String,
arg_known_phrase: String,
arg_message: String,
arg_public: String,
arg_address: String,
arg_signature: String,
flag_secret: bool,
flag_public: bool,
flag_address: bool,
flag_brain: bool,
}
#[derive(Debug)]
enum Error {
Ethkey(EthkeyError),
FromHex(FromHexError),
ParseInt(ParseIntError),
Docopt(docopt::Error),
Io(io::Error),
}
impl From<EthkeyError> for Error {
fn from(err: EthkeyError) -> Self {
Error::Ethkey(err)
}
}
impl From<FromHexError> for Error {
fn from(err: FromHexError) -> Self {
Error::FromHex(err)
}
}
impl From<ParseIntError> for Error {
fn from(err: ParseIntError) -> Self {
Error::ParseInt(err)
}
}
impl From<docopt::Error> for Error {
fn from(err: docopt::Error) -> Self {
Error::Docopt(err)
}
}
impl From<io::Error> for Error {
fn from(err: io::Error) -> Self {
Error::Io(err)
}
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match *self {
Error::Ethkey(ref e) => write!(f, "{}", e),
Error::FromHex(ref e) => write!(f, "{}", e),
Error::ParseInt(ref e) => write!(f, "{}", e),
Error::Docopt(ref e) => write!(f, "{}", e),
Error::Io(ref e) => write!(f, "{}", e),
}
}
}
enum DisplayMode {
KeyPair,
Secret,
Public,
Address,
}
impl DisplayMode {
fn new(args: &Args) -> Self {
if args.flag_secret {
DisplayMode::Secret
} else if args.flag_public {
DisplayMode::Public
} else if args.flag_address {
DisplayMode::Address
} else {
DisplayMode::KeyPair
}
}
}
fn main() {
panic_hook::set_abort();
env_logger::try_init().expect("Logger initialized only once.");
match execute(env::args()) {
Ok(ok) => println!("{}", ok),
Err(Error::Docopt(ref e)) => e.exit(),
Err(err) => {
eprintln!("{}", err);
process::exit(1);
}
}
}
fn display(result: (KeyPair, Option<String>), mode: DisplayMode) -> String {
let keypair = result.0;
match mode {
DisplayMode::KeyPair => match result.1 {
Some(extra_data) => format!("{}\n{}", extra_data, keypair),
None => format!("{}", keypair),
},
DisplayMode::Secret => format!("{:x}", keypair.secret()),
DisplayMode::Public => format!("{:x}", keypair.public()),
DisplayMode::Address => format!("{:x}", keypair.address()),
}
}
fn execute<S, I>(command: I) -> Result<String, Error>
where
I: IntoIterator<Item = S>,
S: AsRef<str>,
{
let args: Args = Docopt::new(USAGE).and_then(|d| d.argv(command).deserialize())?;
return if args.cmd_info {
let display_mode = DisplayMode::new(&args);
let result = if args.flag_brain {
let phrase = args.arg_secret_or_phrase;
let phrase_info = validate_phrase(&phrase);
let keypair = Brain::new(phrase).generate();
(keypair, Some(phrase_info))
} else {
let secret = args
.arg_secret_or_phrase
.parse()
.map_err(|_| EthkeyError::InvalidSecretKey)?;
(KeyPair::from_secret(secret)?, None)
};
Ok(display(result, display_mode))
} else if args.cmd_generate {
let display_mode = DisplayMode::new(&args);
let result = if args.cmd_random {
if args.flag_brain {
let mut brain = BrainPrefix::new(vec![0], usize::max_value(), BRAIN_WORDS);
let keypair = brain.generate()?;
let phrase = format!("recovery phrase: {}", brain.phrase());
(keypair, Some(phrase))
} else {
(Random.generate(), None)
}
} else if args.cmd_prefix {
let prefix = args.arg_prefix.from_hex()?;
let brain = args.flag_brain;
in_threads(move || {
let iterations = 1024;
let prefix = prefix.clone();
move || {
let prefix = prefix.clone();
let res = if brain {
let mut brain = BrainPrefix::new(prefix, iterations, BRAIN_WORDS);
let result = brain.generate();
let phrase = format!("recovery phrase: {}", brain.phrase());
result.map(|keypair| (keypair, Some(phrase)))
} else {
let result = Prefix::new(prefix, iterations).generate();
result.map(|res| (res, None))
};
Ok(res.map(Some).unwrap_or(None))
}
})?
} else {
return Ok(format!("{}", USAGE));
};
Ok(display(result, display_mode))
} else if args.cmd_sign {
let secret = args
.arg_secret
.parse()
.map_err(|_| EthkeyError::InvalidSecretKey)?;
let message = args
.arg_message
.parse()
.map_err(|_| EthkeyError::InvalidMessage)?;
let signature = sign(&secret, &message)?;
Ok(format!("{}", signature))
} else if args.cmd_verify {
let signature = args
.arg_signature
.parse()
.map_err(|_| EthkeyError::InvalidSignature)?;
let message = args
.arg_message
.parse()
.map_err(|_| EthkeyError::InvalidMessage)?;
let ok = if args.cmd_public {
let public = args
.arg_public
.parse()
.map_err(|_| EthkeyError::InvalidPublicKey)?;
verify_public(&public, &signature, &message)?
} else if args.cmd_address {
let address = args
.arg_address
.parse()
.map_err(|_| EthkeyError::InvalidAddress)?;
verify_address(&address, &signature, &message)?
} else {
return Ok(format!("{}", USAGE));
};
Ok(format!("{}", ok))
} else if args.cmd_recover {
let display_mode = DisplayMode::new(&args);
let known_phrase = args.arg_known_phrase;
let address = args
.arg_address
.parse()
.map_err(|_| EthkeyError::InvalidAddress)?;
let (phrase, keypair) = in_threads(move || {
let mut it =
brain_recover::PhrasesIterator::from_known_phrase(&known_phrase, BRAIN_WORDS);
move || {
let mut i = 0;
while let Some(phrase) = it.next() {
i += 1;
let keypair = Brain::new(phrase.clone()).generate();
if keypair.address() == address {
return Ok(Some((phrase, keypair)));
}
if i >= 1024 {
return Ok(None);
}
}
Err(EthkeyError::Custom("Couldn't find any results.".into()))
}
})?;
Ok(display((keypair, Some(phrase)), display_mode))
} else {
Ok(format!("{}", USAGE))
};
}
const BRAIN_WORDS: usize = 12;
fn validate_phrase(phrase: &str) -> String {
match Brain::validate_phrase(phrase, BRAIN_WORDS) {
Ok(()) => format!("The recovery phrase looks correct.\n"),
Err(err) => format!("The recover phrase was not generated by Parity: {}", err),
}
}
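// Descriptive note on the helper below: `in_threads` spawns one task per worker in a thread
// pool; each worker repeatedly invokes its closure until some worker produces a result (or an
// error), which is sent over a bounded channel. The first received value causes the shared
// `is_done` flag to be set so the remaining workers stop, and that value is returned.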
fn in_threads<F, X, O>(prepare: F) -> Result<O, EthkeyError>
where
O: Send + 'static,
X: Send + 'static,
F: Fn() -> X,
X: FnMut() -> Result<Option<O>, EthkeyError>,
{
let pool = threadpool::Builder::new().build();
let (tx, rx) = sync::mpsc::sync_channel(1);
let is_done = sync::Arc::new(sync::atomic::AtomicBool::default());
for _ in 0..pool.max_count() {
let is_done = is_done.clone();
let tx = tx.clone();
let mut task = prepare();
pool.execute(move || {
loop {
if is_done.load(sync::atomic::Ordering::SeqCst) {
return;
}
let res = match task() {
Ok(None) => continue,
Ok(Some(v)) => Ok(v),
Err(err) => Err(err),
};
// We are interested only in the first response.
let _ = tx.send(res);
}
});
}
if let Ok(solution) = rx.recv() {
is_done.store(true, sync::atomic::Ordering::SeqCst);
return solution;
}
Err(EthkeyError::Custom("No results found.".into()))
}
#[cfg(test)]
mod tests {
use super::execute;
#[test]
fn info() {
let command = vec![
"ethkey",
"info",
"17d08f5fe8c77af811caa0c9a187e668ce3b74a99acc3f6d976f075fa8e0be55",
]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected =
"secret: 17d08f5fe8c77af811caa0c9a187e668ce3b74a99acc3f6d976f075fa8e0be55
public: 689268c0ff57a20cd299fa60d3fb374862aff565b20b5f1767906a99e6e09f3ff04ca2b2a5cd22f62941db103c0356df1a8ed20ce322cab2483db67685afd124
address: 26d1ec50b4e62c1d1a40d16e7cacc6a6580757d5".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn brain() {
let command = vec!["ethkey", "info", "--brain", "this is sparta"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected =
"The recover phrase was not generated by Parity: The word 'this' does not come from the dictionary.
secret: aa22b54c0cb43ee30a014afe5ef3664b1cde299feabca46cd3167a85a57c39f2
public: c4c5398da6843632c123f543d714d2d2277716c11ff612b2a2f23c6bda4d6f0327c31cd58c55a9572c3cc141dade0c32747a13b7ef34c241b26c84adbb28fcf4
address: 006e27b6a72e1f34c626762f3c4761547aff1421".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn secret() {
let command = vec!["ethkey", "info", "--brain", "this is sparta", "--secret"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected =
"aa22b54c0cb43ee30a014afe5ef3664b1cde299feabca46cd3167a85a57c39f2".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn public() {
let command = vec!["ethkey", "info", "--brain", "this is sparta", "--public"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "c4c5398da6843632c123f543d714d2d2277716c11ff612b2a2f23c6bda4d6f0327c31cd58c55a9572c3cc141dade0c32747a13b7ef34c241b26c84adbb28fcf4".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn address() {
let command = vec!["ethkey", "info", "-b", "this is sparta", "--address"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "006e27b6a72e1f34c626762f3c4761547aff1421".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn sign() {
let command = vec![
"ethkey",
"sign",
"17d08f5fe8c77af811caa0c9a187e668ce3b74a99acc3f6d976f075fa8e0be55",
"bd50b7370c3f96733b31744c6c45079e7ae6c8d299613246d28ebcef507ec987",
]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "c1878cf60417151c766a712653d26ef350c8c75393458b7a9be715f053215af63dfd3b02c2ae65a8677917a8efa3172acb71cb90196e42106953ea0363c5aaf200".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn verify_valid_public() {
let command = vec!["ethkey", "verify", "public", "689268c0ff57a20cd299fa60d3fb374862aff565b20b5f1767906a99e6e09f3ff04ca2b2a5cd22f62941db103c0356df1a8ed20ce322cab2483db67685afd124", "c1878cf60417151c766a712653d26ef350c8c75393458b7a9be715f053215af63dfd3b02c2ae65a8677917a8efa3172acb71cb90196e42106953ea0363c5aaf200", "bd50b7370c3f96733b31744c6c45079e7ae6c8d299613246d28ebcef507ec987"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "true".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn verify_valid_address() {
let command = vec!["ethkey", "verify", "address", "26d1ec50b4e62c1d1a40d16e7cacc6a6580757d5", "c1878cf60417151c766a712653d26ef350c8c75393458b7a9be715f053215af63dfd3b02c2ae65a8677917a8efa3172acb71cb90196e42106953ea0363c5aaf200", "bd50b7370c3f96733b31744c6c45079e7ae6c8d299613246d28ebcef507ec987"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "true".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
#[test]
fn verify_invalid() {
let command = vec!["ethkey", "verify", "public", "689268c0ff57a20cd299fa60d3fb374862aff565b20b5f1767906a99e6e09f3ff04ca2b2a5cd22f62941db103c0356df1a8ed20ce322cab2483db67685afd124", "c1878cf60417151c766a712653d26ef350c8c75393458b7a9be715f053215af63dfd3b02c2ae65a8677917a8efa3172acb71cb90196e42106953ea0363c5aaf200", "bd50b7370c3f96733b31744c6c45079e7ae6c8d299613246d28ebcef507ec986"]
.into_iter()
.map(Into::into)
.collect::<Vec<String>>();
let expected = "false".to_owned();
assert_eq!(execute(command).unwrap(), expected);
}
}


@@ -1,25 +0,0 @@
[package]
description = "Parity Ethereum Key Management CLI"
name = "ethstore-cli"
version = "0.1.1"
authors = ["Parity Technologies <admin@parity.io>"]
[dependencies]
docopt = "1.0"
env_logger = "0.5"
num_cpus = "1.6"
rustc-hex = "1.0"
serde = "1.0"
serde_derive = "1.0"
parking_lot = "0.11.1"
ethstore = { path = "../../crates/accounts/ethstore" }
dir = { path = '../../crates/util/dir' }
panic_hook = { path = "../../crates/util/panic-hook" }
[[bin]]
name = "ethstore"
path = "src/main.rs"
doc = false
[dev-dependencies]
tempdir = "0.3.5"


@@ -1,66 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use parking_lot::Mutex;
use std::{cmp, collections::VecDeque, sync::Arc, thread};
use ethstore::{ethkey::Password, Error, PresaleWallet};
use num_cpus;
pub fn run(passwords: VecDeque<Password>, wallet_path: &str) -> Result<(), Error> {
let passwords = Arc::new(Mutex::new(passwords));
let mut handles = Vec::new();
for _ in 0..num_cpus::get() {
let passwords = passwords.clone();
let wallet = PresaleWallet::open(&wallet_path)?;
handles.push(thread::spawn(move || {
look_for_password(passwords, wallet);
}));
}
for handle in handles {
handle
.join()
.map_err(|err| Error::Custom(format!("Error finishing thread: {:?}", err)))?;
}
Ok(())
}
fn look_for_password(passwords: Arc<Mutex<VecDeque<Password>>>, wallet: PresaleWallet) {
let mut counter = 0;
while !passwords.lock().is_empty() {
let package = {
let mut passwords = passwords.lock();
let len = passwords.len();
passwords.split_off(cmp::min(len, 32))
};
for pass in package {
counter += 1;
match wallet.decrypt(&pass) {
Ok(_) => {
println!("Found password: {}", pass.as_str());
passwords.lock().clear();
return;
}
_ if counter % 100 == 0 => print!("."),
_ => {}
}
}
}
}
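The `crack` module above spreads the candidate passwords across one worker thread per CPU core behind a shared `Arc<Mutex<VecDeque<Password>>>`, clearing the queue as soon as any worker decrypts the wallet. A minimal sketch of driving it directly, mirroring the `find-wallet-pass` branch of the ethstore CLI shown below (the file names are placeholders, not from the original sources):
```rust
use std::{collections::VecDeque, fs};

fn find_presale_password() -> Result<(), ethstore::Error> {
    // One candidate password per line, as the CLI's password file expects.
    let list = fs::read_to_string("passwords.txt")
        .map_err(|e| ethstore::Error::Custom(format!("cannot read password list: {}", e)))?;
    let candidates: VecDeque<_> = list
        .lines()
        .map(|line| line.to_owned().into()) // String -> ethstore::ethkey::Password
        .collect();
    // Prints "Found password: ..." and clears the shared queue on success.
    crack::run(candidates, "presale-wallet.json")
}
```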


@@ -1,363 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
extern crate dir;
extern crate docopt;
extern crate ethstore;
extern crate num_cpus;
extern crate panic_hook;
extern crate parking_lot;
extern crate rustc_hex;
extern crate serde;
extern crate env_logger;
#[macro_use]
extern crate serde_derive;
use std::{collections::VecDeque, env, fmt, fs, io::Read, process};
use docopt::Docopt;
use ethstore::{
accounts_dir::{KeyDirectory, RootDiskDirectory},
ethkey::{Address, Password},
import_accounts, EthStore, PresaleWallet, SecretStore, SecretVaultRef, SimpleSecretStore,
StoreAccountRef,
};
mod crack;
pub const USAGE: &'static str = r#"
Parity Ethereum key management tool.
Copyright 2015-2019 Parity Technologies (UK) Ltd.
Usage:
ethstore insert <secret> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore change-pwd <address> <old-pwd> <new-pwd> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore list [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore import [<password>] [--src DIR] [--dir DIR]
ethstore import-wallet <path> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore find-wallet-pass <path> <password>
ethstore remove <address> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore sign <address> <password> <message> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore public <address> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore list-vaults [--dir DIR]
ethstore create-vault <vault> <password> [--dir DIR]
ethstore change-vault-pwd <vault> <old-pwd> <new-pwd> [--dir DIR]
ethstore move-to-vault <address> <vault> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore move-from-vault <address> <vault> <password> [--dir DIR]
ethstore [-h | --help]
Options:
-h, --help Display this message and exit.
--dir DIR Specify the secret store directory. It may be either
parity, parity-(chain), geth, geth-test
or a path [default: parity].
--vault VAULT Specify vault to use in this operation.
--vault-pwd VAULTPWD Specify vault password to use in this operation. Please note
that this option is required when vault option is set.
Otherwise it is ignored.
--src DIR Specify import source. It may be either
parity, parity-(chain), geth, geth-test
or a path [default: geth].
Commands:
insert Save account with password.
change-pwd Change password.
list List accounts.
import Import accounts from src.
import-wallet Import presale wallet.
find-wallet-pass Tries to open a wallet with list of passwords given.
remove Remove account.
sign Sign message.
public Displays public key for an address.
list-vaults List vaults.
create-vault Create new vault.
change-vault-pwd Change vault password.
move-to-vault Move account to vault from another vault/root directory.
move-from-vault Move account to root directory from given vault.
"#;
#[derive(Debug, Deserialize)]
struct Args {
cmd_insert: bool,
cmd_change_pwd: bool,
cmd_list: bool,
cmd_import: bool,
cmd_import_wallet: bool,
cmd_find_wallet_pass: bool,
cmd_remove: bool,
cmd_sign: bool,
cmd_public: bool,
cmd_list_vaults: bool,
cmd_create_vault: bool,
cmd_change_vault_pwd: bool,
cmd_move_to_vault: bool,
cmd_move_from_vault: bool,
arg_secret: String,
arg_password: String,
arg_old_pwd: String,
arg_new_pwd: String,
arg_address: String,
arg_message: String,
arg_path: String,
arg_vault: String,
flag_src: String,
flag_dir: String,
flag_vault: String,
flag_vault_pwd: String,
}
enum Error {
Ethstore(ethstore::Error),
Docopt(docopt::Error),
}
impl From<ethstore::Error> for Error {
fn from(err: ethstore::Error) -> Self {
Error::Ethstore(err)
}
}
impl From<docopt::Error> for Error {
fn from(err: docopt::Error) -> Self {
Error::Docopt(err)
}
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
Error::Ethstore(ref err) => fmt::Display::fmt(err, f),
Error::Docopt(ref err) => fmt::Display::fmt(err, f),
}
}
}
fn main() {
panic_hook::set_abort();
if env::var("RUST_LOG").is_err() {
env::set_var("RUST_LOG", "warn")
}
env_logger::try_init().expect("Logger initialized only once.");
match execute(env::args()) {
Ok(result) => println!("{}", result),
Err(Error::Docopt(ref e)) => e.exit(),
Err(err) => {
eprintln!("{}", err);
process::exit(1);
}
}
}
fn key_dir(location: &str, password: Option<Password>) -> Result<Box<dyn KeyDirectory>, Error> {
let dir: RootDiskDirectory = match location {
path if path.starts_with("parity") => {
let chain = path.split('-').nth(1).unwrap_or("ethereum");
let mut path = dir::default_data_pathbuf();
path.push("keys");
path.push(chain);
RootDiskDirectory::create(path)?
}
path => RootDiskDirectory::create(path)?,
};
Ok(Box::new(dir.with_password(password)))
}
fn open_args_vault(store: &EthStore, args: &Args) -> Result<SecretVaultRef, Error> {
if args.flag_vault.is_empty() {
return Ok(SecretVaultRef::Root);
}
let vault_pwd = load_password(&args.flag_vault_pwd)?;
store.open_vault(&args.flag_vault, &vault_pwd)?;
Ok(SecretVaultRef::Vault(args.flag_vault.clone()))
}
fn open_args_vault_account(
store: &EthStore,
address: Address,
args: &Args,
) -> Result<StoreAccountRef, Error> {
match open_args_vault(store, args)? {
SecretVaultRef::Root => Ok(StoreAccountRef::root(address)),
SecretVaultRef::Vault(name) => Ok(StoreAccountRef::vault(&name, address)),
}
}
fn format_accounts(accounts: &[Address]) -> String {
accounts
.iter()
.enumerate()
.map(|(i, a)| format!("{:2}: 0x{:x}", i, a))
.collect::<Vec<String>>()
.join("\n")
}
fn format_vaults(vaults: &[String]) -> String {
vaults.join("\n")
}
fn load_password(path: &str) -> Result<Password, Error> {
let mut file = fs::File::open(path).map_err(|e| {
ethstore::Error::Custom(format!("Error opening password file '{}': {}", path, e))
})?;
let mut password = String::new();
file.read_to_string(&mut password).map_err(|e| {
ethstore::Error::Custom(format!("Error reading password file '{}': {}", path, e))
})?;
// drop EOF
let _ = password.pop();
Ok(password.into())
}
fn execute<S, I>(command: I) -> Result<String, Error>
where
I: IntoIterator<Item = S>,
S: AsRef<str>,
{
let args: Args = Docopt::new(USAGE).and_then(|d| d.argv(command).deserialize())?;
let store = EthStore::open(key_dir(&args.flag_dir, None)?)?;
return if args.cmd_insert {
let secret = args
.arg_secret
.parse()
.map_err(|_| ethstore::Error::InvalidSecret)?;
let password = load_password(&args.arg_password)?;
let vault_ref = open_args_vault(&store, &args)?;
let account_ref = store.insert_account(vault_ref, secret, &password)?;
Ok(format!("0x{:x}", account_ref.address))
} else if args.cmd_change_pwd {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let old_pwd = load_password(&args.arg_old_pwd)?;
let new_pwd = load_password(&args.arg_new_pwd)?;
let account_ref = open_args_vault_account(&store, address, &args)?;
let ok = store
.change_password(&account_ref, &old_pwd, &new_pwd)
.is_ok();
Ok(format!("{}", ok))
} else if args.cmd_list {
let vault_ref = open_args_vault(&store, &args)?;
let accounts = store.accounts()?;
let accounts: Vec<_> = accounts
.into_iter()
.filter(|a| &a.vault == &vault_ref)
.map(|a| a.address)
.collect();
Ok(format_accounts(&accounts))
} else if args.cmd_import {
let password = match args.arg_password.as_ref() {
"" => None,
_ => Some(load_password(&args.arg_password)?),
};
let src = key_dir(&args.flag_src, password)?;
let dst = key_dir(&args.flag_dir, None)?;
let accounts = import_accounts(&*src, &*dst)?;
Ok(format_accounts(&accounts))
} else if args.cmd_import_wallet {
let wallet = PresaleWallet::open(&args.arg_path)?;
let password = load_password(&args.arg_password)?;
let kp = wallet.decrypt(&password)?;
let vault_ref = open_args_vault(&store, &args)?;
let account_ref = store.insert_account(vault_ref, kp.secret().clone(), &password)?;
Ok(format!("0x{:x}", account_ref.address))
} else if args.cmd_find_wallet_pass {
let passwords = load_password(&args.arg_password)?;
let passwords = passwords
.as_str()
.lines()
.map(|line| str::to_owned(line).into())
.collect::<VecDeque<_>>();
crack::run(passwords, &args.arg_path)?;
Ok(format!("Password not found."))
} else if args.cmd_remove {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let password = load_password(&args.arg_password)?;
let account_ref = open_args_vault_account(&store, address, &args)?;
let ok = store.remove_account(&account_ref, &password).is_ok();
Ok(format!("{}", ok))
} else if args.cmd_sign {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let message = args
.arg_message
.parse()
.map_err(|_| ethstore::Error::InvalidMessage)?;
let password = load_password(&args.arg_password)?;
let account_ref = open_args_vault_account(&store, address, &args)?;
let signature = store.sign(&account_ref, &password, &message)?;
Ok(format!("0x{}", signature))
} else if args.cmd_public {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let password = load_password(&args.arg_password)?;
let account_ref = open_args_vault_account(&store, address, &args)?;
let public = store.public(&account_ref, &password)?;
Ok(format!("0x{:x}", public))
} else if args.cmd_list_vaults {
let vaults = store.list_vaults()?;
Ok(format_vaults(&vaults))
} else if args.cmd_create_vault {
let password = load_password(&args.arg_password)?;
store.create_vault(&args.arg_vault, &password)?;
Ok("OK".to_owned())
} else if args.cmd_change_vault_pwd {
let old_pwd = load_password(&args.arg_old_pwd)?;
let new_pwd = load_password(&args.arg_new_pwd)?;
store.open_vault(&args.arg_vault, &old_pwd)?;
store.change_vault_password(&args.arg_vault, &new_pwd)?;
Ok("OK".to_owned())
} else if args.cmd_move_to_vault {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let password = load_password(&args.arg_password)?;
let account_ref = open_args_vault_account(&store, address, &args)?;
store.open_vault(&args.arg_vault, &password)?;
store.change_account_vault(SecretVaultRef::Vault(args.arg_vault), account_ref)?;
Ok("OK".to_owned())
} else if args.cmd_move_from_vault {
let address = args
.arg_address
.parse()
.map_err(|_| ethstore::Error::InvalidAccount)?;
let password = load_password(&args.arg_password)?;
store.open_vault(&args.arg_vault, &password)?;
store.change_account_vault(
SecretVaultRef::Root,
StoreAccountRef::vault(&args.arg_vault, address),
)?;
Ok("OK".to_owned())
} else {
Ok(format!("{}", USAGE))
};
}
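Two details of the dispatcher above are easy to miss: every `<password>` argument is a path to a password file (read by `load_password`, which drops the file's final character to strip a trailing newline), and `execute` accepts any iterator of string-like arguments, which is how `main` hands it `env::args()`. A hedged sketch of calling it programmatically, with a placeholder keystore path:
```rust
// Not from the original sources: drive `execute` directly instead of going
// through `main`, for example from an integration test.
fn list_keystore() -> Result<String, Error> {
    // Equivalent to running: ethstore list --dir /tmp/keys
    execute(vec!["ethstore", "list", "--dir", "/tmp/keys"])
}
```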


@@ -1,33 +0,0 @@
[package]
description = "Parity EVM Implementation"
name = "evmbin"
version = "0.1.0"
authors = ["Parity Technologies <admin@parity.io>"]
[[bin]]
name = "openethereum-evm"
path = "./src/main.rs"
[dependencies]
common-types = { path = "../../crates/ethcore/types", features = ["test-helpers"] }
docopt = "1.0"
env_logger = "0.5"
ethcore = { path = "../../crates/ethcore", features = ["test-helpers", "json-tests", "to-pod-full"] }
ethereum-types = "0.9.2"
ethjson = { path = "../../crates/ethjson" }
evm = { path = "../../crates/vm/evm" }
panic_hook = { path = "../../crates/util/panic-hook" }
parity-bytes = "0.1"
rustc-hex = "1.0"
serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
vm = { path = "../../crates/vm/vm" }
[dev-dependencies]
criterion = "0.3.0"
pretty_assertions = "0.1"
tempdir = "0.3"
[features]
evm-debug = ["ethcore/evm-debug-tests"]


@@ -1,60 +0,0 @@
## evmbin
EVM implementation for OpenEthereum.
### Usage
```
EVM implementation for Parity.
Copyright 2015-2020 Parity Technologies (UK) Ltd.
Usage:
openethereum-evm state-test <file> [--json --std-json --std-dump-json --only NAME --chain CHAIN --std-out-only --std-err-only --omit-storage-output --omit-memory-output]
openethereum-evm stats [options]
openethereum-evm stats-jsontests-vm <file>
openethereum-evm [options]
openethereum-evm [-h | --help]
Commands:
state-test Run a state test from a json file.
stats Execute EVM runtime code and return the statistics.
stats-jsontests-vm Execute standard json-tests format VMTests and return
timing statistics in tsv format.
Transaction options:
--code CODE Contract code as hex (without 0x).
--to ADDRESS Recipient address (without 0x).
--from ADDRESS Sender address (without 0x).
--input DATA Input data as hex (without 0x).
--gas GAS Supplied gas as hex (without 0x).
--gas-price WEI Supplied gas price as hex (without 0x).
State test options:
--chain CHAIN Run only from specific chain name (i.e. one of EIP150, EIP158,
Frontier, Homestead, Byzantium, Constantinople,
ConstantinopleFix, Istanbul, EIP158ToByzantiumAt5, FrontierToHomesteadAt5,
HomesteadToDaoAt5, HomesteadToEIP150At5, Berlin, Yolo3).
--only NAME Runs only a single test matching the name.
General options:
--json Display verbose results in JSON.
--std-json Display results in standardized JSON format.
--std-err-only With --std-json redirect to err output only.
--std-out-only With --std-json redirect to out output only.
--omit-storage-output With --std-json omit storage output.
--omit-memory-output With --std-json omit memory output.
--std-dump-json Display results in standardized JSON format
with additional state dump.
Display result state dump in standardized JSON format.
--chain CHAIN Chain spec file path.
-h, --help Display this message and exit.
```
## OpenEthereum toolchain
_This project is a part of the OpenEthereum toolchain._
- [evmbin](https://github.com/openethereum/openethereum/blob/master/evmbin/) - EVM implementation for OpenEthereum
- [ethabi](https://github.com/openethereum/ethabi) - OpenEthereum function calls encoding.
- [ethstore](https://github.com/openethereum/openethereum/blob/master/accounts/ethstore) - OpenEthereum key management.
- [ethkey](https://github.com/openethereum/openethereum/blob/master/accounts/ethkey) - OpenEthereum keys generator.


@@ -1,98 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! benchmarking for EVM
//! should be started with:
//! ```bash
//! cargo bench
//! ```
#[macro_use]
extern crate criterion;
extern crate ethcore;
extern crate ethereum_types;
extern crate evm;
extern crate rustc_hex;
extern crate vm;
use criterion::{black_box, Criterion};
use std::sync::Arc;
use ethereum_types::U256;
use evm::Factory;
use rustc_hex::FromHex;
use vm::{tests::FakeExt, ActionParams, Ext};
criterion_group!(
evmbin,
bench_simple_loop_usize,
bench_simple_loop_u256,
bench_rng_usize,
bench_rng_u256
);
criterion_main!(evmbin);
fn bench_simple_loop_usize(c: &mut Criterion) {
simple_loop(U256::from(::std::usize::MAX), c, "simple_loop_usize")
}
fn bench_simple_loop_u256(c: &mut Criterion) {
simple_loop(!U256::zero(), c, "simple_loop_u256")
}
fn simple_loop(gas: U256, c: &mut Criterion, bench_id: &str) {
let code = black_box(
"606060405260005b620042408112156019575b6001016007565b600081905550600680602b6000396000f3606060405200".from_hex().unwrap()
);
c.bench_function(bench_id, move |b| {
b.iter(|| {
let mut params = ActionParams::default();
params.gas = gas;
params.code = Some(Arc::new(code.clone()));
let mut ext = FakeExt::new();
let evm = Factory::default().create(params, ext.schedule(), ext.depth());
let _ = evm.exec(&mut ext);
})
});
}
fn bench_rng_usize(c: &mut Criterion) {
rng(U256::from(::std::usize::MAX), c, "rng_usize")
}
fn bench_rng_u256(c: &mut Criterion) {
rng(!U256::zero(), c, "rng_u256")
}
fn rng(gas: U256, c: &mut Criterion, bench_id: &str) {
let code = black_box(
"6060604052600360056007600b60005b62004240811215607f5767ffe7649d5eca84179490940267f47ed85c4b9a6379019367f8e5dd9a5c994bba9390930267f91d87e4b8b74e55019267ff97f6f3b29cda529290920267f393ada8dd75c938019167fe8d437c45bb3735830267f47d9a7b5428ffec019150600101600f565b838518831882186000555050505050600680609a6000396000f3606060405200".from_hex().unwrap()
);
c.bench_function(bench_id, move |b| {
b.iter(|| {
let mut params = ActionParams::default();
params.gas = gas;
params.code = Some(Arc::new(code.clone()));
let mut ext = FakeExt::new();
let evm = Factory::default().create(params, ext.schedule(), ext.depth());
let _ = evm.exec(&mut ext);
})
});
}


@@ -1,38 +0,0 @@
{
"name": "lab",
"engine": {
"Ethash": {
"params": {
"minimumDifficulty": "0x1",
"difficultyBoundDivisor": "0x800"
}
}
},
"accounts": {
"0000000000000000000000000000000000000020": {
"nonce": "0x0",
"balance": "0x64",
"code": "0x62aaaaaa60aa60aa5060aa60aa60aa60aa60aa60aa"
}
},
"params":{
"networkID": "0x42",
"maximumExtraDataSize": "0x20",
"minGasLimit": "0x1",
"gasLimitBoundDivisor": "0x400"
},
"genesis": {
"gasLimit": "0x8000000",
"seal": {
"ethereum": {
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"nonce": "0x0000000000000042"
}
},
"difficulty": "0x400",
"extraData": "0x0",
"author": "0x3333333333333333333333333333333333333333",
"timestamp": "0x0",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
}
}


@@ -1,40 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Config used by display informants
#[derive(Default, Copy, Clone, Debug)]
pub struct Config {
omit_storage_output: bool,
omit_memory_output: bool,
}
impl Config {
pub fn new(omit_storage_output: bool, omit_memory_output: bool) -> Config {
Config {
omit_storage_output,
omit_memory_output,
}
}
pub fn omit_storage_output(&self) -> bool {
self.omit_storage_output
}
pub fn omit_memory_output(&self) -> bool {
self.omit_memory_output
}
}


@@ -1,425 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! JSON VM output.
use std::{collections::HashMap, mem};
use super::config::Config;
use bytes::ToPretty;
use display;
use ethcore::trace;
use ethereum_types::{BigEndianHash, H256, U256};
use info as vm;
/// JSON formatting informant.
#[derive(Default)]
pub struct Informant {
code: Vec<u8>,
depth: usize,
pc: usize,
instruction: u8,
gas_cost: U256,
gas_used: U256,
mem_written: Option<(usize, usize)>,
store_written: Option<(U256, U256)>,
stack: Vec<U256>,
memory: Vec<u8>,
storage: HashMap<H256, H256>,
traces: Vec<String>,
subtraces: Vec<String>,
subinfos: Vec<Informant>,
subdepth: usize,
unmatched: bool,
config: Config,
}
impl Informant {
pub fn new(config: Config) -> Informant {
let mut def = Informant::default();
def.config = config;
def
}
fn with_informant_in_depth<F: Fn(&mut Informant)>(
informant: &mut Informant,
depth: usize,
f: F,
) {
if depth == 0 {
f(informant);
} else {
Self::with_informant_in_depth(
informant
.subinfos
.last_mut()
.expect("prepare/done_trace are not balanced"),
depth - 1,
f,
);
}
}
fn informant_trace(informant: &Informant, gas_used: U256) -> String {
let memory = if informant.config.omit_memory_output() {
"".to_string()
} else {
format!("0x{}", informant.memory.to_hex())
};
let storage = if informant.config.omit_storage_output() {
None
} else {
Some(&informant.storage)
};
let info = ::evm::Instruction::from_u8(informant.instruction).map(|i| i.info());
json!({
"pc": informant.pc,
"op": informant.instruction,
"opName": info.map(|i| i.name).unwrap_or(""),
"gas": format!("{:#x}", gas_used.saturating_add(informant.gas_cost)),
"gasCost": format!("{:#x}", informant.gas_cost),
"memory": memory,
"stack": informant.stack,
"storage": storage,
"depth": informant.depth,
})
.to_string()
}
}
impl vm::Informant for Informant {
type Sink = Config;
fn before_test(&mut self, name: &str, action: &str) {
println!("{}", json!({"action": action, "test": name}));
}
fn set_gas(&mut self, gas: U256) {
self.gas_used = gas;
}
fn clone_sink(&self) -> Self::Sink {
self.config
}
fn finish(result: vm::RunResult<Self::Output>, config: &mut Self::Sink) {
match result {
Ok(success) => {
for trace in success.traces.unwrap_or_else(Vec::new) {
println!("{}", trace);
}
let success_msg = json!({
"output": format!("0x{}", success.output.to_hex()),
"gasUsed": format!("{:#x}", success.gas_used),
"time": display::as_micros(&success.time),
});
println!("{}", success_msg)
}
Err(failure) => {
if !config.omit_storage_output() {
for trace in failure.traces.unwrap_or_else(Vec::new) {
println!("{}", trace);
}
}
let failure_msg = json!({
"error": &failure.error.to_string(),
"gasUsed": format!("{:#x}", failure.gas_used),
"time": display::as_micros(&failure.time),
});
println!("{}", failure_msg)
}
}
}
}
impl trace::VMTracer for Informant {
type Output = Vec<String>;
fn trace_next_instruction(&mut self, pc: usize, instruction: u8, _current_gas: U256) -> bool {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant| {
informant.pc = pc;
informant.instruction = instruction;
informant.unmatched = true;
});
true
}
fn trace_prepare_execute(
&mut self,
pc: usize,
instruction: u8,
gas_cost: U256,
mem_written: Option<(usize, usize)>,
store_written: Option<(U256, U256)>,
) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant| {
informant.pc = pc;
informant.instruction = instruction;
informant.gas_cost = gas_cost;
informant.mem_written = mem_written;
informant.store_written = store_written;
});
}
fn trace_executed(&mut self, gas_used: U256, stack_push: &[U256], mem: &[u8]) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant| {
let store_diff = informant.store_written.clone();
let info = ::evm::Instruction::from_u8(informant.instruction).map(|i| i.info());
let trace = Self::informant_trace(informant, gas_used);
informant.traces.push(trace);
informant.unmatched = false;
informant.gas_used = gas_used;
let len = informant.stack.len();
let info_args = info.map(|i| i.args).unwrap_or(0);
informant
.stack
.truncate(if len > info_args { len - info_args } else { 0 });
informant.stack.extend_from_slice(stack_push);
// TODO [ToDr] Align memory?
if let Some((pos, size)) = informant.mem_written.clone() {
if informant.memory.len() < (pos + size) {
informant.memory.resize(pos + size, 0);
}
informant.memory[pos..(pos + size)].copy_from_slice(&mem[pos..(pos + size)]);
}
if let Some((pos, val)) = store_diff {
informant.storage.insert(
BigEndianHash::from_uint(&pos),
BigEndianHash::from_uint(&val),
);
}
if !informant.subtraces.is_empty() {
informant
.traces
.extend(mem::replace(&mut informant.subtraces, vec![]));
}
});
}
fn prepare_subtrace(&mut self, code: &[u8]) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant| {
let mut vm = Informant::default();
vm.config = informant.config;
vm.depth = informant.depth + 1;
vm.code = code.to_vec();
vm.gas_used = informant.gas_used;
informant.subinfos.push(vm);
});
self.subdepth += 1;
}
fn done_subtrace(&mut self) {
self.subdepth -= 1;
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant| {
if let Some(subtraces) = informant
.subinfos
.pop()
.expect("prepare/done_subtrace are not balanced")
.drain()
{
informant.subtraces.extend(subtraces);
}
});
}
fn drain(mut self) -> Option<Self::Output> {
if self.unmatched {
// print last line with final state:
self.gas_cost = 0.into();
let gas_used = self.gas_used;
let subdepth = self.subdepth;
Self::with_informant_in_depth(&mut self, subdepth, |informant: &mut Informant| {
let trace = Self::informant_trace(informant, gas_used);
informant.traces.push(trace);
});
} else if !self.subtraces.is_empty() {
self.traces
.extend(mem::replace(&mut self.subtraces, vec![]));
}
Some(self.traces)
}
}
#[cfg(test)]
mod tests {
use super::*;
use info::tests::run_test;
use serde_json;
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[serde(rename_all = "camelCase")]
struct TestTrace {
pc: usize,
#[serde(rename = "op")]
instruction: u8,
op_name: String,
#[serde(rename = "gas")]
gas_used: U256,
gas_cost: U256,
memory: String,
stack: Vec<U256>,
storage: Option<HashMap<H256, H256>>,
depth: usize,
}
fn assert_traces_eq(a: &[String], b: &[String]) {
let mut ita = a.iter();
let mut itb = b.iter();
loop {
match (ita.next(), itb.next()) {
(Some(a), Some(b)) => {
// Compare both without worrying about the order of the fields
let actual: TestTrace = serde_json::from_str(a).unwrap();
let expected: TestTrace = serde_json::from_str(b).unwrap();
assert_eq!(actual, expected);
println!("{}", a);
}
(None, None) => return,
e => {
panic!("Traces mismatch: {:?}", e);
}
}
}
}
fn compare_json(traces: Option<Vec<String>>, expected: &str) {
let expected = expected
.split("\n")
.map(|x| x.trim())
.map(|x| x.to_owned())
.filter(|x| !x.is_empty())
.collect::<Vec<_>>();
assert_traces_eq(&traces.unwrap(), &expected);
}
#[test]
fn should_trace_failure() {
run_test(
Informant::default(),
&compare_json,
"60F8d6",
0xffff,
r#"
{"pc":0,"op":96,"opName":"PUSH1","gas":"0xffff","gasCost":"0x3","memory":"0x","stack":[],"storage":{},"depth":1}
{"pc":2,"op":214,"opName":"","gas":"0xfffc","gasCost":"0x0","memory":"0x","stack":["0xf8"],"storage":{},"depth":1}
"#,
);
run_test(
Informant::default(),
&compare_json,
"F8d6",
0xffff,
r#"
{"pc":0,"op":248,"opName":"","gas":"0xffff","gasCost":"0x0","memory":"0x","stack":[],"storage":{},"depth":1}
"#,
);
run_test(
Informant::default(),
&compare_json,
"5A51",
0xfffff,
r#"
{"depth":1,"gas":"0xfffff","gasCost":"0x2","memory":"0x","op":90,"opName":"GAS","pc":0,"stack":[],"storage":{}}
{"depth":1,"gas":"0xffffd","gasCost":"0x0","memory":"0x","op":81,"opName":"MLOAD","pc":1,"stack":["0xffffd"],"storage":{}}
"#,
);
}
#[test]
fn should_trace_create_correctly() {
run_test(
Informant::default(),
&compare_json,
"32343434345830f138343438323439f0",
0xffff,
r#"
{"pc":0,"op":50,"opName":"ORIGIN","gas":"0xffff","gasCost":"0x2","memory":"0x","stack":[],"storage":{},"depth":1}
{"pc":1,"op":52,"opName":"CALLVALUE","gas":"0xfffd","gasCost":"0x2","memory":"0x","stack":["0x0"],"storage":{},"depth":1}
{"pc":2,"op":52,"opName":"CALLVALUE","gas":"0xfffb","gasCost":"0x2","memory":"0x","stack":["0x0","0x0"],"storage":{},"depth":1}
{"pc":3,"op":52,"opName":"CALLVALUE","gas":"0xfff9","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0"],"storage":{},"depth":1}
{"pc":4,"op":52,"opName":"CALLVALUE","gas":"0xfff7","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0"],"storage":{},"depth":1}
{"pc":5,"op":88,"opName":"PC","gas":"0xfff5","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0"],"storage":{},"depth":1}
{"pc":6,"op":48,"opName":"ADDRESS","gas":"0xfff3","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0","0x5"],"storage":{},"depth":1}
{"pc":7,"op":241,"opName":"CALL","gas":"0xfff1","gasCost":"0x61d0","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0","0x5","0x0"],"storage":{},"depth":1}
{"pc":8,"op":56,"opName":"CODESIZE","gas":"0x9e21","gasCost":"0x2","memory":"0x","stack":["0x1"],"storage":{},"depth":1}
{"pc":9,"op":52,"opName":"CALLVALUE","gas":"0x9e1f","gasCost":"0x2","memory":"0x","stack":["0x1","0x10"],"storage":{},"depth":1}
{"pc":10,"op":52,"opName":"CALLVALUE","gas":"0x9e1d","gasCost":"0x2","memory":"0x","stack":["0x1","0x10","0x0"],"storage":{},"depth":1}
{"pc":11,"op":56,"opName":"CODESIZE","gas":"0x9e1b","gasCost":"0x2","memory":"0x","stack":["0x1","0x10","0x0","0x0"],"storage":{},"depth":1}
{"pc":12,"op":50,"opName":"ORIGIN","gas":"0x9e19","gasCost":"0x2","memory":"0x","stack":["0x1","0x10","0x0","0x0","0x10"],"storage":{},"depth":1}
{"pc":13,"op":52,"opName":"CALLVALUE","gas":"0x9e17","gasCost":"0x2","memory":"0x","stack":["0x1","0x10","0x0","0x0","0x10","0x0"],"storage":{},"depth":1}
{"pc":14,"op":57,"opName":"CODECOPY","gas":"0x9e15","gasCost":"0x9","memory":"0x","stack":["0x1","0x10","0x0","0x0","0x10","0x0","0x0"],"storage":{},"depth":1}
{"pc":15,"op":240,"opName":"CREATE","gas":"0x9e0c","gasCost":"0x9e0c","memory":"0x32343434345830f138343438323439f0","stack":["0x1","0x10","0x0","0x0"],"storage":{},"depth":1}
{"pc":0,"op":50,"opName":"ORIGIN","gas":"0x210c","gasCost":"0x2","memory":"0x","stack":[],"storage":{},"depth":2}
{"pc":1,"op":52,"opName":"CALLVALUE","gas":"0x210a","gasCost":"0x2","memory":"0x","stack":["0x0"],"storage":{},"depth":2}
{"pc":2,"op":52,"opName":"CALLVALUE","gas":"0x2108","gasCost":"0x2","memory":"0x","stack":["0x0","0x0"],"storage":{},"depth":2}
{"pc":3,"op":52,"opName":"CALLVALUE","gas":"0x2106","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0"],"storage":{},"depth":2}
{"pc":4,"op":52,"opName":"CALLVALUE","gas":"0x2104","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0"],"storage":{},"depth":2}
{"pc":5,"op":88,"opName":"PC","gas":"0x2102","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0"],"storage":{},"depth":2}
{"pc":6,"op":48,"opName":"ADDRESS","gas":"0x2100","gasCost":"0x2","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0","0x5"],"storage":{},"depth":2}
{"pc":7,"op":241,"opName":"CALL","gas":"0x20fe","gasCost":"0x0","memory":"0x","stack":["0x0","0x0","0x0","0x0","0x0","0x5","0xbd770416a3345f91e4b34576cb804a576fa48eb1"],"storage":{},"depth":2}
"#,
);
run_test(
Informant::default(),
&compare_json,
"3260D85554",
0xffff,
r#"
{"pc":0,"op":50,"opName":"ORIGIN","gas":"0xffff","gasCost":"0x2","memory":"0x","stack":[],"storage":{},"depth":1}
{"pc":1,"op":96,"opName":"PUSH1","gas":"0xfffd","gasCost":"0x3","memory":"0x","stack":["0x0"],"storage":{},"depth":1}
{"pc":3,"op":85,"opName":"SSTORE","gas":"0xfffa","gasCost":"0x1388","memory":"0x","stack":["0x0","0xd8"],"storage":{},"depth":1}
{"pc":4,"op":84,"opName":"SLOAD","gas":"0xec72","gasCost":"0x0","memory":"0x","stack":[],"storage":{"0x00000000000000000000000000000000000000000000000000000000000000d8":"0x0000000000000000000000000000000000000000000000000000000000000000"},"depth":1}
"#,
);
}
#[test]
fn should_omit_storage_and_memory_flag() {
// should omit storage
run_test(
Informant::new(Config::new(true, true)),
&compare_json,
"3260D85554",
0xffff,
r#"
{"pc":0,"op":50,"opName":"ORIGIN","gas":"0xffff","gasCost":"0x2","memory":"","stack":[],"storage":null,"depth":1}
{"pc":1,"op":96,"opName":"PUSH1","gas":"0xfffd","gasCost":"0x3","memory":"","stack":["0x0"],"storage":null,"depth":1}
{"pc":3,"op":85,"opName":"SSTORE","gas":"0xfffa","gasCost":"0x1388","memory":"","stack":["0x0","0xd8"],"storage":null,"depth":1}
{"pc":4,"op":84,"opName":"SLOAD","gas":"0xec72","gasCost":"0x0","memory":"","stack":[],"storage":null,"depth":1}
"#,
)
}
}


@@ -1,34 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! VM Output display utils.
use std::time::Duration;
pub mod config;
pub mod json;
pub mod simple;
pub mod std_json;
/// Formats duration into human readable format.
pub fn format_time(time: &Duration) -> String {
format!("{}.{:.9}s", time.as_secs(), time.subsec_nanos())
}
/// Formats the time as microseconds.
pub fn as_micros(time: &Duration) -> u64 {
time.as_secs() * 1_000_000 + time.subsec_nanos() as u64 / 1_000
}
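A quick illustration of the conversion in `as_micros` (not part of the original file): whole seconds scale by 1,000,000 and the nanosecond remainder is integer-divided by 1,000, so sub-microsecond precision is dropped.
```rust
// Relies on the `use std::time::Duration;` import already present in the module above.
#[test]
fn as_micros_example() {
    // 2 s + 345_678_000 ns  ->  2 * 1_000_000 + 345_678_000 / 1_000 = 2_345_678 us
    assert_eq!(as_micros(&Duration::new(2, 345_678_000)), 2_345_678);
}
```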


@@ -1,74 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Simple VM output.
use super::config::Config;
use bytes::ToPretty;
use ethcore::trace;
use display;
use info as vm;
/// Simple formatting informant.
#[derive(Default)]
pub struct Informant {
config: Config,
}
impl Informant {
pub fn new(config: Config) -> Informant {
Informant { config }
}
}
impl vm::Informant for Informant {
type Sink = Config;
fn before_test(&mut self, name: &str, action: &str) {
println!("Test: {} ({})", name, action);
}
fn clone_sink(&self) -> Self::Sink {
self.config
}
fn finish(result: vm::RunResult<Self::Output>, _sink: &mut Self::Sink) {
match result {
Ok(success) => {
println!("Output: 0x{}", success.output.to_hex());
println!("Gas used: {:x}", success.gas_used);
println!("Time: {}", display::format_time(&success.time));
}
Err(failure) => {
println!("Error: {}", failure.error);
println!("Time: {}", display::format_time(&failure.time));
}
}
}
}
impl trace::VMTracer for Informant {
type Output = ();
fn prepare_subtrace(&mut self, _code: &[u8]) {
Default::default()
}
fn done_subtrace(&mut self) {}
fn drain(self) -> Option<()> {
None
}
}


@@ -1,413 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Standardized JSON VM output.
use std::{collections::HashMap, io};
use super::config::Config;
use bytes::ToPretty;
use display;
use ethcore::{pod_state, trace};
use ethereum_types::{BigEndianHash, H256, U256};
use info as vm;
pub trait Writer: io::Write + Send + Sized {
fn clone(&self) -> Self;
fn default() -> Self;
}
impl Writer for io::Stdout {
fn clone(&self) -> Self {
io::stdout()
}
fn default() -> Self {
io::stdout()
}
}
impl Writer for io::Stderr {
fn clone(&self) -> Self {
io::stderr()
}
fn default() -> Self {
io::stderr()
}
}
/// JSON formatting informant.
pub struct Informant<Trace, Out> {
code: Vec<u8>,
instruction: u8,
depth: usize,
stack: Vec<U256>,
storage: HashMap<H256, H256>,
subinfos: Vec<Informant<Trace, Out>>,
subdepth: usize,
trace_sink: Trace,
out_sink: Out,
config: Config,
}
impl Default for Informant<io::Stderr, io::Stdout> {
fn default() -> Self {
Self::new(io::stderr(), io::stdout(), Config::default())
}
}
impl Informant<io::Stdout, io::Stdout> {
/// std json informant using out only.
pub fn out_only(config: Config) -> Self {
Self::new(io::stdout(), io::stdout(), config)
}
}
impl Informant<io::Stderr, io::Stderr> {
/// std json informant using err only.
pub fn err_only(config: Config) -> Self {
Self::new(io::stderr(), io::stderr(), config)
}
}
impl Informant<io::Stderr, io::Stdout> {
pub fn new_default(config: Config) -> Self {
let mut informant = Self::default();
informant.config = config;
informant
}
}
impl<Trace: Writer, Out: Writer> Informant<Trace, Out> {
pub fn new(trace_sink: Trace, out_sink: Out, config: Config) -> Self {
Informant {
code: Default::default(),
instruction: Default::default(),
depth: Default::default(),
stack: Default::default(),
storage: Default::default(),
subinfos: Default::default(),
subdepth: 0,
trace_sink,
out_sink,
config,
}
}
fn with_informant_in_depth<F: Fn(&mut Informant<Trace, Out>)>(
informant: &mut Informant<Trace, Out>,
depth: usize,
f: F,
) {
if depth == 0 {
f(informant);
} else {
Self::with_informant_in_depth(
informant
.subinfos
.last_mut()
.expect("prepare/done_trace are not balanced"),
depth - 1,
f,
);
}
}
fn dump_state_into(
trace_sink: &mut Trace,
root: H256,
end_state: &Option<pod_state::PodState>,
) {
if let Some(ref end_state) = end_state {
let dump_data = json!({
"root": root,
"accounts": end_state,
});
writeln!(trace_sink, "{}", dump_data).expect("The sink must be writeable.");
}
}
}
impl<Trace: Writer, Out: Writer> vm::Informant for Informant<Trace, Out> {
type Sink = (Trace, Out, Config);
fn before_test(&mut self, name: &str, action: &str) {
let out_data = json!({
"action": action,
"test": name,
});
writeln!(&mut self.out_sink, "{}", out_data).expect("The sink must be writeable.");
}
fn set_gas(&mut self, _gas: U256) {}
fn clone_sink(&self) -> Self::Sink {
(
self.trace_sink.clone(),
self.out_sink.clone(),
self.config.clone(),
)
}
fn finish(
result: vm::RunResult<<Self as trace::VMTracer>::Output>,
(ref mut trace_sink, ref mut out_sink, _): &mut Self::Sink,
) {
match result {
Ok(success) => {
let trace_data = json!({"stateRoot": success.state_root});
writeln!(trace_sink, "{}", trace_data).expect("The sink must be writeable.");
Self::dump_state_into(trace_sink, success.state_root, &success.end_state);
let out_data = json!({
"output": format!("0x{}", success.output.to_hex()),
"gasUsed": format!("{:#x}", success.gas_used),
"time": display::as_micros(&success.time),
});
writeln!(out_sink, "{}", out_data).expect("The sink must be writeable.");
}
Err(failure) => {
let out_data = json!({
"error": &failure.error.to_string(),
"gasUsed": format!("{:#x}", failure.gas_used),
"time": display::as_micros(&failure.time),
});
Self::dump_state_into(trace_sink, failure.state_root, &failure.end_state);
writeln!(out_sink, "{}", out_data).expect("The sink must be writeable.");
}
}
}
}
impl<Trace: Writer, Out: Writer> trace::VMTracer for Informant<Trace, Out> {
type Output = ();
fn trace_next_instruction(&mut self, pc: usize, instruction: u8, current_gas: U256) -> bool {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant<Trace, Out>| {
let storage = if informant.config.omit_storage_output() {
None
} else {
Some(&informant.storage)
};
let info = ::evm::Instruction::from_u8(instruction).map(|i| i.info());
informant.instruction = instruction;
let trace_data = json!({
"pc": pc,
"op": instruction,
"opName": info.map(|i| i.name).unwrap_or(""),
"gas": format!("{:#x}", current_gas),
"stack": informant.stack,
"storage": storage,
"depth": informant.depth,
});
writeln!(&mut informant.trace_sink, "{}", trace_data)
.expect("The sink must be writeable.");
});
true
}
fn trace_prepare_execute(
&mut self,
_pc: usize,
_instruction: u8,
_gas_cost: U256,
_mem_written: Option<(usize, usize)>,
store_written: Option<(U256, U256)>,
) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant<Trace, Out>| {
if let Some((pos, val)) = store_written {
informant.storage.insert(
BigEndianHash::from_uint(&pos),
BigEndianHash::from_uint(&val),
);
}
});
}
fn trace_executed(&mut self, _gas_used: U256, stack_push: &[U256], _mem: &[u8]) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant<Trace, Out>| {
let info = ::evm::Instruction::from_u8(informant.instruction).map(|i| i.info());
let len = informant.stack.len();
let info_args = info.map(|i| i.args).unwrap_or(0);
informant
.stack
.truncate(if len > info_args { len - info_args } else { 0 });
informant.stack.extend_from_slice(stack_push);
});
}
fn prepare_subtrace(&mut self, code: &[u8]) {
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant<Trace, Out>| {
let mut vm = Informant::new(
informant.trace_sink.clone(),
informant.out_sink.clone(),
informant.config,
);
vm.depth = informant.depth + 1;
vm.code = code.to_vec();
informant.subinfos.push(vm);
});
self.subdepth += 1;
}
fn done_subtrace(&mut self) {
self.subdepth -= 1;
let subdepth = self.subdepth;
Self::with_informant_in_depth(self, subdepth, |informant: &mut Informant<Trace, Out>| {
informant.subinfos.pop();
});
}
fn drain(self) -> Option<Self::Output> {
None
}
}
#[cfg(test)]
pub mod tests {
use super::*;
use info::tests::run_test;
use std::sync::{Arc, Mutex};
#[derive(Debug, Clone, Default)]
pub struct TestWriter(pub Arc<Mutex<Vec<u8>>>);
impl Writer for TestWriter {
fn clone(&self) -> Self {
Clone::clone(self)
}
fn default() -> Self {
Default::default()
}
}
impl io::Write for TestWriter {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.0.lock().unwrap().write(buf)
}
fn flush(&mut self) -> io::Result<()> {
self.0.lock().unwrap().flush()
}
}
pub fn informant(config: Config) -> (Informant<TestWriter, TestWriter>, Arc<Mutex<Vec<u8>>>) {
let trace_writer: TestWriter = Default::default();
let out_writer: TestWriter = Default::default();
let res = trace_writer.0.clone();
(Informant::new(trace_writer, out_writer, config), res)
}
#[test]
fn should_trace_failure() {
let (inf, res) = informant(Config::default());
run_test(
inf,
move |_, expected| {
let bytes = res.lock().unwrap();
assert_eq!(expected, &String::from_utf8_lossy(&**bytes))
},
"60F8d6",
0xffff,
r#"{"depth":1,"gas":"0xffff","op":96,"opName":"PUSH1","pc":0,"stack":[],"storage":{}}
{"depth":1,"gas":"0xfffc","op":214,"opName":"","pc":2,"stack":["0xf8"],"storage":{}}
"#,
);
let (inf, res) = informant(Config::default());
run_test(
inf,
move |_, expected| {
let bytes = res.lock().unwrap();
assert_eq!(expected, &String::from_utf8_lossy(&**bytes))
},
"F8d6",
0xffff,
r#"{"depth":1,"gas":"0xffff","op":248,"opName":"","pc":0,"stack":[],"storage":{}}
"#,
);
}
#[test]
fn should_trace_create_correctly() {
let (informant, res) = informant(Config::default());
run_test(
informant,
move |_, expected| {
let bytes = res.lock().unwrap();
assert_eq!(expected, &String::from_utf8_lossy(&**bytes))
},
"32343434345830f138343438323439f0",
0xffff,
r#"{"depth":1,"gas":"0xffff","op":50,"opName":"ORIGIN","pc":0,"stack":[],"storage":{}}
{"depth":1,"gas":"0xfffd","op":52,"opName":"CALLVALUE","pc":1,"stack":["0x0"],"storage":{}}
{"depth":1,"gas":"0xfffb","op":52,"opName":"CALLVALUE","pc":2,"stack":["0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0xfff9","op":52,"opName":"CALLVALUE","pc":3,"stack":["0x0","0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0xfff7","op":52,"opName":"CALLVALUE","pc":4,"stack":["0x0","0x0","0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0xfff5","op":88,"opName":"PC","pc":5,"stack":["0x0","0x0","0x0","0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0xfff3","op":48,"opName":"ADDRESS","pc":6,"stack":["0x0","0x0","0x0","0x0","0x0","0x5"],"storage":{}}
{"depth":1,"gas":"0xfff1","op":241,"opName":"CALL","pc":7,"stack":["0x0","0x0","0x0","0x0","0x0","0x5","0x0"],"storage":{}}
{"depth":1,"gas":"0x9e21","op":56,"opName":"CODESIZE","pc":8,"stack":["0x1"],"storage":{}}
{"depth":1,"gas":"0x9e1f","op":52,"opName":"CALLVALUE","pc":9,"stack":["0x1","0x10"],"storage":{}}
{"depth":1,"gas":"0x9e1d","op":52,"opName":"CALLVALUE","pc":10,"stack":["0x1","0x10","0x0"],"storage":{}}
{"depth":1,"gas":"0x9e1b","op":56,"opName":"CODESIZE","pc":11,"stack":["0x1","0x10","0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0x9e19","op":50,"opName":"ORIGIN","pc":12,"stack":["0x1","0x10","0x0","0x0","0x10"],"storage":{}}
{"depth":1,"gas":"0x9e17","op":52,"opName":"CALLVALUE","pc":13,"stack":["0x1","0x10","0x0","0x0","0x10","0x0"],"storage":{}}
{"depth":1,"gas":"0x9e15","op":57,"opName":"CODECOPY","pc":14,"stack":["0x1","0x10","0x0","0x0","0x10","0x0","0x0"],"storage":{}}
{"depth":1,"gas":"0x9e0c","op":240,"opName":"CREATE","pc":15,"stack":["0x1","0x10","0x0","0x0"],"storage":{}}
{"depth":2,"gas":"0x210c","op":50,"opName":"ORIGIN","pc":0,"stack":[],"storage":{}}
{"depth":2,"gas":"0x210a","op":52,"opName":"CALLVALUE","pc":1,"stack":["0x0"],"storage":{}}
{"depth":2,"gas":"0x2108","op":52,"opName":"CALLVALUE","pc":2,"stack":["0x0","0x0"],"storage":{}}
{"depth":2,"gas":"0x2106","op":52,"opName":"CALLVALUE","pc":3,"stack":["0x0","0x0","0x0"],"storage":{}}
{"depth":2,"gas":"0x2104","op":52,"opName":"CALLVALUE","pc":4,"stack":["0x0","0x0","0x0","0x0"],"storage":{}}
{"depth":2,"gas":"0x2102","op":88,"opName":"PC","pc":5,"stack":["0x0","0x0","0x0","0x0","0x0"],"storage":{}}
{"depth":2,"gas":"0x2100","op":48,"opName":"ADDRESS","pc":6,"stack":["0x0","0x0","0x0","0x0","0x0","0x5"],"storage":{}}
{"depth":2,"gas":"0x20fe","op":241,"opName":"CALL","pc":7,"stack":["0x0","0x0","0x0","0x0","0x0","0x5","0xbd770416a3345f91e4b34576cb804a576fa48eb1"],"storage":{}}
"#,
)
}
#[test]
fn should_omit_storage_and_memory_flag() {
// should omit storage
let (informant, res) = informant(Config::new(true, true));
run_test(
informant,
move |_, expected| {
let bytes = res.lock().unwrap();
assert_eq!(expected, &String::from_utf8_lossy(&**bytes))
},
"3260D85554",
0xffff,
r#"{"depth":1,"gas":"0xffff","op":50,"opName":"ORIGIN","pc":0,"stack":[],"storage":null}
{"depth":1,"gas":"0xfffd","op":96,"opName":"PUSH1","pc":1,"stack":["0x0"],"storage":null}
{"depth":1,"gas":"0xfffa","op":85,"opName":"SSTORE","pc":3,"stack":["0x0","0xd8"],"storage":null}
{"depth":1,"gas":"0xec72","op":84,"opName":"SLOAD","pc":4,"stack":[],"storage":null}
"#,
)
}
}
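For orientation, the three constructors above line up with the standardized-JSON output flags in the usage text; the exact flag wiring in the binary is not shown here, so treat this as a hedged sketch that assumes `Informant` and `Config` from the module above are in scope.
```rust
fn choose_sinks() {
    // --std-json --std-out-only: trace lines and the result line both go to stdout.
    // Config::new(omit_storage_output, omit_memory_output): drop storage, keep memory.
    let _stdout_only = Informant::out_only(Config::new(true, false));

    // --std-json --std-err-only: everything goes to stderr instead.
    let _stderr_only = Informant::err_only(Config::default());

    // Default split: trace lines to stderr, the final output/gasUsed line to stdout.
    let _split = Informant::new_default(Config::default());
}
```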


@@ -1,316 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! VM runner.
use ethcore::{
client::{self, EvmTestClient, EvmTestError, TransactErr, TransactSuccess},
pod_state, spec, state, state_db, trace, TrieSpec,
};
use ethereum_types::{H256, U256};
use ethjson;
use std::time::{Duration, Instant};
use types::transaction;
use vm::ActionParams;
/// VM execution informant
pub trait Informant: trace::VMTracer {
/// Sink to use with finish
type Sink;
/// Display a single run init message
fn before_test(&mut self, test: &str, action: &str);
/// Set initial gas.
fn set_gas(&mut self, _gas: U256) {}
/// Clone sink.
fn clone_sink(&self) -> Self::Sink;
/// Display final result.
fn finish(result: RunResult<Self::Output>, &mut Self::Sink);
}
/// Execution finished correctly
#[derive(Debug)]
pub struct Success<T> {
/// State root
pub state_root: H256,
/// Used gas
pub gas_used: U256,
/// Output as bytes
pub output: Vec<u8>,
/// Time Taken
pub time: Duration,
/// Traces
pub traces: Option<T>,
/// Optional end state dump
pub end_state: Option<pod_state::PodState>,
}
/// Execution failed
#[derive(Debug)]
pub struct Failure<T> {
/// State root
pub state_root: H256,
/// Used gas
pub gas_used: U256,
/// Internal error
pub error: EvmTestError,
/// Duration
pub time: Duration,
/// Traces
pub traces: Option<T>,
/// Optional end state dump
pub end_state: Option<pod_state::PodState>,
}
/// EVM Execution result
pub type RunResult<T> = Result<Success<T>, Failure<T>>;
/// Execute given `ActionParams` and return the result.
pub fn run_action<T: Informant>(
spec: &spec::Spec,
mut params: ActionParams,
mut informant: T,
trie_spec: TrieSpec,
) -> RunResult<T::Output> {
informant.set_gas(params.gas);
// if the code is not overwritten from CLI, use code from spec file.
if params.code.is_none() {
if let Some(acc) = spec.genesis_state().get().get(&params.code_address) {
params.code = acc.code.clone().map(::std::sync::Arc::new);
params.code_hash = None;
}
}
run(
spec,
trie_spec,
params.gas,
spec.genesis_state(),
|mut client| {
let result = match client.call(params, &mut trace::NoopTracer, &mut informant) {
Ok(r) => (Ok(r.return_data.to_vec()), Some(r.gas_left)),
Err(err) => (Err(err), None),
};
(result.0, H256::zero(), None, result.1, informant.drain())
},
)
}
/// Execute given Transaction and verify resulting state root.
pub fn run_transaction<T: Informant>(
name: &str,
idx: usize,
spec: &ethjson::spec::ForkSpec,
pre_state: &pod_state::PodState,
post_root: H256,
env_info: &client::EnvInfo,
transaction: transaction::SignedTransaction,
mut informant: T,
trie_spec: TrieSpec,
) {
let spec_name = format!("{:?}", spec).to_lowercase();
let spec = match EvmTestClient::spec_from_json(spec) {
Some(spec) => {
informant.before_test(&format!("{}:{}:{}", name, spec_name, idx), "starting");
spec
}
None => {
informant.before_test(
&format!("{}:{}:{}", name, spec_name, idx),
"skipping because of missing spec",
);
return;
}
};
informant.set_gas(env_info.gas_limit);
let mut sink = informant.clone_sink();
let result = run(
&spec,
trie_spec,
transaction.tx().gas,
pre_state,
|mut client| {
let result = client.transact(env_info, transaction, trace::NoopTracer, informant);
match result {
Ok(TransactSuccess {
state_root,
gas_left,
output,
vm_trace,
end_state,
..
}) => {
if state_root != post_root {
(
Err(EvmTestError::PostCondition(format!(
"State root mismatch (got: {:#x}, expected: {:#x})",
state_root, post_root,
))),
state_root,
end_state,
Some(gas_left),
None,
)
} else {
(Ok(output), state_root, end_state, Some(gas_left), vm_trace)
}
}
Err(TransactErr {
state_root,
error,
end_state,
}) => (
Err(EvmTestError::PostCondition(format!(
"Unexpected execution error: {:?}",
error
))),
state_root,
end_state,
None,
None,
),
}
},
);
T::finish(result, &mut sink)
}
fn dump_state(state: &state::State<state_db::StateDB>) -> Option<pod_state::PodState> {
state.to_pod_full().ok()
}
/// Execute VM with given `ActionParams`
pub fn run<'a, F, X>(
spec: &'a spec::Spec,
trie_spec: TrieSpec,
initial_gas: U256,
pre_state: &'a pod_state::PodState,
run: F,
) -> RunResult<X>
where
F: FnOnce(
EvmTestClient,
) -> (
Result<Vec<u8>, EvmTestError>,
H256,
Option<pod_state::PodState>,
Option<U256>,
Option<X>,
),
{
let do_dump = trie_spec == TrieSpec::Fat;
let mut test_client =
EvmTestClient::from_pod_state_with_trie(spec, pre_state.clone(), trie_spec).map_err(
|error| Failure {
gas_used: 0.into(),
error,
time: Duration::from_secs(0),
traces: None,
state_root: H256::default(),
end_state: None,
},
)?;
if do_dump {
test_client.set_dump_state_fn(dump_state);
}
let start = Instant::now();
let result = run(test_client);
let time = start.elapsed();
match result {
(Ok(output), state_root, end_state, gas_left, traces) => Ok(Success {
state_root,
gas_used: gas_left
.map(|gas_left| initial_gas - gas_left)
.unwrap_or(initial_gas),
output,
time,
traces,
end_state,
}),
(Err(error), state_root, end_state, gas_left, traces) => Err(Failure {
gas_used: gas_left
.map(|gas_left| initial_gas - gas_left)
.unwrap_or(initial_gas),
error,
time,
traces,
state_root,
end_state,
}),
}
}
#[cfg(test)]
pub mod tests {
use super::*;
use ethereum_types::Address;
use rustc_hex::FromHex;
use std::sync::Arc;
use tempdir::TempDir;
pub fn run_test<T, I, F>(informant: I, compare: F, code: &str, gas: T, expected: &str)
where
T: Into<U256>,
I: Informant,
F: FnOnce(Option<I::Output>, &str),
{
let mut params = ActionParams::default();
params.code = Some(Arc::new(code.from_hex().unwrap()));
params.gas = gas.into();
let tempdir = TempDir::new("").unwrap();
let spec = ::ethcore::ethereum::new_foundation(&tempdir.path());
let result = run_action(&spec, params, informant, TrieSpec::Secure);
match result {
Ok(Success { traces, .. }) => compare(traces, expected),
Err(Failure { traces, .. }) => compare(traces, expected),
}
}
#[test]
fn should_call_account_from_spec() {
use display::{config::Config, std_json::tests::informant};
let (inf, res) = informant(Config::default());
let mut params = ActionParams::default();
params.code_address = Address::from_low_u64_be(0x20);
params.gas = 0xffff.into();
let spec = ::ethcore::ethereum::load(None, include_bytes!("../res/testchain.json"));
let _result = run_action(&spec, params, inf, TrieSpec::Secure);
assert_eq!(
&String::from_utf8_lossy(&**res.lock().unwrap()),
r#"{"depth":1,"gas":"0xffff","op":98,"opName":"PUSH3","pc":0,"stack":[],"storage":{}}
{"depth":1,"gas":"0xfffc","op":96,"opName":"PUSH1","pc":4,"stack":["0xaaaaaa"],"storage":{}}
{"depth":1,"gas":"0xfff9","op":96,"opName":"PUSH1","pc":6,"stack":["0xaaaaaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xfff6","op":80,"opName":"POP","pc":8,"stack":["0xaaaaaa","0xaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xfff4","op":96,"opName":"PUSH1","pc":9,"stack":["0xaaaaaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xfff1","op":96,"opName":"PUSH1","pc":11,"stack":["0xaaaaaa","0xaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xffee","op":96,"opName":"PUSH1","pc":13,"stack":["0xaaaaaa","0xaa","0xaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xffeb","op":96,"opName":"PUSH1","pc":15,"stack":["0xaaaaaa","0xaa","0xaa","0xaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xffe8","op":96,"opName":"PUSH1","pc":17,"stack":["0xaaaaaa","0xaa","0xaa","0xaa","0xaa","0xaa"],"storage":{}}
{"depth":1,"gas":"0xffe5","op":96,"opName":"PUSH1","pc":19,"stack":["0xaaaaaa","0xaa","0xaa","0xaa","0xaa","0xaa","0xaa"],"storage":{}}
"#
);
}
}


@ -1,511 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! OpenEthereum EVM interpreter binary.
#![warn(missing_docs)]
extern crate common_types as types;
extern crate ethcore;
extern crate ethjson;
extern crate rustc_hex;
extern crate serde;
#[macro_use]
extern crate serde_derive;
#[macro_use]
extern crate serde_json;
extern crate docopt;
extern crate env_logger;
extern crate ethereum_types;
extern crate evm;
extern crate panic_hook;
extern crate parity_bytes as bytes;
extern crate vm;
#[cfg(test)]
#[macro_use]
extern crate pretty_assertions;
#[cfg(test)]
extern crate tempdir;
use bytes::Bytes;
use docopt::Docopt;
use ethcore::{json_tests, spec, TrieSpec};
use ethereum_types::{Address, U256};
use ethjson::spec::ForkSpec;
use evm::EnvInfo;
use rustc_hex::FromHex;
use std::{fmt, fs, path::PathBuf, sync::Arc};
use vm::{ActionParams, CallType};
mod display;
mod info;
use info::Informant;
const USAGE: &'static str = r#"
EVM implementation for Parity.
Copyright 2015-2020 Parity Technologies (UK) Ltd.
Usage:
openethereum-evm state-test <file> [--json --std-json --std-dump-json --only NAME --chain CHAIN --std-out-only --std-err-only --omit-storage-output --omit-memory-output]
openethereum-evm stats [options]
openethereum-evm stats-jsontests-vm <file>
openethereum-evm [options]
openethereum-evm [-h | --help]
Commands:
state-test Run a state test from a json file.
stats Execute EVM runtime code and return the statistics.
stats-jsontests-vm Execute standard json-tests format VMTests and return
timing statistics in tsv format.
Transaction options:
--code CODE Contract code as hex (without 0x).
--to ADDRESS Recipient address (without 0x).
--from ADDRESS Sender address (without 0x).
--input DATA Input data as hex (without 0x).
--gas GAS Supplied gas as hex (without 0x).
--gas-price WEI Supplied gas price as hex (without 0x).
State test options:
--chain CHAIN Run only from specific chain name (i.e. one of EIP150, EIP158,
Frontier, Homestead, Byzantium, Constantinople,
ConstantinopleFix, Istanbul, EIP158ToByzantiumAt5, FrontierToHomesteadAt5,
HomesteadToDaoAt5, HomesteadToEIP150At5, Berlin, Yolo3).
--only NAME Runs only a single test matching the name.
General options:
--json Display verbose results in JSON.
--std-json Display results in standardized JSON format.
--std-err-only With --std-json redirect to err output only.
--std-out-only With --std-json redirect to out output only.
--omit-storage-output With --std-json omit storage output.
--omit-memory-output With --std-json omit memory output.
--std-dump-json Display results in standardized JSON format
with additional state dump.
Display result state dump in standardized JSON format.
--chain CHAIN Chain spec file path.
-h, --help Display this message and exit.
"#;
fn main() {
panic_hook::set_abort();
env_logger::init();
let args: Args = Docopt::new(USAGE)
.and_then(|d| d.deserialize())
.unwrap_or_else(|e| e.exit());
let config = args.config();
if args.cmd_state_test {
run_state_test(args)
} else if args.cmd_stats_jsontests_vm {
run_stats_jsontests_vm(args)
} else if args.flag_json {
run_call(args, display::json::Informant::new(config))
} else if args.flag_std_dump_json || args.flag_std_json {
if args.flag_std_err_only {
run_call(args, display::std_json::Informant::err_only(config))
} else if args.flag_std_out_only {
run_call(args, display::std_json::Informant::out_only(config))
} else {
run_call(args, display::std_json::Informant::new_default(config))
};
} else {
run_call(args, display::simple::Informant::new(config))
}
}
fn run_stats_jsontests_vm(args: Args) {
use json_tests::HookType;
use std::{
collections::HashMap,
time::{Duration, Instant},
};
let file = args.arg_file.expect("FILE (or PATH) is required");
let mut timings: HashMap<String, (Instant, Option<Duration>)> = HashMap::new();
{
let mut record_time = |name: &str, typ: HookType| match typ {
HookType::OnStart => {
timings.insert(name.to_string(), (Instant::now(), None));
}
HookType::OnStop => {
timings.entry(name.to_string()).and_modify(|v| {
v.1 = Some(v.0.elapsed());
});
}
};
for file_path in json_tests::find_json_files_recursive(&file) {
let json_data = std::fs::read(&file_path).unwrap();
json_tests::json_executive_test(&file_path, &json_data, &mut record_time);
}
}
for (name, v) in timings {
println!(
"{}\t{}",
name,
display::as_micros(&v.1.expect("All hooks are called with OnStop; qed"))
);
}
}
fn run_state_test(args: Args) {
use ethjson::state::test::Test;
let config = args.config();
let file = args.arg_file.expect("FILE is required");
let mut file = match fs::File::open(&file) {
Err(err) => die(format!("Unable to open: {:?}: {}", file, err)),
Ok(file) => file,
};
let state_test = match Test::load(&mut file) {
Err(err) => die(format!("Unable to load the test file: {}", err)),
Ok(test) => test,
};
let only_test = args.flag_only.map(|s| s.to_lowercase());
let only_chain = args.flag_chain.map(|s| s.to_lowercase());
for (name, test) in state_test {
if let Some(false) = only_test
.as_ref()
.map(|only_test| &name.to_lowercase() == only_test)
{
continue;
}
let multitransaction = test.transaction;
let env_info: EnvInfo = test.env.into();
let pre = test.pre_state.into();
for (spec, states) in test.post_states {
// Hardcode the base fee for the London tests that are missing the base fee field in env
let mut test_env = env_info.clone();
if spec >= ForkSpec::London {
if test_env.base_fee.is_none() {
test_env.base_fee = Some(0x0a.into());
}
}
if let Some(false) = only_chain
.as_ref()
.map(|only_chain| &format!("{:?}", spec).to_lowercase() == only_chain)
{
continue;
}
for (idx, state) in states.into_iter().enumerate() {
let post_root = state.hash.into();
let transaction = multitransaction.select(&state.indexes);
let trie_spec = if args.flag_std_dump_json {
TrieSpec::Fat
} else {
TrieSpec::Secure
};
if args.flag_json {
info::run_transaction(
&name,
idx,
&spec,
&pre,
post_root,
&test_env,
transaction,
display::json::Informant::new(config),
trie_spec,
)
} else if args.flag_std_dump_json || args.flag_std_json {
if args.flag_std_err_only {
info::run_transaction(
&name,
idx,
&spec,
&pre,
post_root,
&test_env,
transaction,
display::std_json::Informant::err_only(config),
trie_spec,
)
} else if args.flag_std_out_only {
info::run_transaction(
&name,
idx,
&spec,
&pre,
post_root,
&test_env,
transaction,
display::std_json::Informant::out_only(config),
trie_spec,
)
} else {
info::run_transaction(
&name,
idx,
&spec,
&pre,
post_root,
&test_env,
transaction,
display::std_json::Informant::new_default(config),
trie_spec,
)
}
} else {
info::run_transaction(
&name,
idx,
&spec,
&pre,
post_root,
&test_env,
transaction,
display::simple::Informant::new(config),
trie_spec,
)
}
}
}
}
}
fn run_call<T: Informant>(args: Args, informant: T) {
let from = arg(args.from(), "--from");
let to = arg(args.to(), "--to");
let code = arg(args.code(), "--code");
let spec = arg(args.spec(), "--chain");
let gas = arg(args.gas(), "--gas");
let gas_price = arg(args.gas_price(), "--gas-price");
let data = arg(args.data(), "--input");
if code.is_none() && to == Address::default() {
die("Either --code or --to is required.");
}
let mut params = ActionParams::default();
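// When EIP-2929 is active from genesis, pre-warm the access list with the sender, the recipient and the precompiles,
// matching the addresses a transaction starts with as already accessed.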
if spec.engine.params().eip2929_transition == 0 {
params.access_list.enable();
params.access_list.insert_address(from);
params.access_list.insert_address(to);
for (builtin, _) in spec.engine.builtins() {
params.access_list.insert_address(*builtin);
}
}
params.call_type = if code.is_none() {
CallType::Call
} else {
CallType::None
};
params.code_address = to;
params.address = to;
params.sender = from;
params.origin = from;
params.gas = gas;
params.gas_price = gas_price;
params.code = code.map(Arc::new);
params.data = data;
let mut sink = informant.clone_sink();
let result = if args.flag_std_dump_json {
info::run_action(&spec, params, informant, TrieSpec::Fat)
} else {
info::run_action(&spec, params, informant, TrieSpec::Secure)
};
T::finish(result, &mut sink);
}
#[derive(Debug, Deserialize)]
struct Args {
cmd_stats: bool,
cmd_state_test: bool,
cmd_stats_jsontests_vm: bool,
arg_file: Option<PathBuf>,
flag_only: Option<String>,
flag_from: Option<String>,
flag_to: Option<String>,
flag_code: Option<String>,
flag_gas: Option<String>,
flag_gas_price: Option<String>,
flag_input: Option<String>,
flag_chain: Option<String>,
flag_json: bool,
flag_std_json: bool,
flag_std_dump_json: bool,
flag_std_err_only: bool,
flag_std_out_only: bool,
flag_omit_storage_output: bool,
flag_omit_memory_output: bool,
}
impl Args {
pub fn gas(&self) -> Result<U256, String> {
match self.flag_gas {
Some(ref gas) => gas.parse().map_err(to_string),
None => Ok(U256::from(u64::max_value())),
}
}
pub fn gas_price(&self) -> Result<U256, String> {
match self.flag_gas_price {
Some(ref gas_price) => gas_price.parse().map_err(to_string),
None => Ok(U256::zero()),
}
}
pub fn from(&self) -> Result<Address, String> {
match self.flag_from {
Some(ref from) => from.parse().map_err(to_string),
None => Ok(Address::default()),
}
}
pub fn to(&self) -> Result<Address, String> {
match self.flag_to {
Some(ref to) => to.parse().map_err(to_string),
None => Ok(Address::default()),
}
}
pub fn code(&self) -> Result<Option<Bytes>, String> {
match self.flag_code {
Some(ref code) => code.from_hex().map(Some).map_err(to_string),
None => Ok(None),
}
}
pub fn data(&self) -> Result<Option<Bytes>, String> {
match self.flag_input {
Some(ref input) => input.from_hex().map_err(to_string).map(Some),
None => Ok(None),
}
}
pub fn spec(&self) -> Result<spec::Spec, String> {
Ok(match self.flag_chain {
Some(ref spec_name) => {
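// Debug-formatting the name wraps it in quotes, so serde_json can parse it as a JSON string into a ForkSpec.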
let fork_spec: Result<ethjson::spec::ForkSpec, _> =
serde_json::from_str(&format!("{:?}", spec_name));
if let Ok(fork_spec) = fork_spec {
ethcore::client::EvmTestClient::spec_from_json(&fork_spec)
.expect("this forkspec is not defined")
} else {
let file = fs::File::open(spec_name).map_err(|e| format!("{}", e))?;
spec::Spec::load(&::std::env::temp_dir(), file)?
}
}
None => ethcore::ethereum::new_foundation(&::std::env::temp_dir()),
})
}
pub fn config(&self) -> display::config::Config {
display::config::Config::new(self.flag_omit_storage_output, self.flag_omit_memory_output)
}
}
fn arg<T>(v: Result<T, String>, param: &str) -> T {
v.unwrap_or_else(|e| die(format!("Invalid {}: {}", param, e)))
}
fn to_string<T: fmt::Display>(msg: T) -> String {
format!("{}", msg)
}
fn die<T: fmt::Display>(msg: T) -> ! {
println!("{}", msg);
::std::process::exit(-1)
}
#[cfg(test)]
mod tests {
use super::{Args, USAGE};
use docopt::Docopt;
use ethereum_types::Address;
fn run<T: AsRef<str>>(args: &[T]) -> Args {
Docopt::new(USAGE)
.and_then(|d| d.argv(args.into_iter()).deserialize())
.unwrap()
}
#[test]
fn should_parse_all_the_options() {
let args = run(&[
"openethereum-evm",
"--json",
"--std-json",
"--std-dump-json",
"--gas",
"1",
"--gas-price",
"2",
"--from",
"0000000000000000000000000000000000000003",
"--to",
"0000000000000000000000000000000000000004",
"--code",
"05",
"--input",
"06",
"--chain",
"./testfile",
"--std-err-only",
"--std-out-only",
]);
assert_eq!(args.flag_json, true);
assert_eq!(args.flag_std_json, true);
assert_eq!(args.flag_std_dump_json, true);
assert_eq!(args.flag_std_err_only, true);
assert_eq!(args.flag_std_out_only, true);
assert_eq!(args.gas(), Ok(1.into()));
assert_eq!(args.gas_price(), Ok(2.into()));
assert_eq!(args.from(), Ok(Address::from_low_u64_be(3)));
assert_eq!(args.to(), Ok(Address::from_low_u64_be(4)));
assert_eq!(args.code(), Ok(Some(vec![05])));
assert_eq!(args.data(), Ok(Some(vec![06])));
assert_eq!(args.flag_chain, Some("./testfile".to_owned()));
}
#[test]
fn should_parse_state_test_command() {
let args = run(&[
"openethereum-evm",
"state-test",
"./file.json",
"--chain",
"homestead",
"--only=add11",
"--json",
"--std-json",
"--std-dump-json",
]);
assert_eq!(args.cmd_state_test, true);
assert!(args.arg_file.is_some());
assert_eq!(args.flag_json, true);
assert_eq!(args.flag_std_json, true);
assert_eq!(args.flag_std_dump_json, true);
assert_eq!(args.flag_chain, Some("homestead".to_owned()));
assert_eq!(args.flag_only, Some("add11".to_owned()));
}
}


@ -1,141 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crate::params::SpecType;
use std::num::NonZeroU32;
#[derive(Debug, PartialEq)]
pub enum AccountCmd {
New(NewAccount),
List(ListAccounts),
Import(ImportAccounts),
}
#[derive(Debug, PartialEq)]
pub struct ListAccounts {
pub path: String,
pub spec: SpecType,
}
#[derive(Debug, PartialEq)]
pub struct NewAccount {
pub iterations: NonZeroU32,
pub path: String,
pub spec: SpecType,
pub password_file: Option<String>,
}
#[derive(Debug, PartialEq)]
pub struct ImportAccounts {
pub from: Vec<String>,
pub to: String,
pub spec: SpecType,
}
#[cfg(not(feature = "accounts"))]
pub fn execute(_cmd: AccountCmd) -> Result<String, String> {
Err("Account management is deprecated. Please see #9997 for alternatives:\nhttps://github.com/openethereum/openethereum/issues/9997".into())
}
#[cfg(feature = "accounts")]
mod command {
use super::*;
use crate::{
accounts::{AccountProvider, AccountProviderSettings},
helpers::{password_from_file, password_prompt},
};
use ethstore::{accounts_dir::RootDiskDirectory, import_account, import_accounts, EthStore};
use std::path::PathBuf;
pub fn execute(cmd: AccountCmd) -> Result<String, String> {
match cmd {
AccountCmd::New(new_cmd) => new(new_cmd),
AccountCmd::List(list_cmd) => list(list_cmd),
AccountCmd::Import(import_cmd) => import(import_cmd),
}
}
fn keys_dir(path: String, spec: SpecType) -> Result<RootDiskDirectory, String> {
let spec = spec.spec(&::std::env::temp_dir())?;
let mut path = PathBuf::from(&path);
path.push(spec.data_dir);
RootDiskDirectory::create(path).map_err(|e| format!("Could not open keys directory: {}", e))
}
fn secret_store(
dir: Box<RootDiskDirectory>,
iterations: Option<NonZeroU32>,
) -> Result<EthStore, String> {
match iterations {
Some(i) => EthStore::open_with_iterations(dir, i),
_ => EthStore::open(dir),
}
.map_err(|e| format!("Could not open keys store: {}", e))
}
fn new(n: NewAccount) -> Result<String, String> {
let password = match n.password_file {
Some(file) => password_from_file(file)?,
None => password_prompt()?,
};
let dir = Box::new(keys_dir(n.path, n.spec)?);
let secret_store = Box::new(secret_store(dir, Some(n.iterations))?);
let acc_provider = AccountProvider::new(secret_store, AccountProviderSettings::default());
let new_account = acc_provider
.new_account(&password)
.map_err(|e| format!("Could not create new account: {}", e))?;
Ok(format!("0x{:x}", new_account))
}
fn list(list_cmd: ListAccounts) -> Result<String, String> {
let dir = Box::new(keys_dir(list_cmd.path, list_cmd.spec)?);
let secret_store = Box::new(secret_store(dir, None)?);
let acc_provider = AccountProvider::new(secret_store, AccountProviderSettings::default());
let accounts = acc_provider.accounts().map_err(|e| format!("{}", e))?;
let result = accounts
.into_iter()
.map(|a| format!("0x{:x}", a))
.collect::<Vec<String>>()
.join("\n");
Ok(result)
}
fn import(i: ImportAccounts) -> Result<String, String> {
let to = keys_dir(i.to, i.spec)?;
let mut imported = 0;
for path in &i.from {
let path = PathBuf::from(path);
if path.is_dir() {
let from = RootDiskDirectory::at(&path);
imported += import_accounts(&from, &to)
.map_err(|e| format!("Importing accounts from {:?} failed: {}", path, e))?
.len();
} else if path.is_file() {
import_account(&path, &to)
.map_err(|e| format!("Importing account from {:?} failed: {}", path, e))?;
imported += 1;
}
}
Ok(format!("{} account(s) imported", imported))
}
}
#[cfg(feature = "accounts")]
pub use self::command::execute;
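
For orientation, a minimal sketch (not part of the original file) of how the command layer above might be driven; all values are illustrative, `password.txt` is a hypothetical path, and it assumes `SpecType` implements `Default` as in the upstream params module:

fn create_account_example() -> Result<String, String> {
    // Illustrative values only; the iteration count mirrors a typical keystore default.
    let cmd = AccountCmd::New(NewAccount {
        iterations: std::num::NonZeroU32::new(10_240).expect("10240 is non-zero; qed"),
        path: "keys".into(),
        spec: SpecType::default(),
        password_file: Some("password.txt".into()),
    });
    // On success, returns the new account address as a 0x-prefixed hex string.
    execute(cmd)
}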


@ -1,263 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::sync::Arc;
use crypto::publickey;
use dir::Directories;
use ethereum_types::{Address, H160};
use ethkey::Password;
use crate::params::{AccountsConfig, SpecType};
#[cfg(not(feature = "accounts"))]
mod accounts {
use super::*;
/// Dummy AccountProvider
pub struct AccountProvider;
impl ::ethcore::miner::LocalAccounts for AccountProvider {
fn is_local(&self, _address: &Address) -> bool {
false
}
}
pub fn prepare_account_provider(
_spec: &SpecType,
_dirs: &Directories,
_data_dir: &str,
_cfg: AccountsConfig,
_passwords: &[Password],
) -> Result<AccountProvider, String> {
warn!("Note: Your instance of OpenEthereum is running without account support. Some CLI options are ignored.");
Ok(AccountProvider)
}
pub fn miner_local_accounts(_: Arc<AccountProvider>) -> AccountProvider {
AccountProvider
}
pub fn miner_author(
_spec: &SpecType,
_dirs: &Directories,
_account_provider: &Arc<AccountProvider>,
_engine_signer: Address,
_passwords: &[Password],
) -> Result<Option<::ethcore::miner::Author>, String> {
Ok(None)
}
pub fn accounts_list(
_account_provider: Arc<AccountProvider>,
) -> Arc<dyn Fn() -> Vec<Address> + Send + Sync> {
Arc::new(|| vec![])
}
}
#[cfg(feature = "accounts")]
mod accounts {
use super::*;
use crate::{ethereum_types::H256, upgrade::upgrade_key_location};
use std::str::FromStr;
pub use crate::accounts::AccountProvider;
/// Shown alongside error messages when a password is missing or invalid.
const VERIFY_PASSWORD_HINT: &str = "Make sure valid password is present in files passed using `--password` or in the configuration file.";
/// Initialize account provider
pub fn prepare_account_provider(
spec: &SpecType,
dirs: &Directories,
data_dir: &str,
cfg: AccountsConfig,
passwords: &[Password],
) -> Result<AccountProvider, String> {
use crate::accounts::AccountProviderSettings;
use ethstore::{accounts_dir::RootDiskDirectory, EthStore};
let path = dirs.keys_path(data_dir);
upgrade_key_location(&dirs.legacy_keys_path(cfg.testnet), &path);
let dir = Box::new(
RootDiskDirectory::create(&path)
.map_err(|e| format!("Could not open keys directory: {}", e))?,
);
let account_settings = AccountProviderSettings {
unlock_keep_secret: cfg.enable_fast_unlock,
blacklisted_accounts: match *spec {
SpecType::Morden
| SpecType::Ropsten
| SpecType::Kovan
| SpecType::Goerli
| SpecType::Sokol
| SpecType::Dev => vec![],
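// On all other (public) chains, blacklist the well-known development account whose secret is published
// in this codebase (see insert_dev_account below).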
_ => vec![H160::from_str("00a329c0648769a73afac7f9381e08fb43dbea72")
.expect("the string is valid hex; qed")],
},
};
let ethstore = EthStore::open_with_iterations(dir, cfg.iterations)
.map_err(|e| format!("Could not open keys directory: {}", e))?;
if cfg.refresh_time > 0 {
ethstore.set_refresh_time(::std::time::Duration::from_secs(cfg.refresh_time));
}
let account_provider = AccountProvider::new(Box::new(ethstore), account_settings);
// Add development account if running dev chain:
if let SpecType::Dev = *spec {
insert_dev_account(&account_provider);
}
for a in cfg.unlocked_accounts {
// Check if the account exists
if !account_provider.has_account(a) {
return Err(format!(
"Account {} not found for the current chain. {}",
a,
build_create_account_hint(spec, &dirs.keys)
));
}
// Check if any passwords have been read from the password file(s)
if passwords.is_empty() {
return Err(format!(
"No password found to unlock account {}. {}",
a, VERIFY_PASSWORD_HINT
));
}
if !passwords.iter().any(|p| {
account_provider
.unlock_account_permanently(a, (*p).clone())
.is_ok()
}) {
return Err(format!(
"No valid password to unlock account {}. {}",
a, VERIFY_PASSWORD_HINT
));
}
}
Ok(account_provider)
}
pub struct LocalAccounts(Arc<AccountProvider>);
impl ::ethcore::miner::LocalAccounts for LocalAccounts {
fn is_local(&self, address: &Address) -> bool {
self.0.has_account(*address)
}
}
pub fn miner_local_accounts(account_provider: Arc<AccountProvider>) -> LocalAccounts {
LocalAccounts(account_provider)
}
pub fn miner_author(
spec: &SpecType,
dirs: &Directories,
account_provider: &Arc<AccountProvider>,
engine_signer: Address,
passwords: &[Password],
) -> Result<Option<::ethcore::miner::Author>, String> {
use ethcore::engines::EngineSigner;
// Check if engine signer exists
if !account_provider.has_account(engine_signer) {
return Err(format!(
"Consensus signer account not found for the current chain. {}",
build_create_account_hint(spec, &dirs.keys)
));
}
// Check if any passwords have been read from the password file(s)
if passwords.is_empty() {
return Err(format!(
"No password found for the consensus signer {}. {}",
engine_signer, VERIFY_PASSWORD_HINT
));
}
let mut author = None;
for password in passwords {
let signer = parity_rpc::signer::EngineSigner::new(
account_provider.clone(),
engine_signer,
password.clone(),
);
// sign dummy msg to check if password and account can be used.
if signer.sign(H256::from_low_u64_be(1)).is_ok() {
author = Some(::ethcore::miner::Author::Sealer(Box::new(signer)));
}
}
if author.is_none() {
return Err(format!(
"No valid password for the consensus signer {}. {}",
engine_signer, VERIFY_PASSWORD_HINT
));
}
Ok(author)
}
pub fn accounts_list(
account_provider: Arc<AccountProvider>,
) -> Arc<dyn Fn() -> Vec<Address> + Send + Sync> {
Arc::new(move || account_provider.accounts().unwrap_or_default())
}
fn insert_dev_account(account_provider: &AccountProvider) {
let secret = publickey::Secret::from_str(
"4d5db4107d237df6a3d58ee5f70ae63d73d7658d4026f2eefd2f204c81682cb7",
)
.expect("Valid account;qed");
let dev_account = publickey::KeyPair::from_secret(secret.clone())
.expect("Valid secret produces valid key;qed");
if !account_provider.has_account(dev_account.address()) {
match account_provider.insert_account(secret, &Password::from(String::new())) {
Err(e) => warn!("Unable to add development account: {}", e),
Ok(address) => {
let _ = account_provider
.set_account_name(address.clone(), "Development Account".into());
let _ = account_provider.set_account_meta(
address,
::serde_json::to_string(
&(vec![
(
"description",
"Never use this account outside of development chain!",
),
("passwordHint", "Password is empty string"),
]
.into_iter()
.collect::<::std::collections::HashMap<_, _>>()),
)
.expect("Serialization of hashmap does not fail."),
);
}
}
}
}
// Construct an error `String` with an adaptive hint on how to create an account.
fn build_create_account_hint(spec: &SpecType, keys: &str) -> String {
format!("You can create an account via RPC, UI or `openethereum account new --chain {} --keys-path {}`.", spec, keys)
}
}
pub use self::accounts::{
accounts_list, miner_author, miner_local_accounts, prepare_account_provider, AccountProvider,
};


@ -1,551 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{fs, io, sync::Arc, time::Instant};
use crate::{
bytes::ToPretty,
cache::CacheConfig,
db,
hash::{keccak, KECCAK_NULL_RLP},
helpers::{execute_upgrades, to_client_config},
informant::{FullNodeInformantData, Informant, MillisecondDuration},
params::{fatdb_switch_to_bool, tracing_switch_to_bool, Pruning, SpecType, Switch},
types::data_format::DataFormat,
user_defaults::UserDefaults,
};
use ansi_term::Colour;
use dir::Directories;
use ethcore::{
client::{
Balance, BlockChainClient, BlockChainReset, BlockId, DatabaseCompactionProfile,
ImportExportBlocks, Mode, Nonce, VMType,
},
miner::Miner,
verification::queue::VerifierSettings,
};
use ethcore_service::ClientService;
use ethereum_types::{Address, H256, U256};
#[derive(Debug, PartialEq)]
pub enum BlockchainCmd {
Kill(KillBlockchain),
Import(ImportBlockchain),
Export(ExportBlockchain),
ExportState(ExportState),
Reset(ResetBlockchain),
}
#[derive(Debug, PartialEq)]
pub struct ResetBlockchain {
pub dirs: Directories,
pub spec: SpecType,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
pub tracing: Switch,
pub fat_db: Switch,
pub compaction: DatabaseCompactionProfile,
pub cache_config: CacheConfig,
pub num: u32,
}
#[derive(Debug, PartialEq)]
pub struct KillBlockchain {
pub spec: SpecType,
pub dirs: Directories,
pub pruning: Pruning,
}
#[derive(Debug, PartialEq)]
pub struct ImportBlockchain {
pub spec: SpecType,
pub cache_config: CacheConfig,
pub dirs: Directories,
pub file_path: Option<String>,
pub format: Option<DataFormat>,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
pub compaction: DatabaseCompactionProfile,
pub tracing: Switch,
pub fat_db: Switch,
pub vm_type: VMType,
pub check_seal: bool,
pub with_color: bool,
pub verifier_settings: VerifierSettings,
pub max_round_blocks_to_import: usize,
}
#[derive(Debug, PartialEq)]
pub struct ExportBlockchain {
pub spec: SpecType,
pub cache_config: CacheConfig,
pub dirs: Directories,
pub file_path: Option<String>,
pub format: Option<DataFormat>,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
pub compaction: DatabaseCompactionProfile,
pub fat_db: Switch,
pub tracing: Switch,
pub from_block: BlockId,
pub to_block: BlockId,
pub check_seal: bool,
pub max_round_blocks_to_import: usize,
}
#[derive(Debug, PartialEq)]
pub struct ExportState {
pub spec: SpecType,
pub cache_config: CacheConfig,
pub dirs: Directories,
pub file_path: Option<String>,
pub format: Option<DataFormat>,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
pub compaction: DatabaseCompactionProfile,
pub fat_db: Switch,
pub tracing: Switch,
pub at: BlockId,
pub storage: bool,
pub code: bool,
pub min_balance: Option<U256>,
pub max_balance: Option<U256>,
pub max_round_blocks_to_import: usize,
}
pub fn execute(cmd: BlockchainCmd) -> Result<(), String> {
match cmd {
BlockchainCmd::Kill(kill_cmd) => kill_db(kill_cmd),
BlockchainCmd::Import(import_cmd) => execute_import(import_cmd),
BlockchainCmd::Export(export_cmd) => execute_export(export_cmd),
BlockchainCmd::ExportState(export_cmd) => execute_export_state(export_cmd),
BlockchainCmd::Reset(reset_cmd) => execute_reset(reset_cmd),
}
}
fn execute_import(cmd: ImportBlockchain) -> Result<(), String> {
let timer = Instant::now();
// load spec file
let spec = cmd.spec.spec(&cmd.dirs.cache)?;
// load genesis hash
let genesis_hash = spec.genesis_header().hash();
// database paths
let db_dirs = cmd.dirs.database(genesis_hash, None, spec.data_dir.clone());
// user defaults path
let user_defaults_path = db_dirs.user_defaults_path();
// load user defaults
let mut user_defaults = UserDefaults::load(&user_defaults_path)?;
// select pruning algorithm
let algorithm = cmd.pruning.to_algorithm(&user_defaults);
// check if tracing is on
let tracing = tracing_switch_to_bool(cmd.tracing, &user_defaults)?;
// check if fatdb is on
let fat_db = fatdb_switch_to_bool(cmd.fat_db, &user_defaults, algorithm)?;
// prepare client and snapshot paths.
let client_path = db_dirs.client_path(algorithm);
let snapshot_path = db_dirs.snapshot_path();
// execute upgrades
execute_upgrades(&cmd.dirs.base, &db_dirs, algorithm, &cmd.compaction)?;
// create dirs used by parity
cmd.dirs.create_dirs(false, false)?;
// prepare client config
let mut client_config = to_client_config(
&cmd.cache_config,
spec.name.to_lowercase(),
Mode::Active,
tracing,
fat_db,
cmd.compaction,
cmd.vm_type,
"".into(),
algorithm,
cmd.pruning_history,
cmd.pruning_memory,
cmd.check_seal,
12,
);
client_config.queue.verifier_settings = cmd.verifier_settings;
let restoration_db_handler = db::restoration_db_handler(&client_path, &client_config);
let client_db = restoration_db_handler
.open(&client_path)
.map_err(|e| format!("Failed to open database {:?}", e))?;
// build client
let service = ClientService::start(
client_config,
&spec,
client_db,
&snapshot_path,
restoration_db_handler,
&cmd.dirs.ipc_path(),
// TODO [ToDr] don't use test miner here
// (actually don't require miner at all)
Arc::new(Miner::new_for_tests(&spec, None)),
)
.map_err(|e| format!("Client service error: {:?}", e))?;
// free up the spec in memory.
drop(spec);
let client = service.client();
let instream: Box<dyn io::Read> = match cmd.file_path {
Some(f) => {
Box::new(fs::File::open(&f).map_err(|_| format!("Cannot open given file: {}", f))?)
}
None => Box::new(io::stdin()),
};
let informant = Arc::new(Informant::new(
FullNodeInformantData {
client: client.clone(),
sync: None,
net: None,
},
None,
None,
cmd.with_color,
));
service
.register_io_handler(informant)
.map_err(|_| "Unable to register informant handler".to_owned())?;
client.import_blocks(instream, cmd.format)?;
// save user defaults
user_defaults.pruning = algorithm;
user_defaults.tracing = tracing;
user_defaults.fat_db = fat_db;
user_defaults.save(&user_defaults_path)?;
let report = client.report();
let ms = timer.elapsed().as_milliseconds();
info!("Import completed in {} seconds, {} blocks, {} blk/s, {} transactions, {} tx/s, {} Mgas, {} Mgas/s",
ms / 1000,
report.blocks_imported,
(report.blocks_imported * 1000) as u64 / ms,
report.transactions_applied,
(report.transactions_applied * 1000) as u64 / ms,
report.gas_processed / 1_000_000,
(report.gas_processed / (ms * 1000)).low_u64(),
);
Ok(())
}
fn start_client(
dirs: Directories,
spec: SpecType,
pruning: Pruning,
pruning_history: u64,
pruning_memory: usize,
tracing: Switch,
fat_db: Switch,
compaction: DatabaseCompactionProfile,
cache_config: CacheConfig,
require_fat_db: bool,
max_round_blocks_to_import: usize,
) -> Result<ClientService, String> {
// load spec file
let spec = spec.spec(&dirs.cache)?;
// load genesis hash
let genesis_hash = spec.genesis_header().hash();
// database paths
let db_dirs = dirs.database(genesis_hash, None, spec.data_dir.clone());
// user defaults path
let user_defaults_path = db_dirs.user_defaults_path();
// load user defaults
let user_defaults = UserDefaults::load(&user_defaults_path)?;
// select pruning algorithm
let algorithm = pruning.to_algorithm(&user_defaults);
// check if tracing is on
let tracing = tracing_switch_to_bool(tracing, &user_defaults)?;
// check if fatdb is on
let fat_db = fatdb_switch_to_bool(fat_db, &user_defaults, algorithm)?;
if !fat_db && require_fat_db {
return Err("This command requires OpenEthereum to be synced with --fat-db on.".to_owned());
}
// prepare client and snapshot paths.
let client_path = db_dirs.client_path(algorithm);
let snapshot_path = db_dirs.snapshot_path();
// execute upgrades
execute_upgrades(&dirs.base, &db_dirs, algorithm, &compaction)?;
// create dirs used by OpenEthereum.
dirs.create_dirs(false, false)?;
// prepare client config
let client_config = to_client_config(
&cache_config,
spec.name.to_lowercase(),
Mode::Active,
tracing,
fat_db,
compaction,
VMType::default(),
"".into(),
algorithm,
pruning_history,
pruning_memory,
true,
max_round_blocks_to_import,
);
let restoration_db_handler = db::restoration_db_handler(&client_path, &client_config);
let client_db = restoration_db_handler
.open(&client_path)
.map_err(|e| format!("Failed to open database {:?}", e))?;
let service = ClientService::start(
client_config,
&spec,
client_db,
&snapshot_path,
restoration_db_handler,
&dirs.ipc_path(),
// It's fine to use test version here,
// since we don't care about miner parameters at all
Arc::new(Miner::new_for_tests(&spec, None)),
)
.map_err(|e| format!("Client service error: {:?}", e))?;
drop(spec);
Ok(service)
}
fn execute_export(cmd: ExportBlockchain) -> Result<(), String> {
let service = start_client(
cmd.dirs,
cmd.spec,
cmd.pruning,
cmd.pruning_history,
cmd.pruning_memory,
cmd.tracing,
cmd.fat_db,
cmd.compaction,
cmd.cache_config,
false,
cmd.max_round_blocks_to_import,
)?;
let client = service.client();
let out: Box<dyn io::Write> = match cmd.file_path {
Some(f) => Box::new(
fs::File::create(&f).map_err(|_| format!("Cannot write to file given: {}", f))?,
),
None => Box::new(io::stdout()),
};
client.export_blocks(out, cmd.from_block, cmd.to_block, cmd.format)?;
info!("Export completed.");
Ok(())
}
fn execute_export_state(cmd: ExportState) -> Result<(), String> {
let service = start_client(
cmd.dirs,
cmd.spec,
cmd.pruning,
cmd.pruning_history,
cmd.pruning_memory,
cmd.tracing,
cmd.fat_db,
cmd.compaction,
cmd.cache_config,
true,
cmd.max_round_blocks_to_import,
)?;
let client = service.client();
let mut out: Box<dyn io::Write> = match cmd.file_path {
Some(f) => Box::new(
fs::File::create(&f).map_err(|_| format!("Cannot write to file given: {}", f))?,
),
None => Box::new(io::stdout()),
};
let mut last: Option<Address> = None;
let at = cmd.at;
let mut i = 0usize;
out.write_fmt(format_args!("{{ \"state\": {{",))
.expect("Couldn't write to stream.");
loop {
let accounts = client
.list_accounts(at, last.as_ref(), 1000)
.ok_or("Specified block not found")?;
if accounts.is_empty() {
break;
}
for account in accounts.into_iter() {
let balance = client
.balance(&account, at.into())
.unwrap_or_else(U256::zero);
if cmd.min_balance.map_or(false, |m| balance < m)
|| cmd.max_balance.map_or(false, |m| balance > m)
{
last = Some(account);
continue; //filtered out
}
if i != 0 {
out.write(b",").expect("Write error");
}
out.write_fmt(format_args!(
"\n\"0x{:x}\": {{\"balance\": \"{:x}\", \"nonce\": \"{:x}\"",
account,
balance,
client.nonce(&account, at).unwrap_or_else(U256::zero)
))
.expect("Write error");
let code = client
.code(&account, at.into())
.unwrap_or(None)
.unwrap_or_else(Vec::new);
if !code.is_empty() {
out.write_fmt(format_args!(", \"code_hash\": \"0x{:x}\"", keccak(&code)))
.expect("Write error");
if cmd.code {
out.write_fmt(format_args!(", \"code\": \"{}\"", code.to_hex()))
.expect("Write error");
}
}
let storage_root = client.storage_root(&account, at).unwrap_or(KECCAK_NULL_RLP);
if storage_root != KECCAK_NULL_RLP {
out.write_fmt(format_args!(", \"storage_root\": \"0x{:x}\"", storage_root))
.expect("Write error");
if cmd.storage {
out.write_fmt(format_args!(", \"storage\": {{"))
.expect("Write error");
let mut last_storage: Option<H256> = None;
loop {
let keys = client
.list_storage(at, &account, last_storage.as_ref(), 1000)
.ok_or("Specified block not found")?;
if keys.is_empty() {
break;
}
for key in keys.into_iter() {
if last_storage.is_some() {
out.write(b",").expect("Write error");
}
out.write_fmt(format_args!(
"\n\t\"0x{:x}\": \"0x{:x}\"",
key,
client
.storage_at(&account, &key, at.into())
.unwrap_or_else(Default::default)
))
.expect("Write error");
last_storage = Some(key);
}
}
out.write(b"\n}").expect("Write error");
}
}
out.write(b"}").expect("Write error");
i += 1;
if i % 10000 == 0 {
info!("Account #{}", i);
}
last = Some(account);
}
}
out.write_fmt(format_args!("\n}}}}")).expect("Write error");
info!("Export completed.");
Ok(())
}
fn execute_reset(cmd: ResetBlockchain) -> Result<(), String> {
let service = start_client(
cmd.dirs,
cmd.spec,
cmd.pruning,
cmd.pruning_history,
cmd.pruning_memory,
cmd.tracing,
cmd.fat_db,
cmd.compaction,
cmd.cache_config,
false,
0,
)?;
let client = service.client();
client.reset(cmd.num)?;
info!("{}", Colour::Green.bold().paint("Successfully reset db!"));
Ok(())
}
pub fn kill_db(cmd: KillBlockchain) -> Result<(), String> {
let spec = cmd.spec.spec(&cmd.dirs.cache)?;
let genesis_hash = spec.genesis_header().hash();
let db_dirs = cmd.dirs.database(genesis_hash, None, spec.data_dir);
let user_defaults_path = db_dirs.user_defaults_path();
let mut user_defaults = UserDefaults::load(&user_defaults_path)?;
let algorithm = cmd.pruning.to_algorithm(&user_defaults);
let dir = db_dirs.db_path(algorithm);
fs::remove_dir_all(&dir).map_err(|e| format!("Error removing database: {:?}", e))?;
user_defaults.is_first_launch = true;
user_defaults.save(&user_defaults_path)?;
info!("Database deleted.");
Ok(())
}
#[cfg(test)]
mod test {
use super::DataFormat;
#[test]
fn test_data_format_parsing() {
assert_eq!(DataFormat::Binary, "binary".parse().unwrap());
assert_eq!(DataFormat::Binary, "bin".parse().unwrap());
assert_eq!(DataFormat::Hex, "hex".parse().unwrap());
}
}


@ -1,142 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::cmp::max;
const MIN_BC_CACHE_MB: u32 = 4;
const MIN_DB_CACHE_MB: u32 = 8;
const MIN_BLOCK_QUEUE_SIZE_LIMIT_MB: u32 = 16;
const DEFAULT_DB_CACHE_SIZE: u32 = 128;
const DEFAULT_BC_CACHE_SIZE: u32 = 8;
const DEFAULT_BLOCK_QUEUE_SIZE_LIMIT_MB: u32 = 40;
const DEFAULT_TRACE_CACHE_SIZE: u32 = 20;
const DEFAULT_STATE_CACHE_SIZE: u32 = 25;
/// Configuration for application cache sizes.
/// All values are represented in MB.
#[derive(Debug, PartialEq)]
pub struct CacheConfig {
/// Size of the RocksDB cache. Almost all of it goes to the state column.
db: u32,
/// Size of blockchain cache.
blockchain: u32,
/// Size limit of the block queue.
queue: u32,
/// Size of traces cache.
traces: u32,
/// Size of the state cache.
state: u32,
}
impl Default for CacheConfig {
fn default() -> Self {
CacheConfig::new(
DEFAULT_DB_CACHE_SIZE,
DEFAULT_BC_CACHE_SIZE,
DEFAULT_BLOCK_QUEUE_SIZE_LIMIT_MB,
DEFAULT_STATE_CACHE_SIZE,
)
}
}
impl CacheConfig {
/// Creates a new cache config with cumulative size equal to `total`.
pub fn new_with_total_cache_size(total: u32) -> Self {
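// Split the total roughly as 70% RocksDB cache, 10% blockchain cache and 20% state cache;
// the queue and traces caches keep fixed defaults.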
CacheConfig {
db: total * 7 / 10,
blockchain: total / 10,
queue: DEFAULT_BLOCK_QUEUE_SIZE_LIMIT_MB,
traces: DEFAULT_TRACE_CACHE_SIZE,
state: total * 2 / 10,
}
}
/// Creates a new cache config with the given details.
pub fn new(db: u32, blockchain: u32, queue: u32, state: u32) -> Self {
CacheConfig {
db: db,
blockchain: blockchain,
queue: queue,
traces: DEFAULT_TRACE_CACHE_SIZE,
state: state,
}
}
/// Size of db cache.
pub fn db_cache_size(&self) -> u32 {
max(MIN_DB_CACHE_MB, self.db)
}
/// Block queue size limit.
pub fn queue(&self) -> u32 {
max(self.queue, MIN_BLOCK_QUEUE_SIZE_LIMIT_MB)
}
/// Size of the blockchain cache.
pub fn blockchain(&self) -> u32 {
max(self.blockchain, MIN_BC_CACHE_MB)
}
/// Size of the traces cache.
pub fn traces(&self) -> u32 {
self.traces
}
/// Size of the state cache.
pub fn state(&self) -> u32 {
self.state * 3 / 4
}
/// Size of the jump-tables cache.
pub fn jump_tables(&self) -> u32 {
self.state / 4
}
}
#[cfg(test)]
mod tests {
use super::CacheConfig;
#[test]
fn test_cache_config_constructor() {
let config = CacheConfig::new_with_total_cache_size(200);
assert_eq!(config.db, 140);
assert_eq!(config.blockchain(), 20);
assert_eq!(config.queue(), 40);
assert_eq!(config.state(), 30);
assert_eq!(config.jump_tables(), 10);
}
#[test]
fn test_cache_config_db_cache_sizes() {
let config = CacheConfig::new_with_total_cache_size(400);
assert_eq!(config.db, 280);
assert_eq!(config.db_cache_size(), 280);
}
#[test]
fn test_cache_config_default() {
assert_eq!(
CacheConfig::default(),
CacheConfig::new(
super::DEFAULT_DB_CACHE_SIZE,
super::DEFAULT_BC_CACHE_SIZE,
super::DEFAULT_BLOCK_QUEUE_SIZE_LIMIT_MB,
super::DEFAULT_STATE_CACHE_SIZE
)
);
}
}

File diff suppressed because it is too large


@ -1,11 +0,0 @@
[parity]
chain = "dev"
[mining]
reseal_min_period = 0
min_gas_price = 0
[rpc]
interface = "all"
apis = ["all"]
hosts = ["all"]


@ -1,4 +0,0 @@
[rpc]
interface = "all"
apis = ["all"]
hosts = ["all"]


@ -1,28 +0,0 @@
[network]
# OpenEthereum will try to maintain connections to at least 50 peers.
min_peers = 50
# OpenEthereum will maintain at most 100 peers.
max_peers = 100
[ipc]
# You won't be able to use IPC to interact with OpenEthereum.
disable = true
[mining]
# Prepare a block to seal even when there are no miners connected.
force_sealing = true
# New pending block will be created for all transactions (both local and external).
reseal_on_txs = "all"
# New pending block will be created only once per 4000 milliseconds.
reseal_min_period = 4000
# OpenEthereum will keep/relay at most 8192 transactions in queue.
tx_queue_size = 8192
tx_queue_per_sender = 128
[footprint]
# If defined, never use more than 1024MB for all caches. (Overrides other cache settings.)
cache_size = 1024
[misc]
# Logging pattern (`<module>=<level>`, e.g. `own_tx=trace`).
logging = "miner=trace,own_tx=trace"


@ -1,7 +0,0 @@
[network]
# OpenEthereum will listen for connections on port 30305.
port = 30305
[rpc]
# JSON-RPC over HTTP will be accessible on port 8645.
port = 8645


@ -1,28 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::io::{Error, ErrorKind};
pub fn preset_config_string(arg: &str) -> Result<&'static str, Error> {
match arg.to_lowercase().as_ref() {
"dev" => Ok(include_str!("./config.dev.toml")),
"mining" => Ok(include_str!("./config.mining.toml")),
"non-standard-ports" => Ok(include_str!("./config.non-standard-ports.toml")),
"insecure" => Ok(include_str!("./config.insecure.toml")),
"dev-insecure" => Ok(include_str!("./config.dev-insecure.toml")),
_ => Err(Error::new(ErrorKind::InvalidInput, "Config doesn't match any presets [dev, mining, non-standard-ports, insecure, dev-insecure]"))
}
}
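
As a brief illustration (not part of the original file), a caller could try these embedded presets first and fall back to reading the argument as a config file path; `resolve_config` is a hypothetical helper:

fn resolve_config(arg: &str) -> std::io::Result<String> {
    match preset_config_string(arg) {
        // One of the embedded preset TOML strings above.
        Ok(preset) => Ok(preset.to_string()),
        // Not a known preset name: treat the argument as a path on disk.
        Err(_) => std::fs::read_to_string(arg),
    }
}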


@ -1,6 +0,0 @@
OpenEthereum Client.
By Wood/Paronyan/Kotewicz/Drwięga/Volf/Greeff
Habermeier/Czaban/Gotchac/Redman/Nikolsky
Schoedon/Tang/Adolfsson/Silva/Palm/Hirsz et al.
Copyright 2015-2020 Parity Technologies (UK) Ltd.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.


@ -1,10 +0,0 @@
OpenEthereum Client.
version {}
Copyright 2015-2020 Parity Technologies (UK) Ltd.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
By Wood/Paronyan/Kotewicz/Drwięga/Volf/Greeff
Habermeier/Czaban/Gotchac/Redman/Nikolsky
Schoedon/Tang/Adolfsson/Silva/Palm/Hirsz et al.

File diff suppressed because it is too large


@ -1,25 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Database-related operations.
#[path = "rocksdb/mod.rs"]
mod impls;
pub use self::impls::{migrate, restoration_db_handler};
#[cfg(feature = "secretstore")]
pub use self::impls::open_secretstore_db;


@ -1,83 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Blooms migration from rocksdb to blooms-db
use super::{kvdb_rocksdb::DatabaseConfig, open_database};
use ethcore::error::Error;
use ethereum_types::Bloom;
use rlp;
use std::path::Path;
const LOG_BLOOMS_ELEMENTS_PER_INDEX: u64 = 16;
pub fn migrate_blooms<P: AsRef<Path>>(path: P, config: &DatabaseConfig) -> Result<(), Error> {
// init
let db = open_database(&path.as_ref().to_string_lossy(), config)?;
// possible optimization:
// pre-allocate space on disk for faster migration
// iterate over header blooms and insert them in blooms-db
// Some(3) -> COL_EXTRA
// 3u8 -> ExtrasIndex::BlocksBlooms
// 0u8 -> level 0
let blooms_iterator = db
.key_value()
.iter_from_prefix(Some(3), &[3u8, 0u8])
.filter(|(key, _)| key.len() == 6)
.take_while(|(key, _)| key[0] == 3u8 && key[1] == 0u8)
.map(|(key, group)| {
let index = (key[2] as u64) << 24
| (key[3] as u64) << 16
| (key[4] as u64) << 8
| (key[5] as u64);
let number = index * LOG_BLOOMS_ELEMENTS_PER_INDEX;
let blooms = rlp::decode_list::<Bloom>(&group);
(number, blooms)
});
for (number, blooms) in blooms_iterator {
db.blooms().insert_blooms(number, blooms.iter())?;
}
// iterate over trace blooms and insert them in blooms-db
// Some(4) -> COL_TRACE
// 1u8 -> TraceDBIndex::BloomGroups
// 0u8 -> level 0
let trace_blooms_iterator = db
.key_value()
.iter_from_prefix(Some(4), &[1u8, 0u8])
.filter(|(key, _)| key.len() == 6)
.take_while(|(key, _)| key[0] == 1u8 && key[1] == 0u8)
.map(|(key, group)| {
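// Note: the index bytes are combined in the opposite (little-endian) order compared to the header bloom keys above.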
let index = (key[2] as u64)
| (key[3] as u64) << 8
| (key[4] as u64) << 16
| (key[5] as u64) << 24;
let number = index * LOG_BLOOMS_ELEMENTS_PER_INDEX;
let blooms = rlp::decode_list::<Bloom>(&group);
(number, blooms)
});
for (number, blooms) in trace_blooms_iterator {
db.trace_blooms().insert_blooms(number, blooms.iter())?;
}
Ok(())
}


@ -1,40 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::kvdb_rocksdb::{CompactionProfile, DatabaseConfig};
use ethcore::client::{ClientConfig, DatabaseCompactionProfile};
use ethcore_db::NUM_COLUMNS;
use std::path::Path;
pub fn compaction_profile(
profile: &DatabaseCompactionProfile,
db_path: &Path,
) -> CompactionProfile {
match profile {
&DatabaseCompactionProfile::Auto => CompactionProfile::auto(db_path),
&DatabaseCompactionProfile::SSD => CompactionProfile::ssd(),
&DatabaseCompactionProfile::HDD => CompactionProfile::hdd(),
}
}
pub fn client_db_config(client_path: &Path, client_config: &ClientConfig) -> DatabaseConfig {
let mut client_db_config = DatabaseConfig::with_columns(NUM_COLUMNS);
client_db_config.memory_budget = client_config.db_cache_size;
client_db_config.compaction = compaction_profile(&client_config.db_compaction, &client_path);
client_db_config
}


@ -1,262 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::{
kvdb_rocksdb::{CompactionProfile, DatabaseConfig},
migration_rocksdb::{ChangeColumns, Config as MigrationConfig, Manager as MigrationManager},
};
use ethcore::{self, client::DatabaseCompactionProfile};
use std::{
fmt::{Display, Error as FmtError, Formatter},
fs,
io::{Error as IoError, ErrorKind, Read, Write},
path::{Path, PathBuf},
};
use super::{blooms::migrate_blooms, helpers};
/// The migration from v10 to v11.
/// Adds a column for node info.
pub const TO_V11: ChangeColumns = ChangeColumns {
pre_columns: Some(6),
post_columns: Some(7),
version: 11,
};
/// The migration from v11 to v12.
/// Adds a column for light chain storage.
pub const TO_V12: ChangeColumns = ChangeColumns {
pre_columns: Some(7),
post_columns: Some(8),
version: 12,
};
/// The database is assumed to be at the default version when no version file is found.
const DEFAULT_VERSION: u32 = 5;
/// Current version of database models.
const CURRENT_VERSION: u32 = 16;
/// For database versions up to and including this one, use the external upgrade tool.
const USE_MIGRATION_TOOL: u32 = 15;
/// The database version at which blooms-db was introduced.
const BLOOMS_DB_VERSION: u32 = 13;
/// Defines how many items are migrated to the new version of the database at once.
const BATCH_SIZE: usize = 1024;
/// Version file name.
const VERSION_FILE_NAME: &'static str = "db_version";
/// Migration-related errors.
#[derive(Debug)]
pub enum Error {
/// Returned when current version cannot be read or guessed.
UnknownDatabaseVersion,
/// Existing DB is newer than the known one.
FutureDBVersion,
/// Migration is not possible.
MigrationImpossible,
/// For old versions use external migration tool
UseMigrationTool,
/// Blooms-db migration error.
BloomsDB(ethcore::error::Error),
/// Migration completed successfully,
/// but there was an I/O problem.
Io(IoError),
}
impl Display for Error {
fn fmt(&self, f: &mut Formatter) -> Result<(), FmtError> {
let out = match *self {
Error::UnknownDatabaseVersion => "Current database version cannot be read".into(),
Error::FutureDBVersion => "Database was created with newer client version. Upgrade your client or delete DB and resync.".into(),
Error::MigrationImpossible => format!("Database migration to version {} is not possible.", CURRENT_VERSION),
Error::BloomsDB(ref err) => format!("blooms-db migration error: {}", err),
Error::UseMigrationTool => "For db versions 15 and lower (v2.5.13=>13, 2.7.2=>14, v3.0.1=>15) please use upgrade db tool to manually upgrade db: https://github.com/openethereum/3.1-db-upgrade-tool".into(),
Error::Io(ref err) => format!("Unexpected io error on DB migration: {}.", err),
};
write!(f, "{}", out)
}
}
impl From<IoError> for Error {
fn from(err: IoError) -> Self {
Error::Io(err)
}
}
/// Returns the version file path.
fn version_file_path(path: &Path) -> PathBuf {
let mut file_path = path.to_owned();
file_path.push(VERSION_FILE_NAME);
file_path
}
/// Reads current database version from the file at given path.
/// If the file does not exist returns `DEFAULT_VERSION`.
fn current_version(path: &Path) -> Result<u32, Error> {
match fs::File::open(version_file_path(path)) {
Err(ref err) if err.kind() == ErrorKind::NotFound => Ok(DEFAULT_VERSION),
Err(_) => Err(Error::UnknownDatabaseVersion),
Ok(mut file) => {
let mut s = String::new();
file.read_to_string(&mut s)
.map_err(|_| Error::UnknownDatabaseVersion)?;
u32::from_str_radix(&s, 10).map_err(|_| Error::UnknownDatabaseVersion)
}
}
}
/// Writes current database version to the file.
/// Creates a new file if the version file does not exist yet.
fn update_version(path: &Path) -> Result<(), Error> {
fs::create_dir_all(path)?;
let mut file = fs::File::create(version_file_path(path))?;
file.write_all(format!("{}", CURRENT_VERSION).as_bytes())?;
Ok(())
}
/// Consolidated database path
fn consolidated_database_path(path: &Path) -> PathBuf {
let mut state_path = path.to_owned();
state_path.push("db");
state_path
}
/// Database backup
fn backup_database_path(path: &Path) -> PathBuf {
let mut backup_path = path.to_owned();
backup_path.pop();
backup_path.push("temp_backup");
backup_path
}
/// Default migration settings.
pub fn default_migration_settings(compaction_profile: &CompactionProfile) -> MigrationConfig {
MigrationConfig {
batch_size: BATCH_SIZE,
compaction_profile: *compaction_profile,
}
}
/// Migrations on the consolidated database.
fn consolidated_database_migrations(
compaction_profile: &CompactionProfile,
) -> Result<MigrationManager, Error> {
let mut manager = MigrationManager::new(default_migration_settings(compaction_profile));
manager
.add_migration(TO_V11)
.map_err(|_| Error::MigrationImpossible)?;
manager
.add_migration(TO_V12)
.map_err(|_| Error::MigrationImpossible)?;
Ok(manager)
}
/// Migrates database at given path with given migration rules.
fn migrate_database(
version: u32,
db_path: &Path,
mut migrations: MigrationManager,
) -> Result<(), Error> {
// check if migration is needed
if !migrations.is_needed(version) {
return Ok(());
}
let backup_path = backup_database_path(&db_path);
// remove the backup dir if it exists
let _ = fs::remove_dir_all(&backup_path);
// migrate old database to the new one
let temp_path = migrations.execute(&db_path, version)?;
// completely in-place migration leads to the paths being equal.
// in that case, no need to shuffle directories.
if temp_path == db_path {
return Ok(());
}
// create backup
fs::rename(&db_path, &backup_path)?;
// replace the old database with the new one
if let Err(err) = fs::rename(&temp_path, &db_path) {
// if something went wrong, bring back backup
fs::rename(&backup_path, &db_path)?;
return Err(err.into());
}
// remove backup
fs::remove_dir_all(&backup_path).map_err(Into::into)
}
fn exists(path: &Path) -> bool {
fs::metadata(path).is_ok()
}
/// Migrates the database.
pub fn migrate(path: &Path, compaction_profile: &DatabaseCompactionProfile) -> Result<(), Error> {
let compaction_profile = helpers::compaction_profile(&compaction_profile, path);
// read version file.
let version = current_version(path)?;
// migrate the databases.
// main db directory may already exist, so let's check if we have the blocks dir
if version > CURRENT_VERSION {
return Err(Error::FutureDBVersion);
}
// We are in the latest version, yay!
if version == CURRENT_VERSION {
return Ok(());
}
if version != DEFAULT_VERSION && version <= USE_MIGRATION_TOOL {
return Err(Error::UseMigrationTool);
}
let db_path = consolidated_database_path(path);
// Further migrations
if version < CURRENT_VERSION && exists(&db_path) {
println!(
"Migrating database from version {} to {}",
version, CURRENT_VERSION
);
migrate_database(
version,
&db_path,
consolidated_database_migrations(&compaction_profile)?,
)?;
if version < BLOOMS_DB_VERSION {
println!("Migrating blooms to blooms-db...");
let db_config = DatabaseConfig {
max_open_files: 64,
memory_budget: None,
compaction: compaction_profile,
columns: ethcore_db::NUM_COLUMNS,
};
migrate_blooms(&db_path, &db_config).map_err(Error::BloomsDB)?;
}
println!("Migration finished");
}
// update version file.
update_version(path)
}
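// A minimal sketch (hypothetical test, not part of the original module) of the version-file
// helpers defined above, using the `tempdir` crate already used elsewhere in this crate's
// tests: a fresh directory reads back as `DEFAULT_VERSION`, and after `update_version` it
// reads back as `CURRENT_VERSION`.
#[cfg(test)]
mod version_file_sketch {
    use super::{current_version, update_version, CURRENT_VERSION, DEFAULT_VERSION};
    use tempdir::TempDir;
    #[test]
    fn version_file_round_trip() {
        let dir = TempDir::new("db-version").unwrap();
        // No version file yet: the default version is assumed.
        assert_eq!(current_version(dir.path()).unwrap(), DEFAULT_VERSION);
        // Writing the version file records the current version.
        update_version(dir.path()).unwrap();
        assert_eq!(current_version(dir.path()).unwrap(), CURRENT_VERSION);
    }
}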


@ -1,119 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
extern crate ethcore_blockchain;
extern crate kvdb_rocksdb;
extern crate migration_rocksdb;
use self::{
ethcore_blockchain::{BlockChainDB, BlockChainDBHandler},
kvdb_rocksdb::{Database, DatabaseConfig},
};
use blooms_db;
use ethcore::client::ClientConfig;
use ethcore_db::KeyValueDB;
use stats::PrometheusMetrics;
use std::{fs, io, path::Path, sync::Arc};
mod blooms;
mod helpers;
mod migration;
pub use self::migration::migrate;
struct AppDB {
key_value: Arc<dyn KeyValueDB>,
blooms: blooms_db::Database,
trace_blooms: blooms_db::Database,
}
impl BlockChainDB for AppDB {
fn key_value(&self) -> &Arc<dyn KeyValueDB> {
&self.key_value
}
fn blooms(&self) -> &blooms_db::Database {
&self.blooms
}
fn trace_blooms(&self) -> &blooms_db::Database {
&self.trace_blooms
}
}
impl PrometheusMetrics for AppDB {
fn prometheus_metrics(&self, _: &mut stats::PrometheusRegistry) {}
}
/// Open a secret store DB using the given secret store data path. The DB path is one level beneath the data path.
#[cfg(feature = "secretstore")]
pub fn open_secretstore_db(data_path: &str) -> Result<Arc<dyn KeyValueDB>, String> {
use std::path::PathBuf;
let mut db_path = PathBuf::from(data_path);
db_path.push("db");
let db_path = db_path
.to_str()
.ok_or_else(|| "Invalid secretstore path".to_string())?;
Ok(Arc::new(
Database::open_default(&db_path).map_err(|e| format!("Error opening database: {:?}", e))?,
))
}
/// Create a restoration db handler using the config generated by `client_path` and `client_config`.
pub fn restoration_db_handler(
client_path: &Path,
client_config: &ClientConfig,
) -> Box<dyn BlockChainDBHandler> {
let client_db_config = helpers::client_db_config(client_path, client_config);
struct RestorationDBHandler {
config: DatabaseConfig,
}
impl BlockChainDBHandler for RestorationDBHandler {
fn open(&self, db_path: &Path) -> io::Result<Arc<dyn BlockChainDB>> {
open_database(&db_path.to_string_lossy(), &self.config)
}
}
Box::new(RestorationDBHandler {
config: client_db_config,
})
}
pub fn open_database(
client_path: &str,
config: &DatabaseConfig,
) -> io::Result<Arc<dyn BlockChainDB>> {
let path = Path::new(client_path);
let blooms_path = path.join("blooms");
let trace_blooms_path = path.join("trace_blooms");
fs::create_dir_all(&blooms_path)?;
fs::create_dir_all(&trace_blooms_path)?;
let db = Database::open(&config, client_path)?;
let db_with_metrics = ethcore_db::DatabaseWithMetrics::new(db);
let db = AppDB {
key_value: Arc::new(db_with_metrics),
blooms: blooms_db::Database::open(blooms_path)?,
trace_blooms: blooms_db::Database::open(trace_blooms_path)?,
};
Ok(Arc::new(db))
}


@ -1,600 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crate::{
cache::CacheConfig,
db::migrate,
miner::pool::PrioritizationStrategy,
sync::{self, validate_node_url},
upgrade::{upgrade, upgrade_data_paths},
};
use dir::{helpers::replace_home, DatabaseDirectories};
use ethcore::{
client::{BlockId, ClientConfig, DatabaseCompactionProfile, Mode, VMType, VerifierType},
miner::{Penalization, PendingSet},
};
use ethereum_types::{Address, U256};
use ethkey::Password;
use journaldb::Algorithm;
use std::{
collections::HashSet,
fs::File,
io,
io::{BufRead, BufReader, Write},
time::Duration,
};
pub fn to_duration(s: &str) -> Result<Duration, String> {
to_seconds(s).map(Duration::from_secs)
}
fn clean_0x(s: &str) -> &str {
if s.starts_with("0x") {
&s[2..]
} else {
s
}
}
fn to_seconds(s: &str) -> Result<u64, String> {
let bad = |_| {
format!(
"{}: Invalid duration given. See openethereum --help for more information.",
s
)
};
match s {
"twice-daily" => Ok(12 * 60 * 60),
"half-hourly" => Ok(30 * 60),
"1second" | "1 second" | "second" => Ok(1),
"1minute" | "1 minute" | "minute" => Ok(60),
"hourly" | "1hour" | "1 hour" | "hour" => Ok(60 * 60),
"daily" | "1day" | "1 day" | "day" => Ok(24 * 60 * 60),
x if x.ends_with("seconds") => x[0..x.len() - 7].trim().parse().map_err(bad),
x if x.ends_with("minutes") => x[0..x.len() - 7]
.trim()
.parse::<u64>()
.map_err(bad)
.map(|x| x * 60),
x if x.ends_with("hours") => x[0..x.len() - 5]
.trim()
.parse::<u64>()
.map_err(bad)
.map(|x| x * 60 * 60),
x if x.ends_with("days") => x[0..x.len() - 4]
.trim()
.parse::<u64>()
.map_err(bad)
.map(|x| x * 24 * 60 * 60),
x => x.trim().parse().map_err(bad),
}
}
pub fn to_mode(s: &str, timeout: u64, alarm: u64) -> Result<Mode, String> {
match s {
"active" => Ok(Mode::Active),
"passive" => Ok(Mode::Passive(
Duration::from_secs(timeout),
Duration::from_secs(alarm),
)),
"dark" => Ok(Mode::Dark(Duration::from_secs(timeout))),
"offline" => Ok(Mode::Off),
_ => Err(format!(
"{}: Invalid value for --mode. Must be one of active, passive, dark or offline.",
s
)),
}
}
pub fn to_block_id(s: &str) -> Result<BlockId, String> {
if s == "latest" {
Ok(BlockId::Latest)
} else if let Ok(num) = s.parse() {
Ok(BlockId::Number(num))
} else if let Ok(hash) = s.parse() {
Ok(BlockId::Hash(hash))
} else {
Err("Invalid block.".into())
}
}
pub fn to_u256(s: &str) -> Result<U256, String> {
if let Ok(decimal) = U256::from_dec_str(s) {
Ok(decimal)
} else {
clean_0x(s)
.parse()
.map_err(|_| format!("Invalid numeric value: {}", s))
}
}
pub fn to_pending_set(s: &str) -> Result<PendingSet, String> {
match s {
"cheap" => Ok(PendingSet::AlwaysQueue),
"strict" => Ok(PendingSet::AlwaysSealing),
"lenient" => Ok(PendingSet::SealingOrElseQueue),
other => Err(format!("Invalid pending set value: {:?}", other)),
}
}
pub fn to_queue_strategy(s: &str) -> Result<PrioritizationStrategy, String> {
match s {
"gas_price" => Ok(PrioritizationStrategy::GasPriceOnly),
other => Err(format!("Invalid queue strategy: {}", other)),
}
}
pub fn to_queue_penalization(time: Option<u64>) -> Result<Penalization, String> {
Ok(match time {
Some(threshold_ms) => Penalization::Enabled {
offend_threshold: Duration::from_millis(threshold_ms),
},
None => Penalization::Disabled,
})
}
pub fn to_address(s: Option<String>) -> Result<Address, String> {
match s {
Some(ref a) => clean_0x(a)
.parse()
.map_err(|_| format!("Invalid address: {:?}", a)),
None => Ok(Address::default()),
}
}
pub fn to_addresses(s: &Option<String>) -> Result<Vec<Address>, String> {
match *s {
Some(ref adds) if !adds.is_empty() => adds
.split(',')
.map(|a| {
clean_0x(a)
.parse()
.map_err(|_| format!("Invalid address: {:?}", a))
})
.collect(),
_ => Ok(Vec::new()),
}
}
/// Tries to parse string as a price.
pub fn to_price(s: &str) -> Result<f32, String> {
s.parse::<f32>().map_err(|_| {
format!(
"Invalid transaction price {:?} given. Must be a decimal number.",
s
)
})
}
pub fn join_set(set: Option<&HashSet<String>>) -> Option<String> {
set.map(|s| {
s.iter()
.map(|s| s.as_str())
.collect::<Vec<&str>>()
.join(",")
})
}
/// Flush output buffer.
pub fn flush_stdout() {
io::stdout().flush().expect("stdout is flushable; qed");
}
/// Formats and returns parity ipc path.
pub fn parity_ipc_path(base: &str, path: &str, shift: u16) -> String {
let mut path = path.to_owned();
if shift != 0 {
path = path.replace("jsonrpc.ipc", &format!("jsonrpc-{}.ipc", shift));
}
replace_home(base, &path)
}
/// Validates and formats bootnodes option.
pub fn to_bootnodes(bootnodes: &Option<String>) -> Result<Vec<String>, String> {
match *bootnodes {
Some(ref x) if !x.is_empty() => x
.split(',')
.map(|s| match validate_node_url(s).map(Into::into) {
None => Ok(s.to_owned()),
Some(sync::ErrorKind::AddressResolve(_)) => {
Err(format!("Failed to resolve hostname of a boot node: {}", s))
}
Some(_) => Err(format!(
"Invalid node address format given for a boot node: {}",
s
)),
})
.collect(),
Some(_) => Ok(vec![]),
None => Ok(vec![]),
}
}
#[cfg(test)]
pub fn default_network_config() -> crate::sync::NetworkConfiguration {
use super::network::IpFilter;
use crate::sync::NetworkConfiguration;
NetworkConfiguration {
config_path: Some(replace_home(&::dir::default_data_path(), "$BASE/network")),
net_config_path: None,
listen_address: Some("0.0.0.0:30303".into()),
public_address: None,
udp_port: None,
nat_enabled: true,
discovery_enabled: true,
boot_nodes: Vec::new(),
use_secret: None,
max_peers: 50,
min_peers: 25,
snapshot_peers: 0,
max_pending_peers: 64,
ip_filter: IpFilter::default(),
reserved_nodes: Vec::new(),
allow_non_reserved: true,
client_version: ::parity_version::version(),
}
}
pub fn to_client_config(
cache_config: &CacheConfig,
spec_name: String,
mode: Mode,
tracing: bool,
fat_db: bool,
compaction: DatabaseCompactionProfile,
vm_type: VMType,
name: String,
pruning: Algorithm,
pruning_history: u64,
pruning_memory: usize,
check_seal: bool,
max_round_blocks_to_import: usize,
) -> ClientConfig {
let mut client_config = ClientConfig::default();
let mb = 1024 * 1024;
// in bytes
client_config.blockchain.max_cache_size = cache_config.blockchain() as usize * mb;
// in bytes
client_config.blockchain.pref_cache_size = cache_config.blockchain() as usize * 3 / 4 * mb;
// db cache size, in megabytes
client_config.db_cache_size = Some(cache_config.db_cache_size() as usize);
// db queue cache size, in bytes
client_config.queue.max_mem_use = cache_config.queue() as usize * mb;
// in bytes
client_config.tracing.max_cache_size = cache_config.traces() as usize * mb;
// in bytes
client_config.tracing.pref_cache_size = cache_config.traces() as usize * 3 / 4 * mb;
// in bytes
client_config.state_cache_size = cache_config.state() as usize * mb;
// in bytes
client_config.jump_table_size = cache_config.jump_tables() as usize * mb;
// in bytes
client_config.history_mem = pruning_memory * mb;
client_config.mode = mode;
client_config.tracing.enabled = tracing;
client_config.fat_db = fat_db;
client_config.pruning = pruning;
client_config.history = pruning_history;
client_config.db_compaction = compaction;
client_config.vm_type = vm_type;
client_config.name = name;
client_config.verifier_type = if check_seal {
VerifierType::Canon
} else {
VerifierType::CanonNoSeal
};
client_config.spec_name = spec_name;
client_config.max_round_blocks_to_import = max_round_blocks_to_import;
client_config
}
pub fn execute_upgrades(
base_path: &str,
dirs: &DatabaseDirectories,
pruning: Algorithm,
compaction_profile: &DatabaseCompactionProfile,
) -> Result<(), String> {
upgrade_data_paths(base_path, dirs, pruning);
match upgrade(&dirs.path) {
Ok(upgrades_applied) if upgrades_applied > 0 => {
debug!("Executed {} upgrade scripts - ok", upgrades_applied);
}
Err(e) => {
return Err(format!("Error upgrading OpenEthereum data: {:?}", e));
}
_ => {}
}
let client_path = dirs.db_path(pruning);
migrate(&client_path, compaction_profile).map_err(|e| format!("{}", e))
}
/// Prompts user asking for password.
pub fn password_prompt() -> Result<Password, String> {
use rpassword::read_password;
const STDIN_ERROR: &'static str = "Unable to ask for password on non-interactive terminal.";
println!("Please note that password is NOT RECOVERABLE.");
print!("Type password: ");
flush_stdout();
let password = read_password().map_err(|_| STDIN_ERROR.to_owned())?.into();
print!("Repeat password: ");
flush_stdout();
let password_repeat = read_password().map_err(|_| STDIN_ERROR.to_owned())?.into();
if password != password_repeat {
return Err("Passwords do not match!".into());
}
Ok(password)
}
/// Read a password from password file.
pub fn password_from_file(path: String) -> Result<Password, String> {
let passwords = passwords_from_files(&[path])?;
// use only first password from the file
passwords
.get(0)
.map(Password::clone)
.ok_or_else(|| "Password file seems to be empty.".to_owned())
}
/// Reads passwords from files. Treats each line as a separate password.
pub fn passwords_from_files(files: &[String]) -> Result<Vec<Password>, String> {
let passwords = files.iter().map(|filename| {
let file = File::open(filename).map_err(|_| format!("{} Unable to read password file. Ensure it exists and permissions are correct.", filename))?;
let reader = BufReader::new(&file);
let lines = reader.lines()
.filter_map(|l| l.ok())
.map(|pwd| pwd.trim().to_owned().into())
.collect::<Vec<Password>>();
Ok(lines)
}).collect::<Result<Vec<Vec<Password>>, String>>();
Ok(passwords?.into_iter().flat_map(|x| x).collect())
}
#[cfg(test)]
mod tests {
use super::{
join_set, password_from_file, to_address, to_addresses, to_block_id, to_bootnodes,
to_duration, to_mode, to_pending_set, to_price, to_u256,
};
use ethcore::{
client::{BlockId, Mode},
miner::PendingSet,
};
use ethereum_types::U256;
use ethkey::Password;
use std::{collections::HashSet, fs::File, io::Write, time::Duration};
use tempdir::TempDir;
#[test]
fn test_to_duration() {
assert_eq!(
to_duration("twice-daily").unwrap(),
Duration::from_secs(12 * 60 * 60)
);
assert_eq!(
to_duration("half-hourly").unwrap(),
Duration::from_secs(30 * 60)
);
assert_eq!(to_duration("1second").unwrap(), Duration::from_secs(1));
assert_eq!(to_duration("2seconds").unwrap(), Duration::from_secs(2));
assert_eq!(to_duration("15seconds").unwrap(), Duration::from_secs(15));
assert_eq!(to_duration("1minute").unwrap(), Duration::from_secs(1 * 60));
assert_eq!(
to_duration("2minutes").unwrap(),
Duration::from_secs(2 * 60)
);
assert_eq!(
to_duration("15minutes").unwrap(),
Duration::from_secs(15 * 60)
);
assert_eq!(to_duration("hourly").unwrap(), Duration::from_secs(60 * 60));
assert_eq!(
to_duration("daily").unwrap(),
Duration::from_secs(24 * 60 * 60)
);
assert_eq!(
to_duration("1hour").unwrap(),
Duration::from_secs(1 * 60 * 60)
);
assert_eq!(
to_duration("2hours").unwrap(),
Duration::from_secs(2 * 60 * 60)
);
assert_eq!(
to_duration("15hours").unwrap(),
Duration::from_secs(15 * 60 * 60)
);
assert_eq!(
to_duration("1day").unwrap(),
Duration::from_secs(1 * 24 * 60 * 60)
);
assert_eq!(
to_duration("2days").unwrap(),
Duration::from_secs(2 * 24 * 60 * 60)
);
assert_eq!(
to_duration("15days").unwrap(),
Duration::from_secs(15 * 24 * 60 * 60)
);
assert_eq!(
to_duration("15 days").unwrap(),
Duration::from_secs(15 * 24 * 60 * 60)
);
assert_eq!(to_duration("2 seconds").unwrap(), Duration::from_secs(2));
}
#[test]
fn test_to_mode() {
assert_eq!(to_mode("active", 0, 0).unwrap(), Mode::Active);
assert_eq!(
to_mode("passive", 10, 20).unwrap(),
Mode::Passive(Duration::from_secs(10), Duration::from_secs(20))
);
assert_eq!(
to_mode("dark", 20, 30).unwrap(),
Mode::Dark(Duration::from_secs(20))
);
assert!(to_mode("other", 20, 30).is_err());
}
#[test]
fn test_to_block_id() {
assert_eq!(to_block_id("latest").unwrap(), BlockId::Latest);
assert_eq!(to_block_id("0").unwrap(), BlockId::Number(0));
assert_eq!(to_block_id("2").unwrap(), BlockId::Number(2));
assert_eq!(to_block_id("15").unwrap(), BlockId::Number(15));
assert_eq!(
to_block_id("9fc84d84f6a785dc1bd5abacfcf9cbdd3b6afb80c0f799bfb2fd42c44a0c224e")
.unwrap(),
BlockId::Hash(
"9fc84d84f6a785dc1bd5abacfcf9cbdd3b6afb80c0f799bfb2fd42c44a0c224e"
.parse()
.unwrap()
)
);
}
#[test]
fn test_to_u256() {
assert_eq!(to_u256("0").unwrap(), U256::from(0));
assert_eq!(to_u256("11").unwrap(), U256::from(11));
assert_eq!(to_u256("0x11").unwrap(), U256::from(17));
assert!(to_u256("u").is_err())
}
#[test]
fn test_pending_set() {
assert_eq!(to_pending_set("cheap").unwrap(), PendingSet::AlwaysQueue);
assert_eq!(to_pending_set("strict").unwrap(), PendingSet::AlwaysSealing);
assert_eq!(
to_pending_set("lenient").unwrap(),
PendingSet::SealingOrElseQueue
);
assert!(to_pending_set("othe").is_err());
}
#[test]
fn test_to_address() {
assert_eq!(
to_address(Some("0xD9A111feda3f362f55Ef1744347CDC8Dd9964a41".into())).unwrap(),
"D9A111feda3f362f55Ef1744347CDC8Dd9964a41".parse().unwrap()
);
assert_eq!(
to_address(Some("D9A111feda3f362f55Ef1744347CDC8Dd9964a41".into())).unwrap(),
"D9A111feda3f362f55Ef1744347CDC8Dd9964a41".parse().unwrap()
);
assert_eq!(to_address(None).unwrap(), Default::default());
}
#[test]
fn test_to_addresses() {
let addresses = to_addresses(&Some(
"0xD9A111feda3f362f55Ef1744347CDC8Dd9964a41,D9A111feda3f362f55Ef1744347CDC8Dd9964a42"
.into(),
))
.unwrap();
assert_eq!(
addresses,
vec![
"D9A111feda3f362f55Ef1744347CDC8Dd9964a41".parse().unwrap(),
"D9A111feda3f362f55Ef1744347CDC8Dd9964a42".parse().unwrap(),
]
);
}
#[test]
fn test_password() {
let tempdir = TempDir::new("").unwrap();
let path = tempdir.path().join("file");
let mut file = File::create(&path).unwrap();
file.write_all(b"a bc ").unwrap();
assert_eq!(
password_from_file(path.to_str().unwrap().into())
.unwrap()
.as_bytes(),
b"a bc"
);
}
#[test]
fn test_password_multiline() {
let tempdir = TempDir::new("").unwrap();
let path = tempdir.path().join("file");
let mut file = File::create(path.as_path()).unwrap();
file.write_all(
br#" password with trailing whitespace
those passwords should be
ignored
but the first password is trimmed
"#,
)
.unwrap();
assert_eq!(
password_from_file(path.to_str().unwrap().into()).unwrap(),
Password::from("password with trailing whitespace")
);
}
#[test]
fn test_to_price() {
assert_eq!(to_price("1").unwrap(), 1.0);
assert_eq!(to_price("2.3").unwrap(), 2.3);
assert_eq!(to_price("2.33").unwrap(), 2.33);
}
#[test]
fn test_to_bootnodes() {
let one_bootnode = "enode://e731347db0521f3476e6bbbb83375dcd7133a1601425ebd15fd10f3835fd4c304fba6282087ca5a0deeafadf0aa0d4fd56c3323331901c1f38bd181c283e3e35@128.199.55.137:30303";
let two_bootnodes = "enode://e731347db0521f3476e6bbbb83375dcd7133a1601425ebd15fd10f3835fd4c304fba6282087ca5a0deeafadf0aa0d4fd56c3323331901c1f38bd181c283e3e35@128.199.55.137:30303,enode://e731347db0521f3476e6bbbb83375dcd7133a1601425ebd15fd10f3835fd4c304fba6282087ca5a0deeafadf0aa0d4fd56c3323331901c1f38bd181c283e3e35@128.199.55.137:30303";
assert_eq!(to_bootnodes(&Some("".into())), Ok(vec![]));
assert_eq!(to_bootnodes(&None), Ok(vec![]));
assert_eq!(
to_bootnodes(&Some(one_bootnode.into())),
Ok(vec![one_bootnode.into()])
);
assert_eq!(
to_bootnodes(&Some(two_bootnodes.into())),
Ok(vec![one_bootnode.into(), one_bootnode.into()])
);
}
#[test]
fn test_join_set() {
let mut test_set = HashSet::new();
test_set.insert("0x1111111111111111111111111111111111111111".to_string());
test_set.insert("0x0000000000000000000000000000000000000000".to_string());
let res = join_set(Some(&test_set)).unwrap();
assert!(
res == "0x1111111111111111111111111111111111111111,0x0000000000000000000000000000000000000000"
||
res == "0x0000000000000000000000000000000000000000,0x1111111111111111111111111111111111111111"
);
}
}


@ -1,429 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
extern crate ansi_term;
use self::ansi_term::{
Colour,
Colour::{Blue, Cyan, Green, White, Yellow},
Style,
};
use std::{
sync::{
atomic::{AtomicBool, AtomicUsize, Ordering as AtomicOrdering},
Arc,
},
time::{Duration, Instant},
};
use crate::{
io::{IoContext, IoHandler, TimerToken},
sync::{ManageNetwork, SyncProvider},
types::BlockNumber,
};
use atty;
use ethcore::{
client::{
BlockChainClient, BlockChainInfo, BlockId, BlockInfo, BlockQueueInfo, ChainInfo,
ChainNotify, Client, ClientIoMessage, ClientReport, NewBlocks,
},
snapshot::{service::Service as SnapshotService, RestorationStatus, SnapshotService as SS},
};
use number_prefix::{binary_prefix, Prefixed, Standalone};
use parity_rpc::{informant::RpcStats, is_major_importing_or_waiting};
use parking_lot::{Mutex, RwLock};
/// Format byte counts to standard denominations.
pub fn format_bytes(b: usize) -> String {
match binary_prefix(b as f64) {
Standalone(bytes) => format!("{} bytes", bytes),
Prefixed(prefix, n) => format!("{:.0} {}B", n, prefix),
}
}
/// Something that can be converted to milliseconds.
pub trait MillisecondDuration {
/// Get the value in milliseconds.
fn as_milliseconds(&self) -> u64;
}
impl MillisecondDuration for Duration {
fn as_milliseconds(&self) -> u64 {
self.as_secs() * 1000 + self.subsec_nanos() as u64 / 1_000_000
}
}
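// A minimal sketch (hypothetical test, values chosen for illustration) of the two display
// helpers above: `format_bytes` picks a binary prefix for the byte count, and
// `as_milliseconds` flattens a `Duration` into whole milliseconds.
#[cfg(test)]
mod display_helper_sketch {
    use super::{format_bytes, MillisecondDuration};
    use std::time::Duration;
    #[test]
    fn formats_bytes_and_milliseconds() {
        assert_eq!(format_bytes(500), "500 bytes");
        assert_eq!(format_bytes(3 * 1024 * 1024), "3 MiB");
        assert_eq!(Duration::from_millis(1500).as_milliseconds(), 1500);
    }
}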
#[derive(Default)]
struct CacheSizes {
sizes: ::std::collections::BTreeMap<&'static str, usize>,
}
impl CacheSizes {
fn insert(&mut self, key: &'static str, bytes: usize) {
self.sizes.insert(key, bytes);
}
fn display<F>(&self, style: Style, paint: F) -> String
where
F: Fn(Style, String) -> String,
{
use std::fmt::Write;
let mut buf = String::new();
for (name, &size) in &self.sizes {
write!(buf, " {:>8} {}", paint(style, format_bytes(size)), name)
.expect("writing to string won't fail unless OOM; qed")
}
buf
}
}
pub struct SyncInfo {
last_imported_block_number: BlockNumber,
last_imported_ancient_number: Option<BlockNumber>,
num_peers: usize,
max_peers: u32,
snapshot_sync: bool,
}
pub struct Report {
importing: bool,
chain_info: BlockChainInfo,
client_report: ClientReport,
queue_info: BlockQueueInfo,
cache_sizes: CacheSizes,
sync_info: Option<SyncInfo>,
}
/// Something which can provide data to the informant.
pub trait InformantData: Send + Sync {
/// Whether it executes transactions
fn executes_transactions(&self) -> bool;
/// Whether it is currently importing (also included in `Report`)
fn is_major_importing(&self) -> bool;
/// Generate a report of blockchain status, memory usage, and sync info.
fn report(&self) -> Report;
}
/// Informant data for a full node.
pub struct FullNodeInformantData {
pub client: Arc<Client>,
pub sync: Option<Arc<dyn SyncProvider>>,
pub net: Option<Arc<dyn ManageNetwork>>,
}
impl InformantData for FullNodeInformantData {
fn executes_transactions(&self) -> bool {
true
}
fn is_major_importing(&self) -> bool {
let state = self.sync.as_ref().map(|sync| sync.status().state);
is_major_importing_or_waiting(state, self.client.queue_info(), false)
}
fn report(&self) -> Report {
let (client_report, queue_info, blockchain_cache_info) = (
self.client.report(),
self.client.queue_info(),
self.client.blockchain_cache_info(),
);
let chain_info = self.client.chain_info();
let mut cache_sizes = CacheSizes::default();
cache_sizes.insert("queue", queue_info.mem_used);
cache_sizes.insert("chain", blockchain_cache_info.total());
let importing = self.is_major_importing();
let sync_info = match (self.sync.as_ref(), self.net.as_ref()) {
(Some(sync), Some(net)) => {
let status = sync.status();
let num_peers_range = net.num_peers_range();
debug_assert!(num_peers_range.end() >= num_peers_range.start());
Some(SyncInfo {
last_imported_block_number: status
.last_imported_block_number
.unwrap_or(chain_info.best_block_number),
last_imported_ancient_number: status.last_imported_old_block_number,
num_peers: status.num_peers,
max_peers: status
.current_max_peers(*num_peers_range.start(), *num_peers_range.end()),
snapshot_sync: status.is_snapshot_syncing(),
})
}
_ => None,
};
Report {
importing,
chain_info,
client_report,
queue_info,
cache_sizes,
sync_info,
}
}
}
pub struct Informant<T> {
last_tick: RwLock<Instant>,
with_color: bool,
target: T,
snapshot: Option<Arc<SnapshotService>>,
rpc_stats: Option<Arc<RpcStats>>,
last_import: Mutex<Instant>,
skipped: AtomicUsize,
skipped_txs: AtomicUsize,
in_shutdown: AtomicBool,
last_report: Mutex<ClientReport>,
}
impl<T: InformantData> Informant<T> {
/// Make a new instance, potentially with colored (`with_color`) output.
pub fn new(
target: T,
snapshot: Option<Arc<SnapshotService>>,
rpc_stats: Option<Arc<RpcStats>>,
with_color: bool,
) -> Self {
Informant {
last_tick: RwLock::new(Instant::now()),
with_color: with_color,
target: target,
snapshot: snapshot,
rpc_stats: rpc_stats,
last_import: Mutex::new(Instant::now()),
skipped: AtomicUsize::new(0),
skipped_txs: AtomicUsize::new(0),
in_shutdown: AtomicBool::new(false),
last_report: Mutex::new(Default::default()),
}
}
/// Signal that we're shutting down; no more output necessary.
pub fn shutdown(&self) {
self.in_shutdown
.store(true, ::std::sync::atomic::Ordering::SeqCst);
}
pub fn tick(&self) {
let now = Instant::now();
let elapsed;
{
let last_tick = self.last_tick.read();
if now < *last_tick + Duration::from_millis(1500) {
return;
}
elapsed = now - *last_tick;
}
let (client_report, full_report) = {
let last_report = self.last_report.lock();
let full_report = self.target.report();
let diffed = full_report.client_report.clone() - &*last_report;
(diffed, full_report)
};
let Report {
importing,
chain_info,
queue_info,
cache_sizes,
sync_info,
..
} = full_report;
let rpc_stats = self.rpc_stats.as_ref();
let snapshot_sync = sync_info.as_ref().map_or(false, |s| s.snapshot_sync)
&& self
.snapshot
.as_ref()
.map_or(false, |s| match s.restoration_status() {
RestorationStatus::Ongoing { .. } | RestorationStatus::Initializing { .. } => {
true
}
_ => false,
});
if !importing && !snapshot_sync && elapsed < Duration::from_secs(30) {
return;
}
*self.last_tick.write() = now;
*self.last_report.lock() = full_report.client_report.clone();
let paint = |c: Style, t: String| match self.with_color && atty::is(atty::Stream::Stdout) {
true => format!("{}", c.paint(t)),
false => t,
};
info!(target: "import", "{}{} {} {} {}",
match importing {
true => match snapshot_sync {
false => format!("Syncing {} {} {} {}+{} Qed",
paint(White.bold(), format!("{:>8}", format!("#{}", chain_info.best_block_number))),
paint(White.bold(), format!("{}", chain_info.best_block_hash)),
if self.target.executes_transactions() {
format!("{} blk/s {} tx/s {} Mgas/s",
paint(Yellow.bold(), format!("{:7.2}", (client_report.blocks_imported * 1000) as f64 / elapsed.as_milliseconds() as f64)),
paint(Yellow.bold(), format!("{:6.1}", (client_report.transactions_applied * 1000) as f64 / elapsed.as_milliseconds() as f64)),
paint(Yellow.bold(), format!("{:6.1}", (client_report.gas_processed / 1000).low_u64() as f64 / elapsed.as_milliseconds() as f64))
)
} else {
format!("{} hdr/s",
paint(Yellow.bold(), format!("{:6.1}", (client_report.blocks_imported * 1000) as f64 / elapsed.as_milliseconds() as f64))
)
},
paint(Green.bold(), format!("{:5}", queue_info.unverified_queue_size)),
paint(Green.bold(), format!("{:5}", queue_info.verified_queue_size))
),
true => {
self.snapshot.as_ref().map_or(String::new(), |s|
match s.restoration_status() {
RestorationStatus::Ongoing { state_chunks, block_chunks, state_chunks_done, block_chunks_done, .. } => {
format!("Syncing snapshot {}/{}", state_chunks_done + block_chunks_done, state_chunks + block_chunks)
},
RestorationStatus::Initializing { chunks_done } => {
format!("Snapshot initializing ({} chunks restored)", chunks_done)
},
_ => String::new(),
}
)
},
},
false => String::new(),
},
match chain_info.ancient_block_number {
Some(ancient_number) => format!(" Ancient:#{}", ancient_number),
None => String::new(),
},
match sync_info.as_ref() {
Some(ref sync_info) => format!("{}{}/{} peers",
match importing {
true => format!("{}",
if self.target.executes_transactions() {
paint(Green.bold(), format!("{:>8} ", format!("LI:#{}", sync_info.last_imported_block_number)))
} else {
String::new()
}
),
false => match sync_info.last_imported_ancient_number {
Some(number) => format!("{} ", paint(Yellow.bold(), format!("{:>8}", format!("AB:#{}", number)))),
None => String::new(),
}
},
paint(Cyan.bold(), format!("{:2}", sync_info.num_peers)),
paint(Cyan.bold(), format!("{:2}", sync_info.max_peers)),
),
_ => String::new(),
},
cache_sizes.display(Blue.bold(), &paint),
match rpc_stats {
Some(ref rpc_stats) => format!(
"RPC: {} conn, {} req/s, {} µs",
paint(Blue.bold(), format!("{:2}", rpc_stats.sessions())),
paint(Blue.bold(), format!("{:4}", rpc_stats.requests_rate())),
paint(Blue.bold(), format!("{:4}", rpc_stats.approximated_roundtrip())),
),
_ => String::new(),
},
);
}
}
impl ChainNotify for Informant<FullNodeInformantData> {
// t_nb 11.2 Informant. Prints new block inclusion to console/log.
fn new_blocks(&self, new_blocks: NewBlocks) {
if new_blocks.has_more_blocks_to_import {
return;
}
let mut last_import = self.last_import.lock();
let client = &self.target.client;
let importing = self.target.is_major_importing();
let ripe = Instant::now() > *last_import + Duration::from_secs(1) && !importing;
let txs_imported = new_blocks
.imported
.iter()
.take(
new_blocks
.imported
.len()
.saturating_sub(if ripe { 1 } else { 0 }),
)
.filter_map(|h| client.block(BlockId::Hash(*h)))
.map(|b| b.transactions_count())
.sum();
if ripe {
if let Some(block) = new_blocks
.imported
.last()
.and_then(|h| client.block(BlockId::Hash(*h)))
{
let header_view = block.header_view();
let size = block.rlp().as_raw().len();
let (skipped, skipped_txs) = (
self.skipped.load(AtomicOrdering::Relaxed) + new_blocks.imported.len() - 1,
self.skipped_txs.load(AtomicOrdering::Relaxed) + txs_imported,
);
info!(target: "import", "Imported {} {} ({} txs, {} Mgas, {} ms, {} KiB){}",
Colour::White.bold().paint(format!("#{}", header_view.number())),
Colour::White.bold().paint(format!("{}", header_view.hash())),
Colour::Yellow.bold().paint(format!("{}", block.transactions_count())),
Colour::Yellow.bold().paint(format!("{:.2}", header_view.gas_used().low_u64() as f32 / 1000000f32)),
Colour::Purple.bold().paint(format!("{}", new_blocks.duration.as_milliseconds())),
Colour::Blue.bold().paint(format!("{:.2}", size as f32 / 1024f32)),
if skipped > 0 {
format!(" + another {} block(s) containing {} tx(s)",
Colour::Red.bold().paint(format!("{}", skipped)),
Colour::Red.bold().paint(format!("{}", skipped_txs))
)
} else {
String::new()
}
);
self.skipped.store(0, AtomicOrdering::Relaxed);
self.skipped_txs.store(0, AtomicOrdering::Relaxed);
*last_import = Instant::now();
}
} else {
self.skipped
.fetch_add(new_blocks.imported.len(), AtomicOrdering::Relaxed);
self.skipped_txs
.fetch_add(txs_imported, AtomicOrdering::Relaxed);
}
}
}
const INFO_TIMER: TimerToken = 0;
impl<T: InformantData> IoHandler<ClientIoMessage> for Informant<T> {
fn initialize(&self, io: &IoContext<ClientIoMessage>) {
io.register_timer(INFO_TIMER, Duration::from_secs(5))
.expect("Error registering timer");
}
fn timeout(&self, _io: &IoContext<ClientIoMessage>, timer: TimerToken) {
if timer == INFO_TIMER && !self.in_shutdown.load(AtomicOrdering::SeqCst) {
self.tick();
}
}
}


@ -1,240 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Ethcore client application.
#![warn(missing_docs)]
extern crate ansi_term;
extern crate docopt;
#[macro_use]
extern crate clap;
extern crate atty;
extern crate dir;
extern crate futures;
extern crate jsonrpc_core;
extern crate num_cpus;
extern crate number_prefix;
extern crate parking_lot;
extern crate regex;
extern crate rlp;
extern crate rpassword;
extern crate rustc_hex;
extern crate semver;
extern crate serde;
extern crate serde_json;
#[macro_use]
extern crate serde_derive;
extern crate toml;
extern crate blooms_db;
extern crate cli_signer;
extern crate common_types as types;
extern crate ethcore;
extern crate ethcore_call_contract as call_contract;
extern crate ethcore_db;
extern crate ethcore_io as io;
extern crate ethcore_logger;
extern crate ethcore_miner as miner;
extern crate ethcore_network as network;
extern crate ethcore_service;
extern crate ethcore_sync as sync;
extern crate ethereum_types;
extern crate ethkey;
extern crate ethstore;
extern crate fetch;
extern crate hyper;
extern crate journaldb;
extern crate keccak_hash as hash;
extern crate kvdb;
extern crate node_filter;
extern crate parity_bytes as bytes;
extern crate parity_crypto as crypto;
extern crate parity_local_store as local_store;
extern crate parity_path as path;
extern crate parity_rpc;
extern crate parity_runtime;
extern crate parity_version;
extern crate prometheus;
extern crate stats;
#[macro_use]
extern crate log as rlog;
#[cfg(feature = "ethcore-accounts")]
extern crate ethcore_accounts as accounts;
#[cfg(feature = "secretstore")]
extern crate ethcore_secretstore;
#[cfg(test)]
#[macro_use]
extern crate pretty_assertions;
#[cfg(test)]
extern crate tempdir;
#[cfg(test)]
#[macro_use]
extern crate lazy_static;
mod account;
mod account_utils;
mod blockchain;
mod cache;
mod cli;
mod configuration;
mod db;
mod helpers;
mod informant;
mod metrics;
mod modules;
mod params;
mod presale;
mod rpc;
mod rpc_apis;
mod run;
mod secretstore;
mod signer;
mod snapshot;
mod upgrade;
mod user_defaults;
use std::{fs::File, io::BufReader, sync::Arc};
use crate::{
cli::Args,
configuration::{Cmd, Execute},
hash::keccak_buffer,
};
#[cfg(feature = "memory_profiling")]
use std::alloc::System;
pub use self::{configuration::Configuration, run::RunningClient};
pub use ethcore_logger::{setup_log, Config as LoggerConfig, RotatingLogger};
pub use parity_rpc::PubSubSession;
#[cfg(feature = "memory_profiling")]
#[global_allocator]
static A: System = System;
fn print_hash_of(maybe_file: Option<String>) -> Result<String, String> {
if let Some(file) = maybe_file {
let mut f =
BufReader::new(File::open(&file).map_err(|_| "Unable to open file".to_owned())?);
let hash = keccak_buffer(&mut f).map_err(|_| "Unable to read from file".to_owned())?;
Ok(format!("{:x}", hash))
} else {
Err("Streaming from standard input not yet supported. Specify a file.".to_owned())
}
}
#[cfg(feature = "deadlock_detection")]
fn run_deadlock_detection_thread() {
use ansi_term::Style;
use parking_lot::deadlock;
use std::{thread, time::Duration};
info!("Starting deadlock detection thread.");
// Create a background thread which checks for deadlocks every 10s
thread::spawn(move || loop {
thread::sleep(Duration::from_secs(10));
let deadlocks = deadlock::check_deadlock();
if deadlocks.is_empty() {
continue;
}
warn!(
"{} {} detected",
deadlocks.len(),
Style::new().bold().paint("deadlock(s)")
);
for (i, threads) in deadlocks.iter().enumerate() {
warn!("{} #{}", Style::new().bold().paint("Deadlock"), i);
for t in threads {
warn!("Thread Id {:#?}", t.thread_id());
warn!("{:#?}", t.backtrace());
}
}
});
}
/// Action that OpenEthereum performed when running `start`.
pub enum ExecutionAction {
/// The execution didn't require starting a node, and thus has finished.
/// Contains the string to print on stdout, if any.
Instant(Option<String>),
/// The client has started running and must be shut down manually by calling `shutdown`.
///
/// If you don't call `shutdown()`, execution will continue in the background.
Running(RunningClient),
}
fn execute(command: Execute, logger: Arc<RotatingLogger>) -> Result<ExecutionAction, String> {
#[cfg(feature = "deadlock_detection")]
run_deadlock_detection_thread();
match command.cmd {
Cmd::Run(run_cmd) => {
let outcome = run::execute(run_cmd, logger)?;
Ok(ExecutionAction::Running(outcome))
}
Cmd::Version => Ok(ExecutionAction::Instant(Some(Args::print_version()))),
Cmd::Hash(maybe_file) => {
print_hash_of(maybe_file).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::Account(account_cmd) => {
account::execute(account_cmd).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::ImportPresaleWallet(presale_cmd) => {
presale::execute(presale_cmd).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::Blockchain(blockchain_cmd) => {
blockchain::execute(blockchain_cmd).map(|_| ExecutionAction::Instant(None))
}
Cmd::SignerToken(ws_conf, logger_config) => {
signer::execute(ws_conf, logger_config).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::SignerSign {
id,
pwfile,
port,
authfile,
} => cli_signer::signer_sign(id, pwfile, port, authfile)
.map(|s| ExecutionAction::Instant(Some(s))),
Cmd::SignerList { port, authfile } => {
cli_signer::signer_list(port, authfile).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::SignerReject { id, port, authfile } => {
cli_signer::signer_reject(id, port, authfile).map(|s| ExecutionAction::Instant(Some(s)))
}
Cmd::Snapshot(snapshot_cmd) => {
snapshot::execute(snapshot_cmd).map(|s| ExecutionAction::Instant(Some(s)))
}
}
}
/// Starts the OpenEthereum client.
///
/// The first parameter is the command line arguments that you would pass when running the openethereum
/// binary.
///
/// On error, returns what to print on stderr.
// FIXME: totally independent logging capability, see https://github.com/openethereum/openethereum/issues/10252
pub fn start(conf: Configuration, logger: Arc<RotatingLogger>) -> Result<ExecutionAction, String> {
execute(conf.into_command()?, logger)
}


@ -1,17 +0,0 @@
[package]
description = "Parity Ethereum Logger Implementation"
name = "ethcore-logger"
version = "1.12.0"
license = "GPL-3.0"
authors = ["Parity Technologies <admin@parity.io>"]
[dependencies]
log = "0.4"
env_logger = "0.5"
atty = "0.2"
lazy_static = "1.0"
regex = "1.0"
time = "0.1"
parking_lot = "0.11.1"
arrayvec = "0.4"
ansi_term = "0.10"


@ -1,191 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Logger for OpenEthereum executables
extern crate ansi_term;
extern crate arrayvec;
extern crate atty;
extern crate env_logger;
extern crate log as rlog;
extern crate parking_lot;
extern crate regex;
extern crate time;
#[macro_use]
extern crate lazy_static;
mod rotating;
use ansi_term::Colour;
use env_logger::{Builder as LogBuilder, Formatter};
use parking_lot::Mutex;
use regex::Regex;
use std::{
env, fs,
io::Write,
sync::{Arc, Weak},
thread,
};
pub use rotating::{init_log, RotatingLogger};
#[derive(Debug, PartialEq, Clone)]
pub struct Config {
pub mode: Option<String>,
pub color: bool,
pub file: Option<String>,
}
impl Default for Config {
fn default() -> Self {
Config {
mode: None,
color: !cfg!(windows),
file: None,
}
}
}
lazy_static! {
static ref ROTATING_LOGGER: Mutex<Weak<RotatingLogger>> = Mutex::new(Default::default());
}
/// Sets up the logger
pub fn setup_log(config: &Config) -> Result<Arc<RotatingLogger>, String> {
use rlog::*;
let mut levels = String::new();
let mut builder = LogBuilder::new();
// Disable info logging by default for some modules:
builder.filter(Some("ws"), LevelFilter::Warn);
builder.filter(Some("hyper"), LevelFilter::Warn);
builder.filter(Some("rustls"), LevelFilter::Error);
// Enable info for others.
builder.filter(None, LevelFilter::Info);
if let Ok(lvl) = env::var("RUST_LOG") {
levels.push_str(&lvl);
levels.push_str(",");
builder.parse(&lvl);
}
if let Some(ref s) = config.mode {
levels.push_str(s);
builder.parse(s);
}
let isatty = atty::is(atty::Stream::Stderr);
let enable_color = config.color && isatty;
let logs = Arc::new(RotatingLogger::new(levels));
let logger = logs.clone();
let mut open_options = fs::OpenOptions::new();
let maybe_file = match config.file.as_ref() {
Some(f) => Some(
open_options
.append(true)
.create(true)
.open(f)
.map_err(|e| format!("Cannot write to log file given: {}, {}", f, e))?,
),
None => None,
};
let format = move |buf: &mut Formatter, record: &Record| {
let timestamp = time::strftime("%Y-%m-%d %H:%M:%S %Z", &time::now()).unwrap();
let with_color = if max_level() <= LevelFilter::Info {
format!(
"{} {}",
Colour::Black.bold().paint(timestamp),
record.args()
)
} else {
let name = thread::current().name().map_or_else(Default::default, |x| {
format!("{}", Colour::Blue.bold().paint(x))
});
format!(
"{} {} {} {} {}",
Colour::Black.bold().paint(timestamp),
name,
record.level(),
record.target(),
record.args()
)
};
let removed_color = kill_color(with_color.as_ref());
let ret = match enable_color {
true => with_color,
false => removed_color.clone(),
};
if let Some(mut file) = maybe_file.as_ref() {
// ignore errors - there's nothing we can do
let _ = file.write_all(removed_color.as_bytes());
let _ = file.write_all(b"\n");
}
logger.append(removed_color);
if !isatty && record.level() <= Level::Info && atty::is(atty::Stream::Stdout) {
// duplicate INFO/WARN output to console
println!("{}", ret);
}
writeln!(buf, "{}", ret)
};
builder.format(format);
builder
.try_init()
.and_then(|_| {
*ROTATING_LOGGER.lock() = Arc::downgrade(&logs);
Ok(logs)
})
// couldn't create new logger - try to fall back on previous logger.
.or_else(|err| {
ROTATING_LOGGER
.lock()
.upgrade()
.ok_or_else(|| format!("{:?}", err))
})
}
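// A minimal usage sketch (hypothetical helper; the `sync=debug` filter string is for
// illustration only): build a `Config` that raises the `sync` target to debug, keeps
// colored output, writes no log file, and then installs it with `setup_log`.
#[allow(dead_code)]
fn example_setup() -> Result<Arc<RotatingLogger>, String> {
    let config = Config {
        mode: Some("sync=debug".into()),
        color: true,
        file: None,
    };
    setup_log(&config)
}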
fn kill_color(s: &str) -> String {
lazy_static! {
static ref RE: Regex = Regex::new("\x1b\\[[^m]+m").unwrap();
}
RE.replace_all(s, "").to_string()
}
#[test]
fn should_remove_colour() {
let before = "test";
let after = kill_color(&Colour::Red.bold().paint(before));
assert_eq!(after, "test");
}
#[test]
fn should_remove_multiple_colour() {
let t = format!(
"{} {}",
Colour::Red.bold().paint("test"),
Colour::White.normal().paint("again")
);
let after = kill_color(&t);
assert_eq!(after, "test again");
}


@ -1,121 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Common log helper functions
use arrayvec::ArrayVec;
use env_logger::Builder as LogBuilder;
use rlog::LevelFilter;
use std::env;
use parking_lot::{RwLock, RwLockReadGuard};
lazy_static! {
static ref LOG_DUMMY: () = {
let mut builder = LogBuilder::new();
builder.filter(None, LevelFilter::Info);
if let Ok(log) = env::var("RUST_LOG") {
builder.parse(&log);
}
if builder.try_init().is_err() {
println!("logger initialization failed!");
}
};
}
/// Initialize log with default settings
pub fn init_log() {
*LOG_DUMMY
}
const LOG_SIZE: usize = 128;
/// Logger implementation that keeps up to `LOG_SIZE` log elements.
pub struct RotatingLogger {
/// Defined logger levels
levels: String,
/// Logs array. Latest log is always at index 0
logs: RwLock<ArrayVec<[String; LOG_SIZE]>>,
}
impl RotatingLogger {
/// Creates new `RotatingLogger` with given levels.
/// It does not enforce levels - they are stored read-only for inspection.
pub fn new(levels: String) -> Self {
RotatingLogger {
levels: levels,
logs: RwLock::new(ArrayVec::<[_; LOG_SIZE]>::new()),
}
}
/// Append new log entry
pub fn append(&self, log: String) {
let mut logs = self.logs.write();
if logs.is_full() {
logs.pop();
}
logs.insert(0, log);
}
/// Return levels
pub fn levels(&self) -> &str {
&self.levels
}
/// Return logs
pub fn logs(&self) -> RwLockReadGuard<ArrayVec<[String; LOG_SIZE]>> {
self.logs.read()
}
}
#[cfg(test)]
mod test {
use super::RotatingLogger;
fn logger() -> RotatingLogger {
RotatingLogger::new("test".to_owned())
}
#[test]
fn should_return_log_levels() {
// given
let logger = logger();
// when
let levels = logger.levels();
// then
assert_eq!(levels, "test");
}
#[test]
fn should_return_latest_logs() {
// given
let logger = logger();
// when
logger.append("a".to_owned());
logger.append("b".to_owned());
// then
let logs = logger.logs();
assert_eq!(logs[0], "b".to_owned());
assert_eq!(logs[1], "a".to_owned());
assert_eq!(logs.len(), 2);
}
}


@ -1,180 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Ethcore client application.
#![warn(missing_docs)]
extern crate ctrlc;
extern crate dir;
extern crate fdlimit;
#[macro_use]
extern crate log;
extern crate ansi_term;
extern crate openethereum;
extern crate panic_hook;
extern crate parity_daemonize;
extern crate parking_lot;
extern crate ethcore_logger;
#[cfg(windows)]
extern crate winapi;
use std::{
io::Write,
process,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
};
use ansi_term::Colour;
use ctrlc::CtrlC;
use ethcore_logger::setup_log;
use fdlimit::raise_fd_limit;
use openethereum::{start, ExecutionAction};
use parity_daemonize::AsHandle;
use parking_lot::{Condvar, Mutex};
#[derive(Debug)]
/// Status used to exit or restart the program.
struct ExitStatus {
/// Whether the program panicked.
panicking: bool,
/// Whether the program should exit.
should_exit: bool,
}
fn main() -> Result<(), i32> {
let conf = {
let args = std::env::args().collect::<Vec<_>>();
openethereum::Configuration::parse_cli(&args).unwrap_or_else(|e| e.exit())
};
let logger = setup_log(&conf.logger_config()).unwrap_or_else(|e| {
eprintln!("{}", e);
process::exit(2)
});
// FIXME: `pid_file` shouldn't need to cloned here
// see: `https://github.com/paritytech/parity-daemonize/pull/13` for more info
let handle = if let Some(pid) = conf.args.arg_daemon_pid_file.clone() {
info!(
"{}",
Colour::Blue.paint("starting in daemon mode").to_string()
);
let _ = std::io::stdout().flush();
match parity_daemonize::daemonize(pid) {
Ok(h) => Some(h),
Err(e) => {
error!("{}", Colour::Red.paint(format!("{}", e)));
return Err(1);
}
}
} else {
None
};
// increase max number of open files
raise_fd_limit();
let exit = Arc::new((
Mutex::new(ExitStatus {
panicking: false,
should_exit: false,
}),
Condvar::new(),
));
// A double panic can happen, so `exiting` guards the lock: once the main thread has been
// notified, `ExitStatus` is not locked again.
let exiting = Arc::new(AtomicBool::new(false));
trace!(target: "mode", "Not hypervised: not setting exit handlers.");
let exec = start(conf, logger);
match exec {
Ok(result) => match result {
ExecutionAction::Instant(output) => {
if let Some(s) = output {
println!("{}", s);
}
}
ExecutionAction::Running(client) => {
panic_hook::set_with({
let e = exit.clone();
let exiting = exiting.clone();
move |panic_msg| {
warn!("Panic occured, see stderr for details");
eprintln!("{}", panic_msg);
if !exiting.swap(true, Ordering::SeqCst) {
*e.0.lock() = ExitStatus {
panicking: true,
should_exit: true,
};
e.1.notify_all();
}
}
});
CtrlC::set_handler({
let e = exit.clone();
let exiting = exiting.clone();
move || {
if !exiting.swap(true, Ordering::SeqCst) {
*e.0.lock() = ExitStatus {
panicking: false,
should_exit: true,
};
e.1.notify_all();
}
}
});
// so the client has started successfully
// if this is a daemon, detach from the parent process
if let Some(mut handle) = handle {
handle.detach()
}
// Wait for signal
let mut lock = exit.0.lock();
if !lock.should_exit {
let _ = exit.1.wait(&mut lock);
}
client.shutdown();
if lock.panicking {
return Err(1);
}
}
},
Err(err) => {
// error occurred during startup
// if this is a daemon, detach from the parent process
if let Some(mut handle) = handle {
handle.detach_with_msg(format!("{}", Colour::Red.paint(&err)))
}
eprintln!("{}", err);
return Err(1);
}
};
Ok(())
}


@ -1,117 +0,0 @@
use std::{sync::Arc, time::Instant};
use crate::{futures::Future, rpc, rpc_apis};
use parking_lot::Mutex;
use hyper::{service::service_fn_ok, Body, Method, Request, Response, Server, StatusCode};
use stats::{
prometheus::{self, Encoder},
PrometheusMetrics, PrometheusRegistry,
};
#[derive(Debug, Clone, PartialEq)]
pub struct MetricsConfiguration {
/// Are metrics enabled (default is false)?
pub enabled: bool,
/// Prefix
pub prefix: String,
/// The IP of the network interface used (default is 127.0.0.1).
pub interface: String,
/// The network port (default is 3000).
pub port: u16,
}
impl Default for MetricsConfiguration {
fn default() -> Self {
MetricsConfiguration {
enabled: false,
prefix: "".into(),
interface: "127.0.0.1".into(),
port: 3000,
}
}
}
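// A minimal sketch (hypothetical helper, not part of the original module): enable the
// exporter while keeping the defaults above, so that `GET http://127.0.0.1:3000/metrics`
// is served by `handle_request` below once this is passed to `start_prometheus_metrics`.
#[allow(dead_code)]
fn example_metrics_configuration() -> MetricsConfiguration {
    MetricsConfiguration {
        enabled: true,
        // Keep the default prefix, interface and port.
        ..MetricsConfiguration::default()
    }
}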
struct State {
rpc_apis: Arc<rpc_apis::FullDependencies>,
}
fn handle_request(
req: Request<Body>,
conf: Arc<MetricsConfiguration>,
state: Arc<Mutex<State>>,
) -> Response<Body> {
let (parts, _body) = req.into_parts();
match (parts.method, parts.uri.path()) {
(Method::GET, "/metrics") => {
let start = Instant::now();
let mut reg = PrometheusRegistry::new(conf.prefix.clone());
let state = state.lock();
state.rpc_apis.client.prometheus_metrics(&mut reg);
state.rpc_apis.sync.prometheus_metrics(&mut reg);
let elapsed = start.elapsed();
reg.register_gauge(
"metrics_time",
"Time to perform rpc metrics",
elapsed.as_millis() as i64,
);
let mut buffer = vec![];
let encoder = prometheus::TextEncoder::new();
let metric_families = reg.registry().gather();
encoder
.encode(&metric_families, &mut buffer)
.expect("all source of metrics are static; qed");
let text = String::from_utf8(buffer).expect("metrics encoding is ASCII; qed");
Response::new(Body::from(text))
}
(_, _) => {
let mut res = Response::new(Body::from("not found"));
*res.status_mut() = StatusCode::NOT_FOUND;
res
}
}
}
/// Start the Prometheus metrics server accessible via GET <host>:<port>/metrics
pub fn start_prometheus_metrics(
conf: &MetricsConfiguration,
deps: &rpc::Dependencies<rpc_apis::FullDependencies>,
) -> Result<(), String> {
if !conf.enabled {
return Ok(());
}
let addr = format!("{}:{}", conf.interface, conf.port);
let addr = addr
.parse()
.map_err(|err| format!("Failed to parse address '{}': {}", addr, err))?;
let state = State {
rpc_apis: deps.apis.clone(),
};
let state = Arc::new(Mutex::new(state));
let conf = Arc::new(conf.to_owned());
let server = Server::bind(&addr)
.serve(move || {
// This is the `Service` that will handle the connection.
// `service_fn_ok` is a helper to convert a function that
// returns a Response into a `Service`.
let state = state.clone();
let conf = conf.clone();
service_fn_ok(move |req: Request<Body>| {
handle_request(req, conf.clone(), state.clone())
})
})
.map_err(|e| eprintln!("server error: {}", e));
info!("Started prometeus metrics at http://{}/metrics", addr);
deps.executor.spawn(server);
Ok(())
}


@ -1,63 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::sync::{mpsc, Arc};
use crate::{
sync::{self, ConnectionFilter, NetworkConfiguration, Params, SyncConfig},
types::BlockNumber,
};
use ethcore::{client::BlockChainClient, snapshot::SnapshotService};
use std::collections::BTreeSet;
pub use crate::sync::{EthSync, ManageNetwork, SyncProvider};
pub use ethcore::client::ChainNotify;
use ethcore_logger::Config as LogConfig;
pub type SyncModules = (
Arc<dyn SyncProvider>,
Arc<dyn ManageNetwork>,
Arc<dyn ChainNotify>,
mpsc::Sender<sync::PriorityTask>,
);
pub fn sync(
config: SyncConfig,
network_config: NetworkConfiguration,
chain: Arc<dyn BlockChainClient>,
forks: BTreeSet<BlockNumber>,
snapshot_service: Arc<dyn SnapshotService>,
_log_settings: &LogConfig,
connection_filter: Option<Arc<dyn ConnectionFilter>>,
) -> Result<SyncModules, sync::Error> {
let eth_sync = EthSync::new(
Params {
config,
chain,
forks,
snapshot_service,
network_config,
},
connection_filter,
)?;
Ok((
eth_sync.clone() as Arc<dyn SyncProvider>,
eth_sync.clone() as Arc<dyn ManageNetwork>,
eth_sync.clone() as Arc<dyn ChainNotify>,
eth_sync.priority_tasks(),
))
}

View File

@ -1,542 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{collections::HashSet, fmt, fs, num::NonZeroU32, str, time::Duration};
use crate::{
miner::{
gas_price_calibrator::{GasPriceCalibrator, GasPriceCalibratorOptions},
gas_pricer::GasPricer,
},
user_defaults::UserDefaults,
};
use ethcore::{
client::Mode,
ethereum,
spec::{Spec, SpecParams},
};
use ethereum_types::{Address, U256};
use fetch::Client as FetchClient;
use journaldb::Algorithm;
use parity_runtime::Executor;
use parity_version::version_data;
use crate::configuration;
#[derive(Debug, PartialEq)]
pub enum SpecType {
Foundation,
Poanet,
Xdai,
Volta,
Ewc,
Musicoin,
Ellaism,
Mix,
Callisto,
Morden,
Ropsten,
Kovan,
Rinkeby,
Goerli,
Sokol,
Yolo3,
Dev,
Custom(String),
}
impl Default for SpecType {
fn default() -> Self {
SpecType::Foundation
}
}
impl str::FromStr for SpecType {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let spec = match s {
"eth" | "ethereum" | "foundation" | "mainnet" => SpecType::Foundation,
"poanet" | "poacore" => SpecType::Poanet,
"xdai" => SpecType::Xdai,
"volta" => SpecType::Volta,
"ewc" | "energyweb" => SpecType::Ewc,
"musicoin" => SpecType::Musicoin,
"ellaism" => SpecType::Ellaism,
"mix" => SpecType::Mix,
"callisto" => SpecType::Callisto,
"morden" => SpecType::Morden,
"ropsten" => SpecType::Ropsten,
"kovan" => SpecType::Kovan,
"rinkeby" => SpecType::Rinkeby,
"goerli" | "görli" | "testnet" => SpecType::Goerli,
"sokol" | "poasokol" => SpecType::Sokol,
"yolo3" => SpecType::Yolo3,
"dev" => SpecType::Dev,
other => SpecType::Custom(other.into()),
};
Ok(spec)
}
}
impl fmt::Display for SpecType {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.write_str(match *self {
SpecType::Foundation => "foundation",
SpecType::Poanet => "poanet",
SpecType::Xdai => "xdai",
SpecType::Volta => "volta",
SpecType::Ewc => "energyweb",
SpecType::Musicoin => "musicoin",
SpecType::Ellaism => "ellaism",
SpecType::Mix => "mix",
SpecType::Callisto => "callisto",
SpecType::Morden => "morden",
SpecType::Ropsten => "ropsten",
SpecType::Kovan => "kovan",
SpecType::Rinkeby => "rinkeby",
SpecType::Goerli => "goerli",
SpecType::Sokol => "sokol",
SpecType::Yolo3 => "yolo3",
SpecType::Dev => "dev",
SpecType::Custom(ref custom) => custom,
})
}
}
impl SpecType {
pub fn spec<'a, T: Into<SpecParams<'a>>>(&self, params: T) -> Result<Spec, String> {
let params = params.into();
match *self {
SpecType::Foundation => Ok(ethereum::new_foundation(params)),
SpecType::Poanet => Ok(ethereum::new_poanet(params)),
SpecType::Xdai => Ok(ethereum::new_xdai(params)),
SpecType::Volta => Ok(ethereum::new_volta(params)),
SpecType::Ewc => Ok(ethereum::new_ewc(params)),
SpecType::Musicoin => Ok(ethereum::new_musicoin(params)),
SpecType::Ellaism => Ok(ethereum::new_ellaism(params)),
SpecType::Mix => Ok(ethereum::new_mix(params)),
SpecType::Callisto => Ok(ethereum::new_callisto(params)),
SpecType::Morden => Ok(ethereum::new_morden(params)),
SpecType::Ropsten => Ok(ethereum::new_ropsten(params)),
SpecType::Kovan => Ok(ethereum::new_kovan(params)),
SpecType::Rinkeby => Ok(ethereum::new_rinkeby(params)),
SpecType::Goerli => Ok(ethereum::new_goerli(params)),
SpecType::Sokol => Ok(ethereum::new_sokol(params)),
SpecType::Yolo3 => Ok(ethereum::new_yolo3(params)),
SpecType::Dev => Ok(Spec::new_instant()),
SpecType::Custom(ref filename) => {
let file = fs::File::open(filename).map_err(|e| {
format!("Could not load specification file at {}: {}", filename, e)
})?;
Spec::load(params, file)
}
}
}
pub fn legacy_fork_name(&self) -> Option<String> {
match *self {
SpecType::Musicoin => Some("musicoin".to_owned()),
_ => None,
}
}
}
#[derive(Debug, PartialEq)]
pub enum Pruning {
Specific(Algorithm),
Auto,
}
impl Default for Pruning {
fn default() -> Self {
Pruning::Auto
}
}
impl str::FromStr for Pruning {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"auto" => Ok(Pruning::Auto),
other => other.parse().map(Pruning::Specific),
}
}
}
impl Pruning {
pub fn to_algorithm(&self, user_defaults: &UserDefaults) -> Algorithm {
match *self {
Pruning::Specific(algo) => algo,
Pruning::Auto => user_defaults.pruning,
}
}
}
#[derive(Debug, PartialEq)]
pub struct ResealPolicy {
pub own: bool,
pub external: bool,
}
impl Default for ResealPolicy {
fn default() -> Self {
ResealPolicy {
own: true,
external: true,
}
}
}
impl str::FromStr for ResealPolicy {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let (own, external) = match s {
"none" => (false, false),
"own" => (true, false),
"ext" => (false, true),
"all" => (true, true),
x => return Err(format!("Invalid reseal value: {}", x)),
};
let reseal = ResealPolicy { own, external };
Ok(reseal)
}
}
#[derive(Debug, PartialEq)]
pub struct AccountsConfig {
pub iterations: NonZeroU32,
pub refresh_time: u64,
pub testnet: bool,
pub password_files: Vec<String>,
pub unlocked_accounts: Vec<Address>,
pub enable_fast_unlock: bool,
}
impl Default for AccountsConfig {
fn default() -> Self {
AccountsConfig {
iterations: NonZeroU32::new(10240).expect("10240 > 0; qed"),
refresh_time: 5,
testnet: false,
password_files: Vec::new(),
unlocked_accounts: Vec::new(),
enable_fast_unlock: false,
}
}
}
#[derive(Debug, PartialEq)]
pub enum GasPricerConfig {
Fixed(U256),
Calibrated {
usd_per_tx: f32,
recalibration_period: Duration,
api_endpoint: String,
},
}
impl Default for GasPricerConfig {
fn default() -> Self {
GasPricerConfig::Calibrated {
usd_per_tx: 0.0001f32,
recalibration_period: Duration::from_secs(3600),
api_endpoint: configuration::ETHERSCAN_ETH_PRICE_ENDPOINT.to_string(),
}
}
}
impl GasPricerConfig {
pub fn to_gas_pricer(&self, fetch: FetchClient, p: Executor) -> GasPricer {
match *self {
GasPricerConfig::Fixed(u) => GasPricer::Fixed(u),
GasPricerConfig::Calibrated {
usd_per_tx,
recalibration_period,
ref api_endpoint,
} => GasPricer::new_calibrated(GasPriceCalibrator::new(
GasPriceCalibratorOptions {
usd_per_tx,
recalibration_period,
},
fetch,
p,
api_endpoint.clone(),
)),
}
}
}
#[derive(Debug, PartialEq)]
pub struct MinerExtras {
pub author: Address,
pub engine_signer: Address,
pub extra_data: Vec<u8>,
pub gas_range_target: (U256, U256),
pub work_notify: Vec<String>,
pub local_accounts: HashSet<Address>,
}
impl Default for MinerExtras {
fn default() -> Self {
MinerExtras {
author: Default::default(),
engine_signer: Default::default(),
extra_data: version_data(),
gas_range_target: (8_000_000.into(), 10_000_000.into()),
work_notify: Default::default(),
local_accounts: Default::default(),
}
}
}
/// 3-value enum.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Switch {
/// True.
On,
/// False.
Off,
/// Auto.
Auto,
}
impl Default for Switch {
fn default() -> Self {
Switch::Auto
}
}
impl str::FromStr for Switch {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"on" => Ok(Switch::On),
"off" => Ok(Switch::Off),
"auto" => Ok(Switch::Auto),
other => Err(format!("Invalid switch value: {}", other)),
}
}
}
pub fn tracing_switch_to_bool(
switch: Switch,
user_defaults: &UserDefaults,
) -> Result<bool, String> {
match (user_defaults.is_first_launch, switch, user_defaults.tracing) {
(false, Switch::On, false) => Err("TraceDB resync required".into()),
(_, Switch::On, _) => Ok(true),
(_, Switch::Off, _) => Ok(false),
(_, Switch::Auto, def) => Ok(def),
}
}
pub fn fatdb_switch_to_bool(
switch: Switch,
user_defaults: &UserDefaults,
_algorithm: Algorithm,
) -> Result<bool, String> {
let result = match (user_defaults.is_first_launch, switch, user_defaults.fat_db) {
(false, Switch::On, false) => Err("FatDB resync required".into()),
(_, Switch::On, _) => Ok(true),
(_, Switch::Off, _) => Ok(false),
(_, Switch::Auto, def) => Ok(def),
};
result
}
pub fn mode_switch_to_bool(
switch: Option<Mode>,
user_defaults: &UserDefaults,
) -> Result<Mode, String> {
Ok(switch.unwrap_or(user_defaults.mode().clone()))
}
#[cfg(test)]
mod tests {
use super::{tracing_switch_to_bool, Pruning, ResealPolicy, SpecType, Switch};
use crate::user_defaults::UserDefaults;
use journaldb::Algorithm;
#[test]
fn test_spec_type_parsing() {
assert_eq!(SpecType::Foundation, "eth".parse().unwrap());
assert_eq!(SpecType::Foundation, "ethereum".parse().unwrap());
assert_eq!(SpecType::Foundation, "foundation".parse().unwrap());
assert_eq!(SpecType::Foundation, "mainnet".parse().unwrap());
assert_eq!(SpecType::Poanet, "poanet".parse().unwrap());
assert_eq!(SpecType::Poanet, "poacore".parse().unwrap());
assert_eq!(SpecType::Xdai, "xdai".parse().unwrap());
assert_eq!(SpecType::Volta, "volta".parse().unwrap());
assert_eq!(SpecType::Ewc, "ewc".parse().unwrap());
assert_eq!(SpecType::Ewc, "energyweb".parse().unwrap());
assert_eq!(SpecType::Musicoin, "musicoin".parse().unwrap());
assert_eq!(SpecType::Ellaism, "ellaism".parse().unwrap());
assert_eq!(SpecType::Mix, "mix".parse().unwrap());
assert_eq!(SpecType::Callisto, "callisto".parse().unwrap());
assert_eq!(SpecType::Morden, "morden".parse().unwrap());
assert_eq!(SpecType::Ropsten, "ropsten".parse().unwrap());
assert_eq!(SpecType::Kovan, "kovan".parse().unwrap());
assert_eq!(SpecType::Rinkeby, "rinkeby".parse().unwrap());
assert_eq!(SpecType::Goerli, "goerli".parse().unwrap());
assert_eq!(SpecType::Goerli, "görli".parse().unwrap());
assert_eq!(SpecType::Goerli, "testnet".parse().unwrap());
assert_eq!(SpecType::Sokol, "sokol".parse().unwrap());
assert_eq!(SpecType::Sokol, "poasokol".parse().unwrap());
}
#[test]
fn test_spec_type_default() {
assert_eq!(SpecType::Foundation, SpecType::default());
}
#[test]
fn test_spec_type_display() {
assert_eq!(format!("{}", SpecType::Foundation), "foundation");
assert_eq!(format!("{}", SpecType::Poanet), "poanet");
assert_eq!(format!("{}", SpecType::Xdai), "xdai");
assert_eq!(format!("{}", SpecType::Volta), "volta");
assert_eq!(format!("{}", SpecType::Ewc), "energyweb");
assert_eq!(format!("{}", SpecType::Musicoin), "musicoin");
assert_eq!(format!("{}", SpecType::Ellaism), "ellaism");
assert_eq!(format!("{}", SpecType::Mix), "mix");
assert_eq!(format!("{}", SpecType::Callisto), "callisto");
assert_eq!(format!("{}", SpecType::Morden), "morden");
assert_eq!(format!("{}", SpecType::Ropsten), "ropsten");
assert_eq!(format!("{}", SpecType::Kovan), "kovan");
assert_eq!(format!("{}", SpecType::Rinkeby), "rinkeby");
assert_eq!(format!("{}", SpecType::Goerli), "goerli");
assert_eq!(format!("{}", SpecType::Sokol), "sokol");
assert_eq!(format!("{}", SpecType::Dev), "dev");
assert_eq!(format!("{}", SpecType::Custom("foo/bar".into())), "foo/bar");
}
#[test]
fn test_pruning_parsing() {
assert_eq!(Pruning::Auto, "auto".parse().unwrap());
assert_eq!(
Pruning::Specific(Algorithm::Archive),
"archive".parse().unwrap()
);
assert_eq!(
Pruning::Specific(Algorithm::EarlyMerge),
"light".parse().unwrap()
);
assert_eq!(
Pruning::Specific(Algorithm::OverlayRecent),
"fast".parse().unwrap()
);
assert_eq!(
Pruning::Specific(Algorithm::RefCounted),
"basic".parse().unwrap()
);
}
#[test]
fn test_pruning_default() {
assert_eq!(Pruning::Auto, Pruning::default());
}
#[test]
fn test_reseal_policy_parsing() {
let none = ResealPolicy {
own: false,
external: false,
};
let own = ResealPolicy {
own: true,
external: false,
};
let ext = ResealPolicy {
own: false,
external: true,
};
let all = ResealPolicy {
own: true,
external: true,
};
assert_eq!(none, "none".parse().unwrap());
assert_eq!(own, "own".parse().unwrap());
assert_eq!(ext, "ext".parse().unwrap());
assert_eq!(all, "all".parse().unwrap());
}
#[test]
fn test_reseal_policy_default() {
let all = ResealPolicy {
own: true,
external: true,
};
assert_eq!(all, ResealPolicy::default());
}
#[test]
fn test_switch_parsing() {
assert_eq!(Switch::On, "on".parse().unwrap());
assert_eq!(Switch::Off, "off".parse().unwrap());
assert_eq!(Switch::Auto, "auto".parse().unwrap());
}
#[test]
fn test_switch_default() {
assert_eq!(Switch::default(), Switch::Auto);
}
fn user_defaults_with_tracing(first_launch: bool, tracing: bool) -> UserDefaults {
let mut ud = UserDefaults::default();
ud.is_first_launch = first_launch;
ud.tracing = tracing;
ud
}
#[test]
fn test_switch_to_bool() {
assert!(
!tracing_switch_to_bool(Switch::Off, &user_defaults_with_tracing(true, true)).unwrap()
);
assert!(
!tracing_switch_to_bool(Switch::Off, &user_defaults_with_tracing(true, false)).unwrap()
);
assert!(
!tracing_switch_to_bool(Switch::Off, &user_defaults_with_tracing(false, true)).unwrap()
);
assert!(
!tracing_switch_to_bool(Switch::Off, &user_defaults_with_tracing(false, false))
.unwrap()
);
assert!(
tracing_switch_to_bool(Switch::On, &user_defaults_with_tracing(true, true)).unwrap()
);
assert!(
tracing_switch_to_bool(Switch::On, &user_defaults_with_tracing(true, false)).unwrap()
);
assert!(
tracing_switch_to_bool(Switch::On, &user_defaults_with_tracing(false, true)).unwrap()
);
assert!(
tracing_switch_to_bool(Switch::On, &user_defaults_with_tracing(false, false)).is_err()
);
}
}

View File

@ -1,65 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crate::{
helpers::{password_from_file, password_prompt},
params::SpecType,
};
use crypto::publickey;
use ethkey::Password;
use ethstore::PresaleWallet;
use std::num::NonZeroU32;
#[derive(Debug, PartialEq)]
pub struct ImportWallet {
pub iterations: NonZeroU32,
pub path: String,
pub spec: SpecType,
pub wallet_path: String,
pub password_file: Option<String>,
}
pub fn execute(cmd: ImportWallet) -> Result<String, String> {
let password = match cmd.password_file.clone() {
Some(file) => password_from_file(file)?,
None => password_prompt()?,
};
let wallet = PresaleWallet::open(cmd.wallet_path.clone())
.map_err(|_| "Unable to open presale wallet.")?;
let kp = wallet.decrypt(&password).map_err(|_| "Invalid password.")?;
let address = kp.address();
import_account(&cmd, kp, password);
Ok(format!("{:?}", address))
}
#[cfg(feature = "accounts")]
pub fn import_account(cmd: &ImportWallet, kp: publickey::KeyPair, password: Password) {
use accounts::{AccountProvider, AccountProviderSettings};
use ethstore::{accounts_dir::RootDiskDirectory, EthStore};
let dir = Box::new(RootDiskDirectory::create(cmd.path.clone()).unwrap());
let secret_store = Box::new(EthStore::open_with_iterations(dir, cmd.iterations).unwrap());
let acc_provider = AccountProvider::new(secret_store, AccountProviderSettings::default());
acc_provider
.insert_account(kp.secret().clone(), &password)
.unwrap();
}
#[cfg(not(feature = "accounts"))]
pub fn import_account(_cmd: &ImportWallet, _kp: publickey::KeyPair, _password: Password) {}

View File

@ -1,364 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{collections::HashSet, io, path::PathBuf, sync::Arc};
use crate::{
helpers::parity_ipc_path,
rpc_apis::{self, ApiSet},
};
use dir::{default_data_path, helpers::replace_home};
use jsonrpc_core::MetaIoHandler;
use parity_rpc::{
self as rpc,
informant::{Middleware, RpcStats},
DomainsValidation, Metadata,
};
use parity_runtime::Executor;
pub use parity_rpc::{HttpServer, IpcServer, RequestMiddleware};
//pub use parity_rpc::ws::Server as WsServer;
pub use parity_rpc::ws::{ws, Server as WsServer};
pub const DAPPS_DOMAIN: &str = "web3.site";
#[derive(Debug, Clone, PartialEq)]
pub struct HttpConfiguration {
pub enabled: bool,
pub interface: String,
pub port: u16,
pub apis: ApiSet,
pub cors: Option<Vec<String>>,
pub hosts: Option<Vec<String>>,
pub server_threads: usize,
pub processing_threads: usize,
pub max_payload: usize,
pub keep_alive: bool,
}
impl Default for HttpConfiguration {
fn default() -> Self {
HttpConfiguration {
enabled: true,
interface: "127.0.0.1".into(),
port: 8545,
apis: ApiSet::UnsafeContext,
cors: Some(vec![]),
hosts: Some(vec![]),
server_threads: 1,
processing_threads: 4,
max_payload: 5,
keep_alive: true,
}
}
}
#[derive(Debug, PartialEq)]
pub struct IpcConfiguration {
pub enabled: bool,
pub socket_addr: String,
pub apis: ApiSet,
}
impl Default for IpcConfiguration {
fn default() -> Self {
IpcConfiguration {
enabled: true,
socket_addr: if cfg!(windows) {
r"\\.\pipe\jsonrpc.ipc".into()
} else {
let data_dir = ::dir::default_data_path();
parity_ipc_path(&data_dir, "$BASE/jsonrpc.ipc", 0)
},
apis: ApiSet::IpcContext,
}
}
}
#[derive(Debug, Clone, PartialEq)]
pub struct WsConfiguration {
pub enabled: bool,
pub interface: String,
pub port: u16,
pub apis: ApiSet,
pub max_connections: usize,
pub origins: Option<Vec<String>>,
pub hosts: Option<Vec<String>>,
pub signer_path: PathBuf,
pub support_token_api: bool,
pub max_payload: usize,
}
impl Default for WsConfiguration {
fn default() -> Self {
let data_dir = default_data_path();
WsConfiguration {
enabled: true,
interface: "127.0.0.1".into(),
port: 8546,
apis: ApiSet::UnsafeContext,
max_connections: 100,
origins: Some(vec![
"parity://*".into(),
"chrome-extension://*".into(),
"moz-extension://*".into(),
]),
hosts: Some(Vec::new()),
signer_path: replace_home(&data_dir, "$BASE/signer").into(),
support_token_api: true,
max_payload: 5,
}
}
}
impl WsConfiguration {
pub fn address(&self) -> Option<rpc::Host> {
address(self.enabled, &self.interface, self.port, &self.hosts)
}
}
fn address(
enabled: bool,
bind_iface: &str,
bind_port: u16,
hosts: &Option<Vec<String>>,
) -> Option<rpc::Host> {
if !enabled {
return None;
}
match *hosts {
Some(ref hosts) if !hosts.is_empty() => Some(hosts[0].clone().into()),
_ => Some(format!("{}:{}", bind_iface, bind_port).into()),
}
}
pub struct Dependencies<D: rpc_apis::Dependencies> {
pub apis: Arc<D>,
pub executor: Executor,
pub stats: Arc<RpcStats>,
}
pub fn new_ws<D: rpc_apis::Dependencies>(
conf: WsConfiguration,
deps: &Dependencies<D>,
) -> Result<Option<WsServer>, String> {
if !conf.enabled {
return Ok(None);
}
let domain = DAPPS_DOMAIN;
let url = format!("{}:{}", conf.interface, conf.port);
let addr = url
.parse()
.map_err(|_| format!("Invalid WebSockets listen host/port given: {}", url))?;
let full_handler = setup_apis(rpc_apis::ApiSet::All, deps);
let handler = {
let mut handler = MetaIoHandler::with_middleware((
rpc::WsDispatcher::new(full_handler),
Middleware::new(deps.stats.clone(), deps.apis.activity_notifier()),
));
let apis = conf.apis.list_apis();
deps.apis.extend_with_set(&mut handler, &apis);
handler
};
let allowed_origins = into_domains(with_domain(conf.origins, domain, &None));
let allowed_hosts = into_domains(with_domain(conf.hosts, domain, &Some(url.clone().into())));
let signer_path;
let path = match conf.support_token_api {
true => {
signer_path = crate::signer::codes_path(&conf.signer_path);
Some(signer_path.as_path())
}
false => None,
};
let start_result = rpc::start_ws(
&addr,
handler,
allowed_origins,
allowed_hosts,
conf.max_connections,
rpc::WsExtractor::new(path.clone()),
rpc::WsExtractor::new(path.clone()),
rpc::WsStats::new(deps.stats.clone()),
conf.max_payload,
);
// match start_result {
// Ok(server) => Ok(Some(server)),
// Err(rpc::ws::Error::Io(rpc::ws::ErrorKind::Io(ref err), _)) if err.kind() == io::ErrorKind::AddrInUse => Err(
// format!("WebSockets address {} is already in use, make sure that another instance of an Ethereum client is not running or change the address using the --ws-port and --ws-interface options.", url)
// ),
// Err(e) => Err(format!("WebSockets error: {:?}", e)),
// }
match start_result {
Ok(server) => Ok(Some(server)),
Err(rpc::ws::Error::WsError(ws::Error {
kind: ws::ErrorKind::Io(ref err), ..
})) if err.kind() == io::ErrorKind::AddrInUse => Err(
format!("WebSockets address {} is already in use, make sure that another instance of an Ethereum client is not running or change the address using the --ws-port and --ws-interface options.", url)
),
Err(e) => Err(format!("WebSockets error: {:?}", e)),
}
}
pub fn new_http<D: rpc_apis::Dependencies>(
id: &str,
options: &str,
conf: HttpConfiguration,
deps: &Dependencies<D>,
) -> Result<Option<HttpServer>, String> {
if !conf.enabled {
return Ok(None);
}
let domain = DAPPS_DOMAIN;
let url = format!("{}:{}", conf.interface, conf.port);
let addr = url
.parse()
.map_err(|_| format!("Invalid {} listen host/port given: {}", id, url))?;
let handler = setup_apis(conf.apis, deps);
let cors_domains = into_domains(conf.cors);
let allowed_hosts = into_domains(with_domain(conf.hosts, domain, &Some(url.clone().into())));
let start_result = rpc::start_http(
&addr,
cors_domains,
allowed_hosts,
handler,
rpc::RpcExtractor,
conf.server_threads,
conf.max_payload,
conf.keep_alive,
);
match start_result {
Ok(server) => Ok(Some(server)),
Err(ref err) if err.kind() == io::ErrorKind::AddrInUse => Err(
format!("{} address {} is already in use, make sure that another instance of an Ethereum client is not running or change the address using the --{}-port and --{}-interface options.", id, url, options, options)
),
Err(e) => Err(format!("{} error: {:?}", id, e)),
}
}
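// --------------------------------------------------------------------------
// Illustrative sketch (not part of the original file): the AddrInUse check
// above is the standard "port already taken" pattern. This standalone version
// uses only std and provokes the error by binding the same address twice; on
// platforms that permit double binds (e.g. with SO_REUSEPORT) the second bind
// may succeed instead.
fn addr_in_use_sketch() -> std::io::Result<()> {
    use std::{io, net::TcpListener};
    // Bind an ephemeral port, then try to bind the very same address again.
    let first = TcpListener::bind("127.0.0.1:0")?;
    let addr = first.local_addr()?;
    match TcpListener::bind(addr) {
        Ok(_) => println!("unexpectedly bound {} twice", addr),
        Err(ref err) if err.kind() == io::ErrorKind::AddrInUse => {
            println!("{} is already in use, pick another port", addr)
        }
        Err(err) => return Err(err),
    }
    Ok(())
}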
pub fn new_ipc<D: rpc_apis::Dependencies>(
conf: IpcConfiguration,
dependencies: &Dependencies<D>,
) -> Result<Option<IpcServer>, String> {
if !conf.enabled {
return Ok(None);
}
let handler = setup_apis(conf.apis, dependencies);
let path = PathBuf::from(&conf.socket_addr);
// Make sure socket file can be created on unix-like OS.
// Windows pipe paths are not on the FS.
if !cfg!(windows) {
if let Some(dir) = path.parent() {
::std::fs::create_dir_all(&dir).map_err(|err| {
format!(
"Unable to create IPC directory at {}: {}",
dir.display(),
err
)
})?;
}
}
match rpc::start_ipc(&conf.socket_addr, handler, rpc::RpcExtractor) {
Ok(server) => Ok(Some(server)),
Err(io_error) => Err(format!("IPC error: {}", io_error)),
}
}
fn into_domains<T: From<String>>(items: Option<Vec<String>>) -> DomainsValidation<T> {
items
.map(|vals| vals.into_iter().map(T::from).collect())
.into()
}
fn with_domain(
items: Option<Vec<String>>,
domain: &str,
dapps_address: &Option<rpc::Host>,
) -> Option<Vec<String>> {
fn extract_port(s: &str) -> Option<u16> {
s.split(':').nth(1).and_then(|s| s.parse().ok())
}
items.map(move |items| {
let mut items = items.into_iter().collect::<HashSet<_>>();
{
let mut add_hosts = |address: &Option<rpc::Host>| {
if let Some(host) = address.clone() {
items.insert(host.to_string());
items.insert(host.replace("127.0.0.1", "localhost"));
items.insert(format!("http://*.{}", domain)); //proxypac
if let Some(port) = extract_port(&*host) {
items.insert(format!("http://*.{}:{}", domain, port));
}
}
};
add_hosts(dapps_address);
}
items.into_iter().collect()
})
}
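// --------------------------------------------------------------------------
// Illustrative sketch (not part of the original file): the `extract_port`
// helper above in isolation, with a few sample inputs showing the expected
// behaviour of the naive "host:port" split.
fn extract_port_sketch() {
    fn extract_port(s: &str) -> Option<u16> {
        s.split(':').nth(1).and_then(|s| s.parse().ok())
    }
    assert_eq!(extract_port("127.0.0.1:8545"), Some(8545));
    assert_eq!(extract_port("localhost"), None);
    assert_eq!(extract_port("host:not-a-port"), None);
}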
pub fn setup_apis<D>(
apis: ApiSet,
deps: &Dependencies<D>,
) -> MetaIoHandler<Metadata, Middleware<D::Notifier>>
where
D: rpc_apis::Dependencies,
{
let mut handler = MetaIoHandler::with_middleware(Middleware::new(
deps.stats.clone(),
deps.apis.activity_notifier(),
));
let apis = apis.list_apis();
deps.apis.extend_with_set(&mut handler, &apis);
handler
}
#[cfg(test)]
mod tests {
use super::address;
#[test]
fn should_return_proper_address() {
assert_eq!(address(false, "localhost", 8180, &None), None);
assert_eq!(
address(true, "localhost", 8180, &None),
Some("localhost:8180".into())
);
assert_eq!(
address(true, "localhost", 8180, &Some(vec!["host:443".into()])),
Some("host:443".into())
);
assert_eq!(
address(true, "localhost", 8180, &Some(vec!["host".into()])),
Some("host".into())
);
}
}

View File

@ -1,646 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{
cmp::PartialEq,
collections::{BTreeMap, HashSet},
str::FromStr,
sync::Arc,
};
pub use parity_rpc::signer::SignerService;
use crate::{
account_utils::{self, AccountProvider},
miner::external::ExternalMiner,
sync::{ManageNetwork, SyncProvider},
};
use ethcore::{client::Client, miner::Miner, snapshot::SnapshotService};
use ethcore_logger::RotatingLogger;
use fetch::Client as FetchClient;
use jsonrpc_core::{self as core, MetaIoHandler};
use parity_rpc::{
dispatch::FullDispatcher,
informant::{ActivityNotifier, ClientNotifier},
Host, Metadata, NetworkSettings,
};
use parity_runtime::Executor;
use parking_lot::Mutex;
#[derive(Debug, PartialEq, Clone, Eq, Hash)]
pub enum Api {
/// Web3 (Safe)
Web3,
/// Net (Safe)
Net,
/// Eth (Safe)
Eth,
/// Eth Pub-Sub (Safe)
EthPubSub,
/// Geth-compatible "personal" API (DEPRECATED; only used in `--geth` mode.)
Personal,
/// Signer - Confirm transactions in Signer (UNSAFE: Passwords, List of transactions)
Signer,
/// Parity - Custom extensions (Safe)
Parity,
/// Traces (Safe)
Traces,
/// Rpc (Safe)
Rpc,
/// Parity PubSub - Generic Publish-Subscriber (Safety depends on other APIs exposed).
ParityPubSub,
/// Parity Accounts extensions (UNSAFE: Passwords, Side Effects (new account))
ParityAccounts,
/// Parity - Set methods (UNSAFE: Side Effects affecting node operation)
ParitySet,
/// SecretStore (UNSAFE: arbitrary hash signing)
SecretStore,
/// Geth-compatible (best-effort) debug API (Potentially UNSAFE)
/// NOTE We don't aim to support all methods, only the ones that are useful.
Debug,
}
impl FromStr for Api {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
use self::Api::*;
match s {
"debug" => Ok(Debug),
"eth" => Ok(Eth),
"net" => Ok(Net),
"parity" => Ok(Parity),
"parity_accounts" => Ok(ParityAccounts),
"parity_pubsub" => Ok(ParityPubSub),
"parity_set" => Ok(ParitySet),
"personal" => Ok(Personal),
"pubsub" => Ok(EthPubSub),
"rpc" => Ok(Rpc),
"secretstore" => Ok(SecretStore),
"signer" => Ok(Signer),
"traces" => Ok(Traces),
"web3" => Ok(Web3),
api => Err(format!("Unknown api: {}", api)),
}
}
}
#[derive(Debug, Clone)]
pub enum ApiSet {
// Unsafe context (like jsonrpc over http)
UnsafeContext,
// All possible APIs (safe context like token-protected WS interface)
All,
// Local "unsafe" context and accounts access
IpcContext,
// APIs for Parity Generic Pub-Sub
PubSub,
// Fixed list of APIs
List(HashSet<Api>),
}
impl Default for ApiSet {
fn default() -> Self {
ApiSet::UnsafeContext
}
}
impl PartialEq for ApiSet {
fn eq(&self, other: &Self) -> bool {
self.list_apis() == other.list_apis()
}
}
impl FromStr for ApiSet {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut apis = HashSet::new();
for api in s.split(',') {
match api {
"all" => {
apis.extend(ApiSet::All.list_apis());
}
"safe" => {
// Safe APIs are those that are safe even in UnsafeContext.
apis.extend(ApiSet::UnsafeContext.list_apis());
}
// Remove the API
api if api.starts_with("-") => {
let api = api[1..].parse()?;
apis.remove(&api);
}
api => {
let api = api.parse()?;
apis.insert(api);
}
}
}
Ok(ApiSet::List(apis))
}
}
fn to_modules(apis: &HashSet<Api>) -> BTreeMap<String, String> {
let mut modules = BTreeMap::new();
for api in apis {
let (name, version) = match *api {
Api::Debug => ("debug", "1.0"),
Api::Eth => ("eth", "1.0"),
Api::EthPubSub => ("pubsub", "1.0"),
Api::Net => ("net", "1.0"),
Api::Parity => ("parity", "1.0"),
Api::ParityAccounts => ("parity_accounts", "1.0"),
Api::ParityPubSub => ("parity_pubsub", "1.0"),
Api::ParitySet => ("parity_set", "1.0"),
Api::Personal => ("personal", "1.0"),
Api::Rpc => ("rpc", "1.0"),
Api::SecretStore => ("secretstore", "1.0"),
Api::Signer => ("signer", "1.0"),
Api::Traces => ("traces", "1.0"),
Api::Web3 => ("web3", "1.0"),
};
modules.insert(name.into(), version.into());
}
modules
}
macro_rules! add_signing_methods {
($namespace:ident, $handler:expr, $deps:expr, $dispatch:expr) => {{
let deps = &$deps;
let (dispatcher, accounts) = $dispatch;
if deps.signer_service.is_enabled() {
$handler.extend_with($namespace::to_delegate(SigningQueueClient::new(
&deps.signer_service,
dispatcher.clone(),
deps.executor.clone(),
accounts,
)))
} else {
$handler.extend_with($namespace::to_delegate(SigningUnsafeClient::new(
accounts,
dispatcher.clone(),
)))
}
}};
}
/// RPC dependencies can be used to initialize RPC endpoints from APIs.
pub trait Dependencies {
type Notifier: ActivityNotifier;
/// Create the activity notifier.
fn activity_notifier(&self) -> Self::Notifier;
/// Extend the given I/O handler with endpoints for each API.
fn extend_with_set<S>(&self, handler: &mut MetaIoHandler<Metadata, S>, apis: &HashSet<Api>)
where
S: core::Middleware<Metadata>;
}
/// RPC dependencies for a full node.
pub struct FullDependencies {
pub signer_service: Arc<SignerService>,
pub client: Arc<Client>,
pub snapshot: Arc<dyn SnapshotService>,
pub sync: Arc<dyn SyncProvider>,
pub net: Arc<dyn ManageNetwork>,
pub accounts: Arc<AccountProvider>,
pub miner: Arc<Miner>,
pub external_miner: Arc<ExternalMiner>,
pub logger: Arc<RotatingLogger>,
pub settings: Arc<NetworkSettings>,
pub net_service: Arc<dyn ManageNetwork>,
pub experimental_rpcs: bool,
pub ws_address: Option<Host>,
pub fetch: FetchClient,
pub executor: Executor,
pub gas_price_percentile: usize,
pub poll_lifetime: u32,
pub allow_missing_blocks: bool,
pub no_ancient_blocks: bool,
}
impl FullDependencies {
fn extend_api<S>(
&self,
handler: &mut MetaIoHandler<Metadata, S>,
apis: &HashSet<Api>,
for_generic_pubsub: bool,
) where
S: core::Middleware<Metadata>,
{
use parity_rpc::v1::*;
let nonces = Arc::new(Mutex::new(dispatch::Reservations::new(
self.executor.clone(),
)));
let dispatcher = FullDispatcher::new(
self.client.clone(),
self.miner.clone(),
nonces.clone(),
self.gas_price_percentile,
);
let account_signer = Arc::new(dispatch::Signer::new(self.accounts.clone())) as _;
let accounts = account_utils::accounts_list(self.accounts.clone());
for api in apis {
match *api {
Api::Debug => {
handler.extend_with(DebugClient::new(self.client.clone()).to_delegate());
}
Api::Web3 => {
handler.extend_with(Web3Client::default().to_delegate());
}
Api::Net => {
handler.extend_with(NetClient::new(&self.sync).to_delegate());
}
Api::Eth => {
let client = EthClient::new(
&self.client,
&self.snapshot,
&self.sync,
&accounts,
&self.miner,
&self.external_miner,
EthClientOptions {
gas_price_percentile: self.gas_price_percentile,
allow_missing_blocks: self.allow_missing_blocks,
allow_experimental_rpcs: self.experimental_rpcs,
no_ancient_blocks: self.no_ancient_blocks,
},
);
handler.extend_with(client.to_delegate());
if !for_generic_pubsub {
let filter_client = EthFilterClient::new(
self.client.clone(),
self.miner.clone(),
self.poll_lifetime,
);
handler.extend_with(filter_client.to_delegate());
add_signing_methods!(
EthSigning,
handler,
self,
(&dispatcher, &account_signer)
);
}
}
Api::EthPubSub => {
if !for_generic_pubsub {
let client =
EthPubSubClient::new(self.client.clone(), self.executor.clone());
let h = client.handler();
self.miner
.add_transactions_listener(Box::new(move |hashes| {
if let Some(h) = h.upgrade() {
h.notify_new_transactions(hashes);
}
}));
if let Some(h) = client.handler().upgrade() {
self.client.add_notify(h);
}
handler.extend_with(client.to_delegate());
}
}
Api::Personal => {
#[cfg(feature = "accounts")]
handler.extend_with(
PersonalClient::new(
&self.accounts,
dispatcher.clone(),
self.experimental_rpcs,
)
.to_delegate(),
);
}
Api::Signer => {
handler.extend_with(
SignerClient::new(
account_signer.clone(),
dispatcher.clone(),
&self.signer_service,
self.executor.clone(),
)
.to_delegate(),
);
}
Api::Parity => {
let signer = match self.signer_service.is_enabled() {
true => Some(self.signer_service.clone()),
false => None,
};
handler.extend_with(
ParityClient::new(
self.client.clone(),
self.miner.clone(),
self.sync.clone(),
self.net_service.clone(),
self.logger.clone(),
self.settings.clone(),
signer,
self.ws_address.clone(),
self.snapshot.clone().into(),
)
.to_delegate(),
);
#[cfg(feature = "accounts")]
handler.extend_with(ParityAccountsInfo::to_delegate(
ParityAccountsClient::new(&self.accounts),
));
if !for_generic_pubsub {
add_signing_methods!(
ParitySigning,
handler,
self,
(&dispatcher, &account_signer)
);
}
}
Api::ParityPubSub => {
if !for_generic_pubsub {
let mut rpc = MetaIoHandler::default();
let apis = ApiSet::List(apis.clone())
.retain(ApiSet::PubSub)
.list_apis();
self.extend_api(&mut rpc, &apis, true);
handler.extend_with(
PubSubClient::new(rpc, self.executor.clone()).to_delegate(),
);
}
}
Api::ParityAccounts => {
#[cfg(feature = "accounts")]
handler.extend_with(ParityAccounts::to_delegate(ParityAccountsClient::new(
&self.accounts,
)));
}
Api::ParitySet => {
handler.extend_with(
ParitySetClient::new(
&self.client,
&self.miner,
&self.net_service,
self.fetch.clone(),
)
.to_delegate(),
);
#[cfg(feature = "accounts")]
handler.extend_with(
ParitySetAccountsClient::new(&self.accounts, &self.miner).to_delegate(),
);
}
Api::Traces => handler.extend_with(TracesClient::new(&self.client).to_delegate()),
Api::Rpc => {
let modules = to_modules(&apis);
handler.extend_with(RpcClient::new(modules).to_delegate());
}
Api::SecretStore => {
#[cfg(feature = "accounts")]
handler.extend_with(SecretStoreClient::new(&self.accounts).to_delegate());
}
}
}
}
}
impl Dependencies for FullDependencies {
type Notifier = ClientNotifier;
fn activity_notifier(&self) -> ClientNotifier {
ClientNotifier {
client: self.client.clone(),
}
}
fn extend_with_set<S>(&self, handler: &mut MetaIoHandler<Metadata, S>, apis: &HashSet<Api>)
where
S: core::Middleware<Metadata>,
{
self.extend_api(handler, apis, false)
}
}
impl ApiSet {
/// Retains only APIs in given set.
pub fn retain(self, set: Self) -> Self {
ApiSet::List(&self.list_apis() & &set.list_apis())
}
pub fn list_apis(&self) -> HashSet<Api> {
let mut public_list: HashSet<Api> = [
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::Rpc,
]
.iter()
.cloned()
.collect();
match *self {
ApiSet::List(ref apis) => apis.clone(),
ApiSet::UnsafeContext => {
public_list.insert(Api::Traces);
public_list.insert(Api::ParityPubSub);
public_list
}
ApiSet::IpcContext => {
public_list.insert(Api::Traces);
public_list.insert(Api::ParityPubSub);
public_list.insert(Api::ParityAccounts);
public_list
}
ApiSet::All => {
public_list.insert(Api::Debug);
public_list.insert(Api::Traces);
public_list.insert(Api::ParityPubSub);
public_list.insert(Api::ParityAccounts);
public_list.insert(Api::ParitySet);
public_list.insert(Api::Signer);
public_list.insert(Api::Personal);
public_list.insert(Api::SecretStore);
public_list
}
ApiSet::PubSub => [
Api::Eth,
Api::Parity,
Api::ParityAccounts,
Api::ParitySet,
Api::Traces,
]
.iter()
.cloned()
.collect(),
}
}
}
#[cfg(test)]
mod test {
use super::{Api, ApiSet};
#[test]
fn test_api_parsing() {
assert_eq!(Api::Debug, "debug".parse().unwrap());
assert_eq!(Api::Web3, "web3".parse().unwrap());
assert_eq!(Api::Net, "net".parse().unwrap());
assert_eq!(Api::Eth, "eth".parse().unwrap());
assert_eq!(Api::EthPubSub, "pubsub".parse().unwrap());
assert_eq!(Api::Personal, "personal".parse().unwrap());
assert_eq!(Api::Signer, "signer".parse().unwrap());
assert_eq!(Api::Parity, "parity".parse().unwrap());
assert_eq!(Api::ParityAccounts, "parity_accounts".parse().unwrap());
assert_eq!(Api::ParitySet, "parity_set".parse().unwrap());
assert_eq!(Api::Traces, "traces".parse().unwrap());
assert_eq!(Api::Rpc, "rpc".parse().unwrap());
assert_eq!(Api::SecretStore, "secretstore".parse().unwrap());
assert!("rp".parse::<Api>().is_err());
}
#[test]
fn test_api_set_default() {
assert_eq!(ApiSet::UnsafeContext, ApiSet::default());
}
#[test]
fn test_api_set_parsing() {
assert_eq!(
ApiSet::List(vec![Api::Web3, Api::Eth].into_iter().collect()),
"web3,eth".parse().unwrap()
);
}
#[test]
fn test_api_set_unsafe_context() {
let expected = vec![
// make sure this list contains only SAFE methods
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::ParityPubSub,
Api::Traces,
Api::Rpc,
]
.into_iter()
.collect();
assert_eq!(ApiSet::UnsafeContext.list_apis(), expected);
}
#[test]
fn test_api_set_ipc_context() {
let expected = vec![
// safe
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::ParityPubSub,
Api::Traces,
Api::Rpc,
// semi-safe
Api::ParityAccounts,
]
.into_iter()
.collect();
assert_eq!(ApiSet::IpcContext.list_apis(), expected);
}
#[test]
fn test_all_apis() {
assert_eq!(
"all".parse::<ApiSet>().unwrap(),
ApiSet::List(
vec![
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::ParityPubSub,
Api::Traces,
Api::Rpc,
Api::SecretStore,
Api::ParityAccounts,
Api::ParitySet,
Api::Signer,
Api::Personal,
Api::Debug,
]
.into_iter()
.collect()
)
);
}
#[test]
fn test_all_without_personal_apis() {
assert_eq!(
"personal,all,-personal".parse::<ApiSet>().unwrap(),
ApiSet::List(
vec![
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::ParityPubSub,
Api::Traces,
Api::Rpc,
Api::SecretStore,
Api::ParityAccounts,
Api::ParitySet,
Api::Signer,
Api::Debug,
]
.into_iter()
.collect()
)
);
}
#[test]
fn test_safe_parsing() {
assert_eq!(
"safe".parse::<ApiSet>().unwrap(),
ApiSet::List(
vec![
Api::Web3,
Api::Net,
Api::Eth,
Api::EthPubSub,
Api::Parity,
Api::ParityPubSub,
Api::Traces,
Api::Rpc,
]
.into_iter()
.collect()
)
);
}
}

View File

@ -1,751 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{
any::Any,
str::FromStr,
sync::{atomic, Arc, Weak},
thread,
time::{Duration, Instant},
};
use crate::{
account_utils,
cache::CacheConfig,
db,
helpers::{execute_upgrades, passwords_from_files, to_client_config},
informant::{FullNodeInformantData, Informant},
metrics::{start_prometheus_metrics, MetricsConfiguration},
miner::{external::ExternalMiner, work_notify::WorkPoster},
modules,
params::{
fatdb_switch_to_bool, mode_switch_to_bool, tracing_switch_to_bool, AccountsConfig,
GasPricerConfig, MinerExtras, Pruning, SpecType, Switch,
},
rpc, rpc_apis, secretstore, signer,
sync::{self, SyncConfig},
user_defaults::UserDefaults,
};
use ansi_term::Colour;
use dir::{DatabaseDirectories, Directories};
use ethcore::{
client::{BlockChainClient, BlockInfo, Client, DatabaseCompactionProfile, Mode, VMType},
miner::{self, stratum, Miner, MinerOptions, MinerService},
snapshot::{self, SnapshotConfiguration},
verification::queue::VerifierSettings,
};
use ethcore_logger::{Config as LogConfig, RotatingLogger};
use ethcore_service::ClientService;
use ethereum_types::{H256, U64};
use journaldb::Algorithm;
use jsonrpc_core;
use node_filter::NodeFilter;
use parity_rpc::{
informant, is_major_importing, FutureOutput, FutureResponse, FutureResult, Metadata,
NetworkSettings, Origin, PubSubSession,
};
use parity_runtime::Runtime;
use parity_version::version;
// How often we attempt to take a snapshot: only snapshot on block numbers that are multiples of this.
const SNAPSHOT_PERIOD: u64 = 20000;
// Start snapshotting from `tip` - `history`; with this we want to bypass reorgs. Should be smaller than the pruning history.
const SNAPSHOT_HISTORY: u64 = 50;
// Full client number of DNS threads
const FETCH_FULL_NUM_DNS_THREADS: usize = 4;
#[derive(Debug, PartialEq)]
pub struct RunCmd {
pub cache_config: CacheConfig,
pub dirs: Directories,
pub spec: SpecType,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
/// Some if execution should be daemonized. Contains pid_file path.
pub daemon: Option<String>,
pub logger_config: LogConfig,
pub miner_options: MinerOptions,
pub gas_price_percentile: usize,
pub poll_lifetime: u32,
pub ws_conf: rpc::WsConfiguration,
pub http_conf: rpc::HttpConfiguration,
pub ipc_conf: rpc::IpcConfiguration,
pub net_conf: sync::NetworkConfiguration,
pub network_id: Option<u64>,
pub warp_sync: bool,
pub warp_barrier: Option<u64>,
pub acc_conf: AccountsConfig,
pub gas_pricer_conf: GasPricerConfig,
pub miner_extras: MinerExtras,
pub mode: Option<Mode>,
pub tracing: Switch,
pub fat_db: Switch,
pub compaction: DatabaseCompactionProfile,
pub vm_type: VMType,
pub experimental_rpcs: bool,
pub net_settings: NetworkSettings,
pub secretstore_conf: secretstore::Configuration,
pub name: String,
pub custom_bootnodes: bool,
pub stratum: Option<stratum::Options>,
pub snapshot_conf: SnapshotConfiguration,
pub check_seal: bool,
pub allow_missing_blocks: bool,
pub download_old_blocks: bool,
pub verifier_settings: VerifierSettings,
pub no_persistent_txqueue: bool,
pub max_round_blocks_to_import: usize,
pub metrics_conf: MetricsConfiguration,
}
// node info fetcher for the local store.
struct FullNodeInfo {
miner: Option<Arc<Miner>>, // TODO: only TXQ needed, just use that after decoupling.
}
impl crate::local_store::NodeInfo for FullNodeInfo {
fn pending_transactions(&self) -> Vec<crate::types::transaction::PendingTransaction> {
let miner = match self.miner.as_ref() {
Some(m) => m,
None => return Vec::new(),
};
miner
.local_transactions()
.values()
.filter_map(|status| match *status {
crate::miner::pool::local_transactions::Status::Pending(ref tx) => {
Some(tx.pending().clone())
}
_ => None,
})
.collect()
}
}
/// Executes the given run command.
///
/// On error, returns what to print on stderr.
pub fn execute(cmd: RunCmd, logger: Arc<RotatingLogger>) -> Result<RunningClient, String> {
// load spec
let spec = cmd.spec.spec(&cmd.dirs.cache)?;
// load genesis hash
let genesis_hash = spec.genesis_header().hash();
// database paths
let db_dirs = cmd.dirs.database(
genesis_hash,
cmd.spec.legacy_fork_name(),
spec.data_dir.clone(),
);
// user defaults path
let user_defaults_path = db_dirs.user_defaults_path();
// load user defaults
let mut user_defaults = UserDefaults::load(&user_defaults_path)?;
// select pruning algorithm
let algorithm = cmd.pruning.to_algorithm(&user_defaults);
// check if tracing is on
let tracing = tracing_switch_to_bool(cmd.tracing, &user_defaults)?;
// check if fatdb is on
let fat_db = fatdb_switch_to_bool(cmd.fat_db, &user_defaults, algorithm)?;
// get the mode
let mode = mode_switch_to_bool(cmd.mode, &user_defaults)?;
trace!(target: "mode", "mode is {:?}", mode);
let network_enabled = match mode {
Mode::Dark(_) | Mode::Off => false,
_ => true,
};
// prepare client and snapshot paths.
let client_path = db_dirs.client_path(algorithm);
let snapshot_path = db_dirs.snapshot_path();
// execute upgrades
execute_upgrades(&cmd.dirs.base, &db_dirs, algorithm, &cmd.compaction)?;
// create dirs used by parity
cmd.dirs.create_dirs(
cmd.acc_conf.unlocked_accounts.len() == 0,
cmd.secretstore_conf.enabled,
)?;
// print out the running parity environment
print_running_environment(&spec.data_dir, &cmd.dirs, &db_dirs);
// display info about used pruning algorithm
info!(
"State DB configuration: {}{}{}",
Colour::White.bold().paint(algorithm.as_str()),
match fat_db {
true => Colour::White.bold().paint(" +Fat").to_string(),
false => "".to_owned(),
},
match tracing {
true => Colour::White.bold().paint(" +Trace").to_string(),
false => "".to_owned(),
}
);
info!(
"Operating mode: {}",
Colour::White.bold().paint(format!("{}", mode))
);
// display warning about using experimental journaldb algorithm
if !algorithm.is_stable() {
warn!(
"Your chosen strategy is {}! You can re-run with --pruning to change.",
Colour::Red.bold().paint("unstable")
);
}
// create sync config
let mut sync_config = SyncConfig::default();
sync_config.network_id = match cmd.network_id {
Some(id) => id,
None => spec.network_id(),
};
if spec.subprotocol_name().len() > 8 {
warn!("Your chain specification's subprotocol length is more then 8. Ignoring.");
} else {
sync_config.subprotocol_name = U64::from(spec.subprotocol_name().as_bytes())
}
sync_config.fork_block = spec.fork_block();
let mut warp_sync = spec.engine.supports_warp() && cmd.warp_sync;
if warp_sync {
// Warp sync cannot be used with these settings; warn and fall back to normal sync.
if fat_db {
warn!("Warning: Warp Sync is disabled because Fat DB is turned on.");
warp_sync = false;
} else if tracing {
warn!("Warning: Warp Sync is disabled because tracing is turned on.");
warp_sync = false;
} else if algorithm != Algorithm::OverlayRecent {
warn!("Warning: Warp Sync is disabled because of non-default pruning mode.");
warp_sync = false;
}
}
sync_config.warp_sync = match (warp_sync, cmd.warp_barrier) {
(true, Some(block)) => sync::WarpSync::OnlyAndAfter(block),
(true, _) => sync::WarpSync::Enabled,
_ => sync::WarpSync::Disabled,
};
sync_config.download_old_blocks = cmd.download_old_blocks;
sync_config.eip1559_transition = spec.params().eip1559_transition;
let passwords = passwords_from_files(&cmd.acc_conf.password_files)?;
// prepare account provider
let account_provider = Arc::new(account_utils::prepare_account_provider(
&cmd.spec,
&cmd.dirs,
&spec.data_dir,
cmd.acc_conf,
&passwords,
)?);
// spin up event loop
let runtime = Runtime::with_default_thread_count();
// fetch service
let fetch = fetch::Client::new(FETCH_FULL_NUM_DNS_THREADS)
.map_err(|e| format!("Error starting fetch client: {:?}", e))?;
let txpool_size = cmd.miner_options.pool_limits.max_count;
// create miner
let miner = Arc::new(Miner::new(
cmd.miner_options,
cmd.gas_pricer_conf
.to_gas_pricer(fetch.clone(), runtime.executor()),
&spec,
(
cmd.miner_extras.local_accounts,
account_utils::miner_local_accounts(account_provider.clone()),
),
));
miner.set_author(miner::Author::External(cmd.miner_extras.author));
miner.set_gas_range_target(cmd.miner_extras.gas_range_target);
miner.set_extra_data(cmd.miner_extras.extra_data);
if !cmd.miner_extras.work_notify.is_empty() {
miner.add_work_listener(Box::new(WorkPoster::new(
&cmd.miner_extras.work_notify,
fetch.clone(),
runtime.executor(),
)));
}
let engine_signer = cmd.miner_extras.engine_signer;
if engine_signer != Default::default() {
if let Some(author) = account_utils::miner_author(
&cmd.spec,
&cmd.dirs,
&account_provider,
engine_signer,
&passwords,
)? {
miner.set_author(author);
}
}
// create client config
let mut client_config = to_client_config(
&cmd.cache_config,
spec.name.to_lowercase(),
mode.clone(),
tracing,
fat_db,
cmd.compaction,
cmd.vm_type,
cmd.name,
algorithm,
cmd.pruning_history,
cmd.pruning_memory,
cmd.check_seal,
cmd.max_round_blocks_to_import,
);
client_config.queue.verifier_settings = cmd.verifier_settings;
client_config.queue.verifier_settings.bad_hashes = verification_bad_blocks(&cmd.spec);
client_config.transaction_verification_queue_size = ::std::cmp::max(2048, txpool_size / 4);
client_config.snapshot = cmd.snapshot_conf.clone();
// set up bootnodes
let mut net_conf = cmd.net_conf;
if !cmd.custom_bootnodes {
net_conf.boot_nodes = spec.nodes.clone();
}
// set network path.
net_conf.net_config_path = Some(db_dirs.network_path().to_string_lossy().into_owned());
let restoration_db_handler = db::restoration_db_handler(&client_path, &client_config);
let client_db = restoration_db_handler
.open(&client_path)
.map_err(|e| format!("Failed to open database {:?}", e))?;
// create client service.
let service = ClientService::start(
client_config,
&spec,
client_db,
&snapshot_path,
restoration_db_handler,
&cmd.dirs.ipc_path(),
miner.clone(),
)
.map_err(|e| format!("Client service error: {:?}", e))?;
let connection_filter_address = spec.params().node_permission_contract;
// drop the spec to free up genesis state.
let forks = spec.hard_forks.clone();
drop(spec);
// take handle to client
let client = service.client();
// Update the miner's block gas limit and base_fee
let base_fee = client
.engine()
.calculate_base_fee(&client.best_block_header());
let allow_non_eoa_sender = client
.engine()
.allow_non_eoa_sender(client.best_block_header().number() + 1);
miner.update_transaction_queue_limits(
*client.best_block_header().gas_limit(),
base_fee,
allow_non_eoa_sender,
);
let connection_filter = connection_filter_address.map(|a| {
Arc::new(NodeFilter::new(
Arc::downgrade(&client) as Weak<dyn BlockChainClient>,
a,
))
});
let snapshot_service = service.snapshot_service();
// initialize the local node information store.
let store = {
let db = service.db();
let node_info = FullNodeInfo {
miner: match cmd.no_persistent_txqueue {
true => None,
false => Some(miner.clone()),
},
};
let store = crate::local_store::create(
db.key_value().clone(),
::ethcore_db::COL_NODE_INFO,
node_info,
);
if cmd.no_persistent_txqueue {
info!("Running without a persistent transaction queue.");
if let Err(e) = store.clear() {
warn!("Error clearing persistent transaction queue: {}", e);
}
}
// re-queue pending transactions.
match store.pending_transactions() {
Ok(pending) => {
for pending_tx in pending {
if let Err(e) = miner.import_own_transaction(&*client, pending_tx) {
warn!("Error importing saved transaction: {}", e)
}
}
}
Err(e) => warn!("Error loading cached pending transactions from disk: {}", e),
}
Arc::new(store)
};
// register it as an IO service to update periodically.
service
.register_io_handler(store)
.map_err(|_| "Unable to register local store handler".to_owned())?;
// create external miner
let external_miner = Arc::new(ExternalMiner::default());
// start stratum
if let Some(ref stratum_config) = cmd.stratum {
stratum::Stratum::register(stratum_config, miner.clone(), Arc::downgrade(&client))
.map_err(|e| format!("Stratum start error: {:?}", e))?;
}
// create sync object
let (sync_provider, manage_network, chain_notify, priority_tasks) = modules::sync(
sync_config,
net_conf.clone().into(),
client.clone(),
forks,
snapshot_service.clone(),
&cmd.logger_config,
connection_filter
.clone()
.map(|f| f as Arc<dyn crate::sync::ConnectionFilter + 'static>),
)
.map_err(|e| format!("Sync error: {}", e))?;
service.add_notify(chain_notify.clone());
// Propagate transactions as soon as they are imported.
let tx = ::parking_lot::Mutex::new(priority_tasks);
let is_ready = Arc::new(atomic::AtomicBool::new(true));
miner.add_transactions_listener(Box::new(move |_hashes| {
// we want to have only one PendingTransactions task in the queue.
if is_ready
.compare_exchange(
true,
false,
atomic::Ordering::SeqCst,
atomic::Ordering::SeqCst,
)
.is_ok()
{
let task =
crate::sync::PriorityTask::PropagateTransactions(Instant::now(), is_ready.clone());
// we ignore the error because it means that we are shutting down
let _ = tx.lock().send(task);
}
}));
// start network
if network_enabled {
chain_notify.start();
}
// set up dependencies for rpc servers
let rpc_stats = Arc::new(informant::RpcStats::default());
let secret_store = account_provider.clone();
let signer_service = Arc::new(signer::new_service(&cmd.ws_conf, &cmd.logger_config));
let deps_for_rpc_apis = Arc::new(rpc_apis::FullDependencies {
signer_service: signer_service,
snapshot: snapshot_service.clone(),
client: client.clone(),
sync: sync_provider.clone(),
net: manage_network.clone(),
accounts: secret_store,
miner: miner.clone(),
external_miner: external_miner.clone(),
logger: logger.clone(),
settings: Arc::new(cmd.net_settings.clone()),
net_service: manage_network.clone(),
experimental_rpcs: cmd.experimental_rpcs,
ws_address: cmd.ws_conf.address(),
fetch: fetch.clone(),
executor: runtime.executor(),
gas_price_percentile: cmd.gas_price_percentile,
poll_lifetime: cmd.poll_lifetime,
allow_missing_blocks: cmd.allow_missing_blocks,
no_ancient_blocks: !cmd.download_old_blocks,
});
let dependencies = rpc::Dependencies {
apis: deps_for_rpc_apis.clone(),
executor: runtime.executor(),
stats: rpc_stats.clone(),
};
// start rpc servers
let rpc_direct = rpc::setup_apis(rpc_apis::ApiSet::All, &dependencies);
let ws_server = rpc::new_ws(cmd.ws_conf.clone(), &dependencies)?;
let ipc_server = rpc::new_ipc(cmd.ipc_conf, &dependencies)?;
// start the prometheus metrics server
start_prometheus_metrics(&cmd.metrics_conf, &dependencies)?;
let http_server = rpc::new_http(
"HTTP JSON-RPC",
"jsonrpc",
cmd.http_conf.clone(),
&dependencies,
)?;
// secret store key server
let secretstore_deps = secretstore::Dependencies {
client: client.clone(),
sync: sync_provider.clone(),
miner: miner.clone(),
account_provider,
accounts_passwords: &passwords,
};
let secretstore_key_server = secretstore::start(
cmd.secretstore_conf.clone(),
secretstore_deps,
runtime.executor(),
)?;
// the informant
let informant = Arc::new(Informant::new(
FullNodeInformantData {
client: service.client(),
sync: Some(sync_provider.clone()),
net: Some(manage_network.clone()),
},
Some(snapshot_service.clone()),
Some(rpc_stats.clone()),
cmd.logger_config.color,
));
service.add_notify(informant.clone());
service
.register_io_handler(informant.clone())
.map_err(|_| "Unable to register informant handler".to_owned())?;
// save user defaults
user_defaults.is_first_launch = false;
user_defaults.pruning = algorithm;
user_defaults.tracing = tracing;
user_defaults.fat_db = fat_db;
user_defaults.set_mode(mode);
user_defaults.save(&user_defaults_path)?;
// tell client how to save the default mode if it gets changed.
client.on_user_defaults_change(move |mode: Option<Mode>| {
if let Some(mode) = mode {
user_defaults.set_mode(mode);
}
let _ = user_defaults.save(&user_defaults_path); // discard failures - there's nothing we can do
});
// the watcher must be kept alive.
let watcher = match cmd.snapshot_conf.enable {
false => None,
true => {
let sync = sync_provider.clone();
let client = client.clone();
let watcher = Arc::new(snapshot::Watcher::new(
service.client(),
move || is_major_importing(Some(sync.status().state), client.queue_info()),
service.io().channel(),
SNAPSHOT_PERIOD,
SNAPSHOT_HISTORY,
));
service.add_notify(watcher.clone());
Some(watcher)
}
};
Ok(RunningClient {
inner: RunningClientInner::Full {
rpc: rpc_direct,
informant,
client,
client_service: Arc::new(service),
keep_alive: Box::new((
watcher,
ws_server,
http_server,
ipc_server,
secretstore_key_server,
runtime,
)),
},
})
}
/// Set bad blocks in the VerificationQueue. By omitting a header we can exclude a particular fork of the chain.
fn verification_bad_blocks(spec: &SpecType) -> Vec<H256> {
match *spec {
SpecType::Ropsten => {
vec![
H256::from_str("1eac3d16c642411f13c287e29144c6f58fda859407c8f24c38deb168e1040714")
.expect("Valid hex string"),
]
}
_ => vec![],
}
}
/// Parity client currently executing in background threads.
///
/// Should be destroyed by calling `shutdown()`, otherwise execution will continue in the
/// background.
pub struct RunningClient {
inner: RunningClientInner,
}
enum RunningClientInner {
Full {
rpc:
jsonrpc_core::MetaIoHandler<Metadata, informant::Middleware<informant::ClientNotifier>>,
informant: Arc<Informant<FullNodeInformantData>>,
client: Arc<Client>,
client_service: Arc<ClientService>,
keep_alive: Box<dyn Any>,
},
}
impl RunningClient {
/// Performs an asynchronous RPC query.
// FIXME: [tomaka] This API should be better, with for example a Future
pub fn rpc_query(
&self,
request: &str,
session: Option<Arc<PubSubSession>>,
) -> FutureResult<FutureResponse, FutureOutput> {
let metadata = Metadata {
origin: Origin::CApi,
session,
};
match self.inner {
RunningClientInner::Full { ref rpc, .. } => rpc.handle_request(request, metadata),
}
}
/// Shuts down the client.
pub fn shutdown(self) {
match self.inner {
RunningClientInner::Full {
rpc,
informant,
client,
client_service,
keep_alive,
} => {
info!("Finishing work, please wait...");
// Create a weak reference to the client so that we can wait on shutdown
// until it is dropped
let weak_client = Arc::downgrade(&client);
// Shutdown and drop the ClientService
client_service.shutdown();
trace!(target: "shutdown", "ClientService shut down");
drop(client_service);
trace!(target: "shutdown", "ClientService dropped");
// Drop these as soon as exit is detected.
drop(rpc);
trace!(target: "shutdown", "RPC dropped");
drop(keep_alive);
trace!(target: "shutdown", "KeepAlive dropped");
// Ensure the informant timer does not spawn requests while shutdown is in progress.
informant.shutdown();
trace!(target: "shutdown", "Informant shut down");
// Only the Arc is dropped here, allowing the remaining references to be released in their own time.
drop(informant);
trace!(target: "shutdown", "Informant dropped");
drop(client);
trace!(target: "shutdown", "Client dropped");
// This may help when debugging ref cycles. Requires nightly-only `#![feature(weak_counts)]`
// trace!(target: "shutdown", "Waiting for refs to Client to shutdown, strong_count={:?}, weak_count={:?}", weak_client.strong_count(), weak_client.weak_count());
trace!(target: "shutdown", "Waiting for refs to Client to shutdown");
wait_for_drop(weak_client);
}
}
}
}
fn print_running_environment(data_dir: &str, dirs: &Directories, db_dirs: &DatabaseDirectories) {
info!("Starting {}", Colour::White.bold().paint(version()));
info!(
"Keys path {}",
Colour::White
.bold()
.paint(dirs.keys_path(data_dir).to_string_lossy().into_owned())
);
info!(
"DB path {}",
Colour::White
.bold()
.paint(db_dirs.db_root_path().to_string_lossy().into_owned())
);
}
fn wait_for_drop<T>(w: Weak<T>) {
const SLEEP_DURATION: Duration = Duration::from_secs(1);
const WARN_TIMEOUT: Duration = Duration::from_secs(60);
const MAX_TIMEOUT: Duration = Duration::from_secs(300);
let instant = Instant::now();
let mut warned = false;
while instant.elapsed() < MAX_TIMEOUT {
if w.upgrade().is_none() {
return;
}
if !warned && instant.elapsed() > WARN_TIMEOUT {
warned = true;
warn!("Shutdown is taking longer than expected.");
}
thread::sleep(SLEEP_DURATION);
// When debugging shutdown issues on a nightly build it can help to enable this with the
// `#![feature(weak_counts)]` added to lib.rs (TODO: enable when
// https://github.com/rust-lang/rust/issues/57977 is stable)
// trace!(target: "shutdown", "Waiting for client to drop, strong_count={:?}, weak_count={:?}", w.strong_count(), w.weak_count());
trace!(target: "shutdown", "Waiting for client to drop");
}
warn!("Shutdown timeout reached, exiting uncleanly.");
}
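// Illustrative sketch (not part of the original file): `wait_for_drop` returns as
// soon as the last strong reference is gone.
#[cfg(test)]
mod wait_for_drop_sketch {
    use std::sync::Arc;

    #[test]
    fn returns_once_last_arc_is_dropped() {
        let strong = Arc::new(());
        let weak = Arc::downgrade(&strong);
        drop(strong);
        // No strong references remain, so the first `upgrade()` check fails and
        // the function returns immediately instead of sleeping.
        super::wait_for_drop(weak);
    }
}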


@ -1,333 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crate::{account_utils::AccountProvider, sync::SyncProvider};
use crypto::publickey::{Public, Secret};
use dir::{default_data_path, helpers::replace_home};
use ethcore::{client::Client, miner::Miner};
use ethereum_types::Address;
use ethkey::Password;
use parity_runtime::Executor;
use std::{collections::BTreeMap, sync::Arc};
/// This node secret key.
#[derive(Debug, PartialEq, Clone)]
pub enum NodeSecretKey {
/// Stored as plain text in configuration file.
Plain(Secret),
/// Stored as account in key store.
#[cfg(feature = "accounts")]
KeyStore(Address),
}
/// Secret store service contract address.
#[derive(Debug, PartialEq, Clone)]
pub enum ContractAddress {
/// Contract address is read from registry.
Registry,
/// Contract address is specified.
Address(Address),
}
#[derive(Debug, PartialEq, Clone)]
/// Secret store configuration
pub struct Configuration {
/// Is secret store functionality enabled?
pub enabled: bool,
/// Is HTTP API enabled?
pub http_enabled: bool,
/// Is auto migrate enabled.
pub auto_migrate_enabled: bool,
/// ACL check contract address.
pub acl_check_contract_address: Option<ContractAddress>,
/// Service contract address.
pub service_contract_address: Option<ContractAddress>,
/// Server key generation service contract address.
pub service_contract_srv_gen_address: Option<ContractAddress>,
/// Server key retrieval service contract address.
pub service_contract_srv_retr_address: Option<ContractAddress>,
/// Document key store service contract address.
pub service_contract_doc_store_address: Option<ContractAddress>,
/// Document key shadow retrieval service contract address.
pub service_contract_doc_sretr_address: Option<ContractAddress>,
/// This node secret.
pub self_secret: Option<NodeSecretKey>,
/// Other nodes IDs + addresses.
pub nodes: BTreeMap<Public, (String, u16)>,
/// Key Server Set contract address. If None, 'nodes' map is used.
pub key_server_set_contract_address: Option<ContractAddress>,
/// Interface to listen to
pub interface: String,
/// Port to listen to
pub port: u16,
/// Interface to listen to
pub http_interface: String,
/// Port to listen to
pub http_port: u16,
/// Data directory path for secret store
pub data_path: String,
/// Administrator public key.
pub admin_public: Option<Public>,
}
/// Secret store dependencies
pub struct Dependencies<'a> {
/// Blockchain client.
pub client: Arc<Client>,
/// Sync provider.
pub sync: Arc<dyn SyncProvider>,
/// Miner service.
pub miner: Arc<Miner>,
/// Account provider.
pub account_provider: Arc<AccountProvider>,
/// Passed accounts passwords.
pub accounts_passwords: &'a [Password],
}
#[cfg(not(feature = "secretstore"))]
mod server {
use super::{Configuration, Dependencies, Executor};
/// Noop key server implementation
pub struct KeyServer;
impl KeyServer {
/// Create new noop key server
pub fn new(
_conf: Configuration,
_deps: Dependencies,
_executor: Executor,
) -> Result<Self, String> {
Ok(KeyServer)
}
}
}
#[cfg(feature = "secretstore")]
mod server {
use super::{Configuration, ContractAddress, Dependencies, Executor, NodeSecretKey};
use ansi_term::Colour::{Red, White};
use db;
use ethcore_secretstore;
use ethkey::KeyPair;
use std::sync::Arc;
fn into_service_contract_address(
address: ContractAddress,
) -> ethcore_secretstore::ContractAddress {
match address {
ContractAddress::Registry => ethcore_secretstore::ContractAddress::Registry,
ContractAddress::Address(address) => {
ethcore_secretstore::ContractAddress::Address(address)
}
}
}
/// Key server
pub struct KeyServer {
_key_server: Box<dyn ethcore_secretstore::KeyServer>,
}
impl KeyServer {
/// Create new key server
pub fn new(
mut conf: Configuration,
deps: Dependencies,
executor: Executor,
) -> Result<Self, String> {
let self_secret: Arc<dyn ethcore_secretstore::NodeKeyPair> =
match conf.self_secret.take() {
Some(NodeSecretKey::Plain(secret)) => {
Arc::new(ethcore_secretstore::PlainNodeKeyPair::new(
KeyPair::from_secret(secret)
.map_err(|e| format!("invalid secret: {}", e))?,
))
}
#[cfg(feature = "accounts")]
Some(NodeSecretKey::KeyStore(account)) => {
// Check if account exists
if !deps.account_provider.has_account(account.clone()) {
return Err(format!(
"Account {} passed as secret store node key is not found",
account
));
}
// Check if any passwords have been read from the password file(s)
if deps.accounts_passwords.is_empty() {
return Err(format!(
"No password found for the secret store node account {}",
account
));
}
// Find the password that can unlock the account by attempting a test signature.
let password = deps
.accounts_passwords
.iter()
.find(|p| {
deps.account_provider
.sign(account.clone(), Some((*p).clone()), Default::default())
.is_ok()
})
.ok_or_else(|| {
format!(
"No valid password for the secret store node account {}",
account
)
})?;
Arc::new(
ethcore_secretstore::KeyStoreNodeKeyPair::new(
deps.account_provider,
account,
password.clone(),
)
.map_err(|e| format!("{}", e))?,
)
}
None => return Err("self secret is required when using secretstore".into()),
};
info!(
"Starting SecretStore node: {}",
White.bold().paint(format!("{:?}", self_secret.public()))
);
if conf.acl_check_contract_address.is_none() {
warn!(
"Running SecretStore with disabled ACL check: {}",
Red.bold().paint("everyone has access to stored keys")
);
}
let key_server_name = format!("{}:{}", conf.interface, conf.port);
let mut cconf = ethcore_secretstore::ServiceConfiguration {
listener_address: if conf.http_enabled {
Some(ethcore_secretstore::NodeAddress {
address: conf.http_interface.clone(),
port: conf.http_port,
})
} else {
None
},
service_contract_address: conf
.service_contract_address
.map(into_service_contract_address),
service_contract_srv_gen_address: conf
.service_contract_srv_gen_address
.map(into_service_contract_address),
service_contract_srv_retr_address: conf
.service_contract_srv_retr_address
.map(into_service_contract_address),
service_contract_doc_store_address: conf
.service_contract_doc_store_address
.map(into_service_contract_address),
service_contract_doc_sretr_address: conf
.service_contract_doc_sretr_address
.map(into_service_contract_address),
acl_check_contract_address: conf
.acl_check_contract_address
.map(into_service_contract_address),
cluster_config: ethcore_secretstore::ClusterConfiguration {
listener_address: ethcore_secretstore::NodeAddress {
address: conf.interface.clone(),
port: conf.port,
},
nodes: conf
.nodes
.into_iter()
.map(|(p, (ip, port))| {
(
p,
ethcore_secretstore::NodeAddress {
address: ip,
port: port,
},
)
})
.collect(),
key_server_set_contract_address: conf
.key_server_set_contract_address
.map(into_service_contract_address),
allow_connecting_to_higher_nodes: true,
admin_public: conf.admin_public,
auto_migrate_enabled: conf.auto_migrate_enabled,
},
};
cconf.cluster_config.nodes.insert(
self_secret.public().clone(),
cconf.cluster_config.listener_address.clone(),
);
let db = db::open_secretstore_db(&conf.data_path)?;
let key_server = ethcore_secretstore::start(
deps.client,
deps.sync,
deps.miner,
self_secret,
cconf,
db,
executor,
)
.map_err(|e| format!("Error starting KeyServer {}: {}", key_server_name, e))?;
Ok(KeyServer {
_key_server: key_server,
})
}
}
}
pub use self::server::KeyServer;
impl Default for Configuration {
fn default() -> Self {
let data_dir = default_data_path();
Configuration {
enabled: true,
http_enabled: true,
auto_migrate_enabled: true,
acl_check_contract_address: Some(ContractAddress::Registry),
service_contract_address: None,
service_contract_srv_gen_address: None,
service_contract_srv_retr_address: None,
service_contract_doc_store_address: None,
service_contract_doc_sretr_address: None,
self_secret: None,
admin_public: None,
nodes: BTreeMap::new(),
key_server_set_contract_address: Some(ContractAddress::Registry),
interface: "127.0.0.1".to_owned(),
port: 8083,
http_interface: "127.0.0.1".to_owned(),
http_port: 8082,
data_path: replace_home(&data_dir, "$BASE/secretstore"),
}
}
}
/// Start secret store-related functionality
pub fn start(
conf: Configuration,
deps: Dependencies,
executor: Executor,
) -> Result<Option<KeyServer>, String> {
if !conf.enabled {
return Ok(None);
}
KeyServer::new(conf, deps, executor).map(|s| Some(s))
}


@ -1,98 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{
io,
path::{Path, PathBuf},
};
use crate::{path::restrict_permissions_owner, rpc, rpc_apis};
use ansi_term::Colour::White;
use ethcore_logger::Config as LogConfig;
use parity_rpc;
pub const CODES_FILENAME: &'static str = "authcodes";
pub struct NewToken {
pub token: String,
pub message: String,
}
pub fn new_service(
ws_conf: &rpc::WsConfiguration,
logger_config: &LogConfig,
) -> rpc_apis::SignerService {
let logger_config_color = logger_config.color;
let signer_path = ws_conf.signer_path.clone();
let signer_enabled = ws_conf.support_token_api;
rpc_apis::SignerService::new(
move || {
generate_new_token(&signer_path, logger_config_color).map_err(|e| format!("{:?}", e))
},
signer_enabled,
)
}
pub fn codes_path(path: &Path) -> PathBuf {
let mut p = path.to_owned();
p.push(CODES_FILENAME);
let _ = restrict_permissions_owner(&p, true, false);
p
}
pub fn execute(ws_conf: rpc::WsConfiguration, logger_config: LogConfig) -> Result<String, String> {
Ok(generate_token_and_url(&ws_conf, &logger_config)?.message)
}
pub fn generate_token_and_url(
ws_conf: &rpc::WsConfiguration,
logger_config: &LogConfig,
) -> Result<NewToken, String> {
let code = generate_new_token(&ws_conf.signer_path, logger_config.color)
.map_err(|err| format!("Error generating token: {:?}", err))?;
let colored = |s: String| match logger_config.color {
true => format!("{}", White.bold().paint(s)),
false => s,
};
Ok(NewToken {
token: code.clone(),
message: format!(
r#"
Generated token:
{}
"#,
colored(code)
),
})
}
fn generate_new_token(path: &Path, logger_config_color: bool) -> io::Result<String> {
let path = codes_path(path);
let mut codes = parity_rpc::AuthCodes::from_file(&path)?;
codes.clear_garbage();
let code = codes.generate_new()?;
codes.to_file(&path)?;
trace!(
"New key code created: {}",
match logger_config_color {
true => format!("{}", White.bold().paint(&code[..])),
false => format!("{}", &code[..]),
}
);
Ok(code)
}


@ -1,347 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Snapshot and restoration commands.
use std::{
path::{Path, PathBuf},
sync::Arc,
time::Duration,
};
use crate::{hash::keccak, types::ids::BlockId};
use ethcore::{
client::{DatabaseCompactionProfile, Mode, VMType},
miner::Miner,
snapshot::{
io::{PackedReader, PackedWriter, SnapshotReader},
service::Service as SnapshotService,
Progress, RestorationStatus, SnapshotConfiguration, SnapshotService as SS,
},
};
use ethcore_service::ClientService;
use crate::{
cache::CacheConfig,
db,
helpers::{execute_upgrades, to_client_config},
params::{fatdb_switch_to_bool, tracing_switch_to_bool, Pruning, SpecType, Switch},
user_defaults::UserDefaults,
};
use dir::Directories;
/// Kinds of snapshot commands.
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum Kind {
/// Take a snapshot.
Take,
/// Restore a snapshot.
Restore,
}
/// Command for snapshot creation or restoration.
#[derive(Debug, PartialEq)]
pub struct SnapshotCommand {
pub cache_config: CacheConfig,
pub dirs: Directories,
pub spec: SpecType,
pub pruning: Pruning,
pub pruning_history: u64,
pub pruning_memory: usize,
pub tracing: Switch,
pub fat_db: Switch,
pub compaction: DatabaseCompactionProfile,
pub file_path: Option<String>,
pub kind: Kind,
pub block_at: BlockId,
pub max_round_blocks_to_import: usize,
pub snapshot_conf: SnapshotConfiguration,
}
// helper for reading chunks from arbitrary reader and feeding them into the
// service.
fn restore_using<R: SnapshotReader>(
snapshot: Arc<SnapshotService>,
reader: &R,
recover: bool,
) -> Result<(), String> {
let manifest = reader.manifest();
info!(
"Restoring to block #{} (0x{:?})",
manifest.block_number, manifest.block_hash
);
snapshot
.init_restore(manifest.clone(), recover)
.map_err(|e| format!("Failed to begin restoration: {}", e))?;
let (num_state, num_blocks) = (manifest.state_hashes.len(), manifest.block_hashes.len());
let informant_handle = snapshot.clone();
::std::thread::spawn(move || {
while let RestorationStatus::Ongoing {
state_chunks_done,
block_chunks_done,
..
} = informant_handle.restoration_status()
{
info!(
"Processed {}/{} state chunks and {}/{} block chunks.",
state_chunks_done, num_state, block_chunks_done, num_blocks
);
::std::thread::sleep(Duration::from_secs(5));
}
});
info!("Restoring state");
for &state_hash in &manifest.state_hashes {
if snapshot.restoration_status() == RestorationStatus::Failed {
return Err("Restoration failed".into());
}
let chunk = reader.chunk(state_hash).map_err(|e| {
format!(
"Encountered error while reading chunk {:?}: {}",
state_hash, e
)
})?;
let hash = keccak(&chunk);
if hash != state_hash {
return Err(format!(
"Mismatched chunk hash. Expected {:?}, got {:?}",
state_hash, hash
));
}
snapshot.feed_state_chunk(state_hash, &chunk);
}
info!("Restoring blocks");
for &block_hash in &manifest.block_hashes {
if snapshot.restoration_status() == RestorationStatus::Failed {
return Err("Restoration failed".into());
}
let chunk = reader.chunk(block_hash).map_err(|e| {
format!(
"Encountered error while reading chunk {:?}: {}",
block_hash, e
)
})?;
let hash = keccak(&chunk);
if hash != block_hash {
return Err(format!(
"Mismatched chunk hash. Expected {:?}, got {:?}",
block_hash, hash
));
}
snapshot.feed_block_chunk(block_hash, &chunk);
}
match snapshot.restoration_status() {
RestorationStatus::Ongoing { .. } => {
Err("Snapshot file is incomplete and missing chunks.".into())
}
RestorationStatus::Initializing { .. } => {
Err("Snapshot restoration is still initializing.".into())
}
RestorationStatus::Failed => Err("Snapshot restoration failed.".into()),
RestorationStatus::Inactive => {
info!("Restoration complete.");
Ok(())
}
}
}
impl SnapshotCommand {
// shared portion of snapshot commands: start the client service
fn start_service(self) -> Result<ClientService, String> {
// load spec file
let spec = self.spec.spec(&self.dirs.cache)?;
// load genesis hash
let genesis_hash = spec.genesis_header().hash();
// database paths
let db_dirs = self
.dirs
.database(genesis_hash, None, spec.data_dir.clone());
// user defaults path
let user_defaults_path = db_dirs.user_defaults_path();
// load user defaults
let user_defaults = UserDefaults::load(&user_defaults_path)?;
// select pruning algorithm
let algorithm = self.pruning.to_algorithm(&user_defaults);
// check if tracing is on
let tracing = tracing_switch_to_bool(self.tracing, &user_defaults)?;
// check if fatdb is on
let fat_db = fatdb_switch_to_bool(self.fat_db, &user_defaults, algorithm)?;
// prepare client and snapshot paths.
let client_path = db_dirs.client_path(algorithm);
let snapshot_path = db_dirs.snapshot_path();
// execute upgrades
execute_upgrades(&self.dirs.base, &db_dirs, algorithm, &self.compaction)?;
// prepare client config
let mut client_config = to_client_config(
&self.cache_config,
spec.name.to_lowercase(),
Mode::Active,
tracing,
fat_db,
self.compaction,
VMType::default(),
"".into(),
algorithm,
self.pruning_history,
self.pruning_memory,
true,
self.max_round_blocks_to_import,
);
client_config.snapshot = self.snapshot_conf;
let restoration_db_handler = db::restoration_db_handler(&client_path, &client_config);
let client_db = restoration_db_handler
.open(&client_path)
.map_err(|e| format!("Failed to open database {:?}", e))?;
let service = ClientService::start(
client_config,
&spec,
client_db,
&snapshot_path,
restoration_db_handler,
&self.dirs.ipc_path(),
// TODO [ToDr] don't use test miner here
// (actually don't require miner at all)
Arc::new(Miner::new_for_tests(&spec, None)),
)
.map_err(|e| format!("Client service error: {:?}", e))?;
Ok(service)
}
/// restore from a snapshot
pub fn restore(self) -> Result<(), String> {
let file = self.file_path.clone();
let service = self.start_service()?;
warn!("Snapshot restoration is experimental and the format may be subject to change.");
warn!(
"On encountering an unexpected error, please ensure that you have a recent snapshot."
);
let snapshot = service.snapshot_service();
if let Some(file) = file {
info!("Attempting to restore from snapshot at '{}'", file);
let reader = PackedReader::new(Path::new(&file))
.map_err(|e| format!("Couldn't open snapshot file: {}", e))
.and_then(|x| x.ok_or("Snapshot file has invalid format.".into()));
let reader = reader?;
restore_using(snapshot, &reader, true)?;
} else {
info!("Attempting to restore from local snapshot.");
// attempting restoration with recovery will lead to deadlock
// as we currently hold a read lock on the service's reader.
match *snapshot.reader() {
Some(ref reader) => restore_using(snapshot.clone(), reader, false)?,
None => return Err("No local snapshot found.".into()),
}
}
Ok(())
}
/// Take a snapshot from the head of the chain.
pub fn take_snapshot(self) -> Result<(), String> {
let file_path = self
.file_path
.clone()
.ok_or("No file path provided.".to_owned())?;
let file_path: PathBuf = file_path.into();
let block_at = self.block_at;
let service = self.start_service()?;
warn!("Snapshots are currently experimental. File formats may be subject to change.");
let writer = PackedWriter::new(&file_path)
.map_err(|e| format!("Failed to open snapshot writer: {}", e))?;
let progress = Arc::new(Progress::default());
let p = progress.clone();
let informant_handle = ::std::thread::spawn(move || {
::std::thread::sleep(Duration::from_secs(5));
let mut last_size = 0;
while !p.done() {
let cur_size = p.size();
if cur_size != last_size {
last_size = cur_size;
let bytes = crate::informant::format_bytes(cur_size as usize);
info!(
"Snapshot: {} accounts {} blocks {}",
p.accounts(),
p.blocks(),
bytes
);
}
::std::thread::sleep(Duration::from_secs(5));
}
});
if let Err(e) = service.client().take_snapshot(writer, block_at, &*progress) {
let _ = ::std::fs::remove_file(&file_path);
return Err(format!(
"Encountered fatal error while creating snapshot: {}",
e
));
}
info!("snapshot creation complete");
assert!(progress.done());
informant_handle
.join()
.map_err(|_| "failed to join logger thread")?;
Ok(())
}
}
/// Execute this snapshot command.
pub fn execute(cmd: SnapshotCommand) -> Result<String, String> {
match cmd.kind {
Kind::Take => cmd.take_snapshot()?,
Kind::Restore => cmd.restore()?,
}
Ok(String::new())
}


@ -1,119 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! OpenEthereum sync service
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use ethcore_stratum::{Stratum as StratumServer, PushWorkHandler, RemoteJobDispatcher, ServiceConfiguration};
use modules::service_urls;
use boot;
use hypervisor::service::IpcModuleId;
use hypervisor::{HYPERVISOR_IPC_URL, ControlService};
use std::net::{SocketAddr, IpAddr};
use std::str::FromStr;
use nanoipc;
use std::thread;
use ethcore::miner::stratum::{STRATUM_SOCKET_NAME, JOB_DISPATCHER_SOCKET_NAME};
pub const MODULE_ID: IpcModuleId = 8000;
#[derive(Default)]
struct StratumControlService {
pub stop: Arc<AtomicBool>,
}
impl ControlService for StratumControlService {
fn shutdown(&self) -> bool {
trace!(target: "hypervisor", "Received shutdown from control service");
self.stop.store(true, ::std::sync::atomic::Ordering::SeqCst);
true
}
}
pub fn main() {
boot::setup_cli_logger("stratum");
let service_config: ServiceConfiguration = boot::payload()
.unwrap_or_else(|e| {
println!("Fatal: error reading boot arguments ({:?})", e);
std::process::exit(1)
});
let job_dispatcher = dependency!(
RemoteJobDispatcher,
&service_urls::with_base(&service_config.io_path, JOB_DISPATCHER_SOCKET_NAME)
);
let _ = boot::main_thread();
let service_stop = Arc::new(AtomicBool::new(false));
let server =
StratumServer::start(
&SocketAddr::new(
IpAddr::from_str(&service_config.listen_addr)
.unwrap_or_else(|e| {
println!("Fatal: invalid listen address: '{}' ({:?})", &service_config.listen_addr, e);
std::process::exit(1)
}),
service_config.port,
),
job_dispatcher.service().clone(),
service_config.secret
).unwrap_or_else(
|e| {
println!("Fatal: cannot start stratum server({:?})", e);
std::process::exit(1)
}
);
boot::host_service(
&service_urls::with_base(&service_config.io_path, STRATUM_SOCKET_NAME),
service_stop.clone(),
server.clone() as Arc<PushWorkHandler>
);
let hypervisor = boot::register(
&service_urls::with_base(&service_config.io_path, HYPERVISOR_IPC_URL),
&service_urls::with_base(&service_config.io_path, service_urls::STRATUM_CONTROL),
MODULE_ID
);
let timer_svc = server.clone();
let timer_stop = service_stop.clone();
thread::spawn(move || {
while !timer_stop.load(Ordering::SeqCst) {
thread::park_timeout(::std::time::Duration::from_millis(2000));
// It is almost always a no-op; it only greets new peers with a job.
timer_svc.maintain();
}
});
let control_service = Arc::new(StratumControlService::default());
let as_control = control_service.clone() as Arc<ControlService>;
let mut worker = nanoipc::Worker::<ControlService>::new(&as_control);
worker.add_reqrep(
&service_urls::with_base(&service_config.io_path, service_urls::STRATUM_CONTROL)
).unwrap();
while !control_service.stop.load(Ordering::SeqCst) {
worker.poll();
}
service_stop.store(true, Ordering::SeqCst);
hypervisor.module_shutdown(MODULE_ID);
trace!(target: "hypervisor", "Stratum process terminated gracefully");
}


@ -1,246 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
//! Parity upgrade logic
use dir::{default_data_path, helpers::replace_home, home_dir, DatabaseDirectories};
use journaldb::Algorithm;
use semver::{SemVerError, Version};
use std::{
collections::*,
fs::{self, create_dir_all, File},
io,
io::{Read, Write},
path::{Path, PathBuf},
};
#[derive(Debug)]
pub enum Error {
CannotCreateConfigPath(io::Error),
CannotWriteVersionFile(io::Error),
CannotUpdateVersionFile(io::Error),
SemVer(SemVerError),
}
impl From<SemVerError> for Error {
fn from(err: SemVerError) -> Self {
Error::SemVer(err)
}
}
const CURRENT_VERSION: &'static str = env!("CARGO_PKG_VERSION");
#[derive(Hash, PartialEq, Eq)]
struct UpgradeKey {
pub old_version: Version,
pub new_version: Version,
}
type UpgradeList = HashMap<UpgradeKey, fn() -> Result<(), Error>>;
impl UpgradeKey {
// given the following config exists
// ver.lock 1.1 (`previous_version`)
//
// current_version 1.4 (`current_version`)
//
//
// upgrades (set of `UpgradeKey`)
// 1.0 -> 1.1 (u1)
// 1.1 -> 1.2 (u2)
// 1.2 -> 1.3 (u3)
// 1.3 -> 1.4 (u4)
// 1.4 -> 1.5 (u5)
//
// then the following upgrades should be applied:
// u2, u3, u4
fn is_applicable(&self, previous_version: &Version, current_version: &Version) -> bool {
self.old_version >= *previous_version && self.new_version <= *current_version
}
}
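// Illustrative sketch (not part of the original file): a hypothetical test showing
// how `is_applicable` selects upgrades for the example in the comment above. It
// only relies on `semver::Version::new`, which is already used in this module.
#[cfg(test)]
mod is_applicable_sketch {
    use super::UpgradeKey;
    use semver::Version;

    #[test]
    fn applies_only_upgrades_between_previous_and_current() {
        let previous = Version::new(1, 1, 0); // recorded in ver.lock
        let current = Version::new(1, 4, 0); // CURRENT_VERSION
        let u1 = UpgradeKey { old_version: Version::new(1, 0, 0), new_version: Version::new(1, 1, 0) };
        let u2 = UpgradeKey { old_version: Version::new(1, 1, 0), new_version: Version::new(1, 2, 0) };
        let u5 = UpgradeKey { old_version: Version::new(1, 4, 0), new_version: Version::new(1, 5, 0) };
        assert!(!u1.is_applicable(&previous, &current)); // already applied
        assert!(u2.is_applicable(&previous, &current)); // applied now
        assert!(!u5.is_applicable(&previous, &current)); // not reached yet
    }
}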
// dummy upgrade (remove when the first one is in)
fn dummy_upgrade() -> Result<(), Error> {
Ok(())
}
fn push_upgrades(upgrades: &mut UpgradeList) {
// dummy upgrade (remove when the first one is in)
upgrades.insert(
UpgradeKey {
old_version: Version::new(0, 9, 0),
new_version: Version::new(1, 0, 0),
},
dummy_upgrade,
);
}
fn upgrade_from_version(previous_version: &Version) -> Result<usize, Error> {
let mut upgrades = HashMap::new();
push_upgrades(&mut upgrades);
let current_version = Version::parse(CURRENT_VERSION)?;
let mut count = 0;
for upgrade_key in upgrades.keys() {
if upgrade_key.is_applicable(previous_version, &current_version) {
let upgrade_script = upgrades[upgrade_key];
upgrade_script()?;
count += 1;
}
}
Ok(count)
}
fn with_locked_version<F>(db_path: &str, script: F) -> Result<usize, Error>
where
F: Fn(&Version) -> Result<usize, Error>,
{
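// Read the previously recorded version from `ver.lock` (defaulting to 0.9.0 when
// the file is missing or unparsable), run the upgrade script against it, then
// record the current crate version in the lock file.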
let mut path = PathBuf::from(db_path);
create_dir_all(&path).map_err(Error::CannotCreateConfigPath)?;
path.push("ver.lock");
let version = File::open(&path)
.ok()
.and_then(|ref mut file| {
let mut version_string = String::new();
file.read_to_string(&mut version_string)
.ok()
.and_then(|_| Version::parse(&version_string).ok())
})
.unwrap_or(Version::new(0, 9, 0));
let mut lock = File::create(&path).map_err(Error::CannotWriteVersionFile)?;
let result = script(&version);
let written_version = Version::parse(CURRENT_VERSION)?;
lock.write_all(written_version.to_string().as_bytes())
.map_err(Error::CannotUpdateVersionFile)?;
result
}
pub fn upgrade(db_path: &str) -> Result<usize, Error> {
with_locked_version(db_path, |ver| upgrade_from_version(ver))
}
fn file_exists(path: &Path) -> bool {
match fs::metadata(&path) {
Err(ref e) if e.kind() == io::ErrorKind::NotFound => false,
_ => true,
}
}
#[cfg(any(test, feature = "accounts"))]
pub fn upgrade_key_location(from: &PathBuf, to: &PathBuf) {
match fs::create_dir_all(&to).and_then(|()| fs::read_dir(from)) {
Ok(entries) => {
let files: Vec<_> = entries
.filter_map(|f| {
f.ok().and_then(|f| {
if f.file_type().ok().map_or(false, |f| f.is_file()) {
f.file_name().to_str().map(|s| s.to_owned())
} else {
None
}
})
})
.collect();
let mut num: usize = 0;
for name in files {
let mut from = from.clone();
from.push(&name);
let mut to = to.clone();
to.push(&name);
if !file_exists(&to) {
if let Err(e) = fs::rename(&from, &to) {
debug!("Error upgrading key {:?}: {:?}", from, e);
} else {
num += 1;
}
} else {
debug!("Skipped upgrading key {:?}", from);
}
}
if num > 0 {
info!(
"Moved {} keys from {} to {}",
num,
from.to_string_lossy(),
to.to_string_lossy()
);
}
}
Err(e) => {
debug!("Error moving keys from {:?} to {:?}: {:?}", from, to, e);
}
}
}
fn upgrade_dir_location(source: &PathBuf, dest: &PathBuf) {
if file_exists(&source) {
if !file_exists(&dest) {
let mut parent = dest.clone();
parent.pop();
if let Err(e) = fs::create_dir_all(&parent).and_then(|()| fs::rename(&source, &dest)) {
debug!("Skipped path {:?} -> {:?} :{:?}", source, dest, e);
} else {
info!(
"Moved {} to {}",
source.to_string_lossy(),
dest.to_string_lossy()
);
}
} else {
debug!(
"Skipped upgrading directory {:?}, Destination already exists at {:?}",
source, dest
);
}
}
}
fn upgrade_user_defaults(dirs: &DatabaseDirectories) {
let source = dirs.legacy_user_defaults_path();
let dest = dirs.user_defaults_path();
if file_exists(&source) {
if !file_exists(&dest) {
if let Err(e) = fs::rename(&source, &dest) {
debug!("Skipped upgrading user defaults {:?}:{:?}", dest, e);
}
} else {
debug!(
"Skipped upgrading user defaults {:?}, File exists at {:?}",
source, dest
);
}
}
}
pub fn upgrade_data_paths(base_path: &str, dirs: &DatabaseDirectories, pruning: Algorithm) {
if home_dir().is_none() {
return;
}
let legacy_root_path = replace_home("", "$HOME/.parity");
let default_path = default_data_path();
if legacy_root_path != base_path && base_path == default_path {
upgrade_dir_location(&PathBuf::from(legacy_root_path), &PathBuf::from(&base_path));
}
upgrade_dir_location(&dirs.legacy_version_path(pruning), &dirs.db_path(pruning));
upgrade_dir_location(&dirs.legacy_snapshot_path(), &dirs.snapshot_path());
upgrade_dir_location(&dirs.legacy_network_path(), &dirs.network_path());
upgrade_user_defaults(&dirs);
}


@ -1,188 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use ethcore::client::Mode as ClientMode;
use journaldb::Algorithm;
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use serde_json::{de::from_reader, ser::to_string};
use std::{fs::File, io::Write, path::Path, time::Duration};
#[derive(Clone)]
pub struct Seconds(Duration);
impl Seconds {
pub fn value(&self) -> u64 {
self.0.as_secs()
}
}
impl From<u64> for Seconds {
fn from(s: u64) -> Seconds {
Seconds(Duration::from_secs(s))
}
}
impl From<Duration> for Seconds {
fn from(d: Duration) -> Seconds {
Seconds(d)
}
}
impl Into<Duration> for Seconds {
fn into(self) -> Duration {
self.0
}
}
impl Serialize for Seconds {
fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
serializer.serialize_u64(self.value())
}
}
impl<'de> Deserialize<'de> for Seconds {
fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
let secs = u64::deserialize(deserializer)?;
Ok(Seconds::from(secs))
}
}
#[derive(Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase", tag = "mode")]
pub enum Mode {
Active,
Passive {
#[serde(rename = "mode.timeout")]
timeout: Seconds,
#[serde(rename = "mode.alarm")]
alarm: Seconds,
},
Dark {
#[serde(rename = "mode.timeout")]
timeout: Seconds,
},
Offline,
}
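// Illustration only (not part of the original file): with the internal `mode`
// tag above and the `#[serde(flatten)]` on `UserDefaults` below, a passive-mode
// defaults file serializes to roughly:
// {"is_first_launch":false,"pruning":"archive","tracing":false,"fat_db":false,
//  "mode":"passive","mode.timeout":300,"mode.alarm":3600}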
impl Into<ClientMode> for Mode {
fn into(self) -> ClientMode {
match self {
Mode::Active => ClientMode::Active,
Mode::Passive { timeout, alarm } => ClientMode::Passive(timeout.into(), alarm.into()),
Mode::Dark { timeout } => ClientMode::Dark(timeout.into()),
Mode::Offline => ClientMode::Off,
}
}
}
impl From<ClientMode> for Mode {
fn from(mode: ClientMode) -> Mode {
match mode {
ClientMode::Active => Mode::Active,
ClientMode::Passive(timeout, alarm) => Mode::Passive {
timeout: timeout.into(),
alarm: alarm.into(),
},
ClientMode::Dark(timeout) => Mode::Dark {
timeout: timeout.into(),
},
ClientMode::Off => Mode::Offline,
}
}
}
#[derive(Serialize, Deserialize)]
pub struct UserDefaults {
pub is_first_launch: bool,
#[serde(with = "algorithm_serde")]
pub pruning: Algorithm,
pub tracing: bool,
pub fat_db: bool,
#[serde(flatten)]
mode: Mode,
}
impl UserDefaults {
pub fn mode(&self) -> ClientMode {
self.mode.clone().into()
}
pub fn set_mode(&mut self, mode: ClientMode) {
self.mode = mode.into();
}
}
mod algorithm_serde {
use journaldb::Algorithm;
use serde::{de::Error, Deserialize, Deserializer, Serialize, Serializer};
pub fn serialize<S>(algorithm: &Algorithm, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
algorithm.as_str().serialize(serializer)
}
pub fn deserialize<'de, D>(deserializer: D) -> Result<Algorithm, D::Error>
where
D: Deserializer<'de>,
{
let pruning = String::deserialize(deserializer)?;
pruning
.parse()
.map_err(|_| Error::custom("invalid pruning method"))
}
}
impl Default for UserDefaults {
fn default() -> Self {
UserDefaults {
is_first_launch: true,
pruning: Algorithm::OverlayRecent,
tracing: false,
fat_db: false,
mode: Mode::Active,
}
}
}
impl UserDefaults {
pub fn load<P>(path: P) -> Result<Self, String>
where
P: AsRef<Path>,
{
match File::open(path) {
Ok(file) => match from_reader(file) {
Ok(defaults) => Ok(defaults),
Err(e) => {
warn!("Error loading user defaults file: {:?}", e);
Ok(UserDefaults::default())
}
},
_ => Ok(UserDefaults::default()),
}
}
pub fn save<P>(&self, path: P) -> Result<(), String>
where
P: AsRef<Path>,
{
let mut file: File =
File::create(path).map_err(|_| "Cannot create user defaults file".to_owned())?;
file.write_all(to_string(&self).unwrap().as_bytes())
.map_err(|_| "Failed to save user defaults".to_owned())
}
}

build.rs Normal file

@ -0,0 +1,35 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
extern crate rustc_version;
const MIN_RUSTC_VERSION: &'static str = "1.15.1";
fn main() {
let is = rustc_version::version().unwrap();
let required = MIN_RUSTC_VERSION.parse().unwrap();
assert!(is >= required, format!("
It looks like you are compiling Parity with an old rustc compiler {}.
Parity requires version {}. Please update your compiler.
If you use rustup, try this:
rustup update stable
and try building Parity again.
", is, required));
}

chainspec/Cargo.toml Normal file

@ -0,0 +1,9 @@
[package]
name = "chainspec"
version = "0.1.0"
authors = ["debris <marek.kotewicz@gmail.com>"]
[dependencies]
ethjson = { path = "../json" }
serde_json = "1.0"
serde_ignored = "0.0.4"

chainspec/src/main.rs Normal file

@ -0,0 +1,48 @@
extern crate serde_json;
extern crate serde_ignored;
extern crate ethjson;
use std::collections::BTreeSet;
use std::{fs, env, process};
use ethjson::spec::Spec;
fn quit(s: &str) -> ! {
println!("{}", s);
process::exit(1);
}
fn main() {
let mut args = env::args();
if args.len() != 2 {
quit("You need to specify chainspec.json\n\
\n\
./chainspec <chainspec.json>");
}
let path = args.nth(1).expect("args.len() == 2; qed");
let file = match fs::File::open(&path) {
Ok(file) => file,
Err(_) => quit(&format!("{} could not be opened", path)),
};
let mut unused = BTreeSet::new();
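// `serde_ignored::deserialize` wraps the deserializer and invokes the callback
// for every field that `Spec` does not recognize, so all unknown keys can be
// reported at once.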
let mut deserializer = serde_json::Deserializer::from_reader(file);
let spec: Result<Spec, _> = serde_ignored::deserialize(&mut deserializer, |field| {
unused.insert(field.to_string());
});
if let Err(err) = spec {
quit(&format!("{} {}", path, err.to_string()));
}
if !unused.is_empty() {
let err = unused.into_iter()
.map(|field| format!("{} unexpected field `{}`", path, field))
.collect::<Vec<_>>()
.join("\n");
quit(&err);
}
println!("{} is valid", path);
}


@ -1,23 +0,0 @@
[package]
description = "OpenEthereum Account Management"
homepage = "https://github.com/openethereum/openethereum"
license = "GPL-3.0"
name = "ethcore-accounts"
version = "0.1.0"
authors = ["Parity Technologies <admin@parity.io>"]
edition = "2018"
[dependencies]
common-types = { path = "../ethcore/types" }
ethkey = { path = "ethkey" }
ethstore = { path = "ethstore" }
log = "0.4"
parity-crypto = { version = "0.6.2", features = [ "publickey" ] }
parking_lot = "0.11.1"
serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
[dev-dependencies]
ethereum-types = "0.9.2"
tempdir = "0.3"


@ -1,21 +0,0 @@
[package]
description = "Parity Ethereum Keys Generator"
name = "ethkey"
version = "0.3.0"
authors = ["Parity Technologies <admin@parity.io>"]
[dependencies]
edit-distance = "2.0"
parity-crypto = { version = "0.6.2", features = ["publickey"] }
eth-secp256k1 = { git = "https://github.com/paritytech/rust-secp256k1", rev = "9791e79f21a5309dcb6e0bd254b1ef88fca2f1f4" }
ethereum-types = "0.9.2"
lazy_static = "1.0"
log = "0.4"
memzero = { path = "../../../crates/util/memzero" }
parity-wordlist = "1.3"
quick-error = "1.2.2"
rand = "0.7.3"
rustc-hex = "1.0"
serde = "1.0"
serde_derive = "1.0"
tiny-keccak = "1.4"


@ -1,220 +0,0 @@
## ethkey-cli
Parity Ethereum keys generator.
### Usage
```
Parity Ethereum Keys Generator.
Copyright 2015-2019 Parity Technologies (UK) Ltd.
Usage:
ethkey info <secret-or-phrase> [options]
ethkey generate random [options]
ethkey generate prefix <prefix> [options]
ethkey sign <secret> <message>
ethkey verify public <public> <signature> <message>
ethkey verify address <address> <signature> <message>
ethkey recover <address> <known-phrase>
ethkey [-h | --help]
Options:
-h, --help Display this message and exit.
-s, --secret Display only the secret key.
-p, --public Display only the public key.
-a, --address Display only the address.
-b, --brain Use parity brain wallet algorithm. Not recommended.
Commands:
info Display public key and address of the secret.
generate random Generates new random Ethereum key.
generate prefix Random generation, but address must start with a prefix ("vanity address").
sign Sign message using a secret key.
verify Verify signer of the signature by public key or address.
recover Try to find brain phrase matching given address from partial phrase.
```
### Examples
#### `info <secret>`
*Display info about private key.*
- `<secret>` - ethereum secret, 32 bytes long
```
ethkey info 17d08f5fe8c77af811caa0c9a187e668ce3b74a99acc3f6d976f075fa8e0be55
```
```
secret: 17d08f5fe8c77af811caa0c9a187e668ce3b74a99acc3f6d976f075fa8e0be55
public: 689268c0ff57a20cd299fa60d3fb374862aff565b20b5f1767906a99e6e09f3ff04ca2b2a5cd22f62941db103c0356df1a8ed20ce322cab2483db67685afd124
address: 26d1ec50b4e62c1d1a40d16e7cacc6a6580757d5
```
--
#### `info --brain <phrase>`
*Display info about a private key generated from a brain wallet recovery phrase.*
- `<phrase>` - Parity recovery phrase, 12 words
```
ethkey info --brain "this is sparta"
```
```
The recover phrase was not generated by Parity: The word 'this' does not come from the dictionary.
secret: aa22b54c0cb43ee30a014afe5ef3664b1cde299feabca46cd3167a85a57c39f2
public: c4c5398da6843632c123f543d714d2d2277716c11ff612b2a2f23c6bda4d6f0327c31cd58c55a9572c3cc141dade0c32747a13b7ef34c241b26c84adbb28fcf4
address: 006e27b6a72e1f34c626762f3c4761547aff1421
```
--
#### `generate random`
*Generate new keypair randomly.*
```
ethkey generate random
```
```
secret: 7d29fab185a33e2cd955812397354c472d2b84615b645aa135ff539f6b0d70d5
public: 35f222d88b80151857a2877826d940104887376a94c1cbd2c8c7c192eb701df88a18a4ecb8b05b1466c5b3706042027b5e079fe3a3683e66d822b0e047aa3418
address: a8fa5dd30a87bb9e3288d604eb74949c515ab66e
```
--
#### `generate random --brain`
*Randomly generate a new keypair together with a recovery phrase.*
```
ethkey generate random --brain
```
```
recovery phrase: thwarting scandal creamer nuzzle asparagus blast crouch trusting anytime elixir frenzied octagon
secret: 001ce488d50d2f7579dc190c4655f32918d505cee3de63bddc7101bc91c0c2f0
public: 4e19a5fdae82596e1485c69b687c9cc52b5078e5b0668ef3ce8543cd90e712cb00df822489bc1f1dcb3623538a54476c7b3def44e1a51dc174e86448b63f42d0
address: 00cf3711cbd3a1512570639280758118ba0b2bcb
```
--
#### `generate prefix <prefix>`
*Randomly generate a new keypair whose address starts with the given prefix.*
- `<prefix>` - desired address prefix, 0 - 32 bytes long.
```
ethkey generate prefix ff
```
```
secret: 2075b1d9c124ea673de7273758ed6de14802a9da8a73ceb74533d7c312ff6acd
public: 48dbce4508566a05509980a5dd1335599fcdac6f9858ba67018cecb9f09b8c4066dc4c18ae2722112fd4d9ac36d626793fffffb26071dfeb0c2300df994bd173
address: fff7e25dff2aa60f61f9d98130c8646a01f31649
```
--
#### `generate prefix --brain <prefix>`
*Randomly generate a new keypair with a recovery phrase, where the address starts with the given prefix.*
- `<prefix>` - desired address prefix, 0 - 32 bytes long.
```
ethkey generate prefix --brain 00cf
```
```
recovery phrase: thwarting scandal creamer nuzzle asparagus blast crouch trusting anytime elixir frenzied octagon
secret: 001ce488d50d2f7579dc190c4655f32918d505cee3de63bddc7101bc91c0c2f0
public: 4e19a5fdae82596e1485c69b687c9cc52b5078e5b0668ef3ce8543cd90e712cb00df822489bc1f1dcb3623538a54476c7b3def44e1a51dc174e86448b63f42d0
address: 00cf3711cbd3a1512570639280758118ba0b2bcb
```
--
#### `sign <secret> <message>`
*Sign a message with a secret.*
- `<secret>` - ethereum secret, 32 bytes long
- `<message>` - message to sign, 32 bytes long
```
ethkey sign 17d08f5fe8c77af811caa0c9a187e668ce3b74a99acc3f6d976f075fa8e0be55 bd50b7370c3f96733b31744c6c45079e7ae6c8d299613246d28ebcef507ec987
```
```
c1878cf60417151c766a712653d26ef350c8c75393458b7a9be715f053215af63dfd3b02c2ae65a8677917a8efa3172acb71cb90196e42106953ea0363c5aaf200
```
--
#### `verify public <public> <signature> <message>`
*Verify the signature.*
- `<public>` - ethereum public, 64 bytes long
- `<signature>` - message signature, 65 bytes long
- `<message>` - message, 32 bytes long
```
ethkey verify public 689268c0ff57a20cd299fa60d3fb374862aff565b20b5f1767906a99e6e09f3ff04ca2b2a5cd22f62941db103c0356df1a8ed20ce322cab2483db67685afd124 c1878cf60417151c766a712653d26ef350c8c75393458b7a9be715f053215af63dfd3b02c2ae65a8677917a8efa3172acb71cb90196e42106953ea0363c5aaf200 bd50b7370c3f96733b31744c6c45079e7ae6c8d299613246d28ebcef507ec987
```
```
true
```
--
#### `verify address <address> <signature> <message>`
*Verify the signature.*
- `<address>` - ethereum address, 20 bytes long
- `<signature>` - message signature, 65 bytes long
- `<message>` - message, 32 bytes long
```
ethkey verify address 26d1ec50b4e62c1d1a40d16e7cacc6a6580757d5 c1878cf60417151c766a712653d26ef350c8c75393458b7a9be715f053215af63dfd3b02c2ae65a8677917a8efa3172acb71cb90196e42106953ea0363c5aaf200 bd50b7370c3f96733b31744c6c45079e7ae6c8d299613246d28ebcef507ec987
```
```
true
```
--
#### `recover <address> <known-phrase>`
*Try to recover an account given expected address and partial (too short or with invalid words) recovery phrase.*
- `<address>` - ethereum address, 20 bytes long
- `<known-phrase>` - known phrase, can be in a form of `thwarting * creamer`
```
RUST_LOG="info" ethkey recover "00cf3711cbd3a1512570639280758118ba0b2bcb" "thwarting scandal creamer nuzzle asparagus blast crouch trusting anytime elixir frenzied octag"
```
```
INFO:ethkey::brain_recover: Invalid word 'octag', looking for potential substitutions.
INFO:ethkey::brain_recover: Closest words: ["ocean", "octagon", "octane", "outage", "tag", "acting", "acts", "aorta", "cage", "chug"]
INFO:ethkey::brain_recover: Starting to test 7776 possible combinations.
thwarting scandal creamer nuzzle asparagus blast crouch trusting anytime elixir frenzied octagon
secret: 001ce488d50d2f7579dc190c4655f32918d505cee3de63bddc7101bc91c0c2f0
public: 4e19a5fdae82596e1485c69b687c9cc52b5078e5b0668ef3ce8543cd90e712cb00df822489bc1f1dcb3623538a54476c7b3def44e1a51dc174e86448b63f42d0
address: 00cf3711cbd3a1512570639280758118ba0b2bcb
```
## Parity Ethereum toolchain
_This project is a part of the Parity Ethereum toolchain._
- [evmbin](https://github.com/paritytech/parity-ethereum/blob/master/evmbin/) - EVM implementation for Parity Ethereum.
- [ethabi](https://github.com/paritytech/ethabi) - Parity Ethereum function calls encoding.
- [ethstore](https://github.com/paritytech/parity-ethereum/blob/master/accounts/ethstore) - Parity Ethereum key management.
- [ethkey](https://github.com/paritytech/parity-ethereum/blob/master/accounts/ethkey) - Parity Ethereum keys generator.


@ -1,69 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use parity_crypto::{
publickey::{KeyPair, Secret},
Keccak256,
};
use parity_wordlist;
/// Simple brainwallet.
pub struct Brain(String);
impl Brain {
pub fn new(s: String) -> Self {
Brain(s)
}
pub fn validate_phrase(phrase: &str, expected_words: usize) -> Result<(), ::WordlistError> {
parity_wordlist::validate_phrase(phrase, expected_words)
}
pub fn generate(&mut self) -> KeyPair {
let seed = self.0.clone();
let mut secret = seed.into_bytes().keccak256();
let mut i = 0;
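// Brain-wallet derivation: keccak256 the phrase, re-hash the digest at least
// 16384 more times, then keep re-hashing until the derived keypair's address
// starts with a zero byte.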
loop {
secret = secret.keccak256();
match i > 16384 {
false => i += 1,
true => {
if let Ok(pair) = Secret::import_key(&secret).and_then(KeyPair::from_secret) {
if pair.address()[0] == 0 {
trace!("Testing: {}, got: {:?}", self.0, pair.address());
return pair;
}
}
}
}
}
}
}
#[cfg(test)]
mod tests {
use Brain;
#[test]
fn test_brain() {
let words = "this is sparta!".to_owned();
let first_keypair = Brain::new(words.clone()).generate();
let second_keypair = Brain::new(words.clone()).generate();
assert_eq!(first_keypair.secret(), second_keypair.secret());
}
}


@ -1,69 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::Brain;
use parity_crypto::publickey::{Error, KeyPair};
use parity_wordlist as wordlist;
/// Tries to find brain-seed keypair with address starting with given prefix.
pub struct BrainPrefix {
prefix: Vec<u8>,
iterations: usize,
no_of_words: usize,
last_phrase: String,
}
impl BrainPrefix {
pub fn new(prefix: Vec<u8>, iterations: usize, no_of_words: usize) -> Self {
BrainPrefix {
prefix,
iterations,
no_of_words,
last_phrase: String::new(),
}
}
pub fn phrase(&self) -> &str {
&self.last_phrase
}
pub fn generate(&mut self) -> Result<KeyPair, Error> {
for _ in 0..self.iterations {
let phrase = wordlist::random_phrase(self.no_of_words);
let keypair = Brain::new(phrase.clone()).generate();
if keypair.address().as_ref().starts_with(&self.prefix) {
self.last_phrase = phrase;
return Ok(keypair);
}
}
Err(Error::Custom("Could not find keypair".into()))
}
}
#[cfg(test)]
mod tests {
use BrainPrefix;
#[test]
fn prefix_generator() {
let prefix = vec![0x00u8];
let keypair = BrainPrefix::new(prefix.clone(), usize::max_value(), 12)
.generate()
.unwrap();
assert!(keypair.address().as_bytes().starts_with(&prefix));
}
}


@ -1,177 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::collections::HashSet;
use edit_distance::edit_distance;
use parity_crypto::publickey::Address;
use parity_wordlist;
use super::Brain;
/// Tries to find a phrase for address, given the number
/// of expected words and a partial phrase.
///
/// Returns `None` if phrase couldn't be found.
pub fn brain_recover(
address: &Address,
known_phrase: &str,
expected_words: usize,
) -> Option<String> {
let it = PhrasesIterator::from_known_phrase(known_phrase, expected_words);
for phrase in it {
let keypair = Brain::new(phrase.clone()).generate();
trace!("Testing: {}, got: {:?}", phrase, keypair.address());
if &keypair.address() == address {
return Some(phrase);
}
}
None
}
fn generate_substitutions(word: &str) -> Vec<&'static str> {
let mut words = parity_wordlist::WORDS
.iter()
.cloned()
.map(|w| (edit_distance(w, word), w))
.collect::<Vec<_>>();
words.sort_by(|a, b| a.0.cmp(&b.0));
words.into_iter().map(|pair| pair.1).collect()
}
/// Iterator over possible phrases.
pub struct PhrasesIterator {
words: Vec<Vec<&'static str>>,
combinations: u64,
indexes: Vec<usize>,
has_next: bool,
}
impl PhrasesIterator {
pub fn from_known_phrase(known_phrase: &str, expected_words: usize) -> Self {
let known_words = parity_wordlist::WORDS
.iter()
.cloned()
.collect::<HashSet<_>>();
let mut words = known_phrase
.split(' ')
.map(|word| match known_words.get(word) {
None => {
info!(
"Invalid word '{}', looking for potential substitutions.",
word
);
let substitutions = generate_substitutions(word);
info!("Closest words: {:?}", &substitutions[..10]);
substitutions
}
Some(word) => vec![*word],
})
.collect::<Vec<_>>();
// add missing words
if words.len() < expected_words {
let to_add = expected_words - words.len();
info!("Number of words is insuficcient adding {} more.", to_add);
for _ in 0..to_add {
words.push(parity_wordlist::WORDS.iter().cloned().collect());
}
}
// start searching
PhrasesIterator::new(words)
}
pub fn new(words: Vec<Vec<&'static str>>) -> Self {
let combinations = words.iter().fold(1u64, |acc, x| acc * x.len() as u64);
let indexes = words.iter().map(|_| 0).collect();
info!("Starting to test {} possible combinations.", combinations);
PhrasesIterator {
words,
combinations,
indexes,
has_next: combinations > 0,
}
}
pub fn combinations(&self) -> u64 {
self.combinations
}
fn current(&self) -> String {
let mut s = self.words[0][self.indexes[0]].to_owned();
for i in 1..self.indexes.len() {
s.push(' ');
s.push_str(self.words[i][self.indexes[i]]);
}
s
}
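// Advance the per-word indexes like an odometer: increment the last position
// and carry into earlier positions when one wraps; returns false once every
// combination has been produced.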
fn next_index(&mut self) -> bool {
let mut pos = self.indexes.len();
while pos > 0 {
pos -= 1;
self.indexes[pos] += 1;
if self.indexes[pos] >= self.words[pos].len() {
self.indexes[pos] = 0;
} else {
return true;
}
}
false
}
}
impl Iterator for PhrasesIterator {
type Item = String;
fn next(&mut self) -> Option<String> {
if !self.has_next {
return None;
}
let phrase = self.current();
self.has_next = self.next_index();
Some(phrase)
}
}
#[cfg(test)]
mod tests {
use super::PhrasesIterator;
#[test]
fn should_generate_possible_combinations() {
let mut it =
PhrasesIterator::new(vec![vec!["1", "2", "3"], vec!["test"], vec!["a", "b", "c"]]);
assert_eq!(it.combinations(), 9);
assert_eq!(it.next(), Some("1 test a".to_owned()));
assert_eq!(it.next(), Some("1 test b".to_owned()));
assert_eq!(it.next(), Some("1 test c".to_owned()));
assert_eq!(it.next(), Some("2 test a".to_owned()));
assert_eq!(it.next(), Some("2 test b".to_owned()));
assert_eq!(it.next(), Some("2 test c".to_owned()));
assert_eq!(it.next(), Some("3 test a".to_owned()));
assert_eq!(it.next(), Some("3 test b".to_owned()));
assert_eq!(it.next(), Some("3 test c".to_owned()));
assert_eq!(it.next(), None);
}
}
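
A minimal usage sketch of the recovery API above (not part of the original file; the `ethkey` crate path and the example phrase are placeholders): `PhrasesIterator` enumerates candidate phrases with the rightmost word varying fastest, and `brain_recover` tests each candidate against the target address.

```rust
// Hedged sketch: crate/module paths (`ethkey::brain_recover`) are assumed from
// the `lib.rs` shown further below; the phrase and word count are placeholders.
use ethkey::{brain_recover, Brain};
use parity_crypto::publickey::Generator;

fn recover_example() {
    // Derive the address from a known-good phrase (placeholder phrase).
    let keypair = Brain::new("deposit head tool rare usage".to_string()).generate();
    let address = keypair.address();

    // Attempt recovery from a partial / misspelled phrase of the same length.
    if let Some(phrase) = brain_recover::brain_recover(&address, "deposit head tool rare usge", 5) {
        println!("recovered phrase: {}", phrase);
    }

    // The underlying iterator can also be driven directly.
    let candidates = brain_recover::PhrasesIterator::new(vec![vec!["usage", "usge"], vec!["tool"]]);
    for candidate in candidates.take(4) {
        println!("candidate: {}", candidate);
    }
}
```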


@ -1,88 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crypto::Error as CryptoError;
use std::{error, fmt};
#[derive(Debug)]
/// Crypto error
pub enum Error {
/// Invalid secret key
InvalidSecret,
/// Invalid public key
InvalidPublic,
/// Invalid address
InvalidAddress,
/// Invalid EC signature
InvalidSignature,
/// Invalid AES message
InvalidMessage,
/// IO Error
Io(::std::io::Error),
/// Custom
Custom(String),
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let msg = match *self {
Error::InvalidSecret => "Invalid secret".into(),
Error::InvalidPublic => "Invalid public".into(),
Error::InvalidAddress => "Invalid address".into(),
Error::InvalidSignature => "Invalid EC signature".into(),
Error::InvalidMessage => "Invalid AES message".into(),
Error::Io(ref err) => format!("I/O error: {}", err),
Error::Custom(ref s) => s.clone(),
};
f.write_fmt(format_args!("Crypto error ({})", msg))
}
}
impl error::Error for Error {
fn description(&self) -> &str {
// `description` must return a borrowed str; detailed formatting lives in `Display`.
"crypto error"
}
}
impl Into<String> for Error {
fn into(self) -> String {
format!("{}", self)
}
}
impl From<CryptoError> for Error {
fn from(e: CryptoError) -> Error {
Error::Custom(e.to_string())
}
}
impl From<::secp256k1::Error> for Error {
fn from(e: ::secp256k1::Error) -> Error {
match e {
::secp256k1::Error::InvalidMessage => Error::InvalidMessage,
::secp256k1::Error::InvalidPublicKey => Error::InvalidPublic,
::secp256k1::Error::InvalidSecretKey => Error::InvalidSecret,
_ => Error::InvalidSignature,
}
}
}
impl From<::std::io::Error> for Error {
fn from(err: ::std::io::Error) -> Error {
Error::Io(err)
}
}


@ -1,39 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
// #![warn(missing_docs)]
extern crate edit_distance;
extern crate parity_crypto;
extern crate parity_wordlist;
extern crate serde;
#[macro_use]
extern crate log;
#[macro_use]
extern crate serde_derive;
mod brain;
mod brain_prefix;
mod password;
mod prefix;
pub mod brain_recover;
pub use self::{
brain::Brain, brain_prefix::BrainPrefix, parity_wordlist::Error as WordlistError,
password::Password, prefix::Prefix,
};


@ -1,59 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use std::{fmt, ptr};
#[derive(Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct Password(String);
impl fmt::Debug for Password {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "Password(******)")
}
}
impl Password {
pub fn as_bytes(&self) -> &[u8] {
self.0.as_bytes()
}
pub fn as_str(&self) -> &str {
self.0.as_str()
}
}
// Custom drop impl to zero out memory.
impl Drop for Password {
fn drop(&mut self) {
unsafe {
for byte_ref in self.0.as_mut_vec() {
ptr::write_volatile(byte_ref, 0)
}
}
}
}
impl From<String> for Password {
fn from(s: String) -> Password {
Password(s)
}
}
impl<'a> From<&'a str> for Password {
fn from(s: &'a str) -> Password {
Password::from(String::from(s))
}
}
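
A small illustrative sketch of the `Password` wrapper above (assuming it is used via the `ethkey` crate, as re-exported in the `lib.rs` shown earlier): the `Debug` impl never prints the secret, and the custom `Drop` zeroes the backing buffer.

```rust
// Hedged sketch: the `ethkey::Password` path is assumed from the crate's lib.rs.
use ethkey::Password;

fn password_example() {
    let pwd: Password = "this is sparta".into();

    // Debug output is redacted, so passwords do not leak into logs.
    assert_eq!(format!("{:?}", pwd), "Password(******)");

    // Raw bytes are available where crypto code needs them; the buffer is
    // overwritten with zeroes when `pwd` goes out of scope.
    assert_eq!(pwd.as_bytes().len(), "this is sparta".len());
}
```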


@ -1,54 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use parity_crypto::publickey::{Error, Generator, KeyPair, Random};
/// Tries to find a keypair whose address starts with the given prefix.
pub struct Prefix {
prefix: Vec<u8>,
iterations: usize,
}
impl Prefix {
pub fn new(prefix: Vec<u8>, iterations: usize) -> Self {
Prefix { prefix, iterations }
}
pub fn generate(&mut self) -> Result<KeyPair, Error> {
for _ in 0..self.iterations {
let keypair = Random.generate();
if keypair.address().as_ref().starts_with(&self.prefix) {
return Ok(keypair);
}
}
Err(Error::Custom("Could not find keypair".into()))
}
}
#[cfg(test)]
mod tests {
use Prefix;
#[test]
fn prefix_generator() {
let prefix = vec![0xffu8];
let keypair = Prefix::new(prefix.clone(), usize::max_value())
.generate()
.unwrap();
assert!(keypair.address().as_bytes().starts_with(&prefix));
}
}
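
The vanity-prefix search above is probabilistic, so bounding `iterations` and handling the error path matters in practice; a short sketch (crate path assumed):

```rust
// Hedged sketch: the `ethkey::Prefix` path is assumed from the crate's lib.rs exports.
use ethkey::Prefix;

fn prefix_example() {
    // Roughly 1 in 256 addresses starts with 0x42, so a budget of a few
    // thousand iterations almost always succeeds; a tight bound may return Err.
    match Prefix::new(vec![0x42], 100_000).generate() {
        Ok(keypair) => println!("found address: {:?}", keypair.address()),
        Err(err) => println!("no matching keypair within budget: {}", err),
    }
}
```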


@ -1,29 +0,0 @@
[package]
description = "Parity Ethereum Key Management"
name = "ethstore"
version = "0.2.1"
authors = ["Parity Technologies <admin@parity.io>"]
[dependencies]
log = "0.4"
libc = "0.2"
rand = "0.7.3"
ethkey = { path = "../ethkey" }
serde = "1.0"
serde_json = "1.0"
serde_derive = "1.0"
rustc-hex = "1.0"
time = "0.1.34"
itertools = "0.5"
parking_lot = "0.11.1"
parity-crypto = { version = "0.6.2", features = [ "publickey"] }
ethereum-types = "0.9.2"
smallvec = "0.6"
parity-wordlist = "1.3"
tempdir = "0.3"
lazy_static = "1.2.0"
[dev-dependencies]
matches = "0.1"
[lib]


@ -1,339 +0,0 @@
## ethstore-cli
Parity Ethereum key management.
### Usage
```
Parity Ethereum key management tool.
Copyright 2015-2019 Parity Technologies (UK) Ltd.
Usage:
ethstore insert <secret> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore change-pwd <address> <old-pwd> <new-pwd> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore list [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore import [--src DIR] [--dir DIR]
ethstore import-wallet <path> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore find-wallet-pass <path> <password>
ethstore remove <address> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore sign <address> <password> <message> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore public <address> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore list-vaults [--dir DIR]
ethstore create-vault <vault> <password> [--dir DIR]
ethstore change-vault-pwd <vault> <old-pwd> <new-pwd> [--dir DIR]
ethstore move-to-vault <address> <vault> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]
ethstore move-from-vault <address> <vault> <password> [--dir DIR]
ethstore [-h | --help]
Options:
-h, --help Display this message and exit.
--dir DIR Specify the secret store directory. It may be either
parity, parity-(chain), geth, geth-test
or a path [default: parity].
--vault VAULT Specify vault to use in this operation.
--vault-pwd VAULTPWD Specify vault password to use in this operation. Please note
that this option is required when vault option is set.
Otherwise it is ignored.
--src DIR Specify import source. It may be either
parity, parity-(chain), geth, geth-test
or a path [default: geth].
Commands:
insert Save account with password.
change-pwd Change password.
list List accounts.
import Import accounts from src.
import-wallet Import presale wallet.
find-wallet-pass Tries to open a wallet with list of passwords given.
remove Remove account.
sign Sign message.
public Displays public key for an address.
list-vaults List vaults.
create-vault Create new vault.
change-vault-pwd Change vault password.
move-to-vault Move account to vault from another vault/root directory.
move-from-vault Move account to root directory from given vault.
```
### Examples
#### `insert <secret> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]`
*Encrypt secret with a password and save it in secret store.*
- `<secret>` - ethereum secret, 32 bytes long
- `<password>` - account password, file path
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
- `[--vault VAULT]` - vault to use in this operation
- `[--vault-pwd VAULTPWD]` - vault password to use in this operation, file path
```
ethstore insert 7d29fab185a33e2cd955812397354c472d2b84615b645aa135ff539f6b0d70d5 password.txt
```
```
a8fa5dd30a87bb9e3288d604eb74949c515ab66e
```
--
```
ethstore insert `ethkey generate random -s` "this is sparta"
```
```
24edfff680d536a5f6fe862d36df6f8f6f40f115
```
--
#### `change-pwd <address> <old-pwd> <new-pwd> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]`
*Change account password.*
- `<address>` - ethereum address, 20 bytes long
- `<old-pwd>` - old account password, file path
- `<new-pwd>` - new account password, file path
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
- `[--vault VAULT]` - vault to use in this operation
- `[--vault-pwd VAULTPWD]` - vault password to use in this operation, file path
```
ethstore change-pwd a8fa5dd30a87bb9e3288d604eb74949c515ab66e old_pwd.txt new_pwd.txt
```
```
true
```
--
#### `list [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]`
*List secret store accounts.*
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
- `[--vault VAULT]` - vault to use in this operation
- `[--vault-pwd VAULTPWD]` - vault password to use in this operation, file path
```
ethstore list
```
```
0: 24edfff680d536a5f6fe862d36df6f8f6f40f115
1: 6edddfc6349aff20bc6467ccf276c5b52487f7a8
2: e6a3d25a7cb7cd21cb720df5b5e8afd154af1bbb
```
--
#### `import [--src DIR] [--dir DIR]`
*Import accounts from src.*
- `[--src DIR]` - import source directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: geth
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
```
ethstore import
```
```
0: e6a3d25a7cb7cd21cb720df5b5e8afd154af1bbb
1: 6edddfc6349aff20bc6467ccf276c5b52487f7a8
```
--
#### `import-wallet <path> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]`
*Import account from presale wallet.*
- `<path>` - presale wallet path
- `<password>` - account password, file path
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
- `[--vault VAULT]` - vault to use in this operation
- `[--vault-pwd VAULTPWD]` - vault password to use in this operation, file path
```
ethstore import-wallet ethwallet.json password.txt
```
```
e6a3d25a7cb7cd21cb720df5b5e8afd154af1bbb
```
--
#### `find-wallet-pass <path> <password>`
*Try to open a presale wallet given a list of passwords from a file.*
The list of passwords can be generated using e.g. [Phildo/brutedist](https://github.com/Phildo/brutedist).
- `<path>` - presale wallet path
- `<password>` - possible passwords, file path
```
ethstore find-wallet-pass ethwallet.json passwords.txt
```
```
Found password: test
```
--
#### `remove <address> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]`
*Remove account from secret store.*
- `<address>` - ethereum address, 20 bytes long
- `<password>` - account password, file path
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
- `[--vault VAULT]` - vault to use in this operation
- `[--vault-pwd VAULTPWD]` - vault password to use in this operation, file path
```
ethstore remove a8fa5dd30a87bb9e3288d604eb74949c515ab66e password.txt
```
```
true
```
--
#### `sign <address> <password> <message> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]`
*Sign message with account's secret.*
- `<address>` - ethereum address, 20 bytes long
- `<password>` - account password, file path
- `<message>` - message to sign, 32 bytes long
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
- `[--vault VAULT]` - vault to use in this operation
- `[--vault-pwd VAULTPWD]` - vault password to use in this operation, file path
```
ethstore sign 24edfff680d536a5f6fe862d36df6f8f6f40f115 password.txt 7d29fab185a33e2cd955812397354c472d2b84615b645aa135ff539f6b0d70d5
```
```
c6649f9555232d90ff716d7e552a744c5af771574425a74860e12f763479eb1b708c1f3a7dc0a0a7f7a81e0a0ca88c6deacf469222bb3d9c5bf0847f98bae54901
```
--
#### `public <address> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]`
*Displays public key for an address.*
- `<address>` - ethereum address, 20 bytes long
- `<password>` - account password, file path
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
- `[--vault VAULT]` - vault to use in this operation
- `[--vault-pwd VAULTPWD]` - vault password to use in this operation, file path
```
ethstore public 00e63fdb87ceb815ec96ae185b8f7381a0b4a5ea account_password.txt --vault vault_name --vault-pwd vault_password.txt
```
```
0x84161d8c05a996a534efbec50f24485cfcc07458efaef749a1b22156d7836c903eeb39bf2df74676e702eacc4cfdde069e5fd86692b5ef6ef81ba906e9e77d82
```
--
#### `list-vaults [--dir DIR]`
*List vaults.*
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
```
ethstore list-vaults
```
```
vault1
vault2
vault3
```
--
#### `create-vault <vault> <password> [--dir DIR]`
*Create new vault.*
- `<vault>` - name of new vault. This can only contain letters, digits, whitespaces, dashes and underscores
- `<password>` - vault password, file path
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
```
ethstore create-vault vault3 vault3_password.txt
```
```
OK
```
--
#### `change-vault-pwd <vault> <old-pwd> <new-pwd> [--dir DIR]`
*Change vault password.*
- `<vault>` - name of existing vault
- `<old-pwd>` - old vault password, file path
- `<new-pwd>` - new vault password, file path
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
```
ethstore change-vault-pwd vault3 vault3_password.txt new_vault3_password.txt
```
```
OK
```
--
#### `move-to-vault <address> <vault> <password> [--dir DIR] [--vault VAULT] [--vault-pwd VAULTPWD]`
*Move account to vault from another vault/root directory.*
- `<address>` - ethereum address, 20 bytes long
- `<vault>` - name of existing vault to move account to
- `<password>` - password of existing `<vault>` to move account to, file path
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
- `[--vault VAULT]` - current vault of the `<address>` argument, if set
- `[--vault-pwd VAULTPWD]` - password for the current vault of the `<address>` argument, if any. file path
```
ethstore move-to-vault 00e63fdb87ceb815ec96ae185b8f7381a0b4a5ea vault3 vault3_password.txt
ethstore move-to-vault 00e63fdb87ceb815ec96ae185b8f7381a0b4a5ea vault1 vault1_password.txt --vault vault3 --vault-pwd vault3_password.txt
```
```
OK
OK
```
--
#### `move-from-vault <address> <vault> <password> [--dir DIR]`
*Move account to root directory from given vault.*
- `<address>` - ethereum address, 20 bytes long
- `<vault>` - name of existing vault to move account to
- `<password>` - password of existing `<vault>` to move account to, file path
- `[--dir DIR]` - secret store directory; it may be either parity, parity-test, geth, geth-test, or a path. Default: parity
```
ethstore move-from-vault 00e63fdb87ceb815ec96ae185b8f7381a0b4a5ea vault1 vault1_password.txt
```
```
OK
```
## Parity Ethereum toolchain
_This project is a part of the Parity Ethereum toolchain._
- [evmbin](https://github.com/paritytech/parity-ethereum/blob/master/evmbin/) - EVM implementation for Parity Ethereum.
- [ethabi](https://github.com/paritytech/ethabi) - Parity Ethereum function calls encoding.
- [ethstore](https://github.com/paritytech/parity-ethereum/blob/master/accounts/ethstore) - Parity Ethereum key management.
- [ethkey](https://github.com/paritytech/parity-ethereum/blob/master/accounts/ethkey) - Parity Ethereum keys generator.


@ -1,57 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use json;
#[derive(Debug, PartialEq, Clone)]
pub struct Aes128Ctr {
pub iv: [u8; 16],
}
#[derive(Debug, PartialEq, Clone)]
pub enum Cipher {
Aes128Ctr(Aes128Ctr),
}
impl From<json::Aes128Ctr> for Aes128Ctr {
fn from(json: json::Aes128Ctr) -> Self {
Aes128Ctr { iv: json.iv.into() }
}
}
impl Into<json::Aes128Ctr> for Aes128Ctr {
fn into(self) -> json::Aes128Ctr {
json::Aes128Ctr {
iv: From::from(self.iv),
}
}
}
impl From<json::Cipher> for Cipher {
fn from(json: json::Cipher) -> Self {
match json {
json::Cipher::Aes128Ctr(params) => Cipher::Aes128Ctr(From::from(params)),
}
}
}
impl Into<json::Cipher> for Cipher {
fn into(self) -> json::Cipher {
match self {
Cipher::Aes128Ctr(params) => json::Cipher::Aes128Ctr(params.into()),
}
}
}


@ -1,234 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use account::{Aes128Ctr, Cipher, Kdf, Pbkdf2, Prf};
use crypto::{self, publickey::Secret, Keccak256};
use ethkey::Password;
use json;
use random::Random;
use smallvec::SmallVec;
use std::{num::NonZeroU32, str};
use Error;
/// Encrypted data
#[derive(Debug, PartialEq, Clone)]
pub struct Crypto {
/// Encryption parameters
pub cipher: Cipher,
/// Encrypted data buffer
pub ciphertext: Vec<u8>,
/// Key derivation function parameters
pub kdf: Kdf,
/// Message authentication code
pub mac: [u8; 32],
}
impl From<json::Crypto> for Crypto {
fn from(json: json::Crypto) -> Self {
Crypto {
cipher: json.cipher.into(),
ciphertext: json.ciphertext.into(),
kdf: json.kdf.into(),
mac: json.mac.into(),
}
}
}
impl From<Crypto> for json::Crypto {
fn from(c: Crypto) -> Self {
json::Crypto {
cipher: c.cipher.into(),
ciphertext: c.ciphertext.into(),
kdf: c.kdf.into(),
mac: c.mac.into(),
}
}
}
impl str::FromStr for Crypto {
type Err = <json::Crypto as str::FromStr>::Err;
fn from_str(s: &str) -> Result<Self, Self::Err> {
s.parse::<json::Crypto>().map(Into::into)
}
}
impl From<Crypto> for String {
fn from(c: Crypto) -> Self {
json::Crypto::from(c).into()
}
}
impl Crypto {
/// Encrypt account secret
pub fn with_secret(
secret: &Secret,
password: &Password,
iterations: NonZeroU32,
) -> Result<Self, crypto::Error> {
Crypto::with_plain(secret.as_bytes(), password, iterations)
}
/// Encrypt custom plain data
pub fn with_plain(
plain: &[u8],
password: &Password,
iterations: NonZeroU32,
) -> Result<Self, crypto::Error> {
let salt: [u8; 32] = Random::random();
let iv: [u8; 16] = Random::random();
// two parts of derived key
// DK = [ DK[0..15] DK[16..31] ] = [derived_left_bits, derived_right_bits]
let (derived_left_bits, derived_right_bits) =
crypto::derive_key_iterations(password.as_bytes(), &salt, iterations.get());
// preallocated (on-stack in the case of `Secret`) buffer to hold the ciphertext
// length = length(plain) since we are using a CTR approach
let plain_len = plain.len();
let mut ciphertext: SmallVec<[u8; 32]> = SmallVec::from_vec(vec![0; plain_len]);
// aes-128-ctr with initial vector of iv
crypto::aes::encrypt_128_ctr(&derived_left_bits, &iv, plain, &mut *ciphertext)?;
// KECCAK(DK[16..31] ++ <ciphertext>), where DK[16..31] - derived_right_bits
let mac = crypto::derive_mac(&derived_right_bits, &*ciphertext).keccak256();
Ok(Crypto {
cipher: Cipher::Aes128Ctr(Aes128Ctr { iv: iv }),
ciphertext: ciphertext.into_vec(),
kdf: Kdf::Pbkdf2(Pbkdf2 {
dklen: crypto::KEY_LENGTH as u32,
salt: salt.to_vec(),
c: iterations,
prf: Prf::HmacSha256,
}),
mac: mac,
})
}
/// Try to decrypt and convert result to account secret
pub fn secret(&self, password: &Password) -> Result<Secret, Error> {
if self.ciphertext.len() > 32 {
return Err(Error::InvalidSecret);
}
let secret = self.do_decrypt(password, 32)?;
Ok(Secret::import_key(&secret)?)
}
/// Try to decrypt and return result as is
pub fn decrypt(&self, password: &Password) -> Result<Vec<u8>, Error> {
let expected_len = self.ciphertext.len();
self.do_decrypt(password, expected_len)
}
fn do_decrypt(&self, password: &Password, expected_len: usize) -> Result<Vec<u8>, Error> {
let (derived_left_bits, derived_right_bits) = match self.kdf {
Kdf::Pbkdf2(ref params) => {
crypto::derive_key_iterations(password.as_bytes(), &params.salt, params.c.get())
}
Kdf::Scrypt(ref params) => crypto::scrypt::derive_key(
password.as_bytes(),
&params.salt,
params.n,
params.p,
params.r,
)?,
};
let mac = crypto::derive_mac(&derived_right_bits, &self.ciphertext).keccak256();
if !crypto::is_equal(&mac, &self.mac) {
return Err(Error::InvalidPassword);
}
let mut plain: SmallVec<[u8; 32]> = SmallVec::from_vec(vec![0; expected_len]);
match self.cipher {
Cipher::Aes128Ctr(ref params) => {
// checked by callers
debug_assert!(expected_len >= self.ciphertext.len());
let from = expected_len - self.ciphertext.len();
crypto::aes::decrypt_128_ctr(
&derived_left_bits,
&params.iv,
&self.ciphertext,
&mut plain[from..],
)?;
Ok(plain.into_iter().collect())
}
}
}
}
#[cfg(test)]
mod tests {
use super::{Crypto, Error, NonZeroU32};
use crypto::publickey::{Generator, Random};
lazy_static! {
static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(10240).expect("10240 > 0; qed");
}
#[test]
fn crypto_with_secret_create() {
let keypair = Random.generate();
let passwd = "this is sparta".into();
let crypto = Crypto::with_secret(keypair.secret(), &passwd, *ITERATIONS).unwrap();
let secret = crypto.secret(&passwd).unwrap();
assert_eq!(keypair.secret(), &secret);
}
#[test]
fn crypto_with_secret_invalid_password() {
let keypair = Random.generate();
let crypto =
Crypto::with_secret(keypair.secret(), &"this is sparta".into(), *ITERATIONS).unwrap();
assert_matches!(
crypto.secret(&"this is sparta!".into()),
Err(Error::InvalidPassword)
)
}
#[test]
fn crypto_with_null_plain_data() {
let original_data = b"";
let passwd = "this is sparta".into();
let crypto = Crypto::with_plain(&original_data[..], &passwd, *ITERATIONS).unwrap();
let decrypted_data = crypto.decrypt(&passwd).unwrap();
assert_eq!(original_data[..], *decrypted_data);
}
#[test]
fn crypto_with_tiny_plain_data() {
let original_data = b"{}";
let passwd = "this is sparta".into();
let crypto = Crypto::with_plain(&original_data[..], &passwd, *ITERATIONS).unwrap();
let decrypted_data = crypto.decrypt(&passwd).unwrap();
assert_eq!(original_data[..], *decrypted_data);
}
#[test]
fn crypto_with_huge_plain_data() {
let original_data: Vec<_> = (1..65536).map(|i| (i % 256) as u8).collect();
let passwd = "this is sparta".into();
let crypto = Crypto::with_plain(&original_data, &passwd, *ITERATIONS).unwrap();
let decrypted_data = crypto.decrypt(&passwd).unwrap();
assert_eq!(&original_data, &decrypted_data);
}
}
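
Beyond the encrypt/decrypt round trips covered by the tests above, `Crypto` also converts to and from its keystore JSON string form via the `From<Crypto> for String` and `FromStr` impls. A minimal sketch (assuming `Crypto` is reachable from the `ethstore` crate root):

```rust
// Hedged sketch: the `ethstore::Crypto` path is an assumption; the iteration
// count mirrors the tests above.
use ethstore::Crypto;
use std::num::NonZeroU32;

fn crypto_json_roundtrip() {
    let iterations = NonZeroU32::new(10240).expect("10240 > 0; qed");
    let passwd = "this is sparta".into();
    let crypto = Crypto::with_plain(b"some payload", &passwd, iterations).unwrap();

    // Serialize to the keystore JSON representation and parse it back.
    let json: String = crypto.clone().into();
    let restored: Crypto = json.parse().expect("serialized form parses back");
    assert_eq!(restored, crypto);

    // The restored object still decrypts with the original password.
    assert_eq!(restored.decrypt(&passwd).unwrap(), b"some payload".to_vec());
}
```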


@ -1,126 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use json;
use std::num::NonZeroU32;
#[derive(Debug, PartialEq, Clone)]
pub enum Prf {
HmacSha256,
}
#[derive(Debug, PartialEq, Clone)]
pub struct Pbkdf2 {
pub c: NonZeroU32,
pub dklen: u32,
pub prf: Prf,
pub salt: Vec<u8>,
}
#[derive(Debug, PartialEq, Clone)]
pub struct Scrypt {
pub dklen: u32,
pub p: u32,
pub n: u32,
pub r: u32,
pub salt: Vec<u8>,
}
#[derive(Debug, PartialEq, Clone)]
pub enum Kdf {
Pbkdf2(Pbkdf2),
Scrypt(Scrypt),
}
impl From<json::Prf> for Prf {
fn from(json: json::Prf) -> Self {
match json {
json::Prf::HmacSha256 => Prf::HmacSha256,
}
}
}
impl Into<json::Prf> for Prf {
fn into(self) -> json::Prf {
match self {
Prf::HmacSha256 => json::Prf::HmacSha256,
}
}
}
impl From<json::Pbkdf2> for Pbkdf2 {
fn from(json: json::Pbkdf2) -> Self {
Pbkdf2 {
c: json.c,
dklen: json.dklen,
prf: From::from(json.prf),
salt: json.salt.into(),
}
}
}
impl Into<json::Pbkdf2> for Pbkdf2 {
fn into(self) -> json::Pbkdf2 {
json::Pbkdf2 {
c: self.c,
dklen: self.dklen,
prf: self.prf.into(),
salt: From::from(self.salt),
}
}
}
impl From<json::Scrypt> for Scrypt {
fn from(json: json::Scrypt) -> Self {
Scrypt {
dklen: json.dklen,
p: json.p,
n: json.n,
r: json.r,
salt: json.salt.into(),
}
}
}
impl Into<json::Scrypt> for Scrypt {
fn into(self) -> json::Scrypt {
json::Scrypt {
dklen: self.dklen,
p: self.p,
n: self.n,
r: self.r,
salt: From::from(self.salt),
}
}
}
impl From<json::Kdf> for Kdf {
fn from(json: json::Kdf) -> Self {
match json {
json::Kdf::Pbkdf2(params) => Kdf::Pbkdf2(From::from(params)),
json::Kdf::Scrypt(params) => Kdf::Scrypt(From::from(params)),
}
}
}
impl Into<json::Kdf> for Kdf {
fn into(self) -> json::Kdf {
match self {
Kdf::Pbkdf2(params) => json::Kdf::Pbkdf2(params.into()),
Kdf::Scrypt(params) => json::Kdf::Scrypt(params.into()),
}
}
}


@ -1,29 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
mod cipher;
mod crypto;
mod kdf;
mod safe_account;
mod version;
pub use self::{
cipher::{Aes128Ctr, Cipher},
crypto::Crypto,
kdf::{Kdf, Pbkdf2, Prf, Scrypt},
safe_account::SafeAccount,
version::Version,
};


@ -1,287 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::crypto::Crypto;
use account::Version;
use crypto::{
self,
publickey::{ecdh::agree, sign, Address, KeyPair, Message, Public, Secret, Signature},
};
use ethkey::Password;
use json;
use std::num::NonZeroU32;
use Error;
/// Account representation.
#[derive(Debug, PartialEq, Clone)]
pub struct SafeAccount {
/// Account ID
pub id: [u8; 16],
/// Account version
pub version: Version,
/// Account address
pub address: Address,
/// Account private key derivation definition.
pub crypto: Crypto,
/// Account filename
pub filename: Option<String>,
/// Account name
pub name: String,
/// Account metadata
pub meta: String,
}
impl Into<json::KeyFile> for SafeAccount {
fn into(self) -> json::KeyFile {
json::KeyFile {
id: From::from(self.id),
version: self.version.into(),
address: Some(self.address.into()),
crypto: self.crypto.into(),
name: Some(self.name.into()),
meta: Some(self.meta.into()),
}
}
}
impl SafeAccount {
/// Create a new account
pub fn create(
keypair: &KeyPair,
id: [u8; 16],
password: &Password,
iterations: NonZeroU32,
name: String,
meta: String,
) -> Result<Self, crypto::Error> {
Ok(SafeAccount {
id: id,
version: Version::V3,
crypto: Crypto::with_secret(keypair.secret(), password, iterations)?,
address: keypair.address(),
filename: None,
name: name,
meta: meta,
})
}
/// Create a new `SafeAccount` from the given `json`; if it was read from a
/// file, the `filename` should be `Some` name. If it is as yet anonymous, then it
/// can be left `None`.
/// In case `password` is provided, we will attempt to read the secret from the keyfile
/// and derive the address from it instead of reading it directly.
/// Providing a password is required for `json::KeyFile`s with no address.
pub fn from_file(
json: json::KeyFile,
filename: Option<String>,
password: &Option<Password>,
) -> Result<Self, Error> {
let crypto = Crypto::from(json.crypto);
let address = match (password, &json.address) {
(None, Some(json_address)) => json_address.into(),
(None, None) => Err(Error::Custom(
"This keystore does not contain address. You need to provide password to import it"
.into(),
))?,
(Some(password), json_address) => {
let derived_address = KeyPair::from_secret(
crypto
.secret(&password)
.map_err(|_| Error::InvalidPassword)?,
)?
.address();
match json_address {
Some(json_address) => {
let json_address = json_address.into();
if derived_address != json_address {
warn!("Detected address mismatch when opening an account. Derived: {:?}, in json got: {:?}",
derived_address, json_address);
}
}
_ => {}
}
derived_address
}
};
Ok(SafeAccount {
id: json.id.into(),
version: json.version.into(),
address,
crypto,
filename,
name: json.name.unwrap_or(String::new()),
meta: json.meta.unwrap_or("{}".to_owned()),
})
}
/// Create a new `SafeAccount` from the given vault `json`; if it was read from a
/// file, the `filename` should be `Some` name. If it is as yet anonymous, then it
/// can be left `None`.
pub fn from_vault_file(
password: &Password,
json: json::VaultKeyFile,
filename: Option<String>,
) -> Result<Self, Error> {
let meta_crypto: Crypto = json.metacrypto.into();
let meta_plain = meta_crypto.decrypt(password)?;
let meta_plain =
json::VaultKeyMeta::load(&meta_plain).map_err(|e| Error::Custom(format!("{:?}", e)))?;
SafeAccount::from_file(
json::KeyFile {
id: json.id,
version: json.version,
crypto: json.crypto,
address: Some(meta_plain.address),
name: meta_plain.name,
meta: meta_plain.meta,
},
filename,
&None,
)
}
/// Create a new `VaultKeyFile` from the given `self`
pub fn into_vault_file(
self,
iterations: NonZeroU32,
password: &Password,
) -> Result<json::VaultKeyFile, Error> {
let meta_plain = json::VaultKeyMeta {
address: self.address.into(),
name: Some(self.name),
meta: Some(self.meta),
};
let meta_plain = meta_plain
.write()
.map_err(|e| Error::Custom(format!("{:?}", e)))?;
let meta_crypto = Crypto::with_plain(&meta_plain, password, iterations)?;
Ok(json::VaultKeyFile {
id: self.id.into(),
version: self.version.into(),
crypto: self.crypto.into(),
metacrypto: meta_crypto.into(),
})
}
/// Sign a message.
pub fn sign(&self, password: &Password, message: &Message) -> Result<Signature, Error> {
let secret = self.crypto.secret(password)?;
sign(&secret, message).map_err(From::from)
}
/// Decrypt a message.
pub fn decrypt(
&self,
password: &Password,
shared_mac: &[u8],
message: &[u8],
) -> Result<Vec<u8>, Error> {
let secret = self.crypto.secret(password)?;
crypto::publickey::ecies::decrypt(&secret, shared_mac, message).map_err(From::from)
}
/// Agree on shared key.
pub fn agree(&self, password: &Password, other: &Public) -> Result<Secret, Error> {
let secret = self.crypto.secret(password)?;
agree(&secret, other).map_err(From::from)
}
/// Derive public key.
pub fn public(&self, password: &Password) -> Result<Public, Error> {
let secret = self.crypto.secret(password)?;
Ok(KeyPair::from_secret(secret)?.public().clone())
}
/// Change account's password.
pub fn change_password(
&self,
old_password: &Password,
new_password: &Password,
iterations: NonZeroU32,
) -> Result<Self, Error> {
let secret = self.crypto.secret(old_password)?;
let result = SafeAccount {
id: self.id.clone(),
version: self.version.clone(),
crypto: Crypto::with_secret(&secret, new_password, iterations)?,
address: self.address.clone(),
filename: self.filename.clone(),
name: self.name.clone(),
meta: self.meta.clone(),
};
Ok(result)
}
/// Check if password matches the account.
pub fn check_password(&self, password: &Password) -> bool {
self.crypto.secret(password).is_ok()
}
}
#[cfg(test)]
mod tests {
use super::{NonZeroU32, SafeAccount};
use crypto::publickey::{verify_public, Generator, Random};
lazy_static! {
static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(10240).expect("10240 > 0; qed");
}
#[test]
fn sign_and_verify_public() {
let keypair = Random.generate();
let password = "hello world".into();
let message = [1u8; 32].into();
let account = SafeAccount::create(
&keypair,
[0u8; 16],
&password,
*ITERATIONS,
"Test".to_owned(),
"{}".to_owned(),
);
let signature = account.unwrap().sign(&password, &message).unwrap();
assert!(verify_public(keypair.public(), &signature, &message).unwrap());
}
#[test]
fn change_password() {
let keypair = Random.generate();
let first_password = "hello world".into();
let sec_password = "this is sparta".into();
let message = [1u8; 32].into();
let account = SafeAccount::create(
&keypair,
[0u8; 16],
&first_password,
*ITERATIONS,
"Test".to_owned(),
"{}".to_owned(),
)
.unwrap();
let new_account = account
.change_password(&first_password, &sec_password, *ITERATIONS)
.unwrap();
assert!(account.sign(&first_password, &message).is_ok());
assert!(account.sign(&sec_password, &message).is_err());
assert!(new_account.sign(&first_password, &message).is_err());
assert!(new_account.sign(&sec_password, &message).is_ok());
}
}
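
The tests above exercise signing and password changes but leave `check_password` and `public` undemonstrated; a brief sketch (same assumed `ethstore` paths as earlier):

```rust
// Hedged sketch: the `ethstore::SafeAccount` path is an assumption.
use ethstore::SafeAccount;
use parity_crypto::publickey::{Generator, Random};
use std::num::NonZeroU32;

fn safe_account_example() {
    let iterations = NonZeroU32::new(10240).expect("10240 > 0; qed");
    let keypair = Random.generate();
    let password = "hello world".into();

    let account = SafeAccount::create(
        &keypair,
        [0u8; 16],
        &password,
        iterations,
        "Test".to_owned(),
        "{}".to_owned(),
    )
    .unwrap();

    // Password check decrypts the stored secret and reports success/failure.
    assert!(account.check_password(&password));
    assert!(!account.check_password(&"wrong".into()));

    // The public key derived from the stored secret matches the keypair's.
    assert_eq!(&account.public(&password).unwrap(), keypair.public());
}
```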


@ -1,38 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use json;
#[derive(Debug, PartialEq, Clone)]
pub enum Version {
V3,
}
impl From<json::Version> for Version {
fn from(json: json::Version) -> Self {
match json {
json::Version::V3 => Version::V3,
}
}
}
impl Into<json::Version> for Version {
fn into(self) -> json::Version {
match self {
Version::V3 => json::Version::V3,
}
}
}


@ -1,608 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use super::{
vault::{VaultDiskDirectory, VAULT_FILE_NAME},
KeyDirectory, VaultKey, VaultKeyDirectory, VaultKeyDirectoryProvider,
};
use ethkey::Password;
use json::{self, Uuid};
use std::{
collections::HashMap,
fs, io,
io::Write,
path::{Path, PathBuf},
};
use time;
use Error;
use SafeAccount;
const IGNORED_FILES: &'static [&'static str] = &[
"thumbs.db",
"address_book.json",
"dapps_policy.json",
"dapps_accounts.json",
"dapps_history.json",
"vault.json",
];
/// Find a unique filename that does not yet exist, using a four-letter random suffix.
pub fn find_unique_filename_using_random_suffix(
parent_path: &Path,
original_filename: &str,
) -> io::Result<String> {
let mut path = parent_path.join(original_filename);
let mut deduped_filename = original_filename.to_string();
if path.exists() {
const MAX_RETRIES: usize = 500;
let mut retries = 0;
while path.exists() {
if retries >= MAX_RETRIES {
return Err(io::Error::new(
io::ErrorKind::Other,
"Exceeded maximum retries when deduplicating filename.",
));
}
let suffix = ::random::random_string(4);
deduped_filename = format!("{}-{}", original_filename, suffix);
path.set_file_name(&deduped_filename);
retries += 1;
}
}
Ok(deduped_filename)
}
/// Create a new file and restrict permissions to owner only. It errors if the file already exists.
#[cfg(unix)]
pub fn create_new_file_with_permissions_to_owner(file_path: &Path) -> io::Result<fs::File> {
use std::os::unix::fs::OpenOptionsExt;
fs::OpenOptions::new()
.write(true)
.create_new(true)
.mode((libc::S_IWUSR | libc::S_IRUSR) as u32)
.open(file_path)
}
/// Create a new file and restrict permissions to owner only. It errors if the file already exists.
#[cfg(not(unix))]
pub fn create_new_file_with_permissions_to_owner(file_path: &Path) -> io::Result<fs::File> {
fs::OpenOptions::new()
.write(true)
.create_new(true)
.open(file_path)
}
/// Create a new file and restrict permissions to owner only. It replaces the existing file if it already exists.
#[cfg(unix)]
pub fn replace_file_with_permissions_to_owner(file_path: &Path) -> io::Result<fs::File> {
use std::os::unix::fs::PermissionsExt;
let file = fs::File::create(file_path)?;
let mut permissions = file.metadata()?.permissions();
permissions.set_mode((libc::S_IWUSR | libc::S_IRUSR) as u32);
file.set_permissions(permissions)?;
Ok(file)
}
/// Create a new file and restrict permissions to owner only. It replaces the existing file if it already exists.
#[cfg(not(unix))]
pub fn replace_file_with_permissions_to_owner(file_path: &Path) -> io::Result<fs::File> {
fs::File::create(file_path)
}
/// Root keys directory implementation
pub type RootDiskDirectory = DiskDirectory<DiskKeyFileManager>;
/// Disk directory key file manager
pub trait KeyFileManager: Send + Sync {
/// Read `SafeAccount` from given key file stream
fn read<T>(&self, filename: Option<String>, reader: T) -> Result<SafeAccount, Error>
where
T: io::Read;
/// Write `SafeAccount` to given key file stream
fn write<T>(&self, account: SafeAccount, writer: &mut T) -> Result<(), Error>
where
T: io::Write;
}
/// Disk-based keys directory implementation
pub struct DiskDirectory<T>
where
T: KeyFileManager,
{
path: PathBuf,
key_manager: T,
}
/// Keys file manager for root keys directory
#[derive(Default)]
pub struct DiskKeyFileManager {
password: Option<Password>,
}
impl RootDiskDirectory {
pub fn create<P>(path: P) -> Result<Self, Error>
where
P: AsRef<Path>,
{
fs::create_dir_all(&path)?;
Ok(Self::at(path))
}
/// Allows reading keyfiles with the given password (needed for keyfiles without an address).
pub fn with_password(&self, password: Option<Password>) -> Self {
DiskDirectory::new(&self.path, DiskKeyFileManager { password })
}
pub fn at<P>(path: P) -> Self
where
P: AsRef<Path>,
{
DiskDirectory::new(path, DiskKeyFileManager::default())
}
}
impl<T> DiskDirectory<T>
where
T: KeyFileManager,
{
/// Create new disk directory instance
pub fn new<P>(path: P, key_manager: T) -> Self
where
P: AsRef<Path>,
{
DiskDirectory {
path: path.as_ref().to_path_buf(),
key_manager: key_manager,
}
}
fn files(&self) -> Result<Vec<PathBuf>, Error> {
Ok(fs::read_dir(&self.path)?
.flat_map(Result::ok)
.filter(|entry| {
let metadata = entry.metadata().ok();
let file_name = entry.file_name();
let name = file_name.to_string_lossy();
// filter directories
metadata.map_or(false, |m| !m.is_dir()) &&
// hidden files
!name.starts_with(".") &&
// other ignored files
!IGNORED_FILES.contains(&&*name)
})
.map(|entry| entry.path())
.collect::<Vec<PathBuf>>())
}
pub fn files_hash(&self) -> Result<u64, Error> {
use std::{collections::hash_map::DefaultHasher, hash::Hasher};
let mut hasher = DefaultHasher::new();
let files = self.files()?;
for file in files {
hasher.write(file.to_str().unwrap_or("").as_bytes())
}
Ok(hasher.finish())
}
fn last_modification_date(&self) -> Result<u64, Error> {
use std::time::{Duration, UNIX_EPOCH};
let duration = fs::metadata(&self.path)?
.modified()?
.duration_since(UNIX_EPOCH)
.unwrap_or(Duration::default());
let timestamp = duration.as_secs() ^ (duration.subsec_nanos() as u64);
Ok(timestamp)
}
/// all accounts found in keys directory
fn files_content(&self) -> Result<HashMap<PathBuf, SafeAccount>, Error> {
// this is not done with a single iterator chain because
// a rustc issue makes compilation take too much time
let paths = self.files()?;
Ok(paths
.into_iter()
.filter_map(|path| {
let filename = Some(
path.file_name()
.and_then(|n| n.to_str())
.expect("Keys have valid UTF8 names only.")
.to_owned(),
);
fs::File::open(path.clone())
.map_err(Into::into)
.and_then(|file| self.key_manager.read(filename, file))
.map_err(|err| {
warn!("Invalid key file: {:?} ({})", path, err);
err
})
.map(|account| (path, account))
.ok()
})
.collect())
}
/// Insert account with the given filename. If the filename duplicates any stored account's and `dedup` is set to
/// true, a random suffix is appended to the filename.
pub fn insert_with_filename(
&self,
account: SafeAccount,
mut filename: String,
dedup: bool,
) -> Result<SafeAccount, Error> {
if dedup {
filename = find_unique_filename_using_random_suffix(&self.path, &filename)?;
}
// path to keyfile
let keyfile_path = self.path.join(filename.as_str());
// update account filename
let original_account = account.clone();
let mut account = account;
account.filename = Some(filename);
{
// save the file
let mut file = if dedup {
create_new_file_with_permissions_to_owner(&keyfile_path)?
} else {
replace_file_with_permissions_to_owner(&keyfile_path)?
};
// write key content
self.key_manager
.write(original_account, &mut file)
.map_err(|e| Error::Custom(format!("{:?}", e)))?;
file.flush()?;
file.sync_all()?;
}
Ok(account)
}
/// Get key file manager reference
pub fn key_manager(&self) -> &T {
&self.key_manager
}
}
impl<T> KeyDirectory for DiskDirectory<T>
where
T: KeyFileManager,
{
fn load(&self) -> Result<Vec<SafeAccount>, Error> {
let accounts = self
.files_content()?
.into_iter()
.map(|(_, account)| account)
.collect();
Ok(accounts)
}
fn update(&self, account: SafeAccount) -> Result<SafeAccount, Error> {
// Disk store handles updates correctly iff filename is the same
let filename = account_filename(&account);
self.insert_with_filename(account, filename, false)
}
fn insert(&self, account: SafeAccount) -> Result<SafeAccount, Error> {
let filename = account_filename(&account);
self.insert_with_filename(account, filename, true)
}
fn remove(&self, account: &SafeAccount) -> Result<(), Error> {
// enumerate all entries in keystore
// and find entry with given address
let to_remove = self
.files_content()?
.into_iter()
.find(|&(_, ref acc)| acc.id == account.id && acc.address == account.address);
// remove it
match to_remove {
None => Err(Error::InvalidAccount),
Some((path, _)) => fs::remove_file(path).map_err(From::from),
}
}
fn path(&self) -> Option<&PathBuf> {
Some(&self.path)
}
fn as_vault_provider(&self) -> Option<&dyn VaultKeyDirectoryProvider> {
Some(self)
}
fn unique_repr(&self) -> Result<u64, Error> {
self.last_modification_date()
}
}
impl<T> VaultKeyDirectoryProvider for DiskDirectory<T>
where
T: KeyFileManager,
{
fn create(&self, name: &str, key: VaultKey) -> Result<Box<dyn VaultKeyDirectory>, Error> {
let vault_dir = VaultDiskDirectory::create(&self.path, name, key)?;
Ok(Box::new(vault_dir))
}
fn open(&self, name: &str, key: VaultKey) -> Result<Box<dyn VaultKeyDirectory>, Error> {
let vault_dir = VaultDiskDirectory::at(&self.path, name, key)?;
Ok(Box::new(vault_dir))
}
fn list_vaults(&self) -> Result<Vec<String>, Error> {
Ok(fs::read_dir(&self.path)?
.filter_map(|e| e.ok().map(|e| e.path()))
.filter_map(|path| {
let mut vault_file_path = path.clone();
vault_file_path.push(VAULT_FILE_NAME);
if vault_file_path.is_file() {
path.file_name()
.and_then(|f| f.to_str())
.map(|f| f.to_owned())
} else {
None
}
})
.collect())
}
fn vault_meta(&self, name: &str) -> Result<String, Error> {
VaultDiskDirectory::meta_at(&self.path, name)
}
}
impl KeyFileManager for DiskKeyFileManager {
fn read<T>(&self, filename: Option<String>, reader: T) -> Result<SafeAccount, Error>
where
T: io::Read,
{
let key_file =
json::KeyFile::load(reader).map_err(|e| Error::Custom(format!("{:?}", e)))?;
SafeAccount::from_file(key_file, filename, &self.password)
}
fn write<T>(&self, mut account: SafeAccount, writer: &mut T) -> Result<(), Error>
where
T: io::Write,
{
// when account is moved back to root directory from vault
// => remove vault field from meta
account.meta = json::remove_vault_name_from_json_meta(&account.meta)
.map_err(|err| Error::Custom(format!("{:?}", err)))?;
let key_file: json::KeyFile = account.into();
key_file
.write(writer)
.map_err(|e| Error::Custom(format!("{:?}", e)))
}
}
fn account_filename(account: &SafeAccount) -> String {
// build file path
account.filename.clone().unwrap_or_else(|| {
let timestamp = time::strftime("%Y-%m-%dT%H-%M-%S", &time::now_utc())
.expect("Time-format string is valid.");
format!("UTC--{}Z--{}", timestamp, Uuid::from(account.id))
})
}
#[cfg(test)]
mod test {
extern crate tempdir;
use self::tempdir::TempDir;
use super::{KeyDirectory, RootDiskDirectory, VaultKey};
use account::SafeAccount;
use crypto::publickey::{Generator, Random};
use std::{env, fs, num::NonZeroU32};
lazy_static! {
static ref ITERATIONS: NonZeroU32 = NonZeroU32::new(1024).expect("1024 > 0; qed");
}
#[test]
fn should_create_new_account() {
// given
let mut dir = env::temp_dir();
dir.push("ethstore_should_create_new_account");
let keypair = Random.generate();
let password = "hello world".into();
let directory = RootDiskDirectory::create(dir.clone()).unwrap();
// when
let account = SafeAccount::create(
&keypair,
[0u8; 16],
&password,
*ITERATIONS,
"Test".to_owned(),
"{}".to_owned(),
);
let res = directory.insert(account.unwrap());
// then
assert!(res.is_ok(), "Should save account successfully.");
assert!(
res.unwrap().filename.is_some(),
"Filename has been assigned."
);
// cleanup
let _ = fs::remove_dir_all(dir);
}
#[test]
fn should_handle_duplicate_filenames() {
// given
let mut dir = env::temp_dir();
dir.push("ethstore_should_handle_duplicate_filenames");
let keypair = Random.generate();
let password = "hello world".into();
let directory = RootDiskDirectory::create(dir.clone()).unwrap();
// when
let account = SafeAccount::create(
&keypair,
[0u8; 16],
&password,
*ITERATIONS,
"Test".to_owned(),
"{}".to_owned(),
)
.unwrap();
let filename = "test".to_string();
let dedup = true;
directory
.insert_with_filename(account.clone(), "foo".to_string(), dedup)
.unwrap();
let file1 = directory
.insert_with_filename(account.clone(), filename.clone(), dedup)
.unwrap()
.filename
.unwrap();
let file2 = directory
.insert_with_filename(account.clone(), filename.clone(), dedup)
.unwrap()
.filename
.unwrap();
let file3 = directory
.insert_with_filename(account.clone(), filename.clone(), dedup)
.unwrap()
.filename
.unwrap();
// then
// the first file should keep the original name
assert_eq!(file1, filename);
// the following duplicate files should have a suffix appended
assert!(file2 != file3);
assert_eq!(file2.len(), filename.len() + 5);
assert_eq!(file3.len(), filename.len() + 5);
// cleanup
let _ = fs::remove_dir_all(dir);
}
#[test]
fn should_manage_vaults() {
// given
let mut dir = env::temp_dir();
dir.push("should_create_new_vault");
let directory = RootDiskDirectory::create(dir.clone()).unwrap();
let vault_name = "vault";
let password = "password".into();
// then
assert!(directory.as_vault_provider().is_some());
// and when
let before_root_items_count = fs::read_dir(&dir).unwrap().count();
let vault = directory
.as_vault_provider()
.unwrap()
.create(vault_name, VaultKey::new(&password, *ITERATIONS));
// then
assert!(vault.is_ok());
let after_root_items_count = fs::read_dir(&dir).unwrap().count();
assert!(after_root_items_count > before_root_items_count);
// and when
let vault = directory
.as_vault_provider()
.unwrap()
.open(vault_name, VaultKey::new(&password, *ITERATIONS));
// then
assert!(vault.is_ok());
let after_root_items_count2 = fs::read_dir(&dir).unwrap().count();
assert!(after_root_items_count == after_root_items_count2);
// cleanup
let _ = fs::remove_dir_all(dir);
}
#[test]
fn should_list_vaults() {
// given
let temp_path = TempDir::new("").unwrap();
let directory = RootDiskDirectory::create(&temp_path).unwrap();
let vault_provider = directory.as_vault_provider().unwrap();
let iter = NonZeroU32::new(1).expect("1 > 0; qed");
vault_provider
.create("vault1", VaultKey::new(&"password1".into(), iter))
.unwrap();
vault_provider
.create("vault2", VaultKey::new(&"password2".into(), iter))
.unwrap();
// then
let vaults = vault_provider.list_vaults().unwrap();
assert_eq!(vaults.len(), 2);
assert!(vaults.iter().any(|v| &*v == "vault1"));
assert!(vaults.iter().any(|v| &*v == "vault2"));
}
#[test]
fn hash_of_files() {
let temp_path = TempDir::new("").unwrap();
let directory = RootDiskDirectory::create(&temp_path).unwrap();
let hash = directory
.files_hash()
.expect("Files hash should be calculated ok");
assert_eq!(hash, 15130871412783076140);
let keypair = Random.generate();
let password = "test pass".into();
let account = SafeAccount::create(
&keypair,
[0u8; 16],
&password,
*ITERATIONS,
"Test".to_owned(),
"{}".to_owned(),
);
directory
.insert(account.unwrap())
.expect("Account should be inserted ok");
let new_hash = directory
.files_hash()
.expect("New files hash should be calculated ok");
assert!(
new_hash != hash,
"hash of the file list should change once directory content changed"
);
}
}


@ -1,77 +0,0 @@
// Copyright 2015-2020 Parity Technologies (UK) Ltd.
// This file is part of OpenEthereum.
// OpenEthereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// OpenEthereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with OpenEthereum. If not, see <http://www.gnu.org/licenses/>.
use crypto::publickey::Address;
use itertools;
use parking_lot::RwLock;
use std::collections::HashMap;
use super::KeyDirectory;
use Error;
use SafeAccount;
/// Accounts in-memory storage.
#[derive(Default)]
pub struct MemoryDirectory {
accounts: RwLock<HashMap<Address, Vec<SafeAccount>>>,
}
impl KeyDirectory for MemoryDirectory {
fn load(&self) -> Result<Vec<SafeAccount>, Error> {
Ok(itertools::Itertools::flatten(self.accounts.read().values().cloned()).collect())
}
fn update(&self, account: SafeAccount) -> Result<SafeAccount, Error> {
let mut lock = self.accounts.write();
let accounts = lock.entry(account.address.clone()).or_insert_with(Vec::new);
// If the filename is the same we just need to replace the entry
accounts.retain(|acc| acc.filename != account.filename);
accounts.push(account.clone());
Ok(account)
}
fn insert(&self, account: SafeAccount) -> Result<SafeAccount, Error> {
let mut lock = self.accounts.write();
let accounts = lock.entry(account.address.clone()).or_insert_with(Vec::new);
accounts.push(account.clone());
Ok(account)
}
fn remove(&self, account: &SafeAccount) -> Result<(), Error> {
let mut accounts = self.accounts.write();
let is_empty = if let Some(accounts) = accounts.get_mut(&account.address) {
if let Some(position) = accounts.iter().position(|acc| acc == account) {
accounts.remove(position);
}
accounts.is_empty()
} else {
false
};
if is_empty {
accounts.remove(&account.address);
}
Ok(())
}
fn unique_repr(&self) -> Result<u64, Error> {
let mut val = 0u64;
let accounts = self.accounts.read();
for acc in accounts.keys() {
val = val ^ acc.to_low_u64_be()
}
Ok(val)
}
}
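
A minimal sketch of the `KeyDirectory` trait in action with the in-memory backend above (crate paths such as `ethstore::accounts_dir` and the re-export of `SafeAccount` are assumptions):

```rust
// Hedged sketch: module paths are assumed; the account setup mirrors the disk
// directory tests shown earlier.
use ethstore::accounts_dir::{KeyDirectory, MemoryDirectory};
use ethstore::SafeAccount;
use parity_crypto::publickey::{Generator, Random};
use std::num::NonZeroU32;

fn memory_directory_example() {
    let dir = MemoryDirectory::default();
    let keypair = Random.generate();
    let password = "hello world".into();
    let iterations = NonZeroU32::new(1024).expect("1024 > 0; qed");

    let account = SafeAccount::create(
        &keypair,
        [0u8; 16],
        &password,
        iterations,
        "Test".to_owned(),
        "{}".to_owned(),
    )
    .unwrap();

    // Accounts are grouped by address; `load` flattens them back out.
    let stored = dir.insert(account).unwrap();
    assert_eq!(dir.load().unwrap().len(), 1);

    // Removal matches on the full account and drops empty address buckets.
    dir.remove(&stored).unwrap();
    assert!(dir.load().unwrap().is_empty());
}
```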

Some files were not shown because too many files have changed in this diff.