Compare commits

...

16 Commits

Author SHA1 Message Date
Tomasz Drwięga
86f6cea29d [beta] Backports (#8011)
* Hardware-wallet/usb-subscribe-refactor (#7860)

* Hardware-wallet fix

* More fine-grained initialization of callbacks by vendorID, productID and USB class
* Each device manufacturer gets a separate handler thread
* Replaced "dummy for loop" with a delay to wait for the device to boot up properly
* Haven't been very careful about checking dependency cycles, etc.
* Inline comments explaining where shortcuts have been taken
* Need to test this on Windows machine and with Ledger (both models)

Signed-off-by: niklasad1 <niklasadolfsson1@gmail.com>

* Validate product_id of detected ledger devices

* closed_device => unlocked_device

* address comments

* add target in debug

* Address feedback

* Remove thread joining in HardwareWalletManager
* Remove thread handlers in HardwareWalletManager because this makes them unused
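
A minimal sketch of the per-manufacturer listener described above. The vendor ID, delay length, and function names here are illustrative assumptions, not the hardware-wallet crate's actual API:

```rust
use std::thread;
use std::time::Duration;

// Hypothetical: spawn one listener thread per device manufacturer,
// keyed by USB vendor ID, and give the device time to boot instead
// of spinning in a "dummy for loop".
fn spawn_vendor_listener(vendor_id: u16) -> thread::JoinHandle<u16> {
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50)); // wait for device boot-up
        // ... subscribe to hotplug events for `vendor_id` here ...
        vendor_id
    })
}

fn main() {
    let ledger = 0x2c97; // illustrative vendor ID
    let handle = spawn_vendor_listener(ledger);
    assert_eq!(handle.join().unwrap(), ledger);
}
```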

* fixed broken logs (#7934)

* fixed broken logs

* bring back old lock order

* removed bloom groups from blockchain

* revert unrelated changes

* simplify blockchain_block_blooms

* Bump WS (#7952)

* Calculate proper keccak256/sha3 using parity. (#7953)

* Increase max download limit to 128MB (#7965)

* fetch: increase max download limit to 64MB

* parity: increase download size limit for updater service

* Detect too large packets in snapshot sync. (#7977)

* fix traces, removed bloomchain crate, closes #7228, closes #7167 (#7979)

* Remove generator.rs

* Make block generator easier to use (#7888)

* Make block generator easier to use

* applied review suggestions

* rename BlockMetadata -> BlockOptions

* removed redundant uses of blockchain generator and generator.next().unwrap() calls
2018-02-28 14:59:04 +01:00
GitLab Build Bot
3d6670972f [ci skip] js-precompiled 20180219-162828 2018-02-19 16:29:36 +00:00
André Silva
804ddfe31e [Beta] Backports (#7945)
* ECIP 1041 - Remove Difficulty Bomb (#7905)

Enable difficulty bomb defusal at block:
 - 5900000 on Ethereum Classic mainnet,
 - 2300000 on Morden testnet.

Reference:
https://github.com/ethereumproject/ECIPs/blob/master/ECIPs/ECIP-1041.md
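
As a rough illustration of what "defusing" means: the bomb's exponential term is zeroed once the block number passes the configured transition. The period length and growth formula below are illustrative, not Parity's exact difficulty calculation:

```rust
// Illustrative difficulty-bomb term: grows exponentially with block
// height until a `bombDefuseTransition` block zeroes it forever.
fn bomb_term(block_number: u64, defuse_at: Option<u64>) -> u64 {
    if defuse_at.map_or(false, |at| block_number >= at) {
        return 0; // ECIP-1041: bomb defused
    }
    let period = block_number / 100_000;
    if period > 1 { 2u64.saturating_pow((period - 2) as u32) } else { 0 }
}

fn main() {
    assert_eq!(bomb_term(5_900_000, Some(5_900_000)), 0);
    assert!(bomb_term(5_900_000, None) > 0);
}
```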

* spec: Validate required divisor fields are not 0 (#7933)

* Add validate_non_zero function

It's used to validate that a Spec's uint field used as a divisor is not zero.

* Add deserialize_with to gas_limit_bound_divisor

Prevents panics due to divide-by-zero on the gas_limit_bound_divisor
field.

* Add deserialize_with to difficulty_bound_divisor

Prevents panics due to divide-by-zero on the difficulty_bound_divisor
field.

* Add validate_optional_non_zero function

Used to validate Option<Uint> divisor fields.

* Use deserialize_with on optional divisor fields.

* Add #[serde(default)] attribute to divisor fields

When using `#[serde(deserialize_with)]`, `#[serde(default)]` must be specified so that missing
fields can be deserialized with the deserializer for `None`.
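
The validators above boil down to a simple guard. A std-only sketch (the real code threads this through serde's `deserialize_with`; names mirror the commit but the types are simplified):

```rust
// Reject a zero divisor with a descriptive error instead of letting a
// later division panic.
fn validate_non_zero(field: &str, value: u64) -> Result<u64, String> {
    if value == 0 {
        Err(format!("`{}` must not be 0: it is used as a divisor", field))
    } else {
        Ok(value)
    }
}

// Optional fields: a missing value is fine (the #[serde(default)] case),
// but an explicit 0 is still rejected.
fn validate_optional_non_zero(field: &str, value: Option<u64>) -> Result<Option<u64>, String> {
    value.map_or(Ok(None), |v| validate_non_zero(field, v).map(Some))
}

fn main() {
    assert!(validate_non_zero("gas_limit_bound_divisor", 0).is_err());
    assert_eq!(validate_non_zero("gas_limit_bound_divisor", 1024), Ok(1024));
    assert_eq!(validate_optional_non_zero("difficulty_bound_divisor", None), Ok(None));
}
```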

* Kovan WASM fork code (#7849)

* kovan fork code

* introduce ethcore level vm_factory and let it fail

* fix json tests

* wasmcosts as option

* review changes

* wasm costs in parser

* fix evm tests

* review fixes

* fix test

* remove redundant json field
2018-02-19 16:05:21 +01:00
Rando
3bfb2fa1aa Beta: Gitlab Cargo Cache (#7944)
* gitlab cache (#7921)

it is necessary to test

* fix snap build master (#7896)

add rhash

* Remove duplicate snap target
2018-02-19 15:36:12 +01:00
Jaco Greeff
9d697c5d0a [beta] Bump react-qr-reader (#7943)
* [beta] Update react-qr-reader

* Explicit webrtc-adapter dependency (package-lock workaround)

* iframe with allow (QR, new Chrome policy)
2018-02-19 14:28:31 +01:00
Pierre Krieger
6a29fea23c Backport of #7844 and #7917 to beta (#7940)
* Randomize the peer we dispatch to

* Fix a division by zero in light client RPC handler
2018-02-19 13:03:49 +01:00
GitLab Build Bot
deecf8927c [ci skip] js-precompiled 20180216-145330 2018-02-16 14:54:41 +00:00
Jaco Greeff
acef56b1ea [beta] Wallet allowJsEval: true (#7913)
* [beta] Wallet allowJsEval: true

* Fix unsafe wallet.

* Enable unsafe-eval for all dapps.
2018-02-16 14:33:02 +01:00
GitLab Build Bot
14d29798e3 [ci skip] js-precompiled 20180215-155645 2018-02-15 15:57:55 +00:00
Afri Schoedon
5fc06c0e24 Fix CSP for dapps that require eval. (#7867) (#7903)
* Add allowJsEval to manifest.

* Enable 'unsafe-eval' if requested in manifest.
2018-02-15 12:21:29 +01:00
GitLab Build Bot
62be23eef5 [ci skip] js-precompiled 20180215-092922 2018-02-15 09:31:06 +00:00
Afri Schoedon
57b8efb86a Do a meaningful commit that does not contain the words "ci" or "skip" in commit message. 2018-02-15 08:25:54 +01:00
GitLab Build Bot
8f841767a4 [ci skip] js-precompiled 20180214-202503 2018-02-14 20:26:12 +00:00
Denis S. Soldatov aka General-Beck
d30c035440 fix snap build beta (#7895)
add rhash
2018-02-14 20:50:43 +01:00
Afri Schoedon
89dc08a5cd Fix snapcraft grade to stable (#7894) 2018-02-14 19:43:50 +01:00
GitLab Build Bot
fa0e2a7449 [ci skip] js-precompiled 20180214-172022 2018-02-14 17:21:31 +00:00
89 changed files with 1400 additions and 1760 deletions


@@ -9,10 +9,12 @@ variables:
CARGOFLAGS: ""
CI_SERVER_NAME: "GitLab CI"
LIBSSL: "libssl1.0.0 (>=1.0.0)"
CARGO_HOME: $CI_PROJECT_DIR/cargo
cache:
key: "$CI_BUILD_STAGE-$CI_BUILD_REF_NAME"
paths:
- target
- target/
- cargo/
untracked: true
linux-stable:
stage: build

Cargo.lock generated

@@ -181,11 +181,6 @@ dependencies = [
"keccak-hash 0.1.0",
]
[[package]]
name = "bloomchain"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "bn"
version = "0.4.4"
@@ -476,7 +471,6 @@ version = "1.9.0"
dependencies = [
"ansi_term 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"bloomable 0.1.0",
"bloomchain 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"bn 0.4.4 (git+https://github.com/paritytech/bn)",
"byteorder 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"common-types 0.1.0",
@@ -605,7 +599,6 @@ dependencies = [
"ethcore-io 1.9.0",
"ethcore-network 1.9.0",
"ethcore-util 1.9.0",
"evm 0.1.0",
"futures 0.1.16 (registry+https://github.com/rust-lang/crates.io-index)",
"heapsize 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
"itertools 0.5.10 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -1334,7 +1327,7 @@ dependencies = [
"log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
"parking_lot 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"slab 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"ws 0.7.1 (git+https://github.com/tomusdrw/ws-rs)",
"ws 0.7.5 (git+https://github.com/tomusdrw/ws-rs)",
]
[[package]]
@@ -2239,7 +2232,7 @@ dependencies = [
[[package]]
name = "parity-ui-old-precompiled"
version = "1.9.0"
source = "git+https://github.com/js-dist-paritytech/parity-beta-1-9-v1.git#2997296ebba9a48ec8254f172699fcc215dbf8a0"
source = "git+https://github.com/js-dist-paritytech/parity-beta-1-9-v1.git#a79da677766c9c68d57d253f1628bb20fc6172d7"
dependencies = [
"parity-dapps-glue 1.9.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -2247,7 +2240,7 @@ dependencies = [
[[package]]
name = "parity-ui-precompiled"
version = "1.9.0"
source = "git+https://github.com/js-dist-paritytech/parity-beta-1-9-shell.git#81e795b68518c3d8b57ee09fd3305141958eca99"
source = "git+https://github.com/js-dist-paritytech/parity-beta-1-9-shell.git#1fad4b2d6dce1716eff8ae44e0d937b806f710ce"
dependencies = [
"parity-dapps-glue 1.9.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -3530,8 +3523,8 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "ws"
version = "0.7.1"
source = "git+https://github.com/tomusdrw/ws-rs#f8306a798b7541d64624299a83a2c934f173beed"
version = "0.7.5"
source = "git+https://github.com/tomusdrw/ws-rs#368ce39e2aa8700d568ca29dbacaecdf1bf749d1"
dependencies = [
"byteorder 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"bytes 0.4.5 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -3609,7 +3602,6 @@ dependencies = [
"checksum bitflags 0.8.2 (registry+https://github.com/rust-lang/crates.io-index)" = "1370e9fc2a6ae53aea8b7a5110edbd08836ed87c88736dfabccade1c2b44bff4"
"checksum bitflags 0.9.1 (registry+https://github.com/rust-lang/crates.io-index)" = "4efd02e230a02e18f92fc2735f44597385ed02ad8f831e7c1c1156ee5e1ab3a5"
"checksum bitflags 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "b3c30d3802dfb7281680d6285f2ccdaa8c2d8fee41f93805dba5c4cf50dc23cf"
"checksum bloomchain 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "3f421095d2a76fc24cd3fb3f912b90df06be7689912b1bdb423caefae59c258d"
"checksum bn 0.4.4 (git+https://github.com/paritytech/bn)" = "<none>"
"checksum byteorder 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ff81738b726f5d099632ceaffe7fb65b90212e8dce59d518729e7e8634032d3d"
"checksum bytes 0.4.5 (registry+https://github.com/rust-lang/crates.io-index)" = "d828f97b58cc5de3e40c421d0cf2132d6b2da4ee0e11b8632fa838f0f9333ad6"
@@ -3861,7 +3853,7 @@ dependencies = [
"checksum wasmi 0.0.0 (git+https://github.com/pepyakin/wasmi)" = "<none>"
"checksum winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)" = "167dc9d6949a9b857f3451275e911c3f44255842c1f7a76f33c55103a909087a"
"checksum winapi-build 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "2d315eee3b34aca4797b2da6b13ed88266e6d612562a0c46390af8299fc699bc"
"checksum ws 0.7.1 (git+https://github.com/tomusdrw/ws-rs)" = "<none>"
"checksum ws 0.7.5 (git+https://github.com/tomusdrw/ws-rs)" = "<none>"
"checksum ws2_32-sys 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "d59cefebd0c892fa2dd6de581e937301d8552cb44489cdff035c6187cb63fa5e"
"checksum xdg 2.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "a66b7c2281ebde13cf4391d70d4c7e5946c3c25e72a7b859ca8f677dcd0b0c61"
"checksum xml-rs 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)" = "7ec6c39eaa68382c8e31e35239402c0a9489d4141a8ceb0c716099a0b515b562"


@@ -1,4 +1,4 @@
# [Parity](https://parity.io/) - fast, light, and robust Ethereum client
# [Parity](https://parity.io/) - fast, light, and robust Ethereum client
[![build status](https://gitlab.parity.io/parity/parity/badges/master/build.svg)](https://gitlab.parity.io/parity/parity/commits/master)
[![Snap Status](https://build.snapcraft.io/badge/paritytech/parity.svg)](https://build.snapcraft.io/user/paritytech/parity)


@@ -14,12 +14,10 @@
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use endpoint::EndpointInfo;
#[derive(Debug, PartialEq, Clone, Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct App {
pub id: String,
pub id: Option<String>,
pub name: String,
pub description: String,
pub version: String,
@@ -28,32 +26,14 @@ pub struct App {
pub icon_url: String,
#[serde(rename="localUrl")]
pub local_url: Option<String>,
#[serde(rename="allowJsEval")]
pub allow_js_eval: Option<bool>,
}
impl App {
/// Creates `App` instance from `EndpointInfo` and `id`.
pub fn from_info(id: &str, info: &EndpointInfo) -> Self {
App {
id: id.to_owned(),
name: info.name.to_owned(),
description: info.description.to_owned(),
version: info.version.to_owned(),
author: info.author.to_owned(),
icon_url: info.icon_url.to_owned(),
local_url: info.local_url.to_owned(),
}
}
}
impl Into<EndpointInfo> for App {
fn into(self) -> EndpointInfo {
EndpointInfo {
name: self.name,
description: self.description,
version: self.version,
author: self.author,
icon_url: self.icon_url,
local_url: self.local_url,
}
pub fn with_id(&self, id: &str) -> Self {
let mut app = self.clone();
app.id = Some(id.into());
app
}
}


@@ -178,7 +178,7 @@ impl ContentValidator for Dapp {
// First find manifest file
let (mut manifest, manifest_dir) = Self::find_manifest(&mut zip)?;
// Overwrite id to match hash
manifest.id = id;
manifest.id = Some(id);
// Unpack zip
for i in 0..zip.len() {


@@ -319,12 +319,14 @@ mod tests {
).allow_dapps(true);
let handler = local::Dapp::new(pool, path, EndpointInfo {
id: None,
name: "fake".into(),
description: "".into(),
version: "".into(),
author: "".into(),
icon_url: "".into(),
local_url: Some("".into()),
allow_js_eval: None,
}, Default::default(), None);
// when


@@ -46,17 +46,18 @@ fn read_manifest(name: &str, mut path: PathBuf) -> EndpointInfo {
// Try to deserialize manifest
deserialize_manifest(s)
})
.map(Into::into)
.unwrap_or_else(|e| {
warn!(target: "dapps", "Cannot read manifest file at: {:?}. Error: {:?}", path, e);
EndpointInfo {
id: None,
name: name.into(),
description: name.into(),
version: "0.0.0".into(),
author: "?".into(),
icon_url: "icon.png".into(),
local_url: None,
allow_js_eval: Some(false),
}
})
}


@@ -20,8 +20,13 @@ pub use apps::App as Manifest;
pub const MANIFEST_FILENAME: &'static str = "manifest.json";
pub fn deserialize_manifest(manifest: String) -> Result<Manifest, String> {
serde_json::from_str::<Manifest>(&manifest).map_err(|e| format!("{:?}", e))
// TODO [todr] Manifest validation (especialy: id (used as path))
let mut manifest = serde_json::from_str::<Manifest>(&manifest).map_err(|e| format!("{:?}", e))?;
if manifest.id.is_none() {
return Err("App 'id' is missing.".into());
}
manifest.allow_js_eval = Some(manifest.allow_js_eval.unwrap_or(false));
Ok(manifest)
}
pub fn serialize_manifest(manifest: &Manifest) -> Result<String, String> {
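
The manifest checks in this hunk reduce to two rules: an `id` must be present, and `allowJsEval` defaults to `false`. A standalone sketch with a simplified stand-in struct (only the validated fields are modelled):

```rust
// Simplified stand-in for the dapp manifest.
#[derive(Debug, PartialEq)]
struct Manifest {
    id: Option<String>,
    allow_js_eval: Option<bool>,
}

fn validate_manifest(mut m: Manifest) -> Result<Manifest, String> {
    if m.id.is_none() {
        return Err("App 'id' is missing.".into());
    }
    // Normalize: an absent allowJsEval means "not allowed".
    m.allow_js_eval = Some(m.allow_js_eval.unwrap_or(false));
    Ok(m)
}

fn main() {
    let bad = Manifest { id: None, allow_js_eval: None };
    assert!(validate_manifest(bad).is_err());
    let ok = validate_manifest(Manifest { id: Some("ui".into()), allow_js_eval: None }).unwrap();
    assert_eq!(ok.allow_js_eval, Some(false));
}
```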


@@ -44,7 +44,7 @@ pub const WEB_PATH: &'static str = "web";
pub const URL_REFERER: &'static str = "__referer=";
pub fn utils(pool: CpuPool) -> Box<Endpoint> {
Box::new(page::builtin::Dapp::new(pool, parity_ui::App::default()))
Box::new(page::builtin::Dapp::new(pool, false, parity_ui::App::default()))
}
pub fn ui(pool: CpuPool) -> Box<Endpoint> {
@@ -76,9 +76,9 @@ pub fn all_endpoints<F: Fetch>(
}
// NOTE [ToDr] Dapps will be currently embeded on 8180
insert::<parity_ui::App>(&mut pages, "ui", Embeddable::Yes(embeddable.clone()), pool.clone());
insert::<parity_ui::App>(&mut pages, "ui", Embeddable::Yes(embeddable.clone()), pool.clone(), true);
// old version
insert::<parity_ui::old::App>(&mut pages, "v1", Embeddable::Yes(embeddable.clone()), pool.clone());
insert::<parity_ui::old::App>(&mut pages, "v1", Embeddable::Yes(embeddable.clone()), pool.clone(), true);
pages.insert("proxy".into(), ProxyPac::boxed(embeddable.clone(), dapps_domain.to_owned()));
pages.insert(WEB_PATH.into(), Web::boxed(embeddable.clone(), web_proxy_tokens.clone(), fetch.clone()));
@@ -86,10 +86,16 @@ pub fn all_endpoints<F: Fetch>(
(local_endpoints, pages)
}
fn insert<T : WebApp + Default + 'static>(pages: &mut Endpoints, id: &str, embed_at: Embeddable, pool: CpuPool) {
fn insert<T : WebApp + Default + 'static>(
pages: &mut Endpoints,
id: &str,
embed_at: Embeddable,
pool: CpuPool,
allow_js_eval: bool,
) {
pages.insert(id.to_owned(), Box::new(match embed_at {
Embeddable::Yes(address) => page::builtin::Dapp::new_safe_to_embed(pool, T::default(), address),
Embeddable::No => page::builtin::Dapp::new(pool, T::default()),
Embeddable::Yes(address) => page::builtin::Dapp::new_safe_to_embed(pool, allow_js_eval, T::default(), address),
Embeddable::No => page::builtin::Dapp::new(pool, allow_js_eval, T::default()),
}));
}


@@ -37,16 +37,7 @@ impl EndpointPath {
}
}
#[derive(Debug, PartialEq, Clone)]
pub struct EndpointInfo {
pub name: String,
pub description: String,
pub version: String,
pub author: String,
pub icon_url: String,
pub local_url: Option<String>,
}
pub type EndpointInfo = ::apps::App;
pub type Endpoints = BTreeMap<String, Box<Endpoint>>;
pub type Response = Box<Future<Item=hyper::Response, Error=hyper::Error> + Send>;
pub type Request = hyper::Request;


@@ -82,7 +82,7 @@ impl Into<hyper::Response> for ContentHandler {
.with_status(self.code)
.with_header(header::ContentType(self.mimetype))
.with_body(self.content);
add_security_headers(&mut res.headers_mut(), self.safe_to_embed_on);
add_security_headers(&mut res.headers_mut(), self.safe_to_embed_on, false);
res
}
}


@@ -40,7 +40,7 @@ impl Into<hyper::Response> for EchoHandler {
.with_header(content_type.unwrap_or(header::ContentType::json()))
.with_body(self.request.body());
add_security_headers(res.headers_mut(), None);
add_security_headers(res.headers_mut(), None, false);
res
}
}


@@ -36,7 +36,7 @@ use hyper::header;
use {apps, address, Embeddable};
/// Adds security-related headers to the Response.
pub fn add_security_headers(headers: &mut header::Headers, embeddable_on: Embeddable) {
pub fn add_security_headers(headers: &mut header::Headers, embeddable_on: Embeddable, allow_js_eval: bool) {
headers.set_raw("X-XSS-Protection", "1; mode=block");
headers.set_raw("X-Content-Type-Options", "nosniff");
@@ -75,9 +75,12 @@ pub fn add_security_headers(headers: &mut header::Headers, embeddable_on: Embedd
.map(|&(ref host, port)| address(host, port))
.join(" ")
).unwrap_or_default();
let eval = if allow_js_eval { " 'unsafe-eval'" } else { "" };
&format!(
"script-src 'self' {};",
script_src
"script-src 'self' {}{};",
script_src,
eval
)
}
// Same restrictions as script-src with additional
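
The header change is easiest to see in isolation: the directive gains an `'unsafe-eval'` source only when the dapp opted in, exactly as in the diff above.

```rust
// Build the CSP script-src directive; the eval source is appended only
// for dapps with `allowJsEval: true` in their manifest.
fn script_src_directive(script_src: &str, allow_js_eval: bool) -> String {
    let eval = if allow_js_eval { " 'unsafe-eval'" } else { "" };
    format!("script-src 'self' {}{};", script_src, eval)
}

fn main() {
    assert_eq!(
        script_src_directive("http://127.0.0.1:8180", false),
        "script-src 'self' http://127.0.0.1:8180;"
    );
    assert_eq!(
        script_src_directive("http://127.0.0.1:8180", true),
        "script-src 'self' http://127.0.0.1:8180 'unsafe-eval';"
    );
}
```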


@@ -51,7 +51,7 @@ impl<R: io::Read> StreamingHandler<R> {
.with_status(self.status)
.with_header(header::ContentType(self.mimetype))
.with_body(body);
add_security_headers(&mut res.headers_mut(), self.safe_to_embed_on);
add_security_headers(&mut res.headers_mut(), self.safe_to_embed_on, false);
(reader, res)
}


@@ -109,7 +109,7 @@ impl Endpoints {
/// Returns a current list of app endpoints.
pub fn list(&self) -> Vec<apps::App> {
self.endpoints.read().iter().filter_map(|(ref k, ref e)| {
e.info().map(|ref info| apps::App::from_info(k, info))
e.info().map(|ref info| info.with_id(k))
}).collect()
}


@@ -38,13 +38,14 @@ pub struct Dapp<T: WebApp + 'static> {
impl<T: WebApp + 'static> Dapp<T> {
/// Creates new `Dapp` for builtin (compile time) Dapp.
pub fn new(pool: CpuPool, app: T) -> Self {
let info = app.info();
pub fn new(pool: CpuPool, allow_js_eval: bool, app: T) -> Self {
let mut info = EndpointInfo::from(app.info());
info.allow_js_eval = Some(allow_js_eval);
Dapp {
pool,
app,
safe_to_embed_on: None,
info: EndpointInfo::from(info),
info,
fallback_to_index_html: false,
}
}
@@ -65,13 +66,14 @@ impl<T: WebApp + 'static> Dapp<T> {
/// Creates new `Dapp` which can be safely used in iframe
/// even from different origin. It might be dangerous (clickjacking).
/// Use wisely!
pub fn new_safe_to_embed(pool: CpuPool, app: T, address: Embeddable) -> Self {
let info = app.info();
pub fn new_safe_to_embed(pool: CpuPool, allow_js_eval: bool, app: T, address: Embeddable) -> Self {
let mut info = EndpointInfo::from(app.info());
info.allow_js_eval = Some(allow_js_eval);
Dapp {
pool,
app,
safe_to_embed_on: address,
info: EndpointInfo::from(info),
info,
fallback_to_index_html: false,
}
}
@@ -117,6 +119,7 @@ impl<T: WebApp> Endpoint for Dapp<T> {
file,
cache: PageCache::Disabled,
safe_to_embed_on: self.safe_to_embed_on.clone(),
allow_js_eval: self.info.allow_js_eval.clone().unwrap_or(false),
}.into_response();
self.pool.spawn(reader).forget();
@@ -128,12 +131,14 @@ impl<T: WebApp> Endpoint for Dapp<T> {
impl From<Info> for EndpointInfo {
fn from(info: Info) -> Self {
EndpointInfo {
id: None,
name: info.name.into(),
description: info.description.into(),
author: info.author.into(),
icon_url: info.icon_url.into(),
local_url: None,
version: info.version.into(),
allow_js_eval: None,
}
}
}


@@ -59,6 +59,8 @@ pub struct PageHandler<T: DappFile> {
pub safe_to_embed_on: Embeddable,
/// Cache settings for this page.
pub cache: PageCache,
/// Allow JS unsafe-eval.
pub allow_js_eval: bool,
}
impl<T: DappFile> PageHandler<T> {
@@ -93,7 +95,7 @@ impl<T: DappFile> PageHandler<T> {
headers.set(header::ContentType(file.content_type().to_owned()));
add_security_headers(&mut headers, self.safe_to_embed_on);
add_security_headers(&mut headers, self.safe_to_embed_on, self.allow_js_eval);
}
let initial_content = if file.content_type().to_owned() == mime::TEXT_HTML {


@@ -98,6 +98,7 @@ impl Dapp {
file: self.get_file(path),
cache: self.cache,
safe_to_embed_on: self.embeddable_on.clone(),
allow_js_eval: self.info.as_ref().and_then(|x| x.allow_js_eval).unwrap_or(false),
}.into_response();
self.pool.spawn(reader).forget();


@@ -181,7 +181,7 @@ fn should_return_fetched_dapp_content() {
assert_security_headers_for_embed(&response2.headers);
assert_eq!(
response2.body,
r#"D2
r#"EA
{
"id": "9c94e154dab8acf859b30ee80fc828fb1d38359d938751b65db71d460588d82a",
"name": "Gavcoin",
@@ -189,7 +189,8 @@ fn should_return_fetched_dapp_content() {
"version": "1.0.0",
"author": "",
"iconUrl": "icon.png",
"localUrl": null
"localUrl": null,
"allowJsEval": false
}
0


@@ -8,7 +8,6 @@ authors = ["Parity Technologies <admin@parity.io>"]
[dependencies]
ansi_term = "0.9"
bloomchain = "0.1"
bn = { git = "https://github.com/paritytech/bn" }
byteorder = "1.0"
common-types = { path = "types" }


@@ -33,7 +33,7 @@ impl Factory {
/// Create fresh instance of VM
/// Might choose implementation depending on supplied gas.
#[cfg(feature = "jit")]
pub fn create(&self, gas: U256) -> Box<Vm> {
pub fn create(&self, gas: &U256) -> Box<Vm> {
match self.evm {
VMType::Jit => {
Box::new(super::jit::JitEvm::default())
@@ -49,7 +49,7 @@ impl Factory {
/// Create fresh instance of VM
/// Might choose implementation depending on supplied gas.
#[cfg(not(feature = "jit"))]
pub fn create(&self, gas: U256) -> Box<Vm> {
pub fn create(&self, gas: &U256) -> Box<Vm> {
match self.evm {
VMType::Interpreter => if Self::can_fit_in_usize(gas) {
Box::new(super::interpreter::Interpreter::<usize>::new(self.evm_cache.clone()))
@@ -68,8 +68,8 @@ impl Factory {
}
}
fn can_fit_in_usize(gas: U256) -> bool {
gas == U256::from(gas.low_u64() as usize)
fn can_fit_in_usize(gas: &U256) -> bool {
gas == &U256::from(gas.low_u64() as usize)
}
}
@@ -95,7 +95,7 @@ impl Default for Factory {
#[test]
fn test_create_vm() {
let _vm = Factory::default().create(U256::zero());
let _vm = Factory::default().create(&U256::zero());
}
/// Create tests by injecting different VM factories
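
The `can_fit_in_usize` check is a plain round-trip test: a value fits in `usize` iff truncating to `usize` and widening back reproduces it. A sketch with `u128` standing in for `U256` to stay std-only:

```rust
// Mirrors `gas == U256::from(gas.low_u64() as usize)` from the diff:
// truncate, widen back, and compare against the original.
fn can_fit_in_usize(gas: u128) -> bool {
    gas == (gas as usize) as u128
}

fn main() {
    assert!(can_fit_in_usize(1_000_000));
    assert!(!can_fit_in_usize(u128::MAX));
}
```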


@@ -914,8 +914,13 @@ mod tests {
use rustc_hex::FromHex;
use vmtype::VMType;
use factory::Factory;
use vm::{ActionParams, ActionValue};
use vm::{Vm, ActionParams, ActionValue};
use vm::tests::{FakeExt, test_finalize};
use bigint::prelude::U256;
fn interpreter(gas: &U256) -> Box<Vm> {
Factory::new(VMType::Interpreter, 1).create(gas)
}
#[test]
fn should_not_fail_on_tracing_mem() {
@@ -932,7 +937,7 @@ mod tests {
ext.tracing = true;
let gas_left = {
let mut vm = Factory::new(VMType::Interpreter, 1).create(params.gas);
let mut vm = interpreter(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -954,7 +959,7 @@ mod tests {
ext.tracing = true;
let err = {
let mut vm = Factory::new(VMType::Interpreter, 1).create(params.gas);
let mut vm = interpreter(&params.gas);
test_finalize(vm.exec(params, &mut ext)).err().unwrap()
};


@@ -40,7 +40,7 @@ fn test_add(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -60,7 +60,7 @@ fn test_sha3(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -80,7 +80,7 @@ fn test_address(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -102,7 +102,7 @@ fn test_origin(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -124,7 +124,7 @@ fn test_sender(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -159,7 +159,7 @@ fn test_extcodecopy(factory: super::Factory) {
ext.codes.insert(sender, Arc::new(sender_code));
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -179,7 +179,7 @@ fn test_log_empty(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -211,7 +211,7 @@ fn test_log_sender(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -236,7 +236,7 @@ fn test_blockhash(factory: super::Factory) {
ext.blockhashes.insert(U256::zero(), blockhash.clone());
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -258,7 +258,7 @@ fn test_calldataload(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -279,7 +279,7 @@ fn test_author(factory: super::Factory) {
ext.info.author = author;
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -299,7 +299,7 @@ fn test_timestamp(factory: super::Factory) {
ext.info.timestamp = timestamp;
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -319,7 +319,7 @@ fn test_number(factory: super::Factory) {
ext.info.number = number;
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -339,7 +339,7 @@ fn test_difficulty(factory: super::Factory) {
ext.info.difficulty = difficulty;
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -359,7 +359,7 @@ fn test_gas_limit(factory: super::Factory) {
ext.info.gas_limit = gas_limit;
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -377,7 +377,7 @@ fn test_mul(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -395,7 +395,7 @@ fn test_sub(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -413,7 +413,7 @@ fn test_div(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -431,7 +431,7 @@ fn test_div_zero(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -449,7 +449,7 @@ fn test_mod(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -468,7 +468,7 @@ fn test_smod(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -487,7 +487,7 @@ fn test_sdiv(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -506,7 +506,7 @@ fn test_exp(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -526,7 +526,7 @@ fn test_comparison(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -547,7 +547,7 @@ fn test_signed_comparison(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -568,7 +568,7 @@ fn test_bitops(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -591,7 +591,7 @@ fn test_addmod_mulmod(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -612,7 +612,7 @@ fn test_byte(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -631,7 +631,7 @@ fn test_signextend(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -651,7 +651,7 @@ fn test_badinstruction_int() {
let mut ext = FakeExt::new();
let err = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap_err()
};
@@ -671,7 +671,7 @@ fn test_pop(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -691,7 +691,7 @@ fn test_extops(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -714,7 +714,7 @@ fn test_jumps(factory: super::Factory) {
let mut ext = FakeExt::new();
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -742,7 +742,7 @@ fn test_calls(factory: super::Factory) {
};
let gas_left = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap()
};
@@ -781,7 +781,7 @@ fn test_create_in_staticcall(factory: super::Factory) {
ext.is_static = true;
let err = {
let mut vm = factory.create(params.gas);
let mut vm = factory.create(&params.gas);
test_finalize(vm.exec(params, &mut ext)).unwrap_err()
};


@@ -16,7 +16,6 @@ memorydb = { path = "../../util/memorydb" }
patricia-trie = { path = "../../util/patricia_trie" }
ethcore-network = { path = "../../util/network" }
ethcore-io = { path = "../../util/io" }
-evm = { path = "../evm" }
heapsize = "0.4"
vm = { path = "../vm" }
rlp = { path = "../../util/rlp" }


@@ -60,7 +60,6 @@ extern crate ethcore_util as util;
extern crate ethcore_bigint as bigint;
extern crate ethcore_bytes as bytes;
extern crate ethcore;
-extern crate evm;
extern crate heapsize;
extern crate futures;
extern crate itertools;


@@ -18,6 +18,7 @@
//! The request service is implemented using Futures. Higher level request handlers
//! will take the raw data received here and extract meaningful results from it.
+use std::cmp;
use std::collections::HashMap;
use std::marker::PhantomData;
use std::sync::Arc;
@@ -28,6 +29,7 @@ use futures::{Async, Poll, Future};
use futures::sync::oneshot::{self, Sender, Receiver, Canceled};
use network::PeerId;
use parking_lot::{RwLock, Mutex};
+use rand;
use net::{
self, Handler, PeerStatus, Status, Capabilities,
@@ -389,7 +391,10 @@ impl OnDemand {
true => None,
})
.filter_map(|pending| {
-for (peer_id, peer) in peers.iter() { // .shuffle?
+// the peer we dispatch to is chosen randomly
+let num_peers = peers.len();
+let rng = rand::random::<usize>() % cmp::max(num_peers, 1);
+for (peer_id, peer) in peers.iter().chain(peers.iter()).skip(rng).take(num_peers) {
// TODO: see which requests can be answered by the cache?
if !peer.can_fulfill(&pending.required_capabilities) {
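The replacement loop above starts at a random peer and walks the whole map once, wrapping past the end by chaining the iterator to itself. A minimal, self-contained sketch of that idiom (the function name, the `BTreeMap`, and the explicit `offset` standing in for `rand::random` are all illustrative, not the on-demand service's real API):

```rust
use std::collections::BTreeMap;

// Visit every peer exactly once, starting from `offset` and wrapping around.
// Mirrors the `iter().chain(iter()).skip(rng).take(num_peers)` pattern above;
// `offset` stands in for the random starting index.
fn rotated_peer_ids(peers: &BTreeMap<u32, &str>, offset: usize) -> Vec<u32> {
    let num_peers = peers.len();
    let start = offset % std::cmp::max(num_peers, 1);
    peers
        .iter()
        .chain(peers.iter()) // second pass lets the walk wrap past the end
        .skip(start)
        .take(num_peers)
        .map(|(id, _)| *id)
        .collect()
}

fn main() {
    let peers: BTreeMap<u32, &str> =
        [(1, "a"), (2, "b"), (3, "c")].iter().cloned().collect();
    assert_eq!(rotated_peer_ids(&peers, 0), vec![1, 2, 3]);
    assert_eq!(rotated_peer_ids(&peers, 2), vec![3, 1, 2]);
    println!("ok");
}
```

Because `skip`/`take` bound the walk to `num_peers` items, each peer is considered at most once per dispatch regardless of the starting offset.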


@@ -15,7 +15,8 @@
"ecip1010ContinueTransition": 5000000,
"ecip1017EraRounds": 5000000,
"eip161abcTransition": "0x7fffffffffffffff",
"eip161dTransition": "0x7fffffffffffffff"
"eip161dTransition": "0x7fffffffffffffff",
"bombDefuseTransition": 5900000
}
}
},
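Conceptually, `bombDefuseTransition` (ECIP-1041) zeroes the exponential "difficulty bomb" term from that block number on. A hedged sketch of the idea — the function name and the simplified pre-defusion bomb formula (roughly 2^(block/100000 − 2)) are illustrative, not Parity's exact difficulty code:

```rust
// Exponential difficulty-bomb term, approximately 2^(block/100_000 - 2).
// From `bomb_defuse_transition` onward (ECIP-1041) the term is dropped.
fn bomb_term(block_number: u64, bomb_defuse_transition: u64) -> u64 {
    if block_number >= bomb_defuse_transition {
        return 0; // bomb defused: no exponential component
    }
    let period = block_number / 100_000;
    if period < 2 {
        0
    } else {
        2u64.saturating_pow((period - 2) as u32)
    }
}

fn main() {
    assert_eq!(bomb_term(5_900_000, 5_900_000), 0); // defused at the transition
    assert_eq!(bomb_term(300_000, 5_900_000), 2);   // 2^(3 - 2) before defusion
    println!("ok");
}
```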


@@ -0,0 +1,74 @@
{
"name": "Kovan (Test)",
"dataDir": "kovan-test",
"engine": {
"authorityRound": {
"params": {
"stepDuration": "4",
"blockReward": "0x4563918244F40000",
"validators" : {
"list": [
"0x00D6Cc1BA9cf89BD2e58009741f4F7325BAdc0ED",
"0x00427feae2419c15b89d1c21af10d1b6650a4d3d",
"0x4Ed9B08e6354C70fE6F8CB0411b0d3246b424d6c",
"0x0020ee4Be0e2027d76603cB751eE069519bA81A1",
"0x0010f94b296a852aaac52ea6c5ac72e03afd032d",
"0x007733a1FE69CF3f2CF989F81C7b4cAc1693387A",
"0x00E6d2b931F55a3f1701c7389d592a7778897879",
"0x00e4a10650e5a6D6001C38ff8E64F97016a1645c",
"0x00a0a24b9f0e5ec7aa4c7389b8302fd0123194de"
]
},
"validateScoreTransition": 1000000,
"validateStepTransition": 1500000,
"maximumUncleCountTransition": 5067000,
"maximumUncleCount": 0
}
}
},
"params": {
"gasLimitBoundDivisor": "0x400",
"registrar" : "0xfAb104398BBefbd47752E7702D9fE23047E1Bca3",
"maximumExtraDataSize": "0x20",
"minGasLimit": "0x1388",
"networkID" : "0x2A",
"forkBlock": 4297256,
"forkCanonHash": "0x0a66d93c2f727dca618fabaf70c39b37018c73d78b939d8b11efbbd09034778f",
"validateReceiptsTransition" : 1000000,
"eip155Transition": 1000000,
"validateChainIdTransition": 1000000,
"eip140Transition": 5067000,
"eip211Transition": 5067000,
"eip214Transition": 5067000,
"eip658Transition": 5067000,
"wasmActivationTransition": 10
},
"genesis": {
"seal": {
"authorityRound": {
"step": "0x0",
"signature": "0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
}
},
"difficulty": "0x20000",
"gasLimit": "0x5B8D80"
},
"accounts": {
"0x0000000000000000000000000000000000000001": { "balance": "1", "builtin": { "name": "ecrecover", "pricing": { "linear": { "base": 3000, "word": 0 } } } },
"0x0000000000000000000000000000000000000002": { "balance": "1", "builtin": { "name": "sha256", "pricing": { "linear": { "base": 60, "word": 12 } } } },
"0x0000000000000000000000000000000000000003": { "balance": "1", "builtin": { "name": "ripemd160", "pricing": { "linear": { "base": 600, "word": 120 } } } },
"0x0000000000000000000000000000000000000004": { "balance": "1", "builtin": { "name": "identity", "pricing": { "linear": { "base": 15, "word": 3 } } } },
"0x0000000000000000000000000000000000000005": { "builtin": { "name": "modexp", "activate_at": 5067000, "pricing": { "modexp": { "divisor": 20 } } } },
"0x0000000000000000000000000000000000000006": { "builtin": { "name": "alt_bn128_add", "activate_at": 5067000, "pricing": { "linear": { "base": 500, "word": 0 } } } },
"0x0000000000000000000000000000000000000007": { "builtin": { "name": "alt_bn128_mul", "activate_at": 5067000, "pricing": { "linear": { "base": 40000, "word": 0 } } } },
"0x0000000000000000000000000000000000000008": { "builtin": { "name": "alt_bn128_pairing", "activate_at": 5067000, "pricing": { "alt_bn128_pairing": { "base": 100000, "pair": 80000 } } } },
"0x00521965e7bd230323c423d96c657db5b79d099f": { "balance": "1606938044258990275541962092341162602522202993782792835301376" }
},
"nodes": [
"enode://56abaf065581a5985b8c5f4f88bd202526482761ba10be9bfdcd14846dd01f652ec33fde0f8c0fd1db19b59a4c04465681fcef50e11380ca88d25996191c52de@40.71.221.215:30303",
"enode://d07827483dc47b368eaf88454fb04b41b7452cf454e194e2bd4c14f98a3278fed5d819dbecd0d010407fc7688d941ee1e58d4f9c6354d3da3be92f55c17d7ce3@52.166.117.77:30303",
"enode://8fa162563a8e5a05eef3e1cd5abc5828c71344f7277bb788a395cce4a0e30baf2b34b92fe0b2dbbba2313ee40236bae2aab3c9811941b9f5a7e8e90aaa27ecba@52.165.239.18:30303",
"enode://7e2e7f00784f516939f94e22bdc6cf96153603ca2b5df1c7cc0f90a38e7a2f218ffb1c05b156835e8b49086d11fdd1b3e2965be16baa55204167aa9bf536a4d9@52.243.47.56:30303",
"enode://0518a3d35d4a7b3e8c433e7ffd2355d84a1304ceb5ef349787b556197f0c87fad09daed760635b97d52179d645d3e6d16a37d2cc0a9945c2ddf585684beb39ac@40.68.248.100:30303"
]
}


@@ -14,9 +14,9 @@
"ecip1010PauseTransition": 1915000,
"ecip1010ContinueTransition": 3415000,
"ecip1017EraRounds": 2000000,
"eip161abcTransition": "0x7fffffffffffffff",
"eip161dTransition": "0x7fffffffffffffff"
"eip161dTransition": "0x7fffffffffffffff",
"bombDefuseTransition": 2300000
}
}
},
@@ -31,7 +31,6 @@
"forkBlock": "0x1b34d8",
"forkCanonHash": "0xf376243aeff1f256d970714c3de9fd78fa4e63cf63e32a51fe1169e375d98145",
"eip155Transition": 1915000,
"eip98Transition": "0x7fffffffffffff",
"eip86Transition": "0x7fffffffffffff"
},

File diff suppressed because it is too large.


@@ -23,8 +23,6 @@ pub struct CacheSize {
pub block_details: usize,
/// Transaction addresses cache size.
pub transaction_addresses: usize,
-/// Blooms cache size.
-pub blocks_blooms: usize,
/// Block receipts size.
pub block_receipts: usize,
}
@@ -32,6 +30,6 @@ pub struct CacheSize {
impl CacheSize {
/// Total amount used by the cache.
pub fn total(&self) -> usize {
-self.blocks + self.block_details + self.transaction_addresses + self.blocks_blooms + self.block_receipts
+self.blocks + self.block_details + self.transaction_addresses + self.block_receipts
}
}


@@ -18,8 +18,6 @@
use std::ops;
use std::io::Write;
-use bloomchain;
-use blooms::{GroupPosition, BloomGroup};
use db::Key;
use engines::epoch::{Transition as EpochTransition};
use header::BlockNumber;
@@ -39,8 +37,6 @@ pub enum ExtrasIndex {
BlockHash = 1,
/// Transaction address index
TransactionAddress = 2,
-/// Block blooms index
-BlocksBlooms = 3,
/// Block receipts index
BlockReceipts = 4,
/// Epoch transition data index.
@@ -88,46 +84,6 @@ impl Key<BlockDetails> for H256 {
}
}
-pub struct LogGroupKey([u8; 6]);
-impl ops::Deref for LogGroupKey {
-type Target = [u8];
-fn deref(&self) -> &Self::Target {
-&self.0
-}
-}
-#[derive(Debug, PartialEq, Eq, Hash, Clone)]
-pub struct LogGroupPosition(GroupPosition);
-impl From<bloomchain::group::GroupPosition> for LogGroupPosition {
-fn from(position: bloomchain::group::GroupPosition) -> Self {
-LogGroupPosition(From::from(position))
-}
-}
-impl HeapSizeOf for LogGroupPosition {
-fn heap_size_of_children(&self) -> usize {
-self.0.heap_size_of_children()
-}
-}
-impl Key<BloomGroup> for LogGroupPosition {
-type Target = LogGroupKey;
-fn key(&self) -> Self::Target {
-let mut result = [0u8; 6];
-result[0] = ExtrasIndex::BlocksBlooms as u8;
-result[1] = self.0.level;
-result[2] = (self.0.index >> 24) as u8;
-result[3] = (self.0.index >> 16) as u8;
-result[4] = (self.0.index >> 8) as u8;
-result[5] = self.0.index as u8;
-LogGroupKey(result)
-}
-}
impl Key<TransactionAddress> for H256 {
type Target = H264;


@@ -0,0 +1,218 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Blockchain generator for tests.
use std::collections::VecDeque;
use bigint::prelude::{U256, H256, H2048 as Bloom};
use bytes::Bytes;
use header::Header;
use rlp::encode;
use transaction::SignedTransaction;
use views::BlockView;
/// Helper structure, used for encoding blocks.
#[derive(Default, Clone, RlpEncodable)]
pub struct Block {
pub header: Header,
pub transactions: Vec<SignedTransaction>,
pub uncles: Vec<Header>
}
impl Block {
#[inline]
pub fn header(&self) -> Header {
self.header.clone()
}
#[inline]
pub fn hash(&self) -> H256 {
BlockView::new(&self.encoded()).header_view().hash()
}
#[inline]
pub fn number(&self) -> u64 {
self.header.number()
}
#[inline]
pub fn encoded(&self) -> Bytes {
encode(self).into_vec()
}
}
#[derive(Debug)]
pub struct BlockOptions {
pub difficulty: U256,
pub bloom: Bloom,
pub transactions: Vec<SignedTransaction>,
}
impl Default for BlockOptions {
fn default() -> Self {
BlockOptions {
difficulty: 10.into(),
bloom: Bloom::default(),
transactions: Vec::new(),
}
}
}
#[derive(Clone)]
pub struct BlockBuilder {
blocks: VecDeque<Block>,
}
impl BlockBuilder {
pub fn genesis() -> Self {
let mut blocks = VecDeque::with_capacity(1);
blocks.push_back(Block::default());
BlockBuilder {
blocks,
}
}
#[inline]
pub fn add_block(&self) -> Self {
self.add_block_with(|| BlockOptions::default())
}
#[inline]
pub fn add_blocks(&self, count: usize) -> Self {
self.add_blocks_with(count, || BlockOptions::default())
}
#[inline]
pub fn add_block_with<T>(&self, get_metadata: T) -> Self where T: Fn() -> BlockOptions {
self.add_blocks_with(1, get_metadata)
}
#[inline]
pub fn add_block_with_difficulty<T>(&self, difficulty: T) -> Self where T: Into<U256> {
let difficulty = difficulty.into();
self.add_blocks_with(1, move || BlockOptions {
difficulty,
..Default::default()
})
}
#[inline]
pub fn add_block_with_transactions<T>(&self, transactions: T) -> Self
where T: IntoIterator<Item = SignedTransaction> {
let transactions = transactions.into_iter().collect::<Vec<_>>();
self.add_blocks_with(1, || BlockOptions {
transactions: transactions.clone(),
..Default::default()
})
}
#[inline]
pub fn add_block_with_bloom(&self, bloom: Bloom) -> Self {
self.add_blocks_with(1, move || BlockOptions {
bloom,
..Default::default()
})
}
pub fn add_blocks_with<T>(&self, count: usize, get_metadata: T) -> Self where T: Fn() -> BlockOptions {
assert!(count > 0, "There must be at least 1 block");
let mut parent_hash = self.last().hash();
let mut parent_number = self.last().number();
let mut blocks = VecDeque::with_capacity(count);
for _ in 0..count {
let mut block = Block::default();
let metadata = get_metadata();
let block_number = parent_number + 1;
block.header.set_parent_hash(parent_hash);
block.header.set_number(block_number);
block.header.set_log_bloom(metadata.bloom);
block.header.set_difficulty(metadata.difficulty);
block.transactions = metadata.transactions;
parent_hash = block.hash();
parent_number = block_number;
blocks.push_back(block);
}
BlockBuilder {
blocks,
}
}
#[inline]
pub fn last(&self) -> &Block {
self.blocks.back().expect("There is always at least 1 block")
}
}
#[derive(Clone)]
pub struct BlockGenerator {
builders: VecDeque<BlockBuilder>,
}
impl BlockGenerator {
pub fn new<T>(builders: T) -> Self where T: IntoIterator<Item = BlockBuilder> {
BlockGenerator {
builders: builders.into_iter().collect(),
}
}
}
impl Iterator for BlockGenerator {
type Item = Block;
fn next(&mut self) -> Option<Self::Item> {
loop {
match self.builders.front_mut() {
Some(ref mut builder) => {
if let Some(block) = builder.blocks.pop_front() {
return Some(block);
}
},
None => return None,
}
self.builders.pop_front();
}
}
}
#[cfg(test)]
mod tests {
use super::{BlockBuilder, BlockOptions, BlockGenerator};
#[test]
fn test_block_builder() {
let genesis = BlockBuilder::genesis();
let block_1 = genesis.add_block();
let block_1001 = block_1.add_blocks(1000);
let block_1002 = block_1001.add_block_with(|| BlockOptions::default());
let generator = BlockGenerator::new(vec![genesis, block_1, block_1001, block_1002]);
assert_eq!(generator.count(), 1003);
}
#[test]
fn test_block_builder_fork() {
let genesis = BlockBuilder::genesis();
let block_10a = genesis.add_blocks(10);
let block_11b = genesis.add_blocks(11);
assert_eq!(block_10a.last().number(), 10);
assert_eq!(block_11b.last().number(), 11);
}
}


@@ -1,72 +0,0 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use rlp::*;
use bigint::hash::{H256, H2048};
use bytes::Bytes;
use header::Header;
use transaction::SignedTransaction;
use super::fork::Forkable;
use super::bloom::WithBloom;
use super::complete::CompleteBlock;
use super::transaction::WithTransaction;
/// Helper structure, used for encoding blocks.
#[derive(Default)]
pub struct Block {
pub header: Header,
pub transactions: Vec<SignedTransaction>,
pub uncles: Vec<Header>
}
impl Encodable for Block {
fn rlp_append(&self, s: &mut RlpStream) {
s.begin_list(3);
s.append(&self.header);
s.append_list(&self.transactions);
s.append_list(&self.uncles);
}
}
impl Forkable for Block {
fn fork(mut self, fork_number: usize) -> Self where Self: Sized {
let difficulty = self.header.difficulty().clone() - fork_number.into();
self.header.set_difficulty(difficulty);
self
}
}
impl WithBloom for Block {
fn with_bloom(mut self, bloom: H2048) -> Self where Self: Sized {
self.header.set_log_bloom(bloom);
self
}
}
impl WithTransaction for Block {
fn with_transaction(mut self, transaction: SignedTransaction) -> Self where Self: Sized {
self.transactions.push(transaction);
self
}
}
impl CompleteBlock for Block {
fn complete(mut self, parent_hash: H256) -> Bytes {
self.header.set_parent_hash(parent_hash);
encode(&self).into_vec()
}
}


@@ -1,35 +0,0 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use bigint::hash::H2048;
pub trait WithBloom {
fn with_bloom(self, bloom: H2048) -> Self where Self: Sized;
}
pub struct Bloom<'a, I> where I: 'a {
pub iter: &'a mut I,
pub bloom: H2048,
}
impl<'a, I> Iterator for Bloom<'a, I> where I: Iterator, <I as Iterator>::Item: WithBloom {
type Item = <I as Iterator>::Item;
#[inline]
fn next(&mut self) -> Option<Self::Item> {
self.iter.next().map(|item| item.with_bloom(self.bloom.clone()))
}
}


@@ -1,52 +0,0 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use bigint::hash::H256;
use bytes::Bytes;
use views::BlockView;
#[derive(Default, Clone)]
pub struct BlockFinalizer {
parent_hash: H256
}
impl BlockFinalizer {
pub fn fork(&self) -> Self {
self.clone()
}
}
pub trait CompleteBlock {
fn complete(self, parent_hash: H256) -> Bytes;
}
pub struct Complete<'a, I> where I: 'a {
pub iter: &'a mut I,
pub finalizer: &'a mut BlockFinalizer,
}
impl<'a, I> Iterator for Complete<'a, I> where I: Iterator, <I as Iterator>::Item: CompleteBlock {
type Item = Bytes;
#[inline]
fn next(&mut self) -> Option<Self::Item> {
self.iter.next().map(|item| {
let rlp = item.complete(self.finalizer.parent_hash.clone());
self.finalizer.parent_hash = BlockView::new(&rlp).header_view().hash();
rlp
})
}
}


@@ -1,42 +0,0 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
pub trait Forkable {
fn fork(self, fork_number: usize) -> Self where Self: Sized;
}
pub struct Fork<I> {
pub iter: I,
pub fork_number: usize,
}
impl<I> Clone for Fork<I> where I: Iterator + Clone {
fn clone(&self) -> Self {
Fork {
iter: self.iter.clone(),
fork_number: self.fork_number
}
}
}
impl<I> Iterator for Fork<I> where I: Iterator, <I as Iterator>::Item: Forkable {
type Item = <I as Iterator>::Item;
#[inline]
fn next(&mut self) -> Option<Self::Item> {
self.iter.next().map(|item| item.fork(self.fork_number))
}
}


@@ -1,179 +0,0 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use bigint::prelude::U256;
use bigint::hash::H2048;
use bytes::Bytes;
use header::BlockNumber;
use transaction::SignedTransaction;
use super::fork::Fork;
use super::bloom::Bloom;
use super::complete::{BlockFinalizer, CompleteBlock, Complete};
use super::block::Block;
use super::transaction::Transaction;
/// Chain iterator interface.
pub trait ChainIterator: Iterator + Sized {
/// Should be called to create a fork of current iterator.
/// Blocks generated by fork will have lower difficulty than current chain.
fn fork(&self, fork_number: usize) -> Fork<Self> where Self: Clone;
/// Should be called to make every consecutive block have given bloom.
fn with_bloom(&mut self, bloom: H2048) -> Bloom<Self>;
/// Should be called to make every consecutive block have given transaction.
fn with_transaction(&mut self, transaction: SignedTransaction) -> Transaction<Self>;
/// Should be called to complete block. Without complete, block may have incorrect hash.
fn complete<'a>(&'a mut self, finalizer: &'a mut BlockFinalizer) -> Complete<'a, Self>;
/// Completes and generates block.
fn generate<'a>(&'a mut self, finalizer: &'a mut BlockFinalizer) -> Option<Bytes> where Self::Item: CompleteBlock;
}
impl<I> ChainIterator for I where I: Iterator + Sized {
fn fork(&self, fork_number: usize) -> Fork<Self> where I: Clone {
Fork {
iter: self.clone(),
fork_number: fork_number
}
}
fn with_bloom(&mut self, bloom: H2048) -> Bloom<Self> {
Bloom {
iter: self,
bloom: bloom
}
}
fn with_transaction(&mut self, transaction: SignedTransaction) -> Transaction<Self> {
Transaction {
iter: self,
transaction: transaction,
}
}
fn complete<'a>(&'a mut self, finalizer: &'a mut BlockFinalizer) -> Complete<'a, Self> {
Complete {
iter: self,
finalizer: finalizer
}
}
fn generate<'a>(&'a mut self, finalizer: &'a mut BlockFinalizer) -> Option<Bytes> where <I as Iterator>::Item: CompleteBlock {
self.complete(finalizer).next()
}
}
/// Blockchain generator.
#[derive(Clone)]
pub struct ChainGenerator {
/// Next block number.
number: BlockNumber,
/// Next block difficulty.
difficulty: U256,
}
impl ChainGenerator {
fn prepare_block(&self) -> Block {
let mut block = Block::default();
block.header.set_number(self.number);
block.header.set_difficulty(self.difficulty);
block
}
}
impl Default for ChainGenerator {
fn default() -> Self {
ChainGenerator {
number: 0,
difficulty: 1000.into(),
}
}
}
impl Iterator for ChainGenerator {
type Item = Block;
fn next(&mut self) -> Option<Self::Item> {
let block = self.prepare_block();
self.number += 1;
Some(block)
}
}
mod tests {
use bigint::hash::{H256, H2048};
use views::BlockView;
use blockchain::generator::{ChainIterator, ChainGenerator, BlockFinalizer};
#[test]
fn canon_chain_generator() {
let mut canon_chain = ChainGenerator::default();
let mut finalizer = BlockFinalizer::default();
let genesis_rlp = canon_chain.generate(&mut finalizer).unwrap();
let genesis = BlockView::new(&genesis_rlp);
assert_eq!(genesis.header_view().parent_hash(), H256::default());
assert_eq!(genesis.header_view().number(), 0);
let b1_rlp = canon_chain.generate(&mut finalizer).unwrap();
let b1 = BlockView::new(&b1_rlp);
assert_eq!(b1.header_view().parent_hash(), genesis.header_view().hash());
assert_eq!(b1.header_view().number(), 1);
let mut fork_chain = canon_chain.fork(1);
let b2_rlp_fork = fork_chain.generate(&mut finalizer.fork()).unwrap();
let b2_fork = BlockView::new(&b2_rlp_fork);
assert_eq!(b2_fork.header_view().parent_hash(), b1.header_view().hash());
assert_eq!(b2_fork.header_view().number(), 2);
let b2_rlp = canon_chain.generate(&mut finalizer).unwrap();
let b2 = BlockView::new(&b2_rlp);
assert_eq!(b2.header_view().parent_hash(), b1.header_view().hash());
assert_eq!(b2.header_view().number(), 2);
assert!(b2.header_view().difficulty() > b2_fork.header_view().difficulty());
}
#[test]
fn with_bloom_generator() {
let bloom = H2048([0x1; 256]);
let mut gen = ChainGenerator::default();
let mut finalizer = BlockFinalizer::default();
let block0_rlp = gen.with_bloom(bloom).generate(&mut finalizer).unwrap();
let block1_rlp = gen.generate(&mut finalizer).unwrap();
let block0 = BlockView::new(&block0_rlp);
let block1 = BlockView::new(&block1_rlp);
assert_eq!(block0.header_view().number(), 0);
assert_eq!(block0.header_view().parent_hash(), H256::default());
assert_eq!(block1.header_view().number(), 1);
assert_eq!(block1.header_view().parent_hash(), block0.header_view().hash());
}
#[test]
fn generate_1000_blocks() {
let generator = ChainGenerator::default();
let mut finalizer = BlockFinalizer::default();
let blocks: Vec<_> = generator.take(1000).complete(&mut finalizer).collect();
assert_eq!(blocks.len(), 1000);
}
}


@@ -1,27 +0,0 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Blockchain generator for tests.
mod bloom;
mod block;
mod complete;
mod fork;
pub mod generator;
mod transaction;
pub use self::complete::BlockFinalizer;
pub use self::generator::{ChainIterator, ChainGenerator};


@@ -1,35 +0,0 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use transaction::SignedTransaction;
pub trait WithTransaction {
fn with_transaction(self, transaction: SignedTransaction) -> Self where Self: Sized;
}
pub struct Transaction<'a, I> where I: 'a {
pub iter: &'a mut I,
pub transaction: SignedTransaction,
}
impl <'a, I> Iterator for Transaction<'a, I> where I: Iterator, <I as Iterator>::Item: WithTransaction {
type Item = <I as Iterator>::Item;
#[inline]
fn next(&mut self) -> Option<Self::Item> {
self.iter.next().map(|item| item.with_transaction(self.transaction.clone()))
}
}


@@ -2,8 +2,7 @@ use std::collections::HashMap;
use bigint::hash::H256;
use header::BlockNumber;
use blockchain::block_info::BlockInfo;
-use blooms::BloomGroup;
-use super::extras::{BlockDetails, BlockReceipts, TransactionAddress, LogGroupPosition};
+use blockchain::extras::{BlockDetails, BlockReceipts, TransactionAddress};
/// Block extras update info.
pub struct ExtrasUpdate<'a> {
@@ -19,8 +18,6 @@ pub struct ExtrasUpdate<'a> {
pub block_details: HashMap<H256, BlockDetails>,
/// Modified block receipts.
pub block_receipts: HashMap<H256, BlockReceipts>,
-/// Modified blocks blooms.
-pub blocks_blooms: HashMap<LogGroupPosition, BloomGroup>,
/// Modified transaction addresses (None signifies removed transactions).
pub transactions_addresses: HashMap<H256, Option<TransactionAddress>>,
}


@@ -1,74 +0,0 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use bloomchain::group as bc;
use rlp::*;
use heapsize::HeapSizeOf;
use super::Bloom;
/// Represents group of X consecutive blooms.
#[derive(Debug, Clone)]
pub struct BloomGroup {
blooms: Vec<Bloom>,
}
impl From<bc::BloomGroup> for BloomGroup {
fn from(group: bc::BloomGroup) -> Self {
let blooms = group.blooms
.into_iter()
.map(From::from)
.collect();
BloomGroup {
blooms: blooms
}
}
}
impl Into<bc::BloomGroup> for BloomGroup {
fn into(self) -> bc::BloomGroup {
let blooms = self.blooms
.into_iter()
.map(Into::into)
.collect();
bc::BloomGroup {
blooms: blooms
}
}
}
impl Decodable for BloomGroup {
fn decode(rlp: &UntrustedRlp) -> Result<Self, DecoderError> {
let blooms = rlp.as_list()?;
let group = BloomGroup {
blooms: blooms
};
Ok(group)
}
}
impl Encodable for BloomGroup {
fn rlp_append(&self, s: &mut RlpStream) {
s.append_list(&self.blooms);
}
}
impl HeapSizeOf for BloomGroup {
fn heap_size_of_children(&self) -> usize {
self.blooms.heap_size_of_children()
}
}


@@ -1,42 +0,0 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use bloomchain::group as bc;
use heapsize::HeapSizeOf;
/// Represents `BloomGroup` position in database.
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub struct GroupPosition {
/// Bloom level.
pub level: u8,
/// Group index.
pub index: u32,
}
impl From<bc::GroupPosition> for GroupPosition {
fn from(p: bc::GroupPosition) -> Self {
GroupPosition {
level: p.level as u8,
index: p.index as u32,
}
}
}
impl HeapSizeOf for GroupPosition {
fn heap_size_of_children(&self) -> usize {
0
}
}


@@ -50,9 +50,9 @@ use encoded;
use engines::{EthEngine, EpochTransition};
use error::{ImportError, ExecutionError, CallError, BlockError, ImportResult, Error as EthcoreError};
use vm::{EnvInfo, LastHashes};
-use evm::{Factory as EvmFactory, Schedule};
+use evm::Schedule;
use executive::{Executive, Executed, TransactOptions, contract_address};
-use factory::Factories;
+use factory::{Factories, VmFactory};
use futures::{future, Future};
use header::{BlockNumber, Header};
use io::*;
@@ -189,7 +189,7 @@ impl Client {
let trie_factory = TrieFactory::new(trie_spec);
let factories = Factories {
-vm: EvmFactory::new(config.vm_type.clone(), config.jump_table_size),
+vm: VmFactory::new(config.vm_type.clone(), config.jump_table_size),
trie: trie_factory,
accountdb: Default::default(),
};
@@ -1673,17 +1673,8 @@ impl BlockChainClient for Client {
};
let chain = self.chain.read();
-let blocks = filter.bloom_possibilities().iter()
-.map(move |bloom| {
-chain.blocks_with_bloom(bloom, from, to)
-})
-.flat_map(|m| m)
-// remove duplicate elements
-.collect::<HashSet<u64>>()
-.into_iter()
-.collect::<Vec<u64>>();
-self.chain.read().logs(blocks, |entry| filter.matches(entry), filter.limit)
+let blocks = chain.blocks_with_blooms(&filter.bloom_possibilities(), from, to);
+chain.logs(blocks, |entry| filter.matches(entry), filter.limit)
}
fn filter_traces(&self, filter: TraceFilter) -> Option<Vec<LocalizedTrace>> {
@@ -1910,7 +1901,7 @@ impl MiningBlockChainClient for Client {
block
}
-fn vm_factory(&self) -> &EvmFactory {
+fn vm_factory(&self) -> &VmFactory {
&self.factories.vm
}


@@ -20,12 +20,11 @@ use std::fmt;
use std::sync::Arc;
use bigint::prelude::U256;
use bigint::hash::H256;
use journaldb;
use {trie, kvdb_memorydb, bytes};
use {factory, journaldb, trie, kvdb_memorydb, bytes};
use kvdb::{self, KeyValueDB};
use {state, state_db, client, executive, trace, transaction, db, spec, pod_state};
use factory::Factories;
use evm::{self, VMType, FinalizationResult};
use evm::{VMType, FinalizationResult};
use vm::{self, ActionParams};
/// EVM test Error.
@@ -120,7 +119,7 @@ impl<'a> EvmTestClient<'a> {
fn factories() -> Factories {
Factories {
vm: evm::Factory::new(VMType::Interpreter, 5 * 1024),
vm: factory::VmFactory::new(VMType::Interpreter, 5 * 1024),
trie: trie::TrieFactory::new(trie::TrieSpec::Secure),
accountdb: Default::default(),
}


@@ -47,7 +47,8 @@ use log_entry::LocalizedLogEntry;
use receipt::{Receipt, LocalizedReceipt, TransactionOutcome};
use blockchain::extras::BlockReceipts;
use error::{ImportResult, Error as EthcoreError};
use evm::{Factory as EvmFactory, VMType};
use evm::VMType;
use factory::VmFactory;
use vm::Schedule;
use miner::{Miner, MinerService, TransactionImportResult};
use spec::Spec;
@@ -98,7 +99,7 @@ pub struct TestBlockChainClient {
/// Spec
pub spec: Spec,
/// VM Factory
pub vm_factory: EvmFactory,
pub vm_factory: VmFactory,
/// Timestamp assigned to latest sealed block
pub latest_block_timestamp: RwLock<u64>,
/// Ancient block info.
@@ -169,7 +170,7 @@ impl TestBlockChainClient {
queue_size: AtomicUsize::new(0),
miner: Arc::new(Miner::with_spec(&spec)),
spec: spec,
vm_factory: EvmFactory::new(VMType::Interpreter, 1024 * 1024),
vm_factory: VmFactory::new(VMType::Interpreter, 1024 * 1024),
latest_block_timestamp: RwLock::new(10_000_000),
ancient_block: RwLock::new(None),
first_block: RwLock::new(None),
@@ -399,7 +400,7 @@ impl MiningBlockChainClient for TestBlockChainClient {
block.reopen(&*self.spec.engine)
}
fn vm_factory(&self) -> &EvmFactory {
fn vm_factory(&self) -> &VmFactory {
&self.vm_factory
}


@@ -21,9 +21,9 @@ use block::{OpenBlock, SealedBlock, ClosedBlock};
use blockchain::TreeRoute;
use encoded;
use vm::LastHashes;
use error::{ImportResult, CallError, Error as EthcoreError};
use error::{TransactionImportResult, BlockImportError};
use evm::{Factory as EvmFactory, Schedule};
use error::{ImportResult, CallError, Error as EthcoreError, TransactionImportResult, BlockImportError};
use evm::Schedule;
use factory::VmFactory;
use executive::Executed;
use filter::Filter;
use header::{BlockNumber};
@@ -298,7 +298,7 @@ pub trait MiningBlockChainClient: BlockChainClient {
fn reopen_block(&self, block: ClosedBlock) -> OpenBlock;
/// Returns EvmFactory.
fn vm_factory(&self) -> &EvmFactory;
fn vm_factory(&self) -> &VmFactory;
/// Broadcast a block proposal.
fn broadcast_proposal_block(&self, block: SealedBlock);


@@ -390,11 +390,6 @@ pub trait EthEngine: Engine<::machine::EthereumMachine> {
self.machine().verify_transaction_basic(t, header)
}
/// If this machine supports wasm.
fn supports_wasm(&self) -> bool {
self.machine().supports_wasm()
}
/// Additional information.
fn additional_params(&self) -> HashMap<String, String> {
self.machine().additional_params()


@@ -141,6 +141,9 @@ pub fn new_constantinople_test_machine() -> EthereumMachine { load_machine(inclu
/// Create a new Musicoin-MCIP3-era spec.
pub fn new_mcip3_test_machine() -> EthereumMachine { load_machine(include_bytes!("../../res/ethereum/mcip3_test.json")) }
/// Create new Kovan spec with wasm activated at certain block
pub fn new_kovan_wasm_test_machine() -> EthereumMachine { load_machine(include_bytes!("../../res/ethereum/kovan_wasm_test.json")) }
#[cfg(test)]
mod tests {
use bigint::prelude::U256;


@@ -24,11 +24,12 @@ use util::*;
use bytes::{Bytes, BytesRef};
use state::{Backend as StateBackend, State, Substate, CleanupMode};
use machine::EthereumMachine as Machine;
use vm::EnvInfo;
use error::ExecutionError;
use evm::{CallType, Factory, Finalize, FinalizationResult};
use vm::{self, Ext, CreateContractAddress, ReturnData, CleanDustMode, ActionParams, ActionValue};
use wasm;
use evm::{CallType, Finalize, FinalizationResult};
use vm::{
self, Ext, EnvInfo, CreateContractAddress, ReturnData, CleanDustMode, ActionParams,
ActionValue, Schedule,
};
use externalities::*;
use trace::{self, Tracer, VMTracer};
use transaction::{Action, SignedTransaction};
@@ -40,8 +41,6 @@ pub use executed::{Executed, ExecutionResult};
/// Maybe something like here: `https://github.com/ethereum/libethereum/blob/4db169b8504f2b87f7d5a481819cfb959fc65f6c/libethereum/ExtVM.cpp`
const STACK_SIZE_PER_DEPTH: usize = 24*1024;
const WASM_MAGIC_NUMBER: &'static [u8; 4] = b"\0asm";
/// Returns new address created from address, nonce, and code hash
pub fn contract_address(address_scheme: CreateContractAddress, sender: &Address, nonce: &U256, code: &[u8]) -> (Address, Option<H256>) {
use rlp::RlpStream;
@@ -154,14 +153,6 @@ impl TransactOptions<trace::NoopTracer, trace::NoopVMTracer> {
}
}
pub fn executor(machine: &Machine, vm_factory: &Factory, params: &ActionParams) -> Box<vm::Vm> {
if machine.supports_wasm() && params.code.as_ref().map_or(false, |code| code.len() > 4 && &code[0..4] == WASM_MAGIC_NUMBER) {
Box::new(wasm::WasmInterpreter)
} else {
vm_factory.create(params.gas)
}
}
/// Transaction executor.
pub struct Executive<'a, B: 'a + StateBackend> {
state: &'a mut State<B>,
@@ -336,6 +327,7 @@ impl<'a, B: 'a + StateBackend> Executive<'a, B> {
fn exec_vm<T, V>(
&mut self,
schedule: Schedule,
params: ActionParams,
unconfirmed_substate: &mut Substate,
output_policy: OutputPolicy,
@@ -351,19 +343,20 @@ impl<'a, B: 'a + StateBackend> Executive<'a, B> {
let vm_factory = self.state.vm_factory();
let mut ext = self.as_externalities(OriginInfo::from(&params), unconfirmed_substate, output_policy, tracer, vm_tracer, static_call);
trace!(target: "executive", "ext.schedule.have_delegate_call: {}", ext.schedule().have_delegate_call);
return executor(self.machine, &vm_factory, &params).exec(params, &mut ext).finalize(ext);
let mut vm = vm_factory.create(&params, &schedule);
return vm.exec(params, &mut ext).finalize(ext);
}
// Start in new thread to reset stack
// TODO [todr] No thread builder yet, so we need to reset once for a while
// https://github.com/aturon/crossbeam/issues/16
crossbeam::scope(|scope| {
let machine = self.machine;
let vm_factory = self.state.vm_factory();
let mut ext = self.as_externalities(OriginInfo::from(&params), unconfirmed_substate, output_policy, tracer, vm_tracer, static_call);
scope.spawn(move || {
executor(machine, &vm_factory, &params).exec(params, &mut ext).finalize(ext)
let mut vm = vm_factory.create(&params, &schedule);
vm.exec(params, &mut ext).finalize(ext)
})
}).join()
}
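The `crossbeam::scope` call in `exec_vm` exists only to run the VM on a thread with a fresh stack (the TODO notes that no thread builder was available at the time). A hedged modern-Rust sketch of the same idea using `std::thread::Builder`; the 16 MiB stack size is an illustrative assumption, not Parity's actual setting:

```rust
use std::thread;

// Run `f` on a thread with a known-fresh, generously sized stack, then join
// and return its result, mirroring the crossbeam scope/spawn/join shape.
fn run_with_fresh_stack<T, F>(f: F) -> T
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    thread::Builder::new()
        .stack_size(16 * 1024 * 1024) // assumed figure for deep call recursion
        .spawn(f)
        .expect("failed to spawn VM thread")
        .join()
        .expect("VM thread panicked")
}
```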
@@ -473,7 +466,7 @@ impl<'a, B: 'a + StateBackend> Executive<'a, B> {
let mut subvmtracer = vm_tracer.prepare_subtrace(params.code.as_ref().expect("scope is conditional on params.code.is_some(); qed"));
let res = {
self.exec_vm(params, &mut unconfirmed_substate, OutputPolicy::Return(output, trace_output.as_mut()), &mut subtracer, &mut subvmtracer)
self.exec_vm(schedule, params, &mut unconfirmed_substate, OutputPolicy::Return(output, trace_output.as_mut()), &mut subtracer, &mut subvmtracer)
};
vm_tracer.done_subtrace(subvmtracer);
@@ -564,9 +557,14 @@ impl<'a, B: 'a + StateBackend> Executive<'a, B> {
let mut subvmtracer = vm_tracer.prepare_subtrace(params.code.as_ref().expect("two ways into create (Externalities::create and Executive::transact_with_tracer); both place `Some(...)` `code` in `params`; qed"));
let res = {
self.exec_vm(params, &mut unconfirmed_substate, OutputPolicy::InitContract(output.as_mut().or(trace_output.as_mut())), &mut subtracer, &mut subvmtracer)
};
let res = self.exec_vm(
schedule,
params,
&mut unconfirmed_substate,
OutputPolicy::InitContract(output.as_mut().or(trace_output.as_mut())),
&mut subtracer,
&mut subvmtracer
);
vm_tracer.done_subtrace(subvmtracer);
@@ -1485,8 +1483,6 @@ mod tests {
params.gas = U256::from(20025);
params.code = Some(Arc::new(code));
params.value = ActionValue::Transfer(U256::zero());
let mut state = get_temp_state_with_factory(factory);
state.add_balance(&sender, &U256::from_str("152d02c7e14af68000000").unwrap(), CleanupMode::NoEmpty).unwrap();
let info = EnvInfo::default();
let machine = ::ethereum::new_byzantium_test_machine();
let mut substate = Substate::new();
@@ -1501,4 +1497,60 @@ mod tests {
assert_eq!(output[..], returns[..]);
assert_eq!(state.storage_at(&contract_address, &H256::from(&U256::zero())).unwrap(), H256::from(&U256::from(0)));
}
fn wasm_sample_code() -> Arc<Vec<u8>> {
Arc::new(
"0061736d01000000010d0360027f7f0060017f0060000002270303656e7603726574000003656e760673656e646572000103656e76066d656d6f727902010110030201020404017000000501000708010463616c6c00020901000ac10101be0102057f017e4100410028020441c0006b22043602042004412c6a41106a220041003602002004412c6a41086a22014200370200200441186a41106a22024100360200200441186a41086a220342003703002004420037022c2004410036021c20044100360218200441186a1001200020022802002202360200200120032903002205370200200441106a2002360200200441086a200537030020042004290318220537022c200420053703002004411410004100200441c0006a3602040b0b0a010041040b0410c00000"
.from_hex()
.unwrap()
)
}
#[test]
fn wasm_activated_test() {
let contract_address = Address::from_str("cd1722f3947def4cf144679da39c4c32bdc35681").unwrap();
let sender = Address::from_str("0f572e5295c57f15886f9b263e2f6d2d6c7b5ec6").unwrap();
let mut state = get_temp_state();
state.add_balance(&sender, &U256::from(10000000000u64), CleanupMode::NoEmpty).unwrap();
state.commit().unwrap();
let mut params = ActionParams::default();
params.origin = sender.clone();
params.sender = sender.clone();
params.address = contract_address.clone();
params.gas = U256::from(20025);
params.code = Some(wasm_sample_code());
let mut info = EnvInfo::default();
// 100 > 10
info.number = 100;
// Network with wasm activated at block 10
let machine = ::ethereum::new_kovan_wasm_test_machine();
let mut output = [0u8; 20];
let FinalizationResult { gas_left: result, .. } = {
let mut ex = Executive::new(&mut state, &info, &machine);
ex.call(params.clone(), &mut Substate::new(), BytesRef::Fixed(&mut output), &mut NoopTracer, &mut NoopVMTracer).unwrap()
};
assert_eq!(result, U256::from(18433));
// Transaction successfully returned sender
assert_eq!(output[..], sender[..]);
// 1 < 10
info.number = 1;
let mut output = [0u8; 20];
let FinalizationResult { gas_left: result, .. } = {
let mut ex = Executive::new(&mut state, &info, &machine);
ex.call(params, &mut Substate::new(), BytesRef::Fixed(&mut output), &mut NoopTracer, &mut NoopVMTracer).unwrap()
};
assert_eq!(result, U256::from(20025));
// Since the transaction errored because wasm was not activated, the output is just empty
assert_eq!(output[..], [0u8; 20][..]);
}
}


@@ -15,14 +15,44 @@
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use trie::TrieFactory;
use evm::Factory as EvmFactory;
use account_db::Factory as AccountFactory;
use evm::{Factory as EvmFactory, VMType};
use vm::{Vm, ActionParams, Schedule};
use wasm::WasmInterpreter;
const WASM_MAGIC_NUMBER: &'static [u8; 4] = b"\0asm";
/// Virtual machine factory
#[derive(Default, Clone)]
pub struct VmFactory {
evm: EvmFactory,
}
impl VmFactory {
pub fn create(&self, params: &ActionParams, schedule: &Schedule) -> Box<Vm> {
if schedule.wasm.is_some() && params.code.as_ref().map_or(false, |code| code.len() > 4 && &code[0..4] == WASM_MAGIC_NUMBER) {
Box::new(WasmInterpreter)
} else {
self.evm.create(&params.gas)
}
}
pub fn new(evm: VMType, cache_size: usize) -> Self {
VmFactory { evm: EvmFactory::new(evm, cache_size) }
}
}
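The dispatch in `VmFactory::create` above hinges on the 4-byte WASM magic number `\0asm`. A minimal self-contained sketch of that selection logic, with a toy `VmKind` enum standing in for the boxed `Vm`:

```rust
const WASM_MAGIC_NUMBER: &[u8; 4] = b"\0asm";

#[derive(Debug, PartialEq)]
enum VmKind {
    Wasm,
    Evm,
}

// WASM is chosen only when the schedule activates it *and* the code is longer
// than the magic prefix and starts with `\0asm`; everything else runs on EVM.
fn select_vm(wasm_enabled: bool, code: &[u8]) -> VmKind {
    if wasm_enabled && code.len() > 4 && &code[0..4] == WASM_MAGIC_NUMBER {
        VmKind::Wasm
    } else {
        VmKind::Evm
    }
}
```

Note the strict `> 4`: a payload that is nothing but the magic bytes still falls through to the EVM, matching the condition in the diff.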
impl From<EvmFactory> for VmFactory {
fn from(evm: EvmFactory) -> Self {
VmFactory { evm: evm }
}
}
/// Collection of factories.
#[derive(Default, Clone)]
pub struct Factories {
/// factory for evm.
pub vm: EvmFactory,
pub vm: VmFactory,
/// factory for tries.
pub trie: TrieFactory,
/// factory for account databases.


@@ -259,7 +259,7 @@ fn do_json_test_for(vm_type: &VMType, json_data: &[u8]) -> Vec<String> {
&mut tracer,
&mut vm_tracer,
));
let mut evm = vm_factory.create(params.gas);
let mut evm = vm_factory.create(&params, &machine.schedule(0u64.into()));
let res = evm.exec(params, &mut ex);
// a return in finalize will not alter callcreates
let callcreates = ex.callcreates.clone();


@@ -54,7 +54,6 @@
//! cargo build --release
//! ```
extern crate bloomchain;
extern crate bn;
extern crate byteorder;
extern crate crossbeam;
@@ -155,7 +154,6 @@ pub mod verification;
pub mod views;
mod cache_manager;
mod blooms;
mod basic_types;
mod pod_account;
mod state_db;


@@ -377,11 +377,6 @@ impl EthereumMachine {
Ok(())
}
/// If this machine supports wasm.
pub fn supports_wasm(&self) -> bool {
self.params().wasm
}
/// Additional params.
pub fn additional_params(&self) -> HashMap<String, String> {
hash_map![


@@ -57,6 +57,8 @@ pub enum Error {
VersionNotSupported(u64),
/// Max chunk size is to small to fit basic account data.
ChunkTooSmall,
/// Oversized chunk
ChunkTooLarge,
/// Snapshots not supported by the consensus engine.
SnapshotsUnsupported,
/// Bad epoch transition.
@@ -85,6 +87,7 @@ impl fmt::Display for Error {
Error::Trie(ref err) => err.fmt(f),
Error::VersionNotSupported(ref ver) => write!(f, "Snapshot version {} is not supported.", ver),
Error::ChunkTooSmall => write!(f, "Chunk size is too small."),
Error::ChunkTooLarge => write!(f, "Chunk size is too large."),
Error::SnapshotsUnsupported => write!(f, "Snapshots unsupported by consensus engine."),
Error::BadEpochProof(i) => write!(f, "Bad epoch proof for transition to epoch {}", i),
Error::WrongChunkFormat(ref msg) => write!(f, "Wrong chunk format: {}", msg),


@@ -77,6 +77,11 @@ mod traits;
// Try to have chunks be around 4MB (before compression)
const PREFERRED_CHUNK_SIZE: usize = 4 * 1024 * 1024;
// Maximal chunk size (decompressed)
// Snappy::decompressed_len estimation may sometimes yield results greater
// than PREFERRED_CHUNK_SIZE so allow some threshold here.
const MAX_CHUNK_SIZE: usize = PREFERRED_CHUNK_SIZE / 4 * 5;
// Minimum supported state chunk version.
const MIN_SUPPORTED_STATE_CHUNK_VERSION: u64 = 1;
// current state chunk version.


@@ -23,7 +23,7 @@ use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use super::{ManifestData, StateRebuilder, Rebuilder, RestorationStatus, SnapshotService};
use super::{ManifestData, StateRebuilder, Rebuilder, RestorationStatus, SnapshotService, MAX_CHUNK_SIZE};
use super::io::{SnapshotReader, LooseReader, SnapshotWriter, LooseWriter};
use blockchain::BlockChain;
@@ -130,6 +130,11 @@ impl Restoration {
// feeds a state chunk, aborts early if `flag` becomes false.
fn feed_state(&mut self, hash: H256, chunk: &[u8], flag: &AtomicBool) -> Result<(), Error> {
if self.state_chunks_left.contains(&hash) {
let expected_len = snappy::decompressed_len(chunk)?;
if expected_len > MAX_CHUNK_SIZE {
trace!(target: "snapshot", "Discarding large chunk: {} vs {}", expected_len, MAX_CHUNK_SIZE);
return Err(::snapshot::Error::ChunkTooLarge.into());
}
let len = snappy::decompress_into(chunk, &mut self.snappy_buffer)?;
self.state.feed(&self.snappy_buffer[..len], flag)?;
@@ -147,6 +152,11 @@ impl Restoration {
// feeds a block chunk
fn feed_blocks(&mut self, hash: H256, chunk: &[u8], engine: &EthEngine, flag: &AtomicBool) -> Result<(), Error> {
if self.block_chunks_left.contains(&hash) {
let expected_len = snappy::decompressed_len(chunk)?;
if expected_len > MAX_CHUNK_SIZE {
trace!(target: "snapshot", "Discarding large chunk: {} vs {}", expected_len, MAX_CHUNK_SIZE);
return Err(::snapshot::Error::ChunkTooLarge.into());
}
let len = snappy::decompress_into(chunk, &mut self.snappy_buffer)?;
self.secondary.feed(&self.snappy_buffer[..len], engine, flag)?;
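Both feed paths add the same guard: check Snappy's declared decompressed length before any decompression happens. A stripped-down sketch, where `declared_len` stands in for the result of `snappy::decompressed_len(chunk)`:

```rust
// Constants as in the diff: prefer ~4 MiB chunks, but allow a 25% margin
// because Snappy's decompressed-length estimate can exceed the preferred size.
const PREFERRED_CHUNK_SIZE: usize = 4 * 1024 * 1024;
const MAX_CHUNK_SIZE: usize = PREFERRED_CHUNK_SIZE / 4 * 5;

#[derive(Debug, PartialEq)]
enum ChunkError {
    TooLarge { expected: usize, max: usize },
}

// Reject oversized chunks up front, before any buffer is grown for them.
fn check_chunk_len(declared_len: usize) -> Result<(), ChunkError> {
    if declared_len > MAX_CHUNK_SIZE {
        return Err(ChunkError::TooLarge { expected: declared_len, max: MAX_CHUNK_SIZE });
    }
    Ok(())
}
```

With these constants the cap works out to exactly 5 MiB, and a chunk of exactly that size is still accepted.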


@@ -19,7 +19,7 @@
use devtools::RandomTempPath;
use error::Error;
use blockchain::generator::{ChainGenerator, ChainIterator, BlockFinalizer};
use blockchain::generator::{BlockGenerator, BlockBuilder};
use blockchain::BlockChain;
use snapshot::{chunk_secondary, Error as SnapshotError, Progress, SnapshotComponents};
use snapshot::io::{PackedReader, PackedWriter, SnapshotReader, SnapshotWriter};
@@ -35,9 +35,10 @@ use std::sync::atomic::AtomicBool;
const SNAPSHOT_MODE: ::snapshot::PowSnapshot = ::snapshot::PowSnapshot { blocks: 30000, max_restore_blocks: 30000 };
fn chunk_and_restore(amount: u64) {
let mut canon_chain = ChainGenerator::default();
let mut finalizer = BlockFinalizer::default();
let genesis = canon_chain.generate(&mut finalizer).unwrap();
let genesis = BlockBuilder::genesis();
let rest = genesis.add_blocks(amount as usize);
let generator = BlockGenerator::new(vec![rest]);
let genesis = genesis.last();
let engine = ::spec::Spec::new_test().engine;
let new_path = RandomTempPath::create_dir();
@@ -45,13 +46,12 @@ fn chunk_and_restore(amount: u64) {
snapshot_path.push("SNAP");
let old_db = Arc::new(kvdb_memorydb::create(::db::NUM_COLUMNS.unwrap_or(0)));
let bc = BlockChain::new(Default::default(), &genesis, old_db.clone());
let bc = BlockChain::new(Default::default(), &genesis.encoded(), old_db.clone());
// build the blockchain.
let mut batch = DBTransaction::new();
for _ in 0..amount {
let block = canon_chain.generate(&mut finalizer).unwrap();
bc.insert_block(&mut batch, &block, vec![]);
for block in generator {
bc.insert_block(&mut batch, &block.encoded(), vec![]);
bc.commit();
}
@@ -82,7 +82,7 @@ fn chunk_and_restore(amount: u64) {
// restore it.
let new_db = Arc::new(kvdb_memorydb::create(::db::NUM_COLUMNS.unwrap_or(0)));
let new_chain = BlockChain::new(Default::default(), &genesis, new_db.clone());
let new_chain = BlockChain::new(Default::default(), &genesis.encoded(), new_db.clone());
let mut rebuilder = SNAPSHOT_MODE.rebuilder(new_chain, new_db.clone(), &manifest).unwrap();
let reader = PackedReader::new(&snapshot_path).unwrap().unwrap();
@@ -97,15 +97,19 @@ fn chunk_and_restore(amount: u64) {
drop(rebuilder);
// and test it.
let new_chain = BlockChain::new(Default::default(), &genesis, new_db);
let new_chain = BlockChain::new(Default::default(), &genesis.encoded(), new_db);
assert_eq!(new_chain.best_block_hash(), best_hash);
}
#[test]
fn chunk_and_restore_500() { chunk_and_restore(500) }
fn chunk_and_restore_500() {
chunk_and_restore(500)
}
#[test]
fn chunk_and_restore_40k() { chunk_and_restore(40000) }
fn chunk_and_restore_4k() {
chunk_and_restore(4000)
}
#[test]
fn checks_flag() {
@@ -120,17 +124,12 @@ fn checks_flag() {
stream.append_empty_data().append_empty_data();
let genesis = {
let mut canon_chain = ChainGenerator::default();
let mut finalizer = BlockFinalizer::default();
canon_chain.generate(&mut finalizer).unwrap()
};
let genesis = BlockBuilder::genesis();
let chunk = stream.out();
let db = Arc::new(kvdb_memorydb::create(::db::NUM_COLUMNS.unwrap_or(0)));
let engine = ::spec::Spec::new_test().engine;
let chain = BlockChain::new(Default::default(), &genesis, db.clone());
let chain = BlockChain::new(Default::default(), &genesis.last().encoded(), db.clone());
let manifest = ::snapshot::ManifestData {
version: 2,


@@ -110,8 +110,8 @@ pub struct CommonParams {
pub nonce_cap_increment: u64,
/// Enable dust cleanup for contracts.
pub remove_dust_contracts: bool,
/// Wasm support
pub wasm: bool,
/// Wasm activation blocknumber, if any disabled initially.
pub wasm_activation_transition: BlockNumber,
/// Gas limit bound divisor (how much gas limit can change per block)
pub gas_limit_bound_divisor: U256,
/// Registrar contract address.
@@ -147,6 +147,9 @@ impl CommonParams {
false => ::vm::CleanDustMode::BasicOnly,
};
}
if block_number >= self.wasm_activation_transition {
schedule.wasm = Some(Default::default());
}
}
/// Whether these params contain any bug-fix hard forks.
@@ -221,12 +224,15 @@ impl From<ethjson::spec::Params> for CommonParams {
),
nonce_cap_increment: p.nonce_cap_increment.map_or(64, Into::into),
remove_dust_contracts: p.remove_dust_contracts.unwrap_or(false),
wasm: p.wasm.unwrap_or(false),
gas_limit_bound_divisor: p.gas_limit_bound_divisor.into(),
registrar: p.registrar.map_or_else(Address::new, Into::into),
node_permission_contract: p.node_permission_contract.map(Into::into),
max_code_size: p.max_code_size.map_or(u64::max_value(), Into::into),
transaction_permission_contract: p.transaction_permission_contract.map(Into::into),
wasm_activation_transition: p.wasm_activation_transition.map_or(
BlockNumber::max_value(),
Into::into
),
}
}
}
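The `CommonParams` change above replaces the boolean `wasm` flag with an activation block number: `schedule.wasm` becomes `Some(...)` once the block height reaches the transition, and the `map_or(BlockNumber::max_value(), ...)` default means "never activate". A toy sketch of that gating, with a placeholder cost struct:

```rust
// Placeholder for vm::WasmCosts; only its presence or absence matters here.
#[derive(Default, Debug, PartialEq)]
struct WasmCosts;

// Mirror of the update_schedule logic: wasm costs appear in the schedule only
// from the activation block onwards; u64::MAX effectively disables wasm.
fn wasm_schedule(activation_transition: u64, block_number: u64) -> Option<WasmCosts> {
    if block_number >= activation_transition {
        Some(WasmCosts::default())
    } else {
        None
    }
}
```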


@@ -40,7 +40,7 @@ use executed::{Executed, ExecutionError};
use types::state_diff::StateDiff;
use transaction::SignedTransaction;
use state_db::StateDB;
use evm::{Factory as EvmFactory};
use factory::VmFactory;
use bigint::prelude::U256;
use bigint::hash::H256;
@@ -376,7 +376,7 @@ impl<B: Backend> State<B> {
}
/// Get a VM factory that can execute on this state.
pub fn vm_factory(&self) -> EvmFactory {
pub fn vm_factory(&self) -> VmFactory {
self.factories.vm.clone()
}


@@ -275,7 +275,7 @@ pub fn get_temp_state() -> State<::state_db::StateDB> {
pub fn get_temp_state_with_factory(factory: EvmFactory) -> State<::state_db::StateDB> {
let journal_db = get_temp_state_db();
let mut factories = Factories::default();
factories.vm = factory;
factories.vm = factory.into();
State::new(journal_db, U256::from(0), factories)
}


@@ -1,77 +0,0 @@
use bloomchain::Bloom;
use bloomchain::group::{BloomGroup, GroupPosition};
use basic_types::LogBloom;
/// Helper structure representing bloom of the trace.
#[derive(Clone, RlpEncodableWrapper, RlpDecodableWrapper)]
pub struct BlockTracesBloom(LogBloom);
impl From<LogBloom> for BlockTracesBloom {
fn from(bloom: LogBloom) -> BlockTracesBloom {
BlockTracesBloom(bloom)
}
}
impl From<Bloom> for BlockTracesBloom {
fn from(bloom: Bloom) -> BlockTracesBloom {
let bytes: [u8; 256] = bloom.into();
BlockTracesBloom(LogBloom::from(bytes))
}
}
impl Into<Bloom> for BlockTracesBloom {
fn into(self) -> Bloom {
let log = self.0;
Bloom::from(log.0)
}
}
/// Represents group of X consecutive blooms.
#[derive(Clone, RlpEncodableWrapper, RlpDecodableWrapper)]
pub struct BlockTracesBloomGroup {
blooms: Vec<BlockTracesBloom>,
}
impl From<BloomGroup> for BlockTracesBloomGroup {
fn from(group: BloomGroup) -> Self {
let blooms = group.blooms
.into_iter()
.map(From::from)
.collect();
BlockTracesBloomGroup {
blooms: blooms
}
}
}
impl Into<BloomGroup> for BlockTracesBloomGroup {
fn into(self) -> BloomGroup {
let blooms = self.blooms
.into_iter()
.map(Into::into)
.collect();
BloomGroup {
blooms: blooms
}
}
}
/// Represents `BloomGroup` position in database.
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub struct TraceGroupPosition {
/// Bloom level.
pub level: u8,
/// Group index.
pub index: u32,
}
impl From<GroupPosition> for TraceGroupPosition {
fn from(p: GroupPosition) -> Self {
TraceGroupPosition {
level: p.level as u8,
index: p.index as u32,
}
}
}


@@ -15,7 +15,6 @@
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Traces config.
use bloomchain::Config as BloomConfig;
/// Traces config.
#[derive(Debug, PartialEq, Clone)]
@@ -23,8 +22,6 @@ pub struct Config {
/// Indicates if tracing should be enabled or not.
/// If it's None, it will be automatically configured.
pub enabled: bool,
/// Traces blooms configuration.
pub blooms: BloomConfig,
/// Preferred cache-size.
pub pref_cache_size: usize,
/// Max cache-size.
@@ -35,10 +32,6 @@ impl Default for Config {
fn default() -> Self {
Config {
enabled: false,
blooms: BloomConfig {
levels: 3,
elements_per_index: 16,
},
pref_cache_size: 15 * 1024 * 1024,
max_cache_size: 20 * 1024 * 1024,
}


@@ -15,19 +15,15 @@
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Trace database.
use std::ops::Deref;
use std::collections::{HashMap, VecDeque};
use std::sync::Arc;
use bloomchain::{Number, Config as BloomConfig};
use bloomchain::group::{BloomGroupDatabase, BloomGroupChain, GroupPosition, BloomGroup};
use heapsize::HeapSizeOf;
use bigint::hash::{H256, H264};
use bigint::hash::{H256, H264, H2048 as Bloom};
use kvdb::{KeyValueDB, DBTransaction};
use parking_lot::RwLock;
use header::BlockNumber;
use trace::{LocalizedTrace, Config, Filter, Database as TraceDatabase, ImportRequest, DatabaseExtras};
use db::{self, Key, Writable, Readable, CacheUpdatePolicy};
use blooms;
use super::flat::{FlatTrace, FlatBlockTraces, FlatTransactionTraces};
use cache_manager::CacheManager;
@@ -37,8 +33,8 @@ const TRACE_DB_VER: &'static [u8] = b"1.0";
enum TraceDBIndex {
/// Block traces index.
BlockTraces = 0,
/// Trace bloom group index.
BloomGroups = 1,
/// Blooms index.
Blooms = 2,
}
impl Key<FlatBlockTraces> for H256 {
@@ -52,80 +48,37 @@ impl Key<FlatBlockTraces> for H256 {
}
}
/// Wrapper around `blooms::GroupPosition` so it could be
/// uniquely identified in the database.
#[derive(Debug, PartialEq, Eq, Hash, Clone)]
struct TraceGroupPosition(blooms::GroupPosition);
impl Key<Bloom> for H256 {
type Target = H264;
impl From<GroupPosition> for TraceGroupPosition {
fn from(position: GroupPosition) -> Self {
TraceGroupPosition(From::from(position))
}
}
impl HeapSizeOf for TraceGroupPosition {
fn heap_size_of_children(&self) -> usize {
0
}
}
/// Helper data structure created because [u8; 6] does not implement Deref to &[u8].
pub struct TraceGroupKey([u8; 6]);
impl Deref for TraceGroupKey {
type Target = [u8];
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl Key<blooms::BloomGroup> for TraceGroupPosition {
type Target = TraceGroupKey;
fn key(&self) -> Self::Target {
let mut result = [0u8; 6];
result[0] = TraceDBIndex::BloomGroups as u8;
result[1] = self.0.level;
result[2] = self.0.index as u8;
result[3] = (self.0.index >> 8) as u8;
result[4] = (self.0.index >> 16) as u8;
result[5] = (self.0.index >> 24) as u8;
TraceGroupKey(result)
fn key(&self) -> H264 {
let mut result = H264::default();
result[0] = TraceDBIndex::Blooms as u8;
result[1..33].copy_from_slice(self);
result
}
}
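The new `Key<Bloom>` impl above packs a one-byte column index in front of the 32-byte block hash, giving a 33-byte (`H264`) database key. As a plain-array sketch without the `bigint` types:

```rust
const BLOOMS_INDEX: u8 = 2; // TraceDBIndex::Blooms in the diff

// Build the 33-byte bloom key: one index byte followed by the block hash,
// matching the copy_from_slice layout in the Key impl.
fn bloom_key(block_hash: &[u8; 32]) -> [u8; 33] {
    let mut key = [0u8; 33];
    key[0] = BLOOMS_INDEX;
    key[1..33].copy_from_slice(block_hash);
    key
}
```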
#[derive(Debug, Hash, Eq, PartialEq)]
enum CacheId {
Trace(H256),
Bloom(TraceGroupPosition),
Bloom(H256),
}
/// Trace database.
pub struct TraceDB<T> where T: DatabaseExtras {
// cache
traces: RwLock<HashMap<H256, FlatBlockTraces>>,
blooms: RwLock<HashMap<TraceGroupPosition, blooms::BloomGroup>>,
blooms: RwLock<HashMap<H256, Bloom>>,
cache_manager: RwLock<CacheManager<CacheId>>,
// db
tracesdb: Arc<KeyValueDB>,
// config,
bloom_config: BloomConfig,
// tracing enabled
enabled: bool,
// extras
extras: Arc<T>,
}
impl<T> BloomGroupDatabase for TraceDB<T> where T: DatabaseExtras {
fn blooms_at(&self, position: &GroupPosition) -> Option<BloomGroup> {
let position = TraceGroupPosition::from(position.clone());
let result = self.tracesdb.read_with_cache(db::COL_TRACE, &self.blooms, &position).map(Into::into);
self.note_used(CacheId::Bloom(position));
result
}
}
impl<T> TraceDB<T> where T: DatabaseExtras {
/// Creates new instance of `TraceDB`.
pub fn new(config: Config, tracesdb: Arc<KeyValueDB>, extras: Arc<T>) -> Self {
@@ -137,13 +90,12 @@ impl<T> TraceDB<T> where T: DatabaseExtras {
tracesdb.write(batch).expect("failed to update version");
TraceDB {
traces: RwLock::new(HashMap::new()),
blooms: RwLock::new(HashMap::new()),
cache_manager: RwLock::new(CacheManager::new(config.pref_cache_size, config.max_cache_size, 10 * 1024)),
tracesdb: tracesdb,
bloom_config: config.blooms,
tracesdb,
enabled: config.enabled,
extras: extras,
extras,
traces: RwLock::default(),
blooms: RwLock::default(),
}
}
@@ -188,6 +140,12 @@ impl<T> TraceDB<T> where T: DatabaseExtras {
result
}
fn bloom(&self, block_hash: &H256) -> Option<Bloom> {
let result = self.tracesdb.read_with_cache(db::COL_TRACE, &self.blooms, block_hash);
self.note_used(CacheId::Bloom(block_hash.clone()));
result
}
/// Returns vector of transaction traces for given block.
fn transactions_traces(&self, block_hash: &H256) -> Option<Vec<FlatTransactionTraces>> {
self.traces(block_hash).map(Into::into)
@@ -264,49 +222,16 @@ impl<T> TraceDatabase for TraceDB<T> where T: DatabaseExtras {
return;
}
// now let's rebuild the blooms
if !request.enacted.is_empty() {
let range_start = request.block_number as Number + 1 - request.enacted.len();
let range_end = range_start + request.retracted;
let replaced_range = range_start..range_end;
let enacted_blooms = request.enacted
.iter()
// all traces are expected to be found here. That's why `expect` has been used
// instead of `filter_map`. If some traces haven't been found, it means that
// traces database is corrupted or incomplete.
.map(|block_hash| if block_hash == &request.block_hash {
request.traces.bloom()
} else {
self.traces(block_hash).expect("Traces database is incomplete.").bloom()
})
.map(blooms::Bloom::from)
.map(Into::into)
.collect();
let chain = BloomGroupChain::new(self.bloom_config, self);
let trace_blooms = chain.replace(&replaced_range, enacted_blooms);
let blooms_to_insert = trace_blooms.into_iter()
.map(|p| (From::from(p.0), From::from(p.1)))
.collect::<HashMap<TraceGroupPosition, blooms::BloomGroup>>();
let blooms_keys: Vec<_> = blooms_to_insert.keys().cloned().collect();
let mut blooms = self.blooms.write();
batch.extend_with_cache(db::COL_TRACE, &mut *blooms, blooms_to_insert, CacheUpdatePolicy::Remove);
// note_used must be called after locking blooms to avoid cache/traces deadlock on garbage collection
for key in blooms_keys {
self.note_used(CacheId::Bloom(key));
}
}
// insert new block traces into the cache and the database
{
let mut traces = self.traces.write();
// it's important to use overwrite here,
// cause this value might be queried by hash later
batch.write_with_cache(db::COL_TRACE, &mut *traces, request.block_hash, request.traces, CacheUpdatePolicy::Overwrite);
// note_used must be called after locking traces to avoid cache/traces deadlock on garbage collection
self.note_used(CacheId::Trace(request.block_hash.clone()));
}
let mut traces = self.traces.write();
let mut blooms = self.blooms.write();
// it's important to use overwrite here,
// cause this value might be queried by hash later
batch.write_with_cache(db::COL_TRACE, &mut *blooms, request.block_hash, request.traces.bloom(), CacheUpdatePolicy::Overwrite);
batch.write_with_cache(db::COL_TRACE, &mut *traces, request.block_hash, request.traces, CacheUpdatePolicy::Overwrite);
// note_used must be called after locking traces to avoid cache/traces deadlock on garbage collection
self.note_used(CacheId::Trace(request.block_hash));
self.note_used(CacheId::Bloom(request.block_hash));
}
fn trace(&self, block_number: BlockNumber, tx_position: usize, trace_position: Vec<usize>) -> Option<LocalizedTrace> {
@@ -393,15 +318,17 @@ impl<T> TraceDatabase for TraceDB<T> where T: DatabaseExtras {
}
fn filter(&self, filter: &Filter) -> Vec<LocalizedTrace> {
let chain = BloomGroupChain::new(self.bloom_config, self);
let numbers = chain.filter(filter);
numbers.into_iter()
.flat_map(|n| {
let number = n as BlockNumber;
let hash = self.extras.block_hash(number)
.expect("Expected to find block hash. Extras db is probably corrupted");
let traces = self.traces(&hash)
.expect("Expected to find a trace. Db is probably corrupted.");
let possibilities = filter.bloom_possibilities();
// + 1, cause filters are inclusive
(filter.range.start..filter.range.end + 1).into_iter()
.map(|n| n as BlockNumber)
.filter_map(|n| self.extras.block_hash(n).map(|hash| (n, hash)))
.filter(|&(_,ref hash)| {
let bloom = self.bloom(hash).expect("hash exists; qed");
possibilities.iter().any(|p| bloom.contains(p))
})
.flat_map(|(number, hash)| {
let traces = self.traces(&hash).expect("hash exists; qed");
self.matching_block_traces(filter, traces, hash, number)
})
.collect()
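The rewritten `filter` drops the bloomchain index entirely: it walks the inclusive block range and keeps every block whose single stored bloom contains at least one of the filter's bloom possibilities. A toy version with `u16` blooms standing in for the real 2048-bit ones, and a slice lookup standing in for the database read:

```rust
// A bloom "contains" a candidate when every bit of the candidate is set in it.
fn bloom_contains(bloom: u16, candidate: u16) -> bool {
    bloom & candidate == candidate
}

// Scan blocks start..=end (filters are inclusive, hence the closed range) and
// keep the block numbers whose bloom matches any possibility.
fn matching_blocks(blooms: &[u16], start: usize, end: usize, possibilities: &[u16]) -> Vec<usize> {
    (start..=end)
        .filter(|&n| possibilities.iter().any(|&p| bloom_contains(blooms[n], p)))
        .collect()
}
```

This trades the old multi-level bloom index for a linear scan over the range, which is the simplification the commit message ("removed bloomchain crate") describes.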


@@ -16,7 +16,6 @@
//! Tracing
mod bloom;
mod config;
mod db;
mod executive_tracer;


@@ -17,10 +17,10 @@
//! Trace filters type definitions
use std::ops::Range;
use bloomchain::{Filter as BloomFilter, Bloom, Number};
use hash::keccak;
use util::Address;
use bloomable::Bloomable;
use bigint::prelude::H2048 as Bloom;
use basic_types::LogBloom;
use trace::flat::FlatTrace;
use super::trace::{Action, Res};
@@ -87,22 +87,9 @@ pub struct Filter {
pub to_address: AddressesFilter,
}
impl BloomFilter for Filter {
fn bloom_possibilities(&self) -> Vec<Bloom> {
self.bloom_possibilities()
.into_iter()
.map(|b| Bloom::from(b.0))
.collect()
}
fn range(&self) -> Range<Number> {
self.range.clone()
}
}
impl Filter {
/// Returns combinations of each address.
fn bloom_possibilities(&self) -> Vec<LogBloom> {
pub fn bloom_possibilities(&self) -> Vec<Bloom> {
self.to_address.with_blooms(self.from_address.blooms())
}

View File

@@ -461,7 +461,7 @@ mod tests {
unimplemented!()
}
fn blocks_with_bloom(&self, _bloom: &H2048, _from_block: BlockNumber, _to_block: BlockNumber) -> Vec<BlockNumber> {
fn blocks_with_blooms(&self, _blooms: &[H2048], _from_block: BlockNumber, _to_block: BlockNumber) -> Vec<BlockNumber> {
unimplemented!()
}

View File

@@ -38,7 +38,7 @@ pub mod tests;
pub use action_params::{ActionParams, ActionValue, ParamsType};
pub use call_type::CallType;
pub use env_info::{EnvInfo, LastHashes};
pub use schedule::{Schedule, CleanDustMode};
pub use schedule::{Schedule, CleanDustMode, WasmCosts};
pub use ext::{Ext, MessageCallResult, ContractCreateResult, CreateContractAddress};
pub use return_data::{ReturnData, GasLeft};
pub use error::{Error, Result};

View File

@@ -113,8 +113,8 @@ pub struct Schedule {
pub kill_dust: CleanDustMode,
/// Enable EIP-86 rules
pub eip86: bool,
/// Wasm extra schedule settings
pub wasm: WasmCosts,
/// Wasm extra schedule settings, if wasm activated
pub wasm: Option<WasmCosts>,
}
/// Wasm cost table
@@ -231,7 +231,7 @@ impl Schedule {
have_static_call: false,
kill_dust: CleanDustMode::Off,
eip86: false,
wasm: Default::default(),
wasm: None,
}
}
@@ -294,9 +294,17 @@ impl Schedule {
have_static_call: false,
kill_dust: CleanDustMode::Off,
eip86: false,
wasm: Default::default(),
wasm: None,
}
}
/// Returns wasm schedule
///
/// May panic if there is no wasm schedule
pub fn wasm(&self) -> &WasmCosts {
// *** Prefer PANIC here instead of silently breaking consensus! ***
self.wasm.as_ref().expect("Wasm schedule expected to exist while checking wasm contract. Misconfigured client?")
}
}
impl Default for Schedule {

View File
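The `Schedule` change above turns `wasm` into an `Option<WasmCosts>` whose accessor deliberately panics when absent: mispricing wasm gas with a silent default would break consensus. A sketch of that pattern with simplified stand-in types (the real `Schedule` and `WasmCosts` have many more fields):

```rust
// Accessor that panics on misconfiguration instead of defaulting.
#[derive(Default, Debug, PartialEq)]
struct WasmCosts {
    opcodes_div: u32,
    opcodes_mul: u32,
}

struct Schedule {
    wasm: Option<WasmCosts>,
}

impl Schedule {
    /// Panics if wasm is not activated on this schedule: prefer a crash
    /// over silently breaking consensus with wrong cost constants.
    fn wasm(&self) -> &WasmCosts {
        self.wasm
            .as_ref()
            .expect("wasm schedule expected while executing a wasm contract")
    }
}

fn main() {
    let schedule = Schedule { wasm: Some(WasmCosts { opcodes_div: 3, opcodes_mul: 8 }) };
    assert_eq!(schedule.wasm().opcodes_div, 3);

    // A schedule without wasm panics on access rather than mispricing gas.
    std::panic::set_hook(Box::new(|_| {})); // keep the demo's output quiet
    let no_wasm = Schedule { wasm: None };
    assert!(std::panic::catch_unwind(|| { let _ = no_wasm.wasm().opcodes_mul; }).is_err());
}
```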

@@ -78,15 +78,23 @@ pub fn test_finalize(res: Result<GasLeft>) -> Result<U256> {
}
impl FakeExt {
/// New fake externalities
pub fn new() -> Self {
FakeExt::default()
}
/// New fake externalities with byzantium schedule rules
pub fn new_byzantium() -> Self {
let mut ext = FakeExt::default();
ext.schedule = Schedule::new_byzantium();
ext
}
/// Alter fake externalities to allow wasm
pub fn with_wasm(mut self) -> Self {
self.schedule.wasm = Some(Default::default());
self
}
}
impl Ext for FakeExt {

View File

@@ -69,7 +69,7 @@ impl From<runtime::Error> for vm::Error {
impl vm::Vm for WasmInterpreter {
fn exec(&mut self, params: ActionParams, ext: &mut vm::Ext) -> vm::Result<GasLeft> {
let (module, data) = parser::payload(&params, ext.schedule())?;
let (module, data) = parser::payload(&params, ext.schedule().wasm())?;
let loaded_module = wasmi::Module::from_parity_wasm_module(module).map_err(Error)?;
@@ -80,8 +80,8 @@ impl vm::Vm for WasmInterpreter {
&wasmi::ImportsBuilder::new().with_resolver("env", &instantiation_resolover)
).map_err(Error)?;
let adjusted_gas = params.gas * U256::from(ext.schedule().wasm.opcodes_div) /
U256::from(ext.schedule().wasm.opcodes_mul);
let adjusted_gas = params.gas * U256::from(ext.schedule().wasm().opcodes_div) /
U256::from(ext.schedule().wasm().opcodes_mul);
if adjusted_gas > ::std::u64::MAX.into()
{
@@ -112,8 +112,8 @@ impl vm::Vm for WasmInterpreter {
// total_charge <- static_region * 2^32 * 2^16
// total_charge ∈ [0..2^64) if static_region ∈ [0..2^16)
// qed
assert!(runtime.schedule().wasm.initial_mem < 1 << 16);
runtime.charge(|s| initial_memory as u64 * s.wasm.initial_mem as u64)?;
assert!(runtime.schedule().wasm().initial_mem < 1 << 16);
runtime.charge(|s| initial_memory as u64 * s.wasm().initial_mem as u64)?;
let module_instance = module_instance.run_start(&mut runtime).map_err(Error)?;
@@ -149,8 +149,8 @@ impl vm::Vm for WasmInterpreter {
};
let gas_left =
U256::from(gas_left) * U256::from(ext.schedule().wasm.opcodes_mul)
/ U256::from(ext.schedule().wasm.opcodes_div);
U256::from(gas_left) * U256::from(ext.schedule().wasm().opcodes_mul)
/ U256::from(ext.schedule().wasm().opcodes_div);
if result.is_empty() {
trace!(target: "wasm", "Contract execution result is empty.");

View File
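The interpreter diff above repeatedly applies one conversion: external (EVM) gas is scaled by `opcodes_div / opcodes_mul` into wasm opcode units on entry, and the remainder is scaled back on exit. A sketch of the round trip, with illustrative constants rather than the real schedule values:

```rust
// Gas-unit conversion between EVM gas and wasm opcode units.
const OPCODES_DIV: u64 = 8; // illustrative, not the real WasmCosts values
const OPCODES_MUL: u64 = 1;

/// EVM gas -> wasm opcode units (on entering the interpreter).
fn to_wasm_gas(evm_gas: u64) -> u64 {
    evm_gas * OPCODES_DIV / OPCODES_MUL
}

/// wasm opcode units -> EVM gas (when returning gas_left).
fn to_evm_gas(wasm_gas_left: u64) -> u64 {
    wasm_gas_left * OPCODES_MUL / OPCODES_DIV
}

fn main() {
    let budget = to_wasm_gas(1_000); // 8_000 wasm units
    let left = budget - 2_400;       // pretend execution used 2_400 units
    assert_eq!(to_evm_gas(left), 700);
}
```

The same ratio appears in every call-site of the diff (`exec`, message calls, contract creation), which is why routing it through the single panicking `wasm()` accessor is safe: any path that converts gas is already inside wasm execution.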

@@ -21,21 +21,21 @@ use wasm_utils::{self, rules};
use parity_wasm::elements::{self, Deserialize};
use parity_wasm::peek_size;
fn gas_rules(schedule: &vm::Schedule) -> rules::Set {
fn gas_rules(wasm_costs: &vm::WasmCosts) -> rules::Set {
rules::Set::new({
let mut vals = ::std::collections::HashMap::with_capacity(4);
vals.insert(rules::InstructionType::Load, schedule.wasm.mem as u32);
vals.insert(rules::InstructionType::Store, schedule.wasm.mem as u32);
vals.insert(rules::InstructionType::Div, schedule.wasm.div as u32);
vals.insert(rules::InstructionType::Mul, schedule.wasm.mul as u32);
vals.insert(rules::InstructionType::Load, wasm_costs.mem as u32);
vals.insert(rules::InstructionType::Store, wasm_costs.mem as u32);
vals.insert(rules::InstructionType::Div, wasm_costs.div as u32);
vals.insert(rules::InstructionType::Mul, wasm_costs.mul as u32);
vals
}).with_grow_cost(schedule.wasm.grow_mem)
}).with_grow_cost(wasm_costs.grow_mem)
}
/// Splits payload to code and data according to params.params_type, also
/// loads the module instance from payload and injects gas counter according
/// to schedule.
pub fn payload<'a>(params: &'a vm::ActionParams, schedule: &vm::Schedule)
pub fn payload<'a>(params: &'a vm::ActionParams, wasm_costs: &vm::WasmCosts)
-> Result<(elements::Module, &'a [u8]), vm::Error>
{
let code = match params.code {
@@ -70,7 +70,7 @@ pub fn payload<'a>(params: &'a vm::ActionParams, schedule: &vm::Schedule)
let contract_module = wasm_utils::inject_gas_counter(
deserialized_module,
&gas_rules(schedule),
&gas_rules(wasm_costs),
);
let data = match params.params_type {

View File
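The parser diff above narrows `gas_rules` to take only `&WasmCosts`, building a per-instruction-class cost table. A sketch of that table with simplified stand-ins for `rules::InstructionType` and `vm::WasmCosts`:

```rust
// Instruction-class cost table derived from a WasmCosts value.
use std::collections::HashMap;

#[derive(Hash, PartialEq, Eq, Debug)]
enum InstructionType { Load, Store, Div, Mul }

struct WasmCosts { mem: u16, div: u16, mul: u16 }

fn gas_rules(costs: &WasmCosts) -> HashMap<InstructionType, u32> {
    let mut vals = HashMap::with_capacity(4);
    vals.insert(InstructionType::Load, costs.mem as u32);
    vals.insert(InstructionType::Store, costs.mem as u32);
    vals.insert(InstructionType::Div, costs.div as u32);
    vals.insert(InstructionType::Mul, costs.mul as u32);
    vals
}

fn main() {
    let rules = gas_rules(&WasmCosts { mem: 1, div: 16, mul: 4 });
    assert_eq!(rules[&InstructionType::Div], 16);
    assert_eq!(rules[&InstructionType::Store], 1);
}
```

Taking `&WasmCosts` instead of the whole `&Schedule` means the caller must already have resolved the `Option`, so a non-wasm schedule can never reach gas injection.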

@@ -186,7 +186,7 @@ impl<'a> Runtime<'a> {
pub fn adjusted_charge<F>(&mut self, f: F) -> Result<()>
where F: FnOnce(&vm::Schedule) -> u64
{
self.charge(|schedule| f(schedule) * schedule.wasm.opcodes_div as u64 / schedule.wasm.opcodes_mul as u64)
self.charge(|schedule| f(schedule) * schedule.wasm().opcodes_div as u64 / schedule.wasm().opcodes_mul as u64)
}
/// Charge gas provided by the closure, and closure also can return overflowing
@@ -212,8 +212,8 @@ impl<'a> Runtime<'a> {
{
self.overflow_charge(|schedule|
f(schedule)
.and_then(|x| x.checked_mul(schedule.wasm.opcodes_div as u64))
.map(|x| x / schedule.wasm.opcodes_mul as u64)
.and_then(|x| x.checked_mul(schedule.wasm().opcodes_div as u64))
.map(|x| x / schedule.wasm().opcodes_mul as u64)
)
}
@@ -385,8 +385,8 @@ impl<'a> Runtime<'a> {
// todo: optimize to use memory views once it's in
let payload = self.memory.get(input_ptr, input_len as usize)?;
let adjusted_gas = match gas.checked_mul(self.ext.schedule().wasm.opcodes_div as u64)
.map(|x| x / self.ext.schedule().wasm.opcodes_mul as u64)
let adjusted_gas = match gas.checked_mul(self.ext.schedule().wasm().opcodes_div as u64)
.map(|x| x / self.ext.schedule().wasm().opcodes_mul as u64)
{
Some(x) => x,
None => {
@@ -412,8 +412,8 @@ impl<'a> Runtime<'a> {
vm::MessageCallResult::Success(gas_left, _) => {
// cannot overflow, before making call gas_counter was incremented with gas, and gas_left < gas
self.gas_counter = self.gas_counter -
gas_left.low_u64() * self.ext.schedule().wasm.opcodes_div as u64
/ self.ext.schedule().wasm.opcodes_mul as u64;
gas_left.low_u64() * self.ext.schedule().wasm().opcodes_div as u64
/ self.ext.schedule().wasm().opcodes_mul as u64;
self.memory.set(result_ptr, &result)?;
Ok(0i32.into())
@@ -421,8 +421,8 @@ impl<'a> Runtime<'a> {
vm::MessageCallResult::Reverted(gas_left, _) => {
// cannot overflow, before making call gas_counter was incremented with gas, and gas_left < gas
self.gas_counter = self.gas_counter -
gas_left.low_u64() * self.ext.schedule().wasm.opcodes_div as u64
/ self.ext.schedule().wasm.opcodes_mul as u64;
gas_left.low_u64() * self.ext.schedule().wasm().opcodes_div as u64
/ self.ext.schedule().wasm().opcodes_mul as u64;
self.memory.set(result_ptr, &result)?;
Ok((-1i32).into())
@@ -450,14 +450,14 @@ impl<'a> Runtime<'a> {
fn return_address_ptr(&mut self, ptr: u32, val: Address) -> Result<()>
{
self.charge(|schedule| schedule.wasm.static_address as u64)?;
self.charge(|schedule| schedule.wasm().static_address as u64)?;
self.memory.set(ptr, &*val)?;
Ok(())
}
fn return_u256_ptr(&mut self, ptr: u32, val: U256) -> Result<()> {
let value: H256 = val.into();
self.charge(|schedule| schedule.wasm.static_u256 as u64)?;
self.charge(|schedule| schedule.wasm().static_u256 as u64)?;
self.memory.set(ptr, &*value)?;
Ok(())
}
@@ -489,8 +489,8 @@ impl<'a> Runtime<'a> {
self.adjusted_charge(|schedule| schedule.create_data_gas as u64 * code.len() as u64)?;
let gas_left: U256 = U256::from(self.gas_left()?)
* U256::from(self.ext.schedule().wasm.opcodes_mul)
/ U256::from(self.ext.schedule().wasm.opcodes_div);
* U256::from(self.ext.schedule().wasm().opcodes_mul)
/ U256::from(self.ext.schedule().wasm().opcodes_div);
match self.ext.create(&gas_left, &endowment, &code, vm::CreateContractAddress::FromSenderAndCodeHash) {
vm::ContractCreateResult::Created(address, gas_left) => {
@@ -498,8 +498,8 @@ impl<'a> Runtime<'a> {
self.gas_counter = self.gas_limit -
// this cannot overflow, since initial gas is in [0..u64::max) range,
// and gas_left cannot be bigger
gas_left.low_u64() * self.ext.schedule().wasm.opcodes_div as u64
/ self.ext.schedule().wasm.opcodes_mul as u64;
gas_left.low_u64() * self.ext.schedule().wasm().opcodes_div as u64
/ self.ext.schedule().wasm().opcodes_mul as u64;
trace!(target: "wasm", "runtime: create contract success (@{:?})", address);
Ok(0i32.into())
},
@@ -512,8 +512,8 @@ impl<'a> Runtime<'a> {
self.gas_counter = self.gas_limit -
// this cannot overflow, since initial gas is in [0..u64::max) range,
// and gas_left cannot be bigger
gas_left.low_u64() * self.ext.schedule().wasm.opcodes_div as u64
/ self.ext.schedule().wasm.opcodes_mul as u64;
gas_left.low_u64() * self.ext.schedule().wasm().opcodes_div as u64
/ self.ext.schedule().wasm().opcodes_mul as u64;
Ok((-1i32).into())
},

View File
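The runtime diff above keeps two charging paths: a plain `adjusted_charge` and an overflow-aware variant where the closure's cost is scaled with `checked_mul`, so an overflowing charge surfaces as `None` (out of gas) instead of wrapping. A sketch of the overflow-aware scaling, with illustrative constants:

```rust
// Overflow-checked gas scaling, as in Runtime::adjusted_overflow_charge.
const OPCODES_DIV: u64 = 8; // illustrative ratio
const OPCODES_MUL: u64 = 3;

fn adjusted_overflow_charge<F>(f: F) -> Option<u64>
where
    F: FnOnce() -> Option<u64>,
{
    f().and_then(|x| x.checked_mul(OPCODES_DIV)) // None on overflow
        .map(|x| x / OPCODES_MUL)
}

fn main() {
    // Normal case: 300 * 8 / 3 = 800.
    assert_eq!(adjusted_overflow_charge(|| Some(300)), Some(800));
    // Overflowing case: u64::MAX * 8 would wrap, so the charge is rejected.
    assert_eq!(adjusted_overflow_charge(|| Some(u64::MAX)), None);
}
```

Multiplying before dividing keeps precision, which is exactly why the multiplication is the step that needs the overflow check.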

@@ -45,7 +45,7 @@ macro_rules! reqrep_test {
params.code = Some(Arc::new(code));
params.data = Some($input);
let mut fake_ext = FakeExt::new();
let mut fake_ext = FakeExt::new().with_wasm();
fake_ext.info = $info;
fake_ext.blockhashes = $block_hashes;
@@ -81,7 +81,7 @@ fn empty() {
params.address = address.clone();
params.gas = U256::from(100_000);
params.code = Some(Arc::new(code));
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let gas_left = {
let mut interpreter = wasm_interpreter();
@@ -110,7 +110,7 @@ fn logger() {
params.gas = U256::from(100_000);
params.value = ActionValue::transfer(1_000_000_000);
params.code = Some(Arc::new(code));
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let gas_left = {
let mut interpreter = wasm_interpreter();
@@ -159,7 +159,7 @@ fn identity() {
params.sender = sender.clone();
params.gas = U256::from(100_000);
params.code = Some(Arc::new(code));
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let (gas_left, result) = {
let mut interpreter = wasm_interpreter();
@@ -194,7 +194,7 @@ fn dispersion() {
params.data = Some(vec![
0u8, 125, 197, 255, 19
]);
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let (gas_left, result) = {
let mut interpreter = wasm_interpreter();
@@ -222,7 +222,7 @@ fn suicide_not() {
params.data = Some(vec![
0u8
]);
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let (gas_left, result) = {
let mut interpreter = wasm_interpreter();
@@ -255,7 +255,7 @@ fn suicide() {
args.extend(refund.to_vec());
params.data = Some(args);
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let gas_left = {
let mut interpreter = wasm_interpreter();
@@ -282,7 +282,7 @@ fn create() {
params.data = Some(vec![0u8, 2, 4, 8, 16, 32, 64, 128]);
params.value = ActionValue::transfer(1_000_000_000);
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let gas_left = {
let mut interpreter = wasm_interpreter();
@@ -326,7 +326,7 @@ fn call_msg() {
params.code = Some(Arc::new(load_sample!("call.wasm")));
params.data = Some(Vec::new());
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
ext.balances.insert(receiver.clone(), U256::from(10000000000u64));
let gas_left = {
@@ -369,7 +369,7 @@ fn call_code() {
params.data = Some(Vec::new());
params.value = ActionValue::transfer(1_000_000_000);
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let (gas_left, result) = {
let mut interpreter = wasm_interpreter();
@@ -416,7 +416,7 @@ fn call_static() {
params.value = ActionValue::transfer(1_000_000_000);
params.code_address = contract_address.clone();
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let (gas_left, result) = {
let mut interpreter = wasm_interpreter();
@@ -456,7 +456,7 @@ fn realloc() {
params.gas = U256::from(100_000);
params.code = Some(Arc::new(code));
params.data = Some(vec![0u8]);
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let (gas_left, result) = {
let mut interpreter = wasm_interpreter();
@@ -478,7 +478,7 @@ fn alloc() {
params.gas = U256::from(10_000_000);
params.code = Some(Arc::new(code));
params.data = Some(vec![0u8]);
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let (gas_left, result) = {
let mut interpreter = wasm_interpreter();
@@ -504,7 +504,7 @@ fn storage_read() {
let mut params = ActionParams::default();
params.gas = U256::from(100_000);
params.code = Some(Arc::new(code));
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
ext.store.insert("0100000000000000000000000000000000000000000000000000000000000000".into(), address.into());
let (gas_left, result) = {
@@ -531,7 +531,7 @@ fn keccak() {
params.gas = U256::from(100_000);
params.code = Some(Arc::new(code));
params.data = Some(b"something".to_vec());
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let (gas_left, result) = {
let mut interpreter = wasm_interpreter();
@@ -666,7 +666,7 @@ fn storage_metering() {
::ethcore_logger::init_log();
// #1
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let code = Arc::new(load_sample!("setter.wasm"));
let address: Address = "0f572e5295c57f15886f9b263e2f6d2d6c7b5ec6".parse().unwrap();
@@ -807,7 +807,7 @@ fn embedded_keccak() {
params.code = Some(Arc::new(code));
params.params_type = vm::ParamsType::Embedded;
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let (gas_left, result) = {
let mut interpreter = wasm_interpreter();
@@ -835,7 +835,7 @@ fn events() {
params.code = Some(Arc::new(code));
params.data = Some(b"something".to_vec());
let mut ext = FakeExt::new();
let mut ext = FakeExt::new().with_wasm();
let (gas_left, result) = {
let mut interpreter = wasm_interpreter();

View File

@@ -22,16 +22,21 @@ use super::{WalletInfo, KeyPath};
use bigint::hash::H256;
use ethkey::{Address, Signature};
use hidapi;
use libusb;
use parking_lot::{Mutex, RwLock};
use std::cmp::min;
use std::fmt;
use std::str::FromStr;
use std::sync::Arc;
use std::sync::{Arc, Weak};
use std::time::Duration;
use std::thread;
/// Ledger vendor ID
pub const LEDGER_VID: u16 = 0x2c97;
/// Legder product IDs: [Nano S and Blue]
pub const LEDGER_PIDS: [u16; 2] = [0x0000, 0x0001];
const LEDGER_VID: u16 = 0x2c97;
const LEDGER_PIDS: [u16; 2] = [0x0000, 0x0001]; // Nano S and Blue
const ETH_DERIVATION_PATH_BE: [u8; 17] = [4, 0x80, 0, 0, 44, 0x80, 0, 0, 60, 0x80, 0, 0, 0, 0, 0, 0, 0]; // 44'/60'/0'/0
const ETC_DERIVATION_PATH_BE: [u8; 21] = [5, 0x80, 0, 0, 44, 0x80, 0, 0, 60, 0x80, 0x02, 0x73, 0xd0, 0x80, 0, 0, 0, 0, 0, 0, 0]; // 44'/60'/160720'/0'/0
@@ -54,10 +59,14 @@ pub enum Error {
Protocol(&'static str),
/// Hidapi error.
Usb(hidapi::HidError),
/// Libusb error
LibUsb(libusb::Error),
/// Device with request key is not available.
KeyNotFound,
/// Signing has been cancelled by user.
UserCancel,
/// Invalid Device
InvalidDevice,
}
impl fmt::Display for Error {
@@ -65,8 +74,10 @@ impl fmt::Display for Error {
match *self {
Error::Protocol(ref s) => write!(f, "Ledger protocol error: {}", s),
Error::Usb(ref e) => write!(f, "USB communication error: {}", e),
Error::LibUsb(ref e) => write!(f, "LibUSB communication error: {}", e),
Error::KeyNotFound => write!(f, "Key not found"),
Error::UserCancel => write!(f, "Operation has been cancelled"),
Error::InvalidDevice => write!(f, "Unsupported product was entered"),
}
}
}
@@ -77,6 +88,12 @@ impl From<hidapi::HidError> for Error {
}
}
impl From<libusb::Error> for Error {
fn from(err: libusb::Error) -> Error {
Error::LibUsb(err)
}
}
/// Ledger device manager.
pub struct Manager {
usb: Arc<Mutex<hidapi::HidApi>>,
@@ -234,16 +251,7 @@ impl Manager {
fn open_path<R, F>(&self, f: F) -> Result<R, Error>
where F: Fn() -> Result<R, &'static str>
{
let mut err = Error::KeyNotFound;
// Try to open device a few times.
for _ in 0..10 {
match f() {
Ok(handle) => return Ok(handle),
Err(e) => err = From::from(e),
}
::std::thread::sleep(Duration::from_millis(200));
}
Err(err)
f().map_err(Into::into)
}
fn send_apdu(handle: &hidapi::HidDevice, command: u8, p1: u8, p2: u8, data: &[u8]) -> Result<Vec<u8>, Error> {
@@ -333,6 +341,54 @@ impl Manager {
message.truncate(new_len);
Ok(message)
}
fn is_valid_ledger(device: &libusb::Device) -> Result<(), Error> {
let desc = device.device_descriptor()?;
let vendor_id = desc.vendor_id();
let product_id = desc.product_id();
if vendor_id == LEDGER_VID && LEDGER_PIDS.contains(&product_id) {
Ok(())
} else {
Err(Error::InvalidDevice)
}
}
}
/// Ledger event handler
/// A seperate thread is handling incoming events
pub struct EventHandler {
ledger: Weak<Manager>,
}
impl EventHandler {
/// Ledger event handler constructor
pub fn new(ledger: Weak<Manager>) -> Self {
Self { ledger: ledger }
}
}
impl libusb::Hotplug for EventHandler {
fn device_arrived(&mut self, device: libusb::Device) {
if let (Some(ledger), Ok(_)) = (self.ledger.upgrade(), Manager::is_valid_ledger(&device)) {
debug!(target: "hw", "Ledger arrived");
// Wait for the device to boot up
thread::sleep(Duration::from_millis(1000));
if let Err(e) = ledger.update_devices() {
debug!(target: "hw", "Ledger connect error: {:?}", e);
}
}
}
fn device_left(&mut self, device: libusb::Device) {
if let (Some(ledger), Ok(_)) = (self.ledger.upgrade(), Manager::is_valid_ledger(&device)) {
debug!(target: "hw", "Ledger left");
if let Err(e) = ledger.update_devices() {
debug!(target: "hw", "Ledger disconnect error: {:?}", e);
}
}
}
}
#[test]

View File
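Because the Ledger hotplug subscription above matches every product of the Ledger vendor (`None` product ID means LIBUSB_HOTPLUG_MATCH_ANY), each arriving device's descriptor must be validated against the known product IDs. A sketch of `is_valid_ledger` with the descriptor mocked out (the real code reads it through libusb):

```rust
// VID/PID validation for hotplug-matched devices.
const LEDGER_VID: u16 = 0x2c97;
const LEDGER_PIDS: [u16; 2] = [0x0000, 0x0001]; // Nano S and Blue

#[derive(Debug, PartialEq)]
enum Error { InvalidDevice }

struct Descriptor { vendor_id: u16, product_id: u16 }

fn is_valid_ledger(desc: &Descriptor) -> Result<(), Error> {
    if desc.vendor_id == LEDGER_VID && LEDGER_PIDS.contains(&desc.product_id) {
        Ok(())
    } else {
        Err(Error::InvalidDevice)
    }
}

fn main() {
    assert!(is_valid_ledger(&Descriptor { vendor_id: 0x2c97, product_id: 0x0001 }).is_ok());
    assert_eq!(
        is_valid_ledger(&Descriptor { vendor_id: 0x2c97, product_id: 0x1337 }),
        Err(Error::InvalidDevice)
    );
}
```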

@@ -33,13 +33,15 @@ use ethkey::{Address, Signature};
use parking_lot::Mutex;
use std::fmt;
use std::sync::{Arc, Weak};
use std::sync::Arc;
use std::sync::atomic;
use std::sync::atomic::AtomicBool;
use std::thread;
use std::time::Duration;
use bigint::prelude::uint::U256;
const USB_DEVICE_CLASS_DEVICE: u8 = 0;
/// Hardware wallet error.
#[derive(Debug)]
pub enum Error {
@@ -128,84 +130,78 @@ impl From<libusb::Error> for Error {
/// Hardware wallet management interface.
pub struct HardwareWalletManager {
update_thread: Option<thread::JoinHandle<()>>,
exiting: Arc<AtomicBool>,
ledger: Arc<ledger::Manager>,
trezor: Arc<trezor::Manager>,
}
struct EventHandler {
ledger: Weak<ledger::Manager>,
trezor: Weak<trezor::Manager>,
}
impl libusb::Hotplug for EventHandler {
fn device_arrived(&mut self, _device: libusb::Device) {
debug!("USB Device arrived");
if let (Some(l), Some(t)) = (self.ledger.upgrade(), self.trezor.upgrade()) {
for _ in 0..10 {
let l_devices = l.update_devices().unwrap_or_else(|e| {
debug!("Error enumerating Ledger devices: {}", e);
0
});
let t_devices = t.update_devices().unwrap_or_else(|e| {
debug!("Error enumerating Trezor devices: {}", e);
0
});
if l_devices + t_devices > 0 {
break;
}
thread::sleep(Duration::from_millis(200));
}
}
}
fn device_left(&mut self, _device: libusb::Device) {
debug!("USB Device lost");
if let (Some(l), Some(t)) = (self.ledger.upgrade(), self.trezor.upgrade()) {
l.update_devices().unwrap_or_else(|e| {debug!("Error enumerating Ledger devices: {}", e); 0});
t.update_devices().unwrap_or_else(|e| {debug!("Error enumerating Trezor devices: {}", e); 0});
}
}
}
impl HardwareWalletManager {
/// Hardware wallet constructor
pub fn new() -> Result<HardwareWalletManager, Error> {
let usb_context = Arc::new(libusb::Context::new()?);
let usb_context_trezor = Arc::new(libusb::Context::new()?);
let usb_context_ledger = Arc::new(libusb::Context::new()?);
let hidapi = Arc::new(Mutex::new(hidapi::HidApi::new().map_err(|e| Error::Hid(e.to_string().clone()))?));
let ledger = Arc::new(ledger::Manager::new(hidapi.clone()));
let trezor = Arc::new(trezor::Manager::new(hidapi.clone()));
usb_context.register_callback(
None, None, None,
Box::new(EventHandler {
ledger: Arc::downgrade(&ledger),
trezor: Arc::downgrade(&trezor),
}),
)?;
// Subscribe to TREZOR V1
// Note, this support only TREZOR V1 becasue TREZOR V2 has another vendorID for some reason
// Also, we now only support one product as the second argument specifies
usb_context_trezor.register_callback(
Some(trezor::TREZOR_VID), Some(trezor::TREZOR_PIDS[0]), Some(USB_DEVICE_CLASS_DEVICE),
Box::new(trezor::EventHandler::new(Arc::downgrade(&trezor))))?;
// Subscribe to all Ledger Devices
// This means that we need to check that the given productID is supported
// None => LIBUSB_HOTPLUG_MATCH_ANY, in other words that all are subscribed to
// More info can be found: http://libusb.sourceforge.net/api-1.0/group__hotplug.html#gae6c5f1add6cc754005549c7259dc35ea
usb_context_ledger.register_callback(
Some(ledger::LEDGER_VID), None, Some(USB_DEVICE_CLASS_DEVICE),
Box::new(ledger::EventHandler::new(Arc::downgrade(&ledger))))?;
let exiting = Arc::new(AtomicBool::new(false));
let thread_exiting = exiting.clone();
let thread_exiting_ledger = exiting.clone();
let thread_exiting_trezor = exiting.clone();
let l = ledger.clone();
let t = trezor.clone();
let thread = thread::Builder::new()
.name("hw_wallet".to_string())
// Ledger event thread
thread::Builder::new()
.name("hw_wallet_ledger".to_string())
.spawn(move || {
if let Err(e) = l.update_devices() {
debug!("Error updating ledger devices: {}", e);
}
if let Err(e) = t.update_devices() {
debug!("Error updating trezor devices: {}", e);
debug!(target: "hw", "Ledger couldn't connect at startup, error: {}", e);
//debug!("Ledger could not connect at startup, error: {}", e);
}
loop {
usb_context.handle_events(Some(Duration::from_millis(500)))
.unwrap_or_else(|e| debug!("Error processing USB events: {}", e));
if thread_exiting.load(atomic::Ordering::Acquire) {
usb_context_ledger.handle_events(Some(Duration::from_millis(500)))
.unwrap_or_else(|e| debug!(target: "hw", "Ledger event handler error: {}", e));
if thread_exiting_ledger.load(atomic::Ordering::Acquire) {
break;
}
}
})
.ok();
// Trezor event thread
thread::Builder::new()
.name("hw_wallet_trezor".to_string())
.spawn(move || {
if let Err(e) = t.update_devices() {
debug!(target: "hw", "Trezor couldn't connect at startup, error: {}", e);
}
loop {
usb_context_trezor.handle_events(Some(Duration::from_millis(500)))
.unwrap_or_else(|e| debug!(target: "hw", "Trezor event handler error: {}", e));
if thread_exiting_trezor.load(atomic::Ordering::Acquire) {
break;
}
}
})
.ok();
Ok(HardwareWalletManager {
update_thread: thread,
exiting: exiting,
ledger: ledger,
trezor: trezor,
@@ -259,10 +255,10 @@ impl HardwareWalletManager {
impl Drop for HardwareWalletManager {
fn drop(&mut self) {
// Indicate to the USB Hotplug handlers that they
// shall terminate but don't wait for them to terminate.
// If they don't terminate for some reason USB Hotplug events will be handled
// even if the HardwareWalletManger has been dropped
self.exiting.store(true, atomic::Ordering::Release);
if let Some(thread) = self.update_thread.take() {
thread.thread().unpark();
thread.join().ok();
}
}
}

View File
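The manager refactor above gives each vendor its own event thread that polls USB events with a timeout and checks a shared `AtomicBool` between iterations; `Drop` now only sets the flag and no longer joins, so shutdown cannot block on a stuck libusb call. A sketch of that shutdown pattern, with event handling stubbed as a sleep:

```rust
// Poll-loop worker with an atomic exit flag, as in the per-vendor threads.
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn spawn_event_thread(exiting: Arc<AtomicBool>) -> thread::JoinHandle<u32> {
    thread::Builder::new()
        .name("hw_wallet_ledger".to_string())
        .spawn(move || {
            let mut iterations = 0;
            loop {
                // Stand-in for usb_context.handle_events(Some(timeout)).
                thread::sleep(Duration::from_millis(10));
                iterations += 1;
                if exiting.load(Ordering::Acquire) {
                    break;
                }
            }
            iterations
        })
        .expect("thread spawn failed")
}

fn main() {
    let exiting = Arc::new(AtomicBool::new(false));
    let handle = spawn_event_thread(exiting.clone());
    thread::sleep(Duration::from_millis(50));
    exiting.store(true, Ordering::Release); // what Drop does in the diff
    // The demo joins to observe the loop ran; the real Drop does not.
    assert!(handle.join().unwrap() >= 1);
}
```

The Release/Acquire pairing guarantees the worker observes the flag set by `Drop`; the timeout on each poll bounds how long termination can lag behind it.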

@@ -24,23 +24,26 @@ use super::{WalletInfo, TransactionInfo, KeyPath};
use bigint::hash::H256;
use ethkey::{Address, Signature};
use hidapi;
use libusb;
use parking_lot::{Mutex, RwLock};
use protobuf;
use protobuf::{Message, ProtobufEnum};
use std::cmp::{min, max};
use std::fmt;
use std::sync::Arc;
use std::sync::{Arc, Weak};
use std::time::Duration;
use bigint::prelude::uint::U256;
use trezor_sys::messages::{EthereumAddress, PinMatrixAck, MessageType, EthereumTxRequest, EthereumSignTx, EthereumGetAddress, EthereumTxAck, ButtonAck};
const TREZOR_VID: u16 = 0x534c;
const TREZOR_PIDS: [u16; 1] = [0x0001]; // Trezor v1, keeping this as an array to leave room for Trezor v2 which is in progress
/// Trezor v1 vendor ID
pub const TREZOR_VID: u16 = 0x534c;
/// Trezor product IDs
pub const TREZOR_PIDS: [u16; 1] = [0x0001];
const ETH_DERIVATION_PATH: [u32; 5] = [0x8000002C, 0x8000003C, 0x80000000, 0, 0]; // m/44'/60'/0'/0/0
const ETC_DERIVATION_PATH: [u32; 5] = [0x8000002C, 0x8000003D, 0x80000000, 0, 0]; // m/44'/61'/0'/0/0
/// Hardware wallet error.
#[derive(Debug)]
pub enum Error {
@@ -55,7 +58,7 @@ pub enum Error {
/// The Message Type given in the trezor RPC call is not something we recognize
BadMessageType,
/// Trying to read from a closed device at the given path
ClosedDevice(String),
LockedDevice(String),
}
impl fmt::Display for Error {
@@ -66,7 +69,7 @@ impl fmt::Display for Error {
Error::KeyNotFound => write!(f, "Key not found"),
Error::UserCancel => write!(f, "Operation has been cancelled"),
Error::BadMessageType => write!(f, "Bad Message Type in RPC call"),
Error::ClosedDevice(ref s) => write!(f, "Device is closed, needs PIN to perform operations: {}", s),
Error::LockedDevice(ref s) => write!(f, "Device is locked, needs PIN to perform operations: {}", s),
}
}
}
@@ -83,11 +86,11 @@ impl From<protobuf::ProtobufError> for Error {
}
}
/// Ledger device manager.
/// Ledger device manager
pub struct Manager {
usb: Arc<Mutex<hidapi::HidApi>>,
devices: RwLock<Vec<Device>>,
closed_devices: RwLock<Vec<String>>,
locked_devices: RwLock<Vec<String>>,
key_path: RwLock<KeyPath>,
}
@@ -109,7 +112,7 @@ impl Manager {
Manager {
usb: hidapi,
devices: RwLock::new(Vec::new()),
closed_devices: RwLock::new(Vec::new()),
locked_devices: RwLock::new(Vec::new()),
key_path: RwLock::new(KeyPath::Ethereum),
}
}
@@ -120,7 +123,7 @@ impl Manager {
usb.refresh_devices();
let devices = usb.devices();
let mut new_devices = Vec::new();
let mut closed_devices = Vec::new();
let mut locked_devices = Vec::new();
let mut error = None;
for usb_device in devices {
let is_trezor = usb_device.vendor_id == TREZOR_VID;
@@ -139,7 +142,7 @@ impl Manager {
}
match self.read_device_info(&usb, &usb_device) {
Ok(device) => new_devices.push(device),
Err(Error::ClosedDevice(path)) => closed_devices.push(path.to_string()),
Err(Error::LockedDevice(path)) => locked_devices.push(path.to_string()),
Err(e) => {
warn!("Error reading device: {:?}", e);
error = Some(e);
@@ -147,9 +150,9 @@ impl Manager {
}
}
let count = new_devices.len();
trace!("Got devices: {:?}, closed: {:?}", new_devices, closed_devices);
trace!("Got devices: {:?}, closed: {:?}", new_devices, locked_devices);
*self.devices.write() = new_devices;
*self.closed_devices.write() = closed_devices;
*self.locked_devices.write() = locked_devices;
match error {
Some(e) => Err(e),
None => Ok(count),
@@ -173,7 +176,7 @@ impl Manager {
},
})
}
Ok(None) => Err(Error::ClosedDevice(dev_info.path.clone())),
Ok(None) => Err(Error::LockedDevice(dev_info.path.clone())),
Err(e) => Err(e),
}
}
@@ -189,7 +192,7 @@ impl Manager {
}
pub fn list_locked_devices(&self) -> Vec<String> {
(*self.closed_devices.read()).clone()
(*self.locked_devices.read()).clone()
}
/// Get wallet info.
@@ -200,16 +203,7 @@ impl Manager {
fn open_path<R, F>(&self, f: F) -> Result<R, Error>
where F: Fn() -> Result<R, &'static str>
{
let mut err = Error::KeyNotFound;
// Try to open device a few times.
for _ in 0..10 {
match f() {
Ok(handle) => return Ok(handle),
Err(e) => err = From::from(e),
}
::std::thread::sleep(Duration::from_millis(200));
}
Err(err)
f().map_err(Into::into)
}
pub fn pin_matrix_ack(&self, device_path: &str, pin: &str) -> Result<bool, Error> {
@@ -406,6 +400,42 @@ impl Manager {
}
}
/// Trezor event handler
/// A separate thread is handeling incoming events
pub struct EventHandler {
trezor: Weak<Manager>,
}
impl EventHandler {
// Trezor event handler constructor
pub fn new(trezor: Weak<Manager>) -> Self {
Self { trezor: trezor }
}
}
impl libusb::Hotplug for EventHandler {
fn device_arrived(&mut self, _device: libusb::Device) {
debug!(target: "hw", "Trezor V1 arrived");
if let Some(trezor) = self.trezor.upgrade() {
// Wait for the device to boot up
::std::thread::sleep(Duration::from_millis(1000));
if let Err(e) = trezor.update_devices() {
debug!(target: "hw", "Trezor V1 connect error: {:?}", e);
}
}
}
fn device_left(&mut self, _device: libusb::Device) {
debug!(target: "hw", "Trezor V1 left");
if let Some(trezor) = self.trezor.upgrade() {
if let Err(e) = trezor.update_devices() {
debug!(target: "hw", "Trezor V1 disconnect error: {:?}", e);
}
}
}
}
#[test]
#[ignore]
/// This test can't be run without an actual trezor device connected

View File

@@ -7522,7 +7522,7 @@
}
},
"jsqr": {
"version": "git+https://github.com/JodusNodus/jsQR.git#5ba1acefa1cbb9b2bc92b49f503f2674e2ec212b"
"version": "git+https://github.com/cozmo/jsQR.git#1fb946a235abdc7709f04cd0e4aa316a3b6eae70"
},
"jsx-ast-utils": {
"version": "1.4.1",
@@ -10576,13 +10576,29 @@
}
},
"react-qr-reader": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/react-qr-reader/-/react-qr-reader-1.1.3.tgz",
"integrity": "sha512-ruBF8KaSwUW9nbzjO4rA7/HOCGYZuNUz9od7uBRy8SRBi24nwxWWmwa2z8R6vPGDRglA0y2Qk1aVBuC1olTnHw==",
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/react-qr-reader/-/react-qr-reader-2.0.1.tgz",
"integrity": "sha512-J/VuCq/udEqry7Z4nXGTbguetfSdRJr1Cv0lYKbVKIW1blfhB0Xe6OjO+5Th5B8884+s40BDFwtqP67h7YTtYA==",
"requires": {
"jsqr": "git+https://github.com/JodusNodus/jsQR.git#5ba1acefa1cbb9b2bc92b49f503f2674e2ec212b",
"jsqr": "git+https://github.com/cozmo/jsQR.git#1fb946a235abdc7709f04cd0e4aa316a3b6eae70",
"prop-types": "15.6.0",
"webrtc-adapter": "2.1.0"
"webrtc-adapter": "5.0.6"
},
"dependencies": {
"sdp": {
"version": "2.6.0",
"resolved": "https://registry.npmjs.org/sdp/-/sdp-2.6.0.tgz",
"integrity": "sha512-/q5nUDSqvfh+P5pvb4Ez1IsF6F9aLLgslHrSDSltqvUuS7raTY9ROjbGJTyvGSYRs99FY59c8Od1lT7WVaiNAw=="
},
"webrtc-adapter": {
"version": "5.0.6",
"resolved": "https://registry.npmjs.org/webrtc-adapter/-/webrtc-adapter-5.0.6.tgz",
"integrity": "sha512-dh2hPQFOPP0tLEYlFxtGI5vuQmRqkOdYni5wMKUHIx5I2dw0TJ1HdG7P+UechRWt6TvwPWhtbjVNQcQf1KXJmQ==",
"requires": {
"rtcpeerconnection-shim": "1.2.8",
"sdp": "2.6.0"
}
}
}
},
"react-redux": {
@@ -11221,6 +11237,21 @@
"resolved": "https://registry.npmjs.org/rlp/-/rlp-2.0.0.tgz",
"integrity": "sha1-nbOE/0uJqPYVY9kjldhiWxjzr7A="
},
"rtcpeerconnection-shim": {
"version": "1.2.8",
"resolved": "https://registry.npmjs.org/rtcpeerconnection-shim/-/rtcpeerconnection-shim-1.2.8.tgz",
"integrity": "sha512-5Sx90FGru1sQw9aGOM+kHU4i6mbP8eJPgxliu2X3Syhg8qgDybx8dpDTxUwfJvPnubXFnZeRNl59DWr4AttJKQ==",
"requires": {
"sdp": "2.6.0"
},
"dependencies": {
"sdp": {
"version": "2.6.0",
"resolved": "https://registry.npmjs.org/sdp/-/sdp-2.6.0.tgz",
"integrity": "sha512-/q5nUDSqvfh+P5pvb4Ez1IsF6F9aLLgslHrSDSltqvUuS7raTY9ROjbGJTyvGSYRs99FY59c8Od1lT7WVaiNAw=="
}
}
},
"rucksack-css": {
"version": "0.9.1",
"resolved": "https://registry.npmjs.org/rucksack-css/-/rucksack-css-0.9.1.tgz",
@@ -11304,9 +11335,9 @@
"integrity": "sha1-Jiw28CMc+nZU4jY/o5TNLexm83g="
},
"sdp": {
"version": "1.5.4",
"resolved": "https://registry.npmjs.org/sdp/-/sdp-1.5.4.tgz",
"integrity": "sha1-jgOPbdsUvXZa4fS1IW4SCUUR4NA="
"version": "2.6.0",
"resolved": "https://registry.npmjs.org/sdp/-/sdp-2.6.0.tgz",
"integrity": "sha512-/q5nUDSqvfh+P5pvb4Ez1IsF6F9aLLgslHrSDSltqvUuS7raTY9ROjbGJTyvGSYRs99FY59c8Od1lT7WVaiNAw=="
},
"secp256k1": {
"version": "3.4.0",
@@ -13171,11 +13202,12 @@
}
},
"webrtc-adapter": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/webrtc-adapter/-/webrtc-adapter-2.1.0.tgz",
"integrity": "sha1-YStbxs6Oc8nQZgA4oh+SVahnvz4=",
"version": "5.0.6",
"resolved": "https://registry.npmjs.org/webrtc-adapter/-/webrtc-adapter-5.0.6.tgz",
"integrity": "sha512-dh2hPQFOPP0tLEYlFxtGI5vuQmRqkOdYni5wMKUHIx5I2dw0TJ1HdG7P+UechRWt6TvwPWhtbjVNQcQf1KXJmQ==",
"requires": {
"sdp": "1.5.4"
"rtcpeerconnection-shim": "1.2.8",
"sdp": "2.6.0"
}
},
"websocket": {


@@ -179,7 +179,7 @@
"react-intl": "2.1.5",
"react-markdown": "2.4.4",
"react-portal": "3.0.0",
"react-qr-reader": "1.1.3",
"react-qr-reader": "2.0.1",
"react-redux": "4.4.6",
"react-router": "3.0.0",
"react-router-redux": "4.0.7",
@@ -201,6 +201,7 @@
"utf8": "2.1.2",
"valid-url": "1.0.9",
"validator": "6.2.0",
"webrtc-adapter": "5.0.6",
"whatwg-fetch": "2.0.1",
"worker-loader": "^0.8.0",
"zxcvbn": "4.4.1"


@@ -4,4 +4,5 @@
"author": "Parity <admin@parity.io>",
"description": "Parity Wallet and Account management tools",
"iconUrl": "icon.png",
"allowJsEval": true
}

js/package-lock.json generated

@@ -49,9 +49,9 @@
"dev": true,
"requires": {
"@parity/api": "2.1.15",
"@parity/mobx": "1.0.7",
"@parity/mobx": "1.1.2",
"@parity/ui": "3.0.22",
"mobx": "3.4.1",
"mobx": "3.5.1",
"mobx-react": "4.3.5",
"prop-types": "15.6.0",
"react": "16.2.0",
@@ -62,19 +62,27 @@
"semantic-ui-react": "0.77.2"
},
"dependencies": {
"@parity/jsonrpc": {
"version": "2.1.5",
"resolved": "https://registry.npmjs.org/@parity/jsonrpc/-/jsonrpc-2.1.5.tgz",
"integrity": "sha512-M6aLgssTfqloNgVFuzxSQ3J5RJ5T9g4a4wka1QVumaud7e4ubFjuJgR0F+0aQ/H1zdiTSMDHSmoaeAp8UoE4fA==",
"dev": true
},
"@parity/mobx": {
"version": "1.0.7",
"resolved": "https://registry.npmjs.org/@parity/mobx/-/mobx-1.0.7.tgz",
"integrity": "sha512-HC9VFcFnZ+h/YZWSiA2vIJcXK2yhLNFipPxAIMkDMClgNX9sOxrItmjmTfETAlHVM/axO2FIluLCd3VO/Xze8w==",
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/@parity/mobx/-/mobx-1.1.2.tgz",
"integrity": "sha512-ctAYYGYVVWwoPjn1TdWMdKZRLEGgcgUvSLUFrHbr+IBq3T+2fUedMLcIxGzEOatd/Y7s+YKvk9S1TcT954GH8g==",
"dev": true,
"requires": {
"@parity/ledger": "2.1.2"
"@parity/jsonrpc": "2.1.5",
"@parity/ledger": "2.1.2",
"@parity/shared": "2.2.23"
}
},
"mobx": {
"version": "3.4.1",
"resolved": "https://registry.npmjs.org/mobx/-/mobx-3.4.1.tgz",
"integrity": "sha1-N6vl7ogtQBgo2fJsbBovR2FLu+8=",
"version": "3.5.1",
"resolved": "https://registry.npmjs.org/mobx/-/mobx-3.5.1.tgz",
"integrity": "sha1-jmguxTXPROBABbnjfi32asyXWkI=",
"dev": true
},
"prop-types": {
@@ -132,9 +140,9 @@
"dev": true,
"requires": {
"@parity/api": "2.1.15",
"@parity/mobx": "1.0.7",
"@parity/mobx": "1.1.2",
"@parity/ui": "3.0.22",
"mobx": "3.4.1",
"mobx": "3.5.1",
"mobx-react": "4.3.5",
"prop-types": "15.6.0",
"react": "16.2.0",
@@ -145,19 +153,27 @@
"semantic-ui-react": "0.77.2"
},
"dependencies": {
"@parity/jsonrpc": {
"version": "2.1.5",
"resolved": "https://registry.npmjs.org/@parity/jsonrpc/-/jsonrpc-2.1.5.tgz",
"integrity": "sha512-M6aLgssTfqloNgVFuzxSQ3J5RJ5T9g4a4wka1QVumaud7e4ubFjuJgR0F+0aQ/H1zdiTSMDHSmoaeAp8UoE4fA==",
"dev": true
},
"@parity/mobx": {
"version": "1.0.7",
"resolved": "https://registry.npmjs.org/@parity/mobx/-/mobx-1.0.7.tgz",
"integrity": "sha512-HC9VFcFnZ+h/YZWSiA2vIJcXK2yhLNFipPxAIMkDMClgNX9sOxrItmjmTfETAlHVM/axO2FIluLCd3VO/Xze8w==",
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/@parity/mobx/-/mobx-1.1.2.tgz",
"integrity": "sha512-ctAYYGYVVWwoPjn1TdWMdKZRLEGgcgUvSLUFrHbr+IBq3T+2fUedMLcIxGzEOatd/Y7s+YKvk9S1TcT954GH8g==",
"dev": true,
"requires": {
"@parity/ledger": "2.1.2"
"@parity/jsonrpc": "2.1.5",
"@parity/ledger": "2.1.2",
"@parity/shared": "2.2.23"
}
},
"mobx": {
"version": "3.4.1",
"resolved": "https://registry.npmjs.org/mobx/-/mobx-3.4.1.tgz",
"integrity": "sha1-N6vl7ogtQBgo2fJsbBovR2FLu+8=",
"version": "3.5.1",
"resolved": "https://registry.npmjs.org/mobx/-/mobx-3.5.1.tgz",
"integrity": "sha1-jmguxTXPROBABbnjfi32asyXWkI=",
"dev": true
},
"prop-types": {
@@ -235,10 +251,10 @@
"dev": true,
"requires": {
"@parity/api": "2.1.15",
"@parity/mobx": "1.0.7",
"@parity/mobx": "1.1.2",
"@parity/ui": "3.0.22",
"format-number": "3.0.0",
"mobx": "3.4.1",
"mobx": "3.5.1",
"mobx-react": "4.3.5",
"prop-types": "15.6.0",
"react": "16.2.0",
@@ -250,19 +266,27 @@
"semantic-ui-react": "0.77.0"
},
"dependencies": {
"@parity/jsonrpc": {
"version": "2.1.5",
"resolved": "https://registry.npmjs.org/@parity/jsonrpc/-/jsonrpc-2.1.5.tgz",
"integrity": "sha512-M6aLgssTfqloNgVFuzxSQ3J5RJ5T9g4a4wka1QVumaud7e4ubFjuJgR0F+0aQ/H1zdiTSMDHSmoaeAp8UoE4fA==",
"dev": true
},
"@parity/mobx": {
"version": "1.0.7",
"resolved": "https://registry.npmjs.org/@parity/mobx/-/mobx-1.0.7.tgz",
"integrity": "sha512-HC9VFcFnZ+h/YZWSiA2vIJcXK2yhLNFipPxAIMkDMClgNX9sOxrItmjmTfETAlHVM/axO2FIluLCd3VO/Xze8w==",
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/@parity/mobx/-/mobx-1.1.2.tgz",
"integrity": "sha512-ctAYYGYVVWwoPjn1TdWMdKZRLEGgcgUvSLUFrHbr+IBq3T+2fUedMLcIxGzEOatd/Y7s+YKvk9S1TcT954GH8g==",
"dev": true,
"requires": {
"@parity/ledger": "2.1.2"
"@parity/jsonrpc": "2.1.5",
"@parity/ledger": "2.1.2",
"@parity/shared": "2.2.23"
}
},
"mobx": {
"version": "3.4.1",
"resolved": "https://registry.npmjs.org/mobx/-/mobx-3.4.1.tgz",
"integrity": "sha1-N6vl7ogtQBgo2fJsbBovR2FLu+8=",
"version": "3.5.1",
"resolved": "https://registry.npmjs.org/mobx/-/mobx-3.5.1.tgz",
"integrity": "sha1-jmguxTXPROBABbnjfi32asyXWkI=",
"dev": true
},
"prop-types": {
@@ -9045,6 +9069,9 @@
"verror": "1.10.0"
}
},
"jsqr": {
"version": "git+https://github.com/cozmo/jsQR.git#1fb946a235abdc7709f04cd0e4aa316a3b6eae70"
},
"jsx-ast-utils": {
"version": "1.4.1",
"resolved": "https://registry.npmjs.org/jsx-ast-utils/-/jsx-ast-utils-1.4.1.tgz",
@@ -12850,14 +12877,9 @@
"resolved": "https://registry.npmjs.org/react-qr-reader/-/react-qr-reader-2.0.1.tgz",
"integrity": "sha512-J/VuCq/udEqry7Z4nXGTbguetfSdRJr1Cv0lYKbVKIW1blfhB0Xe6OjO+5Th5B8884+s40BDFwtqP67h7YTtYA==",
"requires": {
"jsqr": "git+https://github.com/cozmo/jsQR.git#397a6eb8f90752cf640cb4bb67ba6f86e3bb5d1f",
"jsqr": "git+https://github.com/cozmo/jsQR.git#1fb946a235abdc7709f04cd0e4aa316a3b6eae70",
"prop-types": "15.5.10",
"webrtc-adapter": "5.0.6"
},
"dependencies": {
"jsqr": {
"version": "git+https://github.com/cozmo/jsQR.git#397a6eb8f90752cf640cb4bb67ba6f86e3bb5d1f"
}
}
},
"react-redux": {


@@ -135,6 +135,7 @@ export default class Dapp extends Component {
return (
<iframe
allow='camera'
className={ styles.frame }
frameBorder={ 0 }
id='dappFrame'


@@ -16,7 +16,7 @@
//! Ethash params deserialization.
-use uint::Uint;
+use uint::{self, Uint};
use hash::Address;
/// Deserializable doppelganger of EthashParams.
@@ -27,12 +27,15 @@ pub struct EthashParams {
pub minimum_difficulty: Uint,
/// See main EthashParams docs.
#[serde(rename="difficultyBoundDivisor")]
#[serde(deserialize_with="uint::validate_non_zero")]
pub difficulty_bound_divisor: Uint,
/// See main EthashParams docs.
#[serde(rename="difficultyIncrementDivisor")]
#[serde(default, deserialize_with="uint::validate_optional_non_zero")]
pub difficulty_increment_divisor: Option<Uint>,
/// See main EthashParams docs.
#[serde(rename="metropolisDifficultyIncrementDivisor")]
#[serde(default, deserialize_with="uint::validate_optional_non_zero")]
pub metropolis_difficulty_increment_divisor: Option<Uint>,
/// See main EthashParams docs.
#[serde(rename="durationLimit")]
@@ -60,6 +63,7 @@ pub struct EthashParams {
pub difficulty_hardfork_transition: Option<Uint>,
/// See main EthashParams docs.
#[serde(rename="difficultyHardforkBoundDivisor")]
#[serde(default, deserialize_with="uint::validate_optional_non_zero")]
pub difficulty_hardfork_bound_divisor: Option<Uint>,
/// See main EthashParams docs.
#[serde(rename="bombDefuseTransition")]
@@ -302,4 +306,17 @@ mod tests {
}
});
}
#[test]
#[should_panic(expected = "a non-zero value")]
fn test_zero_value_divisor() {
let s = r#"{
"params": {
"difficultyBoundDivisor": "0x0",
"minimumDifficulty": "0x020000"
}
}"#;
let _deserialized: Ethash = serde_json::from_str(s).unwrap();
}
}


@@ -16,7 +16,7 @@
//! Spec params deserialization.
-use uint::Uint;
+use uint::{self, Uint};
use hash::{H256, Address};
use bytes::Bytes;
@@ -98,10 +98,9 @@ pub struct Params {
pub nonce_cap_increment: Option<Uint>,
/// See `CommonParams` docs.
pub remove_dust_contracts : Option<bool>,
-/// Wasm support flag
-pub wasm: Option<bool>,
/// See `CommonParams` docs.
#[serde(rename="gasLimitBoundDivisor")]
+#[serde(deserialize_with="uint::validate_non_zero")]
pub gas_limit_bound_divisor: Uint,
/// See `CommonParams` docs.
pub registrar: Option<Address>,
@@ -117,6 +116,9 @@ pub struct Params {
/// Transaction permission contract address.
#[serde(rename="transactionPermissionContract")]
pub transaction_permission_contract: Option<Address>,
/// Wasm activation block height, if not activated from start
#[serde(rename="wasmActivationTransition")]
pub wasm_activation_transition: Option<Uint>,
}
#[cfg(test)]
@@ -136,7 +138,8 @@ mod tests {
"minGasLimit": "0x1388",
"accountStartNonce": "0x01",
"gasLimitBoundDivisor": "0x20",
"maxCodeSize": "0x1000"
"maxCodeSize": "0x1000",
"wasmActivationTransition": "0x1010"
}"#;
let deserialized: Params = serde_json::from_str(s).unwrap();
@@ -148,5 +151,23 @@ mod tests {
assert_eq!(deserialized.account_start_nonce, Some(Uint(U256::from(0x01))));
assert_eq!(deserialized.gas_limit_bound_divisor, Uint(U256::from(0x20)));
assert_eq!(deserialized.max_code_size, Some(Uint(U256::from(0x1000))));
assert_eq!(deserialized.wasm_activation_transition, Some(Uint(U256::from(0x1010))));
}
#[test]
#[should_panic(expected = "a non-zero value")]
fn test_zero_value_divisor() {
let s = r#"{
"maximumExtraDataSize": "0x20",
"networkID" : "0x1",
"chainID" : "0x15",
"subprotocolName" : "exp",
"minGasLimit": "0x1388",
"accountStartNonce": "0x01",
"gasLimitBoundDivisor": "0x0",
"maxCodeSize": "0x1000"
}"#;
let _deserialized: Params = serde_json::from_str(s).unwrap();
}
}


@@ -19,7 +19,7 @@
use std::fmt;
use std::str::FromStr;
use serde::{Deserialize, Deserializer};
-use serde::de::{Error, Visitor};
+use serde::de::{Error, Visitor, Unexpected};
use bigint::prelude::U256;
/// Lenient uint json deserialization for test json files.
@@ -90,6 +90,28 @@ impl<'a> Visitor<'a> for UintVisitor {
}
}
pub fn validate_non_zero<'de, D>(d: D) -> Result<Uint, D::Error> where D: Deserializer<'de> {
let value = Uint::deserialize(d)?;
if value == Uint(U256::from(0)) {
return Err(Error::invalid_value(Unexpected::Unsigned(value.into()), &"a non-zero value"))
}
Ok(value)
}
pub fn validate_optional_non_zero<'de, D>(d: D) -> Result<Option<Uint>, D::Error> where D: Deserializer<'de> {
let value: Option<Uint> = Option::deserialize(d)?;
if let Some(value) = value {
if value == Uint(U256::from(0)) {
return Err(Error::invalid_value(Unexpected::Unsigned(value.into()), &"a non-zero value"))
}
}
Ok(value)
}
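The two validators above reject a zero divisor while the chain spec is being deserialized, instead of letting a division by zero surface later. A minimal stdlib-only sketch of the same check, without the serde machinery (`parse_non_zero_uint` is a hypothetical helper, not part of the codebase):

```rust
// Sketch of the non-zero validation idea: parse a hex quantity like
// "0x20" and reject zero, mirroring what `uint::validate_non_zero`
// does for deserialized divisor fields.
fn parse_non_zero_uint(s: &str) -> Result<u64, String> {
    let digits = s.strip_prefix("0x").unwrap_or(s);
    let value = u64::from_str_radix(digits, 16)
        .map_err(|e| format!("invalid uint {:?}: {}", s, e))?;
    if value == 0 {
        // Corresponds to Error::invalid_value(..., &"a non-zero value")
        return Err("invalid value 0, expected a non-zero value".to_string());
    }
    Ok(value)
}

fn main() {
    assert_eq!(parse_non_zero_uint("0x20"), Ok(0x20));
    assert!(parse_non_zero_uint("0x0").is_err());
}
```

The optional variant in the diff applies the same predicate only when the field is present, which is why it deserializes through `Option::deserialize` first.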
#[cfg(test)]
mod test {
use serde_json;


@@ -289,7 +289,7 @@ mod server {
self.endpoints.list()
.into_iter()
.map(|app| rpc_apis::LocalDapp {
-id: app.id,
+id: app.id.unwrap_or_else(|| "unknown".into()),
name: app.name,
description: app.description,
version: app.version,


@@ -676,11 +676,15 @@ pub fn execute_impl(cmd: RunCmd, can_restart: bool, logger: Arc<RotatingLogger>)
let event_loop = EventLoop::spawn();
// the updater service
let mut updater_fetch = fetch.clone();
// parity binaries should be smaller than 128MB
updater_fetch.set_limit(Some(128 * 1024 * 1024));
let updater = Updater::new(
Arc::downgrade(&(service.client() as Arc<BlockChainClient>)),
Arc::downgrade(&sync_provider),
update_policy,
-fetch.clone(),
+updater_fetch,
event_loop.remote(),
);
service.add_notify(updater.clone());


@@ -22,13 +22,10 @@ echo "Parity version: " $VER
echo "Branch: " $CI_BUILD_REF_NAME
echo "--------------------"
echo "Rhash version:"
# NOTE for md5 and sha256 we want to display filename as well
# hence we use --* instead of -p *
MD5_BIN="rhash --md5"
SHA256_BIN="rhash --sha256"
# NOTE For SHA3 we need only hash (hence -p)
SHA3_BIN="rhash -p %{sha3-256}"
set_env () {
echo "Set ENVIROMENT"
@@ -70,14 +67,12 @@ strip_binaries () {
calculate_checksums () {
echo "Checksum calculation:"
rhash --version
rm -rf *.md5
rm -rf *.sha256
-export SHA3="$($SHA3_BIN target/$PLATFORM/release/parity$S3WIN)"
-# NOTE rhash 1.3.1 doesnt support keccak, workaround
-if [ "$SHA3" == "%{sha3-256}" ]; then
-export SHA3="$(target/$PLATFORM/release/parity$S3WIN tools hash target/$PLATFORM/release/parity$S3WIN)"
-fi
+BIN="target/$PLATFORM/release/parity$S3WIN"
+export SHA3="$($BIN tools hash $BIN)"
echo "Parity file SHA3: $SHA3"
$MD5_BIN target/$PLATFORM/release/parity$S3WIN > parity$S3WIN.md5
@@ -308,13 +303,13 @@ case $BUILD_PLATFORM in
x86_64-unknown-snap-gnu)
ARC="amd64"
EXT="snap"
-apt install -y expect zip
+apt install -y expect zip rhash
snapcraft clean
echo "Prepare snapcraft.yaml for build on Gitlab CI in Docker image"
sed -i 's/git/'"$VER"'/g' snap/snapcraft.yaml
if [[ "$CI_BUILD_REF_NAME" = "beta" || "$VER" == *1.9* ]];
then
-sed -i -e 's/grade: devel/grade: beta/' snap/snapcraft.yaml;
+sed -i -e 's/grade: devel/grade: stable/' snap/snapcraft.yaml;
fi
mv -f snap/snapcraft.yaml snapcraft.yaml
snapcraft -d


@@ -127,6 +127,11 @@ impl Client {
})
}
/// Sets a limit on the maximum download size.
pub fn set_limit(&mut self, limit: Option<usize>) {
self.limit = limit
}
fn client(&self) -> Result<Arc<reqwest::Client>, Error> {
{
let (ref time, ref client) = *self.client.read();
@@ -150,8 +155,8 @@ impl Fetch for Client {
type Result = CpuFuture<Response, Error>;
fn new() -> Result<Self, Error> {
-// Max 50MB will be downloaded.
-Self::with_limit(Some(50*1024*1024))
+// Max 64MB will be downloaded.
+Self::with_limit(Some(64 * 1024 * 1024))
}
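The `set_limit` change above only stores the cap; it takes effect when the response body is read. A rough stdlib-only sketch of that enforcement (an assumed shape, not the actual fetch client internals):

```rust
use std::io::{self, Read};

// Read at most `limit` bytes from `source`, erroring out if the
// payload would exceed the cap, as a fetch client must when it
// enforces a download size limit (e.g. the 128MB updater cap).
fn read_with_limit<R: Read>(source: R, limit: usize) -> io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    // take(limit + 1): receiving limit + 1 bytes proves the body is too big.
    source.take(limit as u64 + 1).read_to_end(&mut buf)?;
    if buf.len() > limit {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "response body exceeds download limit",
        ));
    }
    Ok(buf)
}

fn main() {
    let data = [0u8; 10];
    assert_eq!(read_with_limit(&data[..], 16).unwrap().len(), 10);
    assert!(read_with_limit(&data[..], 4).is_err());
}
```

Reading one byte past the limit, rather than comparing a Content-Length header, also catches servers that lie about or omit the length.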
fn process<F, I, E>(&self, f: F) -> BoxFuture<I, E> where