* ethash: implement EIP-1234 (#9187)
* Implement EIP-1052 (EXTCODEHASH) and fix several issues in state account cache (#9234)
* Implement EIP-1052 and fix several issues related to account cache
* Fix jsontests
* Merge two matches together
* Avoid making unnecessary Arc<Vec>
* Address grumbles
* Bring EIP-86 into compliance with the new definition (#9140)
* Bring EIP-86 into compliance with the new CREATE2 opcode
* Fix rpc compile
* Fix interpreter CREATE/CREATE2 stack pop difference
* Add unreachable! to fix compile
* Fix instruction_info
* Fix gas check due to new stack item
* Add new tests in executive
* Fix have_create2 comment
* Remove all unused references of eip86_transition and block_number
* Implement KIP4: create2 for wasm (#9277)
* Basic implementation for kip4
* Add KIP-4 config flags
* typo: docs fix
* Fix args offset
* Add tests for create2
* tests: evm
* Update wasm-tests and fix all gas costs
* Update wasm-tests
* Update wasm-tests and fix gas costs
* `gasleft` extern implemented for WASM runtime (kip-6) (#9357)
* Wasm gasleft extern added
* wasm_gasleft_activation_transition -> kip4_transition
* use kip-6 switch
* gasleft_panic -> gasleft_fail rename
* call_msg_gasleft test added and gas_left adjustments due to https://github.com/paritytech/wasm-tests/pull/52
* change .. to _
* fix comment for the have_gasleft param
* update tests (0edbf860ff)
* Add EIP-1014 transition config flag (#9268)
* Add EIP-1014 transition config flag
* Remove EIP-86 configs
* Change CREATE2 opcode index to 0xf5
* Move salt to the last item in the stack
* Change the sender/salt/address scheme to comply with the current EIP-1014
* Fix json configs
* Fix create2 test
* Fix deprecated comments
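For reference, the EIP-1014 scheme these commits converge on derives the contract address as `keccak256(0xff ++ sender ++ salt ++ keccak256(init_code))[12..32]`. A minimal dependency-free sketch follows; the hash function is passed in as a closure (a real implementation would plug in Keccak-256), and the function name is illustrative, not Parity's actual API:

```rust
// Sketch of the EIP-1014 CREATE2 address derivation:
//     address = keccak256(0xff ++ sender ++ salt ++ keccak256(init_code))[12..32]
// The hasher is injected so the sketch stays self-contained; substitute a
// real Keccak-256 implementation in practice.
fn create2_address(
    sender: &[u8; 20],
    salt: &[u8; 32],
    init_code: &[u8],
    keccak256: &dyn Fn(&[u8]) -> [u8; 32],
) -> [u8; 20] {
    // Preimage layout: 1 + 20 + 32 + 32 = 85 bytes.
    let mut preimage = Vec::with_capacity(85);
    preimage.push(0xff);
    preimage.extend_from_slice(sender);
    preimage.extend_from_slice(salt);
    preimage.extend_from_slice(&keccak256(init_code));
    let hash = keccak256(&preimage);
    // The address is the last 20 bytes of the 32-byte hash.
    let mut address = [0u8; 20];
    address.copy_from_slice(&hash[12..]);
    address
}

fn main() {
    // Dummy stand-in hash for demonstration only (NOT Keccak-256).
    let dummy = |data: &[u8]| -> [u8; 32] {
        let mut out = [0u8; 32];
        for (i, b) in data.iter().enumerate() {
            out[i % 32] ^= *b;
        }
        out
    };
    let addr = create2_address(&[0u8; 20], &[0u8; 32], &[], &dummy);
    println!("create2 address (dummy hash): {:02x?}", addr);
}
```

Because the address depends only on sender, salt, and init code (not the sender's nonce, as with plain CREATE), the deployed address can be computed before any transaction is sent.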
* EIP 1283: Net gas metering for SSTORE without dirty maps (#9319)
* Implement last_checkpoint_storage_at
* Add reverted_storage_at for externalities
* sstore_clears_count -> sstore_clears_refund
* Implement eip1283 for evm
* Add eip1283Transition params
* evm: fix tests
* jsontests: fix test
* Return checkpoint index when creating
* Comply with spec Version II
* Fix docs
* Fix jsontests feature compile
* Address grumbles
* Fix no-checkpoint-entry case
* Remove unnecessary expect
* Add test for State::checkpoint_storage_at
* Add executive level test for eip1283
* Hard-code transaction_checkpoint_index to 0
* Fix jsontests
* Add tests for checkpoint discard/revert
* Require checkpoint to be empty for kill_account and commit
* Get code coverage
* Use saturating_add/saturating_sub
* Fix issues in insert_cache
* Clear the state again
* Fix original_storage_at
* Early return for empty RLP trie storage
* Update comments
* Fix borrow_mut issue
* Simplify checkpoint_storage_at if branches
* Better commenting for gas handling code
* Address naming grumbles
* More tests
* Fix an issue in overwrite_with
* Add another test
* Fix comment
* Remove unnecessary bracket
* Move orig to inner if
* Remove test coverage for this PR
* Add tests for executive original value
* Add warn! for an unreachable cause
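The EIP-1283 work above keys SSTORE pricing on three values per slot: the original value (at transaction start), the current value, and the new value, with refunds that can be negative (hence the `sstore_clears_count -> sstore_clears_refund` rename). A simplified sketch of the spec's rules, with constants per the EIP, follows; this is an illustration of the metering scheme, not Parity's actual implementation:

```rust
// Hedged sketch of EIP-1283 net gas metering for SSTORE. Returns
// (gas_cost, refund_delta); refund_delta may be negative, which is why a
// signed refund counter is needed.
fn sstore_gas(original: u64, current: u64, new: u64) -> (u64, i64) {
    const SLOAD_GAS: u64 = 200;
    const SSTORE_SET_GAS: u64 = 20_000;
    const SSTORE_RESET_GAS: u64 = 5_000;
    const SSTORE_CLEARS_REFUND: i64 = 15_000;

    if current == new {
        // No-op write: charged like a load.
        return (SLOAD_GAS, 0);
    }
    if original == current {
        // "Clean" slot: not yet written in this transaction.
        if original == 0 {
            (SSTORE_SET_GAS, 0)
        } else if new == 0 {
            (SSTORE_RESET_GAS, SSTORE_CLEARS_REFUND)
        } else {
            (SSTORE_RESET_GAS, 0)
        }
    } else {
        // "Dirty" slot: already written in this transaction.
        let mut refund = 0i64;
        if original != 0 {
            if current == 0 { refund -= SSTORE_CLEARS_REFUND; }
            if new == 0 { refund += SSTORE_CLEARS_REFUND; }
        }
        if original == new {
            // Value restored to its original: refund the difference between
            // what was charged earlier and the no-op cost.
            if original == 0 {
                refund += (SSTORE_SET_GAS - SLOAD_GAS) as i64; // 19_800
            } else {
                refund += (SSTORE_RESET_GAS - SLOAD_GAS) as i64; // 4_800
            }
        }
        (SLOAD_GAS, refund)
    }
}

fn main() {
    assert_eq!(sstore_gas(0, 0, 1), (20_000, 0)); // fresh set
    assert_eq!(sstore_gas(0, 1, 0), (200, 19_800)); // dirty, restored to original
    assert_eq!(sstore_gas(1, 1, 0), (5_000, 15_000)); // clean clear
}
```

The "dirty" branch is what requires tracking the original (pre-transaction) value, which is why the commits above add `original_storage_at` and per-checkpoint storage queries.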
* Update state tests execution model (#9440)
* Update & fix JSON state tests.
* Update tests to be able to run ethtest at 021fe3d410773024cd5f0387e62db6e6ec800f32.
- Touch user in state
- Adjust transaction tests to new json format
* Switch the ethereum/tests submodule to the same commit as geth (the next one includes Constantinople changes).
Added test `json_tests::trie::generic::TrieTests_trieanyorder` and a few
difficulty tests.
* Remove trietestnextprev, as it would require different parsing and a separate implementation.
* Support new (shitty) format of transaction tests.
* Ignore junk in ethereum/tests repo.
* Ignore incorrect test.
* Update to a later commit
* Move block number to a constant.
* Fix ZK2 test - touched account should also be cleared.
* Fix conflict resolution
* Fix checkpointing when creating contract failed (#9514)
* In create memory calculation is the same for create2 because the additional parameter was popped before. (#9522)
* Enable all Constantinople hard fork changes in constantinople_test.json (#9505)
* Enable all Constantinople hard fork changes in constantinople_test.json
* Address grumbles
* Remove EIP-210 activation
* 8m -> 5m
* Temporarily add back eip210 transition so we can get test passed
* Add eip210_test and remove eip210 transition from const_test
* Add constantinople conf to EvmTestClient. (#9570)
* Add constantinople conf to EvmTestClient.
* Skip some tests so the ethereum/tests submodule can be updated to latest.
* Put skipping 'under issue' test behind a feature.
* Change blockReward for const-test to pass ethereum/tests
* Update tests to the new constantinople definition (change of reward at block 5).
Switch 'reference' to a string; that way we can include issues from other repos (more flexible).
* Fix modexp and bn128_mul gas prices in chain config
* Change the `run_test_path` method to aggregate results across its directory (without this it stopped testing at the first file failure).
Add some missing tests.
Add skips for those (the block create2 test is definitely wrong, but on hive we can see that geth and aleth have a similar issue with this item).
* retab current.json
* Update reference to parity issue for failing tests.
* Hardfork the testnets (#9562)
* ethcore: propose hardfork block number 4230000 for ropsten
* ethcore: propose hardfork block number 9000000 for kovan
* ethcore: enable kip-4 and kip-6 on kovan
* ethcore: bump kovan hardfork to block 9.2M
* ethcore: fix ropsten constantinople block number to 4.2M
* ethcore: disable difficulty_test_ropsten until ethereum/tests are updated upstream
* Don't hash the init_code of CREATE. (#9688)
* Implement CREATE2 gas changes and fix some potential overflowing (#9694)
* Implement CREATE2 gas changes and fix some potential overflowing
* Ignore create2 state tests
* Split CREATE and CREATE2 in gasometer
* Generalize rounding (x + 31) / 32 to to_word_size
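The `(x + 31) / 32` rounding converts a byte length to a count of 256-bit EVM words for gas charging. Since the same commit also fixes potential overflow, a sketch using overflow-checked arithmetic is shown below; the function name and `Option` return are illustrative, not Parity's exact API:

```rust
// Sketch of the generalized word-size rounding: EVM gas rules charge per
// 32-byte word, so a byte length rounds up via (x + 31) / 32.
// `checked_add` guards the `x + 31` step against u64 overflow.
fn to_word_size(bytes: u64) -> Option<u64> {
    bytes.checked_add(31).map(|n| n / 32)
}

fn main() {
    assert_eq!(to_word_size(0), Some(0));
    assert_eq!(to_word_size(1), Some(1));
    assert_eq!(to_word_size(32), Some(1));
    assert_eq!(to_word_size(33), Some(2));
    // Near u64::MAX the naive `(x + 31) / 32` would wrap; here it is caught.
    assert_eq!(to_word_size(u64::max_value()), None);
}
```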
* ethcore: delay ropsten hardfork (#9704)
* Add hardcoded headers (#9730)
* add foundation hardcoded header #6486017
* add ropsten hardcoded headers #4202497
* add kovan hardcoded headers #9023489
* gitlab ci: releasable_branches: change variables condition to schedule (#9729)
* HF in POA Core (2018-10-22) (#9724)
https://github.com/poanetwork/poa-chain-spec/pull/87
// Copyright 2015-2018 Parity Technologies (UK) Ltd.
// This file is part of Parity.

// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.

//! Helpers for fetching blockchain data either from the light client or the network.

use std::cmp;
use std::sync::Arc;

use ethcore::basic_account::BasicAccount;
use ethcore::encoded;
use ethcore::ids::BlockId;
use ethcore::filter::Filter as EthcoreFilter;
use ethcore::receipt::Receipt;

use jsonrpc_core::{Result, Error};
use jsonrpc_core::futures::{future, Future};
use jsonrpc_core::futures::future::Either;
use jsonrpc_macros::Trailing;

use light::cache::Cache;
use light::client::LightChainClient;
use light::{cht, MAX_HEADERS_PER_REQUEST};
use light::on_demand::{
    request, OnDemand, HeaderRef, Request as OnDemandRequest,
    Response as OnDemandResponse, ExecutionResult,
};
use light::request::Field;

use sync::LightSync;
use ethereum_types::{U256, Address};
use hash::H256;
use parking_lot::Mutex;
use plain_hasher::H256FastMap;
use transaction::{Action, Transaction as EthTransaction, SignedTransaction, LocalizedTransaction};

use v1::helpers::{CallRequest as CallRequestHelper, errors, dispatch};
use v1::types::{BlockNumber, CallRequest, Log, Transaction};

const NO_INVALID_BACK_REFS: &str = "Fails only on invalid back-references; back-references here known to be valid; qed";

/// Helper for fetching blockchain data either from the light client or the network
/// as necessary.
#[derive(Clone)]
pub struct LightFetch {
    /// The light client.
    pub client: Arc<LightChainClient>,
    /// The on-demand request service.
    pub on_demand: Arc<OnDemand>,
    /// Handle to the network.
    pub sync: Arc<LightSync>,
    /// The light data cache.
    pub cache: Arc<Mutex<Cache>>,
    /// Gas price percentile.
    pub gas_price_percentile: usize,
}

/// Extract a transaction at given index.
pub fn extract_transaction_at_index(block: encoded::Block, index: usize) -> Option<Transaction> {
    block.transactions().into_iter().nth(index)
        // Verify if transaction signature is correct.
        .and_then(|tx| SignedTransaction::new(tx).ok())
        .map(|signed_tx| {
            let (signed, sender, _) = signed_tx.deconstruct();
            let block_hash = block.hash();
            let block_number = block.number();
            let transaction_index = index;
            let cached_sender = Some(sender);

            LocalizedTransaction {
                signed,
                block_number,
                block_hash,
                transaction_index,
                cached_sender,
            }
        })
        .map(|tx| Transaction::from_localized(tx))
}

// extract the header indicated by the given `HeaderRef` from the given responses.
// fails only if they do not correspond.
fn extract_header(res: &[OnDemandResponse], header: HeaderRef) -> Option<encoded::Header> {
    match header {
        HeaderRef::Stored(hdr) => Some(hdr),
        HeaderRef::Unresolved(idx, _) => match res.get(idx) {
            Some(&OnDemandResponse::HeaderByHash(ref hdr)) => Some(hdr.clone()),
            _ => None,
        },
    }
}

impl LightFetch {
    // push the necessary requests onto the request chain to get the header by the given ID.
    // yield a header reference which other requests can use.
    fn make_header_requests(&self, id: BlockId, reqs: &mut Vec<OnDemandRequest>) -> Result<HeaderRef> {
        if let Some(h) = self.client.block_header(id) {
            return Ok(h.into());
        }

        match id {
            BlockId::Number(n) => {
                let cht_root = cht::block_to_cht_number(n).and_then(|cn| self.client.cht_root(cn as usize));
                match cht_root {
                    None => Err(errors::unknown_block()),
                    Some(root) => {
                        let req = request::HeaderProof::new(n, root)
                            .expect("only fails for 0; client always stores genesis; client already queried; qed");

                        let idx = reqs.len();
                        let hash_ref = Field::back_ref(idx, 0);
                        reqs.push(req.into());
                        reqs.push(request::HeaderByHash(hash_ref.clone()).into());

                        Ok(HeaderRef::Unresolved(idx + 1, hash_ref))
                    }
                }
            }
            BlockId::Hash(h) => {
                let idx = reqs.len();
                reqs.push(request::HeaderByHash(h.into()).into());
                Ok(HeaderRef::Unresolved(idx, h.into()))
            }
            _ => Err(errors::unknown_block()) // latest, earliest, and pending will have all already returned.
        }
    }

    /// Get a block header from the on-demand service or client, or error.
    pub fn header(&self, id: BlockId) -> impl Future<Item = encoded::Header, Error = Error> + Send {
        let mut reqs = Vec::new();
        let header_ref = match self.make_header_requests(id, &mut reqs) {
            Ok(r) => r,
            Err(e) => return Either::A(future::err(e)),
        };

        Either::B(self.send_requests(reqs, |res|
            extract_header(&res, header_ref)
                .expect("these responses correspond to requests that header_ref belongs to \
                    therefore it will not fail; qed")
        ))
    }

    /// Helper for getting contract code at a given block.
    pub fn code(&self, address: Address, id: BlockId) -> impl Future<Item = Vec<u8>, Error = Error> + Send {
        let mut reqs = Vec::new();
        let header_ref = match self.make_header_requests(id, &mut reqs) {
            Ok(r) => r,
            Err(e) => return Either::A(future::err(e)),
        };

        reqs.push(request::Account { header: header_ref.clone(), address: address }.into());
        let account_idx = reqs.len() - 1;
        reqs.push(request::Code { header: header_ref, code_hash: Field::back_ref(account_idx, 0) }.into());

        Either::B(self.send_requests(reqs, |mut res| match res.pop() {
            Some(OnDemandResponse::Code(code)) => code,
            _ => panic!("responses correspond directly with requests in amount and type; qed"),
        }))
    }

    /// Helper for getting account info at a given block.
    /// `None` indicates the account doesn't exist at the given block.
    pub fn account(&self, address: Address, id: BlockId) -> impl Future<Item = Option<BasicAccount>, Error = Error> + Send {
        let mut reqs = Vec::new();
        let header_ref = match self.make_header_requests(id, &mut reqs) {
            Ok(r) => r,
            Err(e) => return Either::A(future::err(e)),
        };

        reqs.push(request::Account { header: header_ref, address: address }.into());

        Either::B(self.send_requests(reqs, |mut res| match res.pop() {
            Some(OnDemandResponse::Account(acc)) => acc,
            _ => panic!("responses correspond directly with requests in amount and type; qed"),
        }))
    }

    /// Helper for getting proved execution.
    pub fn proved_read_only_execution(&self, req: CallRequest, num: Trailing<BlockNumber>) -> impl Future<Item = ExecutionResult, Error = Error> + Send {
        const DEFAULT_GAS_PRICE: u64 = 21_000;
        // starting gas when gas not provided.
        const START_GAS: u64 = 50_000;

        let (sync, on_demand, client) = (self.sync.clone(), self.on_demand.clone(), self.client.clone());
        let req: CallRequestHelper = req.into();

        // Note: Here we treat `Pending` as `Latest`.
        // Since light clients don't produce pending blocks
        // (they don't have state) we can safely fall back to `Latest`.
        let id = match num.unwrap_or_default() {
            BlockNumber::Num(n) => BlockId::Number(n),
            BlockNumber::Earliest => BlockId::Earliest,
            BlockNumber::Latest => BlockId::Latest,
            BlockNumber::Pending => {
                warn!("`Pending` is deprecated and may be removed in future versions. Falling back to `Latest`");
                BlockId::Latest
            }
        };

        let from = req.from.unwrap_or_else(|| Address::zero());
        let nonce_fut = match req.nonce {
            Some(nonce) => Either::A(future::ok(Some(nonce))),
            None => Either::B(self.account(from, id).map(|acc| acc.map(|a| a.nonce))),
        };

        let gas_price_percentile = self.gas_price_percentile;
        let gas_price_fut = match req.gas_price {
            Some(price) => Either::A(future::ok(price)),
            None => Either::B(dispatch::fetch_gas_price_corpus(
                self.sync.clone(),
                self.client.clone(),
                self.on_demand.clone(),
                self.cache.clone(),
            ).map(move |corp| match corp.percentile(gas_price_percentile) {
                Some(percentile) => *percentile,
                None => DEFAULT_GAS_PRICE.into(),
            }))
        };

        // if nonce resolves, this should too since it'll be in the LRU-cache.
        let header_fut = self.header(id);

        // fetch missing transaction fields from the network.
        Box::new(nonce_fut.join(gas_price_fut).and_then(move |(nonce, gas_price)| {
            future::done(
                Ok((req.gas.is_some(), EthTransaction {
                    nonce: nonce.unwrap_or_default(),
                    action: req.to.map_or(Action::Create, Action::Call),
                    gas: req.gas.unwrap_or_else(|| START_GAS.into()),
                    gas_price,
                    value: req.value.unwrap_or_else(U256::zero),
                    data: req.data.unwrap_or_default(),
                }))
            )
        }).join(header_fut).and_then(move |((gas_known, tx), hdr)| {
            // then request proved execution.
            // TODO: get last-hashes from network.
            let hash = hdr.hash();
            let env_info = match client.env_info(BlockId::Hash(hash)) {
                Some(env_info) => env_info,
                _ => return Either::A(future::err(errors::unknown_block())),
            };

            Either::B(execute_read_only_tx(gas_known, ExecuteParams {
                from,
                tx,
                hdr,
                env_info,
                engine: client.engine().clone(),
                on_demand,
                sync,
            }))
        }))
    }

    /// Get a block itself. Fails on unknown block ID.
    pub fn block(&self, id: BlockId) -> impl Future<Item = encoded::Block, Error = Error> + Send {
        let mut reqs = Vec::new();
        let header_ref = match self.make_header_requests(id, &mut reqs) {
            Ok(r) => r,
            Err(e) => return Either::A(future::err(e)),
        };

        reqs.push(request::Body(header_ref).into());

        Either::B(self.send_requests(reqs, |mut res| match res.pop() {
            Some(OnDemandResponse::Body(b)) => b,
            _ => panic!("responses correspond directly with requests in amount and type; qed"),
        }))
    }

    /// Get the block receipts. Fails on unknown block ID.
    pub fn receipts(&self, id: BlockId) -> impl Future<Item = Vec<Receipt>, Error = Error> + Send {
        let mut reqs = Vec::new();
        let header_ref = match self.make_header_requests(id, &mut reqs) {
            Ok(r) => r,
            Err(e) => return Either::A(future::err(e)),
        };

        reqs.push(request::BlockReceipts(header_ref).into());

        Either::B(self.send_requests(reqs, |mut res| match res.pop() {
            Some(OnDemandResponse::Receipts(b)) => b,
            _ => panic!("responses correspond directly with requests in amount and type; qed"),
        }))
    }

    /// Get transaction logs.
    pub fn logs(&self, filter: EthcoreFilter) -> impl Future<Item = Vec<Log>, Error = Error> + Send {
        use std::collections::BTreeMap;
        use jsonrpc_core::futures::stream::{self, Stream};

        const MAX_BLOCK_RANGE: u64 = 1000;

        let fetcher = self.clone();
        self.headers_range_by_block_id(filter.from_block, filter.to_block, MAX_BLOCK_RANGE)
            .and_then(move |mut headers| {
                if headers.is_empty() {
                    return Either::A(future::ok(Vec::new()));
                }

                let on_demand = &fetcher.on_demand;

                let maybe_future = fetcher.sync.with_context(move |ctx| {
                    // find all headers which match the filter, and fetch the receipts for each one.
                    // match them with their numbers for easy sorting later.
                    let bit_combos = filter.bloom_possibilities();
                    let receipts_futures: Vec<_> = headers.drain(..)
                        .filter(|ref hdr| {
                            let hdr_bloom = hdr.log_bloom();
                            bit_combos.iter().any(|bloom| hdr_bloom.contains_bloom(bloom))
                        })
                        .map(|hdr| (hdr.number(), hdr.hash(), request::BlockReceipts(hdr.into())))
                        .map(|(num, hash, req)| on_demand.request(ctx, req).expect(NO_INVALID_BACK_REFS).map(move |x| (num, hash, x)))
                        .collect();

                    // as the receipts come in, find logs within them which match the filter.
                    // insert them into a BTreeMap to maintain order by number and block index.
                    stream::futures_unordered(receipts_futures)
                        .fold(BTreeMap::new(), move |mut matches, (num, hash, receipts)| {
                            let mut block_index = 0;
                            for (transaction_index, receipt) in receipts.into_iter().enumerate() {
                                for (transaction_log_index, log) in receipt.logs.into_iter().enumerate() {
                                    if filter.matches(&log) {
                                        matches.insert((num, block_index), Log {
                                            address: log.address.into(),
                                            topics: log.topics.into_iter().map(Into::into).collect(),
                                            data: log.data.into(),
                                            block_hash: Some(hash.into()),
                                            block_number: Some(num.into()),
                                            // No way to easily retrieve transaction hash, so let's just skip it.
                                            transaction_hash: None,
                                            transaction_index: Some(transaction_index.into()),
                                            log_index: Some(block_index.into()),
                                            transaction_log_index: Some(transaction_log_index.into()),
                                            log_type: "mined".into(),
                                            removed: false,
                                        });
                                    }
                                    block_index += 1;
                                }
                            }
                            future::ok(matches)
                        }) // and then collect them into a vector.
                        .map(|matches| matches.into_iter().map(|(_, v)| v).collect())
                        .map_err(errors::on_demand_cancel)
                });

                match maybe_future {
                    Some(fut) => Either::B(Either::A(fut)),
                    None => Either::B(Either::B(future::err(errors::network_disabled()))),
                }
            })
    }

    // Get a transaction by hash. Also returns the index in the block.
    // Only returns transactions in the canonical chain.
    pub fn transaction_by_hash(&self, tx_hash: H256)
        -> impl Future<Item = Option<(Transaction, usize)>, Error = Error> + Send
    {
        let params = (self.sync.clone(), self.on_demand.clone());
        let fetcher: Self = self.clone();

        Box::new(future::loop_fn(params, move |(sync, on_demand)| {
            let maybe_future = sync.with_context(|ctx| {
                let req = request::TransactionIndex(tx_hash.clone().into());
                on_demand.request(ctx, req)
            });

            let eventual_index = match maybe_future {
                Some(e) => e.expect(NO_INVALID_BACK_REFS).map_err(errors::on_demand_cancel),
                None => return Either::A(future::err(errors::network_disabled())),
            };

            let fetcher = fetcher.clone();
            let extract_transaction = eventual_index.and_then(move |index| {
                // check that the block is known by number.
                // that ensures that it is within the chain that we are aware of.
                fetcher.block(BlockId::Number(index.num)).then(move |blk| match blk {
                    Ok(blk) => {
                        // if the block is known by number, make sure the
                        // index from earlier isn't garbage.

                        if blk.hash() != index.hash {
                            // index is on a different chain from us.
                            return Ok(future::Loop::Continue((sync, on_demand)))
                        }

                        let index = index.index as usize;
                        let transaction = extract_transaction_at_index(blk, index);

                        if transaction.as_ref().map_or(true, |tx| tx.hash != tx_hash.into()) {
                            // index is actively wrong: indicated block has
                            // fewer transactions than necessary or the transaction
                            // at that index had a different hash.
                            // TODO: punish peer/move into OnDemand somehow?
                            Ok(future::Loop::Continue((sync, on_demand)))
                        } else {
                            let transaction = transaction.map(move |tx| (tx, index));
                            Ok(future::Loop::Break(transaction))
                        }
                    }
                    Err(ref e) if e == &errors::unknown_block() => {
                        // block by number not in the canonical chain.
                        Ok(future::Loop::Break(None))
                    }
                    Err(e) => Err(e),
                })
            });

            Either::B(extract_transaction)
        }))
    }

    fn send_requests<T, F>(&self, reqs: Vec<OnDemandRequest>, parse_response: F) -> impl Future<Item = T, Error = Error> + Send where
        F: FnOnce(Vec<OnDemandResponse>) -> T + Send + 'static,
        T: Send + 'static,
    {
        let maybe_future = self.sync.with_context(move |ctx| {
            Box::new(self.on_demand.request_raw(ctx, reqs)
                .expect(NO_INVALID_BACK_REFS)
                .map(parse_response)
                .map_err(errors::on_demand_cancel))
        });

        match maybe_future {
            Some(recv) => recv,
            None => Box::new(future::err(errors::network_disabled())) as Box<Future<Item = _, Error = _> + Send>
        }
    }

    fn headers_range_by_block_id(
        &self,
        from_block: BlockId,
        to_block: BlockId,
        max: u64
    ) -> impl Future<Item = Vec<encoded::Header>, Error = Error> {
        let fetch_hashes = [from_block, to_block].iter()
            .filter_map(|block_id| match block_id {
                BlockId::Hash(hash) => Some(hash.clone()),
                _ => None,
            })
            .collect::<Vec<_>>();

        let best_number = self.client.chain_info().best_block_number;

        let fetcher = self.clone();
        self.headers_by_hash(&fetch_hashes[..]).and_then(move |mut header_map| {
            let (from_block_num, to_block_num) = {
                let block_number = |id| match id {
                    &BlockId::Earliest => 0,
                    &BlockId::Latest => best_number,
                    &BlockId::Hash(ref h) =>
                        header_map.get(h).map(|hdr| hdr.number())
                            .expect("from_block and to_block headers are fetched by hash; this closure is only called on from_block and to_block; qed"),
                    &BlockId::Number(x) => x,
                };
                (block_number(&from_block), block_number(&to_block))
            };

            if to_block_num < from_block_num {
                // early exit for "to" block before "from" block.
                return Either::A(future::err(errors::filter_block_not_found(to_block)));
            } else if to_block_num - from_block_num >= max {
                return Either::A(future::err(errors::request_rejected_param_limit(max, "blocks")));
            }

            let to_header_hint = match to_block {
                BlockId::Hash(ref h) => header_map.remove(h),
                _ => None,
            };
            let headers_fut = fetcher.headers_range(from_block_num, to_block_num, to_header_hint);
            Either::B(headers_fut.map(move |headers| {
                // Validate from_block if it's a hash.
                let last_hash = headers.last().map(|hdr| hdr.hash());
                match (last_hash, from_block) {
                    (Some(h1), BlockId::Hash(h2)) if h1 != h2 => Vec::new(),
                    _ => headers,
                }
            }))
        })
    }

    fn headers_by_hash(&self, hashes: &[H256]) -> impl Future<Item = H256FastMap<encoded::Header>, Error = Error> {
        let mut refs = H256FastMap::with_capacity_and_hasher(hashes.len(), Default::default());
        let mut reqs = Vec::with_capacity(hashes.len());

        for hash in hashes {
            refs.entry(*hash).or_insert_with(|| {
                self.make_header_requests(BlockId::Hash(*hash), &mut reqs)
                    .expect("make_header_requests never fails for BlockId::Hash; qed")
            });
        }

        self.send_requests(reqs, move |res| {
            let headers = refs.drain()
                .map(|(hash, header_ref)| {
                    let hdr = extract_header(&res, header_ref)
                        .expect("these responses correspond to requests that header_ref belongs to; \
                            qed");
                    (hash, hdr)
                })
                .collect();
            headers
        })
    }

    fn headers_range(
        &self,
        from_number: u64,
        to_number: u64,
        to_header_hint: Option<encoded::Header>
    ) -> impl Future<Item = Vec<encoded::Header>, Error = Error> {
        let range_length = (to_number - from_number + 1) as usize;
        let mut headers: Vec<encoded::Header> = Vec::with_capacity(range_length);

        let iter_start = match to_header_hint {
            Some(hdr) => {
                let block_id = BlockId::Hash(hdr.parent_hash());
                headers.push(hdr);
                block_id
            }
            None => BlockId::Number(to_number),
        };
        headers.extend(self.client.ancestry_iter(iter_start)
            .take_while(|hdr| hdr.number() >= from_number));

        let fetcher = self.clone();
        future::loop_fn(headers, move |mut headers| {
            let remaining = range_length - headers.len();
            if remaining == 0 {
                return Either::A(future::ok(future::Loop::Break(headers)));
            }

            let mut reqs: Vec<request::Request> = Vec::with_capacity(2);

            let start_hash = if let Some(hdr) = headers.last() {
                hdr.parent_hash().into()
            } else {
                let cht_root = cht::block_to_cht_number(to_number)
                    .and_then(|cht_num| fetcher.client.cht_root(cht_num as usize));

                let cht_root = match cht_root {
                    Some(cht_root) => cht_root,
                    None => return Either::A(future::err(errors::unknown_block())),
                };

                let header_proof = request::HeaderProof::new(to_number, cht_root)
                    .expect("HeaderProof::new is Some(_) if cht::block_to_cht_number() is Some(_); \
                        this would return above if block_to_cht_number returned None; qed");

                let idx = reqs.len();
                let hash_ref = Field::back_ref(idx, 0);
                reqs.push(header_proof.into());

                hash_ref
            };

            let max = cmp::min(remaining as u64, MAX_HEADERS_PER_REQUEST);
            reqs.push(request::HeaderWithAncestors {
                block_hash: start_hash,
                ancestor_count: max - 1,
            }.into());

            Either::B(fetcher.send_requests(reqs, |mut res| {
                match res.last_mut() {
                    Some(&mut OnDemandResponse::HeaderWithAncestors(ref mut res_headers)) =>
                        headers.extend(res_headers.drain(..)),
                    _ => panic!("reqs has at least one entry; each request maps to a response; qed"),
                };
                future::Loop::Continue(headers)
            }))
        })
    }
}

#[derive(Clone)]
struct ExecuteParams {
    from: Address,
    tx: EthTransaction,
    hdr: encoded::Header,
    env_info: ::vm::EnvInfo,
    engine: Arc<::ethcore::engines::EthEngine>,
    on_demand: Arc<OnDemand>,
    sync: Arc<LightSync>,
}

// has a peer execute the transaction with given params. If `gas_known` is false,
// this will double the gas on each `OutOfGas` error.
fn execute_read_only_tx(gas_known: bool, params: ExecuteParams) -> impl Future<Item = ExecutionResult, Error = Error> + Send {
    if !gas_known {
        Box::new(future::loop_fn(params, |mut params| {
            execute_read_only_tx(true, params.clone()).and_then(move |res| {
                match res {
                    Ok(executed) => {
                        // TODO: how to distinguish between actual OOG and
                        // exception?
                        if executed.exception.is_some() {
                            let old_gas = params.tx.gas;
                            params.tx.gas = params.tx.gas * 2u32;
                            if params.tx.gas > params.hdr.gas_limit() {
                                params.tx.gas = old_gas;
                            } else {
                                return Ok(future::Loop::Continue(params))
                            }
                        }

                        Ok(future::Loop::Break(Ok(executed)))
                    }
                    failed => Ok(future::Loop::Break(failed)),
                }
            })
        })) as Box<Future<Item = _, Error = _> + Send>
    } else {
        trace!(target: "light_fetch", "Placing execution request for {} gas in on_demand",
            params.tx.gas);

        let request = request::TransactionProof {
            tx: params.tx.fake_sign(params.from),
            header: params.hdr.into(),
            env_info: params.env_info,
            engine: params.engine,
        };

        let on_demand = params.on_demand;
        let proved_future = params.sync.with_context(move |ctx| {
            on_demand
                .request(ctx, request)
                .expect("no back-references; therefore all back-refs valid; qed")
                .map_err(errors::on_demand_cancel)
        });

        match proved_future {
            Some(fut) => Box::new(fut) as Box<Future<Item = _, Error = _> + Send>,
            None => Box::new(future::err(errors::network_disabled())) as Box<Future<Item = _, Error = _> + Send>,
        }
    }
}