[beta] Backports (#8624)
* Trace precompiled contracts when the transfer value is not zero (#8486)
  * Trace precompiled contracts when the transfer value is not zero
  * Add tests for precompiled CALL tracing
  * Use byzantium test machine for the new test
  * Add notes in comments on why we don't trace all precompileds
  * Use is_transferred instead of transferred
* Return error if RLP size of transaction exceeds the limit (#8473)
  * Return error if RLP size of transaction exceeds the limit
  * Review comments fixed
  * RLP check moved to verifier, corresponding pool test added
* Don't block sync when importing old blocks (#8530)
  * Alter IO queueing.
  * Don't require IoMessages to be Clone
  * Ancient blocks imported via IoChannel.
  * Get rid of private transactions io message.
  * Get rid of deadlock and fix disconnected handler.
  * Revert to old disconnect condition.
  * Fix tests.
  * Fix deadlock.
* Refactoring `ethcore-sync` - Fixing warp-sync barrier (#8543)
  * Start dividing sync chain: first supplier method
  * WIP - updated chain sync supplier
  * Finish refactoring the Chain Sync Supplier
  * Create Chain Sync Requester
  * Add Propagator for Chain Sync
  * Add the Chain Sync Handler
  * Move tests from mod -> handler
  * Move tests to propagator
  * Refactor SyncRequester arguments
  * Refactoring peer fork header handler
  * Fix wrong highest block number in snapshot sync
  * Small refactor...
  * Address PR grumbles
  * Retry failed CI job
  * Fix tests
  * PR Grumbles
* Handle socket address parsing errors (#8545)
  Unpack errors and check for io::ErrorKind::InvalidInput and return our own AddressParse error. Remove the foreign link to std::net::AddrParseError and add an `impl From` for that error. Test parsing properly.
* Fix packet count when talking with PAR2 peers (#8555)
  * Support different packet counts in different protocol versions.
  * Fix light timeouts and eclipse protection.
  * Fix devp2p tests.
  * Fix whisper-cli compilation.
  * Fix compilation.
  * Fix ethcore-sync tests.
  * Revert "Fix light timeouts and eclipse protection." This reverts commit 06285ea8c1d9d184d809f64b5507aece633da6cc.
  * Increase timeouts.
* Add whisper CLI to the pipelines (#8578)
  * Add whisper CLI to the pipelines
  * Address todo, ref #8579
* Rename `whisper-cli binary` to `whisper` (#8579)
  * rename whisper-cli binary to whisper
  * fix tests
* Remove manually added text to the errors (#8595)
  These messages were confusing for the users, especially the help message.
* Fix account list double 0x display (#8596)
  * Remove unused self import
  * Fix account list double 0x display
* Fix BlockReward contract "arithmetic operation overflow" (#8611)
  * Fix BlockReward contract "arithmetic operation overflow"
  * Add docs on how execute_as_system works
  * Fix typo
* Rlp decode returns Result (#8527)
  rlp::decode returns Result. Make a best effort to handle decoding errors gracefully throughout the code, using `expect` where the value is guaranteed to be valid (and in other places where it makes sense). A minimal sketch of this decode-error policy follows after the commit message.
* Remove expect (#8536)
  * Remove expect and propagate rlp::DecoderErrors as TrieErrors
* Decoding headers can fail (#8570)
  * rlp::decode returns Result
  * Fix journaldb to handle rlp::decode Result
  * Fix ethcore to work with rlp::decode returning Result
  * Light client handles rlp::decode returning Result
  * Fix tests in rlp_derive
  * Fix tests
  * Cleanup
  * cleanup
  * Allow panic rather than breaking out of iterator
  * Let decoding failures when reading from disk blow up
  * syntax
  * Fix the trivial grumbles
  * Fix failing tests
  * Make Account::from_rlp return Result
  * Syntax, sigh
  * Temp-fix for decoding failures
  * Header::decode returns Result. Handle new return type throughout the code base.
  * Do not continue reading from the DB when a value could not be read
  * Fix tests
  * Handle header decoding in light_sync
  * Handling header decoding errors
  * Let the DecodeError bubble up unchanged
  * Remove redundant error conversion
* fix compiler warning (#8590)
* Attempt to fix intermittent test failures (#8584)
  Occasionally should_return_correct_nonces_when_dropped_because_of_limit fails, possibly because of multiple threads competing to finish. See CI logs here for an example: https://gitlab.parity.io/parity/parity/-/jobs/86738
* block_header can fail so return Result (#8581)
  * block_header can fail so return Result
  * Restore previous return type based on feedback
  * Fix failing doc tests running on non-code
* Block::decode() returns Result (#8586)
* Gitlab test script fixes (#8573)
  * Exclude /docs from modified files.
  * Ensure all references in the working tree are available
  * Remove duplicated line from test script
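Most of the diff below stems from `rlp::decode` and `Header::decode` returning `Result` instead of panicking (#8527 and the related commits listed above). Here is a minimal, illustrative sketch of the resulting error-handling policy; `DecoderError`, `Entry` and `decode_entry` are stand-ins, not Parity's actual types:

```rust
// Minimal sketch (not Parity code) of the decode-error policy applied once
// decoding returns `Result` instead of panicking. All names are stand-ins.
#[derive(Debug)]
struct DecoderError;

#[derive(Debug)]
struct Entry {
    candidates: Vec<u8>,
}

fn decode_entry(bytes: &[u8]) -> Result<Entry, DecoderError> {
    if bytes.is_empty() {
        Err(DecoderError)
    } else {
        Ok(Entry { candidates: bytes.to_vec() })
    }
}

// Values we wrote ourselves (read back from our own DB) are treated as
// guaranteed-valid, so a failed decode means a corrupted DB and `expect` is used.
fn load_trusted(bytes: &[u8]) -> Entry {
    decode_entry(bytes).expect("decoding db value failed")
}

// Values coming from the network or an untrusted caller propagate the error.
fn load_untrusted(bytes: &[u8]) -> Result<Entry, DecoderError> {
    let entry = decode_entry(bytes)?;
    Ok(entry)
}

fn main() {
    assert_eq!(load_trusted(&[1, 2, 3]).candidates.len(), 3);
    assert!(load_untrusted(&[]).is_err());
}
```

The hunks below apply exactly this split: `expect(...)` for values read back from Parity's own database, and propagation (`?`, `map_err`, `.ok()`) where the bytes are untrusted.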
This commit is contained in:
parent 885f45c8c1
commit 6654d02163

10 Cargo.lock (generated)
@ -35,6 +35,11 @@ dependencies = [
|
||||
"nodrop 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "assert_matches"
|
||||
version = "1.2.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
|
||||
[[package]]
|
||||
name = "aster"
|
||||
version = "0.41.0"
|
||||
@ -684,6 +689,7 @@ dependencies = [
|
||||
"parking_lot 0.5.4 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"price-info 1.11.0",
|
||||
"rayon 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"rlp 0.2.1",
|
||||
"rustc-hex 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"trace-time 0.1.0",
|
||||
"transaction-pool 1.11.0",
|
||||
@ -709,6 +715,7 @@ name = "ethcore-network-devp2p"
|
||||
version = "1.11.0"
|
||||
dependencies = [
|
||||
"ansi_term 0.10.2 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"assert_matches 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"bytes 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"error-chain 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"ethcore-bytes 0.1.0",
|
||||
@ -828,6 +835,7 @@ dependencies = [
|
||||
"log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"stop-guard 0.1.0",
|
||||
"tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"trace-time 0.1.0",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@ -876,6 +884,7 @@ dependencies = [
|
||||
"rustc-hex 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"semver 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"smallvec 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
|
||||
"trace-time 0.1.0",
|
||||
"triehash 0.1.0",
|
||||
]
|
||||
|
||||
@ -3774,6 +3783,7 @@ dependencies = [
|
||||
"checksum ansi_term 0.10.2 (registry+https://github.com/rust-lang/crates.io-index)" = "6b3568b48b7cefa6b8ce125f9bb4989e52fbcc29ebea88df04cc7c5f12f70455"
|
||||
"checksum app_dirs 1.2.1 (git+https://github.com/paritytech/app-dirs-rs)" = "<none>"
|
||||
"checksum arrayvec 0.4.7 (registry+https://github.com/rust-lang/crates.io-index)" = "a1e964f9e24d588183fcb43503abda40d288c8657dfc27311516ce2f05675aef"
|
||||
"checksum assert_matches 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "664470abf00fae0f31c0eb6e1ca12d82961b2a2541ef898bc9dd51a9254d218b"
|
||||
"checksum aster 0.41.0 (registry+https://github.com/rust-lang/crates.io-index)" = "4ccfdf7355d9db158df68f976ed030ab0f6578af811f5a7bb6dcf221ec24e0e0"
|
||||
"checksum atty 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)" = "af80143d6f7608d746df1520709e5d141c96f240b0e62b0aa41bdfb53374d9d4"
|
||||
"checksum backtrace 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "ebbbf59b1c43eefa8c3ede390fcc36820b4999f7914104015be25025e0d62af2"
|
||||
|
@ -228,7 +228,7 @@ impl HeaderChain {
|
||||
let decoded_header = spec.genesis_header();
|
||||
|
||||
let chain = if let Some(current) = db.get(col, CURRENT_KEY)? {
|
||||
let curr : BestAndLatest = ::rlp::decode(&current);
let curr : BestAndLatest = ::rlp::decode(&current).expect("decoding db value failed");
|
||||
|
||||
let mut cur_number = curr.latest_num;
|
||||
let mut candidates = BTreeMap::new();
|
||||
@ -236,7 +236,7 @@ impl HeaderChain {
|
||||
// load all era entries, referenced headers within them,
|
||||
// and live epoch proofs.
|
||||
while let Some(entry) = db.get(col, era_key(cur_number).as_bytes())? {
|
||||
let entry: Entry = ::rlp::decode(&entry);
|
||||
let entry: Entry = ::rlp::decode(&entry).expect("decoding db value failed");
|
||||
trace!(target: "chain", "loaded header chain entry for era {} with {} candidates",
|
||||
cur_number, entry.candidates.len());
|
||||
|
||||
@ -305,7 +305,7 @@ impl HeaderChain {
|
||||
batch.put(col, cht_key(cht_num as u64).as_bytes(), &::rlp::encode(cht_root));
|
||||
}
|
||||
|
||||
let decoded_header = hardcoded_sync.header.decode();
|
||||
let decoded_header = hardcoded_sync.header.decode()?;
|
||||
let decoded_header_num = decoded_header.number();
|
||||
|
||||
// write the block in the DB.
|
||||
@ -524,7 +524,10 @@ impl HeaderChain {
|
||||
None
|
||||
}
|
||||
Ok(None) => panic!("stored candidates always have corresponding headers; qed"),
|
||||
Ok(Some(header)) => Some((epoch_transition, ::rlp::decode(&header))),
|
||||
Ok(Some(header)) => Some((
|
||||
epoch_transition,
|
||||
::rlp::decode(&header).expect("decoding value from db failed")
|
||||
)),
|
||||
};
|
||||
}
|
||||
}
|
||||
@ -582,7 +585,7 @@ impl HeaderChain {
|
||||
bail!(ErrorKind::Database(msg.into()));
|
||||
};
|
||||
|
||||
let decoded = header.decode();
|
||||
let decoded = header.decode().expect("decoding db value failed");
|
||||
|
||||
let entry: Entry = {
|
||||
let bytes = self.db.get(self.col, era_key(h_num).as_bytes())?
|
||||
@ -591,7 +594,7 @@ impl HeaderChain {
|
||||
in an inconsistent state", h_num);
|
||||
ErrorKind::Database(msg.into())
|
||||
})?;
|
||||
::rlp::decode(&bytes)
|
||||
::rlp::decode(&bytes).expect("decoding db value failed")
|
||||
};
|
||||
|
||||
let total_difficulty = entry.candidates.iter()
|
||||
@ -604,9 +607,9 @@ impl HeaderChain {
|
||||
.total_difficulty;
|
||||
|
||||
break Ok(Some(SpecHardcodedSync {
|
||||
header: header,
|
||||
total_difficulty: total_difficulty,
|
||||
chts: chts,
|
||||
header,
|
||||
total_difficulty,
|
||||
chts,
|
||||
}));
|
||||
},
|
||||
None => {
|
||||
@ -742,7 +745,7 @@ impl HeaderChain {
|
||||
/// so including it within a CHT would be redundant.
|
||||
pub fn cht_root(&self, n: usize) -> Option<H256> {
|
||||
match self.db.get(self.col, cht_key(n as u64).as_bytes()) {
|
||||
Ok(val) => val.map(|x| ::rlp::decode(&x)),
|
||||
Ok(db_fetch) => db_fetch.map(|bytes| ::rlp::decode(&bytes).expect("decoding value from db failed")),
|
||||
Err(e) => {
|
||||
warn!(target: "chain", "Error reading from database: {}", e);
|
||||
None
|
||||
@ -793,7 +796,7 @@ impl HeaderChain {
|
||||
pub fn pending_transition(&self, hash: H256) -> Option<PendingEpochTransition> {
|
||||
let key = pending_transition_key(hash);
|
||||
match self.db.get(self.col, &*key) {
|
||||
Ok(val) => val.map(|x| ::rlp::decode(&x)),
|
||||
Ok(db_fetch) => db_fetch.map(|bytes| ::rlp::decode(&bytes).expect("decoding value from db failed")),
|
||||
Err(e) => {
|
||||
warn!(target: "chain", "Error reading from database: {}", e);
|
||||
None
|
||||
@ -812,7 +815,9 @@ impl HeaderChain {
|
||||
|
||||
for hdr in self.ancestry_iter(BlockId::Hash(parent_hash)) {
|
||||
if let Some(transition) = live_proofs.get(&hdr.hash()).cloned() {
|
||||
return Some((hdr.decode(), transition.proof))
|
||||
return hdr.decode().map(|decoded_hdr| {
|
||||
(decoded_hdr, transition.proof)
|
||||
}).ok();
|
||||
}
|
||||
}
|
||||
|
||||
@ -1192,7 +1197,7 @@ mod tests {
|
||||
|
||||
let cache = Arc::new(Mutex::new(Cache::new(Default::default(), Duration::from_secs(6 * 3600))));
|
||||
|
||||
let chain = HeaderChain::new(db.clone(), None, &spec, cache, HardcodedSync::Allow).unwrap();
|
||||
let chain = HeaderChain::new(db.clone(), None, &spec, cache, HardcodedSync::Allow).expect("failed to instantiate a new HeaderChain");
|
||||
|
||||
let mut parent_hash = genesis_header.hash();
|
||||
let mut rolling_timestamp = genesis_header.timestamp();
|
||||
@ -1211,17 +1216,17 @@ mod tests {
|
||||
parent_hash = header.hash();
|
||||
|
||||
let mut tx = db.transaction();
|
||||
let pending = chain.insert(&mut tx, header, None).unwrap();
|
||||
let pending = chain.insert(&mut tx, header, None).expect("failed inserting a transaction");
|
||||
db.write(tx).unwrap();
|
||||
chain.apply_pending(pending);
|
||||
|
||||
rolling_timestamp += 10;
|
||||
}
|
||||
|
||||
let hardcoded_sync = chain.read_hardcoded_sync().unwrap().unwrap();
|
||||
let hardcoded_sync = chain.read_hardcoded_sync().expect("failed reading hardcoded sync").expect("failed unwrapping hardcoded sync");
|
||||
assert_eq!(hardcoded_sync.chts.len(), 3);
|
||||
assert_eq!(hardcoded_sync.total_difficulty, total_difficulty);
|
||||
let decoded: Header = hardcoded_sync.header.decode();
|
||||
let decoded: Header = hardcoded_sync.header.decode().expect("decoding failed");
|
||||
assert_eq!(decoded.number(), h_num);
|
||||
}
|
||||
}
|
||||
|
@ -318,7 +318,7 @@ impl<T: ChainDataFetcher> Client<T> {
|
||||
|
||||
let epoch_proof = self.engine.is_epoch_end(
|
||||
&verified_header,
|
||||
&|h| self.chain.block_header(BlockId::Hash(h)).map(|hdr| hdr.decode()),
|
||||
&|h| self.chain.block_header(BlockId::Hash(h)).and_then(|hdr| hdr.decode().ok()),
|
||||
&|h| self.chain.pending_transition(h),
|
||||
);
|
||||
|
||||
@ -426,7 +426,15 @@ impl<T: ChainDataFetcher> Client<T> {
|
||||
};
|
||||
|
||||
// Verify Block Family
|
||||
let verify_family_result = self.engine.verify_block_family(&verified_header, &parent_header.decode());
|
||||
|
||||
let verify_family_result = {
|
||||
parent_header.decode()
|
||||
.map_err(|dec_err| dec_err.into())
|
||||
.and_then(|decoded| {
|
||||
self.engine.verify_block_family(&verified_header, &decoded)
|
||||
})
|
||||
|
||||
};
|
||||
if let Err(e) = verify_family_result {
|
||||
warn!(target: "client", "Stage 3 block verification failed for #{} ({})\nError: {:?}",
|
||||
verified_header.number(), verified_header.hash(), e);
|
||||
|
@@ -75,14 +75,17 @@ const RECALCULATE_COSTS_INTERVAL: Duration = Duration::from_secs(60 * 60);
// minimum interval between updates.
const UPDATE_INTERVAL: Duration = Duration::from_millis(5000);

/// Packet count for PIP.
const PACKET_COUNT_V1: u8 = 9;

/// Supported protocol versions.
pub const PROTOCOL_VERSIONS: &'static [u8] = &[1];
pub const PROTOCOL_VERSIONS: &'static [(u8, u8)] = &[
(1, PACKET_COUNT_V1),
];

/// Max protocol version.
pub const MAX_PROTOCOL_VERSION: u8 = 1;

/// Packet count for PIP.
pub const PACKET_COUNT: u8 = 9;

// packet ID definitions.
mod packet {
@@ -111,9 +114,9 @@ mod packet {
mod timeout {
use std::time::Duration;

pub const HANDSHAKE: Duration = Duration::from_millis(2500);
pub const ACKNOWLEDGE_UPDATE: Duration = Duration::from_millis(5000);
pub const BASE: u64 = 1500; // base timeout for packet.
pub const HANDSHAKE: Duration = Duration::from_millis(4_000);
pub const ACKNOWLEDGE_UPDATE: Duration = Duration::from_millis(5_000);
pub const BASE: u64 = 2_500; // base timeout for packet.

// timeouts per request within packet.
pub const HEADERS: u64 = 250; // per header?
@@ -688,7 +691,7 @@ impl LightProtocol {
Err(e) => { punish(*peer, io, e); return }
};

if PROTOCOL_VERSIONS.iter().find(|x| **x == proto_version).is_none() {
if PROTOCOL_VERSIONS.iter().find(|x| x.0 == proto_version).is_none() {
punish(*peer, io, Error::UnsupportedProtocolVersion(proto_version));
return;
}
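The hunks above replace the flat `PROTOCOL_VERSIONS` list with `(version, packet_count)` pairs so the handshake can pick the packet count matching the peer's negotiated PIP version (the #8555 fix for PAR2 peers). A small self-contained sketch of that lookup; the constants mirror the diff, while `packet_count` is an illustrative helper, not the actual Parity function:

```rust
// Illustrative sketch: per-version packet-count lookup from (version, count) pairs.
const PACKET_COUNT_V1: u8 = 9;
const PROTOCOL_VERSIONS: &[(u8, u8)] = &[(1, PACKET_COUNT_V1)];

fn packet_count(proto_version: u8) -> Option<u8> {
    PROTOCOL_VERSIONS
        .iter()
        .find(|&&(version, _)| version == proto_version)
        .map(|&(_, count)| count)
}

fn main() {
    assert_eq!(packet_count(1), Some(9));
    // An unsupported version yields None; the real handler then punishes the
    // peer with Error::UnsupportedProtocolVersion.
    assert_eq!(packet_count(2), None);
}
```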
@@ -407,7 +407,7 @@ mod tests {
let costs = CostTable::default();
let serialized = ::rlp::encode(&costs);

let new_costs: CostTable = ::rlp::decode(&*serialized);
let new_costs: CostTable = ::rlp::decode(&*serialized).unwrap();

assert_eq!(costs, new_costs);
}

@@ -1642,7 +1642,7 @@ mod tests {
{
// check as single value.
let bytes = ::rlp::encode(&val);
let new_val: T = ::rlp::decode(&bytes);
let new_val: T = ::rlp::decode(&bytes).unwrap();
assert_eq!(val, new_val);

// check as list containing single value.
|
@ -148,8 +148,8 @@ impl Provider where {
|
||||
encryptor: Box<Encryptor>,
|
||||
config: ProviderConfig,
|
||||
channel: IoChannel<ClientIoMessage>,
|
||||
) -> Result<Self, Error> {
|
||||
Ok(Provider {
|
||||
) -> Self {
|
||||
Provider {
|
||||
encryptor,
|
||||
validator_accounts: config.validator_accounts.into_iter().collect(),
|
||||
signer_account: config.signer_account,
|
||||
@ -161,7 +161,7 @@ impl Provider where {
|
||||
miner,
|
||||
accounts,
|
||||
channel,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// TODO [ToDr] Don't use `ChainNotify` here!
|
||||
@ -242,50 +242,6 @@ impl Provider where {
|
||||
Ok(original_transaction)
|
||||
}
|
||||
|
||||
/// Process received private transaction
|
||||
pub fn import_private_transaction(&self, rlp: &[u8]) -> Result<(), Error> {
|
||||
trace!("Private transaction received");
|
||||
let private_tx: PrivateTransaction = Rlp::new(rlp).as_val()?;
|
||||
let contract = private_tx.contract;
|
||||
let contract_validators = self.get_validators(BlockId::Latest, &contract)?;
|
||||
|
||||
let validation_account = contract_validators
|
||||
.iter()
|
||||
.find(|address| self.validator_accounts.contains(address));
|
||||
|
||||
match validation_account {
|
||||
None => {
|
||||
// TODO [ToDr] This still seems a bit invalid, imho we should still import the transaction to the pool.
|
||||
// Importing to pool verifies correctness and nonce; here we are just blindly forwarding.
|
||||
//
|
||||
// Not for verification, broadcast further to peers
|
||||
self.broadcast_private_transaction(rlp.into());
|
||||
return Ok(());
|
||||
},
|
||||
Some(&validation_account) => {
|
||||
let hash = private_tx.hash();
|
||||
trace!("Private transaction taken for verification");
|
||||
let original_tx = self.extract_original_transaction(private_tx, &contract)?;
|
||||
trace!("Validating transaction: {:?}", original_tx);
|
||||
// Verify with the first account available
|
||||
trace!("The following account will be used for verification: {:?}", validation_account);
|
||||
let nonce_cache = Default::default();
|
||||
self.transactions_for_verification.lock().add_transaction(
|
||||
original_tx,
|
||||
contract,
|
||||
validation_account,
|
||||
hash,
|
||||
self.pool_client(&nonce_cache),
|
||||
)?;
|
||||
// NOTE This will just fire `on_private_transaction_queued` but from a client thread.
|
||||
// It seems that a lot of heavy work (verification) is done in this thread anyway
|
||||
// it might actually make sense to decouple it from clientService and just use dedicated thread
|
||||
// for both verification and execution.
|
||||
self.channel.send(ClientIoMessage::NewPrivateTransaction).map_err(|_| ErrorKind::ClientIsMalformed.into())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn pool_client<'a>(&'a self, nonce_cache: &'a RwLock<HashMap<Address, U256>>) -> miner::pool_client::PoolClient<'a, Client> {
|
||||
let engine = self.client.engine();
|
||||
let refuse_service_transactions = true;
|
||||
@ -298,11 +254,6 @@ impl Provider where {
|
||||
)
|
||||
}
|
||||
|
||||
/// Private transaction for validation added into queue
|
||||
pub fn on_private_transaction_queued(&self) -> Result<(), Error> {
|
||||
self.process_queue()
|
||||
}
|
||||
|
||||
/// Retrieve and verify the first available private transaction for every sender
|
||||
///
|
||||
/// TODO [ToDr] It seems that:
|
||||
@ -346,73 +297,6 @@ impl Provider where {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Add signed private transaction into the store
|
||||
/// Creates corresponding public transaction if last required singature collected and sends it to the chain
|
||||
pub fn import_signed_private_transaction(&self, rlp: &[u8]) -> Result<(), Error> {
|
||||
let tx: SignedPrivateTransaction = Rlp::new(rlp).as_val()?;
|
||||
trace!("Signature for private transaction received: {:?}", tx);
|
||||
let private_hash = tx.private_transaction_hash();
|
||||
let desc = match self.transactions_for_signing.lock().get(&private_hash) {
|
||||
None => {
|
||||
// TODO [ToDr] Verification (we can't just blindly forward every transaction)
|
||||
|
||||
// Not our transaction, broadcast further to peers
|
||||
self.broadcast_signed_private_transaction(rlp.into());
|
||||
return Ok(());
|
||||
},
|
||||
Some(desc) => desc,
|
||||
};
|
||||
|
||||
let last = self.last_required_signature(&desc, tx.signature())?;
|
||||
|
||||
if last {
|
||||
let mut signatures = desc.received_signatures.clone();
|
||||
signatures.push(tx.signature());
|
||||
let rsv: Vec<Signature> = signatures.into_iter().map(|sign| sign.into_electrum().into()).collect();
|
||||
//Create public transaction
|
||||
let public_tx = self.public_transaction(
|
||||
desc.state.clone(),
|
||||
&desc.original_transaction,
|
||||
&rsv,
|
||||
desc.original_transaction.nonce,
|
||||
desc.original_transaction.gas_price
|
||||
)?;
|
||||
trace!("Last required signature received, public transaction created: {:?}", public_tx);
|
||||
//Sign and add it to the queue
|
||||
let chain_id = desc.original_transaction.chain_id();
|
||||
let hash = public_tx.hash(chain_id);
|
||||
let signer_account = self.signer_account.ok_or_else(|| ErrorKind::SignerAccountNotSet)?;
|
||||
let password = find_account_password(&self.passwords, &*self.accounts, &signer_account);
|
||||
let signature = self.accounts.sign(signer_account, password, hash)?;
|
||||
let signed = SignedTransaction::new(public_tx.with_signature(signature, chain_id))?;
|
||||
match self.miner.import_own_transaction(&*self.client, signed.into()) {
|
||||
Ok(_) => trace!("Public transaction added to queue"),
|
||||
Err(err) => {
|
||||
trace!("Failed to add transaction to queue, error: {:?}", err);
|
||||
bail!(err);
|
||||
}
|
||||
}
|
||||
//Remove from store for signing
|
||||
match self.transactions_for_signing.lock().remove(&private_hash) {
|
||||
Ok(_) => {}
|
||||
Err(err) => {
|
||||
trace!("Failed to remove transaction from signing store, error: {:?}", err);
|
||||
bail!(err);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
//Add signature to the store
|
||||
match self.transactions_for_signing.lock().add_signature(&private_hash, tx.signature()) {
|
||||
Ok(_) => trace!("Signature stored for private transaction"),
|
||||
Err(err) => {
|
||||
trace!("Failed to add signature to signing store, error: {:?}", err);
|
||||
bail!(err);
|
||||
}
|
||||
}
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn last_required_signature(&self, desc: &PrivateTransactionSigningDesc, sign: Signature) -> Result<bool, Error> {
|
||||
if desc.received_signatures.contains(&sign) {
|
||||
return Ok(false);
|
||||
@ -656,6 +540,134 @@ impl Provider where {
|
||||
}
|
||||
}
|
||||
|
||||
pub trait Importer {
|
||||
/// Process received private transaction
|
||||
fn import_private_transaction(&self, _rlp: &[u8]) -> Result<(), Error>;
|
||||
|
||||
/// Add signed private transaction into the store
|
||||
///
|
||||
/// Creates corresponding public transaction if last required signature collected and sends it to the chain
|
||||
fn import_signed_private_transaction(&self, _rlp: &[u8]) -> Result<(), Error>;
|
||||
}
|
||||
|
||||
// TODO [ToDr] Offload more heavy stuff to the IoService thread.
|
||||
// It seems that a lot of heavy work (verification) is done in this thread anyway
|
||||
// it might actually make sense to decouple it from clientService and just use dedicated thread
|
||||
// for both verification and execution.
|
||||
|
||||
impl Importer for Arc<Provider> {
|
||||
fn import_private_transaction(&self, rlp: &[u8]) -> Result<(), Error> {
|
||||
trace!("Private transaction received");
|
||||
let private_tx: PrivateTransaction = Rlp::new(rlp).as_val()?;
|
||||
let contract = private_tx.contract;
|
||||
let contract_validators = self.get_validators(BlockId::Latest, &contract)?;
|
||||
|
||||
let validation_account = contract_validators
|
||||
.iter()
|
||||
.find(|address| self.validator_accounts.contains(address));
|
||||
|
||||
match validation_account {
|
||||
None => {
|
||||
// TODO [ToDr] This still seems a bit invalid, imho we should still import the transaction to the pool.
|
||||
// Importing to pool verifies correctness and nonce; here we are just blindly forwarding.
|
||||
//
|
||||
// Not for verification, broadcast further to peers
|
||||
self.broadcast_private_transaction(rlp.into());
|
||||
return Ok(());
|
||||
},
|
||||
Some(&validation_account) => {
|
||||
let hash = private_tx.hash();
|
||||
trace!("Private transaction taken for verification");
|
||||
let original_tx = self.extract_original_transaction(private_tx, &contract)?;
|
||||
trace!("Validating transaction: {:?}", original_tx);
|
||||
// Verify with the first account available
|
||||
trace!("The following account will be used for verification: {:?}", validation_account);
|
||||
let nonce_cache = Default::default();
|
||||
self.transactions_for_verification.lock().add_transaction(
|
||||
original_tx,
|
||||
contract,
|
||||
validation_account,
|
||||
hash,
|
||||
self.pool_client(&nonce_cache),
|
||||
)?;
|
||||
let provider = Arc::downgrade(self);
|
||||
self.channel.send(ClientIoMessage::execute(move |_| {
|
||||
if let Some(provider) = provider.upgrade() {
|
||||
if let Err(e) = provider.process_queue() {
|
||||
debug!("Unable to process the queue: {}", e);
|
||||
}
|
||||
}
|
||||
})).map_err(|_| ErrorKind::ClientIsMalformed.into())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn import_signed_private_transaction(&self, rlp: &[u8]) -> Result<(), Error> {
|
||||
let tx: SignedPrivateTransaction = Rlp::new(rlp).as_val()?;
|
||||
trace!("Signature for private transaction received: {:?}", tx);
|
||||
let private_hash = tx.private_transaction_hash();
|
||||
let desc = match self.transactions_for_signing.lock().get(&private_hash) {
|
||||
None => {
|
||||
// TODO [ToDr] Verification (we can't just blindly forward every transaction)
|
||||
|
||||
// Not our transaction, broadcast further to peers
|
||||
self.broadcast_signed_private_transaction(rlp.into());
|
||||
return Ok(());
|
||||
},
|
||||
Some(desc) => desc,
|
||||
};
|
||||
|
||||
let last = self.last_required_signature(&desc, tx.signature())?;
|
||||
|
||||
if last {
|
||||
let mut signatures = desc.received_signatures.clone();
|
||||
signatures.push(tx.signature());
|
||||
let rsv: Vec<Signature> = signatures.into_iter().map(|sign| sign.into_electrum().into()).collect();
|
||||
//Create public transaction
|
||||
let public_tx = self.public_transaction(
|
||||
desc.state.clone(),
|
||||
&desc.original_transaction,
|
||||
&rsv,
|
||||
desc.original_transaction.nonce,
|
||||
desc.original_transaction.gas_price
|
||||
)?;
|
||||
trace!("Last required signature received, public transaction created: {:?}", public_tx);
|
||||
//Sign and add it to the queue
|
||||
let chain_id = desc.original_transaction.chain_id();
|
||||
let hash = public_tx.hash(chain_id);
|
||||
let signer_account = self.signer_account.ok_or_else(|| ErrorKind::SignerAccountNotSet)?;
|
||||
let password = find_account_password(&self.passwords, &*self.accounts, &signer_account);
|
||||
let signature = self.accounts.sign(signer_account, password, hash)?;
|
||||
let signed = SignedTransaction::new(public_tx.with_signature(signature, chain_id))?;
|
||||
match self.miner.import_own_transaction(&*self.client, signed.into()) {
|
||||
Ok(_) => trace!("Public transaction added to queue"),
|
||||
Err(err) => {
|
||||
trace!("Failed to add transaction to queue, error: {:?}", err);
|
||||
bail!(err);
|
||||
}
|
||||
}
|
||||
//Remove from store for signing
|
||||
match self.transactions_for_signing.lock().remove(&private_hash) {
|
||||
Ok(_) => {}
|
||||
Err(err) => {
|
||||
trace!("Failed to remove transaction from signing store, error: {:?}", err);
|
||||
bail!(err);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
//Add signature to the store
|
||||
match self.transactions_for_signing.lock().add_signature(&private_hash, tx.signature()) {
|
||||
Ok(_) => trace!("Signature stored for private transaction"),
|
||||
Err(err) => {
|
||||
trace!("Failed to add signature to signing store, error: {:?}", err);
|
||||
bail!(err);
|
||||
}
|
||||
}
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
/// Try to unlock account using stored password, return found password if any
|
||||
fn find_account_password(passwords: &Vec<String>, account_provider: &AccountProvider, account: &Address) -> Option<String> {
|
||||
for password in passwords {
|
||||
|
@@ -74,7 +74,7 @@ fn private_contract() {
Box::new(NoopEncryptor::default()),
config,
io,
).unwrap());
));

let (address, _) = contract_address(CreateContractAddress::FromSenderAndNonce, &key1.address(), &0.into(), &[]);

@@ -13,6 +13,7 @@ ethcore-sync = { path = "../sync" }
kvdb = { path = "../../util/kvdb" }
log = "0.3"
stop-guard = { path = "../../util/stop-guard" }
trace-time = { path = "../../util/trace-time" }

[dev-dependencies]
tempdir = "0.3"

@@ -28,6 +28,9 @@ extern crate error_chain;
#[macro_use]
extern crate log;

#[macro_use]
extern crate trace_time;

#[cfg(test)]
extern crate tempdir;
|
@ -33,7 +33,7 @@ use ethcore::snapshot::{RestorationStatus};
|
||||
use ethcore::spec::Spec;
|
||||
use ethcore::account_provider::AccountProvider;
|
||||
|
||||
use ethcore_private_tx;
|
||||
use ethcore_private_tx::{self, Importer};
|
||||
use Error;
|
||||
|
||||
pub struct PrivateTxService {
|
||||
@ -112,14 +112,13 @@ impl ClientService {
|
||||
account_provider,
|
||||
encryptor,
|
||||
private_tx_conf,
|
||||
io_service.channel())?,
|
||||
);
|
||||
io_service.channel(),
|
||||
));
|
||||
let private_tx = Arc::new(PrivateTxService::new(provider));
|
||||
|
||||
let client_io = Arc::new(ClientIoHandler {
|
||||
client: client.clone(),
|
||||
snapshot: snapshot.clone(),
|
||||
private_tx: private_tx.clone(),
|
||||
});
|
||||
io_service.register_handler(client_io)?;
|
||||
|
||||
@ -175,7 +174,6 @@ impl ClientService {
|
||||
struct ClientIoHandler {
|
||||
client: Arc<Client>,
|
||||
snapshot: Arc<SnapshotService>,
|
||||
private_tx: Arc<PrivateTxService>,
|
||||
}
|
||||
|
||||
const CLIENT_TICK_TIMER: TimerToken = 0;
|
||||
@ -191,6 +189,7 @@ impl IoHandler<ClientIoMessage> for ClientIoHandler {
|
||||
}
|
||||
|
||||
fn timeout(&self, _io: &IoContext<ClientIoMessage>, timer: TimerToken) {
|
||||
trace_time!("service::read");
|
||||
match timer {
|
||||
CLIENT_TICK_TIMER => {
|
||||
use ethcore::snapshot::SnapshotService;
|
||||
@ -203,20 +202,24 @@ impl IoHandler<ClientIoMessage> for ClientIoHandler {
|
||||
}
|
||||
|
||||
fn message(&self, _io: &IoContext<ClientIoMessage>, net_message: &ClientIoMessage) {
|
||||
trace_time!("service::message");
|
||||
use std::thread;
|
||||
|
||||
match *net_message {
|
||||
ClientIoMessage::BlockVerified => { self.client.import_verified_blocks(); }
|
||||
ClientIoMessage::NewTransactions(ref transactions, peer_id) => {
|
||||
self.client.import_queued_transactions(transactions, peer_id);
|
||||
ClientIoMessage::BlockVerified => {
|
||||
self.client.import_verified_blocks();
|
||||
}
|
||||
ClientIoMessage::BeginRestoration(ref manifest) => {
|
||||
if let Err(e) = self.snapshot.init_restore(manifest.clone(), true) {
|
||||
warn!("Failed to initialize snapshot restoration: {}", e);
|
||||
}
|
||||
}
|
||||
ClientIoMessage::FeedStateChunk(ref hash, ref chunk) => self.snapshot.feed_state_chunk(*hash, chunk),
|
||||
ClientIoMessage::FeedBlockChunk(ref hash, ref chunk) => self.snapshot.feed_block_chunk(*hash, chunk),
|
||||
ClientIoMessage::FeedStateChunk(ref hash, ref chunk) => {
|
||||
self.snapshot.feed_state_chunk(*hash, chunk)
|
||||
}
|
||||
ClientIoMessage::FeedBlockChunk(ref hash, ref chunk) => {
|
||||
self.snapshot.feed_block_chunk(*hash, chunk)
|
||||
}
|
||||
ClientIoMessage::TakeSnapshot(num) => {
|
||||
let client = self.client.clone();
|
||||
let snapshot = self.snapshot.clone();
|
||||
@ -231,12 +234,9 @@ impl IoHandler<ClientIoMessage> for ClientIoHandler {
|
||||
debug!(target: "snapshot", "Failed to initialize periodic snapshot thread: {:?}", e);
|
||||
}
|
||||
},
|
||||
ClientIoMessage::NewMessage(ref message) => if let Err(e) = self.client.engine().handle_message(message) {
|
||||
trace!(target: "poa", "Invalid message received: {}", e);
|
||||
},
|
||||
ClientIoMessage::NewPrivateTransaction => if let Err(e) = self.private_tx.provider.on_private_transaction_queued() {
|
||||
warn!("Failed to handle private transaction {:?}", e);
|
||||
},
|
||||
ClientIoMessage::Execute(ref exec) => {
|
||||
(*exec.0)(&self.client);
|
||||
}
|
||||
_ => {} // ignore other messages
|
||||
}
|
||||
}
|
||||
|
@@ -438,7 +438,7 @@ impl<'a> Iterator for EpochTransitionIter<'a> {
return None
}

let transitions: EpochTransitions = ::rlp::decode(&val[..]);
let transitions: EpochTransitions = ::rlp::decode(&val[..]).expect("decode error: the db is corrupted or the data structure has changed");

// if there are multiple candidates, at most one will be on the
// canon chain.
@@ -462,7 +462,7 @@ impl<'a> Iterator for EpochTransitionIter<'a> {
impl BlockChain {
/// Create new instance of blockchain from given Genesis.
pub fn new(config: Config, genesis: &[u8], db: Arc<KeyValueDB>) -> BlockChain {
// 400 is the avarage size of the key
// 400 is the average size of the key
let cache_man = CacheManager::new(config.pref_cache_size, config.max_cache_size, 400);

let mut bc = BlockChain {
|
@ -32,16 +32,16 @@ const HEAVY_VERIFY_RATE: f32 = 0.02;
|
||||
/// Ancient block verifier: import an ancient sequence of blocks in order from a starting
|
||||
/// epoch.
|
||||
pub struct AncientVerifier {
|
||||
cur_verifier: RwLock<Box<EpochVerifier<EthereumMachine>>>,
|
||||
cur_verifier: RwLock<Option<Box<EpochVerifier<EthereumMachine>>>>,
|
||||
engine: Arc<EthEngine>,
|
||||
}
|
||||
|
||||
impl AncientVerifier {
|
||||
/// Create a new ancient block verifier with the given engine and initial verifier.
|
||||
pub fn new(engine: Arc<EthEngine>, start_verifier: Box<EpochVerifier<EthereumMachine>>) -> Self {
|
||||
/// Create a new ancient block verifier with the given engine.
|
||||
pub fn new(engine: Arc<EthEngine>) -> Self {
|
||||
AncientVerifier {
|
||||
cur_verifier: RwLock::new(start_verifier),
|
||||
engine: engine,
|
||||
cur_verifier: RwLock::new(None),
|
||||
engine,
|
||||
}
|
||||
}
|
||||
|
||||
@ -53,17 +53,49 @@ impl AncientVerifier {
|
||||
header: &Header,
|
||||
chain: &BlockChain,
|
||||
) -> Result<(), ::error::Error> {
|
||||
match rng.gen::<f32>() <= HEAVY_VERIFY_RATE {
|
||||
true => self.cur_verifier.read().verify_heavy(header)?,
|
||||
false => self.cur_verifier.read().verify_light(header)?,
|
||||
// perform verification
|
||||
let verified = if let Some(ref cur_verifier) = *self.cur_verifier.read() {
|
||||
match rng.gen::<f32>() <= HEAVY_VERIFY_RATE {
|
||||
true => cur_verifier.verify_heavy(header)?,
|
||||
false => cur_verifier.verify_light(header)?,
|
||||
}
|
||||
true
|
||||
} else {
|
||||
false
|
||||
};
|
||||
|
||||
// when there is no verifier initialize it.
|
||||
// We use a bool flag to avoid double locking in the happy case
|
||||
if !verified {
|
||||
{
|
||||
let mut cur_verifier = self.cur_verifier.write();
|
||||
if cur_verifier.is_none() {
|
||||
*cur_verifier = Some(self.initial_verifier(header, chain)?);
|
||||
}
|
||||
}
|
||||
// Call again to verify.
|
||||
return self.verify(rng, header, chain);
|
||||
}
|
||||
|
||||
// ancient import will only use transitions obtained from the snapshot.
|
||||
if let Some(transition) = chain.epoch_transition(header.number(), header.hash()) {
|
||||
let v = self.engine.epoch_verifier(&header, &transition.proof).known_confirmed()?;
|
||||
*self.cur_verifier.write() = v;
|
||||
*self.cur_verifier.write() = Some(v);
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn initial_verifier(&self, header: &Header, chain: &BlockChain)
|
||||
-> Result<Box<EpochVerifier<EthereumMachine>>, ::error::Error>
|
||||
{
|
||||
trace!(target: "client", "Initializing ancient block restoration.");
|
||||
let current_epoch_data = chain.epoch_transitions()
|
||||
.take_while(|&(_, ref t)| t.block_number < header.number())
|
||||
.last()
|
||||
.map(|(_, t)| t.proof)
|
||||
.expect("At least one epoch entry (genesis) always stored; qed");
|
||||
|
||||
self.engine.epoch_verifier(&header, &current_epoch_data).known_confirmed()
|
||||
}
|
||||
}
|
||||
|
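The rewritten `AncientVerifier` above no longer needs an initial epoch verifier: it starts with `None` and builds one from the chain's epoch data on first use, taking the write lock only when the read path found nothing. A simplified sketch of that lazy-initialisation pattern, with stand-in types and `std::sync::RwLock` in place of parking_lot's:

```rust
// Simplified sketch (stand-in types) of lazily initialising the verifier.
use std::sync::RwLock;

struct EpochVerifier; // stand-in for Box<EpochVerifier<EthereumMachine>>

struct AncientVerifier {
    cur_verifier: RwLock<Option<EpochVerifier>>,
}

impl AncientVerifier {
    fn verify(&self, _header_number: u64) -> Result<(), String> {
        // Happy path: only the read lock is taken.
        if self.cur_verifier.read().unwrap().is_some() {
            // ... verify_light / verify_heavy would run here ...
            return Ok(());
        }
        // First use: build the verifier under the write lock, then retry.
        {
            let mut cur_verifier = self.cur_verifier.write().unwrap();
            if cur_verifier.is_none() {
                // In Parity this is derived from the chain's last epoch transition.
                *cur_verifier = Some(EpochVerifier);
            }
        }
        self.verify(_header_number)
    }
}

fn main() {
    let v = AncientVerifier { cur_verifier: RwLock::new(None) };
    assert!(v.verify(1).is_ok());
    assert!(v.verify(2).is_ok());
}
```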
@ -15,15 +15,16 @@
|
||||
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
use std::collections::{HashSet, HashMap, BTreeMap, BTreeSet, VecDeque};
|
||||
use std::fmt;
|
||||
use std::str::FromStr;
|
||||
use std::sync::{Arc, Weak};
|
||||
use std::sync::atomic::{AtomicUsize, AtomicBool, Ordering as AtomicOrdering};
|
||||
use std::time::{Instant};
|
||||
use itertools::Itertools;
|
||||
|
||||
// util
|
||||
use hash::keccak;
|
||||
use bytes::Bytes;
|
||||
use itertools::Itertools;
|
||||
use journaldb;
|
||||
use trie::{TrieSpec, TrieFactory, Trie};
|
||||
use kvdb::{DBValue, KeyValueDB, DBTransaction};
|
||||
@ -45,7 +46,8 @@ use client::{
|
||||
use client::{
|
||||
BlockId, TransactionId, UncleId, TraceId, ClientConfig, BlockChainClient,
|
||||
TraceFilter, CallAnalytics, BlockImportError, Mode,
|
||||
ChainNotify, PruningInfo, ProvingBlockChainClient, EngineInfo, ChainMessageType
|
||||
ChainNotify, PruningInfo, ProvingBlockChainClient, EngineInfo, ChainMessageType,
|
||||
IoClient,
|
||||
};
|
||||
use encoded;
|
||||
use engines::{EthEngine, EpochTransition};
|
||||
@ -55,14 +57,13 @@ use evm::Schedule;
|
||||
use executive::{Executive, Executed, TransactOptions, contract_address};
|
||||
use factory::{Factories, VmFactory};
|
||||
use header::{BlockNumber, Header};
|
||||
use io::IoChannel;
|
||||
use io::{IoChannel, IoError};
|
||||
use log_entry::LocalizedLogEntry;
|
||||
use miner::{Miner, MinerService};
|
||||
use ethcore_miner::pool::VerifiedTransaction;
|
||||
use parking_lot::{Mutex, RwLock};
|
||||
use rand::OsRng;
|
||||
use receipt::{Receipt, LocalizedReceipt};
|
||||
use rlp::Rlp;
|
||||
use snapshot::{self, io as snapshot_io};
|
||||
use spec::Spec;
|
||||
use state_db::StateDB;
|
||||
@ -86,6 +87,7 @@ pub use verification::queue::QueueInfo as BlockQueueInfo;
|
||||
use_contract!(registry, "Registry", "res/contracts/registrar.json");
|
||||
|
||||
const MAX_TX_QUEUE_SIZE: usize = 4096;
|
||||
const MAX_ANCIENT_BLOCKS_QUEUE_SIZE: usize = 4096;
|
||||
const MAX_QUEUE_SIZE_TO_SLEEP_ON: usize = 2;
|
||||
const MIN_HISTORY_SIZE: u64 = 8;
|
||||
|
||||
@ -155,10 +157,7 @@ struct Importer {
|
||||
pub miner: Arc<Miner>,
|
||||
|
||||
/// Ancient block verifier: import an ancient sequence of blocks in order from a starting epoch
|
||||
pub ancient_verifier: Mutex<Option<AncientVerifier>>,
|
||||
|
||||
/// Random number generator used by `AncientVerifier`
|
||||
pub rng: Mutex<OsRng>,
|
||||
pub ancient_verifier: AncientVerifier,
|
||||
|
||||
/// Ethereum engine to be used during import
|
||||
pub engine: Arc<EthEngine>,
|
||||
@ -205,8 +204,13 @@ pub struct Client {
|
||||
/// List of actors to be notified on certain chain events
|
||||
notify: RwLock<Vec<Weak<ChainNotify>>>,
|
||||
|
||||
/// Count of pending transactions in the queue
|
||||
queue_transactions: AtomicUsize,
|
||||
/// Queued transactions from IO
|
||||
queue_transactions: IoChannelQueue,
|
||||
/// Ancient blocks import queue
|
||||
queue_ancient_blocks: IoChannelQueue,
|
||||
/// Consensus messages import queue
|
||||
queue_consensus_message: IoChannelQueue,
|
||||
|
||||
last_hashes: RwLock<VecDeque<H256>>,
|
||||
factories: Factories,
|
||||
|
||||
@ -240,8 +244,7 @@ impl Importer {
|
||||
verifier: verification::new(config.verifier_type.clone()),
|
||||
block_queue,
|
||||
miner,
|
||||
ancient_verifier: Mutex::new(None),
|
||||
rng: Mutex::new(OsRng::new()?),
|
||||
ancient_verifier: AncientVerifier::new(engine.clone()),
|
||||
engine,
|
||||
})
|
||||
}
|
||||
@ -448,55 +451,25 @@ impl Importer {
|
||||
Ok(locked_block)
|
||||
}
|
||||
|
||||
|
||||
/// Import a block with transaction receipts.
|
||||
///
|
||||
/// The block is guaranteed to be the next best blocks in the
|
||||
/// first block sequence. Does no sealing or transaction validation.
|
||||
fn import_old_block(&self, header: &Header, block_bytes: Bytes, receipts_bytes: Bytes, db: &KeyValueDB, chain: &BlockChain) -> Result<H256, ::error::Error> {
|
||||
let receipts = ::rlp::decode_list(&receipts_bytes);
|
||||
fn import_old_block(&self, header: &Header, block_bytes: &[u8], receipts_bytes: &[u8], db: &KeyValueDB, chain: &BlockChain) -> Result<H256, ::error::Error> {
|
||||
let receipts = ::rlp::decode_list(receipts_bytes);
|
||||
let hash = header.hash();
|
||||
let _import_lock = self.import_lock.lock();
|
||||
|
||||
{
|
||||
trace_time!("import_old_block");
|
||||
let mut ancient_verifier = self.ancient_verifier.lock();
|
||||
|
||||
{
|
||||
// closure for verifying a block.
|
||||
let verify_with = |verifier: &AncientVerifier| -> Result<(), ::error::Error> {
|
||||
// verify the block, passing the chain for updating the epoch
|
||||
// verifier.
|
||||
let mut rng = OsRng::new().map_err(UtilError::from)?;
|
||||
verifier.verify(&mut rng, &header, &chain)
|
||||
};
|
||||
|
||||
// initialize the ancient block verifier if we don't have one already.
|
||||
match &mut *ancient_verifier {
|
||||
&mut Some(ref verifier) => {
|
||||
verify_with(verifier)?
|
||||
}
|
||||
x @ &mut None => {
|
||||
// load most recent epoch.
|
||||
trace!(target: "client", "Initializing ancient block restoration.");
|
||||
let current_epoch_data = chain.epoch_transitions()
|
||||
.take_while(|&(_, ref t)| t.block_number < header.number())
|
||||
.last()
|
||||
.map(|(_, t)| t.proof)
|
||||
.expect("At least one epoch entry (genesis) always stored; qed");
|
||||
|
||||
let current_verifier = self.engine.epoch_verifier(&header, &current_epoch_data)
|
||||
.known_confirmed()?;
|
||||
let current_verifier = AncientVerifier::new(self.engine.clone(), current_verifier);
|
||||
|
||||
verify_with(¤t_verifier)?;
|
||||
*x = Some(current_verifier);
|
||||
}
|
||||
}
|
||||
}
|
||||
// verify the block, passing the chain for updating the epoch verifier.
|
||||
let mut rng = OsRng::new().map_err(UtilError::from)?;
|
||||
self.ancient_verifier.verify(&mut rng, &header, &chain)?;
|
||||
|
||||
// Commit results
|
||||
let mut batch = DBTransaction::new();
|
||||
chain.insert_unordered_block(&mut batch, &block_bytes, receipts, None, false, true);
|
||||
chain.insert_unordered_block(&mut batch, block_bytes, receipts, None, false, true);
|
||||
// Final commit to the DB
|
||||
db.write_buffered(batch);
|
||||
chain.commit();
|
||||
@ -766,7 +739,9 @@ impl Client {
|
||||
report: RwLock::new(Default::default()),
|
||||
io_channel: Mutex::new(message_channel),
|
||||
notify: RwLock::new(Vec::new()),
|
||||
queue_transactions: AtomicUsize::new(0),
|
||||
queue_transactions: IoChannelQueue::new(MAX_TX_QUEUE_SIZE),
|
||||
queue_ancient_blocks: IoChannelQueue::new(MAX_ANCIENT_BLOCKS_QUEUE_SIZE),
|
||||
queue_consensus_message: IoChannelQueue::new(usize::max_value()),
|
||||
last_hashes: RwLock::new(VecDeque::new()),
|
||||
factories: factories,
|
||||
history: history,
|
||||
@ -852,7 +827,7 @@ impl Client {
|
||||
}
|
||||
|
||||
fn notify<F>(&self, f: F) where F: Fn(&ChainNotify) {
|
||||
for np in self.notify.read().iter() {
|
||||
for np in &*self.notify.read() {
|
||||
if let Some(n) = np.upgrade() {
|
||||
f(&*n);
|
||||
}
|
||||
@ -986,24 +961,6 @@ impl Client {
|
||||
}
|
||||
}
|
||||
|
||||
/// Import transactions from the IO queue
|
||||
pub fn import_queued_transactions(&self, transactions: &[Bytes], peer_id: usize) -> usize {
|
||||
trace_time!("import_queued_transactions");
|
||||
self.queue_transactions.fetch_sub(transactions.len(), AtomicOrdering::SeqCst);
|
||||
|
||||
let txs: Vec<UnverifiedTransaction> = transactions
|
||||
.iter()
|
||||
.filter_map(|bytes| Rlp::new(bytes).as_val().ok())
|
||||
.collect();
|
||||
|
||||
self.notify(|notify| {
|
||||
notify.transactions_received(&txs, peer_id);
|
||||
});
|
||||
|
||||
let results = self.importer.miner.import_external_transactions(self, txs);
|
||||
results.len()
|
||||
}
|
||||
|
||||
/// Get shared miner reference.
|
||||
#[cfg(test)]
|
||||
pub fn miner(&self) -> Arc<Miner> {
|
||||
@ -1293,8 +1250,7 @@ impl Client {
|
||||
=> Some(self.chain.read().best_block_header()),
|
||||
BlockId::Number(number) if number == self.chain.read().best_block_number()
|
||||
=> Some(self.chain.read().best_block_header()),
|
||||
_
|
||||
=> self.block_header(id).map(|h| h.decode()),
|
||||
_ => self.block_header(id).and_then(|h| h.decode().ok())
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -1424,22 +1380,6 @@ impl ImportBlock for Client {
|
||||
}
|
||||
Ok(self.importer.block_queue.import(unverified)?)
|
||||
}
|
||||
|
||||
fn import_block_with_receipts(&self, block_bytes: Bytes, receipts_bytes: Bytes) -> Result<H256, BlockImportError> {
|
||||
let header: Header = ::rlp::Rlp::new(&block_bytes).val_at(0)?;
|
||||
{
|
||||
// check block order
|
||||
if self.chain.read().is_known(&header.hash()) {
|
||||
bail!(BlockImportErrorKind::Import(ImportErrorKind::AlreadyInChain));
|
||||
}
|
||||
let status = self.block_status(BlockId::Hash(*header.parent_hash()));
|
||||
if status == BlockStatus::Unknown || status == BlockStatus::Pending {
|
||||
bail!(BlockImportErrorKind::Block(BlockError::UnknownParent(*header.parent_hash())));
|
||||
}
|
||||
}
|
||||
|
||||
self.importer.import_old_block(&header, block_bytes, receipts_bytes, &**self.db.read(), &*self.chain.read()).map_err(Into::into)
|
||||
}
|
||||
}
|
||||
|
||||
impl StateClient for Client {
|
||||
@ -1974,35 +1914,10 @@ impl BlockChainClient for Client {
|
||||
(*self.build_last_hashes(&self.chain.read().best_block_hash())).clone()
|
||||
}
|
||||
|
||||
fn queue_transactions(&self, transactions: Vec<Bytes>, peer_id: usize) {
|
||||
let queue_size = self.queue_transactions.load(AtomicOrdering::Relaxed);
|
||||
trace!(target: "external_tx", "Queue size: {}", queue_size);
|
||||
if queue_size > MAX_TX_QUEUE_SIZE {
|
||||
debug!("Ignoring {} transactions: queue is full", transactions.len());
|
||||
} else {
|
||||
let len = transactions.len();
|
||||
match self.io_channel.lock().send(ClientIoMessage::NewTransactions(transactions, peer_id)) {
|
||||
Ok(_) => {
|
||||
self.queue_transactions.fetch_add(len, AtomicOrdering::SeqCst);
|
||||
}
|
||||
Err(e) => {
|
||||
debug!("Ignoring {} transactions: error queueing: {}", len, e);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn ready_transactions(&self) -> Vec<Arc<VerifiedTransaction>> {
|
||||
self.importer.miner.ready_transactions(self)
|
||||
}
|
||||
|
||||
fn queue_consensus_message(&self, message: Bytes) {
|
||||
let channel = self.io_channel.lock().clone();
|
||||
if let Err(e) = channel.send(ClientIoMessage::NewMessage(message)) {
|
||||
debug!("Ignoring the message, error queueing: {}", e);
|
||||
}
|
||||
}
|
||||
|
||||
fn signing_chain_id(&self) -> Option<u64> {
|
||||
self.engine.signing_chain_id(&self.latest_env_info())
|
||||
}
|
||||
@ -2014,7 +1929,11 @@ impl BlockChainClient for Client {
|
||||
|
||||
fn uncle_extra_info(&self, id: UncleId) -> Option<BTreeMap<String, String>> {
|
||||
self.uncle(id)
|
||||
.map(|header| self.engine.extra_info(&header.decode()))
|
||||
.and_then(|h| {
|
||||
h.decode().map(|dh| {
|
||||
self.engine.extra_info(&dh)
|
||||
}).ok()
|
||||
})
|
||||
}
|
||||
|
||||
fn pruning_info(&self) -> PruningInfo {
|
||||
@ -2050,6 +1969,72 @@ impl BlockChainClient for Client {
|
||||
}
|
||||
}
|
||||
|
||||
impl IoClient for Client {
|
||||
fn queue_transactions(&self, transactions: Vec<Bytes>, peer_id: usize) {
|
||||
let len = transactions.len();
|
||||
self.queue_transactions.queue(&mut self.io_channel.lock(), len, move |client| {
|
||||
trace_time!("import_queued_transactions");
|
||||
|
||||
let txs: Vec<UnverifiedTransaction> = transactions
|
||||
.iter()
|
||||
.filter_map(|bytes| client.engine.decode_transaction(bytes).ok())
|
||||
.collect();
|
||||
|
||||
client.notify(|notify| {
|
||||
notify.transactions_received(&txs, peer_id);
|
||||
});
|
||||
|
||||
client.importer.miner.import_external_transactions(client, txs);
|
||||
}).unwrap_or_else(|e| {
|
||||
debug!(target: "client", "Ignoring {} transactions: {}", len, e);
|
||||
});
|
||||
}
|
||||
|
||||
fn queue_ancient_block(&self, block_bytes: Bytes, receipts_bytes: Bytes) -> Result<H256, BlockImportError> {
|
||||
let header: Header = ::rlp::Rlp::new(&block_bytes).val_at(0)?;
|
||||
let hash = header.hash();
|
||||
|
||||
{
|
||||
// check block order
|
||||
if self.chain.read().is_known(&header.hash()) {
|
||||
bail!(BlockImportErrorKind::Import(ImportErrorKind::AlreadyInChain));
|
||||
}
|
||||
let status = self.block_status(BlockId::Hash(*header.parent_hash()));
|
||||
if status == BlockStatus::Unknown || status == BlockStatus::Pending {
|
||||
bail!(BlockImportErrorKind::Block(BlockError::UnknownParent(*header.parent_hash())));
|
||||
}
|
||||
}
|
||||
|
||||
match self.queue_ancient_blocks.queue(&mut self.io_channel.lock(), 1, move |client| {
|
||||
client.importer.import_old_block(
|
||||
&header,
|
||||
&block_bytes,
|
||||
&receipts_bytes,
|
||||
&**client.db.read(),
|
||||
&*client.chain.read()
|
||||
).map(|_| ()).unwrap_or_else(|e| {
|
||||
error!(target: "client", "Error importing ancient block: {}", e);
|
||||
});
|
||||
}) {
|
||||
Ok(_) => Ok(hash),
|
||||
Err(e) => bail!(BlockImportErrorKind::Other(format!("{}", e))),
|
||||
}
|
||||
}
|
||||
|
||||
fn queue_consensus_message(&self, message: Bytes) {
|
||||
match self.queue_consensus_message.queue(&mut self.io_channel.lock(), 1, move |client| {
|
||||
if let Err(e) = client.engine().handle_message(&message) {
|
||||
debug!(target: "poa", "Invalid message received: {}", e);
|
||||
}
|
||||
}) {
|
||||
Ok(_) => (),
|
||||
Err(e) => {
|
||||
debug!(target: "poa", "Ignoring the message, error queueing: {}", e);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl ReopenBlock for Client {
|
||||
fn reopen_block(&self, block: ClosedBlock) -> OpenBlock {
|
||||
let engine = &*self.engine;
|
||||
@ -2066,7 +2051,8 @@ impl ReopenBlock for Client {
|
||||
for h in uncles {
|
||||
if !block.uncles().iter().any(|header| header.hash() == h) {
|
||||
let uncle = chain.block_header_data(&h).expect("find_uncle_hashes only returns hashes for existing headers; qed");
|
||||
block.push_uncle(uncle.decode()).expect("pushing up to maximum_uncle_count;
|
||||
let uncle = uncle.decode().expect("decoding failure");
|
||||
block.push_uncle(uncle).expect("pushing up to maximum_uncle_count;
|
||||
push_uncle is not ok only if more than maximum_uncle_count is pushed;
|
||||
so all push_uncle are Ok;
|
||||
qed");
|
||||
@ -2107,7 +2093,7 @@ impl PrepareOpenBlock for Client {
|
||||
.into_iter()
|
||||
.take(engine.maximum_uncle_count(open_block.header().number()))
|
||||
.foreach(|h| {
|
||||
open_block.push_uncle(h.decode()).expect("pushing maximum_uncle_count;
|
||||
open_block.push_uncle(h.decode().expect("decoding failure")).expect("pushing maximum_uncle_count;
|
||||
open_block was just created;
|
||||
push_uncle is not ok only if more than maximum_uncle_count is pushed;
|
||||
so all push_uncle are Ok;
|
||||
@ -2429,3 +2415,54 @@ mod tests {
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
enum QueueError {
|
||||
Channel(IoError),
|
||||
Full(usize),
|
||||
}
|
||||
|
||||
impl fmt::Display for QueueError {
|
||||
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
|
||||
match *self {
|
||||
QueueError::Channel(ref c) => fmt::Display::fmt(c, fmt),
|
||||
QueueError::Full(limit) => write!(fmt, "The queue is full ({})", limit),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Queue some items to be processed by IO client.
|
||||
struct IoChannelQueue {
|
||||
currently_queued: Arc<AtomicUsize>,
|
||||
limit: usize,
|
||||
}
|
||||
|
||||
impl IoChannelQueue {
|
||||
pub fn new(limit: usize) -> Self {
|
||||
IoChannelQueue {
|
||||
currently_queued: Default::default(),
|
||||
limit,
|
||||
}
|
||||
}
|
||||
|
||||
pub fn queue<F>(&self, channel: &mut IoChannel<ClientIoMessage>, count: usize, fun: F) -> Result<(), QueueError> where
|
||||
F: Fn(&Client) + Send + Sync + 'static,
|
||||
{
|
||||
let queue_size = self.currently_queued.load(AtomicOrdering::Relaxed);
|
||||
ensure!(queue_size < self.limit, QueueError::Full(self.limit));
|
||||
|
||||
let currently_queued = self.currently_queued.clone();
|
||||
let result = channel.send(ClientIoMessage::execute(move |client| {
|
||||
currently_queued.fetch_sub(count, AtomicOrdering::SeqCst);
|
||||
fun(client);
|
||||
}));
|
||||
|
||||
match result {
|
||||
Ok(_) => {
|
||||
self.currently_queued.fetch_add(count, AtomicOrdering::SeqCst);
|
||||
Ok(())
|
||||
},
|
||||
Err(e) => Err(QueueError::Channel(e)),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
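`IoChannelQueue`, added above, is what keeps ancient-block and transaction import from blocking sync: work is sent to the client's IO thread as closures, a shared counter tracks how many are outstanding, and enqueueing fails once the per-queue limit is reached. A self-contained sketch of the same back-pressure idea using a plain `std::sync::mpsc` channel in place of Parity's `IoChannel` (names and the limit are illustrative):

```rust
// Sketch of a bounded closure queue: the counter is bumped on enqueue and
// decremented when the closure actually runs, so a full queue rejects new work.
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::mpsc::{channel, Sender};
use std::sync::Arc;

type Job = Box<dyn FnOnce() + Send>;

struct IoChannelQueue {
    currently_queued: Arc<AtomicUsize>,
    limit: usize,
}

impl IoChannelQueue {
    fn queue<F>(&self, chan: &Sender<Job>, fun: F) -> Result<(), String>
    where
        F: FnOnce() + Send + 'static,
    {
        // Reject new work once the limit is reached.
        if self.currently_queued.load(Ordering::Relaxed) >= self.limit {
            return Err(format!("the queue is full ({})", self.limit));
        }
        let counter = self.currently_queued.clone();
        chan.send(Box::new(move || {
            // Decremented when the job actually runs on the IO thread...
            counter.fetch_sub(1, Ordering::SeqCst);
            fun();
        }))
        .map_err(|e| e.to_string())?;
        // ...and incremented as soon as it is queued.
        self.currently_queued.fetch_add(1, Ordering::SeqCst);
        Ok(())
    }
}

fn main() {
    let (tx, rx) = channel::<Job>();
    let queue = IoChannelQueue { currently_queued: Default::default(), limit: 1 };
    queue.queue(&tx, || println!("ancient block imported")).unwrap();
    assert!(queue.queue(&tx, || ()).is_err()); // rejected: the first job has not run yet
    for job in rx.try_iter() {
        job();
    }
}
```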
@ -14,19 +14,19 @@
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
use ethereum_types::H256;
|
||||
use std::fmt;
|
||||
use bytes::Bytes;
|
||||
use client::Client;
|
||||
use ethereum_types::H256;
|
||||
use snapshot::ManifestData;
|
||||
|
||||
/// Message type for external and internal events
|
||||
#[derive(Clone, PartialEq, Eq, Debug)]
|
||||
#[derive(Debug)]
|
||||
pub enum ClientIoMessage {
|
||||
/// Best Block Hash in chain has been changed
|
||||
NewChainHead,
|
||||
/// A block is ready
|
||||
BlockVerified,
|
||||
/// New transaction RLPs are ready to be imported
|
||||
NewTransactions(Vec<Bytes>, usize),
|
||||
/// Begin snapshot restoration
|
||||
BeginRestoration(ManifestData),
|
||||
/// Feed a state chunk to the snapshot service
|
||||
@ -35,9 +35,23 @@ pub enum ClientIoMessage {
|
||||
FeedBlockChunk(H256, Bytes),
|
||||
/// Take a snapshot for the block with given number.
|
||||
TakeSnapshot(u64),
|
||||
/// New consensus message received.
|
||||
NewMessage(Bytes),
|
||||
/// New private transaction arrived
|
||||
NewPrivateTransaction,
|
||||
/// Execute wrapped closure
|
||||
Execute(Callback),
|
||||
}
|
||||
|
||||
impl ClientIoMessage {
|
||||
/// Create new `ClientIoMessage` that executes given procedure.
|
||||
pub fn execute<F: Fn(&Client) + Send + Sync + 'static>(fun: F) -> Self {
|
||||
ClientIoMessage::Execute(Callback(Box::new(fun)))
|
||||
}
|
||||
}
|
||||
|
||||
/// A function to invoke in the client thread.
|
||||
pub struct Callback(pub Box<Fn(&Client) + Send + Sync>);
|
||||
|
||||
impl fmt::Debug for Callback {
|
||||
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
|
||||
write!(fmt, "<callback>")
|
||||
}
|
||||
}
|
||||
|
||||
|
@@ -36,9 +36,8 @@ pub use self::traits::{
Nonce, Balance, ChainInfo, BlockInfo, ReopenBlock, PrepareOpenBlock, CallContract, TransactionInfo, RegistryInfo, ScheduleInfo, ImportSealedBlock, BroadcastProposalBlock, ImportBlock,
StateOrBlock, StateClient, Call, EngineInfo, AccountData, BlockChain, BlockProducer, SealedBlockImporter
};
//pub use self::private_notify::PrivateNotify;
pub use state::StateInfo;
pub use self::traits::{BlockChainClient, EngineClient, ProvingBlockChainClient};
pub use self::traits::{BlockChainClient, EngineClient, ProvingBlockChainClient, IoClient};

pub use types::ids::*;
pub use types::trace_filter::Filter as TraceFilter;
|
@ -39,7 +39,7 @@ use client::{
|
||||
PrepareOpenBlock, BlockChainClient, BlockChainInfo, BlockStatus, BlockId,
|
||||
TransactionId, UncleId, TraceId, TraceFilter, LastHashes, CallAnalytics, BlockImportError,
|
||||
ProvingBlockChainClient, ScheduleInfo, ImportSealedBlock, BroadcastProposalBlock, ImportBlock, StateOrBlock,
|
||||
Call, StateClient, EngineInfo, AccountData, BlockChain, BlockProducer, SealedBlockImporter
|
||||
Call, StateClient, EngineInfo, AccountData, BlockChain, BlockProducer, SealedBlockImporter, IoClient
|
||||
};
|
||||
use db::{NUM_COLUMNS, COL_STATE};
|
||||
use header::{Header as BlockHeader, BlockNumber};
|
||||
@ -289,7 +289,7 @@ impl TestBlockChainClient {
|
||||
/// Make a bad block by setting invalid extra data.
|
||||
pub fn corrupt_block(&self, n: BlockNumber) {
|
||||
let hash = self.block_hash(BlockId::Number(n)).unwrap();
|
||||
let mut header: BlockHeader = self.block_header(BlockId::Number(n)).unwrap().decode();
|
||||
let mut header: BlockHeader = self.block_header(BlockId::Number(n)).unwrap().decode().expect("decoding failed");
|
||||
header.set_extra_data(b"This extra data is way too long to be considered valid".to_vec());
|
||||
let mut rlp = RlpStream::new_list(3);
|
||||
rlp.append(&header);
|
||||
@ -301,7 +301,7 @@ impl TestBlockChainClient {
|
||||
/// Make a bad block by setting invalid parent hash.
|
||||
pub fn corrupt_block_parent(&self, n: BlockNumber) {
|
||||
let hash = self.block_hash(BlockId::Number(n)).unwrap();
|
||||
let mut header: BlockHeader = self.block_header(BlockId::Number(n)).unwrap().decode();
|
||||
let mut header: BlockHeader = self.block_header(BlockId::Number(n)).unwrap().decode().expect("decoding failed");
|
||||
header.set_parent_hash(H256::from(42));
|
||||
let mut rlp = RlpStream::new_list(3);
|
||||
rlp.append(&header);
|
||||
@ -479,6 +479,7 @@ impl BlockInfo for TestBlockChainClient {
|
||||
self.block_header(BlockId::Hash(self.chain_info().best_block_hash))
|
||||
.expect("Best block always has header.")
|
||||
.decode()
|
||||
.expect("decoding failed")
|
||||
}
|
||||
|
||||
fn block(&self, id: BlockId) -> Option<encoded::Block> {
|
||||
@ -556,10 +557,6 @@ impl ImportBlock for TestBlockChainClient {
|
||||
}
|
||||
Ok(h)
|
||||
}
|
||||
|
||||
fn import_block_with_receipts(&self, b: Bytes, _r: Bytes) -> Result<H256, BlockImportError> {
|
||||
self.import_block(b)
|
||||
}
|
||||
}
|
||||
|
||||
impl Call for TestBlockChainClient {
|
||||
@ -809,16 +806,6 @@ impl BlockChainClient for TestBlockChainClient {
|
||||
self.traces.read().clone()
|
||||
}
|
||||
|
||||
fn queue_transactions(&self, transactions: Vec<Bytes>, _peer_id: usize) {
|
||||
// import right here
|
||||
let txs = transactions.into_iter().filter_map(|bytes| Rlp::new(&bytes).as_val().ok()).collect();
|
||||
self.miner.import_external_transactions(self, txs);
|
||||
}
|
||||
|
||||
fn queue_consensus_message(&self, message: Bytes) {
|
||||
self.spec.engine.handle_message(&message).unwrap();
|
||||
}
|
||||
|
||||
fn ready_transactions(&self) -> Vec<Arc<VerifiedTransaction>> {
|
||||
self.miner.ready_transactions(self)
|
||||
}
|
||||
@ -863,6 +850,22 @@ impl BlockChainClient for TestBlockChainClient {
|
||||
fn eip86_transition(&self) -> u64 { u64::max_value() }
|
||||
}
|
||||
|
||||
impl IoClient for TestBlockChainClient {
|
||||
fn queue_transactions(&self, transactions: Vec<Bytes>, _peer_id: usize) {
|
||||
// import right here
|
||||
let txs = transactions.into_iter().filter_map(|bytes| Rlp::new(&bytes).as_val().ok()).collect();
|
||||
self.miner.import_external_transactions(self, txs);
|
||||
}
|
||||
|
||||
fn queue_ancient_block(&self, b: Bytes, _r: Bytes) -> Result<H256, BlockImportError> {
|
||||
self.import_block(b)
|
||||
}
|
||||
|
||||
fn queue_consensus_message(&self, message: Bytes) {
|
||||
self.spec.engine.handle_message(&message).unwrap();
|
||||
}
|
||||
}
|
||||
|
||||
impl ProvingBlockChainClient for TestBlockChainClient {
|
||||
fn prove_storage(&self, _: H256, _: H256, _: BlockId) -> Option<(Vec<Bytes>, H256)> {
|
||||
None
|
||||
|
@ -168,9 +168,6 @@ pub trait RegistryInfo {
|
||||
pub trait ImportBlock {
|
||||
/// Import a block into the blockchain.
|
||||
fn import_block(&self, bytes: Bytes) -> Result<H256, BlockImportError>;
|
||||
|
||||
/// Import a block with transaction receipts. Does no sealing and transaction validation.
|
||||
fn import_block_with_receipts(&self, block_bytes: Bytes, receipts_bytes: Bytes) -> Result<H256, BlockImportError>;
|
||||
}
|
||||
|
||||
/// Provides `call_contract` method
|
||||
@ -201,8 +198,21 @@ pub trait EngineInfo {
|
||||
fn engine(&self) -> &EthEngine;
|
||||
}
|
||||
|
||||
/// IO operations that should off-load heavy work to another thread.
pub trait IoClient: Sync + Send {
	/// Queue transactions for importing.
	fn queue_transactions(&self, transactions: Vec<Bytes>, peer_id: usize);

	/// Queue block import with transaction receipts. Does no sealing and transaction validation.
	fn queue_ancient_block(&self, block_bytes: Bytes, receipts_bytes: Bytes) -> Result<H256, BlockImportError>;

	/// Queue consensus engine message.
	fn queue_consensus_message(&self, message: Bytes);
}
|
||||
|
||||
/// Blockchain database client. Owns and manages a blockchain and a block queue.
|
||||
pub trait BlockChainClient : Sync + Send + AccountData + BlockChain + CallContract + RegistryInfo + ImportBlock {
|
||||
pub trait BlockChainClient : Sync + Send + AccountData + BlockChain + CallContract + RegistryInfo + ImportBlock
|
||||
+ IoClient {
|
||||
/// Look up the block number for the given block ID.
|
||||
fn block_number(&self, id: BlockId) -> Option<BlockNumber>;
|
||||
|
||||
@ -310,12 +320,6 @@ pub trait BlockChainClient : Sync + Send + AccountData + BlockChain + CallContra
|
||||
/// Get last hashes starting from best block.
|
||||
fn last_hashes(&self) -> LastHashes;
|
||||
|
||||
/// Queue transactions for importing.
|
||||
fn queue_transactions(&self, transactions: Vec<Bytes>, peer_id: usize);
|
||||
|
||||
/// Queue consensus engine message.
|
||||
fn queue_consensus_message(&self, message: Bytes);
|
||||
|
||||
/// List all transactions that are allowed into the next block.
|
||||
fn ready_transactions(&self) -> Vec<Arc<VerifiedTransaction>>;
|
||||
|
||||
|
@ -218,15 +218,12 @@ impl Writable for DBTransaction {
}

impl<KVDB: KeyValueDB + ?Sized> Readable for KVDB {
	fn read<T, R>(&self, col: Option<u32>, key: &Key<T, Target = R>) -> Option<T> where T: rlp::Decodable, R: Deref<Target = [u8]> {
		let result = self.get(col, &key.key());
	fn read<T, R>(&self, col: Option<u32>, key: &Key<T, Target = R>) -> Option<T>
		where T: rlp::Decodable, R: Deref<Target = [u8]> {
		self.get(col, &key.key())
			.expect(&format!("db get failed, key: {:?}", &key.key() as &[u8]))
			.map(|v| rlp::decode(&v).expect("decode db value failed") )

		match result {
			Ok(option) => option.map(|v| rlp::decode(&v)),
			Err(err) => {
				panic!("db get failed, key: {:?}, err: {:?}", &key.key() as &[u8], err);
			}
		}
	}
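The new read path treats both a failed DB get and an undecodable value as unrecoverable corruption. A self-contained sketch of that pattern (the helper name is illustrative, not from the codebase):

	fn decode_or_die<T: rlp::Decodable>(raw: &[u8], key: &[u8]) -> T {
		// a DecoderError here means the database handed back corrupt bytes,
		// so panicking with context is preferable to silently dropping data
		rlp::decode(raw).unwrap_or_else(|e| panic!("db value for key {:?} is corrupt: {:?}", key, e))
	}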
|
||||
|
||||
fn exists<T, R>(&self, col: Option<u32>, key: &Key<T, Target = R>) -> bool where R: Deref<Target = [u8]> {
|
||||
|
@ -24,13 +24,12 @@
|
||||
//! decoded object where parts like the hash can be saved.
|
||||
|
||||
use block::Block as FullBlock;
|
||||
use header::{BlockNumber, Header as FullHeader};
|
||||
use transaction::UnverifiedTransaction;
|
||||
|
||||
use hash::keccak;
|
||||
use heapsize::HeapSizeOf;
|
||||
use ethereum_types::{H256, Bloom, U256, Address};
|
||||
use rlp::{Rlp, RlpStream};
|
||||
use hash::keccak;
|
||||
use header::{BlockNumber, Header as FullHeader};
|
||||
use heapsize::HeapSizeOf;
|
||||
use rlp::{self, Rlp, RlpStream};
|
||||
use transaction::UnverifiedTransaction;
|
||||
use views::{self, BlockView, HeaderView, BodyView};
|
||||
|
||||
/// Owning header view.
|
||||
@ -48,7 +47,9 @@ impl Header {
|
||||
pub fn new(encoded: Vec<u8>) -> Self { Header(encoded) }
|
||||
|
||||
/// Upgrade this encoded view to a fully owned `Header` object.
|
||||
pub fn decode(&self) -> FullHeader { ::rlp::decode(&self.0) }
|
||||
pub fn decode(&self) -> Result<FullHeader, rlp::DecoderError> {
|
||||
rlp::decode(&self.0)
|
||||
}
|
||||
|
||||
/// Get a borrowed header view onto the data.
|
||||
#[inline]
|
||||
@ -205,7 +206,7 @@ impl Block {
|
||||
pub fn header_view(&self) -> HeaderView { self.view().header_view() }
|
||||
|
||||
/// Decode to a full block.
|
||||
pub fn decode(&self) -> FullBlock { ::rlp::decode(&self.0) }
|
||||
pub fn decode(&self) -> Result<FullBlock, rlp::DecoderError> { rlp::decode(&self.0) }
|
||||
|
||||
/// Decode the header.
|
||||
pub fn decode_header(&self) -> FullHeader { self.view().rlp().val_at(0) }
|
||||
|
@ -996,7 +996,7 @@ impl Engine<EthereumMachine> for AuthorityRound {
|
||||
|
||||
let parent = client.block_header(::client::BlockId::Hash(*block.header().parent_hash()))
|
||||
.expect("hash is from parent; parent header must exist; qed")
|
||||
.decode();
|
||||
.decode()?;
|
||||
|
||||
let parent_step = header_step(&parent, self.empty_steps_transition)?;
|
||||
let current_step = self.step.load();
|
||||
|
@ -426,6 +426,11 @@ pub trait EthEngine: Engine<::machine::EthereumMachine> {
|
||||
fn additional_params(&self) -> HashMap<String, String> {
|
||||
self.machine().additional_params()
|
||||
}
|
||||
|
||||
/// Performs pre-validation of RLP decoded transaction before other processing
|
||||
fn decode_transaction(&self, transaction: &[u8]) -> Result<UnverifiedTransaction, transaction::Error> {
|
||||
self.machine().decode_transaction(transaction)
|
||||
}
|
||||
}
|
||||
|
||||
// convenience wrappers for existing functions.
|
||||
|
@ -142,8 +142,10 @@ impl <F> super::EpochVerifier<EthereumMachine> for EpochVerifier<F>
	}

	fn check_finality_proof(&self, proof: &[u8]) -> Option<Vec<H256>> {
		let header: Header = ::rlp::decode(proof);
		self.verify_light(&header).ok().map(|_| vec![header.hash()])
		match ::rlp::decode(proof) {
			Ok(header) => self.verify_light(&header).ok().map(|_| vec![header.hash()]),
			Err(_) => None // REVIEW: log perhaps? Not sure what the policy is.
		}
	}
}
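The same logic could be written by chaining on the Result, which a follow-up might prefer; a sketch of an equivalent body (assuming the same Header import as above):

		// decode the proof, then only keep the hash if light verification succeeds
		::rlp::decode::<Header>(proof).ok()
			.and_then(|header| self.verify_light(&header).ok().map(|_| vec![header.hash()]))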
|
||||
|
||||
|
@ -290,6 +290,12 @@ error_chain! {
|
||||
description("Unknown engine name")
|
||||
display("Unknown engine name ({})", name)
|
||||
}
|
||||
|
||||
#[doc = "RLP decoding errors"]
|
||||
Decoder(err: ::rlp::DecoderError) {
|
||||
description("decoding value failed")
|
||||
display("decoding value failed with error: {}", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -310,11 +316,11 @@ impl From<AccountsError> for Error {
|
||||
fn from(err: AccountsError) -> Error {
|
||||
ErrorKind::AccountProvider(err).into()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl From<::rlp::DecoderError> for Error {
|
||||
fn from(err: ::rlp::DecoderError) -> Error {
|
||||
UtilError::from(err).into()
|
||||
ErrorKind::Decoder(err).into()
|
||||
}
|
||||
}
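With the Decoder variant added to the error_chain above, RLP failures can now bubble up through ? as an ethcore Error instead of being routed through UtilError; a minimal sketch (the helper name is illustrative):

	fn decode_header(bytes: &[u8]) -> Result<::header::Header, Error> {
		// ::rlp::DecoderError converts into Error via the new From impl
		Ok(::rlp::decode(bytes)?)
	}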
|
||||
|
||||
|
@ -428,8 +428,14 @@ impl<'a, B: 'a + StateBackend> Executive<'a, B> {
|
||||
self.state.discard_checkpoint();
|
||||
output.write(0, &builtin_out_buffer);
|
||||
|
||||
// trace only top level calls to builtins to avoid DDoS attacks
|
||||
if self.depth == 0 {
|
||||
// Trace only top level calls and calls with balance transfer to builtins. The reason why we don't
|
||||
// trace all internal calls to builtin contracts is that memcpy (IDENTITY) is a heavily used
|
||||
// function.
|
||||
let is_transferred = match params.value {
|
||||
ActionValue::Transfer(value) => value != U256::zero(),
|
||||
ActionValue::Apparent(_) => false,
|
||||
};
|
||||
if self.depth == 0 || is_transferred {
|
||||
let mut trace_output = tracer.prepare_trace_output();
|
||||
if let Some(out) = trace_output.as_mut() {
|
||||
*out = output.to_owned();
|
||||
@ -722,6 +728,12 @@ mod tests {
|
||||
machine
|
||||
}
|
||||
|
||||
fn make_byzantium_machine(max_depth: usize) -> EthereumMachine {
|
||||
let mut machine = ::ethereum::new_byzantium_test_machine();
|
||||
machine.set_schedule_creation_rules(Box::new(move |s, _| s.max_depth = max_depth));
|
||||
machine
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_contract_address() {
|
||||
let address = Address::from_str("0f572e5295c57f15886f9b263e2f6d2d6c7b5ec6").unwrap();
|
||||
@ -813,6 +825,76 @@ mod tests {
|
||||
assert_eq!(substate.contracts_created.len(), 0);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_call_to_precompiled_tracing() {
|
||||
// code:
|
||||
//
|
||||
// 60 00 - push 00 out size
|
||||
// 60 00 - push 00 out offset
|
||||
// 60 00 - push 00 in size
|
||||
// 60 00 - push 00 in offset
|
||||
// 60 01 - push 01 value
|
||||
// 60 03 - push 03 to
|
||||
// 61 ffff - push ffff gas
|
||||
// f1 - CALL
|
||||
|
||||
let code = "60006000600060006001600361fffff1".from_hex().unwrap();
|
||||
let sender = Address::from_str("4444444444444444444444444444444444444444").unwrap();
|
||||
let address = Address::from_str("5555555555555555555555555555555555555555").unwrap();
|
||||
|
||||
let mut params = ActionParams::default();
|
||||
params.address = address.clone();
|
||||
params.code_address = address.clone();
|
||||
params.sender = sender.clone();
|
||||
params.origin = sender.clone();
|
||||
params.gas = U256::from(100_000);
|
||||
params.code = Some(Arc::new(code));
|
||||
params.value = ActionValue::Transfer(U256::from(100));
|
||||
params.call_type = CallType::Call;
|
||||
let mut state = get_temp_state();
|
||||
state.add_balance(&sender, &U256::from(100), CleanupMode::NoEmpty).unwrap();
|
||||
let info = EnvInfo::default();
|
||||
let machine = make_byzantium_machine(5);
|
||||
let mut substate = Substate::new();
|
||||
let mut tracer = ExecutiveTracer::default();
|
||||
let mut vm_tracer = ExecutiveVMTracer::toplevel();
|
||||
|
||||
let mut ex = Executive::new(&mut state, &info, &machine);
|
||||
let output = BytesRef::Fixed(&mut[0u8;0]);
|
||||
ex.call(params, &mut substate, output, &mut tracer, &mut vm_tracer).unwrap();
|
||||
|
||||
assert_eq!(tracer.drain(), vec![FlatTrace {
|
||||
action: trace::Action::Call(trace::Call {
|
||||
from: "4444444444444444444444444444444444444444".into(),
|
||||
to: "5555555555555555555555555555555555555555".into(),
|
||||
value: 100.into(),
|
||||
gas: 100_000.into(),
|
||||
input: vec![],
|
||||
call_type: CallType::Call
|
||||
}),
|
||||
result: trace::Res::Call(trace::CallResult {
|
||||
gas_used: 33021.into(),
|
||||
output: vec![]
|
||||
}),
|
||||
subtraces: 1,
|
||||
trace_address: Default::default()
|
||||
}, FlatTrace {
|
||||
action: trace::Action::Call(trace::Call {
|
||||
from: "5555555555555555555555555555555555555555".into(),
|
||||
to: "0000000000000000000000000000000000000003".into(),
|
||||
value: 1.into(),
|
||||
gas: 66560.into(),
|
||||
input: vec![],
|
||||
call_type: CallType::Call
|
||||
}), result: trace::Res::Call(trace::CallResult {
|
||||
gas_used: 600.into(),
|
||||
output: vec![]
|
||||
}),
|
||||
subtraces: 0,
|
||||
trace_address: vec![0].into_iter().collect(),
|
||||
}]);
|
||||
}
|
||||
|
||||
#[test]
|
||||
// Tracing is not supported in JIT
|
||||
fn test_call_to_create() {
|
||||
|
@ -398,7 +398,7 @@ mod tests {
|
||||
let nonce = "88ab4e252a7e8c2a23".from_hex().unwrap();
|
||||
let nonce_decoded = "ab4e252a7e8c2a23".from_hex().unwrap();
|
||||
|
||||
let header: Header = rlp::decode(&header_rlp);
|
||||
let header: Header = rlp::decode(&header_rlp).expect("error decoding header");
|
||||
let seal_fields = header.seal.clone();
|
||||
assert_eq!(seal_fields.len(), 2);
|
||||
assert_eq!(seal_fields[0], mix_hash);
|
||||
@ -415,7 +415,7 @@ mod tests {
|
||||
// that's rlp of block header created with ethash engine.
|
||||
let header_rlp = "f901f9a0d405da4e66f1445d455195229624e133f5baafe72b5cf7b3c36c12c8146e98b7a01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347948888f1f195afa192cfee860698584c030f4c9db1a05fb2b4bfdef7b314451cb138a534d225c922fc0e5fbe25e451142732c3e25c25a088d2ec6b9860aae1a2c3b299f72b6a5d70d7f7ba4722c78f2c49ba96273c2158a007c6fdfa8eea7e86b81f5b0fc0f78f90cc19f4aa60d323151e0cac660199e9a1b90100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008302008003832fefba82524d84568e932a80a0a0349d8c3df71f1a48a9df7d03fd5f14aeee7d91332c009ecaff0a71ead405bd88ab4e252a7e8c2a23".from_hex().unwrap();
|
||||
|
||||
let header: Header = rlp::decode(&header_rlp);
|
||||
let header: Header = rlp::decode(&header_rlp).expect("error decoding header");
|
||||
let encoded_header = rlp::encode(&header).into_vec();
|
||||
|
||||
assert_eq!(header_rlp, encoded_header);
|
||||
|
@ -34,6 +34,7 @@ use tx_filter::TransactionFilter;
|
||||
|
||||
use ethereum_types::{U256, Address};
|
||||
use bytes::BytesRef;
|
||||
use rlp::Rlp;
|
||||
use vm::{CallType, ActionParams, ActionValue, ParamsType};
|
||||
use vm::{EnvInfo, Schedule, CreateContractAddress};
|
||||
|
||||
@ -121,7 +122,13 @@ impl EthereumMachine {
}

impl EthereumMachine {
	/// Execute a call as the system address.
	/// Execute a call as the system address. Block environment information passed to the
	/// VM is modified to have its gas limit bounded at the upper limit of possible used
	/// gases including this system call, capped at the maximum value able to be
	/// represented by U256. This system call modifies the block state, but discards other
	/// information. If suicides, logs or refunds happen within the system call, they
	/// will not be executed or recorded. Gas used by this system call will not be counted
	/// on the block.
	pub fn execute_as_system(
		&self,
		block: &mut ExecutedBlock,
@ -131,7 +138,7 @@ impl EthereumMachine {
	) -> Result<Vec<u8>, Error> {
		let env_info = {
			let mut env_info = block.env_info();
			env_info.gas_limit = env_info.gas_used + gas;
			env_info.gas_limit = env_info.gas_used.saturating_add(gas);
			env_info
		};
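Why saturating_add: if gas_used is already close to U256::max_value(), a plain + overflows and triggers the "arithmetic operation overflow" panic that #8611 fixes. A small self-contained illustration of the replacement operation:

	use ethereum_types::U256;

	fn bounded_gas_limit(gas_used: U256, gas: U256) -> U256 {
		// caps at U256::max_value() instead of panicking on overflow
		gas_used.saturating_add(gas)
	}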
|
||||
|
||||
@ -376,6 +383,16 @@ impl EthereumMachine {
			"registrar".to_owned() => format!("{:x}", self.params.registrar)
		]
	}

	/// Performs pre-validation of RLP decoded transaction before other processing
	pub fn decode_transaction(&self, transaction: &[u8]) -> Result<UnverifiedTransaction, transaction::Error> {
		let rlp = Rlp::new(&transaction);
		if rlp.as_raw().len() > self.params().max_transaction_size {
			debug!("Rejected oversized transaction of {} bytes", rlp.as_raw().len());
			return Err(transaction::Error::TooBig)
		}
		rlp.as_val().map_err(|e| transaction::Error::InvalidRlp(e.to_string()))
	}
}
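Callers (the pool client further down, and the verifier mentioned in the changelog) go through this single entry point, so the cheap size check runs before any signature or nonce work. A hedged usage sketch; raw_tx and pool_import are illustrative stand-ins, not names from the codebase:

	// raw_tx holds the RLP bytes received from a peer
	match machine.decode_transaction(&raw_tx) {
		Ok(tx) => pool_import(tx),
		Err(transaction::Error::TooBig) => debug!("dropping oversized transaction"),
		Err(e) => debug!("invalid transaction rlp: {:?}", e),
	}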
|
||||
|
||||
/// Auxiliary data fetcher for an Ethereum machine. In Ethereum-like machines
|
||||
|
@ -528,8 +528,8 @@ impl Miner {
|
||||
}
|
||||
|
||||
/// Attempts to perform internal sealing (one that does not require work) and handles the result depending on the type of Seal.
|
||||
fn seal_and_import_block_internally<C>(&self, chain: &C, block: ClosedBlock) -> bool where
|
||||
C: BlockChain + SealedBlockImporter,
|
||||
fn seal_and_import_block_internally<C>(&self, chain: &C, block: ClosedBlock) -> bool
|
||||
where C: BlockChain + SealedBlockImporter,
|
||||
{
|
||||
{
|
||||
let sealing = self.sealing.lock();
|
||||
@ -544,7 +544,12 @@ impl Miner {
|
||||
trace!(target: "miner", "seal_block_internally: attempting internal seal.");
|
||||
|
||||
let parent_header = match chain.block_header(BlockId::Hash(*block.header().parent_hash())) {
|
||||
Some(hdr) => hdr.decode(),
|
||||
Some(h) => {
|
||||
match h.decode() {
|
||||
Ok(decoded_hdr) => decoded_hdr,
|
||||
Err(_) => return false
|
||||
}
|
||||
}
|
||||
None => return false,
|
||||
};
|
||||
|
||||
|
@ -145,6 +145,10 @@ impl<'a, C: 'a> pool::client::Client for PoolClient<'a, C> where
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn decode_transaction(&self, transaction: &[u8]) -> Result<UnverifiedTransaction, transaction::Error> {
|
||||
self.engine.decode_transaction(transaction)
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a, C: 'a> NonceClient for PoolClient<'a, C> where
|
||||
|
@ -236,7 +236,7 @@ mod tests {
|
||||
};
|
||||
|
||||
let thin_rlp = ::rlp::encode(&account);
|
||||
assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp), account);
|
||||
assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp).unwrap(), account);
|
||||
|
||||
let fat_rlps = to_fat_rlps(&keccak(&addr), &account, &AccountDB::new(db.as_hashdb(), &addr), &mut Default::default(), usize::max_value(), usize::max_value()).unwrap();
|
||||
let fat_rlp = Rlp::new(&fat_rlps[0]).at(1).unwrap();
|
||||
@ -261,7 +261,7 @@ mod tests {
|
||||
};
|
||||
|
||||
let thin_rlp = ::rlp::encode(&account);
|
||||
assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp), account);
|
||||
assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp).unwrap(), account);
|
||||
|
||||
let fat_rlp = to_fat_rlps(&keccak(&addr), &account, &AccountDB::new(db.as_hashdb(), &addr), &mut Default::default(), usize::max_value(), usize::max_value()).unwrap();
|
||||
let fat_rlp = Rlp::new(&fat_rlp[0]).at(1).unwrap();
|
||||
@ -286,7 +286,7 @@ mod tests {
|
||||
};
|
||||
|
||||
let thin_rlp = ::rlp::encode(&account);
|
||||
assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp), account);
|
||||
assert_eq!(::rlp::decode::<BasicAccount>(&thin_rlp).unwrap(), account);
|
||||
|
||||
let fat_rlps = to_fat_rlps(&keccak(addr), &account, &AccountDB::new(db.as_hashdb(), &addr), &mut Default::default(), 500, 1000).unwrap();
|
||||
let mut root = KECCAK_NULL_RLP;
|
||||
|
@ -100,7 +100,7 @@ impl SnapshotComponents for PoaSnapshot {
|
||||
let (block, receipts) = chain.block(&block_at)
|
||||
.and_then(|b| chain.block_receipts(&block_at).map(|r| (b, r)))
|
||||
.ok_or(Error::BlockNotFound(block_at))?;
|
||||
let block = block.decode();
|
||||
let block = block.decode()?;
|
||||
|
||||
let parent_td = chain.block_details(block.header.parent_hash())
|
||||
.map(|d| d.total_difficulty)
|
||||
|
@ -281,7 +281,7 @@ pub fn chunk_state<'a>(db: &HashDB, root: &H256, writer: &Mutex<SnapshotWriter +
|
||||
// account_key here is the address' hash.
|
||||
for item in account_trie.iter()? {
|
||||
let (account_key, account_data) = item?;
|
||||
let account = ::rlp::decode(&*account_data);
|
||||
let account = ::rlp::decode(&*account_data)?;
|
||||
let account_key_hash = H256::from_slice(&account_key);
|
||||
|
||||
let account_db = AccountDB::from_hash(db, account_key_hash);
|
||||
@ -467,10 +467,10 @@ fn rebuild_accounts(
|
||||
*out = (hash, thin_rlp);
|
||||
}
|
||||
if let Some(&(ref hash, ref rlp)) = out_chunk.iter().last() {
|
||||
known_storage_roots.insert(*hash, ::rlp::decode::<BasicAccount>(rlp).storage_root);
|
||||
known_storage_roots.insert(*hash, ::rlp::decode::<BasicAccount>(rlp)?.storage_root);
|
||||
}
|
||||
if let Some(&(ref hash, ref rlp)) = out_chunk.iter().next() {
|
||||
known_storage_roots.insert(*hash, ::rlp::decode::<BasicAccount>(rlp).storage_root);
|
||||
known_storage_roots.insert(*hash, ::rlp::decode::<BasicAccount>(rlp)?.storage_root);
|
||||
}
|
||||
Ok(status)
|
||||
}
|
||||
@ -487,7 +487,7 @@ pub fn verify_old_block(rng: &mut OsRng, header: &Header, engine: &EthEngine, ch
|
||||
if always || rng.gen::<f32>() <= POW_VERIFY_RATE {
|
||||
engine.verify_block_unordered(header)?;
|
||||
match chain.block_header_data(header.parent_hash()) {
|
||||
Some(parent) => engine.verify_block_family(header, &parent.decode()),
|
||||
Some(parent) => engine.verify_block_family(header, &parent.decode()?),
|
||||
None => Ok(()),
|
||||
}
|
||||
} else {
|
||||
|
@ -75,7 +75,7 @@ impl StateProducer {
|
||||
|
||||
// sweep once to alter storage tries.
|
||||
for &mut (ref mut address_hash, ref mut account_data) in &mut accounts_to_modify {
|
||||
let mut account: BasicAccount = ::rlp::decode(&*account_data);
|
||||
let mut account: BasicAccount = ::rlp::decode(&*account_data).expect("error decoding basic account");
|
||||
let acct_db = AccountDBMut::from_hash(db, *address_hash);
|
||||
fill_storage(acct_db, &mut account.storage_root, &mut self.storage_seed);
|
||||
*account_data = DBValue::from_vec(::rlp::encode(&account).into_vec());
|
||||
|
@ -114,7 +114,7 @@ fn get_code_from_prev_chunk() {
|
||||
// first one will have code inlined,
|
||||
// second will just have its hash.
|
||||
let thin_rlp = acc_stream.out();
|
||||
let acc: BasicAccount = ::rlp::decode(&thin_rlp);
|
||||
let acc: BasicAccount = ::rlp::decode(&thin_rlp).expect("error decoding basic account");
|
||||
|
||||
let mut make_chunk = |acc, hash| {
|
||||
let mut db = MemoryDB::new();
|
||||
|
@ -48,6 +48,8 @@ use trace::{NoopTracer, NoopVMTracer};
|
||||
|
||||
pub use ethash::OptimizeFor;
|
||||
|
||||
const MAX_TRANSACTION_SIZE: usize = 300 * 1024;
|
||||
|
||||
// helper for formatting errors.
|
||||
fn fmt_err<F: ::std::fmt::Display>(f: F) -> String {
|
||||
format!("Spec json is invalid: {}", f)
|
||||
@ -123,6 +125,8 @@ pub struct CommonParams {
|
||||
pub max_code_size_transition: BlockNumber,
|
||||
/// Transaction permission managing contract address.
|
||||
pub transaction_permission_contract: Option<Address>,
|
||||
/// Maximum size of transaction's RLP payload
|
||||
pub max_transaction_size: usize,
|
||||
}
|
||||
|
||||
impl CommonParams {
|
||||
@ -238,6 +242,7 @@ impl From<ethjson::spec::Params> for CommonParams {
|
||||
registrar: p.registrar.map_or_else(Address::new, Into::into),
|
||||
node_permission_contract: p.node_permission_contract.map(Into::into),
|
||||
max_code_size: p.max_code_size.map_or(u64::max_value(), Into::into),
|
||||
max_transaction_size: p.max_transaction_size.map_or(MAX_TRANSACTION_SIZE, Into::into),
|
||||
max_code_size_transition: p.max_code_size_transition.map_or(0, Into::into),
|
||||
transaction_permission_contract: p.transaction_permission_contract.map(Into::into),
|
||||
wasm_activation_transition: p.wasm_activation_transition.map_or(
|
||||
|
@ -21,6 +21,7 @@ use std::sync::Arc;
|
||||
use std::collections::{HashMap, BTreeMap};
|
||||
use hash::{KECCAK_EMPTY, KECCAK_NULL_RLP, keccak};
|
||||
use ethereum_types::{H256, U256, Address};
|
||||
use error::Error;
|
||||
use hashdb::HashDB;
|
||||
use kvdb::DBValue;
|
||||
use bytes::{Bytes, ToPretty};
|
||||
@ -144,9 +145,10 @@ impl Account {
|
||||
}
|
||||
|
||||
/// Create a new account from RLP.
|
||||
pub fn from_rlp(rlp: &[u8]) -> Account {
|
||||
let basic: BasicAccount = ::rlp::decode(rlp);
|
||||
basic.into()
|
||||
pub fn from_rlp(rlp: &[u8]) -> Result<Account, Error> {
|
||||
::rlp::decode::<BasicAccount>(rlp)
|
||||
.map(|ba| ba.into())
|
||||
.map_err(|e| e.into())
|
||||
}
|
||||
|
||||
/// Create a new contract account.
|
||||
@ -202,8 +204,8 @@ impl Account {
|
||||
return Ok(value);
|
||||
}
|
||||
let db = SecTrieDB::new(db, &self.storage_root)?;
|
||||
|
||||
let item: U256 = db.get_with(key, ::rlp::decode)?.unwrap_or_else(U256::zero);
|
||||
let panicky_decoder = |bytes:&[u8]| ::rlp::decode(&bytes).expect("decoding db value failed");
|
||||
let item: U256 = db.get_with(key, panicky_decoder)?.unwrap_or_else(U256::zero);
|
||||
let value: H256 = item.into();
|
||||
self.storage_cache.borrow_mut().insert(key.clone(), value.clone());
|
||||
Ok(value)
|
||||
@ -478,7 +480,8 @@ impl Account {
|
||||
|
||||
let trie = TrieDB::new(db, &self.storage_root)?;
|
||||
let item: U256 = {
|
||||
let query = (&mut recorder, ::rlp::decode);
|
||||
let panicky_decoder = |bytes:&[u8]| ::rlp::decode(bytes).expect("decoding db value failed");
|
||||
let query = (&mut recorder, panicky_decoder);
|
||||
trie.get_with(&storage_key, query)?.unwrap_or_else(U256::zero)
|
||||
};
|
||||
|
||||
@ -528,7 +531,7 @@ mod tests {
|
||||
a.rlp()
|
||||
};
|
||||
|
||||
let a = Account::from_rlp(&rlp);
|
||||
let a = Account::from_rlp(&rlp).expect("decoding db value failed");
|
||||
assert_eq!(*a.storage_root().unwrap(), "c57e1afb758b07f8d2c8f13a3b6e44fa5ff94ab266facc5a4fd3f062426e50b2".into());
|
||||
assert_eq!(a.storage_at(&db.immutable(), &0x00u64.into()).unwrap(), 0x1234u64.into());
|
||||
assert_eq!(a.storage_at(&db.immutable(), &0x01u64.into()).unwrap(), H256::default());
|
||||
@ -546,10 +549,10 @@ mod tests {
|
||||
a.rlp()
|
||||
};
|
||||
|
||||
let mut a = Account::from_rlp(&rlp);
|
||||
let mut a = Account::from_rlp(&rlp).expect("decoding db value failed");
|
||||
assert!(a.cache_code(&db.immutable()).is_some());
|
||||
|
||||
let mut a = Account::from_rlp(&rlp);
|
||||
let mut a = Account::from_rlp(&rlp).expect("decoding db value failed");
|
||||
assert_eq!(a.note_code(vec![0x55, 0x44, 0xffu8]), Ok(()));
|
||||
}
|
||||
|
||||
@ -609,7 +612,7 @@ mod tests {
|
||||
#[test]
|
||||
fn rlpio() {
|
||||
let a = Account::new(69u8.into(), 0u8.into(), HashMap::new(), Bytes::new());
|
||||
let b = Account::from_rlp(&a.rlp());
|
||||
let b = Account::from_rlp(&a.rlp()).unwrap();
|
||||
assert_eq!(a.balance(), b.balance());
|
||||
assert_eq!(a.nonce(), b.nonce());
|
||||
assert_eq!(a.code_hash(), b.code_hash());
|
||||
|
@ -605,7 +605,8 @@ impl<B: Backend> State<B> {
|
||||
|
||||
// account is not found in the global cache, get from the DB and insert into local
|
||||
let db = self.factories.trie.readonly(self.db.as_hashdb(), &self.root).expect(SEC_TRIE_DB_UNWRAP_STR);
|
||||
let maybe_acc = db.get_with(address, Account::from_rlp)?;
|
||||
let from_rlp = |b: &[u8]| Account::from_rlp(b).expect("decoding db value failed");
|
||||
let maybe_acc = db.get_with(address, from_rlp)?;
|
||||
let r = maybe_acc.as_ref().map_or(Ok(H256::new()), |a| {
|
||||
let account_db = self.factories.accountdb.readonly(self.db.as_hashdb(), a.address_hash(address));
|
||||
a.storage_at(account_db.as_hashdb(), key)
|
||||
@ -983,7 +984,8 @@ impl<B: Backend> State<B> {
|
||||
|
||||
// not found in the global cache, get from the DB and insert into local
|
||||
let db = self.factories.trie.readonly(self.db.as_hashdb(), &self.root)?;
|
||||
let mut maybe_acc = db.get_with(a, Account::from_rlp)?;
|
||||
let from_rlp = |b: &[u8]| Account::from_rlp(b).expect("decoding db value failed");
|
||||
let mut maybe_acc = db.get_with(a, from_rlp)?;
|
||||
if let Some(ref mut account) = maybe_acc.as_mut() {
|
||||
let accountdb = self.factories.accountdb.readonly(self.db.as_hashdb(), account.address_hash(a));
|
||||
Self::update_account_cache(require, account, &self.db, accountdb.as_hashdb());
|
||||
@ -1012,7 +1014,8 @@ impl<B: Backend> State<B> {
|
||||
None => {
|
||||
let maybe_acc = if !self.db.is_known_null(a) {
|
||||
let db = self.factories.trie.readonly(self.db.as_hashdb(), &self.root)?;
|
||||
AccountEntry::new_clean(db.get_with(a, Account::from_rlp)?)
|
||||
let from_rlp = |b:&[u8]| { Account::from_rlp(b).expect("decoding db value failed") };
|
||||
AccountEntry::new_clean(db.get_with(a, from_rlp)?)
|
||||
} else {
|
||||
AccountEntry::new_clean(None)
|
||||
};
|
||||
@ -1064,7 +1067,10 @@ impl<B: Backend> State<B> {
|
||||
let mut recorder = Recorder::new();
|
||||
let trie = TrieDB::new(self.db.as_hashdb(), &self.root)?;
|
||||
let maybe_account: Option<BasicAccount> = {
|
||||
let query = (&mut recorder, ::rlp::decode);
|
||||
let panicky_decoder = |bytes: &[u8]| {
|
||||
::rlp::decode(bytes).expect(&format!("prove_account, could not query trie for account key={}", &account_key))
|
||||
};
|
||||
let query = (&mut recorder, panicky_decoder);
|
||||
trie.get_with(&account_key, query)?
|
||||
};
|
||||
let account = maybe_account.unwrap_or_else(|| BasicAccount {
|
||||
@ -1086,7 +1092,8 @@ impl<B: Backend> State<B> {
|
||||
// TODO: probably could look into cache somehow but it's keyed by
|
||||
// address, not keccak(address).
|
||||
let trie = TrieDB::new(self.db.as_hashdb(), &self.root)?;
|
||||
let acc = match trie.get_with(&account_key, Account::from_rlp)? {
|
||||
let from_rlp = |b: &[u8]| Account::from_rlp(b).expect("decoding db value failed");
|
||||
let acc = match trie.get_with(&account_key, from_rlp)? {
|
||||
Some(acc) => acc,
|
||||
None => return Ok((Vec::new(), H256::new())),
|
||||
};
|
||||
|
@ -244,7 +244,7 @@ mod tests {
|
||||
]);
|
||||
|
||||
let encoded = ::rlp::encode(&block_traces);
|
||||
let decoded = ::rlp::decode(&encoded);
|
||||
let decoded = ::rlp::decode(&encoded).expect("error decoding block traces");
|
||||
assert_eq!(block_traces, decoded);
|
||||
}
|
||||
}
|
||||
|
@ -224,7 +224,7 @@ fn verify_uncles(header: &Header, bytes: &[u8], bc: &BlockProvider, engine: &Eth
|
||||
return Err(From::from(BlockError::UncleParentNotInChain(uncle_parent.hash())));
|
||||
}
|
||||
|
||||
let uncle_parent = uncle_parent.decode();
|
||||
let uncle_parent = uncle_parent.decode()?;
|
||||
verify_parent(&uncle, &uncle_parent, engine)?;
|
||||
engine.verify_block_family(&uncle, &uncle_parent)?;
|
||||
verified.insert(uncle.hash());
|
||||
@ -500,10 +500,9 @@ mod tests {
|
||||
// no existing tests need access to test, so having this not function
|
||||
// is fine.
|
||||
let client = ::client::TestBlockChainClient::default();
|
||||
|
||||
let parent = bc.block_header_data(header.parent_hash())
|
||||
.ok_or(BlockError::UnknownParent(header.parent_hash().clone()))?
|
||||
.decode();
|
||||
.decode()?;
|
||||
|
||||
let full_params = FullFamilyParams {
|
||||
block_bytes: bytes,
|
||||
|
@ -29,7 +29,6 @@ pub struct BlockView<'a> {
|
||||
rlp: ViewRlp<'a>
|
||||
}
|
||||
|
||||
|
||||
impl<'a> BlockView<'a> {
|
||||
/// Creates new view onto block from rlp.
|
||||
/// Use the `view!` macro to create this view in order to capture debugging info.
|
||||
@ -39,9 +38,9 @@ impl<'a> BlockView<'a> {
|
||||
/// ```
|
||||
/// #[macro_use]
|
||||
/// extern crate ethcore;
|
||||
///
|
||||
///
|
||||
/// use ethcore::views::{BlockView};
|
||||
///
|
||||
///
|
||||
/// fn main() {
|
||||
/// let bytes : &[u8] = &[];
|
||||
/// let block_view = view!(BlockView, bytes);
|
||||
|
@ -30,6 +30,7 @@ heapsize = "0.4"
|
||||
semver = "0.9"
|
||||
smallvec = { version = "0.4", features = ["heapsizeof"] }
|
||||
parking_lot = "0.5"
|
||||
trace-time = { path = "../../util/trace-time" }
|
||||
ipnetwork = "0.12.6"
|
||||
|
||||
[dev-dependencies]
|
||||
|
@ -33,7 +33,7 @@ use chain::{ChainSync, SyncStatus as EthSyncStatus};
|
||||
use std::net::{SocketAddr, AddrParseError};
|
||||
use std::str::FromStr;
|
||||
use parking_lot::RwLock;
|
||||
use chain::{ETH_PACKET_COUNT, SNAPSHOT_SYNC_PACKET_COUNT, ETH_PROTOCOL_VERSION_63, ETH_PROTOCOL_VERSION_62,
|
||||
use chain::{ETH_PROTOCOL_VERSION_63, ETH_PROTOCOL_VERSION_62,
|
||||
PAR_PROTOCOL_VERSION_1, PAR_PROTOCOL_VERSION_2, PAR_PROTOCOL_VERSION_3};
|
||||
use light::client::AsLightClient;
|
||||
use light::Provider;
|
||||
@ -202,10 +202,8 @@ pub struct AttachedProtocol {
	pub handler: Arc<NetworkProtocolHandler + Send + Sync>,
	/// 3-character ID for the protocol.
	pub protocol_id: ProtocolId,
	/// Packet count.
	pub packet_count: u8,
	/// Supported versions.
	pub versions: &'static [u8],
	/// Supported versions and their packet counts.
	pub versions: &'static [(u8, u8)],
}
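Folding the packet count into the version list is what lets peers on different PAR protocol versions advertise different counts (#8555). A sketch of the new shape; the numeric counts below are illustrative only, not the real values:

	// (protocol version, number of packet ids used by that version)
	const VERSIONS: &'static [(u8, u8)] = &[
		(2, 21),
		(3, 23),
	];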
|
||||
|
||||
impl AttachedProtocol {
|
||||
@ -213,7 +211,6 @@ impl AttachedProtocol {
|
||||
let res = network.register_protocol(
|
||||
self.handler.clone(),
|
||||
self.protocol_id,
|
||||
self.packet_count,
|
||||
self.versions
|
||||
);
|
||||
|
||||
@ -379,10 +376,12 @@ impl NetworkProtocolHandler for SyncProtocolHandler {
|
||||
}
|
||||
|
||||
fn read(&self, io: &NetworkContext, peer: &PeerId, packet_id: u8, data: &[u8]) {
|
||||
trace_time!("sync::read");
|
||||
ChainSync::dispatch_packet(&self.sync, &mut NetSyncIo::new(io, &*self.chain, &*self.snapshot_service, &self.overlay), *peer, packet_id, data);
|
||||
}
|
||||
|
||||
fn connected(&self, io: &NetworkContext, peer: &PeerId) {
|
||||
trace_time!("sync::connected");
|
||||
// If warp protocol is supported only allow warp handshake
|
||||
let warp_protocol = io.protocol_version(WARP_SYNC_PROTOCOL_ID, *peer).unwrap_or(0) != 0;
|
||||
let warp_context = io.subprotocol_name() == WARP_SYNC_PROTOCOL_ID;
|
||||
@ -392,12 +391,14 @@ impl NetworkProtocolHandler for SyncProtocolHandler {
|
||||
}
|
||||
|
||||
fn disconnected(&self, io: &NetworkContext, peer: &PeerId) {
|
||||
trace_time!("sync::disconnected");
|
||||
if io.subprotocol_name() != WARP_SYNC_PROTOCOL_ID {
|
||||
self.sync.write().on_peer_aborting(&mut NetSyncIo::new(io, &*self.chain, &*self.snapshot_service, &self.overlay), *peer);
|
||||
}
|
||||
}
|
||||
|
||||
fn timeout(&self, io: &NetworkContext, _timer: TimerToken) {
|
||||
trace_time!("sync::timeout");
|
||||
let mut io = NetSyncIo::new(io, &*self.chain, &*self.snapshot_service, &self.overlay);
|
||||
self.sync.write().maintain_peers(&mut io);
|
||||
self.sync.write().maintain_sync(&mut io);
|
||||
@ -456,15 +457,15 @@ impl ChainNotify for EthSync {
|
||||
Err(err) => warn!("Error starting network: {}", err),
|
||||
_ => {},
|
||||
}
|
||||
self.network.register_protocol(self.eth_handler.clone(), self.subprotocol_name, ETH_PACKET_COUNT, &[ETH_PROTOCOL_VERSION_62, ETH_PROTOCOL_VERSION_63])
|
||||
self.network.register_protocol(self.eth_handler.clone(), self.subprotocol_name, &[ETH_PROTOCOL_VERSION_62, ETH_PROTOCOL_VERSION_63])
|
||||
.unwrap_or_else(|e| warn!("Error registering ethereum protocol: {:?}", e));
|
||||
// register the warp sync subprotocol
|
||||
self.network.register_protocol(self.eth_handler.clone(), WARP_SYNC_PROTOCOL_ID, SNAPSHOT_SYNC_PACKET_COUNT, &[PAR_PROTOCOL_VERSION_1, PAR_PROTOCOL_VERSION_2, PAR_PROTOCOL_VERSION_3])
|
||||
self.network.register_protocol(self.eth_handler.clone(), WARP_SYNC_PROTOCOL_ID, &[PAR_PROTOCOL_VERSION_1, PAR_PROTOCOL_VERSION_2, PAR_PROTOCOL_VERSION_3])
|
||||
.unwrap_or_else(|e| warn!("Error registering snapshot sync protocol: {:?}", e));
|
||||
|
||||
// register the light protocol.
|
||||
if let Some(light_proto) = self.light_proto.as_ref().map(|x| x.clone()) {
|
||||
self.network.register_protocol(light_proto, self.light_subprotocol_name, ::light::net::PACKET_COUNT, ::light::net::PROTOCOL_VERSIONS)
|
||||
self.network.register_protocol(light_proto, self.light_subprotocol_name, ::light::net::PROTOCOL_VERSIONS)
|
||||
.unwrap_or_else(|e| warn!("Error registering light client protocol: {:?}", e));
|
||||
}
|
||||
|
||||
@ -824,7 +825,7 @@ impl ManageNetwork for LightSync {
|
||||
|
||||
let light_proto = self.proto.clone();
|
||||
|
||||
self.network.register_protocol(light_proto, self.subprotocol_name, ::light::net::PACKET_COUNT, ::light::net::PROTOCOL_VERSIONS)
|
||||
self.network.register_protocol(light_proto, self.subprotocol_name, ::light::net::PROTOCOL_VERSIONS)
|
||||
.unwrap_or_else(|e| warn!("Error registering light client protocol: {:?}", e));
|
||||
|
||||
for proto in &self.attached_protos { proto.register(&self.network) }
|
||||
|
@ -496,7 +496,7 @@ impl BlockDownloader {
|
||||
}
|
||||
|
||||
let result = if let Some(receipts) = receipts {
|
||||
io.chain().import_block_with_receipts(block, receipts)
|
||||
io.chain().queue_ancient_block(block, receipts)
|
||||
} else {
|
||||
io.chain().import_block(block)
|
||||
};
|
||||
|
ethcore/sync/src/chain/handler.rs (new file, 830 lines)
@ -0,0 +1,830 @@
|
||||
// Copyright 2015-2018 Parity Technologies (UK) Ltd.
|
||||
// This file is part of Parity.
|
||||
|
||||
// Parity is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
|
||||
// Parity is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
use api::WARP_SYNC_PROTOCOL_ID;
|
||||
use block_sync::{BlockDownloaderImportError as DownloaderImportError, DownloadAction};
|
||||
use bytes::Bytes;
|
||||
use ethcore::client::{BlockStatus, BlockId, BlockImportError, BlockImportErrorKind};
|
||||
use ethcore::error::*;
|
||||
use ethcore::header::{BlockNumber, Header as BlockHeader};
|
||||
use ethcore::snapshot::{ManifestData, RestorationStatus};
|
||||
use ethereum_types::{H256, U256};
|
||||
use hash::keccak;
|
||||
use network::PeerId;
|
||||
use rlp::Rlp;
|
||||
use snapshot::ChunkType;
|
||||
use std::cmp;
|
||||
use std::collections::HashSet;
|
||||
use std::time::Instant;
|
||||
use sync_io::SyncIo;
|
||||
|
||||
use super::{
|
||||
BlockSet,
|
||||
ChainSync,
|
||||
ForkConfirmation,
|
||||
PacketDecodeError,
|
||||
PeerAsking,
|
||||
PeerInfo,
|
||||
SyncRequester,
|
||||
SyncState,
|
||||
ETH_PROTOCOL_VERSION_62,
|
||||
ETH_PROTOCOL_VERSION_63,
|
||||
MAX_NEW_BLOCK_AGE,
|
||||
MAX_NEW_HASHES,
|
||||
PAR_PROTOCOL_VERSION_1,
|
||||
PAR_PROTOCOL_VERSION_3,
|
||||
BLOCK_BODIES_PACKET,
|
||||
BLOCK_HEADERS_PACKET,
|
||||
NEW_BLOCK_HASHES_PACKET,
|
||||
NEW_BLOCK_PACKET,
|
||||
PRIVATE_TRANSACTION_PACKET,
|
||||
RECEIPTS_PACKET,
|
||||
SIGNED_PRIVATE_TRANSACTION_PACKET,
|
||||
SNAPSHOT_DATA_PACKET,
|
||||
SNAPSHOT_MANIFEST_PACKET,
|
||||
STATUS_PACKET,
|
||||
TRANSACTIONS_PACKET,
|
||||
};
|
||||
|
||||
/// The Chain Sync Handler: handles responses from peers
|
||||
pub struct SyncHandler;
|
||||
|
||||
impl SyncHandler {
|
||||
/// Handle incoming packet from peer
|
||||
pub fn on_packet(sync: &mut ChainSync, io: &mut SyncIo, peer: PeerId, packet_id: u8, data: &[u8]) {
|
||||
if packet_id != STATUS_PACKET && !sync.peers.contains_key(&peer) {
|
||||
debug!(target:"sync", "Unexpected packet {} from unregistered peer: {}:{}", packet_id, peer, io.peer_info(peer));
|
||||
return;
|
||||
}
|
||||
let rlp = Rlp::new(data);
|
||||
let result = match packet_id {
|
||||
STATUS_PACKET => SyncHandler::on_peer_status(sync, io, peer, &rlp),
|
||||
TRANSACTIONS_PACKET => SyncHandler::on_peer_transactions(sync, io, peer, &rlp),
|
||||
BLOCK_HEADERS_PACKET => SyncHandler::on_peer_block_headers(sync, io, peer, &rlp),
|
||||
BLOCK_BODIES_PACKET => SyncHandler::on_peer_block_bodies(sync, io, peer, &rlp),
|
||||
RECEIPTS_PACKET => SyncHandler::on_peer_block_receipts(sync, io, peer, &rlp),
|
||||
NEW_BLOCK_PACKET => SyncHandler::on_peer_new_block(sync, io, peer, &rlp),
|
||||
NEW_BLOCK_HASHES_PACKET => SyncHandler::on_peer_new_hashes(sync, io, peer, &rlp),
|
||||
SNAPSHOT_MANIFEST_PACKET => SyncHandler::on_snapshot_manifest(sync, io, peer, &rlp),
|
||||
SNAPSHOT_DATA_PACKET => SyncHandler::on_snapshot_data(sync, io, peer, &rlp),
|
||||
PRIVATE_TRANSACTION_PACKET => SyncHandler::on_private_transaction(sync, io, peer, &rlp),
|
||||
SIGNED_PRIVATE_TRANSACTION_PACKET => SyncHandler::on_signed_private_transaction(sync, io, peer, &rlp),
|
||||
_ => {
|
||||
debug!(target: "sync", "{}: Unknown packet {}", peer, packet_id);
|
||||
Ok(())
|
||||
}
|
||||
};
|
||||
result.unwrap_or_else(|e| {
|
||||
debug!(target:"sync", "{} -> Malformed packet {} : {}", peer, packet_id, e);
|
||||
})
|
||||
}
|
||||
|
||||
/// Called when peer sends us new consensus packet
|
||||
pub fn on_consensus_packet(io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
trace!(target: "sync", "Received consensus packet from {:?}", peer_id);
|
||||
io.chain().queue_consensus_message(r.as_raw().to_vec());
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Called by peer when it is disconnecting
|
||||
pub fn on_peer_aborting(sync: &mut ChainSync, io: &mut SyncIo, peer: PeerId) {
|
||||
trace!(target: "sync", "== Disconnecting {}: {}", peer, io.peer_info(peer));
|
||||
sync.handshaking_peers.remove(&peer);
|
||||
if sync.peers.contains_key(&peer) {
|
||||
debug!(target: "sync", "Disconnected {}", peer);
|
||||
sync.clear_peer_download(peer);
|
||||
sync.peers.remove(&peer);
|
||||
sync.active_peers.remove(&peer);
|
||||
sync.continue_sync(io);
|
||||
}
|
||||
}
|
||||
|
||||
/// Called when a new peer is connected
|
||||
pub fn on_peer_connected(sync: &mut ChainSync, io: &mut SyncIo, peer: PeerId) {
|
||||
trace!(target: "sync", "== Connected {}: {}", peer, io.peer_info(peer));
|
||||
if let Err(e) = sync.send_status(io, peer) {
|
||||
debug!(target:"sync", "Error sending status request: {:?}", e);
|
||||
io.disconnect_peer(peer);
|
||||
} else {
|
||||
sync.handshaking_peers.insert(peer, Instant::now());
|
||||
}
|
||||
}
|
||||
|
||||
/// Called by peer once it has new block bodies
|
||||
pub fn on_peer_new_block(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
if !sync.peers.get(&peer_id).map_or(false, |p| p.can_sync()) {
|
||||
trace!(target: "sync", "Ignoring new block from unconfirmed peer {}", peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
let difficulty: U256 = r.val_at(1)?;
|
||||
if let Some(ref mut peer) = sync.peers.get_mut(&peer_id) {
|
||||
if peer.difficulty.map_or(true, |pd| difficulty > pd) {
|
||||
peer.difficulty = Some(difficulty);
|
||||
}
|
||||
}
|
||||
let block_rlp = r.at(0)?;
|
||||
let header_rlp = block_rlp.at(0)?;
|
||||
let h = keccak(&header_rlp.as_raw());
|
||||
trace!(target: "sync", "{} -> NewBlock ({})", peer_id, h);
|
||||
let header: BlockHeader = header_rlp.as_val()?;
|
||||
if header.number() > sync.highest_block.unwrap_or(0) {
|
||||
sync.highest_block = Some(header.number());
|
||||
}
|
||||
let mut unknown = false;
|
||||
{
|
||||
if let Some(ref mut peer) = sync.peers.get_mut(&peer_id) {
|
||||
peer.latest_hash = header.hash();
|
||||
}
|
||||
}
|
||||
let last_imported_number = sync.new_blocks.last_imported_block_number();
|
||||
if last_imported_number > header.number() && last_imported_number - header.number() > MAX_NEW_BLOCK_AGE {
|
||||
trace!(target: "sync", "Ignored ancient new block {:?}", h);
|
||||
io.disable_peer(peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
match io.chain().import_block(block_rlp.as_raw().to_vec()) {
|
||||
Err(BlockImportError(BlockImportErrorKind::Import(ImportErrorKind::AlreadyInChain), _)) => {
|
||||
trace!(target: "sync", "New block already in chain {:?}", h);
|
||||
},
|
||||
Err(BlockImportError(BlockImportErrorKind::Import(ImportErrorKind::AlreadyQueued), _)) => {
|
||||
trace!(target: "sync", "New block already queued {:?}", h);
|
||||
},
|
||||
Ok(_) => {
|
||||
// abort current download of the same block
|
||||
sync.complete_sync(io);
|
||||
sync.new_blocks.mark_as_known(&header.hash(), header.number());
|
||||
trace!(target: "sync", "New block queued {:?} ({})", h, header.number());
|
||||
},
|
||||
Err(BlockImportError(BlockImportErrorKind::Block(BlockError::UnknownParent(p)), _)) => {
|
||||
unknown = true;
|
||||
trace!(target: "sync", "New block with unknown parent ({:?}) {:?}", p, h);
|
||||
},
|
||||
Err(e) => {
|
||||
debug!(target: "sync", "Bad new block {:?} : {:?}", h, e);
|
||||
io.disable_peer(peer_id);
|
||||
}
|
||||
};
|
||||
if unknown {
|
||||
if sync.state != SyncState::Idle {
|
||||
trace!(target: "sync", "NewBlock ignored while seeking");
|
||||
} else {
|
||||
trace!(target: "sync", "New unknown block {:?}", h);
|
||||
//TODO: handle too many unknown blocks
|
||||
sync.sync_peer(io, peer_id, true);
|
||||
}
|
||||
}
|
||||
sync.continue_sync(io);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Handles `NewHashes` packet. Initiates headers download for any unknown hashes.
|
||||
pub fn on_peer_new_hashes(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
if !sync.peers.get(&peer_id).map_or(false, |p| p.can_sync()) {
|
||||
trace!(target: "sync", "Ignoring new hashes from unconfirmed peer {}", peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
let hashes: Vec<_> = r.iter().take(MAX_NEW_HASHES).map(|item| (item.val_at::<H256>(0), item.val_at::<BlockNumber>(1))).collect();
|
||||
if let Some(ref mut peer) = sync.peers.get_mut(&peer_id) {
|
||||
// Peer has new blocks with unknown difficulty
|
||||
peer.difficulty = None;
|
||||
if let Some(&(Ok(ref h), _)) = hashes.last() {
|
||||
peer.latest_hash = h.clone();
|
||||
}
|
||||
}
|
||||
if sync.state != SyncState::Idle {
|
||||
trace!(target: "sync", "Ignoring new hashes since we're already downloading.");
|
||||
let max = r.iter().take(MAX_NEW_HASHES).map(|item| item.val_at::<BlockNumber>(1).unwrap_or(0)).fold(0u64, cmp::max);
|
||||
if max > sync.highest_block.unwrap_or(0) {
|
||||
sync.highest_block = Some(max);
|
||||
}
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
}
|
||||
trace!(target: "sync", "{} -> NewHashes ({} entries)", peer_id, r.item_count()?);
|
||||
let mut max_height: BlockNumber = 0;
|
||||
let mut new_hashes = Vec::new();
|
||||
let last_imported_number = sync.new_blocks.last_imported_block_number();
|
||||
for (rh, rn) in hashes {
|
||||
let hash = rh?;
|
||||
let number = rn?;
|
||||
if number > sync.highest_block.unwrap_or(0) {
|
||||
sync.highest_block = Some(number);
|
||||
}
|
||||
if sync.new_blocks.is_downloading(&hash) {
|
||||
continue;
|
||||
}
|
||||
if last_imported_number > number && last_imported_number - number > MAX_NEW_BLOCK_AGE {
|
||||
trace!(target: "sync", "Ignored ancient new block hash {:?}", hash);
|
||||
io.disable_peer(peer_id);
|
||||
continue;
|
||||
}
|
||||
match io.chain().block_status(BlockId::Hash(hash.clone())) {
|
||||
BlockStatus::InChain => {
|
||||
trace!(target: "sync", "New block hash already in chain {:?}", hash);
|
||||
},
|
||||
BlockStatus::Queued => {
|
||||
trace!(target: "sync", "New hash block already queued {:?}", hash);
|
||||
},
|
||||
BlockStatus::Unknown | BlockStatus::Pending => {
|
||||
new_hashes.push(hash.clone());
|
||||
if number > max_height {
|
||||
trace!(target: "sync", "New unknown block hash {:?}", hash);
|
||||
if let Some(ref mut peer) = sync.peers.get_mut(&peer_id) {
|
||||
peer.latest_hash = hash.clone();
|
||||
}
|
||||
max_height = number;
|
||||
}
|
||||
},
|
||||
BlockStatus::Bad => {
|
||||
debug!(target: "sync", "Bad new block hash {:?}", hash);
|
||||
io.disable_peer(peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
}
|
||||
};
|
||||
if max_height != 0 {
|
||||
trace!(target: "sync", "Downloading blocks for new hashes");
|
||||
sync.new_blocks.reset_to(new_hashes);
|
||||
sync.state = SyncState::NewBlocks;
|
||||
sync.sync_peer(io, peer_id, true);
|
||||
}
|
||||
sync.continue_sync(io);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Called by peer once it has new block bodies
|
||||
fn on_peer_block_bodies(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
sync.clear_peer_download(peer_id);
|
||||
let block_set = sync.peers.get(&peer_id).and_then(|p| p.block_set).unwrap_or(BlockSet::NewBlocks);
|
||||
if !sync.reset_peer_asking(peer_id, PeerAsking::BlockBodies) {
|
||||
trace!(target: "sync", "{}: Ignored unexpected bodies", peer_id);
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
}
|
||||
let item_count = r.item_count()?;
|
||||
trace!(target: "sync", "{} -> BlockBodies ({} entries), set = {:?}", peer_id, item_count, block_set);
|
||||
if item_count == 0 {
|
||||
sync.deactivate_peer(io, peer_id);
|
||||
}
|
||||
else if sync.state == SyncState::Waiting {
|
||||
trace!(target: "sync", "Ignored block bodies while waiting");
|
||||
}
|
||||
else
|
||||
{
|
||||
let result = {
|
||||
let downloader = match block_set {
|
||||
BlockSet::NewBlocks => &mut sync.new_blocks,
|
||||
BlockSet::OldBlocks => match sync.old_blocks {
|
||||
None => {
|
||||
trace!(target: "sync", "Ignored block headers while block download is inactive");
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
},
|
||||
Some(ref mut blocks) => blocks,
|
||||
}
|
||||
};
|
||||
downloader.import_bodies(io, r)
|
||||
};
|
||||
|
||||
match result {
|
||||
Err(DownloaderImportError::Invalid) => {
|
||||
io.disable_peer(peer_id);
|
||||
sync.deactivate_peer(io, peer_id);
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
},
|
||||
Err(DownloaderImportError::Useless) => {
|
||||
sync.deactivate_peer(io, peer_id);
|
||||
},
|
||||
Ok(()) => (),
|
||||
}
|
||||
|
||||
sync.collect_blocks(io, block_set);
|
||||
sync.sync_peer(io, peer_id, false);
|
||||
}
|
||||
sync.continue_sync(io);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn on_peer_confirmed(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId) {
|
||||
sync.sync_peer(io, peer_id, false);
|
||||
}
|
||||
|
||||
fn on_peer_fork_header(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
{
|
||||
let peer = sync.peers.get_mut(&peer_id).expect("Is only called when peer is present in peers");
|
||||
peer.asking = PeerAsking::Nothing;
|
||||
let item_count = r.item_count()?;
|
||||
let (fork_number, fork_hash) = sync.fork_block.expect("ForkHeader request is sent only fork block is Some; qed").clone();
|
||||
|
||||
if item_count == 0 || item_count != 1 {
|
||||
trace!(target: "sync", "{}: Chain is too short to confirm the block", peer_id);
|
||||
io.disable_peer(peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
let header = r.at(0)?.as_raw();
|
||||
if keccak(&header) != fork_hash {
|
||||
trace!(target: "sync", "{}: Fork mismatch", peer_id);
|
||||
io.disable_peer(peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
trace!(target: "sync", "{}: Confirmed peer", peer_id);
|
||||
peer.confirmation = ForkConfirmation::Confirmed;
|
||||
if !io.chain_overlay().read().contains_key(&fork_number) {
|
||||
io.chain_overlay().write().insert(fork_number, header.to_vec());
|
||||
}
|
||||
}
|
||||
SyncHandler::on_peer_confirmed(sync, io, peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
/// Called by peer once it has new block headers during sync
|
||||
fn on_peer_block_headers(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
let is_fork_header_request = match sync.peers.get(&peer_id) {
|
||||
Some(peer) if peer.asking == PeerAsking::ForkHeader => true,
|
||||
_ => false,
|
||||
};
|
||||
|
||||
if is_fork_header_request {
|
||||
return SyncHandler::on_peer_fork_header(sync, io, peer_id, r);
|
||||
}
|
||||
|
||||
sync.clear_peer_download(peer_id);
|
||||
let expected_hash = sync.peers.get(&peer_id).and_then(|p| p.asking_hash);
|
||||
let allowed = sync.peers.get(&peer_id).map(|p| p.is_allowed()).unwrap_or(false);
|
||||
let block_set = sync.peers.get(&peer_id).and_then(|p| p.block_set).unwrap_or(BlockSet::NewBlocks);
|
||||
if !sync.reset_peer_asking(peer_id, PeerAsking::BlockHeaders) || expected_hash.is_none() || !allowed {
|
||||
trace!(target: "sync", "{}: Ignored unexpected headers, expected_hash = {:?}", peer_id, expected_hash);
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
}
|
||||
let item_count = r.item_count()?;
|
||||
trace!(target: "sync", "{} -> BlockHeaders ({} entries), state = {:?}, set = {:?}", peer_id, item_count, sync.state, block_set);
|
||||
if (sync.state == SyncState::Idle || sync.state == SyncState::WaitingPeers) && sync.old_blocks.is_none() {
|
||||
trace!(target: "sync", "Ignored unexpected block headers");
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
}
|
||||
if sync.state == SyncState::Waiting {
|
||||
trace!(target: "sync", "Ignored block headers while waiting");
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
let result = {
|
||||
let downloader = match block_set {
|
||||
BlockSet::NewBlocks => &mut sync.new_blocks,
|
||||
BlockSet::OldBlocks => {
|
||||
match sync.old_blocks {
|
||||
None => {
|
||||
trace!(target: "sync", "Ignored block headers while block download is inactive");
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
},
|
||||
Some(ref mut blocks) => blocks,
|
||||
}
|
||||
}
|
||||
};
|
||||
downloader.import_headers(io, r, expected_hash)
|
||||
};
|
||||
|
||||
match result {
|
||||
Err(DownloaderImportError::Useless) => {
|
||||
sync.deactivate_peer(io, peer_id);
|
||||
},
|
||||
Err(DownloaderImportError::Invalid) => {
|
||||
io.disable_peer(peer_id);
|
||||
sync.deactivate_peer(io, peer_id);
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
},
|
||||
Ok(DownloadAction::Reset) => {
|
||||
// mark all outstanding requests as expired
|
||||
trace!("Resetting downloads for {:?}", block_set);
|
||||
for (_, ref mut p) in sync.peers.iter_mut().filter(|&(_, ref p)| p.block_set == Some(block_set)) {
|
||||
p.reset_asking();
|
||||
}
|
||||
|
||||
}
|
||||
Ok(DownloadAction::None) => {},
|
||||
}
|
||||
|
||||
sync.collect_blocks(io, block_set);
|
||||
// give a task to the same peer first if received valuable headers.
|
||||
sync.sync_peer(io, peer_id, false);
|
||||
// give tasks to other peers
|
||||
sync.continue_sync(io);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Called by peer once it has new block receipts
|
||||
fn on_peer_block_receipts(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
sync.clear_peer_download(peer_id);
|
||||
let block_set = sync.peers.get(&peer_id).and_then(|p| p.block_set).unwrap_or(BlockSet::NewBlocks);
|
||||
if !sync.reset_peer_asking(peer_id, PeerAsking::BlockReceipts) {
|
||||
trace!(target: "sync", "{}: Ignored unexpected receipts", peer_id);
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
}
|
||||
let item_count = r.item_count()?;
|
||||
trace!(target: "sync", "{} -> BlockReceipts ({} entries)", peer_id, item_count);
|
||||
if item_count == 0 {
|
||||
sync.deactivate_peer(io, peer_id);
|
||||
} else if sync.state == SyncState::Waiting {
|
||||
trace!(target: "sync", "Ignored block receipts while waiting");
|
||||
} else {
|
||||
let result = {
|
||||
let downloader = match block_set {
|
||||
BlockSet::NewBlocks => &mut sync.new_blocks,
|
||||
BlockSet::OldBlocks => match sync.old_blocks {
|
||||
None => {
|
||||
trace!(target: "sync", "Ignored block headers while block download is inactive");
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
},
|
||||
Some(ref mut blocks) => blocks,
|
||||
}
|
||||
};
|
||||
downloader.import_receipts(io, r)
|
||||
};
|
||||
|
||||
match result {
|
||||
Err(DownloaderImportError::Invalid) => {
|
||||
io.disable_peer(peer_id);
|
||||
sync.deactivate_peer(io, peer_id);
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
},
|
||||
Err(DownloaderImportError::Useless) => {
|
||||
sync.deactivate_peer(io, peer_id);
|
||||
},
|
||||
Ok(()) => (),
|
||||
}
|
||||
|
||||
sync.collect_blocks(io, block_set);
|
||||
sync.sync_peer(io, peer_id, false);
|
||||
}
|
||||
sync.continue_sync(io);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Called when snapshot manifest is downloaded from a peer.
|
||||
fn on_snapshot_manifest(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
if !sync.peers.get(&peer_id).map_or(false, |p| p.can_sync()) {
|
||||
trace!(target: "sync", "Ignoring snapshot manifest from unconfirmed peer {}", peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
sync.clear_peer_download(peer_id);
|
||||
if !sync.reset_peer_asking(peer_id, PeerAsking::SnapshotManifest) || sync.state != SyncState::SnapshotManifest {
|
||||
trace!(target: "sync", "{}: Ignored unexpected/expired manifest", peer_id);
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
let manifest_rlp = r.at(0)?;
|
||||
let manifest = match ManifestData::from_rlp(manifest_rlp.as_raw()) {
|
||||
Err(e) => {
|
||||
trace!(target: "sync", "{}: Ignored bad manifest: {:?}", peer_id, e);
|
||||
io.disable_peer(peer_id);
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
}
|
||||
Ok(manifest) => manifest,
|
||||
};
|
||||
|
||||
let is_supported_version = io.snapshot_service().supported_versions()
|
||||
.map_or(false, |(l, h)| manifest.version >= l && manifest.version <= h);
|
||||
|
||||
if !is_supported_version {
|
||||
trace!(target: "sync", "{}: Snapshot manifest version not supported: {}", peer_id, manifest.version);
|
||||
io.disable_peer(peer_id);
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
}
|
||||
sync.snapshot.reset_to(&manifest, &keccak(manifest_rlp.as_raw()));
|
||||
io.snapshot_service().begin_restore(manifest);
|
||||
sync.state = SyncState::SnapshotData;
|
||||
|
||||
// give a task to the same peer first.
|
||||
sync.sync_peer(io, peer_id, false);
|
||||
// give tasks to other peers
|
||||
sync.continue_sync(io);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Called when snapshot data is downloaded from a peer.
|
||||
fn on_snapshot_data(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
if !sync.peers.get(&peer_id).map_or(false, |p| p.can_sync()) {
|
||||
trace!(target: "sync", "Ignoring snapshot data from unconfirmed peer {}", peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
sync.clear_peer_download(peer_id);
|
||||
if !sync.reset_peer_asking(peer_id, PeerAsking::SnapshotData) || (sync.state != SyncState::SnapshotData && sync.state != SyncState::SnapshotWaiting) {
|
||||
trace!(target: "sync", "{}: Ignored unexpected snapshot data", peer_id);
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
// check service status
|
||||
let status = io.snapshot_service().status();
|
||||
match status {
|
||||
RestorationStatus::Inactive | RestorationStatus::Failed => {
|
||||
trace!(target: "sync", "{}: Snapshot restoration aborted", peer_id);
|
||||
sync.state = SyncState::WaitingPeers;
|
||||
|
||||
// only note bad if restoration failed.
|
||||
if let (Some(hash), RestorationStatus::Failed) = (sync.snapshot.snapshot_hash(), status) {
|
||||
trace!(target: "sync", "Noting snapshot hash {} as bad", hash);
|
||||
sync.snapshot.note_bad(hash);
|
||||
}
|
||||
|
||||
sync.snapshot.clear();
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
},
|
||||
RestorationStatus::Ongoing { .. } => {
|
||||
trace!(target: "sync", "{}: Snapshot restoration is ongoing", peer_id);
|
||||
},
|
||||
}
|
||||
|
||||
let snapshot_data: Bytes = r.val_at(0)?;
|
||||
match sync.snapshot.validate_chunk(&snapshot_data) {
|
||||
Ok(ChunkType::Block(hash)) => {
|
||||
trace!(target: "sync", "{}: Processing block chunk", peer_id);
|
||||
io.snapshot_service().restore_block_chunk(hash, snapshot_data);
|
||||
}
|
||||
Ok(ChunkType::State(hash)) => {
|
||||
trace!(target: "sync", "{}: Processing state chunk", peer_id);
|
||||
io.snapshot_service().restore_state_chunk(hash, snapshot_data);
|
||||
}
|
||||
Err(()) => {
|
||||
trace!(target: "sync", "{}: Got bad snapshot chunk", peer_id);
|
||||
io.disconnect_peer(peer_id);
|
||||
sync.continue_sync(io);
|
||||
return Ok(());
|
||||
}
|
||||
}
|
||||
|
||||
if sync.snapshot.is_complete() {
|
||||
// wait for snapshot restoration process to complete
|
||||
sync.state = SyncState::SnapshotWaiting;
|
||||
}
|
||||
// give a task to the same peer first.
|
||||
sync.sync_peer(io, peer_id, false);
|
||||
// give tasks to other peers
|
||||
sync.continue_sync(io);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Called when a peer reports its status
|
||||
fn on_peer_status(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
sync.handshaking_peers.remove(&peer_id);
|
||||
let protocol_version: u8 = r.val_at(0)?;
|
||||
let warp_protocol = io.protocol_version(&WARP_SYNC_PROTOCOL_ID, peer_id) != 0;
|
||||
let peer = PeerInfo {
|
||||
protocol_version: protocol_version,
|
||||
network_id: r.val_at(1)?,
|
||||
difficulty: Some(r.val_at(2)?),
|
||||
latest_hash: r.val_at(3)?,
|
||||
genesis: r.val_at(4)?,
|
||||
asking: PeerAsking::Nothing,
|
||||
asking_blocks: Vec::new(),
|
||||
asking_hash: None,
|
||||
ask_time: Instant::now(),
|
||||
last_sent_transactions: HashSet::new(),
|
||||
expired: false,
|
||||
confirmation: if sync.fork_block.is_none() { ForkConfirmation::Confirmed } else { ForkConfirmation::Unconfirmed },
|
||||
asking_snapshot_data: None,
|
||||
snapshot_hash: if warp_protocol { Some(r.val_at(5)?) } else { None },
|
||||
snapshot_number: if warp_protocol { Some(r.val_at(6)?) } else { None },
|
||||
block_set: None,
|
||||
};
|
||||
|
||||
trace!(target: "sync", "New peer {} (protocol: {}, network: {:?}, difficulty: {:?}, latest:{}, genesis:{}, snapshot:{:?})",
|
||||
peer_id, peer.protocol_version, peer.network_id, peer.difficulty, peer.latest_hash, peer.genesis, peer.snapshot_number);
|
||||
if io.is_expired() {
|
||||
trace!(target: "sync", "Status packet from expired session {}:{}", peer_id, io.peer_info(peer_id));
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
if sync.peers.contains_key(&peer_id) {
|
||||
debug!(target: "sync", "Unexpected status packet from {}:{}", peer_id, io.peer_info(peer_id));
|
||||
return Ok(());
|
||||
}
|
||||
let chain_info = io.chain().chain_info();
|
||||
if peer.genesis != chain_info.genesis_hash {
|
||||
io.disable_peer(peer_id);
|
||||
trace!(target: "sync", "Peer {} genesis hash mismatch (ours: {}, theirs: {})", peer_id, chain_info.genesis_hash, peer.genesis);
|
||||
return Ok(());
|
||||
}
|
||||
if peer.network_id != sync.network_id {
|
||||
io.disable_peer(peer_id);
|
||||
trace!(target: "sync", "Peer {} network id mismatch (ours: {}, theirs: {})", peer_id, sync.network_id, peer.network_id);
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
if false
|
||||
|| (warp_protocol && (peer.protocol_version < PAR_PROTOCOL_VERSION_1.0 || peer.protocol_version > PAR_PROTOCOL_VERSION_3.0))
|
||||
|| (!warp_protocol && (peer.protocol_version < ETH_PROTOCOL_VERSION_62.0 || peer.protocol_version > ETH_PROTOCOL_VERSION_63.0))
|
||||
{
|
||||
io.disable_peer(peer_id);
|
||||
trace!(target: "sync", "Peer {} unsupported eth protocol ({})", peer_id, peer.protocol_version);
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
if sync.sync_start_time.is_none() {
|
||||
sync.sync_start_time = Some(Instant::now());
|
||||
}
|
||||
|
||||
sync.peers.insert(peer_id.clone(), peer);
|
||||
// Don't activate peer immediately when searching for common block.
|
||||
// Let the current sync round complete first.
|
||||
sync.active_peers.insert(peer_id.clone());
|
||||
debug!(target: "sync", "Connected {}:{}", peer_id, io.peer_info(peer_id));
|
||||
if let Some((fork_block, _)) = sync.fork_block {
|
||||
SyncRequester::request_fork_header(sync, io, peer_id, fork_block);
|
||||
} else {
|
||||
SyncHandler::on_peer_confirmed(sync, io, peer_id);
|
||||
}
|
||||
Ok(())
|
||||
}
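// Editorial sketch (not part of the diff): the `if false || ... || ...` gate above
// is easier to read as a small predicate. The numeric bounds below are assumed
// stand-ins for the PAR_PROTOCOL_VERSION_* / ETH_PROTOCOL_VERSION_* tuples
// defined in chain/mod.rs.
const PAR_MIN_VERSION: u8 = 1;
const PAR_MAX_VERSION: u8 = 3;
const ETH_MIN_VERSION: u8 = 62;
const ETH_MAX_VERSION: u8 = 63;

// A peer is acceptable when its advertised version lies inside the range for the
// protocol it is actually speaking (warp/PAR versus plain eth).
fn version_supported(protocol_version: u8, warp_protocol: bool) -> bool {
    if warp_protocol {
        protocol_version >= PAR_MIN_VERSION && protocol_version <= PAR_MAX_VERSION
    } else {
        protocol_version >= ETH_MIN_VERSION && protocol_version <= ETH_MAX_VERSION
    }
}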
|
||||
|
||||
/// Called when peer sends us new transactions
|
||||
fn on_peer_transactions(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
// Accept transactions only when fully synced
|
||||
if !io.is_chain_queue_empty() || (sync.state != SyncState::Idle && sync.state != SyncState::NewBlocks) {
|
||||
trace!(target: "sync", "{} Ignoring transactions while syncing", peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
if !sync.peers.get(&peer_id).map_or(false, |p| p.can_sync()) {
|
||||
trace!(target: "sync", "{} Ignoring transactions from unconfirmed/unknown peer", peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
let item_count = r.item_count()?;
|
||||
trace!(target: "sync", "{:02} -> Transactions ({} entries)", peer_id, item_count);
|
||||
let mut transactions = Vec::with_capacity(item_count);
|
||||
for i in 0 .. item_count {
|
||||
let rlp = r.at(i)?;
|
||||
let tx = rlp.as_raw().to_vec();
|
||||
transactions.push(tx);
|
||||
}
|
||||
io.chain().queue_transactions(transactions, peer_id);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Called when peer sends us signed private transaction packet
|
||||
fn on_signed_private_transaction(sync: &ChainSync, _io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
if !sync.peers.get(&peer_id).map_or(false, |p| p.can_sync()) {
|
||||
trace!(target: "sync", "{} Ignoring packet from unconfirmed/unknown peer", peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
trace!(target: "sync", "Received signed private transaction packet from {:?}", peer_id);
|
||||
if let Err(e) = sync.private_tx_handler.import_signed_private_transaction(r.as_raw()) {
|
||||
trace!(target: "sync", "Ignoring the message, error queueing: {}", e);
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Called when peer sends us new private transaction packet
|
||||
fn on_private_transaction(sync: &ChainSync, _io: &mut SyncIo, peer_id: PeerId, r: &Rlp) -> Result<(), PacketDecodeError> {
|
||||
if !sync.peers.get(&peer_id).map_or(false, |p| p.can_sync()) {
|
||||
trace!(target: "sync", "{} Ignoring packet from unconfirmed/unknown peer", peer_id);
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
trace!(target: "sync", "Received private transaction packet from {:?}", peer_id);
|
||||
|
||||
if let Err(e) = sync.private_tx_handler.import_private_transaction(r.as_raw()) {
|
||||
trace!(target: "sync", "Ignoring the message, error queueing: {}", e);
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use ethcore::client::{ChainInfo, EachBlockWith, TestBlockChainClient};
|
||||
use parking_lot::RwLock;
|
||||
use rlp::{Rlp};
|
||||
use std::collections::{VecDeque};
|
||||
use tests::helpers::{TestIo};
|
||||
use tests::snapshot::TestSnapshotService;
|
||||
|
||||
use super::*;
|
||||
use super::super::tests::{
|
||||
dummy_sync_with_peer,
|
||||
get_dummy_block,
|
||||
get_dummy_blocks,
|
||||
get_dummy_hashes,
|
||||
};
|
||||
|
||||
#[test]
|
||||
fn handles_peer_new_hashes() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(10, EachBlockWith::Uncle);
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(5), &client);
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
|
||||
let hashes_data = get_dummy_hashes();
|
||||
let hashes_rlp = Rlp::new(&hashes_data);
|
||||
|
||||
let result = SyncHandler::on_peer_new_hashes(&mut sync, &mut io, 0, &hashes_rlp);
|
||||
|
||||
assert!(result.is_ok());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn handles_peer_new_block_malformed() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(10, EachBlockWith::Uncle);
|
||||
|
||||
let block_data = get_dummy_block(11, client.chain_info().best_block_hash);
|
||||
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(5), &client);
|
||||
//sync.have_common_block = true;
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
|
||||
let block = Rlp::new(&block_data);
|
||||
|
||||
let result = SyncHandler::on_peer_new_block(&mut sync, &mut io, 0, &block);
|
||||
|
||||
assert!(result.is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn handles_peer_new_block() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(10, EachBlockWith::Uncle);
|
||||
|
||||
let block_data = get_dummy_blocks(11, client.chain_info().best_block_hash);
|
||||
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(5), &client);
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
|
||||
let block = Rlp::new(&block_data);
|
||||
|
||||
let result = SyncHandler::on_peer_new_block(&mut sync, &mut io, 0, &block);
|
||||
|
||||
assert!(result.is_ok());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn handles_peer_new_block_empty() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(10, EachBlockWith::Uncle);
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(5), &client);
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
|
||||
let empty_data = vec![];
|
||||
let block = Rlp::new(&empty_data);
|
||||
|
||||
let result = SyncHandler::on_peer_new_block(&mut sync, &mut io, 0, &block);
|
||||
|
||||
assert!(result.is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn handles_peer_new_hashes_empty() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(10, EachBlockWith::Uncle);
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(5), &client);
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
|
||||
let empty_hashes_data = vec![];
|
||||
let hashes_rlp = Rlp::new(&empty_hashes_data);
|
||||
|
||||
let result = SyncHandler::on_peer_new_hashes(&mut sync, &mut io, 0, &hashes_rlp);
|
||||
|
||||
assert!(result.is_ok());
|
||||
}
|
||||
}
|
ethcore/sync/src/chain/mod.rs (new file, 1375 lines): file diff suppressed because it is too large

ethcore/sync/src/chain/propagator.rs (new file, 636 lines)
@@ -0,0 +1,636 @@
|
||||
// Copyright 2015-2018 Parity Technologies (UK) Ltd.
|
||||
// This file is part of Parity.
|
||||
|
||||
// Parity is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
|
||||
// Parity is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
use bytes::Bytes;
|
||||
use ethereum_types::H256;
|
||||
use ethcore::client::BlockChainInfo;
|
||||
use ethcore::header::BlockNumber;
|
||||
use network::{PeerId, PacketId};
|
||||
use rand::Rng;
|
||||
use rlp::{Encodable, RlpStream};
|
||||
use sync_io::SyncIo;
|
||||
use std::cmp;
|
||||
use std::collections::HashSet;
|
||||
use transaction::SignedTransaction;
|
||||
|
||||
use super::{
|
||||
random,
|
||||
ChainSync,
|
||||
MAX_PEER_LAG_PROPAGATION,
|
||||
MAX_PEERS_PROPAGATION,
|
||||
MAX_TRANSACTION_PACKET_SIZE,
|
||||
MAX_TRANSACTIONS_TO_PROPAGATE,
|
||||
MIN_PEERS_PROPAGATION,
|
||||
CONSENSUS_DATA_PACKET,
|
||||
NEW_BLOCK_HASHES_PACKET,
|
||||
NEW_BLOCK_PACKET,
|
||||
PRIVATE_TRANSACTION_PACKET,
|
||||
SIGNED_PRIVATE_TRANSACTION_PACKET,
|
||||
TRANSACTIONS_PACKET,
|
||||
};
|
||||
|
||||
/// Checks if peer is able to process service transactions
|
||||
fn accepts_service_transaction(client_id: &str) -> bool {
|
||||
// Parity versions starting from this will accept service-transactions
|
||||
const SERVICE_TRANSACTIONS_VERSION: (u32, u32) = (1u32, 6u32);
|
||||
// Parity client string prefix
|
||||
const PARITY_CLIENT_ID_PREFIX: &'static str = "Parity/v";
|
||||
|
||||
if !client_id.starts_with(PARITY_CLIENT_ID_PREFIX) {
|
||||
return false;
|
||||
}
|
||||
let ver: Vec<u32> = client_id[PARITY_CLIENT_ID_PREFIX.len()..].split('.')
|
||||
.take(2)
|
||||
.filter_map(|s| s.parse().ok())
|
||||
.collect();
|
||||
ver.len() == 2 && (ver[0] > SERVICE_TRANSACTIONS_VERSION.0 || (ver[0] == SERVICE_TRANSACTIONS_VERSION.0 && ver[1] >= SERVICE_TRANSACTIONS_VERSION.1))
|
||||
}
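// Editorial sketch (not part of the diff): how the parser above treats a few
// client-ID strings, mirroring the peer strings used in the propagator tests
// further down. Written as a standalone illustration of the expected behaviour.
#[test]
fn accepts_service_transaction_sketch() {
    // At or above Parity v1.6: accepted.
    assert!(accepts_service_transaction("Parity/v1.6.0"));
    assert!(accepts_service_transaction("Parity/v1.7.3-ABCDEFGH"));
    assert!(accepts_service_transaction("Parity/v2.0.1"));
    // Older Parity or a non-Parity client: rejected.
    assert!(!accepts_service_transaction("Parity/v1.5.12"));
    assert!(!accepts_service_transaction("Geth/v1.8.2"));
}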
|
||||
|
||||
/// The Chain Sync Propagator: propagates data to peers
|
||||
pub struct SyncPropagator;
|
||||
|
||||
impl SyncPropagator {
|
||||
/// propagates latest block to a set of peers
|
||||
pub fn propagate_blocks(sync: &mut ChainSync, chain_info: &BlockChainInfo, io: &mut SyncIo, blocks: &[H256], peers: &[PeerId]) -> usize {
|
||||
trace!(target: "sync", "Sending NewBlocks to {:?}", peers);
|
||||
let mut sent = 0;
|
||||
for peer_id in peers {
|
||||
if blocks.is_empty() {
|
||||
let rlp = ChainSync::create_latest_block_rlp(io.chain());
|
||||
SyncPropagator::send_packet(io, *peer_id, NEW_BLOCK_PACKET, rlp);
|
||||
} else {
|
||||
for h in blocks {
|
||||
let rlp = ChainSync::create_new_block_rlp(io.chain(), h);
|
||||
SyncPropagator::send_packet(io, *peer_id, NEW_BLOCK_PACKET, rlp);
|
||||
}
|
||||
}
|
||||
if let Some(ref mut peer) = sync.peers.get_mut(peer_id) {
|
||||
peer.latest_hash = chain_info.best_block_hash.clone();
|
||||
}
|
||||
sent += 1;
|
||||
}
|
||||
sent
|
||||
}
|
||||
|
||||
/// propagates new known hashes to all peers
|
||||
pub fn propagate_new_hashes(sync: &mut ChainSync, chain_info: &BlockChainInfo, io: &mut SyncIo, peers: &[PeerId]) -> usize {
|
||||
trace!(target: "sync", "Sending NewHashes to {:?}", peers);
|
||||
let mut sent = 0;
|
||||
let last_parent = *io.chain().best_block_header().parent_hash();
|
||||
for peer_id in peers {
|
||||
sent += match ChainSync::create_new_hashes_rlp(io.chain(), &last_parent, &chain_info.best_block_hash) {
|
||||
Some(rlp) => {
|
||||
{
|
||||
if let Some(ref mut peer) = sync.peers.get_mut(peer_id) {
|
||||
peer.latest_hash = chain_info.best_block_hash.clone();
|
||||
}
|
||||
}
|
||||
SyncPropagator::send_packet(io, *peer_id, NEW_BLOCK_HASHES_PACKET, rlp);
|
||||
1
|
||||
},
|
||||
None => 0
|
||||
}
|
||||
}
|
||||
sent
|
||||
}
|
||||
|
||||
/// propagates new transactions to all peers
|
||||
pub fn propagate_new_transactions(sync: &mut ChainSync, io: &mut SyncIo) -> usize {
|
||||
// Early out if nobody to send to.
|
||||
if sync.peers.is_empty() {
|
||||
return 0;
|
||||
}
|
||||
|
||||
let transactions = io.chain().ready_transactions();
|
||||
if transactions.is_empty() {
|
||||
return 0;
|
||||
}
|
||||
|
||||
let (transactions, service_transactions): (Vec<_>, Vec<_>) = transactions.iter()
|
||||
.map(|tx| tx.signed())
|
||||
.partition(|tx| !tx.gas_price.is_zero());
|
||||
|
||||
// usual transactions could be propagated to all peers
|
||||
let mut affected_peers = HashSet::new();
|
||||
if !transactions.is_empty() {
|
||||
let peers = SyncPropagator::select_peers_for_transactions(sync, |_| true);
|
||||
affected_peers = SyncPropagator::propagate_transactions_to_peers(sync, io, peers, transactions);
|
||||
}
|
||||
|
||||
// most of the time service_transactions will be empty
|
||||
// => there's no need to merge packets
|
||||
if !service_transactions.is_empty() {
|
||||
let service_transactions_peers = SyncPropagator::select_peers_for_transactions(sync, |peer_id| accepts_service_transaction(&io.peer_info(*peer_id)));
|
||||
let service_transactions_affected_peers = SyncPropagator::propagate_transactions_to_peers(sync, io, service_transactions_peers, service_transactions);
|
||||
affected_peers.extend(&service_transactions_affected_peers);
|
||||
}
|
||||
|
||||
affected_peers.len()
|
||||
}
|
||||
|
||||
fn propagate_transactions_to_peers(sync: &mut ChainSync, io: &mut SyncIo, peers: Vec<PeerId>, transactions: Vec<&SignedTransaction>) -> HashSet<PeerId> {
|
||||
let all_transactions_hashes = transactions.iter()
|
||||
.map(|tx| tx.hash())
|
||||
.collect::<HashSet<H256>>();
|
||||
let all_transactions_rlp = {
|
||||
let mut packet = RlpStream::new_list(transactions.len());
|
||||
for tx in &transactions { packet.append(&**tx); }
|
||||
packet.out()
|
||||
};
|
||||
|
||||
// Clear old transactions from stats
|
||||
sync.transactions_stats.retain(&all_transactions_hashes);
|
||||
|
||||
// Best block number, used when recording propagation stats.
|
||||
let block_number = io.chain().chain_info().best_block_number;
|
||||
|
||||
let lucky_peers = {
|
||||
peers.into_iter()
|
||||
.filter_map(|peer_id| {
|
||||
let stats = &mut sync.transactions_stats;
|
||||
let peer_info = sync.peers.get_mut(&peer_id)
|
||||
.expect("peer_id is form peers; peers is result of select_peers_for_transactions; select_peers_for_transactions selects peers from self.peers; qed");
|
||||
|
||||
// Send all transactions
|
||||
if peer_info.last_sent_transactions.is_empty() {
|
||||
// update stats
|
||||
for hash in &all_transactions_hashes {
|
||||
let id = io.peer_session_info(peer_id).and_then(|info| info.id);
|
||||
stats.propagated(hash, id, block_number);
|
||||
}
|
||||
peer_info.last_sent_transactions = all_transactions_hashes.clone();
|
||||
return Some((peer_id, all_transactions_hashes.len(), all_transactions_rlp.clone()));
|
||||
}
|
||||
|
||||
// Get hashes of all transactions to send to this peer
|
||||
let to_send = all_transactions_hashes.difference(&peer_info.last_sent_transactions)
|
||||
.take(MAX_TRANSACTIONS_TO_PROPAGATE)
|
||||
.cloned()
|
||||
.collect::<HashSet<_>>();
|
||||
if to_send.is_empty() {
|
||||
return None;
|
||||
}
|
||||
|
||||
// Construct RLP
|
||||
let (packet, to_send) = {
|
||||
let mut to_send = to_send;
|
||||
let mut packet = RlpStream::new();
|
||||
packet.begin_unbounded_list();
|
||||
let mut pushed = 0;
|
||||
for tx in &transactions {
|
||||
let hash = tx.hash();
|
||||
if to_send.contains(&hash) {
|
||||
let mut transaction = RlpStream::new();
|
||||
tx.rlp_append(&mut transaction);
|
||||
let appended = packet.append_raw_checked(&transaction.drain(), 1, MAX_TRANSACTION_PACKET_SIZE);
|
||||
if !appended {
|
||||
// Maximal packet size reached; just proceed with sending
|
||||
debug!("Transaction packet size limit reached. Sending incomplete set of {}/{} transactions.", pushed, to_send.len());
|
||||
to_send = to_send.into_iter().take(pushed).collect();
|
||||
break;
|
||||
}
|
||||
pushed += 1;
|
||||
}
|
||||
}
|
||||
packet.complete_unbounded_list();
|
||||
(packet, to_send)
|
||||
};
|
||||
|
||||
// Update stats
|
||||
let id = io.peer_session_info(peer_id).and_then(|info| info.id);
|
||||
for hash in &to_send {
|
||||
// update stats
|
||||
stats.propagated(hash, id, block_number);
|
||||
}
|
||||
|
||||
peer_info.last_sent_transactions = all_transactions_hashes
|
||||
.intersection(&peer_info.last_sent_transactions)
|
||||
.chain(&to_send)
|
||||
.cloned()
|
||||
.collect();
|
||||
Some((peer_id, to_send.len(), packet.out()))
|
||||
})
|
||||
.collect::<Vec<_>>()
|
||||
};
|
||||
|
||||
// Send RLPs
|
||||
let mut peers = HashSet::new();
|
||||
if lucky_peers.len() > 0 {
|
||||
let mut max_sent = 0;
|
||||
let lucky_peers_len = lucky_peers.len();
|
||||
for (peer_id, sent, rlp) in lucky_peers {
|
||||
peers.insert(peer_id);
|
||||
SyncPropagator::send_packet(io, peer_id, TRANSACTIONS_PACKET, rlp);
|
||||
trace!(target: "sync", "{:02} <- Transactions ({} entries)", peer_id, sent);
|
||||
max_sent = cmp::max(max_sent, sent);
|
||||
}
|
||||
debug!(target: "sync", "Sent up to {} transactions to {} peers.", max_sent, lucky_peers_len);
|
||||
}
|
||||
|
||||
peers
|
||||
}
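// Editorial sketch (not part of the diff): the per-peer bookkeeping above is plain
// set algebra on transaction hashes. A first-time peer gets everything; otherwise it
// gets the difference, and its `last_sent_transactions` becomes (hashes still in the
// queue that it already has) + (hashes just sent). The MAX_TRANSACTIONS_TO_PROPAGATE
// cap and the packet-size limit are omitted here; u64 stands in for H256.
use std::collections::HashSet;

fn plan_propagation(all: &HashSet<u64>, last_sent: &HashSet<u64>) -> (HashSet<u64>, HashSet<u64>) {
    if last_sent.is_empty() {
        // First contact with this peer: send the full set.
        return (all.clone(), all.clone());
    }
    // Send only what the peer has not seen yet.
    let to_send: HashSet<u64> = all.difference(last_sent).cloned().collect();
    // Remember what the peer now holds out of the live queue.
    let new_last_sent: HashSet<u64> = all
        .intersection(last_sent)
        .chain(to_send.iter())
        .cloned()
        .collect();
    (to_send, new_last_sent)
}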
|
||||
|
||||
pub fn propagate_latest_blocks(sync: &mut ChainSync, io: &mut SyncIo, sealed: &[H256]) {
|
||||
let chain_info = io.chain().chain_info();
|
||||
if (((chain_info.best_block_number as i64) - (sync.last_sent_block_number as i64)).abs() as BlockNumber) < MAX_PEER_LAG_PROPAGATION {
|
||||
let mut peers = sync.get_lagging_peers(&chain_info);
|
||||
if sealed.is_empty() {
|
||||
let hashes = SyncPropagator::propagate_new_hashes(sync, &chain_info, io, &peers);
|
||||
peers = ChainSync::select_random_peers(&peers);
|
||||
let blocks = SyncPropagator::propagate_blocks(sync, &chain_info, io, sealed, &peers);
|
||||
if blocks != 0 || hashes != 0 {
|
||||
trace!(target: "sync", "Sent latest {} blocks and {} hashes to peers.", blocks, hashes);
|
||||
}
|
||||
} else {
|
||||
SyncPropagator::propagate_blocks(sync, &chain_info, io, sealed, &peers);
|
||||
SyncPropagator::propagate_new_hashes(sync, &chain_info, io, &peers);
|
||||
trace!(target: "sync", "Sent sealed block to all peers");
|
||||
};
|
||||
}
|
||||
sync.last_sent_block_number = chain_info.best_block_number;
|
||||
}
|
||||
|
||||
/// Distribute valid proposed blocks to subset of current peers.
|
||||
pub fn propagate_proposed_blocks(sync: &mut ChainSync, io: &mut SyncIo, proposed: &[Bytes]) {
|
||||
let peers = sync.get_consensus_peers();
|
||||
trace!(target: "sync", "Sending proposed blocks to {:?}", peers);
|
||||
for block in proposed {
|
||||
let rlp = ChainSync::create_block_rlp(
|
||||
block,
|
||||
io.chain().chain_info().total_difficulty
|
||||
);
|
||||
for peer_id in &peers {
|
||||
SyncPropagator::send_packet(io, *peer_id, NEW_BLOCK_PACKET, rlp.clone());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Broadcast consensus message to peers.
|
||||
pub fn propagate_consensus_packet(sync: &mut ChainSync, io: &mut SyncIo, packet: Bytes) {
|
||||
let lucky_peers = ChainSync::select_random_peers(&sync.get_consensus_peers());
|
||||
trace!(target: "sync", "Sending consensus packet to {:?}", lucky_peers);
|
||||
for peer_id in lucky_peers {
|
||||
SyncPropagator::send_packet(io, peer_id, CONSENSUS_DATA_PACKET, packet.clone());
|
||||
}
|
||||
}
|
||||
|
||||
/// Broadcast private transaction message to peers.
|
||||
pub fn propagate_private_transaction(sync: &mut ChainSync, io: &mut SyncIo, packet: Bytes) {
|
||||
let lucky_peers = ChainSync::select_random_peers(&sync.get_private_transaction_peers());
|
||||
trace!(target: "sync", "Sending private transaction packet to {:?}", lucky_peers);
|
||||
for peer_id in lucky_peers {
|
||||
SyncPropagator::send_packet(io, peer_id, PRIVATE_TRANSACTION_PACKET, packet.clone());
|
||||
}
|
||||
}
|
||||
|
||||
/// Broadcast signed private transaction message to peers.
|
||||
pub fn propagate_signed_private_transaction(sync: &mut ChainSync, io: &mut SyncIo, packet: Bytes) {
|
||||
let lucky_peers = ChainSync::select_random_peers(&sync.get_private_transaction_peers());
|
||||
trace!(target: "sync", "Sending signed private transaction packet to {:?}", lucky_peers);
|
||||
for peer_id in lucky_peers {
|
||||
SyncPropagator::send_packet(io, peer_id, SIGNED_PRIVATE_TRANSACTION_PACKET, packet.clone());
|
||||
}
|
||||
}
|
||||
|
||||
fn select_peers_for_transactions<F>(sync: &ChainSync, filter: F) -> Vec<PeerId>
|
||||
where F: Fn(&PeerId) -> bool {
|
||||
// sqrt(x)/x scaled to max u32
|
||||
let fraction = ((sync.peers.len() as f64).powf(-0.5) * (u32::max_value() as f64).round()) as u32;
|
||||
let small = sync.peers.len() < MIN_PEERS_PROPAGATION;
|
||||
|
||||
let mut random = random::new();
|
||||
sync.peers.keys()
|
||||
.cloned()
|
||||
.filter(filter)
|
||||
.filter(|_| small || random.next_u32() < fraction)
|
||||
.take(MAX_PEERS_PROPAGATION)
|
||||
.collect()
|
||||
}
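// Editorial sketch (not part of the diff): `fraction` above is u32::MAX / sqrt(n),
// and next_u32() is uniform over the u32 range, so each peer passes the filter with
// probability roughly 1 / sqrt(n). With n peers that keeps about sqrt(n) of them,
// before the MIN_PEERS_PROPAGATION / MAX_PEERS_PROPAGATION bounds kick in.
fn expected_selected(peer_count: usize) -> f64 {
    let p = (peer_count as f64).powf(-0.5); // chance a single peer is kept
    peer_count as f64 * p                   // ~ sqrt(peer_count) peers on average
}
// e.g. 25 peers -> p = 0.2, ~5 selected; 100 peers -> p = 0.1, ~10 selected.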
|
||||
|
||||
/// Generic packet sender
|
||||
fn send_packet(sync: &mut SyncIo, peer_id: PeerId, packet_id: PacketId, packet: Bytes) {
|
||||
if let Err(e) = sync.send(peer_id, packet_id, packet) {
|
||||
debug!(target:"sync", "Error sending packet: {:?}", e);
|
||||
sync.disconnect_peer(peer_id);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use ethcore::client::{BlockInfo, ChainInfo, EachBlockWith, TestBlockChainClient};
|
||||
use parking_lot::RwLock;
|
||||
use private_tx::NoopPrivateTxHandler;
|
||||
use rlp::{Rlp};
|
||||
use std::collections::{VecDeque};
|
||||
use tests::helpers::{TestIo};
|
||||
use tests::snapshot::TestSnapshotService;
|
||||
|
||||
use super::{*, super::{*, tests::*}};
|
||||
|
||||
#[test]
|
||||
fn sends_new_hashes_to_lagging_peer() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(100, EachBlockWith::Uncle);
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(5), &client);
|
||||
let chain_info = client.chain_info();
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
|
||||
let peers = sync.get_lagging_peers(&chain_info);
|
||||
let peer_count = SyncPropagator::propagate_new_hashes(&mut sync, &chain_info, &mut io, &peers);
|
||||
|
||||
// 1 message should be sent
|
||||
assert_eq!(1, io.packets.len());
|
||||
// 1 peer should be updated
|
||||
assert_eq!(1, peer_count);
|
||||
// NEW_BLOCK_HASHES_PACKET
|
||||
assert_eq!(0x01, io.packets[0].packet_id);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn sends_latest_block_to_lagging_peer() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(100, EachBlockWith::Uncle);
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(5), &client);
|
||||
let chain_info = client.chain_info();
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
let peers = sync.get_lagging_peers(&chain_info);
|
||||
let peer_count = SyncPropagator::propagate_blocks(&mut sync, &chain_info, &mut io, &[], &peers);
|
||||
|
||||
// 1 message should be sent
|
||||
assert_eq!(1, io.packets.len());
|
||||
// 1 peer should be updated
|
||||
assert_eq!(1, peer_count);
|
||||
// NEW_BLOCK_PACKET
|
||||
assert_eq!(0x07, io.packets[0].packet_id);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn sends_sealed_block() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(100, EachBlockWith::Uncle);
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let hash = client.block_hash(BlockId::Number(99)).unwrap();
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(5), &client);
|
||||
let chain_info = client.chain_info();
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
let peers = sync.get_lagging_peers(&chain_info);
|
||||
let peer_count = SyncPropagator::propagate_blocks(&mut sync ,&chain_info, &mut io, &[hash.clone()], &peers);
|
||||
|
||||
// 1 message should be sent
|
||||
assert_eq!(1, io.packets.len());
|
||||
// 1 peer should be updated
|
||||
assert_eq!(1, peer_count);
|
||||
// NEW_BLOCK_PACKET
|
||||
assert_eq!(0x07, io.packets[0].packet_id);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn sends_proposed_block() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(2, EachBlockWith::Uncle);
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let block = client.block(BlockId::Latest).unwrap().into_inner();
|
||||
let mut sync = ChainSync::new(SyncConfig::default(), &client, Arc::new(NoopPrivateTxHandler));
|
||||
sync.peers.insert(0,
|
||||
PeerInfo {
|
||||
// Messaging protocol
|
||||
protocol_version: 2,
|
||||
genesis: H256::zero(),
|
||||
network_id: 0,
|
||||
latest_hash: client.block_hash_delta_minus(1),
|
||||
difficulty: None,
|
||||
asking: PeerAsking::Nothing,
|
||||
asking_blocks: Vec::new(),
|
||||
asking_hash: None,
|
||||
ask_time: Instant::now(),
|
||||
last_sent_transactions: HashSet::new(),
|
||||
expired: false,
|
||||
confirmation: ForkConfirmation::Confirmed,
|
||||
snapshot_number: None,
|
||||
snapshot_hash: None,
|
||||
asking_snapshot_data: None,
|
||||
block_set: None,
|
||||
});
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
SyncPropagator::propagate_proposed_blocks(&mut sync, &mut io, &[block]);
|
||||
|
||||
// 1 message should be sent
|
||||
assert_eq!(1, io.packets.len());
|
||||
// NEW_BLOCK_PACKET
|
||||
assert_eq!(0x07, io.packets[0].packet_id);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn propagates_transactions() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(100, EachBlockWith::Uncle);
|
||||
client.insert_transaction_to_queue();
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(1), &client);
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
let peer_count = SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
// Try to propagate same transactions for the second time
|
||||
let peer_count2 = SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
// Even after new block transactions should not be propagated twice
|
||||
sync.chain_new_blocks(&mut io, &[], &[], &[], &[], &[], &[]);
|
||||
// Try to propagate same transactions for the third time
|
||||
let peer_count3 = SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
|
||||
// 1 message should be sent
|
||||
assert_eq!(1, io.packets.len());
|
||||
// 1 peer should be updated but only once
|
||||
assert_eq!(1, peer_count);
|
||||
assert_eq!(0, peer_count2);
|
||||
assert_eq!(0, peer_count3);
|
||||
// TRANSACTIONS_PACKET
|
||||
assert_eq!(0x02, io.packets[0].packet_id);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn does_not_propagate_new_transactions_after_new_block() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(100, EachBlockWith::Uncle);
|
||||
client.insert_transaction_to_queue();
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(1), &client);
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
let peer_count = SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
io.chain.insert_transaction_to_queue();
|
||||
// New block import should not trigger propagation.
|
||||
// (we only propagate on timeout)
|
||||
sync.chain_new_blocks(&mut io, &[], &[], &[], &[], &[], &[]);
|
||||
|
||||
// 1 message should be sent
|
||||
assert_eq!(1, io.packets.len());
|
||||
// 1 peer should receive the message
|
||||
assert_eq!(1, peer_count);
|
||||
// TRANSACTIONS_PACKET
|
||||
assert_eq!(0x02, io.packets[0].packet_id);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn does_not_fail_for_no_peers() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(100, EachBlockWith::Uncle);
|
||||
client.insert_transaction_to_queue();
|
||||
// Sync with no peers
|
||||
let mut sync = ChainSync::new(SyncConfig::default(), &client, Arc::new(NoopPrivateTxHandler));
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
let peer_count = SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
sync.chain_new_blocks(&mut io, &[], &[], &[], &[], &[], &[]);
|
||||
// Try to propagate same transactions for the second time
|
||||
let peer_count2 = SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
|
||||
assert_eq!(0, io.packets.len());
|
||||
assert_eq!(0, peer_count);
|
||||
assert_eq!(0, peer_count2);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn propagates_transactions_without_alternating() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(100, EachBlockWith::Uncle);
|
||||
client.insert_transaction_to_queue();
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(1), &client);
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let ss = TestSnapshotService::new();
|
||||
// should send some
|
||||
{
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
let peer_count = SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
assert_eq!(1, io.packets.len());
|
||||
assert_eq!(1, peer_count);
|
||||
}
|
||||
// Insert some more
|
||||
client.insert_transaction_to_queue();
|
||||
let (peer_count2, peer_count3) = {
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
// Propagate new transactions
|
||||
let peer_count2 = SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
// And now the peer should have all transactions
|
||||
let peer_count3 = SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
(peer_count2, peer_count3)
|
||||
};
|
||||
|
||||
// 2 messages should be sent (in total)
|
||||
assert_eq!(2, queue.read().len());
|
||||
// 1 peer should be updated but only once after inserting new transaction
|
||||
assert_eq!(1, peer_count2);
|
||||
assert_eq!(0, peer_count3);
|
||||
// TRANSACTIONS_PACKET
|
||||
assert_eq!(0x02, queue.read()[0].packet_id);
|
||||
assert_eq!(0x02, queue.read()[1].packet_id);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn should_maintain_transations_propagation_stats() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.add_blocks(100, EachBlockWith::Uncle);
|
||||
client.insert_transaction_to_queue();
|
||||
let mut sync = dummy_sync_with_peer(client.block_hash_delta_minus(1), &client);
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
|
||||
let stats = sync.transactions_stats();
|
||||
assert_eq!(stats.len(), 1, "Should maintain stats for single transaction.")
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn should_propagate_service_transaction_to_selected_peers_only() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
client.insert_transaction_with_gas_price_to_queue(U256::zero());
|
||||
let block_hash = client.block_hash_delta_minus(1);
|
||||
let mut sync = ChainSync::new(SyncConfig::default(), &client, Arc::new(NoopPrivateTxHandler));
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
|
||||
// when peer#1 is Geth
|
||||
insert_dummy_peer(&mut sync, 1, block_hash);
|
||||
io.peers_info.insert(1, "Geth".to_owned());
|
||||
// and peer#2 is Parity, accepting service transactions
|
||||
insert_dummy_peer(&mut sync, 2, block_hash);
|
||||
io.peers_info.insert(2, "Parity/v1.6".to_owned());
|
||||
// and peer#3 is Parity, discarding service transactions
|
||||
insert_dummy_peer(&mut sync, 3, block_hash);
|
||||
io.peers_info.insert(3, "Parity/v1.5".to_owned());
|
||||
// and peer#4 is Parity, accepting service transactions
|
||||
insert_dummy_peer(&mut sync, 4, block_hash);
|
||||
io.peers_info.insert(4, "Parity/v1.7.3-ABCDEFGH".to_owned());
|
||||
|
||||
// and new service transaction is propagated to peers
|
||||
SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
|
||||
// peer#2 && peer#4 are receiving service transaction
|
||||
assert!(io.packets.iter().any(|p| p.packet_id == 0x02 && p.recipient == 2)); // TRANSACTIONS_PACKET
|
||||
assert!(io.packets.iter().any(|p| p.packet_id == 0x02 && p.recipient == 4)); // TRANSACTIONS_PACKET
|
||||
assert_eq!(io.packets.len(), 2);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn should_propagate_service_transaction_is_sent_as_separate_message() {
|
||||
let mut client = TestBlockChainClient::new();
|
||||
let tx1_hash = client.insert_transaction_to_queue();
|
||||
let tx2_hash = client.insert_transaction_with_gas_price_to_queue(U256::zero());
|
||||
let block_hash = client.block_hash_delta_minus(1);
|
||||
let mut sync = ChainSync::new(SyncConfig::default(), &client, Arc::new(NoopPrivateTxHandler));
|
||||
let queue = RwLock::new(VecDeque::new());
|
||||
let ss = TestSnapshotService::new();
|
||||
let mut io = TestIo::new(&mut client, &ss, &queue, None);
|
||||
|
||||
// when peer#1 is Parity, accepting service transactions
|
||||
insert_dummy_peer(&mut sync, 1, block_hash);
|
||||
io.peers_info.insert(1, "Parity/v1.6".to_owned());
|
||||
|
||||
// and service + non-service transactions are propagated to peers
|
||||
SyncPropagator::propagate_new_transactions(&mut sync, &mut io);
|
||||
|
||||
// two separate packets for peer are queued:
|
||||
// 1) with non-service-transaction
|
||||
// 2) with service transaction
|
||||
let sent_transactions: Vec<UnverifiedTransaction> = io.packets.iter()
|
||||
.filter_map(|p| {
|
||||
if p.packet_id != 0x02 || p.recipient != 1 { // TRANSACTIONS_PACKET
|
||||
return None;
|
||||
}
|
||||
|
||||
let rlp = Rlp::new(&*p.data);
|
||||
let item_count = rlp.item_count().unwrap_or(0);
|
||||
if item_count != 1 {
|
||||
return None;
|
||||
}
|
||||
|
||||
rlp.at(0).ok().and_then(|r| r.as_val().ok())
|
||||
})
|
||||
.collect();
|
||||
assert_eq!(sent_transactions.len(), 2);
|
||||
assert!(sent_transactions.iter().any(|tx| tx.hash() == tx1_hash));
|
||||
assert!(sent_transactions.iter().any(|tx| tx.hash() == tx2_hash));
|
||||
}
|
||||
}
|
ethcore/sync/src/chain/requester.rs (new file, 155 lines)
@@ -0,0 +1,155 @@
|
||||
// Copyright 2015-2018 Parity Technologies (UK) Ltd.
|
||||
// This file is part of Parity.
|
||||
|
||||
// Parity is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
|
||||
// Parity is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
use api::WARP_SYNC_PROTOCOL_ID;
|
||||
use block_sync::BlockRequest;
|
||||
use bytes::Bytes;
|
||||
use ethcore::header::BlockNumber;
|
||||
use ethereum_types::H256;
|
||||
use network::{PeerId, PacketId};
|
||||
use rlp::RlpStream;
|
||||
use std::time::Instant;
|
||||
use sync_io::SyncIo;
|
||||
|
||||
use super::{
|
||||
BlockSet,
|
||||
ChainSync,
|
||||
PeerAsking,
|
||||
ETH_PROTOCOL_VERSION_63,
|
||||
GET_BLOCK_BODIES_PACKET,
|
||||
GET_BLOCK_HEADERS_PACKET,
|
||||
GET_RECEIPTS_PACKET,
|
||||
GET_SNAPSHOT_DATA_PACKET,
|
||||
GET_SNAPSHOT_MANIFEST_PACKET,
|
||||
};
|
||||
|
||||
/// The Chain Sync Requester: requests data from other peers
|
||||
pub struct SyncRequester;
|
||||
|
||||
impl SyncRequester {
|
||||
/// Perform a block download request
|
||||
pub fn request_blocks(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, request: BlockRequest, block_set: BlockSet) {
|
||||
match request {
|
||||
BlockRequest::Headers { start, count, skip } => {
|
||||
SyncRequester::request_headers_by_hash(sync, io, peer_id, &start, count, skip, false, block_set);
|
||||
},
|
||||
BlockRequest::Bodies { hashes } => {
|
||||
SyncRequester::request_bodies(sync, io, peer_id, hashes, block_set);
|
||||
},
|
||||
BlockRequest::Receipts { hashes } => {
|
||||
SyncRequester::request_receipts(sync, io, peer_id, hashes, block_set);
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
/// Request block bodies from a peer
|
||||
fn request_bodies(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, hashes: Vec<H256>, set: BlockSet) {
|
||||
let mut rlp = RlpStream::new_list(hashes.len());
|
||||
trace!(target: "sync", "{} <- GetBlockBodies: {} entries starting from {:?}, set = {:?}", peer_id, hashes.len(), hashes.first(), set);
|
||||
for h in &hashes {
|
||||
rlp.append(&h.clone());
|
||||
}
|
||||
SyncRequester::send_request(sync, io, peer_id, PeerAsking::BlockBodies, GET_BLOCK_BODIES_PACKET, rlp.out());
|
||||
let peer = sync.peers.get_mut(&peer_id).expect("peer_id may originate either from on_packet, where it is already validated or from enumerating self.peers. qed");
|
||||
peer.asking_blocks = hashes;
|
||||
peer.block_set = Some(set);
|
||||
}
|
||||
|
||||
/// Request headers from a peer by block number
|
||||
pub fn request_fork_header(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, n: BlockNumber) {
|
||||
trace!(target: "sync", "{} <- GetForkHeader: at {}", peer_id, n);
|
||||
let mut rlp = RlpStream::new_list(4);
|
||||
rlp.append(&n);
|
||||
rlp.append(&1u32);
|
||||
rlp.append(&0u32);
|
||||
rlp.append(&0u32);
|
||||
SyncRequester::send_request(sync, io, peer_id, PeerAsking::ForkHeader, GET_BLOCK_HEADERS_PACKET, rlp.out());
|
||||
}
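// Editorial sketch (not part of the diff): both request_fork_header and
// request_headers_by_hash build the same four-item GetBlockHeaders payload,
// [block (number or hash), max_headers, skip, reverse], which
// SyncSupplier::return_block_headers unpacks on the other side. A round-trip
// illustration with the rlp crate:
fn encode_fork_header_request(n: u64) -> Vec<u8> {
    // Same shape as request_fork_header: [number, max_headers = 1, skip = 0, reverse = 0].
    let mut rlp = RlpStream::new_list(4);
    rlp.append(&n);
    rlp.append(&1u32);
    rlp.append(&0u32);
    rlp.append(&0u32);
    rlp.out()
}

fn decode_headers_request(bytes: &[u8]) -> Result<(u64, usize, usize, bool), rlp::DecoderError> {
    // Field order mirrors what return_block_headers reads back.
    let r = rlp::Rlp::new(bytes);
    Ok((r.val_at(0)?, r.val_at(1)?, r.val_at(2)?, r.val_at(3)?))
}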
|
||||
|
||||
/// Request snapshot data from a peer, if a chunk is still needed.
|
||||
pub fn request_snapshot_data(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId) {
|
||||
// find chunk data to download
|
||||
if let Some(hash) = sync.snapshot.needed_chunk() {
|
||||
if let Some(ref mut peer) = sync.peers.get_mut(&peer_id) {
|
||||
peer.asking_snapshot_data = Some(hash.clone());
|
||||
}
|
||||
SyncRequester::request_snapshot_chunk(sync, io, peer_id, &hash);
|
||||
}
|
||||
}
|
||||
|
||||
/// Request snapshot manifest from a peer.
|
||||
pub fn request_snapshot_manifest(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId) {
|
||||
trace!(target: "sync", "{} <- GetSnapshotManifest", peer_id);
|
||||
let rlp = RlpStream::new_list(0);
|
||||
SyncRequester::send_request(sync, io, peer_id, PeerAsking::SnapshotManifest, GET_SNAPSHOT_MANIFEST_PACKET, rlp.out());
|
||||
}
|
||||
|
||||
/// Request headers from a peer by block hash
|
||||
fn request_headers_by_hash(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, h: &H256, count: u64, skip: u64, reverse: bool, set: BlockSet) {
|
||||
trace!(target: "sync", "{} <- GetBlockHeaders: {} entries starting from {}, set = {:?}", peer_id, count, h, set);
|
||||
let mut rlp = RlpStream::new_list(4);
|
||||
rlp.append(h);
|
||||
rlp.append(&count);
|
||||
rlp.append(&skip);
|
||||
rlp.append(&if reverse {1u32} else {0u32});
|
||||
SyncRequester::send_request(sync, io, peer_id, PeerAsking::BlockHeaders, GET_BLOCK_HEADERS_PACKET, rlp.out());
|
||||
let peer = sync.peers.get_mut(&peer_id).expect("peer_id may originate either from on_packet, where it is already validated or from enumerating self.peers. qed");
|
||||
peer.asking_hash = Some(h.clone());
|
||||
peer.block_set = Some(set);
|
||||
}
|
||||
|
||||
/// Request block receipts from a peer
|
||||
fn request_receipts(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, hashes: Vec<H256>, set: BlockSet) {
|
||||
let mut rlp = RlpStream::new_list(hashes.len());
|
||||
trace!(target: "sync", "{} <- GetBlockReceipts: {} entries starting from {:?}, set = {:?}", peer_id, hashes.len(), hashes.first(), set);
|
||||
for h in &hashes {
|
||||
rlp.append(&h.clone());
|
||||
}
|
||||
SyncRequester::send_request(sync, io, peer_id, PeerAsking::BlockReceipts, GET_RECEIPTS_PACKET, rlp.out());
|
||||
let peer = sync.peers.get_mut(&peer_id).expect("peer_id may originate either from on_packet, where it is already validated or from enumerating self.peers. qed");
|
||||
peer.asking_blocks = hashes;
|
||||
peer.block_set = Some(set);
|
||||
}
|
||||
|
||||
/// Request snapshot chunk from a peer.
|
||||
fn request_snapshot_chunk(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, chunk: &H256) {
|
||||
trace!(target: "sync", "{} <- GetSnapshotData {:?}", peer_id, chunk);
|
||||
let mut rlp = RlpStream::new_list(1);
|
||||
rlp.append(chunk);
|
||||
SyncRequester::send_request(sync, io, peer_id, PeerAsking::SnapshotData, GET_SNAPSHOT_DATA_PACKET, rlp.out());
|
||||
}
|
||||
|
||||
/// Generic request sender
|
||||
fn send_request(sync: &mut ChainSync, io: &mut SyncIo, peer_id: PeerId, asking: PeerAsking, packet_id: PacketId, packet: Bytes) {
|
||||
if let Some(ref mut peer) = sync.peers.get_mut(&peer_id) {
|
||||
if peer.asking != PeerAsking::Nothing {
|
||||
warn!(target:"sync", "Asking {:?} while requesting {:?}", peer.asking, asking);
|
||||
}
|
||||
peer.asking = asking;
|
||||
peer.ask_time = Instant::now();
|
||||
// TODO [ToDr] This seems quite fragile. Be careful when protocol is updated.
|
||||
let result = if packet_id >= ETH_PROTOCOL_VERSION_63.1 {
|
||||
io.send_protocol(WARP_SYNC_PROTOCOL_ID, peer_id, packet_id, packet)
|
||||
} else {
|
||||
io.send(peer_id, packet_id, packet)
|
||||
};
|
||||
if let Err(e) = result {
|
||||
debug!(target:"sync", "Error sending request: {:?}", e);
|
||||
io.disconnect_peer(peer_id);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
ethcore/sync/src/chain/supplier.rs (new file, 446 lines)
@@ -0,0 +1,446 @@
|
||||
// Copyright 2015-2018 Parity Technologies (UK) Ltd.
|
||||
// This file is part of Parity.
|
||||
|
||||
// Parity is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
|
||||
// Parity is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
use bytes::Bytes;
|
||||
use ethcore::client::BlockId;
|
||||
use ethcore::header::BlockNumber;
|
||||
use ethereum_types::H256;
|
||||
use network::{self, PeerId};
|
||||
use parking_lot::RwLock;
|
||||
use rlp::{Rlp, RlpStream};
|
||||
use std::cmp;
|
||||
use sync_io::SyncIo;
|
||||
|
||||
use super::{
|
||||
ChainSync,
|
||||
RlpResponseResult,
|
||||
PacketDecodeError,
|
||||
BLOCK_BODIES_PACKET,
|
||||
BLOCK_HEADERS_PACKET,
|
||||
CONSENSUS_DATA_PACKET,
|
||||
GET_BLOCK_BODIES_PACKET,
|
||||
GET_BLOCK_HEADERS_PACKET,
|
||||
GET_NODE_DATA_PACKET,
|
||||
GET_RECEIPTS_PACKET,
|
||||
GET_SNAPSHOT_DATA_PACKET,
|
||||
GET_SNAPSHOT_MANIFEST_PACKET,
|
||||
MAX_BODIES_TO_SEND,
|
||||
MAX_HEADERS_TO_SEND,
|
||||
MAX_NODE_DATA_TO_SEND,
|
||||
MAX_RECEIPTS_HEADERS_TO_SEND,
|
||||
MAX_RECEIPTS_TO_SEND,
|
||||
NODE_DATA_PACKET,
|
||||
RECEIPTS_PACKET,
|
||||
SNAPSHOT_DATA_PACKET,
|
||||
SNAPSHOT_MANIFEST_PACKET,
|
||||
};
|
||||
|
||||
/// The Chain Sync Supplier: answers requests from peers with available data
|
||||
pub struct SyncSupplier;
|
||||
|
||||
impl SyncSupplier {
|
||||
/// Dispatch incoming requests and responses
|
||||
pub fn dispatch_packet(sync: &RwLock<ChainSync>, io: &mut SyncIo, peer: PeerId, packet_id: u8, data: &[u8]) {
|
||||
let rlp = Rlp::new(data);
|
||||
let result = match packet_id {
|
||||
GET_BLOCK_BODIES_PACKET => SyncSupplier::return_rlp(io, &rlp, peer,
|
||||
SyncSupplier::return_block_bodies,
|
||||
|e| format!("Error sending block bodies: {:?}", e)),
|
||||
|
||||
GET_BLOCK_HEADERS_PACKET => SyncSupplier::return_rlp(io, &rlp, peer,
|
||||
SyncSupplier::return_block_headers,
|
||||
|e| format!("Error sending block headers: {:?}", e)),
|
||||
|
||||
GET_RECEIPTS_PACKET => SyncSupplier::return_rlp(io, &rlp, peer,
|
||||
SyncSupplier::return_receipts,
|
||||
|e| format!("Error sending receipts: {:?}", e)),
|
||||
|
||||
GET_NODE_DATA_PACKET => SyncSupplier::return_rlp(io, &rlp, peer,
|
||||
SyncSupplier::return_node_data,
|
||||
|e| format!("Error sending nodes: {:?}", e)),
|
||||
|
||||
GET_SNAPSHOT_MANIFEST_PACKET => SyncSupplier::return_rlp(io, &rlp, peer,
|
||||
SyncSupplier::return_snapshot_manifest,
|
||||
|e| format!("Error sending snapshot manifest: {:?}", e)),
|
||||
|
||||
GET_SNAPSHOT_DATA_PACKET => SyncSupplier::return_rlp(io, &rlp, peer,
|
||||
SyncSupplier::return_snapshot_data,
|
||||
|e| format!("Error sending snapshot data: {:?}", e)),
|
||||
CONSENSUS_DATA_PACKET => ChainSync::on_consensus_packet(io, peer, &rlp),
|
||||
_ => {
|
||||
sync.write().on_packet(io, peer, packet_id, data);
|
||||
Ok(())
|
||||
}
|
||||
};
|
||||
result.unwrap_or_else(|e| {
|
||||
debug!(target:"sync", "{} -> Malformed packet {} : {}", peer, packet_id, e);
|
||||
})
|
||||
}

	/// Respond to GetBlockHeaders request
	fn return_block_headers(io: &SyncIo, r: &Rlp, peer_id: PeerId) -> RlpResponseResult {
		// Packet layout:
		// [ block: { P , B_32 }, maxHeaders: P, skip: P, reverse: P in { 0 , 1 } ]
		let max_headers: usize = r.val_at(1)?;
		let skip: usize = r.val_at(2)?;
		let reverse: bool = r.val_at(3)?;
		let last = io.chain().chain_info().best_block_number;
		let number = if r.at(0)?.size() == 32 {
			// id is a hash
			let hash: H256 = r.val_at(0)?;
			trace!(target: "sync", "{} -> GetBlockHeaders (hash: {}, max: {}, skip: {}, reverse:{})", peer_id, hash, max_headers, skip, reverse);
			match io.chain().block_header(BlockId::Hash(hash)) {
				Some(hdr) => {
					let number = hdr.number().into();
					debug_assert_eq!(hdr.hash(), hash);

					if max_headers == 1 || io.chain().block_hash(BlockId::Number(number)) != Some(hash) {
						// Non canonical header or single header requested
						// TODO: handle single-step reverse hashchains of non-canon hashes
						trace!(target:"sync", "Returning single header: {:?}", hash);
						let mut rlp = RlpStream::new_list(1);
						rlp.append_raw(&hdr.into_inner(), 1);
						return Ok(Some((BLOCK_HEADERS_PACKET, rlp)));
					}
					number
				}
				None => return Ok(Some((BLOCK_HEADERS_PACKET, RlpStream::new_list(0)))) //no such header, return nothing
			}
		} else {
			trace!(target: "sync", "{} -> GetBlockHeaders (number: {}, max: {}, skip: {}, reverse:{})", peer_id, r.val_at::<BlockNumber>(0)?, max_headers, skip, reverse);
			r.val_at(0)?
		};

		let mut number = if reverse {
			cmp::min(last, number)
		} else {
			cmp::max(0, number)
		};
		let max_count = cmp::min(MAX_HEADERS_TO_SEND, max_headers);
		let mut count = 0;
		let mut data = Bytes::new();
		let inc = (skip + 1) as BlockNumber;
		let overlay = io.chain_overlay().read();

		while number <= last && count < max_count {
			if let Some(hdr) = overlay.get(&number) {
				trace!(target: "sync", "{}: Returning cached fork header", peer_id);
				data.extend_from_slice(hdr);
				count += 1;
			} else if let Some(hdr) = io.chain().block_header(BlockId::Number(number)) {
				data.append(&mut hdr.into_inner());
				count += 1;
			} else {
				// No required block.
				break;
			}
			if reverse {
				if number <= inc || number == 0 {
					break;
				}
				number -= inc;
			}
			else {
				number += inc;
			}
		}
		let mut rlp = RlpStream::new_list(count as usize);
		rlp.append_raw(&data, count as usize);
		trace!(target: "sync", "{} -> GetBlockHeaders: returned {} entries", peer_id, count);
		Ok(Some((BLOCK_HEADERS_PACKET, rlp)))
	}

	/// Respond to GetBlockBodies request
	fn return_block_bodies(io: &SyncIo, r: &Rlp, peer_id: PeerId) -> RlpResponseResult {
		let mut count = r.item_count().unwrap_or(0);
		if count == 0 {
			debug!(target: "sync", "Empty GetBlockBodies request, ignoring.");
			return Ok(None);
		}
		count = cmp::min(count, MAX_BODIES_TO_SEND);
		let mut added = 0usize;
		let mut data = Bytes::new();
		for i in 0..count {
			if let Some(body) = io.chain().block_body(BlockId::Hash(r.val_at::<H256>(i)?)) {
				data.append(&mut body.into_inner());
				added += 1;
			}
		}
		let mut rlp = RlpStream::new_list(added);
		rlp.append_raw(&data, added);
		trace!(target: "sync", "{} -> GetBlockBodies: returned {} entries", peer_id, added);
		Ok(Some((BLOCK_BODIES_PACKET, rlp)))
	}

	/// Respond to GetNodeData request
	fn return_node_data(io: &SyncIo, r: &Rlp, peer_id: PeerId) -> RlpResponseResult {
		let mut count = r.item_count().unwrap_or(0);
		trace!(target: "sync", "{} -> GetNodeData: {} entries", peer_id, count);
		if count == 0 {
			debug!(target: "sync", "Empty GetNodeData request, ignoring.");
			return Ok(None);
		}
		count = cmp::min(count, MAX_NODE_DATA_TO_SEND);
		let mut added = 0usize;
		let mut data = Vec::new();
		for i in 0..count {
			if let Some(node) = io.chain().state_data(&r.val_at::<H256>(i)?) {
				data.push(node);
				added += 1;
			}
		}
		trace!(target: "sync", "{} -> GetNodeData: return {} entries", peer_id, added);
		let mut rlp = RlpStream::new_list(added);
		for d in data {
			rlp.append(&d);
		}
		Ok(Some((NODE_DATA_PACKET, rlp)))
	}

	fn return_receipts(io: &SyncIo, rlp: &Rlp, peer_id: PeerId) -> RlpResponseResult {
		let mut count = rlp.item_count().unwrap_or(0);
		trace!(target: "sync", "{} -> GetReceipts: {} entries", peer_id, count);
		if count == 0 {
			debug!(target: "sync", "Empty GetReceipts request, ignoring.");
			return Ok(None);
		}
		count = cmp::min(count, MAX_RECEIPTS_HEADERS_TO_SEND);
		let mut added_headers = 0usize;
		let mut added_receipts = 0usize;
		let mut data = Bytes::new();
		for i in 0..count {
			if let Some(mut receipts_bytes) = io.chain().block_receipts(&rlp.val_at::<H256>(i)?) {
				data.append(&mut receipts_bytes);
				added_receipts += receipts_bytes.len();
				added_headers += 1;
				if added_receipts > MAX_RECEIPTS_TO_SEND { break; }
			}
		}
		let mut rlp_result = RlpStream::new_list(added_headers);
		rlp_result.append_raw(&data, added_headers);
		Ok(Some((RECEIPTS_PACKET, rlp_result)))
	}

	/// Respond to GetSnapshotManifest request
	fn return_snapshot_manifest(io: &SyncIo, r: &Rlp, peer_id: PeerId) -> RlpResponseResult {
		let count = r.item_count().unwrap_or(0);
		trace!(target: "sync", "{} -> GetSnapshotManifest", peer_id);
		if count != 0 {
			debug!(target: "sync", "Invalid GetSnapshotManifest request, ignoring.");
			return Ok(None);
		}
		let rlp = match io.snapshot_service().manifest() {
			Some(manifest) => {
				trace!(target: "sync", "{} <- SnapshotManifest", peer_id);
				let mut rlp = RlpStream::new_list(1);
				rlp.append_raw(&manifest.into_rlp(), 1);
				rlp
			},
			None => {
				trace!(target: "sync", "{}: No manifest to return", peer_id);
				RlpStream::new_list(0)
			}
		};
		Ok(Some((SNAPSHOT_MANIFEST_PACKET, rlp)))
	}

	/// Respond to GetSnapshotData request
	fn return_snapshot_data(io: &SyncIo, r: &Rlp, peer_id: PeerId) -> RlpResponseResult {
		let hash: H256 = r.val_at(0)?;
		trace!(target: "sync", "{} -> GetSnapshotData {:?}", peer_id, hash);
		let rlp = match io.snapshot_service().chunk(hash) {
			Some(data) => {
				let mut rlp = RlpStream::new_list(1);
				trace!(target: "sync", "{} <- SnapshotData", peer_id);
				rlp.append(&data);
				rlp
			},
			None => {
				RlpStream::new_list(0)
			}
		};
		Ok(Some((SNAPSHOT_DATA_PACKET, rlp)))
	}

	fn return_rlp<FRlp, FError>(io: &mut SyncIo, rlp: &Rlp, peer: PeerId, rlp_func: FRlp, error_func: FError) -> Result<(), PacketDecodeError>
		where FRlp : Fn(&SyncIo, &Rlp, PeerId) -> RlpResponseResult,
			FError : FnOnce(network::Error) -> String
	{
		let response = rlp_func(io, rlp, peer);
		match response {
			Err(e) => Err(e),
			Ok(Some((packet_id, rlp_stream))) => {
				io.respond(packet_id, rlp_stream.out()).unwrap_or_else(
					|e| debug!(target: "sync", "{:?}", error_func(e)));
				Ok(())
			}
			_ => Ok(())
		}
	}
}

#[cfg(test)]
mod test {
	use std::collections::{VecDeque};
	use tests::helpers::{TestIo};
	use tests::snapshot::TestSnapshotService;
	use ethereum_types::{H256};
	use parking_lot::RwLock;
	use bytes::Bytes;
	use rlp::{Rlp, RlpStream};
	use super::{*, super::tests::*};
	use ethcore::client::{BlockChainClient, EachBlockWith, TestBlockChainClient};

	#[test]
	fn return_block_headers() {
		use ethcore::views::HeaderView;
		fn make_hash_req(h: &H256, count: usize, skip: usize, reverse: bool) -> Bytes {
			let mut rlp = RlpStream::new_list(4);
			rlp.append(h);
			rlp.append(&count);
			rlp.append(&skip);
			rlp.append(&if reverse {1u32} else {0u32});
			rlp.out()
		}

		fn make_num_req(n: usize, count: usize, skip: usize, reverse: bool) -> Bytes {
			let mut rlp = RlpStream::new_list(4);
			rlp.append(&n);
			rlp.append(&count);
			rlp.append(&skip);
			rlp.append(&if reverse {1u32} else {0u32});
			rlp.out()
		}
		fn to_header_vec(rlp: ::chain::RlpResponseResult) -> Vec<Bytes> {
			Rlp::new(&rlp.unwrap().unwrap().1.out()).iter().map(|r| r.as_raw().to_vec()).collect()
		}

		let mut client = TestBlockChainClient::new();
		client.add_blocks(100, EachBlockWith::Nothing);
		let blocks: Vec<_> = (0 .. 100)
			.map(|i| (&client as &BlockChainClient).block(BlockId::Number(i as BlockNumber)).map(|b| b.into_inner()).unwrap()).collect();
		let headers: Vec<_> = blocks.iter().map(|b| Rlp::new(b).at(0).unwrap().as_raw().to_vec()).collect();
		let hashes: Vec<_> = headers.iter().map(|h| view!(HeaderView, h).hash()).collect();

		let queue = RwLock::new(VecDeque::new());
		let ss = TestSnapshotService::new();
		let io = TestIo::new(&mut client, &ss, &queue, None);

		let unknown: H256 = H256::new();
		let result = SyncSupplier::return_block_headers(&io, &Rlp::new(&make_hash_req(&unknown, 1, 0, false)), 0);
		assert!(to_header_vec(result).is_empty());
		let result = SyncSupplier::return_block_headers(&io, &Rlp::new(&make_hash_req(&unknown, 1, 0, true)), 0);
		assert!(to_header_vec(result).is_empty());

		let result = SyncSupplier::return_block_headers(&io, &Rlp::new(&make_hash_req(&hashes[2], 1, 0, true)), 0);
		assert_eq!(to_header_vec(result), vec![headers[2].clone()]);

		let result = SyncSupplier::return_block_headers(&io, &Rlp::new(&make_hash_req(&hashes[2], 1, 0, false)), 0);
		assert_eq!(to_header_vec(result), vec![headers[2].clone()]);

		let result = SyncSupplier::return_block_headers(&io, &Rlp::new(&make_hash_req(&hashes[50], 3, 5, false)), 0);
		assert_eq!(to_header_vec(result), vec![headers[50].clone(), headers[56].clone(), headers[62].clone()]);

		let result = SyncSupplier::return_block_headers(&io, &Rlp::new(&make_hash_req(&hashes[50], 3, 5, true)), 0);
		assert_eq!(to_header_vec(result), vec![headers[50].clone(), headers[44].clone(), headers[38].clone()]);

		let result = SyncSupplier::return_block_headers(&io, &Rlp::new(&make_num_req(2, 1, 0, true)), 0);
		assert_eq!(to_header_vec(result), vec![headers[2].clone()]);

		let result = SyncSupplier::return_block_headers(&io, &Rlp::new(&make_num_req(2, 1, 0, false)), 0);
		assert_eq!(to_header_vec(result), vec![headers[2].clone()]);

		let result = SyncSupplier::return_block_headers(&io, &Rlp::new(&make_num_req(50, 3, 5, false)), 0);
		assert_eq!(to_header_vec(result), vec![headers[50].clone(), headers[56].clone(), headers[62].clone()]);

		let result = SyncSupplier::return_block_headers(&io, &Rlp::new(&make_num_req(50, 3, 5, true)), 0);
		assert_eq!(to_header_vec(result), vec![headers[50].clone(), headers[44].clone(), headers[38].clone()]);
	}

	#[test]
	fn return_nodes() {
		let mut client = TestBlockChainClient::new();
		let queue = RwLock::new(VecDeque::new());
		let sync = dummy_sync_with_peer(H256::new(), &client);
		let ss = TestSnapshotService::new();
		let mut io = TestIo::new(&mut client, &ss, &queue, None);

		let mut node_list = RlpStream::new_list(3);
		node_list.append(&H256::from("0000000000000000000000000000000000000000000000005555555555555555"));
		node_list.append(&H256::from("ffffffffffffffffffffffffffffffffffffffffffffaaaaaaaaaaaaaaaaaaaa"));
		node_list.append(&H256::from("aff0000000000000000000000000000000000000000000000000000000000000"));

		let node_request = node_list.out();
		// it returns rlp ONLY for hashes starting with "f"
		let result = SyncSupplier::return_node_data(&io, &Rlp::new(&node_request.clone()), 0);

		assert!(result.is_ok());
		let rlp_result = result.unwrap();
		assert!(rlp_result.is_some());

		// the response contains a single rlp-encoded hash
		let rlp = rlp_result.unwrap().1.out();
		let rlp = Rlp::new(&rlp);
		assert_eq!(Ok(1), rlp.item_count());

		io.sender = Some(2usize);

		ChainSync::dispatch_packet(&RwLock::new(sync), &mut io, 0usize, GET_NODE_DATA_PACKET, &node_request);
		assert_eq!(1, io.packets.len());
	}

	#[test]
	fn return_receipts_empty() {
		let mut client = TestBlockChainClient::new();
		let queue = RwLock::new(VecDeque::new());
		let ss = TestSnapshotService::new();
		let io = TestIo::new(&mut client, &ss, &queue, None);

		let result = SyncSupplier::return_receipts(&io, &Rlp::new(&[0xc0]), 0);

		assert!(result.is_ok());
	}

	#[test]
	fn return_receipts() {
		let mut client = TestBlockChainClient::new();
		let queue = RwLock::new(VecDeque::new());
		let sync = dummy_sync_with_peer(H256::new(), &client);
		let ss = TestSnapshotService::new();
		let mut io = TestIo::new(&mut client, &ss, &queue, None);

		let mut receipt_list = RlpStream::new_list(4);
		receipt_list.append(&H256::from("0000000000000000000000000000000000000000000000005555555555555555"));
		receipt_list.append(&H256::from("ff00000000000000000000000000000000000000000000000000000000000000"));
		receipt_list.append(&H256::from("fff0000000000000000000000000000000000000000000000000000000000000"));
		receipt_list.append(&H256::from("aff0000000000000000000000000000000000000000000000000000000000000"));

		let receipts_request = receipt_list.out();
		// it returns rlp ONLY for hashes starting with "f"
		let result = SyncSupplier::return_receipts(&io, &Rlp::new(&receipts_request.clone()), 0);

		assert!(result.is_ok());
		let rlp_result = result.unwrap();
		assert!(rlp_result.is_some());

		// the length of two rlp-encoded receipts
		assert_eq!(603, rlp_result.unwrap().1.out().len());

		io.sender = Some(2usize);
		ChainSync::dispatch_packet(&RwLock::new(sync), &mut io, 0usize, GET_RECEIPTS_PACKET, &receipts_request);
		assert_eq!(1, io.packets.len());
	}
}
@@ -54,6 +54,8 @@ extern crate macros;
extern crate log;
#[macro_use]
extern crate heapsize;
#[macro_use]
extern crate trace_time;

mod chain;
mod blocks;
@ -16,13 +16,11 @@
|
||||
|
||||
//! Helpers for decoding and verifying responses for headers.
|
||||
|
||||
use std::fmt;
|
||||
|
||||
use ethcore::encoded;
|
||||
use ethcore::header::Header;
|
||||
use ethcore::{encoded, header::Header};
|
||||
use ethereum_types::H256;
|
||||
use light::request::{HashOrNumber, CompleteHeadersRequest as HeadersRequest};
|
||||
use rlp::DecoderError;
|
||||
use ethereum_types::H256;
|
||||
use std::fmt;
|
||||
|
||||
/// Errors found when decoding headers and verifying with basic constraints.
|
||||
#[derive(Debug, PartialEq)]
|
||||
@ -74,19 +72,23 @@ pub trait Constraint {
|
||||
|
||||
/// Do basic verification of provided headers against a request.
|
||||
pub fn verify(headers: &[encoded::Header], request: &HeadersRequest) -> Result<Vec<Header>, BasicError> {
|
||||
let headers: Vec<_> = headers.iter().map(|h| h.decode()).collect();
|
||||
let headers: Result<Vec<_>, _> = headers.iter().map(|h| h.decode() ).collect();
|
||||
match headers {
|
||||
Ok(headers) => {
|
||||
let reverse = request.reverse;
|
||||
|
||||
let reverse = request.reverse;
|
||||
Max(request.max as usize).verify(&headers, reverse)?;
|
||||
match request.start {
|
||||
HashOrNumber::Number(ref num) => StartsAtNumber(*num).verify(&headers, reverse)?,
|
||||
HashOrNumber::Hash(ref hash) => StartsAtHash(*hash).verify(&headers, reverse)?,
|
||||
}
|
||||
|
||||
Max(request.max as usize).verify(&headers, reverse)?;
|
||||
match request.start {
|
||||
HashOrNumber::Number(ref num) => StartsAtNumber(*num).verify(&headers, reverse)?,
|
||||
HashOrNumber::Hash(ref hash) => StartsAtHash(*hash).verify(&headers, reverse)?,
|
||||
SkipsBetween(request.skip).verify(&headers, reverse)?;
|
||||
|
||||
Ok(headers)
|
||||
},
|
||||
Err(e) => Err(e.into())
|
||||
}
|
||||
|
||||
SkipsBetween(request.skip).verify(&headers, reverse)?;
|
||||
|
||||
Ok(headers)
|
||||
}
|
||||
|
||||
struct StartsAtNumber(u64);
|
||||
|
@ -45,7 +45,7 @@ fn fork_post_cht() {
|
||||
for id in (0..CHAIN_LENGTH).map(|x| x + 1).map(BlockId::Number) {
|
||||
let (light_peer, full_peer) = (net.peer(0), net.peer(1));
|
||||
let light_chain = light_peer.light_chain();
|
||||
let header = full_peer.chain().block_header(id).unwrap().decode();
|
||||
let header = full_peer.chain().block_header(id).unwrap().decode().expect("decoding failure");
|
||||
let _ = light_chain.import_header(header);
|
||||
light_chain.flush_queue();
|
||||
light_chain.import_verified();
|
||||
|
@ -133,11 +133,11 @@ impl<'p, C> SyncIo for TestIo<'p, C> where C: FlushingBlockChainClient, C: 'p {
|
||||
}
|
||||
|
||||
fn eth_protocol_version(&self, _peer: PeerId) -> u8 {
|
||||
ETH_PROTOCOL_VERSION_63
|
||||
ETH_PROTOCOL_VERSION_63.0
|
||||
}
|
||||
|
||||
fn protocol_version(&self, protocol: &ProtocolId, peer_id: PeerId) -> u8 {
|
||||
if protocol == &WARP_SYNC_PROTOCOL_ID { PAR_PROTOCOL_VERSION_3 } else { self.eth_protocol_version(peer_id) }
|
||||
if protocol == &WARP_SYNC_PROTOCOL_ID { PAR_PROTOCOL_VERSION_3.0 } else { self.eth_protocol_version(peer_id) }
|
||||
}
|
||||
|
||||
fn chain_overlay(&self) -> &RwLock<HashMap<BlockNumber, Bytes>> {
|
||||
@ -519,11 +519,9 @@ impl TestIoHandler {
|
||||
impl IoHandler<ClientIoMessage> for TestIoHandler {
|
||||
fn message(&self, _io: &IoContext<ClientIoMessage>, net_message: &ClientIoMessage) {
|
||||
match *net_message {
|
||||
ClientIoMessage::NewMessage(ref message) => if let Err(e) = self.client.engine().handle_message(message) {
|
||||
panic!("Invalid message received: {}", e);
|
||||
},
|
||||
ClientIoMessage::NewPrivateTransaction => {
|
||||
ClientIoMessage::Execute(ref exec) => {
|
||||
*self.private_tx_queued.lock() += 1;
|
||||
(*exec.0)(&self.client);
|
||||
},
|
||||
_ => {} // ignore other messages
|
||||
}
|
||||
|
@ -24,7 +24,7 @@ use ethcore::CreateContractAddress;
|
||||
use transaction::{Transaction, Action};
|
||||
use ethcore::executive::{contract_address};
|
||||
use ethcore::test_helpers::{push_block_with_transactions};
|
||||
use ethcore_private_tx::{Provider, ProviderConfig, NoopEncryptor};
|
||||
use ethcore_private_tx::{Provider, ProviderConfig, NoopEncryptor, Importer};
|
||||
use ethcore::account_provider::AccountProvider;
|
||||
use ethkey::{KeyPair};
|
||||
use tests::helpers::{TestNet, TestIoHandler};
|
||||
@ -84,7 +84,7 @@ fn send_private_transaction() {
|
||||
Box::new(NoopEncryptor::default()),
|
||||
signer_config,
|
||||
IoChannel::to_handler(Arc::downgrade(&io_handler0)),
|
||||
).unwrap());
|
||||
));
|
||||
pm0.add_notify(net.peers[0].clone());
|
||||
|
||||
let pm1 = Arc::new(Provider::new(
|
||||
@ -94,7 +94,7 @@ fn send_private_transaction() {
|
||||
Box::new(NoopEncryptor::default()),
|
||||
validator_config,
|
||||
IoChannel::to_handler(Arc::downgrade(&io_handler1)),
|
||||
).unwrap());
|
||||
));
|
||||
pm1.add_notify(net.peers[1].clone());
|
||||
|
||||
// Create and deploy contract
|
||||
@ -133,7 +133,6 @@ fn send_private_transaction() {
|
||||
//process received private transaction message
|
||||
let private_transaction = received_private_transactions[0].clone();
|
||||
assert!(pm1.import_private_transaction(&private_transaction).is_ok());
|
||||
assert!(pm1.on_private_transaction_queued().is_ok());
|
||||
|
||||
//send signed response
|
||||
net.sync();
|
||||
@ -147,4 +146,4 @@ fn send_private_transaction() {
|
||||
assert!(pm0.import_signed_private_transaction(&signed_private_transaction).is_ok());
|
||||
let local_transactions = net.peer(0).miner.local_transactions();
|
||||
assert_eq!(local_transactions.len(), 1);
|
||||
}
|
||||
}
|
||||
|
@ -22,7 +22,7 @@ use parking_lot::Mutex;
|
||||
use bytes::Bytes;
|
||||
use ethcore::snapshot::{SnapshotService, ManifestData, RestorationStatus};
|
||||
use ethcore::header::BlockNumber;
|
||||
use ethcore::client::{EachBlockWith};
|
||||
use ethcore::client::EachBlockWith;
|
||||
use super::helpers::*;
|
||||
use {SyncConfig, WarpSync};
|
||||
|
||||
@ -99,7 +99,15 @@ impl SnapshotService for TestSnapshotService {
|
||||
}
|
||||
|
||||
fn begin_restore(&self, manifest: ManifestData) {
|
||||
*self.restoration_manifest.lock() = Some(manifest);
|
||||
let mut restoration_manifest = self.restoration_manifest.lock();
|
||||
|
||||
if let Some(ref c_manifest) = *restoration_manifest {
|
||||
if c_manifest.state_root == manifest.state_root {
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
*restoration_manifest = Some(manifest);
|
||||
self.state_restoration_chunks.lock().clear();
|
||||
self.block_restoration_chunks.lock().clear();
|
||||
}
|
||||
|
@ -18,6 +18,7 @@ use std::{fmt, error};
|
||||
|
||||
use ethereum_types::U256;
|
||||
use ethkey;
|
||||
use rlp;
|
||||
use unexpected::OutOfBounds;
|
||||
|
||||
#[derive(Debug, PartialEq, Clone)]
|
||||
@ -74,6 +75,10 @@ pub enum Error {
|
||||
NotAllowed,
|
||||
/// Signature error
|
||||
InvalidSignature(String),
|
||||
/// Transaction too big
|
||||
TooBig,
|
||||
/// Invalid RLP encoding
|
||||
InvalidRlp(String),
|
||||
}
|
||||
|
||||
impl From<ethkey::Error> for Error {
|
||||
@ -82,6 +87,12 @@ impl From<ethkey::Error> for Error {
|
||||
}
|
||||
}
|
||||
|
||||
impl From<rlp::DecoderError> for Error {
|
||||
fn from(err: rlp::DecoderError) -> Self {
|
||||
Error::InvalidRlp(format!("{}", err))
|
||||
}
|
||||
}
|
||||
|
||||
impl fmt::Display for Error {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
use self::Error::*;
|
||||
@ -106,6 +117,8 @@ impl fmt::Display for Error {
|
||||
InvalidChainId => "Transaction of this chain ID is not allowed on this chain.".into(),
|
||||
InvalidSignature(ref err) => format!("Transaction has invalid signature: {}.", err),
|
||||
NotAllowed => "Sender does not have permissions to execute this type of transction".into(),
|
||||
TooBig => "Transaction too big".into(),
|
||||
InvalidRlp(ref err) => format!("Transaction has invalid RLP structure: {}.", err),
|
||||
};
|
||||
|
||||
f.write_fmt(format_args!("Transaction error ({})", msg))
|
||||
|
@ -576,7 +576,8 @@ mod tests {
|
||||
|
||||
#[test]
|
||||
fn sender_test() {
|
||||
let t: UnverifiedTransaction = rlp::decode(&::rustc_hex::FromHex::from_hex("f85f800182520894095e7baea6a6c7c4c2dfeb977efac326af552d870a801ba048b55bfa915ac795c431978d8a6a992b628d557da5ff759b307d495a36649353a0efffd310ac743f371de3b9f7f9cb56c0b28ad43601b4ab949f53faa07bd2c804").unwrap());
|
||||
let bytes = ::rustc_hex::FromHex::from_hex("f85f800182520894095e7baea6a6c7c4c2dfeb977efac326af552d870a801ba048b55bfa915ac795c431978d8a6a992b628d557da5ff759b307d495a36649353a0efffd310ac743f371de3b9f7f9cb56c0b28ad43601b4ab949f53faa07bd2c804").unwrap();
|
||||
let t: UnverifiedTransaction = rlp::decode(&bytes).expect("decoding UnverifiedTransaction failed");
|
||||
assert_eq!(t.data, b"");
|
||||
assert_eq!(t.gas, U256::from(0x5208u64));
|
||||
assert_eq!(t.gas_price, U256::from(0x01u64));
|
||||
@ -645,7 +646,7 @@ mod tests {
|
||||
use rustc_hex::FromHex;
|
||||
|
||||
let test_vector = |tx_data: &str, address: &'static str| {
|
||||
let signed = rlp::decode(&FromHex::from_hex(tx_data).unwrap());
|
||||
let signed = rlp::decode(&FromHex::from_hex(tx_data).unwrap()).expect("decoding tx data failed");
|
||||
let signed = SignedTransaction::new(signed).unwrap();
|
||||
assert_eq!(signed.sender(), address.into());
|
||||
println!("chainid: {:?}", signed.chain_id());
|
||||
|
@ -193,7 +193,7 @@ mod tests {
|
||||
);
|
||||
let encoded = ::rlp::encode(&r);
|
||||
assert_eq!(&encoded[..], &expected[..]);
|
||||
let decoded: Receipt = ::rlp::decode(&encoded);
|
||||
let decoded: Receipt = ::rlp::decode(&encoded).expect("decoding receipt failed");
|
||||
assert_eq!(decoded, r);
|
||||
}
|
||||
|
||||
@ -211,7 +211,7 @@ mod tests {
|
||||
);
|
||||
let encoded = ::rlp::encode(&r);
|
||||
assert_eq!(&encoded[..], &expected[..]);
|
||||
let decoded: Receipt = ::rlp::decode(&encoded);
|
||||
let decoded: Receipt = ::rlp::decode(&encoded).expect("decoding receipt failed");
|
||||
assert_eq!(decoded, r);
|
||||
}
|
||||
}
|
||||
|
@ -64,7 +64,7 @@ mod tests {
|
||||
fn should_encode_and_decode_call_type() {
|
||||
let original = CallType::Call;
|
||||
let encoded = encode(&original);
|
||||
let decoded = decode(&encoded);
|
||||
let decoded = decode(&encoded).expect("failure decoding CallType");
|
||||
assert_eq!(original, decoded);
|
||||
}
|
||||
}
|
||||
|
@ -113,6 +113,9 @@ pub struct Params {
|
||||
/// See main EthashParams docs.
|
||||
#[serde(rename="maxCodeSize")]
|
||||
pub max_code_size: Option<Uint>,
|
||||
/// Maximum size of transaction RLP payload.
|
||||
#[serde(rename="maxTransactionSize")]
|
||||
pub max_transaction_size: Option<Uint>,
|
||||
/// See main EthashParams docs.
|
||||
#[serde(rename="maxCodeSizeTransition")]
|
||||
pub max_code_size_transition: Option<Uint>,
|
||||
|
@ -28,6 +28,7 @@ log = "0.3"
|
||||
parking_lot = "0.5"
|
||||
price-info = { path = "../price-info" }
|
||||
rayon = "1.0"
|
||||
rlp = { path = "../util/rlp" }
|
||||
trace-time = { path = "../util/trace-time" }
|
||||
transaction-pool = { path = "../transaction-pool" }
|
||||
|
||||
|
@ -30,6 +30,7 @@ extern crate linked_hash_map;
|
||||
extern crate parking_lot;
|
||||
extern crate price_info;
|
||||
extern crate rayon;
|
||||
extern crate rlp;
|
||||
extern crate trace_time;
|
||||
extern crate transaction_pool as txpool;
|
||||
|
||||
|
@ -62,6 +62,10 @@ pub trait Client: fmt::Debug + Sync {
|
||||
|
||||
/// Classify transaction (check if transaction is filtered by some contracts).
|
||||
fn transaction_type(&self, tx: &transaction::SignedTransaction) -> TransactionType;
|
||||
|
||||
/// Performs pre-validation of RLP decoded transaction
|
||||
fn decode_transaction(&self, transaction: &[u8])
|
||||
-> Result<transaction::UnverifiedTransaction, transaction::Error>;
|
||||
}
|
||||
|
||||
/// State nonce client
|
||||
|
miner/src/pool/res/big_transaction.data (new file; diff suppressed because one or more lines are too long)
@ -15,17 +15,21 @@
|
||||
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
use ethereum_types::{U256, H256, Address};
|
||||
use rlp::Rlp;
|
||||
use transaction::{self, Transaction, SignedTransaction, UnverifiedTransaction};
|
||||
|
||||
use pool;
|
||||
use pool::client::AccountDetails;
|
||||
|
||||
const MAX_TRANSACTION_SIZE: usize = 15 * 1024;
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct TestClient {
|
||||
account_details: AccountDetails,
|
||||
gas_required: U256,
|
||||
is_service_transaction: bool,
|
||||
local_address: Address,
|
||||
max_transaction_size: usize,
|
||||
}
|
||||
|
||||
impl Default for TestClient {
|
||||
@ -39,6 +43,7 @@ impl Default for TestClient {
|
||||
gas_required: 21_000.into(),
|
||||
is_service_transaction: false,
|
||||
local_address: Default::default(),
|
||||
max_transaction_size: MAX_TRANSACTION_SIZE,
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -116,6 +121,15 @@ impl pool::client::Client for TestClient {
|
||||
pool::client::TransactionType::Regular
|
||||
}
|
||||
}
|
||||
|
||||
fn decode_transaction(&self, transaction: &[u8]) -> Result<UnverifiedTransaction, transaction::Error> {
|
||||
let rlp = Rlp::new(&transaction);
|
||||
if rlp.as_raw().len() > self.max_transaction_size {
|
||||
return Err(transaction::Error::TooBig)
|
||||
}
|
||||
rlp.as_val().map_err(|e| transaction::Error::InvalidRlp(e.to_string()))
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
impl pool::client::NonceClient for TestClient {
|
||||
|
@ -63,8 +63,10 @@ fn should_return_correct_nonces_when_dropped_because_of_limit() {
|
||||
let nonce = tx1.nonce;
|
||||
|
||||
// when
|
||||
let result = txq.import(TestClient::new(), vec![tx1, tx2].local());
|
||||
assert_eq!(result, vec![Ok(()), Err(transaction::Error::LimitReached)]);
|
||||
let r1= txq.import(TestClient::new(), vec![tx1].local());
|
||||
let r2= txq.import(TestClient::new(), vec![tx2].local());
|
||||
assert_eq!(r1, vec![Ok(())]);
|
||||
assert_eq!(r2, vec![Err(transaction::Error::LimitReached)]);
|
||||
assert_eq!(txq.status().status.transaction_count, 1);
|
||||
|
||||
// then
|
||||
@ -755,3 +757,13 @@ fn should_clear_cache_after_timeout_for_local() {
|
||||
// then
|
||||
assert_eq!(txq.pending(TestClient::new(), 0, 1002, None).len(), 2);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn should_reject_big_transaction() {
|
||||
let txq = new_queue();
|
||||
let big_tx = Tx::default().big_one();
|
||||
let res = txq.import(TestClient::new(), vec![
|
||||
verifier::Transaction::Local(PendingTransaction::new(big_tx, transaction::Condition::Timestamp(1000).into()))
|
||||
]);
|
||||
assert_eq!(res, vec![Err(transaction::Error::TooBig)]);
|
||||
}
|
@ -87,6 +87,19 @@ impl Tx {
|
||||
nonce: self.nonce.into()
|
||||
}
|
||||
}
|
||||
|
||||
pub fn big_one(self) -> SignedTransaction {
|
||||
let keypair = Random.generate().unwrap();
|
||||
let tx = Transaction {
|
||||
action: transaction::Action::Create,
|
||||
value: U256::from(100),
|
||||
data: include_str!("../res/big_transaction.data").from_hex().unwrap(),
|
||||
gas: self.gas.into(),
|
||||
gas_price: self.gas_price.into(),
|
||||
nonce: self.nonce.into()
|
||||
};
|
||||
tx.sign(keypair.secret(), None)
|
||||
}
|
||||
}
|
||||
pub trait TxExt: Sized {
|
||||
type Out;
|
||||
|
@ -27,6 +27,7 @@ use std::sync::Arc;
|
||||
use std::sync::atomic::{self, AtomicUsize};
|
||||
|
||||
use ethereum_types::{U256, H256};
|
||||
use rlp::Encodable;
|
||||
use transaction;
|
||||
use txpool;
|
||||
|
||||
@ -222,6 +223,12 @@ impl<C: Client> txpool::Verifier<Transaction> for Verifier<C> {
|
||||
Transaction::Local(tx) => tx,
|
||||
};
|
||||
|
||||
// Verify RLP payload
|
||||
if let Err(err) = self.client.decode_transaction(&transaction.rlp_bytes()) {
|
||||
debug!(target: "txqueue", "[{:?}] Rejected transaction's rlp payload", err);
|
||||
bail!(err)
|
||||
}
|
||||
|
||||
let sender = transaction.sender();
|
||||
let account_details = self.client.account_details(&sender);
|
||||
|
||||
|
@ -94,7 +94,7 @@ fn new(n: NewAccount) -> Result<String, String> {
|
||||
let secret_store = Box::new(secret_store(dir, Some(n.iterations))?);
|
||||
let acc_provider = AccountProvider::new(secret_store, AccountProviderSettings::default());
|
||||
let new_account = acc_provider.new_account(&password).map_err(|e| format!("Could not create new account: {}", e))?;
|
||||
Ok(format!("0x{:?}", new_account))
|
||||
Ok(format!("0x{:x}", new_account))
|
||||
}
|
||||
|
||||
fn list(list_cmd: ListAccounts) -> Result<String, String> {
|
||||
@ -103,7 +103,7 @@ fn list(list_cmd: ListAccounts) -> Result<String, String> {
|
||||
let acc_provider = AccountProvider::new(secret_store, AccountProviderSettings::default());
|
||||
let accounts = acc_provider.accounts().map_err(|e| format!("{}", e))?;
|
||||
let result = accounts.into_iter()
|
||||
.map(|a| format!("0x{:?}", a))
|
||||
.map(|a| format!("0x{:x}", a))
|
||||
.collect::<Vec<String>>()
|
||||
.join("\n");
|
||||
|
||||
|
@ -89,7 +89,6 @@ pub fn setup(target_pool_size: usize, protos: &mut Vec<AttachedProtocol>)
|
||||
|
||||
protos.push(AttachedProtocol {
|
||||
handler: net.clone() as Arc<_>,
|
||||
packet_count: whisper_net::PACKET_COUNT,
|
||||
versions: whisper_net::SUPPORTED_VERSIONS,
|
||||
protocol_id: whisper_net::PROTOCOL_ID,
|
||||
});
|
||||
@ -97,7 +96,6 @@ pub fn setup(target_pool_size: usize, protos: &mut Vec<AttachedProtocol>)
|
||||
// parity-only extensions to whisper.
|
||||
protos.push(AttachedProtocol {
|
||||
handler: Arc::new(whisper_net::ParityExtensions),
|
||||
packet_count: whisper_net::PACKET_COUNT,
|
||||
versions: whisper_net::SUPPORTED_VERSIONS,
|
||||
protocol_id: whisper_net::PARITY_PROTOCOL_ID,
|
||||
});
|
||||
|
@ -338,6 +338,8 @@ pub fn transaction_message(error: &TransactionError) -> String {
|
||||
RecipientBanned => "Recipient is banned in local queue.".into(),
|
||||
CodeBanned => "Code is banned in local queue.".into(),
|
||||
NotAllowed => "Transaction is not permitted.".into(),
|
||||
TooBig => "Transaction is too big, see chain specification for the limit.".into(),
|
||||
InvalidRlp(ref descr) => format!("Invalid RLP data: {}", descr),
|
||||
}
|
||||
}
|
||||
|
||||
@ -358,6 +360,19 @@ pub fn transaction<T: Into<EthcoreError>>(error: T) -> Error {
|
||||
}
|
||||
}
|
||||
|
||||
pub fn decode<T: Into<EthcoreError>>(error: T) -> Error {
|
||||
let error = error.into();
|
||||
match *error.kind() {
|
||||
ErrorKind::Decoder(ref dec_err) => rlp(dec_err.clone()),
|
||||
_ => Error {
|
||||
code: ErrorCode::InternalError,
|
||||
message: "decoding error".into(),
|
||||
data: None,
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
pub fn rlp(error: DecoderError) -> Error {
|
||||
Error {
|
||||
code: ErrorCode::InvalidParams,
|
||||
|
@ -343,7 +343,10 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> EthClient<C, SN, S
|
||||
let uncle_id = UncleId { block: block_id, position };
|
||||
|
||||
let uncle = match client.uncle(uncle_id) {
|
||||
Some(hdr) => hdr.decode(),
|
||||
Some(hdr) => match hdr.decode() {
|
||||
Ok(h) => h,
|
||||
Err(e) => return Err(errors::decode(e))
|
||||
},
|
||||
None => { return Ok(None); }
|
||||
};
|
||||
|
||||
@ -851,9 +854,9 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<
|
||||
};
|
||||
|
||||
let state = try_bf!(self.client.state_at(id).ok_or(errors::state_pruned()));
|
||||
let header = try_bf!(self.client.block_header(id).ok_or(errors::state_pruned()));
|
||||
let header = try_bf!(self.client.block_header(id).ok_or(errors::state_pruned()).and_then(|h| h.decode().map_err(errors::decode)));
|
||||
|
||||
(state, header.decode())
|
||||
(state, header)
|
||||
};
|
||||
|
||||
let result = self.client.call(&signed, Default::default(), &mut state, &header);
|
||||
@ -890,9 +893,9 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<
|
||||
};
|
||||
|
||||
let state = try_bf!(self.client.state_at(id).ok_or(errors::state_pruned()));
|
||||
let header = try_bf!(self.client.block_header(id).ok_or(errors::state_pruned()));
|
||||
let header = try_bf!(self.client.block_header(id).ok_or(errors::state_pruned()).and_then(|h| h.decode().map_err(errors::decode)));
|
||||
|
||||
(state, header.decode())
|
||||
(state, header)
|
||||
};
|
||||
|
||||
Box::new(future::done(self.client.estimate_gas(&signed, &state, &header)
|
||||
|
@ -371,7 +371,7 @@ impl<T: LightChainClient + 'static> Eth for EthClient<T> {
|
||||
}
|
||||
|
||||
fn send_raw_transaction(&self, raw: Bytes) -> Result<RpcH256> {
|
||||
let best_header = self.client.best_block_header().decode();
|
||||
let best_header = self.client.best_block_header().decode().map_err(errors::decode)?;
|
||||
|
||||
Rlp::new(&raw.into_vec()).as_val()
|
||||
.map_err(errors::rlp)
|
||||
|
@ -395,9 +395,9 @@ impl Parity for ParityClient {
|
||||
|
||||
let engine = self.light_dispatch.client.engine().clone();
|
||||
let from_encoded = move |encoded: encoded::Header| {
|
||||
let header = encoded.decode();
|
||||
let header = encoded.decode().map_err(errors::decode)?;
|
||||
let extra_info = engine.extra_info(&header);
|
||||
RichHeader {
|
||||
Ok(RichHeader {
|
||||
inner: Header {
|
||||
hash: Some(header.hash().into()),
|
||||
size: Some(encoded.rlp().as_raw().len().into()),
|
||||
@ -418,9 +418,8 @@ impl Parity for ParityClient {
|
||||
extra_data: Bytes::new(header.extra_data().clone()),
|
||||
},
|
||||
extra_info: extra_info,
|
||||
}
|
||||
})
|
||||
};
|
||||
|
||||
// Note: Here we treat `Pending` as `Latest`.
|
||||
// Since light clients don't produce pending blocks
|
||||
// (they don't have state) we can safely fallback to `Latest`.
|
||||
@ -430,7 +429,7 @@ impl Parity for ParityClient {
|
||||
BlockNumber::Latest | BlockNumber::Pending => BlockId::Latest,
|
||||
};
|
||||
|
||||
Box::new(self.fetcher().header(id).map(from_encoded))
|
||||
Box::new(self.fetcher().header(id).and_then(from_encoded))
|
||||
}
|
||||
|
||||
fn ipfs_cid(&self, content: Bytes) -> Result<String> {
|
||||
|
@ -487,9 +487,9 @@ impl<C, M, U, S> Parity for ParityClient<C, M, U> where
|
||||
};
|
||||
|
||||
let state = self.client.state_at(id).ok_or(errors::state_pruned())?;
|
||||
let header = self.client.block_header(id).ok_or(errors::state_pruned())?;
|
||||
let header = self.client.block_header(id).ok_or(errors::state_pruned())?.decode().map_err(errors::decode)?;
|
||||
|
||||
(state, header.decode())
|
||||
(state, header)
|
||||
};
|
||||
|
||||
self.client.call_many(&requests, &mut state, &header)
|
||||
|
@ -104,7 +104,7 @@ impl<C, S> Traces for TracesClient<C> where
|
||||
let mut state = self.client.state_at(id).ok_or(errors::state_pruned())?;
|
||||
let header = self.client.block_header(id).ok_or(errors::state_pruned())?;
|
||||
|
||||
self.client.call(&signed, to_call_analytics(flags), &mut state, &header.decode())
|
||||
self.client.call(&signed, to_call_analytics(flags), &mut state, &header.decode().map_err(errors::decode)?)
|
||||
.map(TraceResults::from)
|
||||
.map_err(errors::call)
|
||||
}
|
||||
@ -131,7 +131,7 @@ impl<C, S> Traces for TracesClient<C> where
|
||||
let mut state = self.client.state_at(id).ok_or(errors::state_pruned())?;
|
||||
let header = self.client.block_header(id).ok_or(errors::state_pruned())?;
|
||||
|
||||
self.client.call_many(&requests, &mut state, &header.decode())
|
||||
self.client.call_many(&requests, &mut state, &header.decode().map_err(errors::decode)?)
|
||||
.map(|results| results.into_iter().map(TraceResults::from).collect())
|
||||
.map_err(errors::call)
|
||||
}
|
||||
@ -153,7 +153,7 @@ impl<C, S> Traces for TracesClient<C> where
|
||||
let mut state = self.client.state_at(id).ok_or(errors::state_pruned())?;
|
||||
let header = self.client.block_header(id).ok_or(errors::state_pruned())?;
|
||||
|
||||
self.client.call(&signed, to_call_analytics(flags), &mut state, &header.decode())
|
||||
self.client.call(&signed, to_call_analytics(flags), &mut state, &header.decode().map_err(errors::decode)?)
|
||||
.map(TraceResults::from)
|
||||
.map_err(errors::call)
|
||||
}
|
||||
|
@ -566,7 +566,8 @@ fn rpc_eth_pending_transaction_by_hash() {
|
||||
|
||||
let tester = EthTester::default();
|
||||
{
|
||||
let tx = rlp::decode(&FromHex::from_hex("f85f800182520894095e7baea6a6c7c4c2dfeb977efac326af552d870a801ba048b55bfa915ac795c431978d8a6a992b628d557da5ff759b307d495a36649353a0efffd310ac743f371de3b9f7f9cb56c0b28ad43601b4ab949f53faa07bd2c804").unwrap());
|
||||
let bytes = FromHex::from_hex("f85f800182520894095e7baea6a6c7c4c2dfeb977efac326af552d870a801ba048b55bfa915ac795c431978d8a6a992b628d557da5ff759b307d495a36649353a0efffd310ac743f371de3b9f7f9cb56c0b28ad43601b4ab949f53faa07bd2c804").unwrap();
|
||||
let tx = rlp::decode(&bytes).expect("decoding failure");
|
||||
let tx = SignedTransaction::new(tx).unwrap();
|
||||
tester.miner.pending_transactions.lock().insert(H256::zero(), tx);
|
||||
}
|
||||
|
@ -62,13 +62,16 @@ build () {
|
||||
cargo build --target $PLATFORM --release -p ethstore-cli
|
||||
echo "Build ethkey-cli:"
|
||||
cargo build --target $PLATFORM --release -p ethkey-cli
|
||||
echo "Build whisper-cli:"
|
||||
cargo build --target $PLATFORM --release -p whisper-cli
|
||||
}
|
||||
strip_binaries () {
|
||||
echo "Strip binaries:"
|
||||
$STRIP_BIN -v target/$PLATFORM/release/parity
|
||||
$STRIP_BIN -v target/$PLATFORM/release/parity-evm
|
||||
$STRIP_BIN -v target/$PLATFORM/release/ethstore
|
||||
$STRIP_BIN -v target/$PLATFORM/release/ethkey;
|
||||
$STRIP_BIN -v target/$PLATFORM/release/ethkey
|
||||
$STRIP_BIN -v target/$PLATFORM/release/whisper;
|
||||
}
|
||||
calculate_checksums () {
|
||||
echo "Checksum calculation:"
|
||||
@ -89,6 +92,8 @@ calculate_checksums () {
|
||||
$SHA256_BIN target/$PLATFORM/release/ethstore$S3WIN > ethstore$S3WIN.sha256
|
||||
$MD5_BIN target/$PLATFORM/release/ethkey$S3WIN > ethkey$S3WIN.md5
|
||||
$SHA256_BIN target/$PLATFORM/release/ethkey$S3WIN > ethkey$S3WIN.sha256
|
||||
$MD5_BIN target/$PLATFORM/release/whisper$S3WIN > whisper$S3WIN.md5
|
||||
$SHA256_BIN target/$PLATFORM/release/whisper$S3WIN > whisper$S3WIN.sha256
|
||||
}
|
||||
make_deb () {
|
||||
rm -rf deb
|
||||
@ -122,6 +127,7 @@ make_deb () {
|
||||
cp target/$PLATFORM/release/parity-evm deb/usr/bin/parity-evm
|
||||
cp target/$PLATFORM/release/ethstore deb/usr/bin/ethstore
|
||||
cp target/$PLATFORM/release/ethkey deb/usr/bin/ethkey
|
||||
cp target/$PLATFORM/release/whisper deb/usr/bin/whisper
|
||||
dpkg-deb -b deb "parity_"$VER"_"$IDENT"_"$ARC".deb"
|
||||
$MD5_BIN "parity_"$VER"_"$IDENT"_"$ARC".deb" > "parity_"$VER"_"$IDENT"_"$ARC".deb.md5"
|
||||
$SHA256_BIN "parity_"$VER"_"$IDENT"_"$ARC".deb" > "parity_"$VER"_"$IDENT"_"$ARC".deb.sha256"
|
||||
@ -133,6 +139,7 @@ make_rpm () {
|
||||
cp target/$PLATFORM/release/parity-evm /install/usr/bin/parity-evm
|
||||
cp target/$PLATFORM/release/ethstore /install/usr/bin/ethstore
|
||||
cp target/$PLATFORM/release/ethkey /install/usr/bin/ethkey
|
||||
cp target/$PLATFORM/release/whisper /install/usr/bin/whisper
|
||||
|
||||
rm -rf "parity-"$VER"-1."$ARC".rpm" || true
|
||||
fpm -s dir -t rpm -n parity -v $VER --epoch 1 --license GPLv3 -d openssl --provides parity --url https://parity.io --vendor "Parity Technologies" -a x86_64 -m "<devops@parity.io>" --description "Ethereum network client by Parity Technologies" -C /install/
|
||||
@ -146,6 +153,7 @@ make_pkg () {
|
||||
cp target/$PLATFORM/release/parity-evm target/release/parity-evm
|
||||
cp target/$PLATFORM/release/ethstore target/release/ethstore
|
||||
cp target/$PLATFORM/release/ethkey target/release/ethkey
|
||||
cp target/$PLATFORM/release/whisper target/release/whisper
|
||||
cd mac
|
||||
xcodebuild -configuration Release
|
||||
cd ..
|
||||
@ -194,6 +202,9 @@ push_binaries () {
|
||||
aws s3api put-object --bucket $S3_BUCKET --key $CI_BUILD_REF_NAME/$BUILD_PLATFORM/ethkey$S3WIN --body target/$PLATFORM/release/ethkey$S3WIN
|
||||
aws s3api put-object --bucket $S3_BUCKET --key $CI_BUILD_REF_NAME/$BUILD_PLATFORM/ethkey$S3WIN.md5 --body ethkey$S3WIN.md5
|
||||
aws s3api put-object --bucket $S3_BUCKET --key $CI_BUILD_REF_NAME/$BUILD_PLATFORM/ethkey$S3WIN.sha256 --body ethkey$S3WIN.sha256
|
||||
aws s3api put-object --bucket $S3_BUCKET --key $CI_BUILD_REF_NAME/$BUILD_PLATFORM/whisper$S3WIN --body target/$PLATFORM/release/whisper$S3WIN
|
||||
aws s3api put-object --bucket $S3_BUCKET --key $CI_BUILD_REF_NAME/$BUILD_PLATFORM/whisper$S3WIN.md5 --body whisper$S3WIN.md5
|
||||
aws s3api put-object --bucket $S3_BUCKET --key $CI_BUILD_REF_NAME/$BUILD_PLATFORM/whisper$S3WIN.sha256 --body whisper$S3WIN.sha256
|
||||
aws s3api put-object --bucket $S3_BUCKET --key $CI_BUILD_REF_NAME/$BUILD_PLATFORM/"parity_"$VER"_"$IDENT"_"$ARC"."$EXT --body "parity_"$VER"_"$IDENT"_"$ARC"."$EXT
|
||||
aws s3api put-object --bucket $S3_BUCKET --key $CI_BUILD_REF_NAME/$BUILD_PLATFORM/"parity_"$VER"_"$IDENT"_"$ARC"."$EXT".md5" --body "parity_"$VER"_"$IDENT"_"$ARC"."$EXT".md5"
|
||||
aws s3api put-object --bucket $S3_BUCKET --key $CI_BUILD_REF_NAME/$BUILD_PLATFORM/"parity_"$VER"_"$IDENT"_"$ARC"."$EXT".sha256" --body "parity_"$VER"_"$IDENT"_"$ARC"."$EXT".sha256"
|
||||
@ -201,7 +212,7 @@ push_binaries () {
|
||||
make_archive () {
|
||||
echo "add artifacts to archive"
|
||||
rm -rf parity.zip
|
||||
zip -r parity.zip target/$PLATFORM/release/parity$S3WIN target/$PLATFORM/release/parity-evm$S3WIN target/$PLATFORM/release/ethstore$S3WIN target/$PLATFORM/release/ethkey$S3WIN parity$S3WIN.md5 parity-evm$S3WIN.md5 ethstore$S3WIN.md5 ethkey$S3WIN.md5 parity$S3WIN.sha256 parity-evm$S3WIN.sha256 ethstore$S3WIN.sha256 ethkey$S3WIN.sha256
|
||||
zip -r parity.zip target/$PLATFORM/release/parity$S3WIN target/$PLATFORM/release/parity-evm$S3WIN target/$PLATFORM/release/ethstore$S3WIN target/$PLATFORM/release/ethkey$S3WIN target/$PLATFORM/release/whisper$S3WIN parity$S3WIN.md5 parity-evm$S3WIN.md5 ethstore$S3WIN.md5 ethkey$S3WIN.md5 whisper$S3WIN.md5 parity$S3WIN.sha256 parity-evm$S3WIN.sha256 ethstore$S3WIN.sha256 ethkey$S3WIN.sha256 whisper$S3WIN.sha256
|
||||
}
|
||||
|
||||
updater_push_release () {
|
||||
|
@@ -7,8 +7,8 @@ if [[ "$CI_COMMIT_REF_NAME" = "master" || "$CI_COMMIT_REF_NAME" = "beta" || "$CI
else
  export GIT_COMPARE=master;
fi
export RUST_FILES_MODIFIED="$(git --no-pager diff --name-only $GIT_COMPARE...$CI_COMMIT_SHA | grep -v -e ^\\. -e ^LICENSE -e ^README.md -e ^test.sh -e ^windows/ -e ^scripts/ -e ^mac/ -e ^nsis/ | wc -l)"
echo "RUST_FILES_MODIFIED: $RUST_FILES_MODIFIED"
git fetch -a
export RUST_FILES_MODIFIED="$(git --no-pager diff --name-only $GIT_COMPARE...$CI_COMMIT_SHA | grep -v -e ^\\. -e ^LICENSE -e ^README.md -e ^test.sh -e ^windows/ -e ^scripts/ -e ^mac/ -e ^nsis/ -e ^docs/ | wc -l)"
echo "RUST_FILES_MODIFIED: $RUST_FILES_MODIFIED"
TEST_SWITCH=$1
rust_test () {
@ -106,7 +106,7 @@ impl From<::std::io::Error> for IoError {
|
||||
}
|
||||
}
|
||||
|
||||
impl<Message> From<NotifyError<service::IoMessage<Message>>> for IoError where Message: Send + Clone {
|
||||
impl<Message> From<NotifyError<service::IoMessage<Message>>> for IoError where Message: Send {
|
||||
fn from(_err: NotifyError<service::IoMessage<Message>>) -> IoError {
|
||||
IoError::Mio(::std::io::Error::new(::std::io::ErrorKind::ConnectionAborted, "Network IO notification error"))
|
||||
}
|
||||
@ -115,7 +115,7 @@ impl<Message> From<NotifyError<service::IoMessage<Message>>> for IoError where M
|
||||
/// Generic IO handler.
|
||||
/// All the handler function are called from within IO event loop.
|
||||
/// `Message` type is used as notification data
|
||||
pub trait IoHandler<Message>: Send + Sync where Message: Send + Sync + Clone + 'static {
|
||||
pub trait IoHandler<Message>: Send + Sync where Message: Send + Sync + 'static {
|
||||
/// Initialize the handler
|
||||
fn initialize(&self, _io: &IoContext<Message>) {}
|
||||
/// Timer function called after a timeout created with `HandlerIo::timeout`.
|
||||
|
@ -41,7 +41,7 @@ const MAX_HANDLERS: usize = 8;
|
||||
|
||||
/// Messages used to communicate with the event loop from other threads.
|
||||
#[derive(Clone)]
|
||||
pub enum IoMessage<Message> where Message: Send + Clone + Sized {
|
||||
pub enum IoMessage<Message> where Message: Send + Sized {
|
||||
/// Shutdown the event loop
|
||||
Shutdown,
|
||||
/// Register a new protocol handler.
|
||||
@ -74,16 +74,16 @@ pub enum IoMessage<Message> where Message: Send + Clone + Sized {
|
||||
token: StreamToken,
|
||||
},
|
||||
/// Broadcast a message across all protocol handlers.
|
||||
UserMessage(Message)
|
||||
UserMessage(Arc<Message>)
|
||||
}
|
||||
|
||||
/// IO access point. This is passed to all IO handlers and provides an interface to the IO subsystem.
|
||||
pub struct IoContext<Message> where Message: Send + Clone + Sync + 'static {
|
||||
pub struct IoContext<Message> where Message: Send + Sync + 'static {
|
||||
channel: IoChannel<Message>,
|
||||
handler: HandlerId,
|
||||
}
|
||||
|
||||
impl<Message> IoContext<Message> where Message: Send + Clone + Sync + 'static {
|
||||
impl<Message> IoContext<Message> where Message: Send + Sync + 'static {
|
||||
/// Create a new IO access point. Takes references to all the data that can be updated within the IO handler.
|
||||
pub fn new(channel: IoChannel<Message>, handler: HandlerId) -> IoContext<Message> {
|
||||
IoContext {
|
||||
@ -187,7 +187,7 @@ pub struct IoManager<Message> where Message: Send + Sync {
|
||||
work_ready: Arc<SCondvar>,
|
||||
}
|
||||
|
||||
impl<Message> IoManager<Message> where Message: Send + Sync + Clone + 'static {
|
||||
impl<Message> IoManager<Message> where Message: Send + Sync + 'static {
|
||||
/// Creates a new instance and registers it with the event loop.
|
||||
pub fn start(
|
||||
event_loop: &mut EventLoop<IoManager<Message>>,
|
||||
@ -219,7 +219,7 @@ impl<Message> IoManager<Message> where Message: Send + Sync + Clone + 'static {
|
||||
}
|
||||
}
|
||||
|
||||
impl<Message> Handler for IoManager<Message> where Message: Send + Clone + Sync + 'static {
|
||||
impl<Message> Handler for IoManager<Message> where Message: Send + Sync + 'static {
|
||||
type Timeout = Token;
|
||||
type Message = IoMessage<Message>;
|
||||
|
||||
@ -317,7 +317,12 @@ impl<Message> Handler for IoManager<Message> where Message: Send + Clone + Sync
|
||||
for id in 0 .. MAX_HANDLERS {
|
||||
if let Some(h) = self.handlers.read().get(id) {
|
||||
let handler = h.clone();
|
||||
self.worker_channel.push(Work { work_type: WorkType::Message(data.clone()), token: 0, handler: handler, handler_id: id });
|
||||
self.worker_channel.push(Work {
|
||||
work_type: WorkType::Message(data.clone()),
|
||||
token: 0,
|
||||
handler: handler,
|
||||
handler_id: id
|
||||
});
|
||||
}
|
||||
}
|
||||
self.work_ready.notify_all();
|
||||
@ -326,21 +331,30 @@ impl<Message> Handler for IoManager<Message> where Message: Send + Clone + Sync
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
enum Handlers<Message> where Message: Send + Clone {
|
||||
enum Handlers<Message> where Message: Send {
|
||||
SharedCollection(Weak<RwLock<Slab<Arc<IoHandler<Message>>, HandlerId>>>),
|
||||
Single(Weak<IoHandler<Message>>),
|
||||
}
|
||||
|
||||
/// Allows sending messages into the event loop. All the IO handlers will get the message
|
||||
/// in the `message` callback.
|
||||
pub struct IoChannel<Message> where Message: Send + Clone{
|
||||
channel: Option<Sender<IoMessage<Message>>>,
|
||||
handlers: Handlers<Message>,
|
||||
impl<Message: Send> Clone for Handlers<Message> {
|
||||
fn clone(&self) -> Self {
|
||||
use self::Handlers::*;
|
||||
|
||||
match *self {
|
||||
SharedCollection(ref w) => SharedCollection(w.clone()),
|
||||
Single(ref w) => Single(w.clone()),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<Message> Clone for IoChannel<Message> where Message: Send + Clone + Sync + 'static {
|
||||
/// Allows sending messages into the event loop. All the IO handlers will get the message
|
||||
/// in the `message` callback.
|
||||
pub struct IoChannel<Message> where Message: Send {
|
||||
channel: Option<Sender<IoMessage<Message>>>,
|
||||
handlers: Handlers<Message>,
|
||||
}
|
||||
|
||||
impl<Message> Clone for IoChannel<Message> where Message: Send + Sync + 'static {
|
||||
fn clone(&self) -> IoChannel<Message> {
|
||||
IoChannel {
|
||||
channel: self.channel.clone(),
|
||||
@ -349,11 +363,11 @@ impl<Message> Clone for IoChannel<Message> where Message: Send + Clone + Sync +
|
||||
}
|
||||
}
|
||||
|
||||
impl<Message> IoChannel<Message> where Message: Send + Clone + Sync + 'static {
|
||||
impl<Message> IoChannel<Message> where Message: Send + Sync + 'static {
|
||||
/// Send a message through the channel
|
||||
pub fn send(&self, message: Message) -> Result<(), IoError> {
|
||||
match self.channel {
|
||||
Some(ref channel) => channel.send(IoMessage::UserMessage(message))?,
|
||||
Some(ref channel) => channel.send(IoMessage::UserMessage(Arc::new(message)))?,
|
||||
None => self.send_sync(message)?
|
||||
}
|
||||
Ok(())
|
||||
@ -413,13 +427,13 @@ impl<Message> IoChannel<Message> where Message: Send + Clone + Sync + 'static {
|
||||
|
||||
/// General IO Service. Starts an event loop and dispatches IO requests.
|
||||
/// 'Message' is a notification message type
|
||||
pub struct IoService<Message> where Message: Send + Sync + Clone + 'static {
|
||||
pub struct IoService<Message> where Message: Send + Sync + 'static {
|
||||
thread: Mutex<Option<JoinHandle<()>>>,
|
||||
host_channel: Mutex<Sender<IoMessage<Message>>>,
|
||||
handlers: Arc<RwLock<Slab<Arc<IoHandler<Message>>, HandlerId>>>,
|
||||
}
|
||||
|
||||
impl<Message> IoService<Message> where Message: Send + Sync + Clone + 'static {
|
||||
impl<Message> IoService<Message> where Message: Send + Sync + 'static {
|
||||
/// Starts IO event loop
|
||||
pub fn start() -> Result<IoService<Message>, IoError> {
|
||||
let mut config = EventLoopBuilder::new();
|
||||
@ -462,7 +476,7 @@ impl<Message> IoService<Message> where Message: Send + Sync + Clone + 'static {
|
||||
|
||||
/// Send a message over the network. Normaly `HostIo::send` should be used. This can be used from non-io threads.
|
||||
pub fn send_message(&self, message: Message) -> Result<(), IoError> {
|
||||
self.host_channel.lock().send(IoMessage::UserMessage(message))?;
|
||||
self.host_channel.lock().send(IoMessage::UserMessage(Arc::new(message)))?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
@ -472,7 +486,7 @@ impl<Message> IoService<Message> where Message: Send + Sync + Clone + 'static {
|
||||
}
|
||||
}
|
||||
|
||||
impl<Message> Drop for IoService<Message> where Message: Send + Sync + Clone {
|
||||
impl<Message> Drop for IoService<Message> where Message: Send + Sync {
|
||||
fn drop(&mut self) {
|
||||
self.stop()
|
||||
}
|
||||
|
@ -38,7 +38,7 @@ pub enum WorkType<Message> {
|
||||
Writable,
|
||||
Hup,
|
||||
Timeout,
|
||||
Message(Message)
|
||||
Message(Arc<Message>)
|
||||
}
|
||||
|
||||
pub struct Work<Message> {
|
||||
@ -65,7 +65,7 @@ impl Worker {
|
||||
wait: Arc<SCondvar>,
|
||||
wait_mutex: Arc<SMutex<()>>,
|
||||
) -> Worker
|
||||
where Message: Send + Sync + Clone + 'static {
|
||||
where Message: Send + Sync + 'static {
|
||||
let deleting = Arc::new(AtomicBool::new(false));
|
||||
let mut worker = Worker {
|
||||
thread: None,
|
||||
@ -86,7 +86,7 @@ impl Worker {
|
||||
channel: IoChannel<Message>, wait: Arc<SCondvar>,
|
||||
wait_mutex: Arc<SMutex<()>>,
|
||||
deleting: Arc<AtomicBool>)
|
||||
where Message: Send + Sync + Clone + 'static {
|
||||
where Message: Send + Sync + 'static {
|
||||
loop {
|
||||
{
|
||||
let lock = wait_mutex.lock().expect("Poisoned work_loop mutex");
|
||||
@ -105,7 +105,7 @@ impl Worker {
|
||||
}
|
||||
}
|
||||
|
||||
fn do_work<Message>(work: Work<Message>, channel: IoChannel<Message>) where Message: Send + Sync + Clone + 'static {
|
||||
fn do_work<Message>(work: Work<Message>, channel: IoChannel<Message>) where Message: Send + Sync + 'static {
|
||||
match work.work_type {
|
||||
WorkType::Readable => {
|
||||
work.handler.stream_readable(&IoContext::new(channel, work.handler_id), work.token);
|
||||
@ -120,7 +120,7 @@ impl Worker {
|
||||
work.handler.timeout(&IoContext::new(channel, work.handler_id), work.token);
|
||||
}
|
||||
WorkType::Message(message) => {
|
||||
work.handler.message(&IoContext::new(channel, work.handler_id), &message);
|
||||
work.handler.message(&IoContext::new(channel, work.handler_id), &*message);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -45,14 +45,15 @@ pub struct ArchiveDB {
|
||||
|
||||
impl ArchiveDB {
|
||||
/// Create a new instance from a key-value db.
|
||||
pub fn new(backing: Arc<KeyValueDB>, col: Option<u32>) -> ArchiveDB {
|
||||
let latest_era = backing.get(col, &LATEST_ERA_KEY).expect("Low-level database error.")
|
||||
.map(|val| decode::<u64>(&val));
|
||||
pub fn new(backing: Arc<KeyValueDB>, column: Option<u32>) -> ArchiveDB {
|
||||
let latest_era = backing.get(column, &LATEST_ERA_KEY)
|
||||
.expect("Low-level database error.")
|
||||
.map(|val| decode::<u64>(&val).expect("decoding db value failed"));
|
||||
ArchiveDB {
|
||||
overlay: MemoryDB::new(),
|
||||
backing: backing,
|
||||
latest_era: latest_era,
|
||||
column: col,
|
||||
backing,
|
||||
latest_era,
|
||||
column,
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -57,7 +57,7 @@ enum RemoveFrom {
|
||||
/// the removals actually take effect.
|
||||
///
|
||||
/// journal format:
|
||||
/// ```
|
||||
/// ```text
|
||||
/// [era, 0] => [ id, [insert_0, ...], [remove_0, ...] ]
|
||||
/// [era, 1] => [ id, [insert_0, ...], [remove_0, ...] ]
|
||||
/// [era, n] => [ ... ]
|
||||
@ -76,7 +76,7 @@ enum RemoveFrom {
|
||||
/// which includes an original key, if any.
|
||||
///
|
||||
/// The semantics of the `counter` are:
|
||||
/// ```
|
||||
/// ```text
|
||||
/// insert key k:
|
||||
/// counter already contains k: count += 1
|
||||
/// counter doesn't contain k:
|
||||
@ -92,7 +92,7 @@ enum RemoveFrom {
|
||||
///
|
||||
/// Practically, this means that for each commit block turning from recent to ancient we do the
|
||||
/// following:
|
||||
/// ```
|
||||
/// ```text
|
||||
/// is_canonical:
|
||||
/// inserts: Ignored (left alone in the backing database).
|
||||
/// deletes: Enacted; however, recent history queue is checked for ongoing references. This is
|
||||
@ -263,7 +263,7 @@ impl EarlyMergeDB {
|
||||
let mut refs = HashMap::new();
|
||||
let mut latest_era = None;
|
||||
if let Some(val) = db.get(col, &LATEST_ERA_KEY).expect("Low-level database error.") {
|
||||
let mut era = decode::<u64>(&val);
|
||||
let mut era = decode::<u64>(&val).expect("decoding db value failed");
|
||||
latest_era = Some(era);
|
||||
loop {
|
||||
let mut db_key = DatabaseKey {
|
||||
|
@ -137,7 +137,7 @@ impl OverlayDB {
|
||||
fn payload(&self, key: &H256) -> Option<Payload> {
|
||||
self.backing.get(self.column, key)
|
||||
.expect("Low-level database error. Some issue with your hard disk?")
|
||||
.map(|d| decode(&d))
|
||||
.map(|d| decode(&d).expect("decoding db value failed"))
|
||||
}
|
||||
|
||||
/// Put the refs and value of the given key, possibly deleting it from the db.
|
||||
|
@ -186,7 +186,7 @@ impl OverlayRecentDB {
|
||||
let mut earliest_era = None;
|
||||
let mut cumulative_size = 0;
|
||||
if let Some(val) = db.get(col, &LATEST_ERA_KEY).expect("Low-level database error.") {
|
||||
let mut era = decode::<u64>(&val);
|
||||
let mut era = decode::<u64>(&val).expect("decoding db value failed");
|
||||
latest_era = Some(era);
|
||||
loop {
|
||||
let mut db_key = DatabaseKey {
|
||||
@@ -195,7 +195,7 @@ impl OverlayRecentDB {
             };
             while let Some(rlp_data) = db.get(col, &encode(&db_key)).expect("Low-level database error.") {
                 trace!("read_overlay: era={}, index={}", era, db_key.index);
-                let value = decode::<DatabaseValue>(&rlp_data);
+                let value = decode::<DatabaseValue>(&rlp_data).expect(&format!("read_overlay: Error decoding DatabaseValue era={}, index{}", era, db_key.index));
                 count += value.inserts.len();
                 let mut inserted_keys = Vec::new();
                 for (k, v) in value.inserts {
@@ -40,7 +40,7 @@ use util::{DatabaseKey, DatabaseValueView, DatabaseValueRef};
 /// the removals actually take effect.
 ///
 /// journal format:
-/// ```
+/// ```text
 /// [era, 0] => [ id, [insert_0, ...], [remove_0, ...] ]
 /// [era, 1] => [ id, [insert_0, ...], [remove_0, ...] ]
 /// [era, n] => [ ... ]
@@ -62,17 +62,18 @@ pub struct RefCountedDB {

 impl RefCountedDB {
     /// Create a new instance given a `backing` database.
-    pub fn new(backing: Arc<KeyValueDB>, col: Option<u32>) -> RefCountedDB {
-        let latest_era = backing.get(col, &LATEST_ERA_KEY).expect("Low-level database error.")
-            .map(|val| decode::<u64>(&val));
+    pub fn new(backing: Arc<KeyValueDB>, column: Option<u32>) -> RefCountedDB {
+        let latest_era = backing.get(column, &LATEST_ERA_KEY)
+            .expect("Low-level database error.")
+            .map(|v| decode::<u64>(&v).expect("decoding db value failed"));

         RefCountedDB {
-            forward: OverlayDB::new(backing.clone(), col),
-            backing: backing,
+            forward: OverlayDB::new(backing.clone(), column),
+            backing,
             inserts: vec![],
             removes: vec![],
-            latest_era: latest_era,
-            column: col,
+            latest_era,
+            column,
         }
     }
 }
@@ -38,6 +38,7 @@ error-chain = { version = "0.11", default-features = false }

 [dev-dependencies]
 tempdir = "0.3"
+assert_matches = "1.2"

 [features]
 default = []
@@ -79,7 +79,9 @@ const NODE_TABLE_TIMEOUT: Duration = Duration::from_secs(300);
 #[derive(Debug, PartialEq, Eq)]
 /// Protocol info
 pub struct CapabilityInfo {
+    /// Protocol ID
     pub protocol: ProtocolId,
+    /// Protocol version
     pub version: u8,
     /// Total number of packet IDs this protocol support.
     pub packet_count: u8,
@@ -687,7 +689,7 @@ impl Host {
             Err(e) => {
                 let s = session.lock();
                 trace!(target: "network", "Session read error: {}:{:?} ({:?}) {:?}", token, s.id(), s.remote_addr(), e);
-                if let ErrorKind::Disconnect(DisconnectReason::UselessPeer) = *e.kind() {
+                if let ErrorKind::Disconnect(DisconnectReason::IncompatibleProtocol) = *e.kind() {
                     if let Some(id) = s.id() {
                         if !self.reserved_nodes.read().contains(id) {
                             let mut nodes = self.nodes.write();
@@ -990,7 +992,6 @@ impl IoHandler<NetworkIoMessage> for Host {
                 ref handler,
                 ref protocol,
                 ref versions,
-                ref packet_count,
             } => {
                 let h = handler.clone();
                 let reserved = self.reserved_nodes.read();
@@ -1000,8 +1001,12 @@ impl IoHandler<NetworkIoMessage> for Host {
                 );
                 self.handlers.write().insert(*protocol, h);
                 let mut info = self.info.write();
-                for v in versions {
-                    info.capabilities.push(CapabilityInfo { protocol: *protocol, version: *v, packet_count: *packet_count });
+                for &(version, packet_count) in versions {
+                    info.capabilities.push(CapabilityInfo {
+                        protocol: *protocol,
+                        version,
+                        packet_count,
+                    });
                 }
             },
             NetworkIoMessage::AddTimer {
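Note: since each supported protocol version now carries its own packet count, registration iterates over (version, packet_count) pairs and records one CapabilityInfo per pair. A rough sketch of the idea in isolation (the struct fields mirror the diff; the function name and the sample values are illustrative only):

    #[derive(Debug, PartialEq, Eq)]
    struct CapabilityInfo { protocol: [u8; 3], version: u8, packet_count: u8 }

    // Build one capability entry per (version, packet count) pair.
    fn capabilities_for(protocol: [u8; 3], versions: &[(u8, u8)]) -> Vec<CapabilityInfo> {
        versions.iter()
            .map(|&(version, packet_count)| CapabilityInfo { protocol, version, packet_count })
            .collect()
    }

    // e.g. capabilities_for(*b"eth", &[(62, 8), (63, 17)]) yields one entry per version.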
@@ -49,7 +49,7 @@
 //! fn main () {
 //!     let mut service = NetworkService::new(NetworkConfiguration::new_local(), None).expect("Error creating network service");
 //!     service.start().expect("Error starting service");
-//!     service.register_protocol(Arc::new(MyHandler), *b"myp", 1, &[1u8]);
+//!     service.register_protocol(Arc::new(MyHandler), *b"myp", &[(1u8, 1u8)]);
 //!
 //!     // Wait for quit condition
 //!     // ...
@@ -95,6 +95,8 @@ extern crate serde_derive;

 #[cfg(test)]
 extern crate tempdir;
+#[cfg(test)] #[macro_use]
+extern crate assert_matches;

 mod host;
 mod connection;
@@ -14,6 +14,12 @@
 // You should have received a copy of the GNU General Public License
 // along with Parity. If not, see <http://www.gnu.org/licenses/>.

+use discovery::{TableUpdates, NodeEntry};
+use ethereum_types::H512;
+use ip_utils::*;
+use network::{Error, ErrorKind, AllowIP, IpFilter};
+use rlp::{Rlp, RlpStream, DecoderError};
+use serde_json;
 use std::collections::{HashMap, HashSet};
 use std::fmt::{self, Display, Formatter};
 use std::hash::{Hash, Hasher};
@@ -23,12 +29,6 @@ use std::str::FromStr;
 use std::{fs, mem, slice};
 use std::time::{self, Duration, SystemTime};
 use rand::{self, Rng};
-use ethereum_types::H512;
-use rlp::{Rlp, RlpStream, DecoderError};
-use network::{Error, ErrorKind, AllowIP, IpFilter};
-use discovery::{TableUpdates, NodeEntry};
-use ip_utils::*;
-use serde_json;

 /// Node public key
 pub type NodeId = H512;
@@ -124,8 +124,8 @@ impl FromStr for NodeEndpoint {
                 address: a,
                 udp_port: a.port()
             }),
-            Ok(_) => Err(ErrorKind::AddressResolve(None).into()),
-            Err(e) => Err(ErrorKind::AddressResolve(Some(e)).into())
+            Ok(None) => bail!(ErrorKind::AddressResolve(None)),
+            Err(_) => Err(ErrorKind::AddressParse.into()) // always an io::Error of InvalidInput kind
         }
     }
 }
@@ -534,11 +534,34 @@ mod tests {
         assert!(endpoint.is_ok());
         let v4 = match endpoint.unwrap().address {
             SocketAddr::V4(v4address) => v4address,
-            _ => panic!("should ve v4 address")
+            _ => panic!("should be v4 address")
         };
         assert_eq!(SocketAddrV4::new(Ipv4Addr::new(123, 99, 55, 44), 7770), v4);
     }

+    #[test]
+    fn endpoint_parse_empty_ip_string_returns_error() {
+        let endpoint = NodeEndpoint::from_str("");
+        assert!(endpoint.is_err());
+        assert_matches!(endpoint.unwrap_err().kind(), &ErrorKind::AddressParse);
+    }
+
+    #[test]
+    fn endpoint_parse_invalid_ip_string_returns_error() {
+        let endpoint = NodeEndpoint::from_str("beef");
+        assert!(endpoint.is_err());
+        assert_matches!(endpoint.unwrap_err().kind(), &ErrorKind::AddressParse);
+    }
+
+    #[test]
+    fn endpoint_parse_valid_ip_without_port_returns_error() {
+        let endpoint = NodeEndpoint::from_str("123.123.123.123");
+        assert!(endpoint.is_err());
+        assert_matches!(endpoint.unwrap_err().kind(), &ErrorKind::AddressParse);
+        let endpoint = NodeEndpoint::from_str("123.123.123.123:123");
+        assert!(endpoint.is_ok())
+    }
+
     #[test]
     fn node_parse() {
         assert!(validate_node_url("enode://a979fb575495b8d6db44f750317d0f4622bf4c2aa3365d6af7c284339968eef29b69ad0dce72a4d8db5ebb4968de0e3bec910127f134779fbcb0cb6d3331163c@22.99.55.44:7770").is_none());
@@ -555,6 +578,17 @@ mod tests {
             node.id);
     }

+    #[test]
+    fn node_parse_fails_for_invalid_urls() {
+        let node = Node::from_str("foo");
+        assert!(node.is_err());
+        assert_matches!(node.unwrap_err().kind(), &ErrorKind::AddressParse);
+
+        let node = Node::from_str("enode://foo@bar");
+        assert!(node.is_err());
+        assert_matches!(node.unwrap_err().kind(), &ErrorKind::AddressParse);
+    }
+
     #[test]
     fn table_last_contact_order() {
         let node1 = Node::from_str("enode://a979fb575495b8d6db44f750317d0f4622bf4c2aa3365d6af7c284339968eef29b69ad0dce72a4d8db5ebb4968de0e3bec910127f134779fbcb0cb6d3331163c@22.99.55.44:7770").unwrap();
@@ -67,12 +67,17 @@ impl NetworkService {
     }

     /// Regiter a new protocol handler with the event loop.
-    pub fn register_protocol(&self, handler: Arc<NetworkProtocolHandler + Send + Sync>, protocol: ProtocolId, packet_count: u8, versions: &[u8]) -> Result<(), Error> {
+    pub fn register_protocol(
+        &self,
+        handler: Arc<NetworkProtocolHandler + Send + Sync>,
+        protocol: ProtocolId,
+        // version id + packet count
+        versions: &[(u8, u8)]
+    ) -> Result<(), Error> {
         self.io_service.send_message(NetworkIoMessage::AddHandler {
-            handler: handler,
-            protocol: protocol,
+            handler,
+            protocol,
             versions: versions.to_vec(),
-            packet_count: packet_count,
         })?;
         Ok(())
     }
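Note: the separate packet_count argument is gone; callers now list every supported version together with the number of packet IDs that version reserves. A hedged usage sketch, matching the test registration shown further down (service, handler and the values are illustrative):

    // Each tuple is (protocol version, packet count reserved by that version).
    service.register_protocol(handler.clone(), *b"tst", &[(42u8, 1u8), (43u8, 1u8)])?;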
@@ -52,7 +52,7 @@ impl TestProtocol {
     /// Creates and register protocol with the network service
     pub fn register(service: &mut NetworkService, drop_session: bool) -> Arc<TestProtocol> {
         let handler = Arc::new(TestProtocol::new(drop_session));
-        service.register_protocol(handler.clone(), *b"tst", 1, &[42u8, 43u8]).expect("Error registering test protocol handler");
+        service.register_protocol(handler.clone(), *b"tst", &[(42u8, 1u8), (43u8, 1u8)]).expect("Error registering test protocol handler");
         handler
     }
@@ -104,7 +104,7 @@ impl NetworkProtocolHandler for TestProtocol {
 fn net_service() {
     let service = NetworkService::new(NetworkConfiguration::new_local(), None).expect("Error creating network service");
     service.start().unwrap();
-    service.register_protocol(Arc::new(TestProtocol::new(false)), *b"myp", 1, &[1u8]).unwrap();
+    service.register_protocol(Arc::new(TestProtocol::new(false)), *b"myp", &[(1u8, 1u8)]).unwrap();
 }

 #[test]
@@ -84,11 +84,16 @@ error_chain! {
     foreign_links {
         SocketIo(IoError) #[doc = "Socket IO error."];
         Io(io::Error) #[doc = "Error concerning the Rust standard library's IO subsystem."];
-        AddressParse(net::AddrParseError) #[doc = "Error concerning the network address parsing subsystem."];
         Decompression(snappy::InvalidInput) #[doc = "Decompression error."];
     }

     errors {
+        #[doc = "Error concerning the network address parsing subsystem."]
+        AddressParse {
+            description("Failed to parse network address"),
+            display("Failed to parse network address"),
+        }
+
         #[doc = "Error concerning the network address resolution subsystem."]
         AddressResolve(err: Option<io::Error>) {
             description("Failed to resolve network address"),
|
||||
}
|
||||
}
|
||||
|
||||
impl From<net::AddrParseError> for Error {
|
||||
fn from(_err: net::AddrParseError) -> Self { ErrorKind::AddressParse.into() }
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_errors() {
|
||||
assert_eq!(DisconnectReason::ClientQuit, DisconnectReason::from_u8(8));
|
||||
|
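Note: with AddressParse now a plain error-chain variant rather than a foreign link, the standard library's net::AddrParseError is mapped into it by the From impl above, so call sites can keep using the ? operator. A minimal sketch (the function is hypothetical, written as if inside this module):

    use std::net::SocketAddr;

    fn parse_listen_addr(s: &str) -> Result<SocketAddr, Error> {
        // A bad literal produces net::AddrParseError; `?` converts it
        // into ErrorKind::AddressParse through the From impl above.
        let addr: SocketAddr = s.parse()?;
        Ok(addr)
    }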
@@ -64,10 +64,8 @@ pub enum NetworkIoMessage {
         handler: Arc<NetworkProtocolHandler + Sync>,
         /// Protocol Id.
         protocol: ProtocolId,
-        /// Supported protocol versions.
-        versions: Vec<u8>,
-        /// Number of packet IDs reserved by the protocol.
-        packet_count: u8,
+        /// Supported protocol versions and number of packet IDs reserved by the protocol (packet count).
+        versions: Vec<(u8, u8)>,
     },
     /// Register a new protocol timer
     AddTimer {
@@ -67,6 +67,8 @@ pub enum TrieError {
     InvalidStateRoot(H256),
     /// Trie item not found in the database,
     IncompleteDatabase(H256),
+    /// Corrupt Trie item
+    DecoderError(rlp::DecoderError),
 }

 impl fmt::Display for TrieError {
|
||||
TrieError::InvalidStateRoot(ref root) => write!(f, "Invalid state root: {}", root),
|
||||
TrieError::IncompleteDatabase(ref missing) =>
|
||||
write!(f, "Database missing expected key: {}", missing),
|
||||
TrieError::DecoderError(ref err) => write!(f, "Decoding failed with {}", err),
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -84,10 +87,15 @@ impl error::Error for TrieError {
         match *self {
             TrieError::InvalidStateRoot(_) => "Invalid state root",
             TrieError::IncompleteDatabase(_) => "Incomplete database",
+            TrieError::DecoderError(ref e) => e.description(),
         }
     }
 }

+impl From<rlp::DecoderError> for Box<TrieError> {
+    fn from(e: rlp::DecoderError) -> Self { Box::new(TrieError::DecoderError(e)) }
+}
+
 /// Trie result type. Boxed to avoid copying around extra space for `H256`s on successful queries.
 pub type Result<T> = ::std::result::Result<T, Box<TrieError>>;
|
||||
// without incrementing the depth.
|
||||
let mut node_data = &node_data[..];
|
||||
loop {
|
||||
match Node::decoded(node_data).expect("rlp read from db; qed") {
|
||||
match Node::decoded(node_data)? {
|
||||
Node::Leaf(slice, value) => {
|
||||
return Ok(match slice == key {
|
||||
true => Some(self.query.decode(value)),
|
||||
|
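Note: the ? on Node::decoded compiles because the lookup returns the trie crate's boxed Result, and the new From<rlp::DecoderError> for Box<TrieError> conversion added above turns a decoder failure into TrieError::DecoderError instead of panicking. Roughly (hypothetical helper; Result here is the crate alias Result<T> = Result<T, Box<TrieError>> shown above):

    fn decode_step(node_data: &[u8]) -> Result<()> {
        // `?` converts rlp::DecoderError into Box<TrieError> via the new From impl.
        let _node = Node::decoded(node_data)?;
        Ok(())
    }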
@@ -493,3 +493,30 @@ fn get_len() {
     assert_eq!(t.get_with(b"B", |x: &[u8]| x.len()), Ok(Some(5)));
     assert_eq!(t.get_with(b"C", |x: &[u8]| x.len()), Ok(None));
 }
+
+// Test will work once https://github.com/paritytech/parity/pull/8527 is merged and rlp::decode returns Result instead of panicking
+//#[test]
+//fn test_lookup_with_corrupt_data_returns_decoder_error() {
+//    use memorydb::*;
+//    use super::TrieMut;
+//    use super::triedbmut::*;
+//    use rlp;
+//    use ethereum_types::H512;
+//
+//    let mut memdb = MemoryDB::new();
+//    let mut root = H256::new();
+//    {
+//        let mut t = TrieDBMut::new(&mut memdb, &mut root);
+//        t.insert(b"A", b"ABC").unwrap();
+//        t.insert(b"B", b"ABCBA").unwrap();
+//    }
+//
+//    let t = TrieDB::new(&memdb, &root).unwrap();
+//
+//    // query for an invalid data type to trigger an error
+//    let q = rlp::decode::<H512>;
+//    let lookup = Lookup{ db: t.db, query: q, hash: root };
+//    let query_result = lookup.look_up(NibbleSlice::new(b"A"));
+//    let expected = Box::new(TrieError::DecoderError(::rlp::DecoderError::RlpIsTooShort));
+//    assert_eq!(query_result.unwrap_err(), expected);
+//}
@@ -9,7 +9,7 @@
 use std::fmt;
 use std::error::Error as StdError;

-#[derive(Debug, PartialEq, Eq)]
+#[derive(Debug, PartialEq, Eq, Clone)]
 /// Error concerning the RLP decoder.
 pub enum DecoderError {
     /// Data has additional bytes at the end of the valid RLP fragment.
@@ -63,13 +63,13 @@ pub const EMPTY_LIST_RLP: [u8; 1] = [0xC0; 1];
 ///
 /// fn main () {
 ///     let data = vec![0x83, b'c', b'a', b't'];
-///     let animal: String = rlp::decode(&data);
+///     let animal: String = rlp::decode(&data).expect("could not decode");
 ///     assert_eq!(animal, "cat".to_owned());
 /// }
 /// ```
-pub fn decode<T>(bytes: &[u8]) -> T where T: Decodable {
+pub fn decode<T>(bytes: &[u8]) -> Result<T, DecoderError> where T: Decodable {
     let rlp = Rlp::new(bytes);
-    rlp.as_val().expect("trusted rlp should be valid")
+    rlp.as_val()
 }

 pub fn decode_list<T>(bytes: &[u8]) -> Vec<T> where T: Decodable {
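Note: with decode returning Result<T, DecoderError>, call sites either propagate the error with ? or assert validity explicitly with expect, as the changes above do throughout the codebase. A small self-contained sketch of both styles (the helper function is illustrative; the data is the "cat" example from the doc comment above):

    extern crate rlp;

    fn main() {
        let data = vec![0x83, b'c', b'a', b't'];

        // Propagating style: bubble the DecoderError up to the caller.
        fn animal(bytes: &[u8]) -> Result<String, rlp::DecoderError> {
            rlp::decode(bytes)
        }

        // Asserting style: only for values known to be valid RLP.
        let s: String = rlp::decode(&data).expect("could not decode");
        assert_eq!(animal(&data).unwrap(), s);
    }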
Some files were not shown because too many files have changed in this diff.