New Transaction Queue implementation (#8074)

* Implementation of Verifier, Scoring and Ready.

* Queue in progress.

* TransactionPool.

* Prepare for txpool release.

* Miner refactor [WiP]

* WiP reworking miner.

* Make it compile.

* Add some docs.

* Split blockchain access to a separate file.

* Work on miner API.

* Fix ethcore tests.

* Refactor miner interface for sealing/work packages.

* Implement next nonce.

* RPC compiles.

* Implement a couple of missing methods for RPC.

* Add transaction queue listeners.

* Compiles!

* Clean-up and parallelize.

* Get rid of RefCell in header.

* Revert "Get rid of RefCell in header."

This reverts commit 0f2424c9b7319a786e1565ea2a8a6d801a21b4fb.

* Override Sync requirement.

* Fix status display.

* Unify logging.

* Extract some cheap checks.

* Measurements and optimizations.

* Fix scoring bug and heap_size_of bug; add cache.

* Disable tx queueing and parallel verification.

* Make ethcore and ethcore-miner compile again.

* Make RPC compile again.

* Bunch of txpool tests.

* Migrate transaction queue tests.

* Nonce Cap

* Nonce cap cache and tests.

* Remove stale future transactions from the queue.

* Optimize scoring and write some tests.

* Simple penalization.

* Clean up and support for different scoring algorithms.

* Add CLI parameters for the new queue.

* Remove banning queue.

* Disable debug build.

* Change per_sender limit to be 1% instead of 5%

* Avoid cloning when propagating transactions.

* Remove old todo.

* Post-review fixes.

* Fix miner options default.

* Reimplement ready transactions for the light client.

* Get rid of from_pending_block

* Pass rejection reason.

* Add more details to drop.

* Roll back heap_size_of.

* Avoid cloning hashes when propagating and include more details on rejection.

* Fix tests.

* Introduce nonces cache.

* Remove unnecessary hashes allocation.

* Lower the mem limit.

* Re-enable parallel verification.

* Add miner log. Don't check the type if not below min_gas_price.

* Add more traces, fix disabling miner.

* Fix creating pending blocks twice on AuRa authorities.

* Fix tests.

* Re-use pending blocks in AuRa.

* Use reseal_min_period to prevent too frequent update_sealing.

* Fix log to contain hash not sender.

* Optimize local transactions.

* Fix aura tests.

* Update locks comments.

* Get rid of unsafe Sync impl.

* Review fixes.

* Remove excessive matches.

* Fix compilation errors.

* Use new pool in private transactions.

* Fix private-tx test.

* Fix secret store tests.

* Actually use gas_floor_target

* Fix config tests.

* Fix pool tests.

* Address grumbles.
Tomasz Drwięga 2018-04-13 17:34:27 +02:00 committed by Marek Kotewicz
parent 03b96a7c0a
commit 1cd93e4ceb
105 changed files with 5185 additions and 5784 deletions

Cargo.lock (generated)

@ -511,7 +511,6 @@ dependencies = [
"ethstore 0.2.0", "ethstore 0.2.0",
"evm 0.1.0", "evm 0.1.0",
"fetch 0.1.0", "fetch 0.1.0",
"futures-cpupool 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)",
"hardware-wallet 1.11.0", "hardware-wallet 1.11.0",
"hashdb 0.1.1", "hashdb 0.1.1",
"heapsize 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)", "heapsize 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
@ -532,7 +531,6 @@ dependencies = [
"parity-machine 0.1.0", "parity-machine 0.1.0",
"parking_lot 0.5.4 (registry+https://github.com/rust-lang/crates.io-index)", "parking_lot 0.5.4 (registry+https://github.com/rust-lang/crates.io-index)",
"patricia-trie 0.1.0", "patricia-trie 0.1.0",
"price-info 1.11.0",
"rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)", "rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"rayon 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)", "rayon 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rlp 0.2.1", "rlp 0.2.1",
@ -543,7 +541,6 @@ dependencies = [
"snappy 0.1.0 (git+https://github.com/paritytech/rust-snappy)", "snappy 0.1.0 (git+https://github.com/paritytech/rust-snappy)",
"stats 0.1.0", "stats 0.1.0",
"stop-guard 0.1.0", "stop-guard 0.1.0",
"table 0.1.0",
"tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)", "tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"trace-time 0.1.0", "trace-time 0.1.0",
"trie-standardmap 0.1.0", "trie-standardmap 0.1.0",
@ -655,16 +652,16 @@ dependencies = [
name = "ethcore-miner" name = "ethcore-miner"
version = "1.11.0" version = "1.11.0"
dependencies = [ dependencies = [
"common-types 0.1.0", "ansi_term 0.10.2 (registry+https://github.com/rust-lang/crates.io-index)",
"ethabi 5.1.1 (registry+https://github.com/rust-lang/crates.io-index)", "env_logger 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"ethabi-contract 5.0.3 (registry+https://github.com/rust-lang/crates.io-index)", "error-chain 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)",
"ethabi-derive 5.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"ethash 1.11.0", "ethash 1.11.0",
"ethcore-transaction 0.1.0", "ethcore-transaction 0.1.0",
"ethereum-types 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)", "ethereum-types 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"ethkey 0.3.0", "ethkey 0.3.0",
"fetch 0.1.0", "fetch 0.1.0",
"futures 0.1.21 (registry+https://github.com/rust-lang/crates.io-index)", "futures 0.1.21 (registry+https://github.com/rust-lang/crates.io-index)",
"futures-cpupool 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)",
"heapsize 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)", "heapsize 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"hyper 0.11.24 (registry+https://github.com/rust-lang/crates.io-index)", "hyper 0.11.24 (registry+https://github.com/rust-lang/crates.io-index)",
"keccak-hash 0.1.0", "keccak-hash 0.1.0",
@ -672,9 +669,11 @@ dependencies = [
"log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)", "log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-reactor 0.1.0", "parity-reactor 0.1.0",
"parking_lot 0.5.4 (registry+https://github.com/rust-lang/crates.io-index)", "parking_lot 0.5.4 (registry+https://github.com/rust-lang/crates.io-index)",
"price-info 1.11.0",
"rayon 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-hex 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)", "rustc-hex 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"table 0.1.0", "trace-time 0.1.0",
"transient-hashmap 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)", "transaction-pool 1.11.0",
"url 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)", "url 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
] ]
@ -2191,6 +2190,7 @@ dependencies = [
"tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)", "tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"tiny-keccak 1.4.1 (registry+https://github.com/rust-lang/crates.io-index)", "tiny-keccak 1.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
"tokio-timer 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)", "tokio-timer 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"transaction-pool 1.11.0",
"transient-hashmap 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)", "transient-hashmap 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"vm 0.1.0", "vm 0.1.0",
] ]
@ -3076,10 +3076,6 @@ dependencies = [
"unicode-xid 0.0.4 (registry+https://github.com/rust-lang/crates.io-index)", "unicode-xid 0.0.4 (registry+https://github.com/rust-lang/crates.io-index)",
] ]
[[package]]
name = "table"
version = "0.1.0"
[[package]] [[package]]
name = "take" name = "take"
version = "0.1.0" version = "0.1.0"
@ -3403,6 +3399,7 @@ dependencies = [
"ethereum-types 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)", "ethereum-types 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)", "log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
"smallvec 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)", "smallvec 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"trace-time 0.1.0",
] ]
[[package]] [[package]]


@ -34,7 +34,6 @@ ethjson = { path = "../json" }
ethkey = { path = "../ethkey" } ethkey = { path = "../ethkey" }
ethstore = { path = "../ethstore" } ethstore = { path = "../ethstore" }
evm = { path = "evm" } evm = { path = "evm" }
futures-cpupool = "0.1"
hardware-wallet = { path = "../hw" } hardware-wallet = { path = "../hw" }
heapsize = "0.4" heapsize = "0.4"
itertools = "0.5" itertools = "0.5"
@ -45,7 +44,6 @@ num = { version = "0.1", default-features = false, features = ["bigint"] }
num_cpus = "1.2" num_cpus = "1.2"
parity-machine = { path = "../machine" } parity-machine = { path = "../machine" }
parking_lot = "0.5" parking_lot = "0.5"
price-info = { path = "../price-info" }
rayon = "1.0" rayon = "1.0"
rand = "0.4" rand = "0.4"
rlp = { path = "../util/rlp" } rlp = { path = "../util/rlp" }
@ -63,7 +61,6 @@ rustc-hex = "1.0"
stats = { path = "../util/stats" } stats = { path = "../util/stats" }
trace-time = { path = "../util/trace-time" } trace-time = { path = "../util/trace-time" }
using_queue = { path = "../util/using_queue" } using_queue = { path = "../util/using_queue" }
table = { path = "../util/table" }
vm = { path = "vm" } vm = { path = "vm" }
wasm = { path = "wasm" } wasm = { path = "wasm" }
keccak-hash = { path = "../util/hash" } keccak-hash = { path = "../util/hash" }
@ -76,10 +73,18 @@ tempdir = "0.3"
trie-standardmap = { path = "../util/trie-standardmap" } trie-standardmap = { path = "../util/trie-standardmap" }
[features] [features]
# Display EVM debug traces.
evm-debug = ["slow-blocks"] evm-debug = ["slow-blocks"]
# Display EVM debug traces when running tests.
evm-debug-tests = ["evm-debug", "evm/evm-debug-tests"] evm-debug-tests = ["evm-debug", "evm/evm-debug-tests"]
slow-blocks = [] # Use SLOW_TX_DURATION="50" (compile time!) to track transactions over 50ms # Measure time of transaction execution.
# Whenever the transaction execution time (in millis) exceeds the value of
# SLOW_TX_DURATION env variable (provided compile time!)
# EVM debug traces are printed.
slow-blocks = []
# Run JSON consensus tests.
json-tests = ["ethcore-transaction/json-tests"] json-tests = ["ethcore-transaction/json-tests"]
# Run memory/cpu heavy tests.
test-heavy = [] test-heavy = []
default = [] # Compile benches
benches = [] benches = []


@ -282,6 +282,9 @@ impl<T: ProvingBlockChainClient + ?Sized> Provider for T {
fn ready_transactions(&self) -> Vec<PendingTransaction> { fn ready_transactions(&self) -> Vec<PendingTransaction> {
BlockChainClient::ready_transactions(self) BlockChainClient::ready_transactions(self)
.into_iter()
.map(|tx| tx.pending().clone())
.collect()
} }
fn epoch_signal(&self, req: request::CompleteSignalRequest) -> Option<request::SignalResponse> { fn epoch_signal(&self, req: request::CompleteSignalRequest) -> Option<request::SignalResponse> {


@ -120,6 +120,18 @@ impl AccountTransactions {
} }
} }
/// Transaction import result.
pub enum ImportDestination {
/// Transaction has been imported to the current queue.
///
/// It's going to be propagated to peers.
Current,
/// Transaction has been imported to future queue.
///
/// It means it won't be propagated until the gap is filled.
Future,
}
type Listener = Box<Fn(&[H256]) + Send + Sync>; type Listener = Box<Fn(&[H256]) + Send + Sync>;
/// Light transaction queue. See module docs for more details. /// Light transaction queue. See module docs for more details.
@ -142,7 +154,7 @@ impl fmt::Debug for TransactionQueue {
impl TransactionQueue { impl TransactionQueue {
/// Import a pending transaction to be queued. /// Import a pending transaction to be queued.
pub fn import(&mut self, tx: PendingTransaction) -> Result<transaction::ImportResult, transaction::Error> { pub fn import(&mut self, tx: PendingTransaction) -> Result<ImportDestination, transaction::Error> {
let sender = tx.sender(); let sender = tx.sender();
let hash = tx.hash(); let hash = tx.hash();
let nonce = tx.nonce; let nonce = tx.nonce;
@ -158,7 +170,7 @@ impl TransactionQueue {
future: BTreeMap::new(), future: BTreeMap::new(),
}); });
(transaction::ImportResult::Current, vec![hash]) (ImportDestination::Current, vec![hash])
} }
Entry::Occupied(mut entry) => { Entry::Occupied(mut entry) => {
let acct_txs = entry.get_mut(); let acct_txs = entry.get_mut();
@ -180,7 +192,7 @@ impl TransactionQueue {
let old = ::std::mem::replace(&mut acct_txs.current[idx], tx_info); let old = ::std::mem::replace(&mut acct_txs.current[idx], tx_info);
self.by_hash.remove(&old.hash); self.by_hash.remove(&old.hash);
(transaction::ImportResult::Current, vec![hash]) (ImportDestination::Current, vec![hash])
} }
Err(idx) => { Err(idx) => {
let cur_len = acct_txs.current.len(); let cur_len = acct_txs.current.len();
@ -202,13 +214,13 @@ impl TransactionQueue {
acct_txs.future.insert(future_nonce, future); acct_txs.future.insert(future_nonce, future);
} }
(transaction::ImportResult::Current, vec![hash]) (ImportDestination::Current, vec![hash])
} else if idx == cur_len && acct_txs.current.last().map_or(false, |f| f.nonce + 1.into() != nonce) { } else if idx == cur_len && acct_txs.current.last().map_or(false, |f| f.nonce + 1.into() != nonce) {
trace!(target: "txqueue", "Queued future transaction for {}, nonce={}", sender, nonce); trace!(target: "txqueue", "Queued future transaction for {}, nonce={}", sender, nonce);
let future_nonce = nonce; let future_nonce = nonce;
acct_txs.future.insert(future_nonce, tx_info); acct_txs.future.insert(future_nonce, tx_info);
(transaction::ImportResult::Future, vec![]) (ImportDestination::Future, vec![])
} else { } else {
trace!(target: "txqueue", "Queued current transaction for {}, nonce={}", sender, nonce); trace!(target: "txqueue", "Queued current transaction for {}, nonce={}", sender, nonce);
@ -217,7 +229,7 @@ impl TransactionQueue {
let mut promoted = acct_txs.adjust_future(); let mut promoted = acct_txs.adjust_future();
promoted.insert(0, hash); promoted.insert(0, hash);
(transaction::ImportResult::Current, promoted) (ImportDestination::Current, promoted)
} }
} }
} }


@ -132,7 +132,7 @@ mod test {
ClientConfig::default(), ClientConfig::default(),
&spec, &spec,
client_db, client_db,
Arc::new(Miner::with_spec(&spec)), Arc::new(Miner::new_for_tests(&spec, None)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();
let filter = NodeFilter::new(Arc::downgrade(&client) as Weak<BlockChainClient>, contract_addr); let filter = NodeFilter::new(Arc::downgrade(&client) as Weak<BlockChainClient>, contract_addr);


@ -78,12 +78,10 @@ use ethcore::executed::{Executed};
use transaction::{SignedTransaction, Transaction, Action, UnverifiedTransaction}; use transaction::{SignedTransaction, Transaction, Action, UnverifiedTransaction};
use ethcore::{contract_address as ethcore_contract_address}; use ethcore::{contract_address as ethcore_contract_address};
use ethcore::client::{ use ethcore::client::{
Client, ChainNotify, ChainMessageType, ClientIoMessage, BlockId, Client, ChainNotify, ChainMessageType, ClientIoMessage, BlockId, CallContract
MiningBlockChainClient, ChainInfo, Nonce, CallContract
}; };
use ethcore::account_provider::AccountProvider; use ethcore::account_provider::AccountProvider;
use ethcore_miner::transaction_queue::{TransactionDetailsProvider as TransactionQueueDetailsProvider, AccountDetails}; use ethcore::miner::{self, Miner, MinerService};
use ethcore::miner::MinerService;
use ethcore::trace::{Tracer, VMTracer}; use ethcore::trace::{Tracer, VMTracer};
use rustc_hex::FromHex; use rustc_hex::FromHex;
@ -95,35 +93,6 @@ use_contract!(private, "PrivateContract", "res/private.json");
/// Initialization vector length. /// Initialization vector length.
const INIT_VEC_LEN: usize = 16; const INIT_VEC_LEN: usize = 16;
struct TransactionDetailsProvider<'a> {
client: &'a MiningBlockChainClient,
}
impl<'a> TransactionDetailsProvider<'a> {
pub fn new(client: &'a MiningBlockChainClient) -> Self {
TransactionDetailsProvider {
client: client,
}
}
}
impl<'a> TransactionQueueDetailsProvider for TransactionDetailsProvider<'a> {
fn fetch_account(&self, address: &Address) -> AccountDetails {
AccountDetails {
nonce: self.client.latest_nonce(address),
balance: self.client.latest_balance(address),
}
}
fn estimate_gas_required(&self, tx: &SignedTransaction) -> U256 {
tx.gas_required(&self.client.latest_schedule()).into()
}
fn is_service_transaction_acceptable(&self, _tx: &SignedTransaction) -> Result<bool, String> {
Ok(false)
}
}
/// Configurtion for private transaction provider /// Configurtion for private transaction provider
#[derive(Default, PartialEq, Debug, Clone)] #[derive(Default, PartialEq, Debug, Clone)]
pub struct ProviderConfig { pub struct ProviderConfig {
@ -154,8 +123,10 @@ pub struct Provider {
passwords: Vec<String>, passwords: Vec<String>,
notify: RwLock<Vec<Weak<ChainNotify>>>, notify: RwLock<Vec<Weak<ChainNotify>>>,
transactions_for_signing: Mutex<SigningStore>, transactions_for_signing: Mutex<SigningStore>,
// TODO [ToDr] Move the Mutex/RwLock inside `VerificationStore` after refactored to `drain`.
transactions_for_verification: Mutex<VerificationStore>, transactions_for_verification: Mutex<VerificationStore>,
client: Arc<Client>, client: Arc<Client>,
miner: Arc<Miner>,
accounts: Arc<AccountProvider>, accounts: Arc<AccountProvider>,
channel: IoChannel<ClientIoMessage>, channel: IoChannel<ClientIoMessage>,
} }
@ -172,6 +143,7 @@ impl Provider where {
/// Create a new provider. /// Create a new provider.
pub fn new( pub fn new(
client: Arc<Client>, client: Arc<Client>,
miner: Arc<Miner>,
accounts: Arc<AccountProvider>, accounts: Arc<AccountProvider>,
encryptor: Box<Encryptor>, encryptor: Box<Encryptor>,
config: ProviderConfig, config: ProviderConfig,
@ -186,6 +158,7 @@ impl Provider where {
transactions_for_signing: Mutex::default(), transactions_for_signing: Mutex::default(),
transactions_for_verification: Mutex::default(), transactions_for_verification: Mutex::default(),
client, client,
miner,
accounts, accounts,
channel, channel,
}) })
@ -282,6 +255,9 @@ impl Provider where {
match validation_account { match validation_account {
None => { None => {
// TODO [ToDr] This still seems a bit invalid, imho we should still import the transaction to the pool.
// Importing to pool verifies correctness and nonce; here we are just blindly forwarding.
//
// Not for verification, broadcast further to peers // Not for verification, broadcast further to peers
self.broadcast_private_transaction(rlp.into()); self.broadcast_private_transaction(rlp.into());
return Ok(()); return Ok(());
@ -291,29 +267,59 @@ impl Provider where {
trace!("Private transaction taken for verification"); trace!("Private transaction taken for verification");
let original_tx = self.extract_original_transaction(private_tx, &contract)?; let original_tx = self.extract_original_transaction(private_tx, &contract)?;
trace!("Validating transaction: {:?}", original_tx); trace!("Validating transaction: {:?}", original_tx);
let details_provider = TransactionDetailsProvider::new(&*self.client as &MiningBlockChainClient);
let insertion_time = self.client.chain_info().best_block_number;
// Verify with the first account available // Verify with the first account available
trace!("The following account will be used for verification: {:?}", validation_account); trace!("The following account will be used for verification: {:?}", validation_account);
self.transactions_for_verification.lock() let nonce_cache = Default::default();
.add_transaction(original_tx, contract, validation_account, hash, &details_provider, insertion_time)?; self.transactions_for_verification.lock().add_transaction(
original_tx,
contract,
validation_account,
hash,
self.pool_client(&nonce_cache),
)?;
// NOTE This will just fire `on_private_transaction_queued` but from a client thread.
// It seems that a lot of heavy work (verification) is done in this thread anyway
// it might actually make sense to decouple it from clientService and just use dedicated thread
// for both verification and execution.
self.channel.send(ClientIoMessage::NewPrivateTransaction).map_err(|_| ErrorKind::ClientIsMalformed.into()) self.channel.send(ClientIoMessage::NewPrivateTransaction).map_err(|_| ErrorKind::ClientIsMalformed.into())
} }
} }
} }
fn pool_client<'a>(&'a self, nonce_cache: &'a RwLock<HashMap<Address, U256>>) -> miner::pool_client::PoolClient<'a, Client> {
let engine = self.client.engine();
let refuse_service_transactions = true;
miner::pool_client::PoolClient::new(
&*self.client,
nonce_cache,
engine,
Some(&*self.accounts),
refuse_service_transactions,
)
}
/// Private transaction for validation added into queue /// Private transaction for validation added into queue
pub fn on_private_transaction_queued(&self) -> Result<(), Error> { pub fn on_private_transaction_queued(&self) -> Result<(), Error> {
self.process_queue() self.process_queue()
} }
/// Retrieve and verify the first available private transaction for every sender /// Retrieve and verify the first available private transaction for every sender
///
/// TODO [ToDr] It seems that:
/// 1. This method will fail on any error without removing invalid transaction.
/// 2. It means that the transaction will be stuck there forever and we will never be able to make any progress.
///
/// It might be more sensible to `drain()` transactions from the queue instead and process all of them,
/// possibly printing some errors in case of failures.
/// The 3 methods `ready_transaction,get_descriptor,remove` are always used in conjuction so most likely
/// can be replaced with a single `drain()` method instead.
/// Thanks to this we also don't really need to lock the entire verification for the time of execution.
fn process_queue(&self) -> Result<(), Error> { fn process_queue(&self) -> Result<(), Error> {
let nonce_cache = Default::default();
let mut verification_queue = self.transactions_for_verification.lock(); let mut verification_queue = self.transactions_for_verification.lock();
let ready_transactions = verification_queue.ready_transactions(); let ready_transactions = verification_queue.ready_transactions(self.pool_client(&nonce_cache));
let fetch_nonce = |a: &Address| self.client.latest_nonce(a);
for transaction in ready_transactions { for transaction in ready_transactions {
let transaction_hash = transaction.hash(); let transaction_hash = transaction.signed().hash();
match verification_queue.private_transaction_descriptor(&transaction_hash) { match verification_queue.private_transaction_descriptor(&transaction_hash) {
Ok(desc) => { Ok(desc) => {
if !self.validator_accounts.contains(&desc.validator_account) { if !self.validator_accounts.contains(&desc.validator_account) {
@ -321,9 +327,10 @@ impl Provider where {
bail!(ErrorKind::ValidatorAccountNotSet); bail!(ErrorKind::ValidatorAccountNotSet);
} }
let account = desc.validator_account; let account = desc.validator_account;
if let Action::Call(contract) = transaction.action { if let Action::Call(contract) = transaction.signed().action {
// TODO [ToDr] Usage of BlockId::Latest
let contract_nonce = self.get_contract_nonce(&contract, BlockId::Latest)?; let contract_nonce = self.get_contract_nonce(&contract, BlockId::Latest)?;
let private_state = self.execute_private_transaction(BlockId::Latest, &transaction)?; let private_state = self.execute_private_transaction(BlockId::Latest, transaction.signed())?;
let private_state_hash = self.calculate_state_hash(&private_state, contract_nonce); let private_state_hash = self.calculate_state_hash(&private_state, contract_nonce);
trace!("Hashed effective private state for validator: {:?}", private_state_hash); trace!("Hashed effective private state for validator: {:?}", private_state_hash);
let password = find_account_password(&self.passwords, &*self.accounts, &account); let password = find_account_password(&self.passwords, &*self.accounts, &account);
@ -341,7 +348,7 @@ impl Provider where {
bail!(e); bail!(e);
} }
} }
verification_queue.remove_private_transaction(&transaction_hash, &fetch_nonce); verification_queue.remove_private_transaction(&transaction_hash);
} }
Ok(()) Ok(())
} }
@ -354,6 +361,8 @@ impl Provider where {
let private_hash = tx.private_transaction_hash(); let private_hash = tx.private_transaction_hash();
let desc = match self.transactions_for_signing.lock().get(&private_hash) { let desc = match self.transactions_for_signing.lock().get(&private_hash) {
None => { None => {
// TODO [ToDr] Verification (we can't just blindly forward every transaction)
// Not our transaction, broadcast further to peers // Not our transaction, broadcast further to peers
self.broadcast_signed_private_transaction(rlp.into()); self.broadcast_signed_private_transaction(rlp.into());
return Ok(()); return Ok(());
@ -383,7 +392,7 @@ impl Provider where {
let password = find_account_password(&self.passwords, &*self.accounts, &signer_account); let password = find_account_password(&self.passwords, &*self.accounts, &signer_account);
let signature = self.accounts.sign(signer_account, password, hash)?; let signature = self.accounts.sign(signer_account, password, hash)?;
let signed = SignedTransaction::new(public_tx.with_signature(signature, chain_id))?; let signed = SignedTransaction::new(public_tx.with_signature(signature, chain_id))?;
match self.client.miner().import_own_transaction(&*self.client, signed.into()) { match self.miner.import_own_transaction(&*self.client, signed.into()) {
Ok(_) => trace!("Public transaction added to queue"), Ok(_) => trace!("Public transaction added to queue"),
Err(err) => { Err(err) => {
trace!("Failed to add transaction to queue, error: {:?}", err); trace!("Failed to add transaction to queue, error: {:?}", err);


@ -14,15 +14,16 @@
// You should have received a copy of the GNU General Public License // You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>. // along with Parity. If not, see <http://www.gnu.org/licenses/>.
use ethkey::Signature; use std::sync::Arc;
use std::collections::{HashMap, HashSet};
use bytes::Bytes; use bytes::Bytes;
use std::collections::HashMap; use ethcore_miner::pool;
use ethereum_types::{H256, U256, Address}; use ethereum_types::{H256, U256, Address};
use ethkey::Signature;
use transaction::{UnverifiedTransaction, SignedTransaction}; use transaction::{UnverifiedTransaction, SignedTransaction};
use ethcore_miner::transaction_queue::{TransactionQueue, RemovalReason,
TransactionDetailsProvider as TransactionQueueDetailsProvider, TransactionOrigin};
use error::{Error, ErrorKind}; use error::{Error, ErrorKind};
use ethcore::header::BlockNumber;
/// Maximum length for private transactions queues. /// Maximum length for private transactions queues.
const MAX_QUEUE_LEN: usize = 8312; const MAX_QUEUE_LEN: usize = 8312;
@ -39,56 +40,92 @@ pub struct PrivateTransactionDesc {
} }
/// Storage for private transactions for verification /// Storage for private transactions for verification
#[derive(Default)]
pub struct VerificationStore { pub struct VerificationStore {
/// Descriptors for private transactions in queue for verification with key - hash of the original transaction /// Descriptors for private transactions in queue for verification with key - hash of the original transaction
descriptors: HashMap<H256, PrivateTransactionDesc>, descriptors: HashMap<H256, PrivateTransactionDesc>,
/// Queue with transactions for verification /// Queue with transactions for verification
transactions: TransactionQueue, ///
/// TODO [ToDr] Might actually be better to use `txpool` directly and:
/// 1. Store descriptors inside `VerifiedTransaction`
/// 2. Use custom `ready` implementation to only fetch one transaction per sender.
/// 3. Get rid of passing dummy `block_number` and `timestamp`
transactions: pool::TransactionQueue,
}
impl Default for VerificationStore {
fn default() -> Self {
VerificationStore {
descriptors: Default::default(),
transactions: pool::TransactionQueue::new(
pool::Options {
max_count: MAX_QUEUE_LEN,
max_per_sender: MAX_QUEUE_LEN / 10,
max_mem_usage: 8 * 1024 * 1024,
},
pool::verifier::Options {
// TODO [ToDr] This should probably be based on some real values?
minimal_gas_price: 0.into(),
block_gas_limit: 8_000_000.into(),
tx_gas_limit: U256::max_value(),
},
pool::PrioritizationStrategy::GasPriceOnly,
)
}
}
} }
impl VerificationStore { impl VerificationStore {
/// Adds private transaction for verification into the store /// Adds private transaction for verification into the store
pub fn add_transaction( pub fn add_transaction<C: pool::client::Client>(
&mut self, &mut self,
transaction: UnverifiedTransaction, transaction: UnverifiedTransaction,
contract: Address, contract: Address,
validator_account: Address, validator_account: Address,
private_hash: H256, private_hash: H256,
details_provider: &TransactionQueueDetailsProvider, client: C,
insertion_time: BlockNumber,
) -> Result<(), Error> { ) -> Result<(), Error> {
if self.descriptors.len() > MAX_QUEUE_LEN { if self.descriptors.len() > MAX_QUEUE_LEN {
bail!(ErrorKind::QueueIsFull); bail!(ErrorKind::QueueIsFull);
} }
if self.descriptors.get(&transaction.hash()).is_some() { let transaction_hash = transaction.hash();
if self.descriptors.get(&transaction_hash).is_some() {
bail!(ErrorKind::PrivateTransactionAlreadyImported); bail!(ErrorKind::PrivateTransactionAlreadyImported);
} }
let transaction_hash = transaction.hash();
let signed_transaction = SignedTransaction::new(transaction)?; let results = self.transactions.import(
self.transactions client,
.add(signed_transaction, TransactionOrigin::External, insertion_time, None, details_provider) vec![pool::verifier::Transaction::Unverified(transaction)],
.and_then(|_| { );
self.descriptors.insert(transaction_hash, PrivateTransactionDesc{
// Verify that transaction was imported
results.into_iter()
.next()
.expect("One transaction inserted; one result returned; qed")?;
self.descriptors.insert(transaction_hash, PrivateTransactionDesc {
private_hash, private_hash,
contract, contract,
validator_account, validator_account,
}); });
Ok(()) Ok(())
})
.map_err(Into::into)
} }
/// Returns transactions ready for verification /// Returns transactions ready for verification
/// Returns only one transaction per sender because several cannot be verified in a row without verification from other peers /// Returns only one transaction per sender because several cannot be verified in a row without verification from other peers
pub fn ready_transactions(&self) -> Vec<SignedTransaction> { pub fn ready_transactions<C: pool::client::NonceClient>(&self, client: C) -> Vec<Arc<pool::VerifiedTransaction>> {
// TODO [ToDr] Performance killer, re-work with new transaction queue. // We never store PendingTransactions and we don't use internal cache,
let mut transactions = self.transactions.top_transactions(); // so we don't need to provide real block number of timestamp here
// TODO [ToDr] Potential issue (create low address to have your transactions processed first) let block_number = 0;
transactions.sort_by(|a, b| a.sender().cmp(&b.sender())); let timestamp = 0;
transactions.dedup_by(|a, b| a.sender().eq(&b.sender())); let nonce_cap = None;
transactions
self.transactions.collect_pending(client, block_number, timestamp, nonce_cap, |transactions| {
// take only one transaction per sender
let mut senders = HashSet::with_capacity(self.descriptors.len());
transactions.filter(move |tx| senders.insert(tx.signed().sender())).collect()
})
} }
/// Returns descriptor of the corresponding private transaction /// Returns descriptor of the corresponding private transaction
@ -97,11 +134,9 @@ impl VerificationStore {
} }
/// Remove transaction from the queue for verification /// Remove transaction from the queue for verification
pub fn remove_private_transaction<F>(&mut self, transaction_hash: &H256, fetch_nonce: &F) pub fn remove_private_transaction(&mut self, transaction_hash: &H256) {
where F: Fn(&Address) -> U256 {
self.descriptors.remove(transaction_hash); self.descriptors.remove(transaction_hash);
self.transactions.remove(transaction_hash, fetch_nonce, RemovalReason::Invalid); self.transactions.remove(&[*transaction_hash], true);
} }
} }
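The reworked ready_transactions returns at most one transaction per sender by filtering pending transactions against a HashSet of senders already seen, replacing the old sort-and-dedup pass. A standalone sketch of that filtering step, with a hypothetical Tx type standing in for pool::VerifiedTransaction:

```rust
use std::collections::HashSet;

// Hypothetical stand-in for pool::VerifiedTransaction; only the sender matters here.
#[derive(Debug)]
struct Tx { sender: u64, nonce: u64 }

/// Keep the first transaction seen for every sender, preserving queue order.
fn one_per_sender(pending: Vec<Tx>) -> Vec<Tx> {
    let mut senders = HashSet::with_capacity(pending.len());
    pending.into_iter()
        .filter(|tx| senders.insert(tx.sender))
        .collect()
}

fn main() {
    let pending = vec![
        Tx { sender: 1, nonce: 0 },
        Tx { sender: 1, nonce: 1 }, // dropped: sender 1 already represented
        Tx { sender: 2, nonce: 5 },
    ];
    for tx in one_per_sender(pending) {
        println!("{:?}", tx);
    }
}
```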


@ -36,6 +36,7 @@ use ethcore::account_provider::AccountProvider;
use ethcore::client::BlockChainClient; use ethcore::client::BlockChainClient;
use ethcore::client::BlockId; use ethcore::client::BlockId;
use ethcore::executive::{contract_address}; use ethcore::executive::{contract_address};
use ethcore::miner::Miner;
use ethcore::test_helpers::{generate_dummy_client, push_block_with_transactions}; use ethcore::test_helpers::{generate_dummy_client, push_block_with_transactions};
use ethcore_transaction::{Transaction, Action}; use ethcore_transaction::{Transaction, Action};
use ethkey::{Secret, KeyPair, Signature}; use ethkey::{Secret, KeyPair, Signature};
@ -65,7 +66,15 @@ fn private_contract() {
}; };
let io = ethcore_io::IoChannel::disconnected(); let io = ethcore_io::IoChannel::disconnected();
let pm = Arc::new(Provider::new(client.clone(), ap.clone(), Box::new(NoopEncryptor::default()), config, io).unwrap()); let miner = Arc::new(Miner::new_for_tests(&::ethcore::spec::Spec::new_test(), None));
let pm = Arc::new(Provider::new(
client.clone(),
miner,
ap.clone(),
Box::new(NoopEncryptor::default()),
config,
io,
).unwrap());
let (address, _) = contract_address(CreateContractAddress::FromSenderAndNonce, &key1.address(), &0.into(), &[]); let (address, _) = contract_address(CreateContractAddress::FromSenderAndNonce, &key1.address(), &0.into(), &[]);


@ -92,7 +92,7 @@ impl ClientService {
info!("Configured for {} using {} engine", Colour::White.bold().paint(spec.name.clone()), Colour::Yellow.bold().paint(spec.engine.name())); info!("Configured for {} using {} engine", Colour::White.bold().paint(spec.name.clone()), Colour::Yellow.bold().paint(spec.engine.name()));
let pruning = config.pruning; let pruning = config.pruning;
let client = Client::new(config, &spec, client_db.clone(), miner, io_service.channel())?; let client = Client::new(config, &spec, client_db.clone(), miner.clone(), io_service.channel())?;
let snapshot_params = SnapServiceParams { let snapshot_params = SnapServiceParams {
engine: spec.engine.clone(), engine: spec.engine.clone(),
@ -105,7 +105,14 @@ impl ClientService {
}; };
let snapshot = Arc::new(SnapshotService::new(snapshot_params)?); let snapshot = Arc::new(SnapshotService::new(snapshot_params)?);
let provider = Arc::new(ethcore_private_tx::Provider::new(client.clone(), account_provider, encryptor, private_tx_conf, io_service.channel())?); let provider = Arc::new(ethcore_private_tx::Provider::new(
client.clone(),
miner,
account_provider,
encryptor,
private_tx_conf,
io_service.channel())?,
);
let private_tx = Arc::new(PrivateTxService::new(provider)); let private_tx = Arc::new(PrivateTxService::new(provider));
let client_io = Arc::new(ClientIoHandler { let client_io = Arc::new(ClientIoHandler {
@ -292,10 +299,10 @@ mod tests {
&snapshot_path, &snapshot_path,
restoration_db_handler, restoration_db_handler,
tempdir.path(), tempdir.path(),
Arc::new(Miner::with_spec(&spec)), Arc::new(Miner::new_for_tests(&spec, None)),
Arc::new(AccountProvider::transient_provider()), Arc::new(AccountProvider::transient_provider()),
Box::new(ethcore_private_tx::NoopEncryptor), Box::new(ethcore_private_tx::NoopEncryptor),
Default::default() Default::default(),
); );
assert!(service.is_ok()); assert!(service.is_ok());
drop(service.unwrap()); drop(service.unwrap());


@ -66,8 +66,6 @@ pub enum SignError {
Hardware(HardwareError), Hardware(HardwareError),
/// Low-level error from store /// Low-level error from store
SStore(SSError), SStore(SSError),
/// Inappropriate chain
InappropriateChain,
} }
impl fmt::Display for SignError { impl fmt::Display for SignError {
@ -77,7 +75,6 @@ impl fmt::Display for SignError {
SignError::NotFound => write!(f, "Account does not exist"), SignError::NotFound => write!(f, "Account does not exist"),
SignError::Hardware(ref e) => write!(f, "{}", e), SignError::Hardware(ref e) => write!(f, "{}", e),
SignError::SStore(ref e) => write!(f, "{}", e), SignError::SStore(ref e) => write!(f, "{}", e),
SignError::InappropriateChain => write!(f, "Inappropriate chain"),
} }
} }
} }


@ -337,8 +337,33 @@ impl<'x> OpenBlock<'x> {
} }
/// Push transactions onto the block. /// Push transactions onto the block.
pub fn push_transactions(&mut self, transactions: &[SignedTransaction]) -> Result<(), Error> { #[cfg(not(feature = "slow-blocks"))]
push_transactions(self, transactions) fn push_transactions(&mut self, transactions: Vec<SignedTransaction>) -> Result<(), Error> {
for t in transactions {
self.push_transaction(t, None)?;
}
Ok(())
}
/// Push transactions onto the block.
#[cfg(feature = "slow-blocks")]
fn push_transactions(&mut self, transactions: Vec<SignedTransaction>) -> Result<(), Error> {
use std::time;
let slow_tx = option_env!("SLOW_TX_DURATION").and_then(|v| v.parse().ok()).unwrap_or(100);
for t in transactions {
let hash = t.hash();
let start = time::Instant::now();
self.push_transaction(t, None)?;
let took = start.elapsed();
let took_ms = took.as_secs() * 1000 + took.subsec_nanos() as u64 / 1000000;
if took > time::Duration::from_millis(slow_tx) {
warn!("Heavy ({} ms) transaction in block {:?}: {:?}", took_ms, block.header().number(), hash);
}
debug!(target: "tx", "Transaction {:?} took: {} ms", hash, took_ms);
}
Ok(())
} }
/// Populate self from a header. /// Populate self from a header.
@ -534,10 +559,10 @@ impl IsBlock for SealedBlock {
} }
/// Enact the block given by block header, transactions and uncles /// Enact the block given by block header, transactions and uncles
pub fn enact( fn enact(
header: &Header, header: Header,
transactions: &[SignedTransaction], transactions: Vec<SignedTransaction>,
uncles: &[Header], uncles: Vec<Header>,
engine: &EthEngine, engine: &EthEngine,
tracing: bool, tracing: bool,
db: StateDB, db: StateDB,
@ -568,11 +593,11 @@ pub fn enact(
is_epoch_begin, is_epoch_begin,
)?; )?;
b.populate_from(header); b.populate_from(&header);
b.push_transactions(transactions)?; b.push_transactions(transactions)?;
for u in uncles { for u in uncles {
b.push_uncle(u.clone())?; b.push_uncle(u)?;
} }
if strip_receipts { if strip_receipts {
@ -584,38 +609,9 @@ pub fn enact(
Ok(b.close_and_lock()) Ok(b.close_and_lock())
} }
#[inline]
#[cfg(not(feature = "slow-blocks"))]
fn push_transactions(block: &mut OpenBlock, transactions: &[SignedTransaction]) -> Result<(), Error> {
for t in transactions {
block.push_transaction(t.clone(), None)?;
}
Ok(())
}
#[cfg(feature = "slow-blocks")]
fn push_transactions(block: &mut OpenBlock, transactions: &[SignedTransaction]) -> Result<(), Error> {
use std::time;
let slow_tx = option_env!("SLOW_TX_DURATION").and_then(|v| v.parse().ok()).unwrap_or(100);
for t in transactions {
let hash = t.hash();
let start = time::Instant::now();
block.push_transaction(t.clone(), None)?;
let took = start.elapsed();
let took_ms = took.as_secs() * 1000 + took.subsec_nanos() as u64 / 1000000;
if took > time::Duration::from_millis(slow_tx) {
warn!("Heavy ({} ms) transaction in block {:?}: {:?}", took_ms, block.header().number(), hash);
}
debug!(target: "tx", "Transaction {:?} took: {} ms", hash, took_ms);
}
Ok(())
}
// TODO [ToDr] Pass `PreverifiedBlock` by move, this will avoid unecessary allocation
/// Enact the block given by `block_bytes` using `engine` on the database `db` with given `parent` block header /// Enact the block given by `block_bytes` using `engine` on the database `db` with given `parent` block header
pub fn enact_verified( pub fn enact_verified(
block: &PreverifiedBlock, block: PreverifiedBlock,
engine: &EthEngine, engine: &EthEngine,
tracing: bool, tracing: bool,
db: StateDB, db: StateDB,
@ -629,9 +625,9 @@ pub fn enact_verified(
let view = BlockView::new(&block.bytes); let view = BlockView::new(&block.bytes);
enact( enact(
&block.header, block.header,
&block.transactions, block.transactions,
&view.uncles(), view.uncles(),
engine, engine,
tracing, tracing,
db, db,
@ -700,7 +696,7 @@ mod tests {
)?; )?;
b.populate_from(&header); b.populate_from(&header);
b.push_transactions(&transactions)?; b.push_transactions(transactions)?;
for u in &block.uncles() { for u in &block.uncles() {
b.push_uncle(u.clone())?; b.push_uncle(u.clone())?;
@ -793,3 +789,4 @@ mod tests {
assert!(orig_db.journal_db().keys().iter().filter(|k| orig_db.journal_db().get(k.0) != db.journal_db().get(k.0)).next() == None); assert!(orig_db.journal_db().keys().iter().filter(|k| orig_db.journal_db().get(k.0) != db.journal_db().get(k.0)).next() == None);
} }
} }
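The slow-blocks variant of push_transactions above times each transaction and warns when it runs longer than the threshold taken from the compile-time SLOW_TX_DURATION variable. A standalone sketch of the same timing pattern, with a hypothetical apply_transaction closure standing in for OpenBlock::push_transaction:

```rust
use std::time::{Duration, Instant};

fn main() {
    // Compile-time override, e.g. SLOW_TX_DURATION=50 cargo build --features slow-blocks
    let slow_tx_ms: u64 = option_env!("SLOW_TX_DURATION")
        .and_then(|v| v.parse().ok())
        .unwrap_or(100);

    // Hypothetical stand-in for OpenBlock::push_transaction.
    let apply_transaction = |_tx: u64| std::thread::sleep(Duration::from_millis(5));

    for tx in 0..3u64 {
        let start = Instant::now();
        apply_transaction(tx);
        let took = start.elapsed();
        let took_ms = took.as_secs() * 1000 + u64::from(took.subsec_nanos()) / 1_000_000;
        if took > Duration::from_millis(slow_tx_ms) {
            eprintln!("Heavy ({} ms) transaction: {}", took_ms, tx);
        }
    }
}
```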


@ -15,6 +15,7 @@
// along with Parity. If not, see <http://www.gnu.org/licenses/>. // along with Parity. If not, see <http://www.gnu.org/licenses/>.
use ethereum_types::{H256, U256}; use ethereum_types::{H256, U256};
use encoded; use encoded;
use header::{Header, BlockNumber}; use header::{Header, BlockNumber};


@ -14,8 +14,9 @@
// You should have received a copy of the GNU General Public License // You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>. // along with Parity. If not, see <http://www.gnu.org/licenses/>.
use ethereum_types::H256;
use bytes::Bytes; use bytes::Bytes;
use ethereum_types::H256;
use transaction::UnverifiedTransaction;
/// Messages to broadcast via chain /// Messages to broadcast via chain
pub enum ChainMessageType { pub enum ChainMessageType {
@ -59,7 +60,7 @@ pub trait ChainNotify : Send + Sync {
/// fires when new transactions are received from a peer /// fires when new transactions are received from a peer
fn transactions_received(&self, fn transactions_received(&self,
_hashes: Vec<H256>, _txs: &[UnverifiedTransaction],
_peer_id: usize, _peer_id: usize,
) { ) {
// does nothing by default // does nothing by default


@ -44,7 +44,7 @@ use client::{
}; };
use client::{ use client::{
BlockId, TransactionId, UncleId, TraceId, ClientConfig, BlockChainClient, BlockId, TransactionId, UncleId, TraceId, ClientConfig, BlockChainClient,
MiningBlockChainClient, TraceFilter, CallAnalytics, BlockImportError, Mode, TraceFilter, CallAnalytics, BlockImportError, Mode,
ChainNotify, PruningInfo, ProvingBlockChainClient, EngineInfo, ChainMessageType ChainNotify, PruningInfo, ProvingBlockChainClient, EngineInfo, ChainMessageType
}; };
use encoded; use encoded;
@ -58,6 +58,7 @@ use header::{BlockNumber, Header};
use io::IoChannel; use io::IoChannel;
use log_entry::LocalizedLogEntry; use log_entry::LocalizedLogEntry;
use miner::{Miner, MinerService}; use miner::{Miner, MinerService};
use ethcore_miner::pool::VerifiedTransaction;
use parking_lot::{Mutex, RwLock}; use parking_lot::{Mutex, RwLock};
use rand::OsRng; use rand::OsRng;
use receipt::{Receipt, LocalizedReceipt}; use receipt::{Receipt, LocalizedReceipt};
@ -68,7 +69,7 @@ use state_db::StateDB;
use state::{self, State}; use state::{self, State};
use trace; use trace;
use trace::{TraceDB, ImportRequest as TraceImportRequest, LocalizedTrace, Database as TraceDatabase}; use trace::{TraceDB, ImportRequest as TraceImportRequest, LocalizedTrace, Database as TraceDatabase};
use transaction::{self, LocalizedTransaction, UnverifiedTransaction, SignedTransaction, Transaction, PendingTransaction, Action}; use transaction::{self, LocalizedTransaction, UnverifiedTransaction, SignedTransaction, Transaction, Action};
use types::filter::Filter; use types::filter::Filter;
use types::mode::Mode as IpcMode; use types::mode::Mode as IpcMode;
use verification; use verification;
@ -103,10 +104,10 @@ pub struct ClientReport {
impl ClientReport { impl ClientReport {
/// Alter internal reporting to reflect the additional `block` has been processed. /// Alter internal reporting to reflect the additional `block` has been processed.
pub fn accrue_block(&mut self, block: &PreverifiedBlock) { pub fn accrue_block(&mut self, header: &Header, transactions: usize) {
self.blocks_imported += 1; self.blocks_imported += 1;
self.transactions_applied += block.transactions.len(); self.transactions_applied += transactions;
self.gas_processed = self.gas_processed + block.header.gas_used().clone(); self.gas_processed = self.gas_processed + *header.gas_used();
} }
} }
@ -295,23 +296,29 @@ impl Importer {
let start = Instant::now(); let start = Instant::now();
for block in blocks { for block in blocks {
let header = &block.header; let header = block.header.clone();
let bytes = block.bytes.clone();
let hash = header.hash();
let is_invalid = invalid_blocks.contains(header.parent_hash()); let is_invalid = invalid_blocks.contains(header.parent_hash());
if is_invalid { if is_invalid {
invalid_blocks.insert(header.hash()); invalid_blocks.insert(hash);
continue; continue;
} }
if let Ok(closed_block) = self.check_and_close_block(&block, client) {
if self.engine.is_proposal(&block.header) {
self.block_queue.mark_as_good(&[header.hash()]);
proposed_blocks.push(block.bytes);
} else {
imported_blocks.push(header.hash());
let route = self.commit_block(closed_block, &header, &block.bytes, client); if let Ok(closed_block) = self.check_and_close_block(block, client) {
if self.engine.is_proposal(&header) {
self.block_queue.mark_as_good(&[hash]);
proposed_blocks.push(bytes);
} else {
imported_blocks.push(hash);
let transactions_len = closed_block.transactions().len();
let route = self.commit_block(closed_block, &header, &bytes, client);
import_results.push(route); import_results.push(route);
client.report.write().accrue_block(&block); client.report.write().accrue_block(&header, transactions_len);
} }
} else { } else {
invalid_blocks.insert(header.hash()); invalid_blocks.insert(header.hash());
@ -337,7 +344,7 @@ impl Importer {
let (enacted, retracted) = self.calculate_enacted_retracted(&import_results); let (enacted, retracted) = self.calculate_enacted_retracted(&import_results);
if is_empty { if is_empty {
self.miner.chain_new_blocks(client, &imported_blocks, &invalid_blocks, &enacted, &retracted); self.miner.chain_new_blocks(client, &imported_blocks, &invalid_blocks, &enacted, &retracted, false);
} }
client.notify(|notify| { client.notify(|notify| {
@ -358,9 +365,9 @@ impl Importer {
imported imported
} }
fn check_and_close_block(&self, block: &PreverifiedBlock, client: &Client) -> Result<LockedBlock, ()> { fn check_and_close_block(&self, block: PreverifiedBlock, client: &Client) -> Result<LockedBlock, ()> {
let engine = &*self.engine; let engine = &*self.engine;
let header = &block.header; let header = block.header.clone();
// Check the block isn't so old we won't be able to enact it. // Check the block isn't so old we won't be able to enact it.
let best_block_number = client.chain.read().best_block_number(); let best_block_number = client.chain.read().best_block_number();
@ -381,7 +388,7 @@ impl Importer {
let chain = client.chain.read(); let chain = client.chain.read();
// Verify Block Family // Verify Block Family
let verify_family_result = self.verifier.verify_block_family( let verify_family_result = self.verifier.verify_block_family(
header, &header,
&parent, &parent,
engine, engine,
Some(verification::FullFamilyParams { Some(verification::FullFamilyParams {
@ -397,7 +404,7 @@ impl Importer {
return Err(()); return Err(());
}; };
let verify_external_result = self.verifier.verify_block_external(header, engine); let verify_external_result = self.verifier.verify_block_external(&header, engine);
if let Err(e) = verify_external_result { if let Err(e) = verify_external_result {
warn!(target: "client", "Stage 4 block verification failed for #{} ({})\nError: {:?}", header.number(), header.hash(), e); warn!(target: "client", "Stage 4 block verification failed for #{} ({})\nError: {:?}", header.number(), header.hash(), e);
return Err(()); return Err(());
@ -409,7 +416,8 @@ impl Importer {
let is_epoch_begin = chain.epoch_transition(parent.number(), *header.parent_hash()).is_some(); let is_epoch_begin = chain.epoch_transition(parent.number(), *header.parent_hash()).is_some();
let strip_receipts = header.number() < engine.params().validate_receipts_transition; let strip_receipts = header.number() < engine.params().validate_receipts_transition;
let enact_result = enact_verified(block, let enact_result = enact_verified(
block,
engine, engine,
client.tracedb.read().tracing_enabled(), client.tracedb.read().tracing_enabled(),
db, db,
@ -425,7 +433,7 @@ impl Importer {
})?; })?;
// Final Verification // Final Verification
if let Err(e) = self.verifier.verify_block_final(header, locked_block.block().header()) { if let Err(e) = self.verifier.verify_block_final(&header, locked_block.block().header()) {
warn!(target: "client", "Stage 5 block verification failed for #{} ({})\nError: {:?}", header.number(), header.hash(), e); warn!(target: "client", "Stage 5 block verification failed for #{} ({})\nError: {:?}", header.number(), header.hash(), e);
return Err(()); return Err(());
} }
@ -975,19 +983,24 @@ impl Client {
/// Import transactions from the IO queue /// Import transactions from the IO queue
pub fn import_queued_transactions(&self, transactions: &[Bytes], peer_id: usize) -> usize { pub fn import_queued_transactions(&self, transactions: &[Bytes], peer_id: usize) -> usize {
trace!(target: "external_tx", "Importing queued");
trace_time!("import_queued_transactions"); trace_time!("import_queued_transactions");
self.queue_transactions.fetch_sub(transactions.len(), AtomicOrdering::SeqCst); self.queue_transactions.fetch_sub(transactions.len(), AtomicOrdering::SeqCst);
let txs: Vec<UnverifiedTransaction> = transactions.iter().filter_map(|bytes| UntrustedRlp::new(bytes).as_val().ok()).collect();
let hashes: Vec<_> = txs.iter().map(|tx| tx.hash()).collect(); let txs: Vec<UnverifiedTransaction> = transactions
.iter()
.filter_map(|bytes| UntrustedRlp::new(bytes).as_val().ok())
.collect();
self.notify(|notify| { self.notify(|notify| {
notify.transactions_received(hashes.clone(), peer_id); notify.transactions_received(&txs, peer_id);
}); });
let results = self.importer.miner.import_external_transactions(self, txs); let results = self.importer.miner.import_external_transactions(self, txs);
results.len() results.len()
} }
/// Get shared miner reference. /// Get shared miner reference.
#[cfg(test)]
pub fn miner(&self) -> Arc<Miner> { pub fn miner(&self) -> Arc<Miner> {
self.importer.miner.clone() self.importer.miner.clone()
} }
@ -1915,12 +1928,8 @@ impl BlockChainClient for Client {
} }
} }
fn ready_transactions(&self) -> Vec<PendingTransaction> { fn ready_transactions(&self) -> Vec<Arc<VerifiedTransaction>> {
let (number, timestamp) = { self.importer.miner.ready_transactions(self)
let chain = self.chain.read();
(chain.best_block_number(), chain.best_block_timestamp())
};
self.importer.miner.ready_transactions(number, timestamp)
} }
fn queue_consensus_message(&self, message: Bytes) { fn queue_consensus_message(&self, message: Bytes) {
@ -1951,17 +1960,19 @@ impl BlockChainClient for Client {
} }
} }
fn transact_contract(&self, address: Address, data: Bytes) -> Result<transaction::ImportResult, EthcoreError> { fn transact_contract(&self, address: Address, data: Bytes) -> Result<(), transaction::Error> {
let authoring_params = self.importer.miner.authoring_params();
let transaction = Transaction { let transaction = Transaction {
nonce: self.latest_nonce(&self.importer.miner.author()), nonce: self.latest_nonce(&authoring_params.author),
action: Action::Call(address), action: Action::Call(address),
gas: self.importer.miner.gas_floor_target(), gas: self.importer.miner.sensible_gas_limit(),
gas_price: self.importer.miner.sensible_gas_price(), gas_price: self.importer.miner.sensible_gas_price(),
value: U256::zero(), value: U256::zero(),
data: data, data: data,
}; };
let chain_id = self.engine.signing_chain_id(&self.latest_env_info()); let chain_id = self.engine.signing_chain_id(&self.latest_env_info());
let signature = self.engine.sign(transaction.hash(chain_id))?; let signature = self.engine.sign(transaction.hash(chain_id))
.map_err(|e| transaction::Error::InvalidSignature(e.to_string()))?;
let signed = SignedTransaction::new(transaction.with_signature(signature, chain_id))?; let signed = SignedTransaction::new(transaction.with_signature(signature, chain_id))?;
self.importer.miner.import_own_transaction(self, signed.into()) self.importer.miner.import_own_transaction(self, signed.into())
} }
@ -2070,7 +2081,7 @@ impl ImportSealedBlock for Client {
route route
}; };
let (enacted, retracted) = self.importer.calculate_enacted_retracted(&[route]); let (enacted, retracted) = self.importer.calculate_enacted_retracted(&[route]);
self.importer.miner.chain_new_blocks(self, &[h.clone()], &[], &enacted, &retracted); self.importer.miner.chain_new_blocks(self, &[h.clone()], &[], &enacted, &retracted, true);
self.notify(|notify| { self.notify(|notify| {
notify.new_blocks( notify.new_blocks(
vec![h.clone()], vec![h.clone()],
@ -2108,11 +2119,8 @@ impl BroadcastProposalBlock for Client {
impl SealedBlockImporter for Client {} impl SealedBlockImporter for Client {}
impl MiningBlockChainClient for Client { impl ::miner::TransactionVerifierClient for Client {}
fn vm_factory(&self) -> &VmFactory { impl ::miner::BlockChainClient for Client {}
&self.factories.vm
}
}
impl super::traits::EngineClient for Client { impl super::traits::EngineClient for Client {
fn update_sealing(&self) { fn update_sealing(&self) {
@ -2120,8 +2128,9 @@ impl super::traits::EngineClient for Client {
} }
fn submit_seal(&self, block_hash: H256, seal: Vec<Bytes>) { fn submit_seal(&self, block_hash: H256, seal: Vec<Bytes>) {
if self.importer.miner.submit_seal(self, block_hash, seal).is_err() { let import = self.importer.miner.submit_seal(block_hash, seal).and_then(|block| self.import_sealed_block(block));
warn!(target: "poa", "Wrong internal seal submission!") if let Err(err) = import {
warn!(target: "poa", "Wrong internal seal submission! {:?}", err);
} }
} }


@ -38,7 +38,7 @@ pub use self::traits::{
}; };
//pub use self::private_notify::PrivateNotify; //pub use self::private_notify::PrivateNotify;
pub use state::StateInfo; pub use state::StateInfo;
pub use self::traits::{BlockChainClient, MiningBlockChainClient, EngineClient, ProvingBlockChainClient}; pub use self::traits::{BlockChainClient, EngineClient, ProvingBlockChainClient};
pub use types::ids::*; pub use types::ids::*;
pub use types::trace_filter::Filter as TraceFilter; pub use types::trace_filter::Filter as TraceFilter;


@ -31,11 +31,12 @@ use kvdb_memorydb;
use bytes::Bytes; use bytes::Bytes;
use rlp::{UntrustedRlp, RlpStream}; use rlp::{UntrustedRlp, RlpStream};
use ethkey::{Generator, Random}; use ethkey::{Generator, Random};
use transaction::{self, Transaction, LocalizedTransaction, PendingTransaction, SignedTransaction, Action}; use ethcore_miner::pool::VerifiedTransaction;
use transaction::{self, Transaction, LocalizedTransaction, SignedTransaction, Action};
use blockchain::{TreeRoute, BlockReceipts}; use blockchain::{TreeRoute, BlockReceipts};
use client::{ use client::{
Nonce, Balance, ChainInfo, BlockInfo, ReopenBlock, CallContract, TransactionInfo, RegistryInfo, Nonce, Balance, ChainInfo, BlockInfo, ReopenBlock, CallContract, TransactionInfo, RegistryInfo,
PrepareOpenBlock, BlockChainClient, MiningBlockChainClient, BlockChainInfo, BlockStatus, BlockId, PrepareOpenBlock, BlockChainClient, BlockChainInfo, BlockStatus, BlockId,
TransactionId, UncleId, TraceId, TraceFilter, LastHashes, CallAnalytics, BlockImportError, TransactionId, UncleId, TraceId, TraceFilter, LastHashes, CallAnalytics, BlockImportError,
ProvingBlockChainClient, ScheduleInfo, ImportSealedBlock, BroadcastProposalBlock, ImportBlock, StateOrBlock, ProvingBlockChainClient, ScheduleInfo, ImportSealedBlock, BroadcastProposalBlock, ImportBlock, StateOrBlock,
Call, StateClient, EngineInfo, AccountData, BlockChain, BlockProducer, SealedBlockImporter Call, StateClient, EngineInfo, AccountData, BlockChain, BlockProducer, SealedBlockImporter
@ -45,9 +46,7 @@ use header::{Header as BlockHeader, BlockNumber};
use filter::Filter; use filter::Filter;
use log_entry::LocalizedLogEntry; use log_entry::LocalizedLogEntry;
use receipt::{Receipt, LocalizedReceipt, TransactionOutcome}; use receipt::{Receipt, LocalizedReceipt, TransactionOutcome};
use error::{ImportResult, Error as EthcoreError}; use error::ImportResult;
use evm::VMType;
use factory::VmFactory;
use vm::Schedule; use vm::Schedule;
use miner::{Miner, MinerService}; use miner::{Miner, MinerService};
use spec::Spec; use spec::Spec;
@ -102,8 +101,6 @@ pub struct TestBlockChainClient {
pub miner: Arc<Miner>, pub miner: Arc<Miner>,
/// Spec /// Spec
pub spec: Spec, pub spec: Spec,
/// VM Factory
pub vm_factory: VmFactory,
/// Timestamp assigned to latest sealed block /// Timestamp assigned to latest sealed block
pub latest_block_timestamp: RwLock<u64>, pub latest_block_timestamp: RwLock<u64>,
/// Ancient block info. /// Ancient block info.
@ -174,9 +171,8 @@ impl TestBlockChainClient {
receipts: RwLock::new(HashMap::new()), receipts: RwLock::new(HashMap::new()),
logs: RwLock::new(Vec::new()), logs: RwLock::new(Vec::new()),
queue_size: AtomicUsize::new(0), queue_size: AtomicUsize::new(0),
miner: Arc::new(Miner::with_spec(&spec)), miner: Arc::new(Miner::new_for_tests(&spec, None)),
spec: spec, spec: spec,
vm_factory: VmFactory::new(VMType::Interpreter, 1024 * 1024),
latest_block_timestamp: RwLock::new(10_000_000), latest_block_timestamp: RwLock::new(10_000_000),
ancient_block: RwLock::new(None), ancient_block: RwLock::new(None),
first_block: RwLock::new(None), first_block: RwLock::new(None),
@ -345,8 +341,8 @@ impl TestBlockChainClient {
self.set_balance(signed_tx.sender(), 10_000_000_000_000_000_000u64.into()); self.set_balance(signed_tx.sender(), 10_000_000_000_000_000_000u64.into());
let hash = signed_tx.hash(); let hash = signed_tx.hash();
let res = self.miner.import_external_transactions(self, vec![signed_tx.into()]); let res = self.miner.import_external_transactions(self, vec![signed_tx.into()]);
let res = res.into_iter().next().unwrap().expect("Successful import"); let res = res.into_iter().next().unwrap();
assert_eq!(res, transaction::ImportResult::Current); assert!(res.is_ok());
hash hash
} }
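With the new pool, `import_external_transactions` returns one `Result<(), transaction::Error>` per submitted transaction instead of the old `TransactionImportResult`. A hedged sketch of consuming that vector; the helper name and logging target are illustrative:

```rust
// Sketch: count how many submitted transactions were accepted by the pool.
// Each entry of the returned vector corresponds to one input transaction.
fn import_and_count<C: ::miner::BlockChainClient>(
    miner: &Miner,
    client: &C,
    txs: Vec<UnverifiedTransaction>,
) -> usize {
    miner.import_external_transactions(client, txs)
        .into_iter()
        .filter(|res| {
            if let Err(ref err) = *res {
                debug!(target: "txqueue", "Transaction rejected: {:?}", err);
            }
            res.is_ok()
        })
        .count()
}
```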
@ -423,11 +419,8 @@ impl BroadcastProposalBlock for TestBlockChainClient {
impl SealedBlockImporter for TestBlockChainClient {} impl SealedBlockImporter for TestBlockChainClient {}
impl MiningBlockChainClient for TestBlockChainClient { impl ::miner::TransactionVerifierClient for TestBlockChainClient {}
fn vm_factory(&self) -> &VmFactory { impl ::miner::BlockChainClient for TestBlockChainClient {}
&self.vm_factory
}
}
impl Nonce for TestBlockChainClient { impl Nonce for TestBlockChainClient {
fn nonce(&self, address: &Address, id: BlockId) -> Option<U256> { fn nonce(&self, address: &Address, id: BlockId) -> Option<U256> {
@ -826,9 +819,8 @@ impl BlockChainClient for TestBlockChainClient {
self.spec.engine.handle_message(&message).unwrap(); self.spec.engine.handle_message(&message).unwrap();
} }
fn ready_transactions(&self) -> Vec<PendingTransaction> { fn ready_transactions(&self) -> Vec<Arc<VerifiedTransaction>> {
let info = self.chain_info(); self.miner.ready_transactions(self)
self.miner.ready_transactions(info.best_block_number, info.best_block_timestamp)
} }
fn signing_chain_id(&self) -> Option<u64> { None } fn signing_chain_id(&self) -> Option<u64> { None }
@ -851,9 +843,9 @@ impl BlockChainClient for TestBlockChainClient {
} }
} }
fn transact_contract(&self, address: Address, data: Bytes) -> Result<transaction::ImportResult, EthcoreError> { fn transact_contract(&self, address: Address, data: Bytes) -> Result<(), transaction::Error> {
let transaction = Transaction { let transaction = Transaction {
nonce: self.latest_nonce(&self.miner.author()), nonce: self.latest_nonce(&self.miner.authoring_params().author),
action: Action::Call(address), action: Action::Call(address),
gas: self.spec.gas_limit, gas: self.spec.gas_limit,
gas_price: U256::zero(), gas_price: U256::zero(),
@ -895,8 +887,9 @@ impl super::traits::EngineClient for TestBlockChainClient {
} }
fn submit_seal(&self, block_hash: H256, seal: Vec<Bytes>) { fn submit_seal(&self, block_hash: H256, seal: Vec<Bytes>) {
if self.miner.submit_seal(self, block_hash, seal).is_err() { let import = self.miner.submit_seal(block_hash, seal).and_then(|block| self.import_sealed_block(block));
warn!(target: "poa", "Wrong internal seal submission!") if let Err(err) = import {
warn!(target: "poa", "Wrong internal seal submission! {:?}", err);
} }
} }

View File

@ -15,28 +15,30 @@
// along with Parity. If not, see <http://www.gnu.org/licenses/>. // along with Parity. If not, see <http://www.gnu.org/licenses/>.
use std::collections::BTreeMap; use std::collections::BTreeMap;
use std::sync::Arc;
use itertools::Itertools; use itertools::Itertools;
use block::{OpenBlock, SealedBlock, ClosedBlock}; use block::{OpenBlock, SealedBlock, ClosedBlock};
use blockchain::TreeRoute; use blockchain::TreeRoute;
use encoded; use encoded;
use vm::LastHashes; use vm::LastHashes;
use error::{ImportResult, CallError, Error as EthcoreError, BlockImportError}; use error::{ImportResult, CallError, BlockImportError};
use evm::Schedule; use evm::Schedule;
use factory::VmFactory;
use executive::Executed; use executive::Executed;
use filter::Filter; use filter::Filter;
use header::{BlockNumber}; use header::{BlockNumber};
use log_entry::LocalizedLogEntry; use log_entry::LocalizedLogEntry;
use receipt::LocalizedReceipt; use receipt::LocalizedReceipt;
use trace::LocalizedTrace; use trace::LocalizedTrace;
use transaction::{LocalizedTransaction, PendingTransaction, SignedTransaction, ImportResult as TransactionImportResult}; use transaction::{self, LocalizedTransaction, SignedTransaction};
use verification::queue::QueueInfo as BlockQueueInfo; use verification::queue::QueueInfo as BlockQueueInfo;
use state::StateInfo; use state::StateInfo;
use header::Header; use header::Header;
use engines::EthEngine; use engines::EthEngine;
use ethereum_types::{H256, U256, Address}; use ethereum_types::{H256, U256, Address};
use ethcore_miner::pool::VerifiedTransaction;
use bytes::Bytes; use bytes::Bytes;
use hashdb::DBValue; use hashdb::DBValue;
@ -315,7 +317,7 @@ pub trait BlockChainClient : Sync + Send + AccountData + BlockChain + CallContra
fn queue_consensus_message(&self, message: Bytes); fn queue_consensus_message(&self, message: Bytes);
/// List all transactions that are allowed into the next block. /// List all transactions that are allowed into the next block.
fn ready_transactions(&self) -> Vec<PendingTransaction>; fn ready_transactions(&self) -> Vec<Arc<VerifiedTransaction>>;
/// Sorted list of transaction gas prices from at least last sample_size blocks. /// Sorted list of transaction gas prices from at least last sample_size blocks.
fn gas_price_corpus(&self, sample_size: usize) -> ::stats::Corpus<U256> { fn gas_price_corpus(&self, sample_size: usize) -> ::stats::Corpus<U256> {
@ -366,8 +368,8 @@ pub trait BlockChainClient : Sync + Send + AccountData + BlockChain + CallContra
/// Returns information about pruning/data availability. /// Returns information about pruning/data availability.
fn pruning_info(&self) -> PruningInfo; fn pruning_info(&self) -> PruningInfo;
/// Import a transaction: used for misbehaviour reporting. /// Schedule a state-altering transaction to be executed on the next pending block.
fn transact_contract(&self, address: Address, data: Bytes) -> Result<TransactionImportResult, EthcoreError>; fn transact_contract(&self, address: Address, data: Bytes) -> Result<(), transaction::Error>;
/// Get the address of the registry itself. /// Get the address of the registry itself.
fn registrar_address(&self) -> Option<Address>; fn registrar_address(&self) -> Option<Address>;
@ -416,12 +418,6 @@ pub trait BroadcastProposalBlock {
/// Provides methods to import sealed block and broadcast a block proposal /// Provides methods to import sealed block and broadcast a block proposal
pub trait SealedBlockImporter: ImportSealedBlock + BroadcastProposalBlock {} pub trait SealedBlockImporter: ImportSealedBlock + BroadcastProposalBlock {}
/// Extended client interface used for mining
pub trait MiningBlockChainClient: BlockChainClient + BlockProducer + ScheduleInfo + SealedBlockImporter {
/// Returns EvmFactory.
fn vm_factory(&self) -> &VmFactory;
}
/// Client facilities used by internally sealing Engines. /// Client facilities used by internally sealing Engines.
pub trait EngineClient: Sync + Send + ChainInfo { pub trait EngineClient: Sync + Send + ChainInfo {
/// Make a new block and seal it. /// Make a new block and seal it.

View File

@ -38,7 +38,7 @@ use super::validator_set::{ValidatorSet, SimpleList, new_validator_set};
use self::finality::RollingFinality; use self::finality::RollingFinality;
use ethkey::{public_to_address, recover, verify_address, Signature}; use ethkey::{self, Signature};
use io::{IoContext, IoHandler, TimerToken, IoService}; use io::{IoContext, IoHandler, TimerToken, IoService};
use itertools::{self, Itertools}; use itertools::{self, Itertools};
use rlp::{encode, Decodable, DecoderError, Encodable, RlpStream, UntrustedRlp}; use rlp::{encode, Decodable, DecoderError, Encodable, RlpStream, UntrustedRlp};
@ -292,14 +292,14 @@ impl EmptyStep {
let message = keccak(empty_step_rlp(self.step, &self.parent_hash)); let message = keccak(empty_step_rlp(self.step, &self.parent_hash));
let correct_proposer = step_proposer(validators, &self.parent_hash, self.step); let correct_proposer = step_proposer(validators, &self.parent_hash, self.step);
verify_address(&correct_proposer, &self.signature.into(), &message) ethkey::verify_address(&correct_proposer, &self.signature.into(), &message)
.map_err(|e| e.into()) .map_err(|e| e.into())
} }
fn author(&self) -> Result<Address, Error> { fn author(&self) -> Result<Address, Error> {
let message = keccak(empty_step_rlp(self.step, &self.parent_hash)); let message = keccak(empty_step_rlp(self.step, &self.parent_hash));
let public = recover(&self.signature.into(), &message)?; let public = ethkey::recover(&self.signature.into(), &message)?;
Ok(public_to_address(&public)) Ok(ethkey::public_to_address(&public))
} }
fn sealed(&self) -> SealedEmptyStep { fn sealed(&self) -> SealedEmptyStep {
@ -555,7 +555,7 @@ fn verify_external(header: &Header, validators: &ValidatorSet, empty_steps_trans
}; };
let header_seal_hash = header_seal_hash(header, empty_steps_rlp); let header_seal_hash = header_seal_hash(header, empty_steps_rlp);
!verify_address(&correct_proposer, &proposer_signature, &header_seal_hash)? !ethkey::verify_address(&correct_proposer, &proposer_signature, &header_seal_hash)?
}; };
if is_invalid_proposer { if is_invalid_proposer {
@ -824,7 +824,10 @@ impl Engine<EthereumMachine> for AuthorityRound {
fn generate_seal(&self, block: &ExecutedBlock, parent: &Header) -> Seal { fn generate_seal(&self, block: &ExecutedBlock, parent: &Header) -> Seal {
// first check to avoid generating signature most of the time // first check to avoid generating signature most of the time
// (but there's still a race to the `compare_and_swap`) // (but there's still a race to the `compare_and_swap`)
if !self.can_propose.load(AtomicOrdering::SeqCst) { return Seal::None; } if !self.can_propose.load(AtomicOrdering::SeqCst) {
trace!(target: "engine", "Aborting seal generation. Can't propose.");
return Seal::None;
}
let header = block.header(); let header = block.header();
let parent_step: U256 = header_step(parent, self.empty_steps_transition) let parent_step: U256 = header_step(parent, self.empty_steps_transition)
@ -1305,7 +1308,7 @@ impl Engine<EthereumMachine> for AuthorityRound {
} }
fn sign(&self, hash: H256) -> Result<Signature, Error> { fn sign(&self, hash: H256) -> Result<Signature, Error> {
self.signer.read().sign(hash).map_err(Into::into) Ok(self.signer.read().sign(hash)?)
} }
fn snapshot_components(&self) -> Option<Box<::snapshot::SnapshotComponents>> { fn snapshot_components(&self) -> Option<Box<::snapshot::SnapshotComponents>> {

View File

@ -19,7 +19,7 @@
use std::sync::{Weak, Arc}; use std::sync::{Weak, Arc};
use ethereum_types::{H256, H520, Address}; use ethereum_types::{H256, H520, Address};
use parking_lot::RwLock; use parking_lot::RwLock;
use ethkey::{recover, public_to_address, Signature}; use ethkey::{self, Signature};
use account_provider::AccountProvider; use account_provider::AccountProvider;
use block::*; use block::*;
use engines::{Engine, Seal, ConstructedVerifier, EngineError}; use engines::{Engine, Seal, ConstructedVerifier, EngineError};
@ -61,7 +61,7 @@ fn verify_external(header: &Header, validators: &ValidatorSet) -> Result<(), Err
// Check if the signature belongs to a validator, can depend on parent state. // Check if the signature belongs to a validator, can depend on parent state.
let sig = UntrustedRlp::new(&header.seal()[0]).as_val::<H520>()?; let sig = UntrustedRlp::new(&header.seal()[0]).as_val::<H520>()?;
let signer = public_to_address(&recover(&sig.into(), &header.bare_hash())?); let signer = ethkey::public_to_address(&ethkey::recover(&sig.into(), &header.bare_hash())?);
if *header.author() != signer { if *header.author() != signer {
return Err(EngineError::NotAuthorized(*header.author()).into()) return Err(EngineError::NotAuthorized(*header.author()).into())
@ -185,7 +185,7 @@ impl Engine<EthereumMachine> for BasicAuthority {
} }
fn sign(&self, hash: H256) -> Result<Signature, Error> { fn sign(&self, hash: H256) -> Result<Signature, Error> {
self.signer.read().sign(hash).map_err(Into::into) Ok(self.signer.read().sign(hash)?)
} }
fn snapshot_components(&self) -> Option<Box<::snapshot::SnapshotComponents>> { fn snapshot_components(&self) -> Option<Box<::snapshot::SnapshotComponents>> {

View File

@ -48,7 +48,7 @@ use error::Error;
use header::{Header, BlockNumber}; use header::{Header, BlockNumber};
use snapshot::SnapshotComponents; use snapshot::SnapshotComponents;
use spec::CommonParams; use spec::CommonParams;
use transaction::{UnverifiedTransaction, SignedTransaction}; use transaction::{self, UnverifiedTransaction, SignedTransaction};
use ethkey::Signature; use ethkey::Signature;
use parity_machine::{Machine, LocalizedMachine as Localized}; use parity_machine::{Machine, LocalizedMachine as Localized};
@ -387,14 +387,28 @@ pub trait EthEngine: Engine<::machine::EthereumMachine> {
} }
/// Verify a particular transaction is valid. /// Verify a particular transaction is valid.
fn verify_transaction_unordered(&self, t: UnverifiedTransaction, header: &Header) -> Result<SignedTransaction, Error> { ///
/// Unordered verification doesn't rely on the transaction execution order,
/// i.e. it should only perform checks that don't assume any previous transactions /// i.e. it should only perform checks that don't assume any previous transactions
/// have already been verified and executed. /// have already been verified and executed.
///
/// NOTE: This function consumes an `UnverifiedTransaction` and produces a `SignedTransaction`,
/// which implies that the relatively heavy signature check is performed here.
fn verify_transaction_unordered(&self, t: UnverifiedTransaction, header: &Header) -> Result<SignedTransaction, transaction::Error> {
self.machine().verify_transaction_unordered(t, header) self.machine().verify_transaction_unordered(t, header)
} }
/// Additional verification for transactions in blocks. /// Perform basic/cheap transaction verification.
// TODO: Add flags for which bits of the transaction to check. ///
// TODO: consider including State in the params. /// This should include all cheap checks that can be done before
fn verify_transaction_basic(&self, t: &UnverifiedTransaction, header: &Header) -> Result<(), Error> { /// actually checking the signature, like chain-replay protection.
///
/// NOTE: This is done before the signature is recovered, so avoid
/// doing any state-touching checks that might be expensive.
///
/// TODO: Add flags for which bits of the transaction to check.
/// TODO: consider including State in the params.
fn verify_transaction_basic(&self, t: &UnverifiedTransaction, header: &Header) -> Result<(), transaction::Error> {
self.machine().verify_transaction_basic(t, header) self.machine().verify_transaction_basic(t, header)
} }

View File

@ -37,7 +37,7 @@ use bytes::Bytes;
use error::{Error, BlockError}; use error::{Error, BlockError};
use header::{Header, BlockNumber}; use header::{Header, BlockNumber};
use rlp::UntrustedRlp; use rlp::UntrustedRlp;
use ethkey::{Message, public_to_address, recover, Signature}; use ethkey::{self, Message, Signature};
use account_provider::AccountProvider; use account_provider::AccountProvider;
use block::*; use block::*;
use engines::{Engine, Seal, EngineError, ConstructedVerifier}; use engines::{Engine, Seal, EngineError, ConstructedVerifier};
@ -518,8 +518,8 @@ impl Engine<EthereumMachine> for Tendermint {
let message: ConsensusMessage = rlp.as_val().map_err(fmt_err)?; let message: ConsensusMessage = rlp.as_val().map_err(fmt_err)?;
if !self.votes.is_old_or_known(&message) { if !self.votes.is_old_or_known(&message) {
let msg_hash = keccak(rlp.at(1).map_err(fmt_err)?.as_raw()); let msg_hash = keccak(rlp.at(1).map_err(fmt_err)?.as_raw());
let sender = public_to_address( let sender = ethkey::public_to_address(
&recover(&message.signature.into(), &msg_hash).map_err(fmt_err)? &ethkey::recover(&message.signature.into(), &msg_hash).map_err(fmt_err)?
); );
if !self.is_authority(&sender) { if !self.is_authority(&sender) {
@ -614,7 +614,7 @@ impl Engine<EthereumMachine> for Tendermint {
}; };
let address = match self.votes.get(&precommit) { let address = match self.votes.get(&precommit) {
Some(a) => a, Some(a) => a,
None => public_to_address(&recover(&precommit.signature.into(), &precommit_hash)?), None => ethkey::public_to_address(&ethkey::recover(&precommit.signature.into(), &precommit_hash)?),
}; };
if !self.validators.contains(header.parent_hash(), &address) { if !self.validators.contains(header.parent_hash(), &address) {
return Err(EngineError::NotAuthorized(address.to_owned()).into()); return Err(EngineError::NotAuthorized(address.to_owned()).into());
@ -669,7 +669,7 @@ impl Engine<EthereumMachine> for Tendermint {
let verifier = Box::new(EpochVerifier { let verifier = Box::new(EpochVerifier {
subchain_validators: list, subchain_validators: list,
recover: |signature: &Signature, message: &Message| { recover: |signature: &Signature, message: &Message| {
Ok(public_to_address(&::ethkey::recover(&signature, &message)?)) Ok(ethkey::public_to_address(&ethkey::recover(&signature, &message)?))
}, },
}); });
@ -690,7 +690,7 @@ impl Engine<EthereumMachine> for Tendermint {
} }
fn sign(&self, hash: H256) -> Result<Signature, Error> { fn sign(&self, hash: H256) -> Result<Signature, Error> {
self.signer.read().sign(hash).map_err(Into::into) Ok(self.signer.read().sign(hash)?)
} }
fn snapshot_components(&self) -> Option<Box<::snapshot::SnapshotComponents>> { fn snapshot_components(&self) -> Option<Box<::snapshot::SnapshotComponents>> {
@ -1026,7 +1026,7 @@ mod tests {
let client = generate_dummy_client_with_spec_and_accounts(Spec::new_test_tendermint, Some(tap.clone())); let client = generate_dummy_client_with_spec_and_accounts(Spec::new_test_tendermint, Some(tap.clone()));
let engine = client.engine(); let engine = client.engine();
client.miner().set_engine_signer(v1.clone(), "1".into()).unwrap(); client.miner().set_author(v1.clone(), Some("1".into())).unwrap();
let notify = Arc::new(TestNotify::default()); let notify = Arc::new(TestNotify::default());
client.add_notify(notify.clone()); client.add_notify(notify.clone());

View File

@ -169,8 +169,8 @@ mod tests {
let validator_contract = "0000000000000000000000000000000000000005".parse::<Address>().unwrap(); let validator_contract = "0000000000000000000000000000000000000005".parse::<Address>().unwrap();
// Make sure reporting can be done. // Make sure reporting can be done.
client.miner().set_gas_floor_target(1_000_000.into()); client.miner().set_gas_range_target((1_000_000.into(), 1_000_000.into()));
client.miner().set_engine_signer(v1, "".into()).unwrap(); client.miner().set_author(v1, Some("".into())).unwrap();
// Check a block that is a bit in future, reject it but don't report the validator. // Check a block that is a bit in future, reject it but don't report the validator.
let mut header = Header::default(); let mut header = Header::default();

View File

@ -171,22 +171,22 @@ mod tests {
client.engine().register_client(Arc::downgrade(&client) as _); client.engine().register_client(Arc::downgrade(&client) as _);
// Make sure txs go through. // Make sure txs go through.
client.miner().set_gas_floor_target(1_000_000.into()); client.miner().set_gas_range_target((1_000_000.into(), 1_000_000.into()));
// Wrong signer for the first block. // Wrong signer for the first block.
client.miner().set_engine_signer(v1, "".into()).unwrap(); client.miner().set_author(v1, Some("".into())).unwrap();
client.transact_contract(Default::default(), Default::default()).unwrap(); client.transact_contract(Default::default(), Default::default()).unwrap();
::client::EngineClient::update_sealing(&*client); ::client::EngineClient::update_sealing(&*client);
assert_eq!(client.chain_info().best_block_number, 0); assert_eq!(client.chain_info().best_block_number, 0);
// Right signer for the first block. // Right signer for the first block.
client.miner().set_engine_signer(v0, "".into()).unwrap(); client.miner().set_author(v0, Some("".into())).unwrap();
::client::EngineClient::update_sealing(&*client); ::client::EngineClient::update_sealing(&*client);
assert_eq!(client.chain_info().best_block_number, 1); assert_eq!(client.chain_info().best_block_number, 1);
// This time v0 is wrong. // This time v0 is wrong.
client.transact_contract(Default::default(), Default::default()).unwrap(); client.transact_contract(Default::default(), Default::default()).unwrap();
::client::EngineClient::update_sealing(&*client); ::client::EngineClient::update_sealing(&*client);
assert_eq!(client.chain_info().best_block_number, 1); assert_eq!(client.chain_info().best_block_number, 1);
client.miner().set_engine_signer(v1, "".into()).unwrap(); client.miner().set_author(v1, Some("".into())).unwrap();
::client::EngineClient::update_sealing(&*client); ::client::EngineClient::update_sealing(&*client);
assert_eq!(client.chain_info().best_block_number, 2); assert_eq!(client.chain_info().best_block_number, 2);
// v1 is still good. // v1 is still good.

View File

@ -484,7 +484,7 @@ mod tests {
client.engine().register_client(Arc::downgrade(&client) as _); client.engine().register_client(Arc::downgrade(&client) as _);
let validator_contract = "0000000000000000000000000000000000000005".parse::<Address>().unwrap(); let validator_contract = "0000000000000000000000000000000000000005".parse::<Address>().unwrap();
client.miner().set_engine_signer(v1, "".into()).unwrap(); client.miner().set_author(v1, Some("".into())).unwrap();
// Remove "1" validator. // Remove "1" validator.
let tx = Transaction { let tx = Transaction {
nonce: 0.into(), nonce: 0.into(),
@ -512,11 +512,11 @@ mod tests {
assert_eq!(client.chain_info().best_block_number, 1); assert_eq!(client.chain_info().best_block_number, 1);
// Switch to the validator that is still there. // Switch to the validator that is still there.
client.miner().set_engine_signer(v0, "".into()).unwrap(); client.miner().set_author(v0, Some("".into())).unwrap();
::client::EngineClient::update_sealing(&*client); ::client::EngineClient::update_sealing(&*client);
assert_eq!(client.chain_info().best_block_number, 2); assert_eq!(client.chain_info().best_block_number, 2);
// Switch back to the added validator, since the state is updated. // Switch back to the added validator, since the state is updated.
client.miner().set_engine_signer(v1, "".into()).unwrap(); client.miner().set_author(v1, Some("".into())).unwrap();
let tx = Transaction { let tx = Transaction {
nonce: 2.into(), nonce: 2.into(),
gas_price: 0.into(), gas_price: 0.into(),

View File

@ -64,7 +64,7 @@ pub fn json_chain_test(json_data: &[u8]) -> Vec<String> {
config, config,
&spec, &spec,
db, db,
Arc::new(Miner::with_spec(&spec)), Arc::new(Miner::new_for_tests(&spec, None)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();
for b in &blockchain.blocks_rlp() { for b in &blockchain.blocks_rlp() {

View File

@ -71,7 +71,6 @@ extern crate ethcore_transaction as transaction;
extern crate ethereum_types; extern crate ethereum_types;
extern crate ethjson; extern crate ethjson;
extern crate ethkey; extern crate ethkey;
extern crate futures_cpupool;
extern crate hardware_wallet; extern crate hardware_wallet;
extern crate hashdb; extern crate hashdb;
extern crate itertools; extern crate itertools;
@ -80,7 +79,6 @@ extern crate num_cpus;
extern crate num; extern crate num;
extern crate parity_machine; extern crate parity_machine;
extern crate parking_lot; extern crate parking_lot;
extern crate price_info;
extern crate rand; extern crate rand;
extern crate rayon; extern crate rayon;
extern crate rlp; extern crate rlp;
@ -99,18 +97,10 @@ extern crate util_error;
extern crate snappy; extern crate snappy;
extern crate ethabi; extern crate ethabi;
#[macro_use]
extern crate ethabi_derive;
#[macro_use]
extern crate ethabi_contract;
#[macro_use]
extern crate rlp_derive;
extern crate rustc_hex; extern crate rustc_hex;
extern crate stats; extern crate stats;
extern crate stop_guard; extern crate stop_guard;
extern crate using_queue; extern crate using_queue;
extern crate table;
extern crate vm; extern crate vm;
extern crate wasm; extern crate wasm;
extern crate memory_cache; extern crate memory_cache;
@ -119,13 +109,20 @@ extern crate journaldb;
extern crate tempdir; extern crate tempdir;
#[macro_use] #[macro_use]
extern crate macros; extern crate ethabi_derive;
#[macro_use]
extern crate ethabi_contract;
#[macro_use] #[macro_use]
extern crate log; extern crate log;
#[macro_use] #[macro_use]
extern crate lazy_static; extern crate lazy_static;
#[macro_use] #[macro_use]
extern crate macros;
#[macro_use]
extern crate rlp_derive;
#[macro_use]
extern crate trace_time; extern crate trace_time;
#[cfg_attr(test, macro_use)] #[cfg_attr(test, macro_use)]
extern crate evm; extern crate evm;

View File

@ -334,12 +334,12 @@ impl EthereumMachine {
} }
/// Verify a particular transaction is valid, regardless of order. /// Verify a particular transaction is valid, regardless of order.
pub fn verify_transaction_unordered(&self, t: UnverifiedTransaction, _header: &Header) -> Result<SignedTransaction, Error> { pub fn verify_transaction_unordered(&self, t: UnverifiedTransaction, _header: &Header) -> Result<SignedTransaction, transaction::Error> {
Ok(SignedTransaction::new(t)?) Ok(SignedTransaction::new(t)?)
} }
/// Does basic verification of the transaction. /// Does basic verification of the transaction.
pub fn verify_transaction_basic(&self, t: &UnverifiedTransaction, header: &Header) -> Result<(), Error> { pub fn verify_transaction_basic(&self, t: &UnverifiedTransaction, header: &Header) -> Result<(), transaction::Error> {
let check_low_s = match self.ethash_extensions { let check_low_s = match self.ethash_extensions {
Some(ref ext) => header.number() >= ext.homestead_transition, Some(ref ext) => header.number() >= ext.homestead_transition,
None => true, None => true,
@ -358,9 +358,9 @@ impl EthereumMachine {
} }
/// Does verification of the transaction against the parent state. /// Does verification of the transaction against the parent state.
// TODO: refine the bound here to be a "state provider" or similar as opposed pub fn verify_transaction<C: BlockInfo + CallContract>(&self, t: &SignedTransaction, header: &Header, client: &C)
// to full client functionality. -> Result<(), transaction::Error>
pub fn verify_transaction<C: BlockInfo + CallContract>(&self, t: &SignedTransaction, header: &Header, client: &C) -> Result<(), Error> { {
if let Some(ref filter) = self.tx_filter.as_ref() { if let Some(ref filter) = self.tx_filter.as_ref() {
if !filter.transaction_allowed(header.parent_hash(), t, client) { if !filter.transaction_allowed(header.parent_hash(), t, client) {
return Err(transaction::Error::NotAllowed.into()) return Err(transaction::Error::NotAllowed.into())

File diff suppressed because it is too large

View File

@ -17,173 +17,87 @@
#![warn(missing_docs)] #![warn(missing_docs)]
//! Miner module //! Miner module
//! Keeps track of transactions and mined block. //! Keeps track of transactions and the currently sealed pending block.
//!
//! Usage example:
//!
//! ```rust
//! extern crate ethcore;
//! use std::env;
//! use ethcore::ethereum;
//! use ethcore::client::{Client, ClientConfig};
//! use ethcore::miner::{Miner, MinerService};
//!
//! fn main() {
//! let miner: Miner = Miner::with_spec(&ethereum::new_foundation(&env::temp_dir()));
//! // get status
//! assert_eq!(miner.status().transactions_in_pending_queue, 0);
//!
//! // Check block for sealing
//! //assert!(miner.sealing_block(&*client).lock().is_some());
//! }
//! ```
mod miner; mod miner;
mod stratum;
mod service_transaction_checker; mod service_transaction_checker;
pub use self::miner::{Miner, MinerOptions, Banning, PendingSet, GasPricer, GasPriceCalibratorOptions, GasLimit}; pub mod pool_client;
pub use self::stratum::{Stratum, Error as StratumError, Options as StratumOptions}; pub mod stratum;
pub use ethcore_miner::local_transactions::Status as LocalTransactionStatus; pub use self::miner::{Miner, MinerOptions, Penalization, PendingSet, AuthoringParams};
use std::sync::Arc;
use std::collections::BTreeMap; use std::collections::BTreeMap;
use block::{ClosedBlock, Block};
use bytes::Bytes; use bytes::Bytes;
use client::{
MiningBlockChainClient, CallContract, RegistryInfo, ScheduleInfo,
BlockChain, AccountData, BlockProducer, SealedBlockImporter
};
use error::{Error};
use ethereum_types::{H256, U256, Address}; use ethereum_types::{H256, U256, Address};
use ethcore_miner::pool::{VerifiedTransaction, QueueStatus, local_transactions};
use block::{Block, SealedBlock};
use client::{
CallContract, RegistryInfo, ScheduleInfo,
BlockChain, BlockProducer, SealedBlockImporter, ChainInfo,
AccountData, Nonce,
};
use error::Error;
use header::{BlockNumber, Header}; use header::{BlockNumber, Header};
use receipt::{RichReceipt, Receipt}; use receipt::{RichReceipt, Receipt};
use transaction::{UnverifiedTransaction, PendingTransaction, ImportResult as TransactionImportResult}; use transaction::{self, UnverifiedTransaction, SignedTransaction, PendingTransaction};
use state::StateInfo; use state::StateInfo;
/// Provides methods to verify incoming external transactions
pub trait TransactionVerifierClient: Send + Sync
// Required for ServiceTransactionChecker
+ CallContract + RegistryInfo
// Required for verifying transactions
+ BlockChain + ScheduleInfo + AccountData
{}
/// Extended client interface used for mining
pub trait BlockChainClient: TransactionVerifierClient + BlockProducer + SealedBlockImporter {}
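Both new traits are empty marker traits, so any client that already satisfies the super-trait bounds opts in with one-line impls, exactly as `Client` and `TestBlockChainClient` do earlier in this diff. A hedged sketch; `MyClient` is a hypothetical client type assumed to implement the required super-traits:

```rust
// Hypothetical client type; assumed to already implement CallContract,
// RegistryInfo, BlockChain, ScheduleInfo, AccountData, BlockProducer
// and SealedBlockImporter.
impl ::miner::TransactionVerifierClient for MyClient {}
impl ::miner::BlockChainClient for MyClient {}

// Functions that previously took a `&MiningBlockChainClient` can now be
// generic over the composed marker trait, e.g.:
fn notify_new_blocks<C: ::miner::BlockChainClient>(miner: &Miner, client: &C, enacted: &[H256]) {
    miner.chain_new_blocks(client, &[], &[], enacted, &[], false);
}
```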
/// Miner client API /// Miner client API
pub trait MinerService : Send + Sync { pub trait MinerService : Send + Sync {
/// Type representing chain state /// Type representing chain state
type State: StateInfo + 'static; type State: StateInfo + 'static;
/// Returns miner's status. // Sealing
fn status(&self) -> MinerStatus;
/// Get the author that we will seal blocks as.
fn author(&self) -> Address;
/// Set the author that we will seal blocks as.
fn set_author(&self, author: Address);
/// Set info necessary to sign consensus messages.
fn set_engine_signer(&self, address: Address, password: String) -> Result<(), ::account_provider::SignError>;
/// Get the extra_data that we will seal blocks with.
fn extra_data(&self) -> Bytes;
/// Set the extra_data that we will seal blocks with.
fn set_extra_data(&self, extra_data: Bytes);
/// Get current minimal gas price for transactions accepted to queue.
fn minimal_gas_price(&self) -> U256;
/// Set minimal gas price of transaction to be accepted for mining.
fn set_minimal_gas_price(&self, min_gas_price: U256);
/// Get the lower bound of the gas limit we wish to target when sealing a new block.
fn gas_floor_target(&self) -> U256;
/// Get the upper bound of the gas limit we wish to target when sealing a new block.
fn gas_ceil_target(&self) -> U256;
// TODO: coalesce into single set_range function.
/// Set the lower bound of gas limit we wish to target when sealing a new block.
fn set_gas_floor_target(&self, target: U256);
/// Set the upper bound of gas limit we wish to target when sealing a new block.
fn set_gas_ceil_target(&self, target: U256);
/// Get current transactions limit in queue.
fn transactions_limit(&self) -> usize;
/// Set maximal number of transactions kept in the queue (both current and future).
fn set_transactions_limit(&self, limit: usize);
/// Set maximum amount of gas allowed for any single transaction to mine.
fn set_tx_gas_limit(&self, limit: U256);
/// Imports transactions to transaction queue.
fn import_external_transactions<C: MiningBlockChainClient>(&self, client: &C, transactions: Vec<UnverifiedTransaction>) ->
Vec<Result<TransactionImportResult, Error>>;
/// Imports own (node owner) transaction to queue.
fn import_own_transaction<C: MiningBlockChainClient>(&self, chain: &C, transaction: PendingTransaction) ->
Result<TransactionImportResult, Error>;
/// Returns hashes of transactions currently in pending
fn pending_transactions_hashes(&self, best_block: BlockNumber) -> Vec<H256>;
/// Removes all transactions from the queue and restart mining operation.
fn clear_and_reset<C: MiningBlockChainClient>(&self, chain: &C);
/// Called when blocks are imported to chain, updates transactions queue.
fn chain_new_blocks<C>(&self, chain: &C, imported: &[H256], invalid: &[H256], enacted: &[H256], retracted: &[H256])
where C: AccountData + BlockChain + CallContract + RegistryInfo + BlockProducer + ScheduleInfo + SealedBlockImporter;
/// PoW chain - can produce work package
fn can_produce_work_package(&self) -> bool;
/// New chain head event. Restart mining operation.
fn update_sealing<C>(&self, chain: &C)
where C: AccountData + BlockChain + RegistryInfo + CallContract + BlockProducer + SealedBlockImporter;
/// Submit `seal` as a valid solution for the header of `pow_hash`. /// Submit `seal` as a valid solution for the header of `pow_hash`.
/// Will check the seal, but not actually insert the block into the chain. /// Will check the seal, but not actually insert the block into the chain.
fn submit_seal<C: SealedBlockImporter>(&self, chain: &C, pow_hash: H256, seal: Vec<Bytes>) -> Result<(), Error>; fn submit_seal(&self, pow_hash: H256, seal: Vec<Bytes>) -> Result<SealedBlock, Error>;
/// Get the sealing work package and if `Some`, apply some transform.
fn map_sealing_work<C, F, T>(&self, client: &C, f: F) -> Option<T>
where C: AccountData + BlockChain + BlockProducer + CallContract,
F: FnOnce(&ClosedBlock) -> T,
Self: Sized;
/// Query pending transactions for hash.
fn transaction(&self, best_block: BlockNumber, hash: &H256) -> Option<PendingTransaction>;
/// Removes transaction from the queue.
/// NOTE: The transaction is not removed from pending block if mining.
fn remove_pending_transaction<C: AccountData>(&self, chain: &C, hash: &H256) -> Option<PendingTransaction>;
/// Get a list of all pending transactions in the queue.
fn pending_transactions(&self) -> Vec<PendingTransaction>;
/// Get a list of all transactions that can go into the given block.
fn ready_transactions(&self, best_block: BlockNumber, best_block_timestamp: u64) -> Vec<PendingTransaction>;
/// Get a list of all future transactions.
fn future_transactions(&self) -> Vec<PendingTransaction>;
/// Get a list of local transactions with statuses.
fn local_transactions(&self) -> BTreeMap<H256, LocalTransactionStatus>;
/// Get a list of all pending receipts.
fn pending_receipts(&self, best_block: BlockNumber) -> BTreeMap<H256, Receipt>;
/// Get a particular reciept.
fn pending_receipt(&self, best_block: BlockNumber, hash: &H256) -> Option<RichReceipt>;
/// Returns highest transaction nonce for given address.
fn last_nonce(&self, address: &Address) -> Option<U256>;
/// Is it currently sealing? /// Is it currently sealing?
fn is_currently_sealing(&self) -> bool; fn is_currently_sealing(&self) -> bool;
/// Suggested gas price. /// Get the sealing work package, preparing it if it doesn't exist yet.
fn sensible_gas_price(&self) -> U256; ///
/// Returns `None` if engine seals internally.
fn work_package<C>(&self, chain: &C) -> Option<(H256, BlockNumber, u64, U256)>
where C: BlockChain + CallContract + BlockProducer + SealedBlockImporter + Nonce + Sync;
/// Suggested gas limit. /// Update current pending block
fn sensible_gas_limit(&self) -> U256 { 21000.into() } fn update_sealing<C>(&self, chain: &C)
where C: BlockChain + CallContract + BlockProducer + SealedBlockImporter + Nonce + Sync;
// Notifications
/// Called when blocks are imported to chain, updates transactions queue.
/// `is_internal_import` indicates that the block has just been created in miner and internally sealed by the engine,
/// so we shouldn't attempt to create a new block again.
fn chain_new_blocks<C>(&self, chain: &C, imported: &[H256], invalid: &[H256], enacted: &[H256], retracted: &[H256], is_internal_import: bool)
where C: BlockChainClient;
// Pending block
/// Get a list of all pending receipts from pending block.
fn pending_receipts(&self, best_block: BlockNumber) -> Option<BTreeMap<H256, Receipt>>;
/// Get a particular receipt from pending block.
fn pending_receipt(&self, best_block: BlockNumber, hash: &H256) -> Option<RichReceipt>;
/// Get `Some` `clone()` of the current pending block's state or `None` if we're not sealing. /// Get `Some` `clone()` of the current pending block's state or `None` if we're not sealing.
fn pending_state(&self, latest_block_number: BlockNumber) -> Option<Self::State>; fn pending_state(&self, latest_block_number: BlockNumber) -> Option<Self::State>;
@ -193,15 +107,79 @@ pub trait MinerService : Send + Sync {
/// Get `Some` `clone()` of the current pending block or `None` if we're not sealing. /// Get `Some` `clone()` of the current pending block or `None` if we're not sealing.
fn pending_block(&self, latest_block_number: BlockNumber) -> Option<Block>; fn pending_block(&self, latest_block_number: BlockNumber) -> Option<Block>;
}
/// Mining status /// Get `Some` `clone()` of the current pending block transactions or `None` if we're not sealing.
#[derive(Debug)] fn pending_transactions(&self, latest_block_number: BlockNumber) -> Option<Vec<SignedTransaction>>;
pub struct MinerStatus {
/// Number of transactions in queue with state `pending` (ready to be included in block) // Block authoring
pub transactions_in_pending_queue: usize,
/// Number of transactions in queue with state `future` (not yet ready to be included in block) /// Get current authoring parameters.
pub transactions_in_future_queue: usize, fn authoring_params(&self) -> AuthoringParams;
/// Number of transactions included in currently mined block
pub transactions_in_pending_block: usize, /// Set the lower and upper bound of gas limit we wish to target when sealing a new block.
fn set_gas_range_target(&self, gas_range_target: (U256, U256));
/// Set the extra_data that we will seal blocks with.
fn set_extra_data(&self, extra_data: Bytes);
/// Set info necessary to sign consensus messages and block authoring.
///
/// On PoW, the password is optional.
fn set_author(&self, address: Address, password: Option<String>) -> Result<(), ::account_provider::SignError>;
// Transaction Pool
/// Imports transactions to transaction queue.
fn import_external_transactions<C>(&self, client: &C, transactions: Vec<UnverifiedTransaction>)
-> Vec<Result<(), transaction::Error>>
where C: BlockChainClient;
/// Imports own (node owner) transaction to queue.
fn import_own_transaction<C>(&self, chain: &C, transaction: PendingTransaction)
-> Result<(), transaction::Error>
where C: BlockChainClient;
/// Removes a transaction from the pool.
///
/// Attempts to "cancel" a transaction. If it has not been propagated yet (or not accepted by other peers),
/// there is a good chance that it will actually be removed.
/// NOTE: The transaction is not removed from the pending block, if there is one.
fn remove_transaction(&self, hash: &H256) -> Option<Arc<VerifiedTransaction>>;
/// Query a transaction from the pool given its hash.
fn transaction(&self, hash: &H256) -> Option<Arc<VerifiedTransaction>>;
/// Returns the next valid nonce for a given address.
///
/// This includes nonces of all transactions from this address in the pending queue
/// if they are consecutive.
/// NOTE: The pool may contain some future transactions that will become pending once
/// a transaction with the nonce returned from this function is imported.
fn next_nonce<C>(&self, chain: &C, address: &Address) -> U256
where C: Nonce + Sync;
/// Get a list of all ready transactions.
///
/// Depending on the settings may look in transaction pool or only in pending block.
fn ready_transactions<C>(&self, chain: &C) -> Vec<Arc<VerifiedTransaction>>
where C: ChainInfo + Nonce + Sync;
/// Get a list of all transactions in the pool (some of them might not be ready for inclusion yet).
fn queued_transactions(&self) -> Vec<Arc<VerifiedTransaction>>;
/// Get a list of local transactions with statuses.
fn local_transactions(&self) -> BTreeMap<H256, local_transactions::Status>;
/// Get current queue status.
///
/// Status includes verification thresholds and current pool utilization and limits.
fn queue_status(&self) -> QueueStatus;
// Misc
/// Suggested gas price.
fn sensible_gas_price(&self) -> U256;
/// Suggested gas limit.
fn sensible_gas_limit(&self) -> U256;
} }

View File

@ -0,0 +1,216 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Blockchain access for transaction pool.
use std::fmt;
use std::collections::HashMap;
use ethereum_types::{H256, U256, Address};
use ethcore_miner::pool;
use ethcore_miner::pool::client::NonceClient;
use transaction::{
self,
UnverifiedTransaction,
SignedTransaction,
};
use parking_lot::RwLock;
use account_provider::AccountProvider;
use client::{TransactionId, BlockInfo, CallContract, Nonce};
use engines::EthEngine;
use header::Header;
use miner;
use miner::service_transaction_checker::ServiceTransactionChecker;
type NoncesCache = RwLock<HashMap<Address, U256>>;
const MAX_NONCE_CACHE_SIZE: usize = 4096;
const EXPECTED_NONCE_CACHE_SIZE: usize = 2048;
/// Blockchain access for the transaction pool.
pub struct PoolClient<'a, C: 'a> {
chain: &'a C,
cached_nonces: CachedNonceClient<'a, C>,
engine: &'a EthEngine,
accounts: Option<&'a AccountProvider>,
best_block_header: Header,
service_transaction_checker: Option<ServiceTransactionChecker>,
}
impl<'a, C: 'a> Clone for PoolClient<'a, C> {
fn clone(&self) -> Self {
PoolClient {
chain: self.chain,
cached_nonces: self.cached_nonces.clone(),
engine: self.engine,
accounts: self.accounts.clone(),
best_block_header: self.best_block_header.clone(),
service_transaction_checker: self.service_transaction_checker.clone(),
}
}
}
impl<'a, C: 'a> PoolClient<'a, C> where
C: BlockInfo + CallContract,
{
/// Creates new client given chain, nonce cache, accounts and service transaction verifier.
pub fn new(
chain: &'a C,
cache: &'a NoncesCache,
engine: &'a EthEngine,
accounts: Option<&'a AccountProvider>,
refuse_service_transactions: bool,
) -> Self {
let best_block_header = chain.best_block_header();
PoolClient {
chain,
cached_nonces: CachedNonceClient::new(chain, cache),
engine,
accounts,
best_block_header,
service_transaction_checker: if refuse_service_transactions {
None
} else {
Some(Default::default())
},
}
}
/// Verifies if signed transaction is executable.
///
/// This should perform any verifications that rely on chain status.
pub fn verify_signed(&self, tx: &SignedTransaction) -> Result<(), transaction::Error> {
self.engine.machine().verify_transaction(&tx, &self.best_block_header, self.chain)
}
}
impl<'a, C: 'a> fmt::Debug for PoolClient<'a, C> {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
write!(fmt, "PoolClient")
}
}
impl<'a, C: 'a> pool::client::Client for PoolClient<'a, C> where
C: miner::TransactionVerifierClient + Sync,
{
fn transaction_already_included(&self, hash: &H256) -> bool {
self.chain.transaction_block(TransactionId::Hash(*hash)).is_some()
}
fn verify_transaction(&self, tx: UnverifiedTransaction)-> Result<SignedTransaction, transaction::Error> {
self.engine.verify_transaction_basic(&tx, &self.best_block_header)?;
let tx = self.engine.verify_transaction_unordered(tx, &self.best_block_header)?;
self.verify_signed(&tx)?;
Ok(tx)
}
fn account_details(&self, address: &Address) -> pool::client::AccountDetails {
pool::client::AccountDetails {
nonce: self.cached_nonces.account_nonce(address),
balance: self.chain.latest_balance(address),
is_local: self.accounts.map_or(false, |accounts| accounts.has_account(*address).unwrap_or(false)),
}
}
fn required_gas(&self, tx: &transaction::Transaction) -> U256 {
tx.gas_required(&self.chain.latest_schedule()).into()
}
fn transaction_type(&self, tx: &SignedTransaction) -> pool::client::TransactionType {
match self.service_transaction_checker {
None => pool::client::TransactionType::Regular,
Some(ref checker) => match checker.check(self.chain, &tx) {
Ok(true) => pool::client::TransactionType::Service,
Ok(false) => pool::client::TransactionType::Regular,
Err(e) => {
debug!(target: "txqueue", "Unable to verify service transaction: {:?}", e);
pool::client::TransactionType::Regular
},
}
}
}
}
impl<'a, C: 'a> NonceClient for PoolClient<'a, C> where
C: Nonce + Sync,
{
fn account_nonce(&self, address: &Address) -> U256 {
self.cached_nonces.account_nonce(address)
}
}
pub(crate) struct CachedNonceClient<'a, C: 'a> {
client: &'a C,
cache: &'a NoncesCache,
}
impl<'a, C: 'a> Clone for CachedNonceClient<'a, C> {
fn clone(&self) -> Self {
CachedNonceClient {
client: self.client,
cache: self.cache,
}
}
}
impl<'a, C: 'a> fmt::Debug for CachedNonceClient<'a, C> {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
fmt.debug_struct("CachedNonceClient")
.field("cache", &self.cache.read().len())
.finish()
}
}
impl<'a, C: 'a> CachedNonceClient<'a, C> {
pub fn new(client: &'a C, cache: &'a NoncesCache) -> Self {
CachedNonceClient {
client,
cache,
}
}
}
impl<'a, C: 'a> NonceClient for CachedNonceClient<'a, C> where
C: Nonce + Sync,
{
fn account_nonce(&self, address: &Address) -> U256 {
if let Some(nonce) = self.cache.read().get(address) {
return *nonce;
}
// We don't check again if cache has been populated.
// It's not THAT expensive to fetch the nonce from state.
let mut cache = self.cache.write();
let nonce = self.client.latest_nonce(address);
cache.insert(*address, nonce);
if cache.len() < MAX_NONCE_CACHE_SIZE {
return nonce
}
// Remove an excessive number of entries from the cache
while cache.len() > EXPECTED_NONCE_CACHE_SIZE {
// Just remove a random entry
if let Some(key) = cache.keys().next().cloned() {
cache.remove(&key);
}
}
nonce
}
}
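The cache above deliberately avoids LRU bookkeeping: it grows up to `MAX_NONCE_CACHE_SIZE`, and once that bound is hit it is trimmed back to `EXPECTED_NONCE_CACHE_SIZE` by dropping whatever keys the map yields first; misses are cheap because they just fall back to the state lookup. A self-contained toy version of that policy, with plain integers standing in for addresses and nonces:

```rust
use std::collections::HashMap;

const MAX_CACHE_SIZE: usize = 4096;
const EXPECTED_CACHE_SIZE: usize = 2048;

/// Insert a freshly fetched nonce and trim the cache if it grew too large.
fn cache_nonce(cache: &mut HashMap<u64, u64>, address: u64, nonce: u64) -> u64 {
    cache.insert(address, nonce);
    if cache.len() >= MAX_CACHE_SIZE {
        // Drop arbitrary entries until the cache is back at the expected size;
        // HashMap iteration order is effectively random, so no LRU is needed.
        while cache.len() > EXPECTED_CACHE_SIZE {
            if let Some(key) = cache.keys().next().cloned() {
                cache.remove(&key);
            }
        }
    }
    nonce
}

fn main() {
    let mut cache = HashMap::new();
    for addr in 0..5000u64 {
        cache_nonce(&mut cache, addr, 0);
    }
    assert!(cache.len() <= MAX_CACHE_SIZE);
}
```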

View File

@ -16,33 +16,38 @@
//! A service transactions contract checker. //! A service transactions contract checker.
use client::{RegistryInfo, CallContract}; use client::{RegistryInfo, CallContract, BlockId};
use transaction::SignedTransaction; use transaction::SignedTransaction;
use types::ids::BlockId;
use_contract!(service_transaction, "ServiceTransaction", "res/contracts/service_transaction.json"); use_contract!(service_transaction, "ServiceTransaction", "res/contracts/service_transaction.json");
const SERVICE_TRANSACTION_CONTRACT_REGISTRY_NAME: &'static str = "service_transaction_checker"; const SERVICE_TRANSACTION_CONTRACT_REGISTRY_NAME: &'static str = "service_transaction_checker";
/// Service transactions checker. /// Service transactions checker.
#[derive(Default)] #[derive(Default, Clone)]
pub struct ServiceTransactionChecker { pub struct ServiceTransactionChecker {
contract: service_transaction::ServiceTransaction, contract: service_transaction::ServiceTransaction,
} }
impl ServiceTransactionChecker { impl ServiceTransactionChecker {
/// Checks if service transaction can be appended to the transaction queue. /// Checks whether the given address is whitelisted to send service transactions.
pub fn check<C: CallContract + RegistryInfo>(&self, client: &C, tx: &SignedTransaction) -> Result<bool, String> { pub fn check<C: CallContract + RegistryInfo>(&self, client: &C, tx: &SignedTransaction) -> Result<bool, String> {
assert!(tx.gas_price.is_zero()); let sender = tx.sender();
let hash = tx.hash();
// Skip checking the contract if the transaction does not have zero gas price
if !tx.gas_price.is_zero() {
return Ok(false)
}
let address = client.registry_address(SERVICE_TRANSACTION_CONTRACT_REGISTRY_NAME.to_owned(), BlockId::Latest) let address = client.registry_address(SERVICE_TRANSACTION_CONTRACT_REGISTRY_NAME.to_owned(), BlockId::Latest)
.ok_or_else(|| "contract is not configured")?; .ok_or_else(|| "contract is not configured")?;
trace!(target: "txqueue", "Checking service transaction checker contract from {}", address); trace!(target: "txqueue", "[{:?}] Checking service transaction checker contract from {}", hash, sender);
self.contract.functions() self.contract.functions()
.certified() .certified()
.call(tx.sender(), &|data| client.call_contract(BlockId::Latest, address, data)) .call(sender, &|data| client.call_contract(BlockId::Latest, address, data))
.map_err(|e| e.to_string()) .map_err(|e| e.to_string())
} }
} }
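For context, this is how the checker result feeds into the pool classification shown earlier in `PoolClient::transaction_type`; the sketch below just restates that mapping and assumes `use ethcore_miner::pool;` plus the checker and client types are in scope:

```rust
// Sketch: mapping the checker result to a pool transaction type.
fn classify<C: CallContract + RegistryInfo>(
    checker: &ServiceTransactionChecker,
    client: &C,
    tx: &SignedTransaction,
) -> pool::client::TransactionType {
    match checker.check(client, tx) {
        // Zero gas price and the contract certifies the sender.
        Ok(true) => pool::client::TransactionType::Service,
        // Non-zero gas price, or the sender is not whitelisted.
        Ok(false) => pool::client::TransactionType::Regular,
        // Registry entry missing or the contract call failed: treat as regular.
        Err(err) => {
            debug!(target: "txqueue", "Unable to verify service transaction: {:?}", err);
            pool::client::TransactionType::Regular
        }
    }
}
```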

View File

@ -20,8 +20,7 @@ use std::sync::{Arc, Weak};
use std::net::{SocketAddr, AddrParseError}; use std::net::{SocketAddr, AddrParseError};
use std::fmt; use std::fmt;
use block::IsBlock; use client::{Client, ImportSealedBlock};
use client::Client;
use ethereum_types::{H64, H256, clean_0x, U256}; use ethereum_types::{H64, H256, clean_0x, U256};
use ethereum::ethash::Ethash; use ethereum::ethash::Ethash;
use ethash::SeedHashCompute; use ethash::SeedHashCompute;
@ -30,7 +29,7 @@ use ethcore_stratum::{
JobDispatcher, PushWorkHandler, JobDispatcher, PushWorkHandler,
Stratum as StratumService, Error as StratumServiceError, Stratum as StratumService, Error as StratumServiceError,
}; };
use miner::{self, Miner, MinerService}; use miner::{Miner, MinerService};
use parking_lot::Mutex; use parking_lot::Mutex;
use rlp::encode; use rlp::encode;
@ -120,14 +119,9 @@ impl JobDispatcher for StratumJobDispatcher {
} }
fn job(&self) -> Option<String> { fn job(&self) -> Option<String> {
self.with_core(|client, miner| miner.map_sealing_work(&*client, |b| { self.with_core(|client, miner| miner.work_package(&*client).map(|(pow_hash, number, _timestamp, difficulty)| {
let pow_hash = b.hash(); self.payload(pow_hash, difficulty, number)
let number = b.block().header().number(); }))
let difficulty = b.block().header().difficulty();
self.payload(pow_hash, *difficulty, number)
})
)
} }
fn submit(&self, payload: Vec<String>) -> Result<(), StratumServiceError> { fn submit(&self, payload: Vec<String>) -> Result<(), StratumServiceError> {
@ -145,7 +139,10 @@ impl JobDispatcher for StratumJobDispatcher {
self.with_core_result(|client, miner| { self.with_core_result(|client, miner| {
let seal = vec![encode(&payload.mix_hash).into_vec(), encode(&payload.nonce).into_vec()]; let seal = vec![encode(&payload.mix_hash).into_vec(), encode(&payload.nonce).into_vec()];
match miner.submit_seal(&*client, payload.pow_hash, seal) {
let import = miner.submit_seal(payload.pow_hash, seal)
.and_then(|block| client.import_sealed_block(block));
match import {
Ok(_) => Ok(()), Ok(_) => Ok(()),
Err(e) => { Err(e) => {
warn!(target: "stratum", "submit_seal error: {:?}", e); warn!(target: "stratum", "submit_seal error: {:?}", e);
@ -247,8 +244,8 @@ impl Stratum {
/// Start STRATUM job dispatcher and register it in the miner /// Start STRATUM job dispatcher and register it in the miner
pub fn register(cfg: &Options, miner: Arc<Miner>, client: Weak<Client>) -> Result<(), Error> { pub fn register(cfg: &Options, miner: Arc<Miner>, client: Weak<Client>) -> Result<(), Error> {
let stratum = miner::Stratum::start(cfg, Arc::downgrade(&miner.clone()), client)?; let stratum = Stratum::start(cfg, Arc::downgrade(&miner.clone()), client)?;
miner.push_notifier(Box::new(stratum) as Box<NotifyWork>); miner.add_work_listener(Box::new(stratum) as Box<NotifyWork>);
Ok(()) Ok(())
} }
} }
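`work_package` now hands stratum a plain `(pow_hash, number, timestamp, difficulty)` tuple, so the dispatcher no longer needs to peek into a `ClosedBlock`. A hedged sketch using the bound listed on `MinerService::work_package`; the helper name and log target are illustrative:

```rust
// Sketch: inspect the miner's current PoW work package.
fn log_current_job<C>(miner: &Miner, client: &C)
where
    C: BlockChain + CallContract + BlockProducer + SealedBlockImporter + Nonce + Sync,
{
    if let Some((pow_hash, number, _timestamp, difficulty)) = miner.work_package(client) {
        debug!(target: "stratum", "job: hash={:?} number={} difficulty={}", pow_hash, number, difficulty);
    } else {
        // `None` means the engine seals internally and there is no PoW work to push.
        debug!(target: "stratum", "no work package available");
    }
}
```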

View File

@ -107,13 +107,11 @@ fn make_chain(accounts: Arc<AccountProvider>, blocks_beyond: usize, transitions:
trace!(target: "snapshot", "Pushing block #{}, {} txs, author={}", trace!(target: "snapshot", "Pushing block #{}, {} txs, author={}",
n, txs.len(), signers[idx]); n, txs.len(), signers[idx]);
client.miner().set_author(signers[idx]); client.miner().set_author(signers[idx], Some(PASS.into())).unwrap();
client.miner().import_external_transactions(&*client, client.miner().import_external_transactions(&*client,
txs.into_iter().map(Into::into).collect()); txs.into_iter().map(Into::into).collect());
let engine = client.engine(); client.engine().step();
engine.set_signer(accounts.clone(), signers[idx], PASS.to_owned());
engine.step();
assert_eq!(client.chain_info().best_block_number, n); assert_eq!(client.chain_info().best_block_number, n);
}; };

View File

@ -58,7 +58,7 @@ fn restored_is_equivalent() {
Default::default(), Default::default(),
&spec, &spec,
Arc::new(client_db), Arc::new(client_db),
Arc::new(::miner::Miner::with_spec(&spec)), Arc::new(::miner::Miner::new_for_tests(&spec, None)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();

View File

@ -120,7 +120,7 @@ pub fn generate_dummy_client_with_spec_accounts_and_data<F>(test_spec: F, accoun
ClientConfig::default(), ClientConfig::default(),
&test_spec, &test_spec,
client_db, client_db,
Arc::new(Miner::with_spec_and_accounts(&test_spec, accounts)), Arc::new(Miner::new_for_tests(&test_spec, accounts)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();
let test_engine = &*test_spec.engine; let test_engine = &*test_spec.engine;
@ -243,7 +243,7 @@ pub fn get_test_client_with_blocks(blocks: Vec<Bytes>) -> Arc<Client> {
ClientConfig::default(), ClientConfig::default(),
&test_spec, &test_spec,
client_db, client_db,
Arc::new(Miner::with_spec(&test_spec)), Arc::new(Miner::new_for_tests(&test_spec, None)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();

View File

@ -49,7 +49,7 @@ fn imports_from_empty() {
ClientConfig::default(), ClientConfig::default(),
&spec, &spec,
client_db, client_db,
Arc::new(Miner::with_spec(&spec)), Arc::new(Miner::new_for_tests(&spec, None)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();
client.import_verified_blocks(); client.import_verified_blocks();
@ -67,7 +67,7 @@ fn should_return_registrar() {
ClientConfig::default(), ClientConfig::default(),
&spec, &spec,
client_db, client_db,
Arc::new(Miner::with_spec(&spec)), Arc::new(Miner::new_for_tests(&spec, None)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();
let params = client.additional_params(); let params = client.additional_params();
@ -97,7 +97,7 @@ fn imports_good_block() {
ClientConfig::default(), ClientConfig::default(),
&spec, &spec,
client_db, client_db,
Arc::new(Miner::with_spec(&spec)), Arc::new(Miner::new_for_tests(&spec, None)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();
let good_block = get_good_dummy_block(); let good_block = get_good_dummy_block();
@ -122,7 +122,7 @@ fn query_none_block() {
ClientConfig::default(), ClientConfig::default(),
&spec, &spec,
client_db, client_db,
Arc::new(Miner::with_spec(&spec)), Arc::new(Miner::new_for_tests(&spec, None)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();
let non_existant = client.block_header(BlockId::Number(188)); let non_existant = client.block_header(BlockId::Number(188));
@ -277,7 +277,7 @@ fn change_history_size() {
ClientConfig::default(), ClientConfig::default(),
&test_spec, &test_spec,
client_db.clone(), client_db.clone(),
Arc::new(Miner::with_spec(&test_spec)), Arc::new(Miner::new_for_tests(&test_spec, None)),
IoChannel::disconnected() IoChannel::disconnected()
).unwrap(); ).unwrap();
@ -295,7 +295,7 @@ fn change_history_size() {
config, config,
&test_spec, &test_spec,
client_db, client_db,
Arc::new(Miner::with_spec(&test_spec)), Arc::new(Miner::new_for_tests(&test_spec, None)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();
assert_eq!(client.state().balance(&address).unwrap(), 100.into()); assert_eq!(client.state().balance(&address).unwrap(), 100.into());
@@ -326,11 +326,11 @@ fn does_not_propagate_delayed_transactions() {
	client.miner().import_own_transaction(&*client, tx0).unwrap();
	client.miner().import_own_transaction(&*client, tx1).unwrap();
	assert_eq!(0, client.ready_transactions().len());
-	assert_eq!(2, client.miner().pending_transactions().len());
+	assert_eq!(0, client.miner().ready_transactions(&*client).len());
	push_blocks_to_client(&client, 53, 2, 2);
	client.flush_queue();
	assert_eq!(2, client.ready_transactions().len());
-	assert_eq!(2, client.miner().pending_transactions().len());
+	assert_eq!(2, client.miner().ready_transactions(&*client).len());
}

#[test]


@ -50,7 +50,7 @@ fn can_trace_block_and_uncle_reward() {
client_config, client_config,
&spec, &spec,
client_db, client_db,
Arc::new(Miner::with_spec(&spec)), Arc::new(Miner::new_for_tests(&spec, None)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();


@ -164,7 +164,7 @@ mod test {
ClientConfig::default(), ClientConfig::default(),
&spec, &spec,
client_db, client_db,
Arc::new(Miner::with_spec(&spec)), Arc::new(Miner::new_for_tests(&spec, None)),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();
let key1 = KeyPair::from_secret(Secret::from("0000000000000000000000000000000000000000000000000000000000000001")).unwrap(); let key1 = KeyPair::from_secret(Secret::from("0000000000000000000000000000000000000000000000000000000000000001")).unwrap();


@ -39,6 +39,7 @@ use light::Provider;
use light::net::{self as light_net, LightProtocol, Params as LightParams, Capabilities, Handler as LightHandler, EventContext}; use light::net::{self as light_net, LightProtocol, Params as LightParams, Capabilities, Handler as LightHandler, EventContext};
use network::IpFilter; use network::IpFilter;
use private_tx::PrivateTxHandler; use private_tx::PrivateTxHandler;
use transaction::UnverifiedTransaction;
/// Parity sync protocol /// Parity sync protocol
pub const WARP_SYNC_PROTOCOL_ID: ProtocolId = *b"par"; pub const WARP_SYNC_PROTOCOL_ID: ProtocolId = *b"par";
@@ -486,9 +487,9 @@ impl ChainNotify for EthSync {
		});
	}

-	fn transactions_received(&self, hashes: Vec<H256>, peer_id: PeerId) {
+	fn transactions_received(&self, txs: &[UnverifiedTransaction], peer_id: PeerId) {
		let mut sync = self.eth_handler.sync.write();
-		sync.transactions_received(hashes, peer_id);
+		sync.transactions_received(txs, peer_id);
	}
}


@@ -104,15 +104,16 @@ use ethcore::header::{BlockNumber, Header as BlockHeader};
use ethcore::client::{BlockChainClient, BlockStatus, BlockId, BlockChainInfo, BlockImportError, BlockQueueInfo};
use ethcore::error::*;
use ethcore::snapshot::{ManifestData, RestorationStatus};
-use transaction::PendingTransaction;
+use transaction::SignedTransaction;
use sync_io::SyncIo;
use super::{WarpSync, SyncConfig};
use block_sync::{BlockDownloader, BlockRequest, BlockDownloaderImportError as DownloaderImportError, DownloadAction};
use rand::Rng;
use snapshot::{Snapshot, ChunkType};
use api::{EthProtocolInfo as PeerInfoDigest, WARP_SYNC_PROTOCOL_ID};
-use transactions_stats::{TransactionsStats, Stats as TransactionStats};
use private_tx::PrivateTxHandler;
+use transactions_stats::{TransactionsStats, Stats as TransactionStats};
+use transaction::UnverifiedTransaction;

known_heap_size!(0, PeerInfo);
@@ -478,9 +479,9 @@ impl ChainSync {
	}

	/// Updates transactions were received by a peer
-	pub fn transactions_received(&mut self, hashes: Vec<H256>, peer_id: PeerId) {
+	pub fn transactions_received(&mut self, txs: &[UnverifiedTransaction], peer_id: PeerId) {
		if let Some(peer_info) = self.peers.get_mut(&peer_id) {
-			peer_info.last_sent_transactions.extend(&hashes);
+			peer_info.last_sent_transactions.extend(txs.iter().map(|tx| tx.hash()));
		}
	}
@@ -2026,8 +2027,9 @@ impl ChainSync {
			return 0;
		}

-		let (transactions, service_transactions): (Vec<_>, Vec<_>) = transactions.into_iter()
-			.partition(|tx| !tx.transaction.gas_price.is_zero());
+		let (transactions, service_transactions): (Vec<_>, Vec<_>) = transactions.iter()
+			.map(|tx| tx.signed())
+			.partition(|tx| !tx.gas_price.is_zero());

		// usual transactions could be propagated to all peers
		let mut affected_peers = HashSet::new();
@@ -2062,13 +2064,13 @@ impl ChainSync {
			.collect()
	}

-	fn propagate_transactions_to_peers(&mut self, io: &mut SyncIo, peers: Vec<PeerId>, transactions: Vec<PendingTransaction>) -> HashSet<PeerId> {
+	fn propagate_transactions_to_peers(&mut self, io: &mut SyncIo, peers: Vec<PeerId>, transactions: Vec<&SignedTransaction>) -> HashSet<PeerId> {
		let all_transactions_hashes = transactions.iter()
-			.map(|tx| tx.transaction.hash())
+			.map(|tx| tx.hash())
			.collect::<HashSet<H256>>();
		let all_transactions_rlp = {
			let mut packet = RlpStream::new_list(transactions.len());
-			for tx in &transactions { packet.append(&tx.transaction); }
+			for tx in &transactions { packet.append(&**tx); }
			packet.out()
		};
@@ -2112,10 +2114,10 @@ impl ChainSync {
				packet.begin_unbounded_list();
				let mut pushed = 0;
				for tx in &transactions {
-					let hash = tx.transaction.hash();
+					let hash = tx.hash();
					if to_send.contains(&hash) {
						let mut transaction = RlpStream::new();
-						tx.transaction.rlp_append(&mut transaction);
+						tx.rlp_append(&mut transaction);
						let appended = packet.append_raw_checked(&transaction.drain(), 1, MAX_TRANSACTION_PACKET_SIZE);
						if !appended {
							// Maximal packet size reached just proceed with sending
@@ -2329,7 +2331,6 @@ mod tests {
	use ethcore::header::*;
	use ethcore::client::{BlockChainClient, EachBlockWith, TestBlockChainClient, ChainInfo, BlockInfo};
	use ethcore::miner::MinerService;
-	use transaction::UnverifiedTransaction;
	use private_tx::NoopPrivateTxHandler;

	fn get_dummy_block(order: u32, parent_hash: H256) -> Bytes {
@@ -3064,10 +3065,9 @@ mod tests {
			let queue = RwLock::new(VecDeque::new());
			let ss = TestSnapshotService::new();
			let mut io = TestIo::new(&mut client, &ss, &queue, None);
-			io.chain.miner.chain_new_blocks(io.chain, &[], &[], &[], &good_blocks);
+			io.chain.miner.chain_new_blocks(io.chain, &[], &[], &[], &good_blocks, false);
			sync.chain_new_blocks(&mut io, &[], &[], &[], &good_blocks, &[], &[]);
-			assert_eq!(io.chain.miner.status().transactions_in_future_queue, 0);
-			assert_eq!(io.chain.miner.status().transactions_in_pending_queue, 1);
+			assert_eq!(io.chain.miner.ready_transactions(io.chain).len(), 1);
		}
		// We need to update nonce status (because we say that the block has been imported)
		for h in &[good_blocks[0]] {
@@ -3078,14 +3078,12 @@ mod tests {
			let queue = RwLock::new(VecDeque::new());
			let ss = TestSnapshotService::new();
			let mut io = TestIo::new(&client, &ss, &queue, None);
-			io.chain.miner.chain_new_blocks(io.chain, &[], &[], &good_blocks, &retracted_blocks);
+			io.chain.miner.chain_new_blocks(io.chain, &[], &[], &good_blocks, &retracted_blocks, false);
			sync.chain_new_blocks(&mut io, &[], &[], &good_blocks, &retracted_blocks, &[], &[]);
		}

		// then
-		let status = client.miner.status();
-		assert_eq!(status.transactions_in_pending_queue, 1);
-		assert_eq!(status.transactions_in_future_queue, 0);
+		assert_eq!(client.miner.ready_transactions(&client).len(), 1);
	}

	#[test]
@@ -3106,13 +3104,11 @@ mod tests {
		// when
		sync.chain_new_blocks(&mut io, &[], &[], &[], &good_blocks, &[], &[]);
-		assert_eq!(io.chain.miner.status().transactions_in_future_queue, 0);
-		assert_eq!(io.chain.miner.status().transactions_in_pending_queue, 0);
+		assert_eq!(io.chain.miner.queue_status().status.transaction_count, 0);
		sync.chain_new_blocks(&mut io, &[], &[], &good_blocks, &retracted_blocks, &[], &[]);

		// then
-		let status = io.chain.miner.status();
-		assert_eq!(status.transactions_in_pending_queue, 0);
-		assert_eq!(status.transactions_in_future_queue, 0);
+		let status = io.chain.miner.queue_status();
+		assert_eq!(status.status.transaction_count, 0);
	}
}


@ -52,8 +52,8 @@ fn authority_round() {
let io_handler0: Arc<IoHandler<ClientIoMessage>> = Arc::new(TestIoHandler::new(net.peer(0).chain.clone())); let io_handler0: Arc<IoHandler<ClientIoMessage>> = Arc::new(TestIoHandler::new(net.peer(0).chain.clone()));
let io_handler1: Arc<IoHandler<ClientIoMessage>> = Arc::new(TestIoHandler::new(net.peer(1).chain.clone())); let io_handler1: Arc<IoHandler<ClientIoMessage>> = Arc::new(TestIoHandler::new(net.peer(1).chain.clone()));
// Push transaction to both clients. Only one of them gets lucky to produce a block. // Push transaction to both clients. Only one of them gets lucky to produce a block.
net.peer(0).chain.miner().set_engine_signer(s0.address(), "".to_owned()).unwrap(); net.peer(0).miner.set_author(s0.address(), Some("".into())).unwrap();
net.peer(1).chain.miner().set_engine_signer(s1.address(), "".to_owned()).unwrap(); net.peer(1).miner.set_author(s1.address(), Some("".to_owned())).unwrap();
net.peer(0).chain.engine().register_client(Arc::downgrade(&net.peer(0).chain) as _); net.peer(0).chain.engine().register_client(Arc::downgrade(&net.peer(0).chain) as _);
net.peer(1).chain.engine().register_client(Arc::downgrade(&net.peer(1).chain) as _); net.peer(1).chain.engine().register_client(Arc::downgrade(&net.peer(1).chain) as _);
net.peer(0).chain.set_io_channel(IoChannel::to_handler(Arc::downgrade(&io_handler1))); net.peer(0).chain.set_io_channel(IoChannel::to_handler(Arc::downgrade(&io_handler1)));
@ -61,15 +61,15 @@ fn authority_round() {
// exchange statuses // exchange statuses
net.sync(); net.sync();
// Trigger block proposal // Trigger block proposal
net.peer(0).chain.miner().import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 0.into(), chain_id)).unwrap(); net.peer(0).miner.import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 0.into(), chain_id)).unwrap();
net.peer(1).chain.miner().import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 0.into(), chain_id)).unwrap(); net.peer(1).miner.import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 0.into(), chain_id)).unwrap();
// Sync a block // Sync a block
net.sync(); net.sync();
assert_eq!(net.peer(0).chain.chain_info().best_block_number, 1); assert_eq!(net.peer(0).chain.chain_info().best_block_number, 1);
assert_eq!(net.peer(1).chain.chain_info().best_block_number, 1); assert_eq!(net.peer(1).chain.chain_info().best_block_number, 1);
net.peer(0).chain.miner().import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 1.into(), chain_id)).unwrap(); net.peer(0).miner.import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 1.into(), chain_id)).unwrap();
net.peer(1).chain.miner().import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 1.into(), chain_id)).unwrap(); net.peer(1).miner.import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 1.into(), chain_id)).unwrap();
// Move to next proposer step. // Move to next proposer step.
net.peer(0).chain.engine().step(); net.peer(0).chain.engine().step();
net.peer(1).chain.engine().step(); net.peer(1).chain.engine().step();
@ -78,8 +78,8 @@ fn authority_round() {
assert_eq!(net.peer(1).chain.chain_info().best_block_number, 2); assert_eq!(net.peer(1).chain.chain_info().best_block_number, 2);
// Fork the network with equal height. // Fork the network with equal height.
net.peer(0).chain.miner().import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 2.into(), chain_id)).unwrap(); net.peer(0).miner.import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 2.into(), chain_id)).unwrap();
net.peer(1).chain.miner().import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 2.into(), chain_id)).unwrap(); net.peer(1).miner.import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 2.into(), chain_id)).unwrap();
// Let both nodes build one block. // Let both nodes build one block.
net.peer(0).chain.engine().step(); net.peer(0).chain.engine().step();
let early_hash = net.peer(0).chain.chain_info().best_block_hash; let early_hash = net.peer(0).chain.chain_info().best_block_hash;
@ -101,8 +101,8 @@ fn authority_round() {
assert_eq!(ci1.best_block_hash, early_hash); assert_eq!(ci1.best_block_hash, early_hash);
// Selfish miner // Selfish miner
net.peer(0).chain.miner().import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 3.into(), chain_id)).unwrap(); net.peer(0).miner.import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 3.into(), chain_id)).unwrap();
net.peer(1).chain.miner().import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 3.into(), chain_id)).unwrap(); net.peer(1).miner.import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 3.into(), chain_id)).unwrap();
// Node 0 is an earlier primary. // Node 0 is an earlier primary.
net.peer(0).chain.engine().step(); net.peer(0).chain.engine().step();
assert_eq!(net.peer(0).chain.chain_info().best_block_number, 4); assert_eq!(net.peer(0).chain.chain_info().best_block_number, 4);
@ -113,7 +113,7 @@ fn authority_round() {
// Node 1 makes 2 blocks, but is a later primary on the first one. // Node 1 makes 2 blocks, but is a later primary on the first one.
net.peer(1).chain.engine().step(); net.peer(1).chain.engine().step();
net.peer(1).chain.engine().step(); net.peer(1).chain.engine().step();
net.peer(1).chain.miner().import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 4.into(), chain_id)).unwrap(); net.peer(1).miner.import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 4.into(), chain_id)).unwrap();
net.peer(1).chain.engine().step(); net.peer(1).chain.engine().step();
net.peer(1).chain.engine().step(); net.peer(1).chain.engine().step();
assert_eq!(net.peer(1).chain.chain_info().best_block_number, 5); assert_eq!(net.peer(1).chain.chain_info().best_block_number, 5);
@ -139,9 +139,9 @@ fn tendermint() {
let io_handler0: Arc<IoHandler<ClientIoMessage>> = Arc::new(TestIoHandler::new(net.peer(0).chain.clone())); let io_handler0: Arc<IoHandler<ClientIoMessage>> = Arc::new(TestIoHandler::new(net.peer(0).chain.clone()));
let io_handler1: Arc<IoHandler<ClientIoMessage>> = Arc::new(TestIoHandler::new(net.peer(1).chain.clone())); let io_handler1: Arc<IoHandler<ClientIoMessage>> = Arc::new(TestIoHandler::new(net.peer(1).chain.clone()));
// Push transaction to both clients. Only one of them issues a proposal. // Push transaction to both clients. Only one of them issues a proposal.
net.peer(0).chain.miner().set_engine_signer(s0.address(), "".to_owned()).unwrap(); net.peer(0).miner.set_author(s0.address(), Some("".into())).unwrap();
trace!(target: "poa", "Peer 0 is {}.", s0.address()); trace!(target: "poa", "Peer 0 is {}.", s0.address());
net.peer(1).chain.miner().set_engine_signer(s1.address(), "".to_owned()).unwrap(); net.peer(1).miner.set_author(s1.address(), Some("".into())).unwrap();
trace!(target: "poa", "Peer 1 is {}.", s1.address()); trace!(target: "poa", "Peer 1 is {}.", s1.address());
net.peer(0).chain.engine().register_client(Arc::downgrade(&net.peer(0).chain) as _); net.peer(0).chain.engine().register_client(Arc::downgrade(&net.peer(0).chain) as _);
net.peer(1).chain.engine().register_client(Arc::downgrade(&net.peer(1).chain) as _); net.peer(1).chain.engine().register_client(Arc::downgrade(&net.peer(1).chain) as _);
@ -150,7 +150,7 @@ fn tendermint() {
// Exhange statuses // Exhange statuses
net.sync(); net.sync();
// Propose // Propose
net.peer(0).chain.miner().import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 0.into(), chain_id)).unwrap(); net.peer(0).miner.import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 0.into(), chain_id)).unwrap();
net.sync(); net.sync();
// Propose timeout, synchronous for now // Propose timeout, synchronous for now
net.peer(0).chain.engine().step(); net.peer(0).chain.engine().step();
@ -161,7 +161,7 @@ fn tendermint() {
assert_eq!(net.peer(0).chain.chain_info().best_block_number, 1); assert_eq!(net.peer(0).chain.chain_info().best_block_number, 1);
assert_eq!(net.peer(1).chain.chain_info().best_block_number, 1); assert_eq!(net.peer(1).chain.chain_info().best_block_number, 1);
net.peer(1).chain.miner().import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 0.into(), chain_id)).unwrap(); net.peer(1).miner.import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 0.into(), chain_id)).unwrap();
// Commit timeout // Commit timeout
net.peer(0).chain.engine().step(); net.peer(0).chain.engine().step();
net.peer(1).chain.engine().step(); net.peer(1).chain.engine().step();
@ -175,8 +175,8 @@ fn tendermint() {
assert_eq!(net.peer(0).chain.chain_info().best_block_number, 2); assert_eq!(net.peer(0).chain.chain_info().best_block_number, 2);
assert_eq!(net.peer(1).chain.chain_info().best_block_number, 2); assert_eq!(net.peer(1).chain.chain_info().best_block_number, 2);
net.peer(0).chain.miner().import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 1.into(), chain_id)).unwrap(); net.peer(0).miner.import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 1.into(), chain_id)).unwrap();
net.peer(1).chain.miner().import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 1.into(), chain_id)).unwrap(); net.peer(1).miner.import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 1.into(), chain_id)).unwrap();
// Peers get disconnected. // Peers get disconnected.
// Commit // Commit
net.peer(0).chain.engine().step(); net.peer(0).chain.engine().step();
@ -184,8 +184,8 @@ fn tendermint() {
// Propose // Propose
net.peer(0).chain.engine().step(); net.peer(0).chain.engine().step();
net.peer(1).chain.engine().step(); net.peer(1).chain.engine().step();
net.peer(0).chain.miner().import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 2.into(), chain_id)).unwrap(); net.peer(0).miner.import_own_transaction(&*net.peer(0).chain, new_tx(s0.secret(), 2.into(), chain_id)).unwrap();
net.peer(1).chain.miner().import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 2.into(), chain_id)).unwrap(); net.peer(1).miner.import_own_transaction(&*net.peer(1).chain, new_tx(s1.secret(), 2.into(), chain_id)).unwrap();
// Send different prevotes // Send different prevotes
net.sync(); net.sync();
// Prevote timeout // Prevote timeout


@ -206,6 +206,7 @@ pub trait Peer {
pub struct EthPeer<C> where C: FlushingBlockChainClient { pub struct EthPeer<C> where C: FlushingBlockChainClient {
pub chain: Arc<C>, pub chain: Arc<C>,
pub miner: Arc<Miner>,
pub snapshot_service: Arc<TestSnapshotService>, pub snapshot_service: Arc<TestSnapshotService>,
pub sync: RwLock<ChainSync>, pub sync: RwLock<ChainSync>,
pub queue: RwLock<VecDeque<TestPacket>>, pub queue: RwLock<VecDeque<TestPacket>>,
@ -340,6 +341,7 @@ impl TestNet<EthPeer<TestBlockChainClient>> {
sync: RwLock::new(sync), sync: RwLock::new(sync),
snapshot_service: ss, snapshot_service: ss,
chain: Arc::new(chain), chain: Arc::new(chain),
miner: Arc::new(Miner::new_for_tests(&Spec::new_test(), None)),
queue: RwLock::new(VecDeque::new()), queue: RwLock::new(VecDeque::new()),
private_tx_handler, private_tx_handler,
io_queue: RwLock::new(VecDeque::new()), io_queue: RwLock::new(VecDeque::new()),
@ -382,11 +384,12 @@ impl TestNet<EthPeer<EthcoreClient>> {
pub fn add_peer_with_private_config(&mut self, config: SyncConfig, spec: Spec, accounts: Option<Arc<AccountProvider>>) { pub fn add_peer_with_private_config(&mut self, config: SyncConfig, spec: Spec, accounts: Option<Arc<AccountProvider>>) {
let channel = IoChannel::disconnected(); let channel = IoChannel::disconnected();
let miner = Arc::new(Miner::new_for_tests(&spec, accounts.clone()));
let client = EthcoreClient::new( let client = EthcoreClient::new(
ClientConfig::default(), ClientConfig::default(),
&spec, &spec,
Arc::new(::kvdb_memorydb::create(::ethcore::db::NUM_COLUMNS.unwrap_or(0))), Arc::new(::kvdb_memorydb::create(::ethcore::db::NUM_COLUMNS.unwrap_or(0))),
Arc::new(Miner::with_spec_and_accounts(&spec, accounts.clone())), miner.clone(),
channel.clone() channel.clone()
).unwrap(); ).unwrap();
@ -397,6 +400,7 @@ impl TestNet<EthPeer<EthcoreClient>> {
sync: RwLock::new(sync), sync: RwLock::new(sync),
snapshot_service: ss, snapshot_service: ss,
chain: client, chain: client,
miner,
queue: RwLock::new(VecDeque::new()), queue: RwLock::new(VecDeque::new()),
private_tx_handler, private_tx_handler,
io_queue: RwLock::new(VecDeque::new()), io_queue: RwLock::new(VecDeque::new()),
@ -408,11 +412,12 @@ impl TestNet<EthPeer<EthcoreClient>> {
} }
pub fn add_peer(&mut self, config: SyncConfig, spec: Spec, accounts: Option<Arc<AccountProvider>>) { pub fn add_peer(&mut self, config: SyncConfig, spec: Spec, accounts: Option<Arc<AccountProvider>>) {
let miner = Arc::new(Miner::new_for_tests(&spec, accounts));
let client = EthcoreClient::new( let client = EthcoreClient::new(
ClientConfig::default(), ClientConfig::default(),
&spec, &spec,
Arc::new(::kvdb_memorydb::create(::ethcore::db::NUM_COLUMNS.unwrap_or(0))), Arc::new(::kvdb_memorydb::create(::ethcore::db::NUM_COLUMNS.unwrap_or(0))),
Arc::new(Miner::with_spec_and_accounts(&spec, accounts)), miner.clone(),
IoChannel::disconnected(), IoChannel::disconnected(),
).unwrap(); ).unwrap();
@ -422,8 +427,9 @@ impl TestNet<EthPeer<EthcoreClient>> {
let peer = Arc::new(EthPeer { let peer = Arc::new(EthPeer {
sync: RwLock::new(sync), sync: RwLock::new(sync),
snapshot_service: ss, snapshot_service: ss,
chain: client,
queue: RwLock::new(VecDeque::new()), queue: RwLock::new(VecDeque::new()),
chain: client,
miner,
private_tx_handler, private_tx_handler,
io_queue: RwLock::new(VecDeque::new()), io_queue: RwLock::new(VecDeque::new()),
new_blocks_queue: RwLock::new(VecDeque::new()), new_blocks_queue: RwLock::new(VecDeque::new()),


@ -33,14 +33,3 @@ mod transaction;
pub use error::Error; pub use error::Error;
pub use transaction::*; pub use transaction::*;
// TODO [ToDr] Move to miner!
/// Represents the result of importing transaction.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum ImportResult {
/// Transaction was imported to current queue.
Current,
/// Transaction was imported to future queue.
Future
}


@@ -377,7 +377,7 @@ impl UnverifiedTransaction {
		}
	}

-	/// Get the hash of this header (keccak of the RLP).
+	/// Get the hash of this transaction (keccak of the RLP).
	pub fn hash(&self) -> H256 {
		self.hash
	}


@ -7,24 +7,31 @@ version = "1.11.0"
authors = ["Parity Technologies <admin@parity.io>"] authors = ["Parity Technologies <admin@parity.io>"]
[dependencies] [dependencies]
common-types = { path = "../ethcore/types" } # Only work_notify, consider a separate crate
ethabi = "5.1"
ethabi-contract = "5.0"
ethabi-derive = "5.0"
ethash = { path = "../ethash" } ethash = { path = "../ethash" }
fetch = { path = "../util/fetch" }
hyper = "0.11"
parity-reactor = { path = "../util/reactor" }
url = "1"
# Miner
ansi_term = "0.10"
error-chain = "0.11"
ethcore-transaction = { path = "../ethcore/transaction" } ethcore-transaction = { path = "../ethcore/transaction" }
ethereum-types = "0.3" ethereum-types = "0.3"
ethkey = { path = "../ethkey" }
futures = "0.1" futures = "0.1"
futures-cpupool = "0.1"
heapsize = "0.4" heapsize = "0.4"
keccak-hash = { path = "../util/hash" } keccak-hash = { path = "../util/hash" }
linked-hash-map = "0.5" linked-hash-map = "0.5"
log = "0.3" log = "0.3"
parking_lot = "0.5" parking_lot = "0.5"
price-info = { path = "../price-info" }
rayon = "1.0"
trace-time = { path = "../util/trace-time" }
transaction-pool = { path = "../transaction-pool" }
[dev-dependencies]
env_logger = "0.4"
ethkey = { path = "../ethkey" }
rustc-hex = "1.0" rustc-hex = "1.0"
table = { path = "../util/table" }
transient-hashmap = "0.4"
fetch = { path = "../util/fetch" }
parity-reactor = { path = "../util/reactor" }
url = "1"
hyper = "0.11"


@ -1,321 +0,0 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Banning Queue
//! Transacton Queue wrapper maintaining additional list of banned senders and contract hashes.
use std::time::Duration;
use std::ops::{Deref, DerefMut};
use ethereum_types::{H256, U256, Address};
use hash::keccak;
use transaction::{self, SignedTransaction, Action};
use transient_hashmap::TransientHashMap;
use transaction_queue::{TransactionQueue, TransactionDetailsProvider, TransactionOrigin, QueuingInstant};
type Count = u16;
/// Auto-Banning threshold
pub enum Threshold {
/// Should ban after given number of misbehaves reported.
BanAfter(Count),
/// Should never ban anything
NeverBan
}
impl Default for Threshold {
fn default() -> Self {
Threshold::NeverBan
}
}
/// Transaction queue with banlist.
pub struct BanningTransactionQueue {
queue: TransactionQueue,
ban_threshold: Threshold,
senders_bans: TransientHashMap<Address, Count>,
recipients_bans: TransientHashMap<Address, Count>,
codes_bans: TransientHashMap<H256, Count>,
}
impl BanningTransactionQueue {
/// Creates new banlisting transaction queue
pub fn new(queue: TransactionQueue, ban_threshold: Threshold, ban_lifetime: Duration) -> Self {
let ban_lifetime_sec = ban_lifetime.as_secs() as u32;
assert!(ban_lifetime_sec > 0, "Lifetime has to be specified in seconds.");
BanningTransactionQueue {
queue: queue,
ban_threshold: ban_threshold,
senders_bans: TransientHashMap::new(ban_lifetime_sec),
recipients_bans: TransientHashMap::new(ban_lifetime_sec),
codes_bans: TransientHashMap::new(ban_lifetime_sec),
}
}
/// Borrows internal queue.
/// NOTE: you can insert transactions to the queue even
/// if they would be rejected because of ban otherwise.
/// But probably you shouldn't.
pub fn queue(&mut self) -> &mut TransactionQueue {
&mut self.queue
}
/// Add to the queue taking bans into consideration.
/// May reject transaction because of the banlist.
pub fn add_with_banlist(
&mut self,
transaction: SignedTransaction,
time: QueuingInstant,
details_provider: &TransactionDetailsProvider,
) -> Result<transaction::ImportResult, transaction::Error> {
if let Threshold::BanAfter(threshold) = self.ban_threshold {
// NOTE In all checks use direct query to avoid increasing ban timeout.
// Check sender
let sender = transaction.sender();
let count = self.senders_bans.direct().get(&sender).cloned().unwrap_or(0);
if count > threshold {
debug!(target: "txqueue", "Ignoring transaction {:?} because sender is banned.", transaction.hash());
return Err(transaction::Error::SenderBanned);
}
// Check recipient
if let Action::Call(recipient) = transaction.action {
let count = self.recipients_bans.direct().get(&recipient).cloned().unwrap_or(0);
if count > threshold {
debug!(target: "txqueue", "Ignoring transaction {:?} because recipient is banned.", transaction.hash());
return Err(transaction::Error::RecipientBanned);
}
}
// Check code
if let Action::Create = transaction.action {
let code_hash = keccak(&transaction.data);
let count = self.codes_bans.direct().get(&code_hash).cloned().unwrap_or(0);
if count > threshold {
debug!(target: "txqueue", "Ignoring transaction {:?} because code is banned.", transaction.hash());
return Err(transaction::Error::CodeBanned);
}
}
}
self.queue.add(transaction, TransactionOrigin::External, time, None, details_provider)
}
/// Ban transaction with given hash.
/// Transaction has to be in the queue.
///
/// Bans sender and recipient/code and returns `true` when any ban has reached threshold.
pub fn ban_transaction(&mut self, hash: &H256) -> bool {
let transaction = self.queue.find(hash);
match transaction {
Some(transaction) => {
let sender = transaction.sender();
// Ban sender
let sender_banned = self.ban_sender(sender);
// Ban recipient and codehash
let recipient_or_code_banned = match transaction.action {
Action::Call(recipient) => {
self.ban_recipient(recipient)
},
Action::Create => {
self.ban_codehash(keccak(&transaction.data))
},
};
sender_banned || recipient_or_code_banned
},
None => false,
}
}
/// Ban given sender.
/// If bans threshold is reached all subsequent transactions from this sender will be rejected.
/// Reaching bans threshold also removes all existsing transaction from this sender that are already in the
/// queue.
fn ban_sender(&mut self, address: Address) -> bool {
let count = {
let count = self.senders_bans.entry(address).or_insert_with(|| 0);
*count = count.saturating_add(1);
*count
};
match self.ban_threshold {
Threshold::BanAfter(threshold) if count > threshold => {
// Banlist the sender.
// Remove all transactions from the queue.
self.cull(address, !U256::zero());
true
},
_ => false
}
}
/// Ban given recipient.
/// If bans threshold is reached all subsequent transactions to this address will be rejected.
/// Returns true if bans threshold has been reached.
fn ban_recipient(&mut self, address: Address) -> bool {
let count = {
let count = self.recipients_bans.entry(address).or_insert_with(|| 0);
*count = count.saturating_add(1);
*count
};
match self.ban_threshold {
// TODO [ToDr] Consider removing other transactions to the same recipient from the queue?
Threshold::BanAfter(threshold) if count > threshold => true,
_ => false
}
}
/// Ban given codehash.
/// If bans threshold is reached all subsequent transactions to contracts with this codehash will be rejected.
/// Returns true if bans threshold has been reached.
fn ban_codehash(&mut self, code_hash: H256) -> bool {
let count = self.codes_bans.entry(code_hash).or_insert_with(|| 0);
*count = count.saturating_add(1);
match self.ban_threshold {
// TODO [ToDr] Consider removing other transactions with the same code from the queue?
Threshold::BanAfter(threshold) if *count > threshold => true,
_ => false,
}
}
}
impl Deref for BanningTransactionQueue {
type Target = TransactionQueue;
fn deref(&self) -> &Self::Target {
&self.queue
}
}
impl DerefMut for BanningTransactionQueue {
fn deref_mut(&mut self) -> &mut Self::Target {
self.queue()
}
}
#[cfg(test)]
mod tests {
use super::*;
use ethkey::{Random, Generator};
use rustc_hex::FromHex;
use transaction_queue::test::DummyTransactionDetailsProvider;
use ethereum_types::{U256, Address};
fn queue() -> BanningTransactionQueue {
BanningTransactionQueue::new(TransactionQueue::default(), Threshold::BanAfter(1), Duration::from_secs(180))
}
fn default_tx_provider() -> DummyTransactionDetailsProvider {
DummyTransactionDetailsProvider::default().with_account_nonce(U256::zero())
}
fn transaction(action: Action) -> SignedTransaction {
let keypair = Random.generate().unwrap();
transaction::Transaction {
action: action,
value: U256::from(100),
data: "3331600055".from_hex().unwrap(),
gas: U256::from(100_000),
gas_price: U256::from(10),
nonce: U256::from(0),
}.sign(keypair.secret(), None)
}
fn unwrap_err(res: Result<transaction::ImportResult, transaction::Error>) -> transaction::Error {
res.unwrap_err()
}
#[test]
fn should_allow_to_borrow_the_queue() {
// given
let tx = transaction(Action::Create);
let mut txq = queue();
// when
txq.queue().add(tx, TransactionOrigin::External, 0, None, &default_tx_provider()).unwrap();
// then
// should also deref to queue
assert_eq!(txq.status().pending, 1);
}
#[test]
fn should_not_accept_transactions_from_banned_sender() {
// given
let tx = transaction(Action::Create);
let mut txq = queue();
// Banlist once (threshold not reached)
let banlist1 = txq.ban_sender(tx.sender());
assert!(!banlist1, "Threshold not reached yet.");
// Insert once
let import1 = txq.add_with_banlist(tx.clone(), 0, &default_tx_provider()).unwrap();
assert_eq!(import1, transaction::ImportResult::Current);
// when
let banlist2 = txq.ban_sender(tx.sender());
let import2 = txq.add_with_banlist(tx.clone(), 0, &default_tx_provider());
// then
assert!(banlist2, "Threshold should be reached - banned.");
assert_eq!(unwrap_err(import2), transaction::Error::SenderBanned);
// Should also remove transacion from the queue
assert_eq!(txq.find(&tx.hash()), None);
}
#[test]
fn should_not_accept_transactions_to_banned_recipient() {
// given
let recipient = Address::default();
let tx = transaction(Action::Call(recipient));
let mut txq = queue();
// Banlist once (threshold not reached)
let banlist1 = txq.ban_recipient(recipient);
assert!(!banlist1, "Threshold not reached yet.");
// Insert once
let import1 = txq.add_with_banlist(tx.clone(), 0, &default_tx_provider()).unwrap();
assert_eq!(import1, transaction::ImportResult::Current);
// when
let banlist2 = txq.ban_recipient(recipient);
let import2 = txq.add_with_banlist(tx.clone(), 0, &default_tx_provider());
// then
assert!(banlist2, "Threshold should be reached - banned.");
assert_eq!(unwrap_err(import2), transaction::Error::RecipientBanned);
}
#[test]
fn should_not_accept_transactions_with_banned_code() {
// given
let tx = transaction(Action::Create);
let codehash = keccak(&tx.data);
let mut txq = queue();
// Banlist once (threshold not reached)
let banlist1 = txq.ban_codehash(codehash);
assert!(!banlist1, "Threshold not reached yet.");
// Insert once
let import1 = txq.add_with_banlist(tx.clone(), 0, &default_tx_provider()).unwrap();
assert_eq!(import1, transaction::ImportResult::Current);
// when
let banlist2 = txq.ban_codehash(codehash);
let import2 = txq.add_with_banlist(tx.clone(), 0, &default_tx_provider());
// then
assert!(banlist2, "Threshold should be reached - banned.");
assert_eq!(unwrap_err(import2), transaction::Error::CodeBanned);
}
}

miner/src/gas_pricer.rs (new file, 97 lines)

@ -0,0 +1,97 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Auto-updates minimal gas price requirement.
use std::time::{Instant, Duration};
use ansi_term::Colour;
use ethereum_types::U256;
use futures_cpupool::CpuPool;
use price_info::{Client as PriceInfoClient, PriceInfo};
use price_info::fetch::Client as FetchClient;
/// Options for the dynamic gas price recalibrator.
#[derive(Debug, PartialEq)]
pub struct GasPriceCalibratorOptions {
/// Base transaction price to match against.
pub usd_per_tx: f32,
/// How frequently we should recalibrate.
pub recalibration_period: Duration,
}
/// The gas price validator variant for a `GasPricer`.
#[derive(Debug, PartialEq)]
pub struct GasPriceCalibrator {
options: GasPriceCalibratorOptions,
next_calibration: Instant,
price_info: PriceInfoClient,
}
impl GasPriceCalibrator {
fn recalibrate<F: FnOnce(U256) + Sync + Send + 'static>(&mut self, set_price: F) {
trace!(target: "miner", "Recalibrating {:?} versus {:?}", Instant::now(), self.next_calibration);
if Instant::now() >= self.next_calibration {
let usd_per_tx = self.options.usd_per_tx;
trace!(target: "miner", "Getting price info");
self.price_info.get(move |price: PriceInfo| {
trace!(target: "miner", "Price info arrived: {:?}", price);
let usd_per_eth = price.ethusd;
let wei_per_usd: f32 = 1.0e18 / usd_per_eth;
let gas_per_tx: f32 = 21000.0;
let wei_per_gas: f32 = wei_per_usd * usd_per_tx / gas_per_tx;
info!(target: "miner", "Updated conversion rate to Ξ1 = {} ({} wei/gas)", Colour::White.bold().paint(format!("US${:.2}", usd_per_eth)), Colour::Yellow.bold().paint(format!("{}", wei_per_gas)));
set_price(U256::from(wei_per_gas as u64));
});
self.next_calibration = Instant::now() + self.options.recalibration_period;
}
}
}
/// Struct to look after updating the acceptable gas price of a miner.
#[derive(Debug, PartialEq)]
pub enum GasPricer {
/// A fixed gas price in terms of Wei - always the argument given.
Fixed(U256),
/// Gas price is calibrated according to a fixed amount of USD.
Calibrated(GasPriceCalibrator),
}
impl GasPricer {
/// Create a new Calibrated `GasPricer`.
pub fn new_calibrated(options: GasPriceCalibratorOptions, fetch: FetchClient, p: CpuPool) -> GasPricer {
GasPricer::Calibrated(GasPriceCalibrator {
options: options,
next_calibration: Instant::now(),
price_info: PriceInfoClient::new(fetch, p),
})
}
/// Create a new Fixed `GasPricer`.
pub fn new_fixed(gas_price: U256) -> GasPricer {
GasPricer::Fixed(gas_price)
}
/// Recalibrate current gas price.
pub fn recalibrate<F: FnOnce(U256) + Sync + Send + 'static>(&mut self, set_price: F) {
match *self {
GasPricer::Fixed(ref max) => set_price(max.clone()),
GasPricer::Calibrated(ref mut cal) => cal.recalibrate(set_price),
}
}
}
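
For reference, the recalibration above boils down to wei_per_gas = (1e18 / usd_per_eth) * usd_per_tx / 21000. A self-contained sketch of that arithmetic, with purely illustrative prices (not values used anywhere in this change):

// Standalone sketch of the conversion performed in GasPriceCalibrator::recalibrate.
fn wei_per_gas(usd_per_eth: f32, usd_per_tx: f32) -> u64 {
    let wei_per_usd: f32 = 1.0e18 / usd_per_eth;
    let gas_per_tx: f32 = 21000.0;
    (wei_per_usd * usd_per_tx / gas_per_tx) as u64
}

fn main() {
    // e.g. ETH at $500 and a $0.0025 target per plain transfer
    println!("{} wei/gas", wei_per_gas(500.0, 0.0025)); // ~238_000_000 wei/gas (~0.24 Gwei)
}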


@ -19,27 +19,33 @@
//! Miner module //! Miner module
//! Keeps track of transactions and mined block. //! Keeps track of transactions and mined block.
extern crate common_types as types; extern crate ansi_term;
extern crate ethabi;
extern crate ethcore_transaction as transaction; extern crate ethcore_transaction as transaction;
extern crate ethereum_types; extern crate ethereum_types;
extern crate futures; extern crate futures;
extern crate futures_cpupool;
extern crate heapsize; extern crate heapsize;
extern crate keccak_hash as hash; extern crate keccak_hash as hash;
extern crate linked_hash_map; extern crate linked_hash_map;
extern crate parking_lot; extern crate parking_lot;
extern crate table; extern crate price_info;
extern crate transient_hashmap; extern crate rayon;
extern crate trace_time;
extern crate transaction_pool as txpool;
#[cfg(test)] #[macro_use]
extern crate ethkey; extern crate error_chain;
#[macro_use] #[macro_use]
extern crate log; extern crate log;
#[cfg(test)] #[cfg(test)]
extern crate rustc_hex; extern crate rustc_hex;
#[cfg(test)]
extern crate ethkey;
#[cfg(test)]
extern crate env_logger;
pub mod banning_queue;
pub mod external; pub mod external;
pub mod local_transactions; pub mod gas_pricer;
pub mod transaction_queue; pub mod pool;
pub mod work_notify; pub mod work_notify;


@ -1,220 +0,0 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Local Transactions List.
use ethereum_types::{H256, U256};
use linked_hash_map::LinkedHashMap;
use transaction::{self, SignedTransaction, PendingTransaction};
/// Status of local transaction.
/// Can indicate that the transaction is currently part of the queue (`Pending/Future`)
/// or gives a reason why the transaction was removed.
#[derive(Debug, PartialEq, Clone)]
pub enum Status {
/// The transaction is currently in the transaction queue.
Pending,
/// The transaction is in future part of the queue.
Future,
/// Transaction is already mined.
Mined(SignedTransaction),
/// Transaction is dropped because of limit
Dropped(SignedTransaction),
/// Replaced because of higher gas price of another transaction.
Replaced(SignedTransaction, U256, H256),
/// Transaction was never accepted to the queue.
Rejected(SignedTransaction, transaction::Error),
/// Transaction is invalid.
Invalid(SignedTransaction),
/// Transaction was canceled.
Canceled(PendingTransaction),
}
impl Status {
fn is_current(&self) -> bool {
*self == Status::Pending || *self == Status::Future
}
}
/// Keeps track of local transactions that are in the queue or were mined/dropped recently.
#[derive(Debug)]
pub struct LocalTransactionsList {
max_old: usize,
transactions: LinkedHashMap<H256, Status>,
}
impl Default for LocalTransactionsList {
fn default() -> Self {
Self::new(10)
}
}
impl LocalTransactionsList {
/// Create a new list of local transactions.
pub fn new(max_old: usize) -> Self {
LocalTransactionsList {
max_old: max_old,
transactions: Default::default(),
}
}
/// Mark transaction with given hash as pending.
pub fn mark_pending(&mut self, hash: H256) {
debug!(target: "own_tx", "Imported to Current (hash {:?})", hash);
self.clear_old();
self.transactions.insert(hash, Status::Pending);
}
/// Mark transaction with given hash as future.
pub fn mark_future(&mut self, hash: H256) {
debug!(target: "own_tx", "Imported to Future (hash {:?})", hash);
self.transactions.insert(hash, Status::Future);
self.clear_old();
}
/// Mark given transaction as rejected from the queue.
pub fn mark_rejected(&mut self, tx: SignedTransaction, err: transaction::Error) {
debug!(target: "own_tx", "Transaction rejected (hash {:?}): {:?}", tx.hash(), err);
self.transactions.insert(tx.hash(), Status::Rejected(tx, err));
self.clear_old();
}
/// Mark the transaction as replaced by transaction with given hash.
pub fn mark_replaced(&mut self, tx: SignedTransaction, gas_price: U256, hash: H256) {
debug!(target: "own_tx", "Transaction replaced (hash {:?}) by {:?} (new gas price: {:?})", tx.hash(), hash, gas_price);
self.transactions.insert(tx.hash(), Status::Replaced(tx, gas_price, hash));
self.clear_old();
}
/// Mark transaction as invalid.
pub fn mark_invalid(&mut self, tx: SignedTransaction) {
warn!(target: "own_tx", "Transaction marked invalid (hash {:?})", tx.hash());
self.transactions.insert(tx.hash(), Status::Invalid(tx));
self.clear_old();
}
/// Mark transaction as canceled.
pub fn mark_canceled(&mut self, tx: PendingTransaction) {
warn!(target: "own_tx", "Transaction canceled (hash {:?})", tx.hash());
self.transactions.insert(tx.hash(), Status::Canceled(tx));
self.clear_old();
}
/// Mark transaction as dropped because of limit.
pub fn mark_dropped(&mut self, tx: SignedTransaction) {
warn!(target: "own_tx", "Transaction dropped (hash {:?})", tx.hash());
self.transactions.insert(tx.hash(), Status::Dropped(tx));
self.clear_old();
}
/// Mark transaction as mined.
pub fn mark_mined(&mut self, tx: SignedTransaction) {
info!(target: "own_tx", "Transaction mined (hash {:?})", tx.hash());
self.transactions.insert(tx.hash(), Status::Mined(tx));
self.clear_old();
}
/// Returns true if the transaction is already in local transactions.
pub fn contains(&self, hash: &H256) -> bool {
self.transactions.contains_key(hash)
}
/// Return a map of all currently stored transactions.
pub fn all_transactions(&self) -> &LinkedHashMap<H256, Status> {
&self.transactions
}
fn clear_old(&mut self) {
let number_of_old = self.transactions
.values()
.filter(|status| !status.is_current())
.count();
if self.max_old >= number_of_old {
return;
}
let to_remove = self.transactions
.iter()
.filter(|&(_, status)| !status.is_current())
.map(|(hash, _)| *hash)
.take(number_of_old - self.max_old)
.collect::<Vec<_>>();
for hash in to_remove {
self.transactions.remove(&hash);
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use ethereum_types::U256;
use ethkey::{Random, Generator};
#[test]
fn should_add_transaction_as_pending() {
// given
let mut list = LocalTransactionsList::default();
// when
list.mark_pending(10.into());
list.mark_future(20.into());
// then
assert!(list.contains(&10.into()), "Should contain the transaction.");
assert!(list.contains(&20.into()), "Should contain the transaction.");
let statuses = list.all_transactions().values().cloned().collect::<Vec<Status>>();
assert_eq!(statuses, vec![Status::Pending, Status::Future]);
}
#[test]
fn should_clear_old_transactions() {
// given
let mut list = LocalTransactionsList::new(1);
let tx1 = new_tx(10.into());
let tx1_hash = tx1.hash();
let tx2 = new_tx(50.into());
let tx2_hash = tx2.hash();
list.mark_pending(10.into());
list.mark_invalid(tx1);
list.mark_dropped(tx2);
assert!(list.contains(&tx2_hash));
assert!(!list.contains(&tx1_hash));
assert!(list.contains(&10.into()));
// when
list.mark_future(15.into());
// then
assert!(list.contains(&10.into()));
assert!(list.contains(&15.into()));
}
fn new_tx(nonce: U256) -> SignedTransaction {
let keypair = Random.generate().unwrap();
transaction::Transaction {
action: transaction::Action::Create,
value: U256::from(100),
data: Default::default(),
gas: U256::from(10),
gas_price: U256::from(1245),
nonce: nonce
}.sign(keypair.secret(), None)
}
}

miner/src/pool/client.rs (new file, 71 lines)

@ -0,0 +1,71 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Transaction Pool state client.
//!
//! `Client` encapsulates all external data required for verification and readiness.
//! It includes any Ethereum state parts required for checking the transaction and
//! any consensus-required structure of the transaction.
use std::fmt;
use ethereum_types::{U256, H256, H160 as Address};
use transaction;
/// Account Details
#[derive(Debug, Clone)]
pub struct AccountDetails {
/// Current account nonce
pub nonce: U256,
/// Current account balance
pub balance: U256,
/// Is this account a local account?
pub is_local: bool,
}
/// Transaction type
#[derive(Debug, PartialEq)]
pub enum TransactionType {
/// Regular transaction
Regular,
/// Service transaction (allowed by a contract to have gas_price=0)
Service,
}
/// Verification client.
pub trait Client: fmt::Debug + Sync {
/// Is transaction with given hash already in the blockchain?
fn transaction_already_included(&self, hash: &H256) -> bool;
/// Structurally verify the given transaction.
fn verify_transaction(&self, tx: transaction::UnverifiedTransaction)
-> Result<transaction::SignedTransaction, transaction::Error>;
/// Estimate the minimal gas requirement for the given transaction.
fn required_gas(&self, tx: &transaction::Transaction) -> U256;
/// Fetch account details for given sender.
fn account_details(&self, address: &Address) -> AccountDetails;
/// Classify transaction (check if transaction is filtered by some contracts).
fn transaction_type(&self, tx: &transaction::SignedTransaction) -> TransactionType;
}
/// State nonce client
pub trait NonceClient: fmt::Debug + Sync {
/// Fetch only account nonce for given sender.
fn account_nonce(&self, address: &Address) -> U256;
}
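
To make the split between `Client` and `NonceClient` concrete, here is a minimal sketch of a map-backed `NonceClient`, the kind of stand-in a test might use. Only the trait signature comes from the file above; the struct and its storage are assumptions:

use std::collections::HashMap;
use ethereum_types::{H160 as Address, U256};

// Hypothetical in-memory nonce source; `NonceClient` is the trait defined above.
#[derive(Debug, Default)]
struct MapNonceClient {
    nonces: HashMap<Address, U256>,
}

impl NonceClient for MapNonceClient {
    fn account_nonce(&self, address: &Address) -> U256 {
        // Unknown senders start at nonce 0.
        self.nonces.get(address).cloned().unwrap_or_default()
    }
}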

miner/src/pool/listener.rs (new file, 161 lines)

@ -0,0 +1,161 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Notifier for new transaction hashes.
use std::fmt;
use std::sync::Arc;
use ethereum_types::H256;
use txpool::{self, VerifiedTransaction};
use pool::VerifiedTransaction as Transaction;
type Listener = Box<Fn(&[H256]) + Send + Sync>;
/// Manages notifications to pending transaction listeners.
#[derive(Default)]
pub struct Notifier {
listeners: Vec<Listener>,
pending: Vec<H256>,
}
impl fmt::Debug for Notifier {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
fmt.debug_struct("Notifier")
.field("listeners", &self.listeners.len())
.field("pending", &self.pending)
.finish()
}
}
impl Notifier {
/// Add new listener to receive notifications.
pub fn add(&mut self, f: Listener) {
self.listeners.push(f)
}
/// Notify listeners about all currently pending transactions.
pub fn notify(&mut self) {
for l in &self.listeners {
(l)(&self.pending);
}
self.pending.clear();
}
}
impl txpool::Listener<Transaction> for Notifier {
fn added(&mut self, tx: &Arc<Transaction>, _old: Option<&Arc<Transaction>>) {
self.pending.push(*tx.hash());
}
}
/// Transaction pool logger.
#[derive(Default, Debug)]
pub struct Logger;
impl txpool::Listener<Transaction> for Logger {
fn added(&mut self, tx: &Arc<Transaction>, old: Option<&Arc<Transaction>>) {
debug!(target: "txqueue", "[{:?}] Added to the pool.", tx.hash());
debug!(
target: "txqueue",
"[{hash:?}] Sender: {sender}, nonce: {nonce}, gasPrice: {gas_price}, gas: {gas}, value: {value}, dataLen: {data}))",
hash = tx.hash(),
sender = tx.sender(),
nonce = tx.signed().nonce,
gas_price = tx.signed().gas_price,
gas = tx.signed().gas,
value = tx.signed().value,
data = tx.signed().data.len(),
);
if let Some(old) = old {
debug!(target: "txqueue", "[{:?}] Dropped. Replaced by [{:?}]", old.hash(), tx.hash());
}
}
fn rejected(&mut self, _tx: &Arc<Transaction>, reason: &txpool::ErrorKind) {
trace!(target: "txqueue", "Rejected {}.", reason);
}
fn dropped(&mut self, tx: &Arc<Transaction>, new: Option<&Transaction>) {
match new {
Some(new) => debug!(target: "txqueue", "[{:?}] Pushed out by [{:?}]", tx.hash(), new.hash()),
None => debug!(target: "txqueue", "[{:?}] Dropped.", tx.hash()),
}
}
fn invalid(&mut self, tx: &Arc<Transaction>) {
debug!(target: "txqueue", "[{:?}] Marked as invalid by executor.", tx.hash());
}
fn canceled(&mut self, tx: &Arc<Transaction>) {
debug!(target: "txqueue", "[{:?}] Canceled by the user.", tx.hash());
}
fn mined(&mut self, tx: &Arc<Transaction>) {
debug!(target: "txqueue", "[{:?}] Mined.", tx.hash());
}
}
#[cfg(test)]
mod tests {
use super::*;
use parking_lot::Mutex;
use transaction;
use txpool::Listener;
#[test]
fn should_notify_listeners() {
// given
let received = Arc::new(Mutex::new(vec![]));
let r = received.clone();
let listener = Box::new(move |hashes: &[H256]| {
*r.lock() = hashes.iter().map(|x| *x).collect();
});
let mut tx_listener = Notifier::default();
tx_listener.add(listener);
// when
let tx = new_tx();
tx_listener.added(&tx, None);
assert_eq!(*received.lock(), vec![]);
// then
tx_listener.notify();
assert_eq!(
*received.lock(),
vec!["13aff4201ac1dc49daf6a7cf07b558ed956511acbaabf9502bdacc353953766d".parse().unwrap()]
);
}
fn new_tx() -> Arc<Transaction> {
let signed = transaction::Transaction {
action: transaction::Action::Create,
data: vec![1, 2, 3],
nonce: 5.into(),
gas: 21_000.into(),
gas_price: 5.into(),
value: 0.into(),
}.fake_sign(5.into());
Arc::new(Transaction::from_pending_block_transaction(signed))
}
}
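
A short usage sketch of the `Notifier` above: `added` only buffers hashes, and listeners fire when `notify` is called. The `Mutex`-based listener below is an illustration; `Notifier`, `H256` and a pool `VerifiedTransaction` value `tx` (e.g. built as in the test above) are assumed to be in scope:

use std::sync::{Arc, Mutex};

// Sketch only: wire a listener, buffer one hash, then flush it.
let seen: Arc<Mutex<Vec<H256>>> = Arc::new(Mutex::new(Vec::new()));
let sink = seen.clone();

let mut notifier = Notifier::default();
notifier.add(Box::new(move |hashes: &[H256]| {
    sink.lock().unwrap().extend_from_slice(hashes);
}));

txpool::Listener::added(&mut notifier, &tx, None); // buffers the hash, nothing delivered yet
assert!(seen.lock().unwrap().is_empty());
notifier.notify();                                 // flushes buffered hashes to all listeners
assert_eq!(seen.lock().unwrap().len(), 1);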


@ -0,0 +1,273 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Local Transactions List.
use std::sync::Arc;
use ethereum_types::H256;
use linked_hash_map::LinkedHashMap;
use pool::VerifiedTransaction as Transaction;
use txpool::{self, VerifiedTransaction};
/// Status of local transaction.
/// Can indicate that the transaction is currently part of the queue (`Pending/Future`)
/// or gives a reason why the transaction was removed.
#[derive(Debug, PartialEq, Clone)]
pub enum Status {
/// The transaction is currently in the transaction queue.
Pending(Arc<Transaction>),
/// Transaction is already mined.
Mined(Arc<Transaction>),
/// Transaction is dropped because of limit
Dropped(Arc<Transaction>),
/// Replaced because of higher gas price of another transaction.
Replaced {
/// Replaced transaction
old: Arc<Transaction>,
/// Transaction that replaced this one.
new: Arc<Transaction>,
},
/// Transaction was never accepted to the queue.
/// It means that it was too cheap to replace any transaction already in the pool.
Rejected(Arc<Transaction>, String),
/// Transaction is invalid.
Invalid(Arc<Transaction>),
/// Transaction was canceled.
Canceled(Arc<Transaction>),
}
impl Status {
fn is_pending(&self) -> bool {
match *self {
Status::Pending(_) => true,
_ => false,
}
}
}
/// Keeps track of local transactions that are in the queue or were mined/dropped recently.
#[derive(Debug)]
pub struct LocalTransactionsList {
max_old: usize,
transactions: LinkedHashMap<H256, Status>,
pending: usize,
}
impl Default for LocalTransactionsList {
fn default() -> Self {
Self::new(10)
}
}
impl LocalTransactionsList {
/// Create a new list of local transactions.
pub fn new(max_old: usize) -> Self {
LocalTransactionsList {
max_old,
transactions: Default::default(),
pending: 0,
}
}
/// Returns true if the transaction is already in local transactions.
pub fn contains(&self, hash: &H256) -> bool {
self.transactions.contains_key(hash)
}
/// Return a map of all currently stored transactions.
pub fn all_transactions(&self) -> &LinkedHashMap<H256, Status> {
&self.transactions
}
/// Returns true if there are pending local transactions.
pub fn has_pending(&self) -> bool {
self.pending > 0
}
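// Removes the oldest non-pending entries so that at most `max_old` of them remain.
// Worked example (illustrative numbers): with `max_old = 10`, 3 pending and 14
// non-pending entries, the 4 oldest non-pending entries are dropped.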
fn clear_old(&mut self) {
let number_of_old = self.transactions.len() - self.pending;
if self.max_old >= number_of_old {
return;
}
let to_remove: Vec<_> = self.transactions
.iter()
.filter(|&(_, status)| !status.is_pending())
.map(|(hash, _)| *hash)
.take(number_of_old - self.max_old)
.collect();
for hash in to_remove {
self.transactions.remove(&hash);
}
}
fn insert(&mut self, hash: H256, status: Status) {
let result = self.transactions.insert(hash, status);
if let Some(old) = result {
if old.is_pending() {
self.pending -= 1;
}
}
}
}
impl txpool::Listener<Transaction> for LocalTransactionsList {
fn added(&mut self, tx: &Arc<Transaction>, old: Option<&Arc<Transaction>>) {
if !tx.priority().is_local() {
return;
}
debug!(target: "own_tx", "Imported to the pool (hash {:?})", tx.hash());
self.clear_old();
self.insert(*tx.hash(), Status::Pending(tx.clone()));
self.pending += 1;
if let Some(old) = old {
if self.transactions.contains_key(old.hash()) {
self.insert(*old.hash(), Status::Replaced {
old: old.clone(),
new: tx.clone(),
});
}
}
}
fn rejected(&mut self, tx: &Arc<Transaction>, reason: &txpool::ErrorKind) {
if !tx.priority().is_local() {
return;
}
debug!(target: "own_tx", "Transaction rejected (hash {:?}). {}", tx.hash(), reason);
self.insert(*tx.hash(), Status::Rejected(tx.clone(), format!("{}", reason)));
self.clear_old();
}
fn dropped(&mut self, tx: &Arc<Transaction>, new: Option<&Transaction>) {
if !tx.priority().is_local() {
return;
}
match new {
Some(new) => warn!(target: "own_tx", "Transaction pushed out because of limit (hash {:?}, replacement: {:?})", tx.hash(), new.hash()),
None => warn!(target: "own_tx", "Transaction dropped because of limit (hash: {:?})", tx.hash()),
}
self.insert(*tx.hash(), Status::Dropped(tx.clone()));
self.clear_old();
}
fn invalid(&mut self, tx: &Arc<Transaction>) {
if !tx.priority().is_local() {
return;
}
warn!(target: "own_tx", "Transaction marked invalid (hash {:?})", tx.hash());
self.insert(*tx.hash(), Status::Invalid(tx.clone()));
self.clear_old();
}
fn canceled(&mut self, tx: &Arc<Transaction>) {
if !tx.priority().is_local() {
return;
}
warn!(target: "own_tx", "Transaction canceled (hash {:?})", tx.hash());
self.insert(*tx.hash(), Status::Canceled(tx.clone()));
self.clear_old();
}
/// The transaction has been mined.
fn mined(&mut self, tx: &Arc<Transaction>) {
if !tx.priority().is_local() {
return;
}
info!(target: "own_tx", "Transaction mined (hash {:?})", tx.hash());
self.insert(*tx.hash(), Status::Mined(tx.clone()));
}
}
#[cfg(test)]
mod tests {
use super::*;
use ethereum_types::U256;
use ethkey::{Random, Generator};
use transaction;
use txpool::Listener;
use pool;
#[test]
fn should_add_transaction_as_pending() {
// given
let mut list = LocalTransactionsList::default();
let tx1 = new_tx(10);
let tx2 = new_tx(20);
// when
list.added(&tx1, None);
list.added(&tx2, None);
// then
assert!(list.contains(tx1.hash()));
assert!(list.contains(tx2.hash()));
let statuses = list.all_transactions().values().cloned().collect::<Vec<Status>>();
assert_eq!(statuses, vec![Status::Pending(tx1), Status::Pending(tx2)]);
}
#[test]
fn should_clear_old_transactions() {
// given
let mut list = LocalTransactionsList::new(1);
let tx1 = new_tx(10);
let tx2 = new_tx(50);
let tx3 = new_tx(51);
list.added(&tx1, None);
list.invalid(&tx1);
list.dropped(&tx2, None);
assert!(!list.contains(tx1.hash()));
assert!(list.contains(tx2.hash()));
assert!(!list.contains(tx3.hash()));
// when
list.added(&tx3, Some(&tx1));
// then
assert!(!list.contains(tx1.hash()));
assert!(list.contains(tx2.hash()));
assert!(list.contains(tx3.hash()));
}
fn new_tx<T: Into<U256>>(nonce: T) -> Arc<Transaction> {
let keypair = Random.generate().unwrap();
let signed = transaction::Transaction {
action: transaction::Action::Create,
value: U256::from(100),
data: Default::default(),
gas: U256::from(10),
gas_price: U256::from(1245),
nonce: nonce.into(),
}.sign(keypair.secret(), None);
let mut tx = Transaction::from_pending_block_transaction(signed);
tx.priority = pool::Priority::Local;
Arc::new(tx)
}
}

miner/src/pool/mod.rs Normal file
@@ -0,0 +1,135 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Transaction Pool
use ethereum_types::{H256, Address};
use heapsize::HeapSizeOf;
use transaction;
use txpool;
mod listener;
mod queue;
mod ready;
mod scoring;
pub mod client;
pub mod local_transactions;
pub mod verifier;
#[cfg(test)]
mod tests;
pub use self::queue::{TransactionQueue, Status as QueueStatus};
pub use self::txpool::{VerifiedTransaction as PoolVerifiedTransaction, Options};
/// How to prioritize transactions in the pool
///
/// TODO [ToDr] Implement more strategies.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
pub enum PrioritizationStrategy {
/// Simple gas-price based prioritization.
GasPriceOnly,
}
/// Transaction priority.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
pub(crate) enum Priority {
/// Local transactions (high priority)
///
/// Transactions either from a local account or
/// submitted over local RPC connection via `eth_sendRawTransaction`
Local,
/// Transactions from retracted blocks (medium priority)
///
/// When a block becomes non-canonical we re-import the transactions it contains
/// to the queue and boost their priority.
Retracted,
/// Regular transactions received over the network. (no priority boost)
Regular,
}
impl Priority {
fn is_local(&self) -> bool {
match *self {
Priority::Local => true,
_ => false,
}
}
}
/// Verified transaction stored in the pool.
#[derive(Debug, PartialEq, Eq)]
pub struct VerifiedTransaction {
transaction: transaction::PendingTransaction,
// TODO [ToDr] hash and sender should go directly from the transaction
hash: H256,
sender: Address,
priority: Priority,
insertion_id: usize,
}
impl VerifiedTransaction {
/// Create `VerifiedTransaction` directly from `SignedTransaction`.
///
/// This method should be used only:
/// 1. for tests
/// 2. when converting pending block transactions that are already in the queue, to match the function signature.
pub fn from_pending_block_transaction(tx: transaction::SignedTransaction) -> Self {
let hash = tx.hash();
let sender = tx.sender();
VerifiedTransaction {
transaction: tx.into(),
hash,
sender,
priority: Priority::Retracted,
insertion_id: 0,
}
}
/// Gets transaction priority.
pub(crate) fn priority(&self) -> Priority {
self.priority
}
/// Gets wrapped `SignedTransaction`
pub fn signed(&self) -> &transaction::SignedTransaction {
&self.transaction
}
/// Gets wrapped `PendingTransaction`
pub fn pending(&self) -> &transaction::PendingTransaction {
&self.transaction
}
}
impl txpool::VerifiedTransaction for VerifiedTransaction {
fn hash(&self) -> &H256 {
&self.hash
}
fn mem_usage(&self) -> usize {
self.transaction.heap_size_of_children()
}
fn sender(&self) -> &Address {
&self.sender
}
fn insertion_id(&self) -> u64 {
self.insertion_id as u64
}
}

miner/src/pool/queue.rs Normal file
@@ -0,0 +1,445 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Ethereum Transaction Queue
use std::{cmp, fmt};
use std::sync::Arc;
use std::sync::atomic::{self, AtomicUsize};
use std::collections::BTreeMap;
use ethereum_types::{H256, U256, Address};
use parking_lot::RwLock;
use rayon::prelude::*;
use transaction;
use txpool::{self, Verifier};
use pool::{self, scoring, verifier, client, ready, listener, PrioritizationStrategy};
use pool::local_transactions::LocalTransactionsList;
type Listener = (LocalTransactionsList, (listener::Notifier, listener::Logger));
type Pool = txpool::Pool<pool::VerifiedTransaction, scoring::NonceAndGasPrice, Listener>;
/// Max cache time in milliseconds for pending transactions.
///
/// Pending transactions are cached and will only be computed again
/// if the last cached set was created earlier than `TIMESTAMP_CACHE` ms ago.
/// This timeout applies only if there are local pending transactions
/// since it only affects transaction Condition.
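///
/// For example, if the pending set was cached at timestamp `1_000` and the pool
/// contains local pending transactions, a request at timestamp `2_001` re-computes
/// the set, while a request at `1_900` still returns the cached copy.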
const TIMESTAMP_CACHE: u64 = 1000;
/// Transaction queue status.
#[derive(Debug, Clone, PartialEq)]
pub struct Status {
/// Verifier options.
pub options: verifier::Options,
/// Current status of the transaction pool.
pub status: txpool::LightStatus,
/// Current limits of the transaction pool.
pub limits: txpool::Options,
}
impl fmt::Display for Status {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
writeln!(
fmt,
"Pool: {current}/{max} ({senders} senders; {mem}/{mem_max} kB) [minGasPrice: {gp} Mwei, maxGas: {max_gas}]",
current = self.status.transaction_count,
max = self.limits.max_count,
senders = self.status.senders,
mem = self.status.mem_usage / 1024,
mem_max = self.limits.max_mem_usage / 1024,
gp = self.options.minimal_gas_price / 1_000_000.into(),
max_gas = cmp::min(self.options.block_gas_limit, self.options.tx_gas_limit),
)
}
}
#[derive(Debug)]
struct CachedPending {
block_number: u64,
current_timestamp: u64,
nonce_cap: Option<U256>,
has_local_pending: bool,
pending: Option<Vec<Arc<pool::VerifiedTransaction>>>,
}
impl CachedPending {
/// Creates new `CachedPending` without cached set.
pub fn none() -> Self {
CachedPending {
block_number: 0,
current_timestamp: 0,
has_local_pending: false,
pending: None,
nonce_cap: None,
}
}
/// Remove cached pending set.
pub fn clear(&mut self) {
self.pending = None;
}
/// Returns cached pending set (if any) if it's valid.
pub fn pending(
&self,
block_number: u64,
current_timestamp: u64,
nonce_cap: Option<&U256>,
) -> Option<Vec<Arc<pool::VerifiedTransaction>>> {
// First check if we have anything in cache.
let pending = self.pending.as_ref()?;
if block_number != self.block_number {
return None;
}
// In case we don't have any local pending transactions
// there is no need to invalidate the cache because of timestamp.
// Timestamp only affects local `PendingTransactions` with `Condition::Timestamp`.
if self.has_local_pending && current_timestamp > self.current_timestamp + TIMESTAMP_CACHE {
return None;
}
// It's fine to return a limited set even if `nonce_cap` is `None`.
// The worst that can happen is that some transactions won't get propagated in the current round,
// but they are not really valid in the current block anyway. We will propagate them in the next round.
// Also there is no way to have both `Some` with different numbers since it depends on the block number
// and a constant parameter in schedule (`nonce_cap_increment`)
if self.nonce_cap.is_none() && nonce_cap.is_some() {
return None;
}
Some(pending.clone())
}
}
/// Ethereum Transaction Queue
///
/// Responsible for:
/// - verifying incoming transactions
/// - maintaining a pool of verified transactions.
/// - returning an iterator for transactions that are ready to be included in block (pending)
#[derive(Debug)]
pub struct TransactionQueue {
insertion_id: Arc<AtomicUsize>,
pool: RwLock<Pool>,
options: RwLock<verifier::Options>,
cached_pending: RwLock<CachedPending>,
}
impl TransactionQueue {
/// Create new queue with given pool limits and initial verification options.
pub fn new(
limits: txpool::Options,
verification_options: verifier::Options,
strategy: PrioritizationStrategy,
) -> Self {
TransactionQueue {
insertion_id: Default::default(),
pool: RwLock::new(txpool::Pool::new(Default::default(), scoring::NonceAndGasPrice(strategy), limits)),
options: RwLock::new(verification_options),
cached_pending: RwLock::new(CachedPending::none()),
}
}
/// Update verification options
///
/// Some parameters of verification may vary in time (like block gas limit or minimal gas price).
pub fn set_verifier_options(&self, options: verifier::Options) {
*self.options.write() = options;
}
/// Import a set of transactions to the pool.
///
/// Given blockchain and state access (Client)
/// verifies and imports transactions to the pool.
pub fn import<C: client::Client>(
&self,
client: C,
transactions: Vec<verifier::Transaction>,
) -> Vec<Result<(), transaction::Error>> {
// Run verification
let _timer = ::trace_time::PerfTimer::new("queue::verifyAndImport");
let options = self.options.read().clone();
let verifier = verifier::Verifier::new(client, options, self.insertion_id.clone());
let results = transactions
.into_par_iter()
.map(|transaction| verifier.verify_transaction(transaction))
.map(|result| result.and_then(|verified| {
self.pool.write().import(verified)
.map(|_imported| ())
.map_err(convert_error)
}))
.collect::<Vec<_>>();
// Notify about imported transactions.
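// (`listener_mut().1` is the `(Notifier, Logger)` part of the `Listener` tuple alias
// defined above; `.0` selects the `Notifier`.)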
(self.pool.write().listener_mut().1).0.notify();
if results.iter().any(|r| r.is_ok()) {
self.cached_pending.write().clear();
}
results
}
/// Returns all transactions in the queue ordered by priority.
pub fn all_transactions(&self) -> Vec<Arc<pool::VerifiedTransaction>> {
let ready = |_tx: &pool::VerifiedTransaction| txpool::Readiness::Ready;
self.pool.read().pending(ready).collect()
}
/// Returns current pending transactions.
///
/// NOTE: This may return a cached version of the pending transaction set.
/// Re-computing the pending set is possible with the `#collect_pending` method,
/// but be aware that it's a pretty expensive operation.
pub fn pending<C>(
&self,
client: C,
block_number: u64,
current_timestamp: u64,
nonce_cap: Option<U256>,
) -> Vec<Arc<pool::VerifiedTransaction>> where
C: client::NonceClient,
{
if let Some(pending) = self.cached_pending.read().pending(block_number, current_timestamp, nonce_cap.as_ref()) {
return pending;
}
// Double check after acquiring write lock
let mut cached_pending = self.cached_pending.write();
if let Some(pending) = cached_pending.pending(block_number, current_timestamp, nonce_cap.as_ref()) {
return pending;
}
let pending: Vec<_> = self.collect_pending(client, block_number, current_timestamp, nonce_cap, |i| i.collect());
*cached_pending = CachedPending {
block_number,
current_timestamp,
nonce_cap,
has_local_pending: self.has_local_pending_transactions(),
pending: Some(pending.clone()),
};
pending
}
/// Collect pending transactions.
///
/// NOTE This re-computes the pending set, which might be expensive to do.
/// Prefer the cached pending set returned by the `#pending` method.
pub fn collect_pending<C, F, T>(
&self,
client: C,
block_number: u64,
current_timestamp: u64,
nonce_cap: Option<U256>,
collect: F,
) -> T where
C: client::NonceClient,
F: FnOnce(txpool::PendingIterator<
pool::VerifiedTransaction,
(ready::Condition, ready::State<C>),
scoring::NonceAndGasPrice,
Listener,
>) -> T,
{
let pending_readiness = ready::Condition::new(block_number, current_timestamp);
// don't mark any transactions as stale at this point.
let stale_id = None;
let state_readiness = ready::State::new(client, stale_id, nonce_cap);
let ready = (pending_readiness, state_readiness);
collect(self.pool.read().pending(ready))
}
/// Culls all stalled transactions from the pool.
pub fn cull<C: client::NonceClient>(
&self,
client: C,
) {
// We don't care about future transactions, so nonce_cap is not important.
let nonce_cap = None;
// We want to clear stale transactions from the queue as well.
// (Transactions that have been occupying the queue for a long time without being included.)
let stale_id = {
let current_id = self.insertion_id.load(atomic::Ordering::Relaxed) as u64;
// wait at least for half of the queue to be replaced
let gap = self.pool.read().options().max_count / 2;
// but never less than 100 transactions
let gap = cmp::max(100, gap) as u64;
current_id.checked_sub(gap)
};
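// Worked example (illustrative numbers): with `max_count = 10_000` and
// `current_id = 12_345`, `gap = max(100, 5_000) = 5_000`, so `stale_id = Some(7_345)`
// and future transactions inserted before id 7_345 will be reported as `Stalled`.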
let state_readiness = ready::State::new(client, stale_id, nonce_cap);
let removed = self.pool.write().cull(None, state_readiness);
debug!(target: "txqueue", "Removed {} stalled transactions. {}", removed, self.status());
}
/// Returns next valid nonce for given sender
/// or `None` if there are no pending transactions from that sender.
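///
/// For example, if the sender's ready transactions in the queue have nonces 5, 6 and 7,
/// this returns `Some(8)`.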
pub fn next_nonce<C: client::NonceClient>(
&self,
client: C,
address: &Address,
) -> Option<U256> {
// Do not take nonce_cap into account when determining next nonce.
let nonce_cap = None;
// Also we ignore stale transactions in the queue.
let stale_id = None;
let state_readiness = ready::State::new(client, stale_id, nonce_cap);
self.pool.read().pending_from_sender(state_readiness, address)
.last()
.map(|tx| tx.signed().nonce + 1.into())
}
/// Retrieve a transaction from the pool.
///
/// Given transaction hash looks up that transaction in the pool
/// and returns a shared pointer to it or `None` if it's not present.
pub fn find(
&self,
hash: &H256,
) -> Option<Arc<pool::VerifiedTransaction>> {
self.pool.read().find(hash)
}
/// Remove a set of transactions from the pool.
///
/// Given an iterator of transaction hashes
/// removes them from the pool.
/// This method should be used if invalid transactions are detected
/// or you want to cancel a transaction.
pub fn remove<'a, T: IntoIterator<Item = &'a H256>>(
&self,
hashes: T,
is_invalid: bool,
) -> Vec<Option<Arc<pool::VerifiedTransaction>>> {
let results = {
let mut pool = self.pool.write();
hashes
.into_iter()
.map(|hash| pool.remove(hash, is_invalid))
.collect::<Vec<_>>()
};
if results.iter().any(Option::is_some) {
self.cached_pending.write().clear();
}
results
}
/// Clear the entire pool.
pub fn clear(&self) {
self.pool.write().clear();
}
/// Penalize given senders.
pub fn penalize<'a, T: IntoIterator<Item = &'a Address>>(&self, senders: T) {
let mut pool = self.pool.write();
for sender in senders {
pool.update_scores(sender, ());
}
}
/// Returns the gas price of the current worst transaction in the pool.
pub fn current_worst_gas_price(&self) -> U256 {
match self.pool.read().worst_transaction() {
Some(tx) => tx.signed().gas_price,
None => self.options.read().minimal_gas_price,
}
}
/// Returns a status of the queue.
pub fn status(&self) -> Status {
let pool = self.pool.read();
let status = pool.light_status();
let limits = pool.options();
let options = self.options.read().clone();
Status {
options,
status,
limits,
}
}
/// Check if there are any local transactions in the pool.
///
/// Returns `true` if there are any transactions in the pool
/// that have been marked as local.
///
/// Local transactions are the ones from accounts managed by this node
/// and transactions submitted via local RPC (`eth_sendRawTransaction`)
pub fn has_local_pending_transactions(&self) -> bool {
self.pool.read().listener().0.has_pending()
}
/// Returns status of recently seen local transactions.
pub fn local_transactions(&self) -> BTreeMap<H256, pool::local_transactions::Status> {
self.pool.read().listener().0.all_transactions().iter().map(|(a, b)| (*a, b.clone())).collect()
}
/// Add a callback to be notified about all transactions entering the pool.
pub fn add_listener(&self, f: Box<Fn(&[H256]) + Send + Sync>) {
let mut pool = self.pool.write();
(pool.listener_mut().1).0.add(f);
}
}
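// A minimal usage sketch (illustrative only; `client` stands for any type implementing
// `pool::client::Client` + `pool::client::NonceClient`, e.g. the `TestClient` used in tests):
//
//     let queue = TransactionQueue::new(
//         txpool::Options::default(),
//         verifier::Options::default(),
//         PrioritizationStrategy::GasPriceOnly,
//     );
//     let results = queue.import(client.clone(), transactions);
//     let pending = queue.pending(client, block_number, timestamp, None);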
fn convert_error(err: txpool::Error) -> transaction::Error {
use self::txpool::ErrorKind;
match *err.kind() {
ErrorKind::AlreadyImported(..) => transaction::Error::AlreadyImported,
ErrorKind::TooCheapToEnter(..) => transaction::Error::LimitReached,
ErrorKind::TooCheapToReplace(..) => transaction::Error::TooCheapToReplace,
ref e => {
warn!(target: "txqueue", "Unknown import error: {:?}", e);
transaction::Error::NotAllowed
},
}
}
#[cfg(test)]
mod tests {
use super::*;
use pool::tests::client::TestClient;
#[test]
fn should_get_pending_transactions() {
let queue = TransactionQueue::new(txpool::Options::default(), verifier::Options::default(), PrioritizationStrategy::GasPriceOnly);
let pending: Vec<_> = queue.pending(TestClient::default(), 0, 0, None);
for tx in pending {
assert!(tx.signed().nonce > 0.into());
}
}
}

miner/src/pool/ready.rs Normal file
@@ -0,0 +1,212 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Transaction Readiness indicator
//!
//! Transaction readiness is responsible for indicating if
//! particular transaction can be included in the block.
//!
//! Regular transactions are ready iff the current state nonce
//! (obtained from `NonceClient`) equals the transaction nonce.
//!
//! Let's define `S = state nonce`. Transactions are processed
//! in order, so we first include transaction with nonce `S`,
//! but then we are able to include the one with `S + 1` nonce.
//! So bear in mind that transactions can be included in chains
//! and their readiness is dependent on previous transactions from
//! the same sender.
//!
//! There are three possible outcomes:
//! - The transaction is old (stalled; state nonce > transaction nonce)
//! - The transaction is ready (current; state nonce == transaction nonce)
//! - The transaction is not ready yet (future; state nonce < transaction nonce)
//!
//! NOTE The transactions are always checked for readiness in the order they are stored within the queue.
//! The first `Readiness::Future` response also causes all subsequent transactions from the same sender
//! to be marked as `Future`.
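//!
//! Worked example: with state nonce `S = 5`, a queued transaction with nonce 4 is
//! reported as `Stalled`, nonce 5 as `Ready`, nonce 6 as `Ready` right after the
//! nonce-5 transaction from the same sender has been yielded, and nonce 8 (behind a
//! gap) as `Future`.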
use std::cmp;
use std::collections::HashMap;
use ethereum_types::{U256, H160 as Address};
use transaction;
use txpool::{self, VerifiedTransaction as PoolVerifiedTransaction};
use super::client::NonceClient;
use super::VerifiedTransaction;
/// Checks readiness of transactions by comparing the nonce to state nonce.
#[derive(Debug)]
pub struct State<C> {
nonces: HashMap<Address, U256>,
state: C,
max_nonce: Option<U256>,
stale_id: Option<u64>,
}
impl<C> State<C> {
/// Create new State checker, given client interface.
pub fn new(
state: C,
stale_id: Option<u64>,
max_nonce: Option<U256>,
) -> Self {
State {
nonces: Default::default(),
state,
max_nonce,
stale_id,
}
}
}
impl<C: NonceClient> txpool::Ready<VerifiedTransaction> for State<C> {
fn is_ready(&mut self, tx: &VerifiedTransaction) -> txpool::Readiness {
// Check max nonce
match self.max_nonce {
Some(nonce) if tx.transaction.nonce > nonce => {
return txpool::Readiness::Future;
},
_ => {},
}
let sender = tx.sender();
let state = &self.state;
let state_nonce = || state.account_nonce(sender);
let nonce = self.nonces.entry(*sender).or_insert_with(state_nonce);
match tx.transaction.nonce.cmp(nonce) {
// Before marking as future check for stale ids
cmp::Ordering::Greater => match self.stale_id {
Some(id) if tx.insertion_id() < id => txpool::Readiness::Stalled,
_ => txpool::Readiness::Future,
},
cmp::Ordering::Less => txpool::Readiness::Stalled,
cmp::Ordering::Equal => {
*nonce = *nonce + 1.into();
txpool::Readiness::Ready
},
}
}
}
/// Checks readiness of `Pending` transactions by comparing their condition with the current time and block number.
#[derive(Debug)]
pub struct Condition {
block_number: u64,
now: u64,
}
impl Condition {
/// Create a new condition checker given current block number and UTC timestamp.
pub fn new(block_number: u64, now: u64) -> Self {
Condition {
block_number,
now,
}
}
}
impl txpool::Ready<VerifiedTransaction> for Condition {
fn is_ready(&mut self, tx: &VerifiedTransaction) -> txpool::Readiness {
match tx.transaction.condition {
Some(transaction::Condition::Number(block)) if block > self.block_number => txpool::Readiness::Future,
Some(transaction::Condition::Timestamp(time)) if time > self.now => txpool::Readiness::Future,
_ => txpool::Readiness::Ready,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use txpool::Ready;
use pool::tests::client::TestClient;
use pool::tests::tx::{Tx, TxExt};
#[test]
fn should_return_correct_state_readiness() {
// given
let (tx1, tx2, tx3) = Tx::default().signed_triple();
let (tx1, tx2, tx3) = (tx1.verified(), tx2.verified(), tx3.verified());
// when
assert_eq!(State::new(TestClient::new(), None, None).is_ready(&tx3), txpool::Readiness::Future);
assert_eq!(State::new(TestClient::new(), None, None).is_ready(&tx2), txpool::Readiness::Future);
let mut ready = State::new(TestClient::new(), None, None);
// then
assert_eq!(ready.is_ready(&tx1), txpool::Readiness::Ready);
assert_eq!(ready.is_ready(&tx2), txpool::Readiness::Ready);
assert_eq!(ready.is_ready(&tx3), txpool::Readiness::Ready);
}
#[test]
fn should_return_future_if_nonce_cap_reached() {
// given
let tx = Tx::default().signed().verified();
// when
let res1 = State::new(TestClient::new(), None, Some(10.into())).is_ready(&tx);
let res2 = State::new(TestClient::new(), None, Some(124.into())).is_ready(&tx);
// then
assert_eq!(res1, txpool::Readiness::Future);
assert_eq!(res2, txpool::Readiness::Ready);
}
#[test]
fn should_return_stale_if_nonce_does_not_match() {
// given
let tx = Tx::default().signed().verified();
// when
let res = State::new(TestClient::new().with_nonce(125), None, None).is_ready(&tx);
// then
assert_eq!(res, txpool::Readiness::Stalled);
}
#[test]
fn should_return_stale_for_old_transactions() {
// given
let (_, tx) = Tx::default().signed_pair().verified();
// when
let res = State::new(TestClient::new(), Some(1), None).is_ready(&tx);
// then
assert_eq!(res, txpool::Readiness::Stalled);
}
#[test]
fn should_check_readiness_of_condition() {
// given
let tx = Tx::default().signed();
let v = |tx: transaction::PendingTransaction| TestClient::new().verify(tx);
let tx1 = v(transaction::PendingTransaction::new(tx.clone(), transaction::Condition::Number(5).into()));
let tx2 = v(transaction::PendingTransaction::new(tx.clone(), transaction::Condition::Timestamp(3).into()));
let tx3 = v(transaction::PendingTransaction::new(tx.clone(), None));
// when/then
assert_eq!(Condition::new(0, 0).is_ready(&tx1), txpool::Readiness::Future);
assert_eq!(Condition::new(0, 0).is_ready(&tx2), txpool::Readiness::Future);
assert_eq!(Condition::new(0, 0).is_ready(&tx3), txpool::Readiness::Ready);
assert_eq!(Condition::new(5, 0).is_ready(&tx1), txpool::Readiness::Ready);
assert_eq!(Condition::new(0, 3).is_ready(&tx2), txpool::Readiness::Ready);
}
}

miner/src/pool/scoring.rs Normal file
@@ -0,0 +1,171 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Transaction Scoring and Ordering
//!
//! Ethereum transactions from the same sender are ordered by `nonce`.
//! Low nonces need to be included first. If there are two transactions from the same sender
//! and with the same `nonce` only one of them can be included.
//! We choose the one with higher gas price, but also require that gas price increment
//! is high enough to prevent attacking miners by requiring them to reshuffle/reexecute
//! the queue too often.
//!
//! Transactions between senders are prioritized using `gas price`. Higher `gas price`
//! yields more profits for miners. Additionally we prioritize transactions that originate
//! from our local node (own transactions).
use std::cmp;
use std::sync::Arc;
use ethereum_types::U256;
use txpool;
use super::{PrioritizationStrategy, VerifiedTransaction};
/// A transaction with the same `(sender, nonce)` can be replaced only if
/// `new_gas_price >= old_gas_price + (old_gas_price >> SHIFT)`.
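///
/// For example, with `old_gas_price = 8_000` the minimal accepted replacement price is
/// `8_000 + (8_000 >> 3) = 9_000`: a replacement priced at 8_500 is rejected, while one
/// priced at 9_100 replaces the old transaction.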
const GAS_PRICE_BUMP_SHIFT: usize = 3; // 2 = 25%, 3 = 12.5%, 4 = 6.25%
/// Simple, gas-price based scoring for transactions.
///
/// NOTE: Currently penalization does not apply to new transactions that enter the pool.
/// We might want to store penalization status in some persistent state.
#[derive(Debug)]
pub struct NonceAndGasPrice(pub PrioritizationStrategy);
impl txpool::Scoring<VerifiedTransaction> for NonceAndGasPrice {
type Score = U256;
type Event = ();
fn compare(&self, old: &VerifiedTransaction, other: &VerifiedTransaction) -> cmp::Ordering {
old.transaction.nonce.cmp(&other.transaction.nonce)
}
fn choose(&self, old: &VerifiedTransaction, new: &VerifiedTransaction) -> txpool::scoring::Choice {
if old.transaction.nonce != new.transaction.nonce {
return txpool::scoring::Choice::InsertNew
}
let old_gp = old.transaction.gas_price;
let new_gp = new.transaction.gas_price;
let min_required_gp = old_gp + (old_gp >> GAS_PRICE_BUMP_SHIFT);
match min_required_gp.cmp(&new_gp) {
cmp::Ordering::Greater => txpool::scoring::Choice::RejectNew,
_ => txpool::scoring::Choice::ReplaceOld,
}
}
fn update_scores(&self, txs: &[Arc<VerifiedTransaction>], scores: &mut [U256], change: txpool::scoring::Change) {
use self::txpool::scoring::Change;
match change {
Change::Culled(_) => {},
Change::RemovedAt(_) => {}
Change::InsertedAt(i) | Change::ReplacedAt(i) => {
assert!(i < txs.len());
assert!(i < scores.len());
scores[i] = txs[i].transaction.gas_price;
let boost = match txs[i].priority() {
super::Priority::Local => 15,
super::Priority::Retracted => 10,
super::Priority::Regular => 0,
};
scores[i] = scores[i] << boost;
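// e.g. a local transaction with gas price 1 ends up with score `1 << 15 = 32768`,
// a retracted one with `1 << 10 = 1024` (cf. `should_calculate_score_correctly` below).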
},
// We are only sending an event in case of penalization.
// So just lower the priority of all non-local transactions.
Change::Event(_) => {
for (score, tx) in scores.iter_mut().zip(txs) {
// Never penalize local transactions.
if !tx.priority().is_local() {
*score = *score >> 3;
}
}
},
}
}
fn should_replace(&self, old: &VerifiedTransaction, new: &VerifiedTransaction) -> bool {
if old.sender == new.sender {
// prefer earliest transaction
if new.transaction.nonce < old.transaction.nonce {
return true
}
}
self.choose(old, new) == txpool::scoring::Choice::ReplaceOld
}
}
#[cfg(test)]
mod tests {
use super::*;
use pool::tests::tx::{Tx, TxExt};
use txpool::Scoring;
#[test]
fn should_calculate_score_correctly() {
// given
let scoring = NonceAndGasPrice(PrioritizationStrategy::GasPriceOnly);
let (tx1, tx2, tx3) = Tx::default().signed_triple();
let transactions = vec![tx1, tx2, tx3].into_iter().enumerate().map(|(i, tx)| {
let mut verified = tx.verified();
verified.priority = match i {
0 => ::pool::Priority::Local,
1 => ::pool::Priority::Retracted,
_ => ::pool::Priority::Regular,
};
Arc::new(verified)
}).collect::<Vec<_>>();
let initial_scores = vec![U256::from(0), 0.into(), 0.into()];
// No update required
let mut scores = initial_scores.clone();
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::Culled(0));
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::Culled(1));
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::Culled(2));
assert_eq!(scores, initial_scores);
let mut scores = initial_scores.clone();
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::RemovedAt(0));
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::RemovedAt(1));
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::RemovedAt(2));
assert_eq!(scores, initial_scores);
// Compute score at given index
let mut scores = initial_scores.clone();
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::InsertedAt(0));
assert_eq!(scores, vec![32768.into(), 0.into(), 0.into()]);
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::InsertedAt(1));
assert_eq!(scores, vec![32768.into(), 1024.into(), 0.into()]);
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::InsertedAt(2));
assert_eq!(scores, vec![32768.into(), 1024.into(), 1.into()]);
let mut scores = initial_scores.clone();
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::ReplacedAt(0));
assert_eq!(scores, vec![32768.into(), 0.into(), 0.into()]);
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::ReplacedAt(1));
assert_eq!(scores, vec![32768.into(), 1024.into(), 0.into()]);
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::ReplacedAt(2));
assert_eq!(scores, vec![32768.into(), 1024.into(), 1.into()]);
// Check penalization
scoring.update_scores(&transactions, &mut *scores, txpool::scoring::Change::Event(()));
assert_eq!(scores, vec![32768.into(), 128.into(), 0.into()]);
}
}

@@ -0,0 +1,125 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use ethereum_types::{U256, H256, Address};
use transaction::{self, Transaction, SignedTransaction, UnverifiedTransaction};
use pool;
use pool::client::AccountDetails;
#[derive(Debug, Clone)]
pub struct TestClient {
account_details: AccountDetails,
gas_required: U256,
is_service_transaction: bool,
local_address: Address,
}
impl Default for TestClient {
fn default() -> Self {
TestClient {
account_details: AccountDetails {
nonce: 123.into(),
balance: 63_100.into(),
is_local: false,
},
gas_required: 21_000.into(),
is_service_transaction: false,
local_address: Default::default(),
}
}
}
impl TestClient {
pub fn new() -> Self {
TestClient::default()
}
pub fn with_balance<T: Into<U256>>(mut self, balance: T) -> Self {
self.account_details.balance = balance.into();
self
}
pub fn with_nonce<T: Into<U256>>(mut self, nonce: T) -> Self {
self.account_details.nonce = nonce.into();
self
}
pub fn with_gas_required<T: Into<U256>>(mut self, gas_required: T) -> Self {
self.gas_required = gas_required.into();
self
}
pub fn with_local(mut self, address: &Address) -> Self {
self.local_address = *address;
self
}
pub fn with_service_transaction(mut self) -> Self {
self.is_service_transaction = true;
self
}
pub fn verify<T: Into<transaction::PendingTransaction>>(&self, tx: T) -> pool::VerifiedTransaction {
let tx = tx.into();
pool::VerifiedTransaction {
hash: tx.hash(),
sender: tx.sender(),
priority: pool::Priority::Regular,
transaction: tx,
insertion_id: 1,
}
}
}
impl pool::client::Client for TestClient {
fn transaction_already_included(&self, _hash: &H256) -> bool {
false
}
fn verify_transaction(&self, tx: UnverifiedTransaction)
-> Result<SignedTransaction, transaction::Error>
{
Ok(SignedTransaction::new(tx)?)
}
fn account_details(&self, address: &Address) -> AccountDetails {
let mut details = self.account_details.clone();
if address == &self.local_address {
details.is_local = true;
}
details
}
fn required_gas(&self, _tx: &Transaction) -> U256 {
self.gas_required
}
fn transaction_type(&self, _tx: &SignedTransaction) -> pool::client::TransactionType {
if self.is_service_transaction {
pool::client::TransactionType::Service
} else {
pool::client::TransactionType::Regular
}
}
}
impl pool::client::NonceClient for TestClient {
fn account_nonce(&self, _address: &Address) -> U256 {
self.account_details.nonce
}
}

miner/src/pool/tests/mod.rs Normal file
@@ -0,0 +1,757 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use ethereum_types::U256;
use transaction::{self, PendingTransaction};
use txpool;
use pool::{verifier, TransactionQueue, PrioritizationStrategy};
pub mod tx;
pub mod client;
use self::tx::{Tx, TxExt, PairExt};
use self::client::TestClient;
fn new_queue() -> TransactionQueue {
TransactionQueue::new(
txpool::Options {
max_count: 3,
max_per_sender: 3,
max_mem_usage: 50
},
verifier::Options {
minimal_gas_price: 1.into(),
block_gas_limit: 1_000_000.into(),
tx_gas_limit: 1_000_000.into(),
},
PrioritizationStrategy::GasPriceOnly,
)
}
#[test]
fn should_return_correct_nonces_when_dropped_because_of_limit() {
// given
let txq = TransactionQueue::new(
txpool::Options {
max_count: 3,
max_per_sender: 1,
max_mem_usage: 50
},
verifier::Options {
minimal_gas_price: 1.into(),
block_gas_limit: 1_000_000.into(),
tx_gas_limit: 1_000_000.into(),
},
PrioritizationStrategy::GasPriceOnly,
);
let (tx1, tx2) = Tx::gas_price(2).signed_pair();
let sender = tx1.sender();
let nonce = tx1.nonce;
// when
let result = txq.import(TestClient::new(), vec![tx1, tx2].local());
assert_eq!(result, vec![Ok(()), Err(transaction::Error::LimitReached)]);
assert_eq!(txq.status().status.transaction_count, 1);
// then
assert_eq!(txq.next_nonce(TestClient::new(), &sender), Some(nonce + 1.into()));
// when
let tx1 = Tx::gas_price(2).signed();
let tx2 = Tx::gas_price(2).signed();
let tx3 = Tx::gas_price(1).signed();
let tx4 = Tx::gas_price(3).signed();
let res = txq.import(TestClient::new(), vec![tx1, tx2].local());
let res2 = txq.import(TestClient::new(), vec![tx3, tx4].local());
// then
assert_eq!(res, vec![Ok(()), Ok(())]);
assert_eq!(res2, vec![Err(transaction::Error::LimitReached), Ok(())]);
assert_eq!(txq.status().status.transaction_count, 3);
// The first inserted transaction was dropped because of the limit
assert_eq!(txq.next_nonce(TestClient::new(), &sender), None);
}
#[test]
fn should_handle_same_transaction_imported_twice_with_different_state_nonces() {
// given
let txq = new_queue();
let (tx, tx2) = Tx::default().signed_replacement();
let hash = tx2.hash();
let client = TestClient::new().with_nonce(122);
// First insert one transaction to future
let res = txq.import(client.clone(), vec![tx].local());
assert_eq!(res, vec![Ok(())]);
// next_nonce === None -> transaction is in future
assert_eq!(txq.next_nonce(client.clone(), &tx2.sender()), None);
// now import second transaction to current
let res = txq.import(TestClient::new(), vec![tx2.local()]);
// and then there should be only one transaction in current (the one with higher gas_price)
assert_eq!(res, vec![Ok(())]);
assert_eq!(txq.status().status.transaction_count, 1);
let top = txq.pending(TestClient::new(), 0, 0, None);
assert_eq!(top[0].hash, hash);
}
#[test]
fn should_move_all_transactions_from_future() {
// given
let txq = new_queue();
let txs = Tx::default().signed_pair();
let (hash, hash2) = txs.hash();
let (tx, tx2) = txs;
let client = TestClient::new().with_nonce(122);
// First insert one transaction to future
let res = txq.import(client.clone(), vec![tx.local()]);
assert_eq!(res, vec![Ok(())]);
// next_nonce === None -> transaction is in future
assert_eq!(txq.next_nonce(client.clone(), &tx2.sender()), None);
// now import second transaction to current
let res = txq.import(client.clone(), vec![tx2.local()]);
// then
assert_eq!(res, vec![Ok(())]);
assert_eq!(txq.status().status.transaction_count, 2);
let top = txq.pending(TestClient::new(), 0, 0, None);
assert_eq!(top[0].hash, hash);
assert_eq!(top[1].hash, hash2);
}
#[test]
fn should_drop_transactions_from_senders_without_balance() {
// given
let txq = new_queue();
let tx = Tx::default().signed();
let client = TestClient::new().with_balance(1);
// when
let res = txq.import(client, vec![tx.local()]);
// then
assert_eq!(res, vec![Err(transaction::Error::InsufficientBalance {
balance: U256::from(1),
cost: U256::from(21_100),
})]);
assert_eq!(txq.status().status.transaction_count, 0);
}
#[test]
fn should_not_import_transaction_below_min_gas_price_threshold_if_external() {
// given
let txq = new_queue();
let tx = Tx::default();
txq.set_verifier_options(verifier::Options {
minimal_gas_price: 3.into(),
..Default::default()
});
// when
let res = txq.import(TestClient::new(), vec![tx.signed().unverified()]);
// then
assert_eq!(res, vec![Err(transaction::Error::InsufficientGasPrice {
minimal: U256::from(3),
got: U256::from(1),
})]);
assert_eq!(txq.status().status.transaction_count, 0);
}
#[test]
fn should_import_transaction_below_min_gas_price_threshold_if_local() {
// given
let txq = new_queue();
let tx = Tx::default();
txq.set_verifier_options(verifier::Options {
minimal_gas_price: 3.into(),
..Default::default()
});
// when
let res = txq.import(TestClient::new(), vec![tx.signed().local()]);
// then
assert_eq!(res, vec![Ok(())]);
assert_eq!(txq.status().status.transaction_count, 1);
}
#[test]
fn should_import_txs_from_same_sender() {
// given
let txq = new_queue();
let txs = Tx::default().signed_pair();
let (hash, hash2) = txs.hash();
// when
txq.import(TestClient::new(), txs.local().into_vec());
// then
let top = txq.pending(TestClient::new(), 0 ,0, None);
assert_eq!(top[0].hash, hash);
assert_eq!(top[1].hash, hash2);
assert_eq!(top.len(), 2);
}
#[test]
fn should_prioritize_local_transactions_within_same_nonce_height() {
// given
let txq = new_queue();
let tx = Tx::default().signed();
// the second one has same nonce but higher `gas_price`
let tx2 = Tx::gas_price(2).signed();
let (hash, hash2) = (tx.hash(), tx2.hash());
let client = TestClient::new().with_local(&tx.sender());
// when
// first insert the one with higher gas price
let res = txq.import(client.clone(), vec![tx.local(), tx2.unverified()]);
assert_eq!(res, vec![Ok(()), Ok(())]);
// then
let top = txq.pending(client, 0, 0, None);
assert_eq!(top[0].hash, hash); // local should be first
assert_eq!(top[1].hash, hash2);
assert_eq!(top.len(), 2);
}
#[test]
fn should_prioritize_reimported_transactions_within_same_nonce_height() {
// given
let txq = new_queue();
let tx = Tx::default().signed();
// the second one has same nonce but higher `gas_price`
let tx2 = Tx::gas_price(2).signed();
let (hash, hash2) = (tx.hash(), tx2.hash());
// when
// first insert local one with higher gas price
// then the one with lower gas price, but from retracted block
let res = txq.import(TestClient::new(), vec![tx2.unverified(), tx.retracted()]);
assert_eq!(res, vec![Ok(()), Ok(())]);
// then
let top = txq.pending(TestClient::new(), 0, 0, None);
assert_eq!(top[0].hash, hash); // retracted should be first
assert_eq!(top[1].hash, hash2);
assert_eq!(top.len(), 2);
}
#[test]
fn should_not_prioritize_local_transactions_with_different_nonce_height() {
// given
let txq = new_queue();
let txs = Tx::default().signed_pair();
let (hash, hash2) = txs.hash();
let (tx, tx2) = txs;
// when
let res = txq.import(TestClient::new(), vec![tx.unverified(), tx2.local()]);
assert_eq!(res, vec![Ok(()), Ok(())]);
// then
let top = txq.pending(TestClient::new(), 0, 0, None);
assert_eq!(top[0].hash, hash);
assert_eq!(top[1].hash, hash2);
assert_eq!(top.len(), 2);
}
#[test]
fn should_put_transaction_to_futures_if_gap_detected() {
// given
let txq = new_queue();
let (tx, _, tx2) = Tx::default().signed_triple();
let hash = tx.hash();
// when
let res = txq.import(TestClient::new(), vec![tx, tx2].local());
// then
assert_eq!(res, vec![Ok(()), Ok(())]);
let top = txq.pending(TestClient::new(), 0, 0, None);
assert_eq!(top.len(), 1);
assert_eq!(top[0].hash, hash);
}
#[test]
fn should_handle_min_block() {
// given
let txq = new_queue();
let (tx, tx2) = Tx::default().signed_pair();
// when
let res = txq.import(TestClient::new(), vec![
verifier::Transaction::Local(PendingTransaction::new(tx, transaction::Condition::Number(1).into())),
tx2.local()
]);
assert_eq!(res, vec![Ok(()), Ok(())]);
// then
let top = txq.pending(TestClient::new(), 0, 0, None);
assert_eq!(top.len(), 0);
let top = txq.pending(TestClient::new(), 1, 0, None);
assert_eq!(top.len(), 2);
}
#[test]
fn should_correctly_update_futures_when_removing() {
// given
let txq = new_queue();
let txs = Tx::default().signed_pair();
let res = txq.import(TestClient::new().with_nonce(121), txs.local().into_vec());
assert_eq!(res, vec![Ok(()), Ok(())]);
assert_eq!(txq.status().status.transaction_count, 2);
// when
txq.cull(TestClient::new().with_nonce(125));
// should remove both transactions since they are stalled
// then
assert_eq!(txq.status().status.transaction_count, 0);
}
#[test]
fn should_move_transactions_if_gap_filled() {
// given
let txq = new_queue();
let (tx, tx1, tx2) = Tx::default().signed_triple();
let res = txq.import(TestClient::new(), vec![tx, tx2].local());
assert_eq!(res, vec![Ok(()), Ok(())]);
assert_eq!(txq.status().status.transaction_count, 2);
assert_eq!(txq.pending(TestClient::new(), 0, 0, None).len(), 1);
// when
let res = txq.import(TestClient::new(), vec![tx1.local()]);
assert_eq!(res, vec![Ok(())]);
// then
assert_eq!(txq.status().status.transaction_count, 3);
assert_eq!(txq.pending(TestClient::new(), 0, 0, None).len(), 3);
}
#[test]
fn should_remove_transaction() {
// given
let txq = new_queue();
let (tx, _, tx2) = Tx::default().signed_triple();
let res = txq.import(TestClient::default(), vec![tx, tx2].local());
assert_eq!(res, vec![Ok(()), Ok(())]);
assert_eq!(txq.status().status.transaction_count, 2);
assert_eq!(txq.pending(TestClient::new(), 0, 0, None).len(), 1);
// when
txq.cull(TestClient::new().with_nonce(124));
assert_eq!(txq.status().status.transaction_count, 1);
assert_eq!(txq.pending(TestClient::new().with_nonce(125), 0, 0, None).len(), 1);
txq.cull(TestClient::new().with_nonce(126));
// then
assert_eq!(txq.status().status.transaction_count, 0);
}
#[test]
fn should_move_transactions_to_future_if_gap_introduced() {
// given
let txq = new_queue();
let (tx, tx2) = Tx::default().signed_pair();
let hash = tx.hash();
let tx3 = Tx::default().signed();
let res = txq.import(TestClient::new(), vec![tx3, tx2].local());
assert_eq!(res, vec![Ok(()), Ok(())]);
assert_eq!(txq.status().status.transaction_count, 2);
assert_eq!(txq.pending(TestClient::new(), 0, 0, None).len(), 1);
let res = txq.import(TestClient::new(), vec![tx].local());
assert_eq!(res, vec![Ok(())]);
assert_eq!(txq.status().status.transaction_count, 3);
assert_eq!(txq.pending(TestClient::new(), 0, 0, None).len(), 3);
// when
txq.remove(vec![&hash], true);
// then
assert_eq!(txq.status().status.transaction_count, 2);
assert_eq!(txq.pending(TestClient::new(), 0, 0, None).len(), 1);
}
#[test]
fn should_clear_queue() {
// given
let txq = new_queue();
let txs = Tx::default().signed_pair();
// add
txq.import(TestClient::new(), txs.local().into_vec());
assert_eq!(txq.status().status.transaction_count, 2);
// when
txq.clear();
// then
assert_eq!(txq.status().status.transaction_count, 0);
}
#[test]
fn should_prefer_current_transactions_when_hitting_the_limit() {
// given
let txq = TransactionQueue::new(
txpool::Options {
max_count: 1,
max_per_sender: 2,
max_mem_usage: 50
},
verifier::Options {
minimal_gas_price: 1.into(),
block_gas_limit: 1_000_000.into(),
tx_gas_limit: 1_000_000.into(),
},
PrioritizationStrategy::GasPriceOnly,
);
let (tx, tx2) = Tx::default().signed_pair();
let hash = tx.hash();
let sender = tx.sender();
let res = txq.import(TestClient::new(), vec![tx2.unverified()]);
assert_eq!(res, vec![Ok(())]);
assert_eq!(txq.status().status.transaction_count, 1);
// when
let res = txq.import(TestClient::new(), vec![tx.unverified()]);
// then
assert_eq!(res, vec![Ok(())]);
assert_eq!(txq.status().status.transaction_count, 1);
let top = txq.pending(TestClient::new(), 0, 0, None);
assert_eq!(top.len(), 1);
assert_eq!(top[0].hash, hash);
assert_eq!(txq.next_nonce(TestClient::new(), &sender), Some(124.into()));
}
#[test]
fn should_drop_transactions_with_old_nonces() {
let txq = new_queue();
let tx = Tx::default().signed();
// when
let res = txq.import(TestClient::new().with_nonce(125), vec![tx.unverified()]);
// then
assert_eq!(res, vec![Err(transaction::Error::Old)]);
assert_eq!(txq.status().status.transaction_count, 0);
}
#[test]
fn should_not_insert_same_transaction_twice() {
// given
let txq = new_queue();
let (_tx1, tx2) = Tx::default().signed_pair();
let res = txq.import(TestClient::new(), vec![tx2.clone().local()]);
assert_eq!(res, vec![Ok(())]);
assert_eq!(txq.status().status.transaction_count, 1);
// when
let res = txq.import(TestClient::new(), vec![tx2.local()]);
// then
assert_eq!(res, vec![Err(transaction::Error::AlreadyImported)]);
assert_eq!(txq.status().status.transaction_count, 1);
}
#[test]
fn should_accept_same_transaction_twice_if_removed() {
// given
let txq = new_queue();
let txs = Tx::default().signed_pair();
let (tx1, _) = txs.clone();
let (hash, _) = txs.hash();
let res = txq.import(TestClient::new(), txs.local().into_vec());
assert_eq!(res, vec![Ok(()), Ok(())]);
assert_eq!(txq.status().status.transaction_count, 2);
assert_eq!(txq.pending(TestClient::new(), 0, 0, None).len(), 2);
// when
txq.remove(vec![&hash], true);
assert_eq!(txq.status().status.transaction_count, 1);
assert_eq!(txq.pending(TestClient::new(), 0, 0, None).len(), 0);
let res = txq.import(TestClient::new(), vec![tx1].local());
assert_eq!(res, vec![Ok(())]);
// then
assert_eq!(txq.status().status.transaction_count, 2);
assert_eq!(txq.pending(TestClient::new(), 0, 0, None).len(), 2);
}
#[test]
fn should_not_replace_same_transaction_if_the_fee_is_less_than_minimal_bump() {
// given
let txq = new_queue();
let (tx, tx2) = Tx::gas_price(20).signed_replacement();
let (tx3, tx4) = Tx::gas_price(1).signed_replacement();
let client = TestClient::new().with_balance(1_000_000);
// when
let res = txq.import(client.clone(), vec![tx, tx3].local());
assert_eq!(res, vec![Ok(()), Ok(())]);
let res = txq.import(client.clone(), vec![tx2, tx4].local());
// then
assert_eq!(res, vec![Err(transaction::Error::TooCheapToReplace), Ok(())]);
assert_eq!(txq.status().status.transaction_count, 2);
assert_eq!(txq.pending(client.clone(), 0, 0, None)[0].signed().gas_price, U256::from(20));
assert_eq!(txq.pending(client.clone(), 0, 0, None)[1].signed().gas_price, U256::from(2));
}
#[test]
fn should_return_none_when_transaction_from_given_address_does_not_exist() {
// given
let txq = new_queue();
// then
assert_eq!(txq.next_nonce(TestClient::new(), &Default::default()), None);
}
#[test]
fn should_return_correct_nonce_when_transactions_from_given_address_exist() {
// given
let txq = new_queue();
let tx = Tx::default().signed();
let from = tx.sender();
let nonce = tx.nonce;
// when
txq.import(TestClient::new(), vec![tx.local()]);
// then
assert_eq!(txq.next_nonce(TestClient::new(), &from), Some(nonce + 1.into()));
}
#[test]
fn should_return_valid_last_nonce_after_cull() {
// given
let txq = new_queue();
let (tx1, _, tx2) = Tx::default().signed_triple();
let sender = tx1.sender();
// when
// Second should go to future
let res = txq.import(TestClient::new(), vec![tx1, tx2].local());
assert_eq!(res, vec![Ok(()), Ok(())]);
// Now block is imported
let client = TestClient::new().with_nonce(124);
txq.cull(client.clone());
// tx2 should not be promoted to current
assert_eq!(txq.pending(client.clone(), 0, 0, None).len(), 0);
// then
assert_eq!(txq.next_nonce(client.clone(), &sender), None);
assert_eq!(txq.next_nonce(client.with_nonce(125), &sender), Some(126.into()));
}
#[test]
fn should_return_true_if_there_is_local_transaction_pending() {
// given
let txq = new_queue();
let (tx1, tx2) = Tx::default().signed_pair();
assert_eq!(txq.has_local_pending_transactions(), false);
let client = TestClient::new().with_local(&tx1.sender());
// when
let res = txq.import(client.clone(), vec![tx1.unverified(), tx2.local()]);
assert_eq!(res, vec![Ok(()), Ok(())]);
// then
assert_eq!(txq.has_local_pending_transactions(), true);
}
#[test]
fn should_reject_transactions_below_base_gas() {
// given
let txq = new_queue();
let tx = Tx::default().signed();
// when
let res = txq.import(TestClient::new().with_gas_required(100_001), vec![tx].local());
// then
assert_eq!(res, vec![Err(transaction::Error::InsufficientGas {
minimal: 100_001.into(),
got: 21_000.into(),
})]);
}
#[test]
fn should_remove_out_of_date_transactions_occupying_queue() {
// given
let txq = TransactionQueue::new(
txpool::Options {
max_count: 105,
max_per_sender: 3,
max_mem_usage: 5_000_000,
},
verifier::Options {
minimal_gas_price: 10.into(),
..Default::default()
},
PrioritizationStrategy::GasPriceOnly,
);
// That transaction will keep occupying the queue.
let (_, tx) = Tx::default().signed_pair();
let res = txq.import(TestClient::new(), vec![tx.local()]);
assert_eq!(res, vec![Ok(())]);
// This should not clear the transaction (yet)
txq.cull(TestClient::new());
assert_eq!(txq.status().status.transaction_count, 1);
// Now insert at least 100 transactions to have the other one marked as future.
for _ in 0..34 {
let (tx1, tx2, tx3) = Tx::default().signed_triple();
txq.import(TestClient::new(), vec![tx1, tx2, tx3].local());
}
assert_eq!(txq.status().status.transaction_count, 103);
// when
txq.cull(TestClient::new());
// then
assert_eq!(txq.status().status.transaction_count, 102);
}
#[test]
fn should_accept_local_transactions_below_min_gas_price() {
// given
let txq = TransactionQueue::new(
txpool::Options {
max_count: 3,
max_per_sender: 3,
max_mem_usage: 50
},
verifier::Options {
minimal_gas_price: 10.into(),
..Default::default()
},
PrioritizationStrategy::GasPriceOnly,
);
let tx = Tx::gas_price(1).signed();
// when
let res = txq.import(TestClient::new(), vec![tx.local()]);
assert_eq!(res, vec![Ok(())]);
// then
assert_eq!(txq.pending(TestClient::new(), 0, 0, None).len(), 1);
}
#[test]
fn should_accept_local_service_transaction() {
// given
let txq = new_queue();
let tx = Tx::gas_price(0).signed();
// when
let res = txq.import(
TestClient::new()
.with_local(&tx.sender()),
vec![tx.local()]
);
assert_eq!(res, vec![Ok(())]);
// then
assert_eq!(txq.pending(TestClient::new(), 0, 0, None).len(), 1);
}
#[test]
fn should_not_accept_external_service_transaction_if_sender_not_certified() {
// given
let txq = new_queue();
let tx1 = Tx::gas_price(0).signed().unverified();
let tx2 = Tx::gas_price(0).signed().retracted();
let tx3 = Tx::gas_price(0).signed().unverified();
// when
let res = txq.import(TestClient::new(), vec![tx1, tx2]);
assert_eq!(res, vec![
Err(transaction::Error::InsufficientGasPrice {
minimal: 1.into(),
got: 0.into(),
}),
Err(transaction::Error::InsufficientGasPrice {
minimal: 1.into(),
got: 0.into(),
}),
]);
// then
let res = txq.import(TestClient::new().with_service_transaction(), vec![tx3]);
assert_eq!(res, vec![Ok(())]);
}
#[test]
fn should_not_return_transactions_over_nonce_cap() {
// given
let txq = new_queue();
let (tx1, tx2, tx3) = Tx::default().signed_triple();
let res = txq.import(
TestClient::new(),
vec![tx1, tx2, tx3].local()
);
assert_eq!(res, vec![Ok(()), Ok(()), Ok(())]);
// when
let all = txq.pending(TestClient::new(), 0, 0, None);
// This should invalidate the cache!
let limited = txq.pending(TestClient::new(), 0, 0, Some(123.into()));
// then
assert_eq!(all.len(), 3);
assert_eq!(limited.len(), 1);
}
#[test]
fn should_clear_cache_after_timeout_for_local() {
// given
let txq = new_queue();
let (tx, tx2) = Tx::default().signed_pair();
let res = txq.import(TestClient::new(), vec![
verifier::Transaction::Local(PendingTransaction::new(tx, transaction::Condition::Timestamp(1000).into())),
tx2.local()
]);
assert_eq!(res, vec![Ok(()), Ok(())]);
// This should populate cache and set timestamp to 1
// when
assert_eq!(txq.pending(TestClient::new(), 0, 1, None).len(), 0);
assert_eq!(txq.pending(TestClient::new(), 0, 1000, None).len(), 0);
// This should invalidate the cache and trigger transaction ready.
// then
assert_eq!(txq.pending(TestClient::new(), 0, 1002, None).len(), 2);
}

185
miner/src/pool/tests/tx.rs Normal file

@ -0,0 +1,185 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use ethereum_types::{U256, H256};
use ethkey::{Random, Generator};
use rustc_hex::FromHex;
use transaction::{self, Transaction, SignedTransaction, UnverifiedTransaction};
use pool::{verifier, VerifiedTransaction};
#[derive(Clone)]
pub struct Tx {
nonce: u64,
gas: u64,
gas_price: u64,
}
impl Default for Tx {
fn default() -> Self {
Tx {
nonce: 123,
gas: 21_000,
gas_price: 1,
}
}
}
impl Tx {
pub fn gas_price(gas_price: u64) -> Self {
Tx {
gas_price,
..Default::default()
}
}
pub fn signed(self) -> SignedTransaction {
let keypair = Random.generate().unwrap();
self.unsigned().sign(keypair.secret(), None)
}
pub fn signed_pair(self) -> (SignedTransaction, SignedTransaction) {
let (tx1, tx2, _) = self.signed_triple();
(tx1, tx2)
}
pub fn signed_triple(mut self) -> (SignedTransaction, SignedTransaction, SignedTransaction) {
let keypair = Random.generate().unwrap();
let tx1 = self.clone().unsigned().sign(keypair.secret(), None);
self.nonce += 1;
let tx2 = self.clone().unsigned().sign(keypair.secret(), None);
self.nonce += 1;
let tx3 = self.unsigned().sign(keypair.secret(), None);
(tx1, tx2, tx3)
}
pub fn signed_replacement(mut self) -> (SignedTransaction, SignedTransaction) {
let keypair = Random.generate().unwrap();
let tx1 = self.clone().unsigned().sign(keypair.secret(), None);
self.gas_price += 1;
let tx2 = self.unsigned().sign(keypair.secret(), None);
(tx1, tx2)
}
pub fn unsigned(self) -> Transaction {
Transaction {
action: transaction::Action::Create,
value: U256::from(100),
data: "3331600055".from_hex().unwrap(),
gas: self.gas.into(),
gas_price: self.gas_price.into(),
nonce: self.nonce.into()
}
}
}
pub trait TxExt: Sized {
type Out;
type Verified;
type Hash;
fn hash(&self) -> Self::Hash;
fn local(self) -> Self::Out;
fn retracted(self) -> Self::Out;
fn unverified(self) -> Self::Out;
fn verified(self) -> Self::Verified;
}
impl<A, B, O, V, H> TxExt for (A, B) where
A: TxExt<Out=O, Verified=V, Hash=H>,
B: TxExt<Out=O, Verified=V, Hash=H>,
{
type Out = (O, O);
type Verified = (V, V);
type Hash = (H, H);
fn hash(&self) -> Self::Hash { (self.0.hash(), self.1.hash()) }
fn local(self) -> Self::Out { (self.0.local(), self.1.local()) }
fn retracted(self) -> Self::Out { (self.0.retracted(), self.1.retracted()) }
fn unverified(self) -> Self::Out { (self.0.unverified(), self.1.unverified()) }
fn verified(self) -> Self::Verified { (self.0.verified(), self.1.verified()) }
}
impl TxExt for SignedTransaction {
type Out = verifier::Transaction;
type Verified = VerifiedTransaction;
type Hash = H256;
fn hash(&self) -> Self::Hash {
UnverifiedTransaction::hash(self)
}
fn local(self) -> Self::Out {
verifier::Transaction::Local(self.into())
}
fn retracted(self) -> Self::Out {
verifier::Transaction::Retracted(self.into())
}
fn unverified(self) -> Self::Out {
verifier::Transaction::Unverified(self.into())
}
fn verified(self) -> Self::Verified {
VerifiedTransaction::from_pending_block_transaction(self)
}
}
impl TxExt for Vec<SignedTransaction> {
type Out = Vec<verifier::Transaction>;
type Verified = Vec<VerifiedTransaction>;
type Hash = Vec<H256>;
fn hash(&self) -> Self::Hash {
self.iter().map(|tx| tx.hash()).collect()
}
fn local(self) -> Self::Out {
self.into_iter().map(Into::into).map(verifier::Transaction::Local).collect()
}
fn retracted(self) -> Self::Out {
self.into_iter().map(Into::into).map(verifier::Transaction::Retracted).collect()
}
fn unverified(self) -> Self::Out {
self.into_iter().map(Into::into).map(verifier::Transaction::Unverified).collect()
}
fn verified(self) -> Self::Verified {
self.into_iter().map(VerifiedTransaction::from_pending_block_transaction).collect()
}
}
pub trait PairExt {
type Type;
fn into_vec(self) -> Vec<Self::Type>;
}
impl<A> PairExt for (A, A) {
type Type = A;
fn into_vec(self) -> Vec<A> {
vec![self.0, self.1]
}
}
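Editorial note, not part of the diff: `Tx` builds raw transactions with a chosen nonce, gas and gas price, and `TxExt` wraps the signed result into the `verifier::Transaction` variants the queue expects. A minimal sketch of how the tests above combine them, assuming it sits next to the test module so `Tx`, `TxExt` and `pool::verifier` are in scope:

use pool::verifier;

fn build_test_transactions() {
    // A single local transaction with a non-default gas price.
    let _local: verifier::Transaction = Tx::gas_price(1).signed().local();

    // Three consecutive transactions from one sender, imported as locals.
    let (tx1, tx2, tx3) = Tx::default().signed_triple();
    let _batch: Vec<verifier::Transaction> = vec![tx1, tx2, tx3].local();

    // The same builder can also produce external (unverified) transactions.
    let _external: verifier::Transaction = Tx::default().signed().unverified();
}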

288
miner/src/pool/verifier.rs Normal file

@ -0,0 +1,288 @@
// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Transaction Verifier
//!
//! Responsible for verifying a transaction before importing to the pool.
//! Should make sure that the transaction is structurally valid.
//!
//! May have some overlap with `Readiness` since we don't want to keep around
//! stalled transactions.
use std::cmp;
use std::sync::Arc;
use std::sync::atomic::{self, AtomicUsize};
use ethereum_types::{U256, H256};
use transaction;
use txpool;
use super::client::{Client, TransactionType};
use super::VerifiedTransaction;
/// Verification options.
#[derive(Debug, Clone, PartialEq)]
pub struct Options {
/// Minimal allowed gas price.
pub minimal_gas_price: U256,
/// Current block gas limit.
pub block_gas_limit: U256,
/// Maximal gas limit for a single transaction.
pub tx_gas_limit: U256,
}
#[cfg(test)]
impl Default for Options {
fn default() -> Self {
Options {
minimal_gas_price: 0.into(),
block_gas_limit: U256::max_value(),
tx_gas_limit: U256::max_value(),
}
}
}
/// Transaction to verify.
pub enum Transaction {
/// Fresh, never verified transaction.
///
/// We need to do full verification of such transactions
Unverified(transaction::UnverifiedTransaction),
/// Transaction from retracted block.
///
/// We could skip some parts of verification of such transactions
Retracted(transaction::UnverifiedTransaction),
/// Locally signed or retracted transaction.
///
/// We can skip consistency verifications and just verify readiness.
Local(transaction::PendingTransaction),
}
impl Transaction {
fn hash(&self) -> H256 {
match *self {
Transaction::Unverified(ref tx) => tx.hash(),
Transaction::Retracted(ref tx) => tx.hash(),
Transaction::Local(ref tx) => tx.hash(),
}
}
fn gas(&self) -> &U256 {
match *self {
Transaction::Unverified(ref tx) => &tx.gas,
Transaction::Retracted(ref tx) => &tx.gas,
Transaction::Local(ref tx) => &tx.gas,
}
}
fn gas_price(&self) -> &U256 {
match *self {
Transaction::Unverified(ref tx) => &tx.gas_price,
Transaction::Retracted(ref tx) => &tx.gas_price,
Transaction::Local(ref tx) => &tx.gas_price,
}
}
fn transaction(&self) -> &transaction::Transaction {
match *self {
Transaction::Unverified(ref tx) => &*tx,
Transaction::Retracted(ref tx) => &*tx,
Transaction::Local(ref tx) => &*tx,
}
}
fn is_local(&self) -> bool {
match *self {
Transaction::Local(..) => true,
_ => false,
}
}
fn is_retracted(&self) -> bool {
match *self {
Transaction::Retracted(..) => true,
_ => false,
}
}
}
/// Transaction verifier.
///
/// Verification can be run in parallel for all incoming transactions.
#[derive(Debug)]
pub struct Verifier<C> {
client: C,
options: Options,
id: Arc<AtomicUsize>,
}
impl<C> Verifier<C> {
/// Creates a new transaction verifier with the specified options.
pub fn new(client: C, options: Options, id: Arc<AtomicUsize>) -> Self {
Verifier {
client,
options,
id,
}
}
}
impl<C: Client> txpool::Verifier<Transaction> for Verifier<C> {
type Error = transaction::Error;
type VerifiedTransaction = VerifiedTransaction;
fn verify_transaction(&self, tx: Transaction) -> Result<Self::VerifiedTransaction, Self::Error> {
// The checks here should be ordered by cost/complexity.
// Cheap checks should be done as early as possible to discard unneeded transactions early.
let hash = tx.hash();
if self.client.transaction_already_included(&hash) {
trace!(target: "txqueue", "[{:?}] Rejected tx already in the blockchain", hash);
bail!(transaction::Error::AlreadyImported)
}
let gas_limit = cmp::min(self.options.tx_gas_limit, self.options.block_gas_limit);
if tx.gas() > &gas_limit {
debug!(
target: "txqueue",
"[{:?}] Dropping transaction above gas limit: {} > min({}, {})",
hash,
tx.gas(),
self.options.block_gas_limit,
self.options.tx_gas_limit,
);
bail!(transaction::Error::GasLimitExceeded {
limit: gas_limit,
got: *tx.gas(),
});
}
let minimal_gas = self.client.required_gas(tx.transaction());
if tx.gas() < &minimal_gas {
trace!(target: "txqueue",
"[{:?}] Dropping transaction with insufficient gas: {} < {}",
hash,
tx.gas(),
minimal_gas,
);
bail!(transaction::Error::InsufficientGas {
minimal: minimal_gas,
got: *tx.gas(),
})
}
let is_own = tx.is_local();
// Quick exit for non-service transactions
if tx.gas_price() < &self.options.minimal_gas_price
&& !tx.gas_price().is_zero()
&& !is_own
{
trace!(
target: "txqueue",
"[{:?}] Rejected tx below minimal gas price threshold: {} < {}",
hash,
tx.gas_price(),
self.options.minimal_gas_price,
);
bail!(transaction::Error::InsufficientGasPrice {
minimal: self.options.minimal_gas_price,
got: *tx.gas_price(),
});
}
// Some more heavy checks below.
// Actually recover the sender and verify the transaction.
let is_retracted = tx.is_retracted();
let transaction = match tx {
Transaction::Retracted(tx) | Transaction::Unverified(tx) => match self.client.verify_transaction(tx) {
Ok(signed) => signed.into(),
Err(err) => {
debug!(target: "txqueue", "[{:?}] Rejected tx {:?}", hash, err);
bail!(err)
},
},
Transaction::Local(tx) => tx,
};
let sender = transaction.sender();
let account_details = self.client.account_details(&sender);
if transaction.gas_price < self.options.minimal_gas_price {
let transaction_type = self.client.transaction_type(&transaction);
if let TransactionType::Service = transaction_type {
debug!(target: "txqueue", "Service tx {:?} below minimal gas price accepted", hash);
} else if is_own || account_details.is_local {
info!(target: "own_tx", "Local tx {:?} below minimal gas price accepted", hash);
} else {
trace!(
target: "txqueue",
"[{:?}] Rejected tx below minimal gas price threshold: {} < {}",
hash,
transaction.gas_price,
self.options.minimal_gas_price,
);
bail!(transaction::Error::InsufficientGasPrice {
minimal: self.options.minimal_gas_price,
got: transaction.gas_price,
});
}
}
let cost = transaction.value + transaction.gas_price * transaction.gas;
if account_details.balance < cost {
debug!(
target: "txqueue",
"[{:?}] Rejected tx with not enough balance: {} < {}",
hash,
account_details.balance,
cost,
);
bail!(transaction::Error::InsufficientBalance {
cost: cost,
balance: account_details.balance,
});
}
if transaction.nonce < account_details.nonce {
debug!(
target: "txqueue",
"[{:?}] Rejected tx with old nonce ({} < {})",
hash,
transaction.nonce,
account_details.nonce,
);
bail!(transaction::Error::Old);
}
let priority = match (is_own || account_details.is_local, is_retracted) {
(true, _) => super::Priority::Local,
(false, false) => super::Priority::Regular,
(false, true) => super::Priority::Retracted,
};
Ok(VerifiedTransaction {
transaction,
priority,
hash,
sender,
insertion_id: self.id.fetch_add(1, atomic::Ordering::AcqRel),
})
}
}
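A hedged sketch of how this verifier is driven (editorial note, not part of the diff): `Verifier` wraps any `pool::client::Client` implementation together with `Options` and a shared insertion-id counter, and the pool calls it through the `txpool::Verifier` trait. Assuming the `TestClient` and `Tx` helpers from the test modules above are in scope:

use std::sync::Arc;
use std::sync::atomic::AtomicUsize;

use txpool::Verifier as TxPoolVerifier; // the trait must be in scope to call verify_transaction
use pool::verifier;

fn verify_one_local_transaction() {
    let tx_verifier = verifier::Verifier::new(
        TestClient::new(),
        verifier::Options::default(), // the test-only Default shown above
        Arc::new(AtomicUsize::new(0)),
    );

    // Local transactions skip sender recovery but still pass through the
    // gas, gas-price, balance and nonce checks implemented above.
    assert!(tx_verifier.verify_transaction(Tx::default().signed().local()).is_ok());
}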

File diff suppressed because it is too large.


@ -390,10 +390,12 @@ fn execute_import(cmd: ImportBlockchain) -> Result<(), String> {
&snapshot_path, &snapshot_path,
restoration_db_handler, restoration_db_handler,
&cmd.dirs.ipc_path(), &cmd.dirs.ipc_path(),
Arc::new(Miner::with_spec(&spec)), // TODO [ToDr] don't use test miner here
// (actually don't require miner at all)
Arc::new(Miner::new_for_tests(&spec, None)),
Arc::new(AccountProvider::transient_provider()), Arc::new(AccountProvider::transient_provider()),
Box::new(ethcore_private_tx::NoopEncryptor), Box::new(ethcore_private_tx::NoopEncryptor),
Default::default() Default::default(),
).map_err(|e| format!("Client service error: {:?}", e))?; ).map_err(|e| format!("Client service error: {:?}", e))?;
// free up the spec in memory. // free up the spec in memory.
@ -580,10 +582,12 @@ fn start_client(
&snapshot_path, &snapshot_path,
restoration_db_handler, restoration_db_handler,
&dirs.ipc_path(), &dirs.ipc_path(),
Arc::new(Miner::with_spec(&spec)), // It's fine to use test version here,
// since we don't care about miner parameters at all
Arc::new(Miner::new_for_tests(&spec, None)),
Arc::new(AccountProvider::transient_provider()), Arc::new(AccountProvider::transient_provider()),
Box::new(ethcore_private_tx::NoopEncryptor), Box::new(ethcore_private_tx::NoopEncryptor),
Default::default() Default::default(),
).map_err(|e| format!("Client service error: {:?}", e))?; ).map_err(|e| format!("Client service error: {:?}", e))?;
drop(spec); drop(spec);


@ -721,29 +721,25 @@ usage! {
"--gas-cap=[GAS]", "--gas-cap=[GAS]",
"A cap on how large we will raise the gas limit per block due to transaction volume.", "A cap on how large we will raise the gas limit per block due to transaction volume.",
ARG arg_tx_queue_mem_limit: (u32) = 2u32, or |c: &Config| c.mining.as_ref()?.tx_queue_mem_limit.clone(), ARG arg_tx_queue_mem_limit: (u32) = 4u32, or |c: &Config| c.mining.as_ref()?.tx_queue_mem_limit.clone(),
"--tx-queue-mem-limit=[MB]", "--tx-queue-mem-limit=[MB]",
"Maximum amount of memory that can be used by the transaction queue. Setting this parameter to 0 disables limiting.", "Maximum amount of memory that can be used by the transaction queue. Setting this parameter to 0 disables limiting.",
ARG arg_tx_queue_size: (usize) = 8192usize, or |c: &Config| c.mining.as_ref()?.tx_queue_size.clone(), ARG arg_tx_queue_size: (usize) = 8_192usize, or |c: &Config| c.mining.as_ref()?.tx_queue_size.clone(),
"--tx-queue-size=[LIMIT]", "--tx-queue-size=[LIMIT]",
"Maximum amount of transactions in the queue (waiting to be included in next block).", "Maximum amount of transactions in the queue (waiting to be included in next block).",
ARG arg_tx_queue_per_sender: (Option<usize>) = None, or |c: &Config| c.mining.as_ref()?.tx_queue_per_sender.clone(),
"--tx-queue-per-sender=[LIMIT]",
"Maximum number of transactions per sender in the queue. By default it's 1% of the entire queue, but not less than 16.",
ARG arg_tx_queue_gas: (String) = "off", or |c: &Config| c.mining.as_ref()?.tx_queue_gas.clone(), ARG arg_tx_queue_gas: (String) = "off", or |c: &Config| c.mining.as_ref()?.tx_queue_gas.clone(),
"--tx-queue-gas=[LIMIT]", "--tx-queue-gas=[LIMIT]",
"Maximum amount of total gas for external transactions in the queue. LIMIT can be either an amount of gas or 'auto' or 'off'. 'auto' sets the limit to be 20x the current block gas limit.", "Maximum amount of total gas for external transactions in the queue. LIMIT can be either an amount of gas or 'auto' or 'off'. 'auto' sets the limit to be 20x the current block gas limit.",
ARG arg_tx_queue_strategy: (String) = "gas_price", or |c: &Config| c.mining.as_ref()?.tx_queue_strategy.clone(), ARG arg_tx_queue_strategy: (String) = "gas_price", or |c: &Config| c.mining.as_ref()?.tx_queue_strategy.clone(),
"--tx-queue-strategy=[S]", "--tx-queue-strategy=[S]",
"Prioritization strategy used to order transactions in the queue. S may be: gas - Prioritize txs with low gas limit; gas_price - Prioritize txs with high gas price; gas_factor - Prioritize txs using gas price and gas limit ratio.", "Prioritization strategy used to order transactions in the queue. S may be: gas_price - Prioritize txs with high gas price",
ARG arg_tx_queue_ban_count: (u16) = 1u16, or |c: &Config| c.mining.as_ref()?.tx_queue_ban_count.clone(),
"--tx-queue-ban-count=[C]",
"Number of times maximal time for execution (--tx-time-limit) can be exceeded before banning sender/recipient/code.",
ARG arg_tx_queue_ban_time: (u16) = 180u16, or |c: &Config| c.mining.as_ref()?.tx_queue_ban_time.clone(),
"--tx-queue-ban-time=[SEC]",
"Banning time (in seconds) for offenders of specified execution time limit. Also number of offending actions have to reach the threshold within that time.",
ARG arg_stratum_interface: (String) = "local", or |c: &Config| c.stratum.as_ref()?.interface.clone(), ARG arg_stratum_interface: (String) = "local", or |c: &Config| c.stratum.as_ref()?.interface.clone(),
"--stratum-interface=[IP]", "--stratum-interface=[IP]",
@ -775,7 +771,7 @@ usage! {
ARG arg_tx_time_limit: (Option<u64>) = None, or |c: &Config| c.mining.as_ref()?.tx_time_limit.clone(), ARG arg_tx_time_limit: (Option<u64>) = None, or |c: &Config| c.mining.as_ref()?.tx_time_limit.clone(),
"--tx-time-limit=[MS]", "--tx-time-limit=[MS]",
"Maximal time for processing single transaction. If enabled senders/recipients/code of transactions offending the limit will be banned from being included in transaction queue for 180 seconds.", "Maximal time for processing single transaction. If enabled senders of transactions offending the limit will get other transactions penalized.",
ARG arg_extra_data: (Option<String>) = None, or |c: &Config| c.mining.as_ref()?.extra_data.clone(), ARG arg_extra_data: (Option<String>) = None, or |c: &Config| c.mining.as_ref()?.extra_data.clone(),
"--extra-data=[STRING]", "--extra-data=[STRING]",
@ -1028,6 +1024,13 @@ usage! {
"--cache=[MB]", "--cache=[MB]",
"Equivalent to --cache-size MB.", "Equivalent to --cache-size MB.",
ARG arg_tx_queue_ban_count: (u16) = 1u16, or |c: &Config| c.mining.as_ref()?.tx_queue_ban_count.clone(),
"--tx-queue-ban-count=[C]",
"Not supported.",
ARG arg_tx_queue_ban_time: (u16) = 180u16, or |c: &Config| c.mining.as_ref()?.tx_queue_ban_time.clone(),
"--tx-queue-ban-time=[SEC]",
"Not supported.",
} }
} }
@ -1232,6 +1235,7 @@ struct Mining {
gas_cap: Option<String>, gas_cap: Option<String>,
extra_data: Option<String>, extra_data: Option<String>,
tx_queue_size: Option<usize>, tx_queue_size: Option<usize>,
tx_queue_per_sender: Option<usize>,
tx_queue_mem_limit: Option<u32>, tx_queue_mem_limit: Option<u32>,
tx_queue_gas: Option<String>, tx_queue_gas: Option<String>,
tx_queue_strategy: Option<String>, tx_queue_strategy: Option<String>,
@ -1654,7 +1658,8 @@ mod tests {
arg_gas_cap: "6283184".into(), arg_gas_cap: "6283184".into(),
arg_extra_data: Some("Parity".into()), arg_extra_data: Some("Parity".into()),
arg_tx_queue_size: 8192usize, arg_tx_queue_size: 8192usize,
arg_tx_queue_mem_limit: 2u32, arg_tx_queue_per_sender: None,
arg_tx_queue_mem_limit: 4u32,
arg_tx_queue_gas: "off".into(), arg_tx_queue_gas: "off".into(),
arg_tx_queue_strategy: "gas_factor".into(), arg_tx_queue_strategy: "gas_factor".into(),
arg_tx_queue_ban_count: 1u16, arg_tx_queue_ban_count: 1u16,
@ -1911,6 +1916,7 @@ mod tests {
gas_floor_target: None, gas_floor_target: None,
gas_cap: None, gas_cap: None,
tx_queue_size: Some(8192), tx_queue_size: Some(8192),
tx_queue_per_sender: None,
tx_queue_mem_limit: None, tx_queue_mem_limit: None,
tx_queue_gas: Some("off".into()), tx_queue_gas: Some("off".into()),
tx_queue_strategy: None, tx_queue_strategy: None,


@ -19,12 +19,13 @@ force_sealing = true
reseal_on_txs = "all" reseal_on_txs = "all"
# New pending block will be created only once per 4000 milliseconds. # New pending block will be created only once per 4000 milliseconds.
reseal_min_period = 4000 reseal_min_period = 4000
# Parity will keep/relay at most 2048 transactions in queue. # Parity will keep/relay at most 8192 transactions in queue.
tx_queue_size = 2048 tx_queue_size = 8192
tx_queue_per_sender = 128
[footprint] [footprint]
# If defined will never use more then 256MB for all caches. (Overrides other cache settings). # If defined will never use more then 1024MB for all caches. (Overrides other cache settings).
cache_size = 256 cache_size = 1024
[misc] [misc]
# Logging pattern (`<module>=<level>`, e.g. `own_tx=trace`). # Logging pattern (`<module>=<level>`, e.g. `own_tx=trace`).


@ -14,12 +14,12 @@
// You should have received a copy of the GNU General Public License // You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>. // along with Parity. If not, see <http://www.gnu.org/licenses/>.
use std::cmp::{max, min};
use std::time::Duration; use std::time::Duration;
use std::io::Read; use std::io::Read;
use std::net::SocketAddr; use std::net::SocketAddr;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::collections::BTreeMap; use std::collections::BTreeMap;
use std::cmp;
use std::str::FromStr; use std::str::FromStr;
use cli::{Args, ArgsError}; use cli::{Args, ArgsError};
use hash::keccak; use hash::keccak;
@ -30,15 +30,15 @@ use ansi_term::Colour;
use sync::{NetworkConfiguration, validate_node_url, self}; use sync::{NetworkConfiguration, validate_node_url, self};
use ethcore::ethstore::ethkey::{Secret, Public}; use ethcore::ethstore::ethkey::{Secret, Public};
use ethcore::client::{VMType}; use ethcore::client::{VMType};
use ethcore::miner::{MinerOptions, Banning, StratumOptions}; use ethcore::miner::{stratum, MinerOptions};
use ethcore::verification::queue::VerifierSettings; use ethcore::verification::queue::VerifierSettings;
use miner::pool;
use rpc::{IpcConfiguration, HttpConfiguration, WsConfiguration, UiConfiguration}; use rpc::{IpcConfiguration, HttpConfiguration, WsConfiguration, UiConfiguration};
use rpc_apis::ApiSet; use rpc_apis::ApiSet;
use parity_rpc::NetworkSettings; use parity_rpc::NetworkSettings;
use cache::CacheConfig; use cache::CacheConfig;
use helpers::{to_duration, to_mode, to_block_id, to_u256, to_pending_set, to_price, geth_ipc_path, parity_ipc_path, use helpers::{to_duration, to_mode, to_block_id, to_u256, to_pending_set, to_price, geth_ipc_path, parity_ipc_path, to_bootnodes, to_addresses, to_address, to_queue_strategy, to_queue_penalization, passwords_from_files};
to_bootnodes, to_addresses, to_address, to_gas_limit, to_queue_strategy, passwords_from_files};
use dir::helpers::{replace_home, replace_home_and_local}; use dir::helpers::{replace_home, replace_home_and_local};
use params::{ResealPolicy, AccountsConfig, GasPricerConfig, MinerExtras, SpecType}; use params::{ResealPolicy, AccountsConfig, GasPricerConfig, MinerExtras, SpecType};
use ethcore_logger::Config as LogConfig; use ethcore_logger::Config as LogConfig;
@ -352,7 +352,6 @@ impl Configuration {
daemon: daemon, daemon: daemon,
logger_config: logger_config.clone(), logger_config: logger_config.clone(),
miner_options: self.miner_options()?, miner_options: self.miner_options()?,
work_notify: self.work_notify(),
gas_price_percentile: self.args.arg_gas_price_percentile, gas_price_percentile: self.args.arg_gas_price_percentile,
ntp_servers: self.ntp_servers(), ntp_servers: self.ntp_servers(),
ws_conf: ws_conf, ws_conf: ws_conf,
@ -411,12 +410,14 @@ impl Configuration {
} }
fn miner_extras(&self) -> Result<MinerExtras, String> { fn miner_extras(&self) -> Result<MinerExtras, String> {
let floor = to_u256(&self.args.arg_gas_floor_target)?;
let ceil = to_u256(&self.args.arg_gas_cap)?;
let extras = MinerExtras { let extras = MinerExtras {
author: self.author()?, author: self.author()?,
extra_data: self.extra_data()?, extra_data: self.extra_data()?,
gas_floor_target: to_u256(&self.args.arg_gas_floor_target)?, gas_range_target: (floor, ceil),
gas_ceil_target: to_u256(&self.args.arg_gas_cap)?,
engine_signer: self.engine_signer()?, engine_signer: self.engine_signer()?,
work_notify: self.work_notify(),
}; };
Ok(extras) Ok(extras)
@ -471,7 +472,7 @@ impl Configuration {
fn max_peers(&self) -> u32 { fn max_peers(&self) -> u32 {
self.args.arg_max_peers self.args.arg_max_peers
.or(max(self.args.arg_min_peers, Some(DEFAULT_MAX_PEERS))) .or(cmp::max(self.args.arg_min_peers, Some(DEFAULT_MAX_PEERS)))
.unwrap_or(DEFAULT_MAX_PEERS) as u32 .unwrap_or(DEFAULT_MAX_PEERS) as u32
} }
@ -484,7 +485,7 @@ impl Configuration {
fn min_peers(&self) -> u32 { fn min_peers(&self) -> u32 {
self.args.arg_min_peers self.args.arg_min_peers
.or(min(self.args.arg_max_peers, Some(DEFAULT_MIN_PEERS))) .or(cmp::min(self.args.arg_max_peers, Some(DEFAULT_MIN_PEERS)))
.unwrap_or(DEFAULT_MIN_PEERS) as u32 .unwrap_or(DEFAULT_MIN_PEERS) as u32
} }
@ -514,9 +515,9 @@ impl Configuration {
Ok(cfg) Ok(cfg)
} }
fn stratum_options(&self) -> Result<Option<StratumOptions>, String> { fn stratum_options(&self) -> Result<Option<stratum::Options>, String> {
if self.args.flag_stratum { if self.args.flag_stratum {
Ok(Some(StratumOptions { Ok(Some(stratum::Options {
io_path: self.directories().db, io_path: self.directories().db,
listen_addr: self.stratum_interface(), listen_addr: self.stratum_interface(),
port: self.args.arg_ports_shift + self.args.arg_stratum_port, port: self.args.arg_ports_shift + self.args.arg_stratum_port,
@ -538,34 +539,49 @@ impl Configuration {
reseal_on_external_tx: reseal.external, reseal_on_external_tx: reseal.external,
reseal_on_own_tx: reseal.own, reseal_on_own_tx: reseal.own,
reseal_on_uncle: self.args.flag_reseal_on_uncle, reseal_on_uncle: self.args.flag_reseal_on_uncle,
reseal_min_period: Duration::from_millis(self.args.arg_reseal_min_period),
reseal_max_period: Duration::from_millis(self.args.arg_reseal_max_period),
pending_set: to_pending_set(&self.args.arg_relay_set)?,
work_queue_size: self.args.arg_work_queue_size,
enable_resubmission: !self.args.flag_remove_solved,
infinite_pending_block: self.args.flag_infinite_pending_block,
tx_queue_penalization: to_queue_penalization(self.args.arg_tx_time_limit)?,
tx_queue_strategy: to_queue_strategy(&self.args.arg_tx_queue_strategy)?,
refuse_service_transactions: self.args.flag_refuse_service_transactions,
pool_limits: self.pool_limits()?,
pool_verification_options: self.pool_verification_options()?,
};
Ok(options)
}
fn pool_limits(&self) -> Result<pool::Options, String> {
let max_count = self.args.arg_tx_queue_size;
Ok(pool::Options {
max_count,
max_per_sender: self.args.arg_tx_queue_per_sender.unwrap_or_else(|| cmp::max(16, max_count / 100)),
max_mem_usage: if self.args.arg_tx_queue_mem_limit > 0 {
self.args.arg_tx_queue_mem_limit as usize * 1024 * 1024
} else {
usize::max_value()
},
})
}
fn pool_verification_options(&self) -> Result<pool::verifier::Options, String>{
Ok(pool::verifier::Options {
// NOTE min_gas_price and block_gas_limit will be overwritten right after start.
minimal_gas_price: U256::from(20_000_000) * 1_000u32,
block_gas_limit: U256::max_value(),
tx_gas_limit: match self.args.arg_tx_gas_limit { tx_gas_limit: match self.args.arg_tx_gas_limit {
Some(ref d) => to_u256(d)?, Some(ref d) => to_u256(d)?,
None => U256::max_value(), None => U256::max_value(),
}, },
tx_queue_size: self.args.arg_tx_queue_size, })
tx_queue_memory_limit: if self.args.arg_tx_queue_mem_limit > 0 {
Some(self.args.arg_tx_queue_mem_limit as usize * 1024 * 1024)
} else { None },
tx_queue_gas_limit: to_gas_limit(&self.args.arg_tx_queue_gas)?,
tx_queue_strategy: to_queue_strategy(&self.args.arg_tx_queue_strategy)?,
pending_set: to_pending_set(&self.args.arg_relay_set)?,
reseal_min_period: Duration::from_millis(self.args.arg_reseal_min_period),
reseal_max_period: Duration::from_millis(self.args.arg_reseal_max_period),
work_queue_size: self.args.arg_work_queue_size,
enable_resubmission: !self.args.flag_remove_solved,
tx_queue_banning: match self.args.arg_tx_time_limit {
Some(limit) => Banning::Enabled {
min_offends: self.args.arg_tx_queue_ban_count,
offend_threshold: Duration::from_millis(limit),
ban_duration: Duration::from_secs(self.args.arg_tx_queue_ban_time as u64),
},
None => Banning::Disabled,
},
refuse_service_transactions: self.args.flag_refuse_service_transactions,
infinite_pending_block: self.args.flag_infinite_pending_block,
};
Ok(options)
} }
fn ui_port(&self) -> u16 { fn ui_port(&self) -> u16 {
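As a worked example of the limits wired up above (editorial sketch, not part of the diff): with the default --tx-queue-size 8192 and --tx-queue-mem-limit 4 shown earlier, the per-sender fallback is 1% of the queue but never below 16, and the memory limit is converted from megabytes to bytes:

use std::cmp;

fn default_pool_limits() -> (usize, usize, usize) {
    let max_count = 8_192;                               // --tx-queue-size default
    let max_per_sender = cmp::max(16, max_count / 100);  // 1% of the queue => 81
    let max_mem_usage = 4 * 1024 * 1024;                 // 4 MB in bytes => 4_194_304
    (max_count, max_per_sender, max_mem_usage)
}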
@ -690,12 +706,7 @@ impl Configuration {
let usd_per_tx = to_price(&self.args.arg_usd_per_tx)?; let usd_per_tx = to_price(&self.args.arg_usd_per_tx)?;
if "auto" == self.args.arg_usd_per_eth.as_str() { if "auto" == self.args.arg_usd_per_eth.as_str() {
// Just a very rough estimate to avoid accepting
// ZGP transactions before the price is fetched
// if user does not want it.
let last_known_usd_per_eth = 10.0;
return Ok(GasPricerConfig::Calibrated { return Ok(GasPricerConfig::Calibrated {
initial_minimum: wei_per_gas(usd_per_tx, last_known_usd_per_eth),
usd_per_tx: usd_per_tx, usd_per_tx: usd_per_tx,
recalibration_period: to_duration(self.args.arg_price_update_period.as_str())?, recalibration_period: to_duration(self.args.arg_price_update_period.as_str())?,
}); });
@ -1233,7 +1244,7 @@ mod tests {
use tempdir::TempDir; use tempdir::TempDir;
use ethcore::client::{VMType, BlockId}; use ethcore::client::{VMType, BlockId};
use ethcore::miner::MinerOptions; use ethcore::miner::MinerOptions;
use miner::transaction_queue::PrioritizationStrategy; use miner::pool::PrioritizationStrategy;
use parity_rpc::NetworkSettings; use parity_rpc::NetworkSettings;
use updater::{UpdatePolicy, UpdateFilter, ReleaseTrack}; use updater::{UpdatePolicy, UpdateFilter, ReleaseTrack};
@ -1526,7 +1537,6 @@ mod tests {
no_hardcoded_sync: false, no_hardcoded_sync: false,
no_persistent_txqueue: false, no_persistent_txqueue: false,
whisper: Default::default(), whisper: Default::default(),
work_notify: Vec::new(),
}; };
expected.secretstore_conf.enabled = cfg!(feature = "secretstore"); expected.secretstore_conf.enabled = cfg!(feature = "secretstore");
expected.secretstore_conf.http_enabled = cfg!(feature = "secretstore"); expected.secretstore_conf.http_enabled = cfg!(feature = "secretstore");
@ -1540,18 +1550,12 @@ mod tests {
// when // when
let conf0 = parse(&["parity"]); let conf0 = parse(&["parity"]);
let conf1 = parse(&["parity", "--tx-queue-strategy", "gas_factor"]);
let conf2 = parse(&["parity", "--tx-queue-strategy", "gas_price"]); let conf2 = parse(&["parity", "--tx-queue-strategy", "gas_price"]);
let conf3 = parse(&["parity", "--tx-queue-strategy", "gas"]);
// then // then
assert_eq!(conf0.miner_options().unwrap(), mining_options); assert_eq!(conf0.miner_options().unwrap(), mining_options);
mining_options.tx_queue_strategy = PrioritizationStrategy::GasFactorAndGasPrice;
assert_eq!(conf1.miner_options().unwrap(), mining_options);
mining_options.tx_queue_strategy = PrioritizationStrategy::GasPriceOnly; mining_options.tx_queue_strategy = PrioritizationStrategy::GasPriceOnly;
assert_eq!(conf2.miner_options().unwrap(), mining_options); assert_eq!(conf2.miner_options().unwrap(), mining_options);
mining_options.tx_queue_strategy = PrioritizationStrategy::GasAndGasPrice;
assert_eq!(conf3.miner_options().unwrap(), mining_options);
} }
#[test] #[test]
@ -1883,8 +1887,8 @@ mod tests {
assert_eq!(c.miner_options.reseal_on_external_tx, true); assert_eq!(c.miner_options.reseal_on_external_tx, true);
assert_eq!(c.miner_options.reseal_on_own_tx, true); assert_eq!(c.miner_options.reseal_on_own_tx, true);
assert_eq!(c.miner_options.reseal_min_period, Duration::from_millis(4000)); assert_eq!(c.miner_options.reseal_min_period, Duration::from_millis(4000));
assert_eq!(c.miner_options.tx_queue_size, 2048); assert_eq!(c.miner_options.pool_limits.max_count, 8192);
assert_eq!(c.cache_config, CacheConfig::new_with_total_cache_size(256)); assert_eq!(c.cache_config, CacheConfig::new_with_total_cache_size(1024));
assert_eq!(c.logger_config.mode.unwrap(), "miner=trace,own_tx=trace"); assert_eq!(c.logger_config.mode.unwrap(), "miner=trace,own_tx=trace");
}, },
_ => panic!("Should be Cmd::Run"), _ => panic!("Should be Cmd::Run"),


@ -23,9 +23,9 @@ use std::path::Path;
use ethereum_types::{U256, clean_0x, Address}; use ethereum_types::{U256, clean_0x, Address};
use journaldb::Algorithm; use journaldb::Algorithm;
use ethcore::client::{Mode, BlockId, VMType, DatabaseCompactionProfile, ClientConfig, VerifierType}; use ethcore::client::{Mode, BlockId, VMType, DatabaseCompactionProfile, ClientConfig, VerifierType};
use ethcore::miner::{PendingSet, GasLimit};
use ethcore::db::NUM_COLUMNS; use ethcore::db::NUM_COLUMNS;
use miner::transaction_queue::PrioritizationStrategy; use ethcore::miner::{PendingSet, Penalization};
use miner::pool::PrioritizationStrategy;
use cache::CacheConfig; use cache::CacheConfig;
use dir::DatabaseDirectories; use dir::DatabaseDirectories;
use dir::helpers::replace_home; use dir::helpers::replace_home;
@ -101,21 +101,20 @@ pub fn to_pending_set(s: &str) -> Result<PendingSet, String> {
} }
} }
pub fn to_gas_limit(s: &str) -> Result<GasLimit, String> { pub fn to_queue_strategy(s: &str) -> Result<PrioritizationStrategy, String> {
match s { match s {
"auto" => Ok(GasLimit::Auto), "gas_price" => Ok(PrioritizationStrategy::GasPriceOnly),
"off" => Ok(GasLimit::None), other => Err(format!("Invalid queue strategy: {}", other)),
other => Ok(GasLimit::Fixed(to_u256(other)?)),
} }
} }
pub fn to_queue_strategy(s: &str) -> Result<PrioritizationStrategy, String> { pub fn to_queue_penalization(time: Option<u64>) -> Result<Penalization, String> {
match s { Ok(match time {
"gas" => Ok(PrioritizationStrategy::GasAndGasPrice), Some(threshold_ms) => Penalization::Enabled {
"gas_price" => Ok(PrioritizationStrategy::GasPriceOnly), offend_threshold: Duration::from_millis(threshold_ms),
"gas_factor" => Ok(PrioritizationStrategy::GasFactorAndGasPrice), },
other => Err(format!("Invalid queue strategy: {}", other)), None => Penalization::Disabled,
} })
} }
pub fn to_address(s: Option<String>) -> Result<Address, String> { pub fn to_address(s: Option<String>) -> Result<Address, String> {
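A short usage sketch for the two helpers above (editorial note, not part of the diff), assuming they are called from the same module:

use std::time::Duration;
use ethcore::miner::Penalization;

fn queue_helper_examples() {
    // Only the gas_price strategy remains; "gas" and "gas_factor" were removed.
    assert!(to_queue_strategy("gas_price").is_ok());
    assert!(to_queue_strategy("gas").is_err());

    // --tx-time-limit now maps to sender penalization instead of banning.
    match to_queue_penalization(Some(100)).unwrap() {
        Penalization::Enabled { offend_threshold } =>
            assert_eq!(offend_threshold, Duration::from_millis(100)),
        Penalization::Disabled => panic!("expected penalization to be enabled"),
    }
}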


@ -16,15 +16,16 @@
use std::{str, fs, fmt}; use std::{str, fs, fmt};
use std::time::Duration; use std::time::Duration;
use ethcore::client::Mode;
use ethcore::ethereum;
use ethcore::spec::{Spec, SpecParams};
use ethereum_types::{U256, Address}; use ethereum_types::{U256, Address};
use futures_cpupool::CpuPool; use futures_cpupool::CpuPool;
use parity_version::version_data;
use journaldb::Algorithm;
use ethcore::spec::{Spec, SpecParams};
use ethcore::ethereum;
use ethcore::client::Mode;
use ethcore::miner::{GasPricer, GasPriceCalibratorOptions};
use hash_fetch::fetch::Client as FetchClient; use hash_fetch::fetch::Client as FetchClient;
use journaldb::Algorithm;
use miner::gas_pricer::{GasPricer, GasPriceCalibratorOptions};
use parity_version::version_data;
use user_defaults::UserDefaults; use user_defaults::UserDefaults;
#[derive(Debug, PartialEq)] #[derive(Debug, PartialEq)]
@ -223,25 +224,14 @@ impl Default for AccountsConfig {
pub enum GasPricerConfig { pub enum GasPricerConfig {
Fixed(U256), Fixed(U256),
Calibrated { Calibrated {
initial_minimum: U256,
usd_per_tx: f32, usd_per_tx: f32,
recalibration_period: Duration, recalibration_period: Duration,
} }
} }
impl GasPricerConfig {
pub fn initial_min(&self) -> U256 {
match *self {
GasPricerConfig::Fixed(ref min) => min.clone(),
GasPricerConfig::Calibrated { ref initial_minimum, .. } => initial_minimum.clone(),
}
}
}
impl Default for GasPricerConfig { impl Default for GasPricerConfig {
fn default() -> Self { fn default() -> Self {
GasPricerConfig::Calibrated { GasPricerConfig::Calibrated {
initial_minimum: 476190464u64.into(),
usd_per_tx: 0.0001f32, usd_per_tx: 0.0001f32,
recalibration_period: Duration::from_secs(3600), recalibration_period: Duration::from_secs(3600),
} }
@ -269,20 +259,20 @@ impl GasPricerConfig {
#[derive(Debug, PartialEq)] #[derive(Debug, PartialEq)]
pub struct MinerExtras { pub struct MinerExtras {
pub author: Address, pub author: Address,
pub extra_data: Vec<u8>,
pub gas_floor_target: U256,
pub gas_ceil_target: U256,
pub engine_signer: Address, pub engine_signer: Address,
pub extra_data: Vec<u8>,
pub gas_range_target: (U256, U256),
pub work_notify: Vec<String>,
} }
impl Default for MinerExtras { impl Default for MinerExtras {
fn default() -> Self { fn default() -> Self {
MinerExtras { MinerExtras {
author: Default::default(), author: Default::default(),
extra_data: version_data(),
gas_floor_target: U256::from(4_700_000),
gas_ceil_target: U256::from(6_283_184),
engine_signer: Default::default(), engine_signer: Default::default(),
extra_data: version_data(),
gas_range_target: (4_700_000.into(), 6_283_184.into()),
work_notify: Default::default(),
} }
} }
} }


@ -24,11 +24,10 @@ use std::net::{TcpListener};
use ansi_term::{Colour, Style}; use ansi_term::{Colour, Style};
use ctrlc::CtrlC; use ctrlc::CtrlC;
use ethcore::account_provider::{AccountProvider, AccountProviderSettings}; use ethcore::account_provider::{AccountProvider, AccountProviderSettings};
use ethcore::client::{Client, Mode, DatabaseCompactionProfile, VMType, BlockChainClient}; use ethcore::client::{Client, Mode, DatabaseCompactionProfile, VMType, BlockChainClient, BlockInfo};
use ethcore::db::NUM_COLUMNS; use ethcore::db::NUM_COLUMNS;
use ethcore::ethstore::ethkey; use ethcore::ethstore::ethkey;
use ethcore::miner::{Miner, MinerService, MinerOptions}; use ethcore::miner::{stratum, Miner, MinerService, MinerOptions};
use ethcore::miner::{StratumOptions, Stratum};
use ethcore::snapshot; use ethcore::snapshot;
use ethcore::spec::{SpecParams, OptimizeFor}; use ethcore::spec::{SpecParams, OptimizeFor};
use ethcore::verification::queue::VerifierSettings; use ethcore::verification::queue::VerifierSettings;
@ -128,7 +127,7 @@ pub struct RunCmd {
pub ui: bool, pub ui: bool,
pub name: String, pub name: String,
pub custom_bootnodes: bool, pub custom_bootnodes: bool,
pub stratum: Option<StratumOptions>, pub stratum: Option<stratum::Options>,
pub no_periodic_snapshot: bool, pub no_periodic_snapshot: bool,
pub check_seal: bool, pub check_seal: bool,
pub download_old_blocks: bool, pub download_old_blocks: bool,
@ -138,7 +137,6 @@ pub struct RunCmd {
pub no_persistent_txqueue: bool, pub no_persistent_txqueue: bool,
pub whisper: ::whisper::Config, pub whisper: ::whisper::Config,
pub no_hardcoded_sync: bool, pub no_hardcoded_sync: bool,
pub work_notify: Vec<String>,
} }
pub fn open_ui(ws_conf: &rpc::WsConfiguration, ui_conf: &rpc::UiConfiguration, logger_config: &LogConfig) -> Result<(), String> { pub fn open_ui(ws_conf: &rpc::WsConfiguration, ui_conf: &rpc::UiConfiguration, logger_config: &LogConfig) -> Result<(), String> {
@ -176,11 +174,12 @@ impl ::local_store::NodeInfo for FullNodeInfo {
None => return Vec::new(), None => return Vec::new(),
}; };
let local_txs = miner.local_transactions(); miner.local_transactions()
miner.pending_transactions() .values()
.into_iter() .filter_map(|status| match *status {
.chain(miner.future_transactions()) ::miner::pool::local_transactions::Status::Pending(ref tx) => Some(tx.pending().clone()),
.filter(|tx| local_txs.contains_key(&tx.hash())) _ => None,
})
.collect() .collect()
} }
} }
@ -559,19 +558,21 @@ fn execute_impl<Cr, Rr>(cmd: RunCmd, logger: Arc<RotatingLogger>, on_client_rq:
let fetch = fetch::Client::new().map_err(|e| format!("Error starting fetch client: {:?}", e))?; let fetch = fetch::Client::new().map_err(|e| format!("Error starting fetch client: {:?}", e))?;
// create miner // create miner
let initial_min_gas_price = cmd.gas_pricer_conf.initial_min(); let miner = Arc::new(Miner::new(
let miner = Miner::new(cmd.miner_options, cmd.gas_pricer_conf.to_gas_pricer(fetch.clone(), cpu_pool.clone()), &spec, Some(account_provider.clone())); cmd.miner_options,
miner.set_author(cmd.miner_extras.author); cmd.gas_pricer_conf.to_gas_pricer(fetch.clone(), cpu_pool.clone()),
miner.set_gas_floor_target(cmd.miner_extras.gas_floor_target); &spec,
miner.set_gas_ceil_target(cmd.miner_extras.gas_ceil_target); Some(account_provider.clone())
));
miner.set_author(cmd.miner_extras.author, None).expect("Fails only if password is Some; password is None; qed");
miner.set_gas_range_target(cmd.miner_extras.gas_range_target);
miner.set_extra_data(cmd.miner_extras.extra_data); miner.set_extra_data(cmd.miner_extras.extra_data);
miner.set_minimal_gas_price(initial_min_gas_price); if !cmd.miner_extras.work_notify.is_empty() {
miner.recalibrate_minimal_gas_price(); miner.add_work_listener(Box::new(
if !cmd.work_notify.is_empty() { WorkPoster::new(&cmd.miner_extras.work_notify, fetch.clone(), event_loop.remote())
miner.push_notifier(Box::new(WorkPoster::new(&cmd.work_notify, fetch.clone(), event_loop.remote()))); ));
} }
let engine_signer = cmd.miner_extras.engine_signer; let engine_signer = cmd.miner_extras.engine_signer;
if engine_signer != Default::default() { if engine_signer != Default::default() {
// Check if engine signer exists // Check if engine signer exists
if !account_provider.has_account(engine_signer).unwrap_or(false) { if !account_provider.has_account(engine_signer).unwrap_or(false) {
@ -584,7 +585,7 @@ fn execute_impl<Cr, Rr>(cmd: RunCmd, logger: Arc<RotatingLogger>, on_client_rq:
} }
// Attempt to sign in the engine signer. // Attempt to sign in the engine signer.
if !passwords.iter().any(|p| miner.set_engine_signer(engine_signer, (*p).clone()).is_ok()) { if !passwords.iter().any(|p| miner.set_author(engine_signer, Some(p.to_owned())).is_ok()) {
return Err(format!("No valid password for the consensus signer {}. {}", engine_signer, VERIFY_PASSWORD_HINT)); return Err(format!("No valid password for the consensus signer {}. {}", engine_signer, VERIFY_PASSWORD_HINT));
} }
} }
@ -646,6 +647,9 @@ fn execute_impl<Cr, Rr>(cmd: RunCmd, logger: Arc<RotatingLogger>, on_client_rq:
// take handle to client // take handle to client
let client = service.client(); let client = service.client();
// Update miners block gas limit
miner.update_transaction_queue_limits(*client.best_block_header().gas_limit());
// take handle to private transactions service // take handle to private transactions service
let private_tx_service = service.private_tx_service(); let private_tx_service = service.private_tx_service();
let private_tx_provider = private_tx_service.provider(); let private_tx_provider = private_tx_service.provider();
@ -695,7 +699,7 @@ fn execute_impl<Cr, Rr>(cmd: RunCmd, logger: Arc<RotatingLogger>, on_client_rq:
// start stratum // start stratum
if let Some(ref stratum_config) = cmd.stratum { if let Some(ref stratum_config) = cmd.stratum {
Stratum::register(stratum_config, miner.clone(), Arc::downgrade(&client)) stratum::Stratum::register(stratum_config, miner.clone(), Arc::downgrade(&client))
.map_err(|e| format!("Stratum start error: {:?}", e))?; .map_err(|e| format!("Stratum start error: {:?}", e))?;
} }


@ -194,10 +194,12 @@ impl SnapshotCommand {
&snapshot_path, &snapshot_path,
restoration_db_handler, restoration_db_handler,
&self.dirs.ipc_path(), &self.dirs.ipc_path(),
Arc::new(Miner::with_spec(&spec)), // TODO [ToDr] don't use test miner here
// (actually don't require miner at all)
Arc::new(Miner::new_for_tests(&spec, None)),
Arc::new(AccountProvider::transient_provider()), Arc::new(AccountProvider::transient_provider()),
Box::new(ethcore_private_tx::NoopEncryptor), Box::new(ethcore_private_tx::NoopEncryptor),
Default::default() Default::default(),
).map_err(|e| format!("Client service error: {:?}", e))?; ).map_err(|e| format!("Client service error: {:?}", e))?;
Ok(service) Ok(service)


@ -98,7 +98,7 @@ impl<F: Fetch> Client<F> {
} }
/// Gets the current ETH price and calls `set_price` with the result. /// Gets the current ETH price and calls `set_price` with the result.
pub fn get<G: Fn(PriceInfo) + Sync + Send + 'static>(&self, set_price: G) { pub fn get<G: FnOnce(PriceInfo) + Sync + Send + 'static>(&self, set_price: G) {
let future = self.fetch.get(&self.api_endpoint, fetch::Abort::default()) let future = self.fetch.get(&self.api_endpoint, fetch::Abort::default())
.from_err() .from_err()
.and_then(|response| { .and_then(|response| {


@ -66,8 +66,9 @@ stats = { path = "../util/stats" }
vm = { path = "../ethcore/vm" } vm = { path = "../ethcore/vm" }
[dev-dependencies] [dev-dependencies]
pretty_assertions = "0.1"
macros = { path = "../util/macros" }
ethcore-network = { path = "../util/network" } ethcore-network = { path = "../util/network" }
kvdb-memorydb = { path = "../util/kvdb-memorydb" }
fake-fetch = { path = "../util/fake-fetch" } fake-fetch = { path = "../util/fake-fetch" }
kvdb-memorydb = { path = "../util/kvdb-memorydb" }
macros = { path = "../util/macros" }
pretty_assertions = "0.1"
transaction-pool = { path = "../transaction-pool" }


@ -79,6 +79,8 @@ extern crate serde_derive;
#[cfg(test)] #[cfg(test)]
extern crate ethjson; extern crate ethjson;
#[cfg(test)]
extern crate transaction_pool as txpool;
#[cfg(test)] #[cfg(test)]
#[macro_use] #[macro_use]


@ -34,8 +34,8 @@ use stats::Corpus;
use ethkey::Signature; use ethkey::Signature;
use sync::LightSync; use sync::LightSync;
use ethcore::ids::BlockId; use ethcore::ids::BlockId;
use ethcore::miner::MinerService; use ethcore::client::BlockChainClient;
use ethcore::client::MiningBlockChainClient; use ethcore::miner::{self, MinerService};
use ethcore::account_provider::AccountProvider; use ethcore::account_provider::AccountProvider;
use crypto::DEFAULT_MAC; use crypto::DEFAULT_MAC;
use transaction::{Action, SignedTransaction, PendingTransaction, Transaction}; use transaction::{Action, SignedTransaction, PendingTransaction, Transaction};
@ -117,10 +117,9 @@ impl<C, M> Clone for FullDispatcher<C, M> {
} }
} }
impl<C: MiningBlockChainClient, M: MinerService> FullDispatcher<C, M> { impl<C: miner::BlockChainClient, M: MinerService> FullDispatcher<C, M> {
fn state_nonce(&self, from: &Address) -> U256 { fn state_nonce(&self, from: &Address) -> U256 {
self.miner.last_nonce(from).map(|nonce| nonce + U256::one()) self.miner.next_nonce(&*self.client, from)
.unwrap_or_else(|| self.client.latest_nonce(from))
} }
/// Imports transaction to the miner's queue. /// Imports transaction to the miner's queue.
@ -133,7 +132,7 @@ impl<C: MiningBlockChainClient, M: MinerService> FullDispatcher<C, M> {
} }
} }
impl<C: MiningBlockChainClient, M: MinerService> Dispatcher for FullDispatcher<C, M> { impl<C: miner::BlockChainClient + BlockChainClient, M: MinerService> Dispatcher for FullDispatcher<C, M> {
fn fill_optional_fields(&self, request: TransactionRequest, default_sender: Address, force_nonce: bool) fn fill_optional_fields(&self, request: TransactionRequest, default_sender: Address, force_nonce: bool)
-> BoxFuture<FilledTransactionRequest> -> BoxFuture<FilledTransactionRequest>
{ {
@ -747,7 +746,7 @@ fn decrypt(accounts: &AccountProvider, address: Address, msg: Bytes, password: S
/// Extract the default gas price from a client and miner. /// Extract the default gas price from a client and miner.
pub fn default_gas_price<C, M>(client: &C, miner: &M, percentile: usize) -> U256 where pub fn default_gas_price<C, M>(client: &C, miner: &M, percentile: usize) -> U256 where
C: MiningBlockChainClient, C: BlockChainClient,
M: MinerService, M: MinerService,
{ {
client.gas_price_corpus(100).percentile(percentile).cloned().unwrap_or_else(|| miner.sensible_gas_price()) client.gas_price_corpus(100).percentile(percentile).cloned().unwrap_or_else(|| miner.sensible_gas_price())


@ -391,11 +391,11 @@ pub fn no_light_peers() -> Error {
} }
} }
pub fn deprecated<T: Into<Option<String>>>(message: T) -> Error { pub fn deprecated<S: Into<String>, T: Into<Option<S>>>(message: T) -> Error {
Error { Error {
code: ErrorCode::ServerError(codes::DEPRECATED), code: ErrorCode::ServerError(codes::DEPRECATED),
message: "Method deprecated".into(), message: "Method deprecated".into(),
data: message.into().map(Value::String), data: message.into().map(Into::into).map(Value::String),
} }
} }


@ -26,13 +26,12 @@ use parking_lot::Mutex;
use ethash::SeedHashCompute; use ethash::SeedHashCompute;
use ethcore::account_provider::{AccountProvider, DappId}; use ethcore::account_provider::{AccountProvider, DappId};
use ethcore::block::IsBlock; use ethcore::client::{BlockChainClient, BlockId, TransactionId, UncleId, StateOrBlock, StateClient, StateInfo, Call, EngineInfo};
use ethcore::client::{MiningBlockChainClient, BlockId, TransactionId, UncleId, StateOrBlock, StateClient, StateInfo, Call, EngineInfo};
use ethcore::ethereum::Ethash; use ethcore::ethereum::Ethash;
use ethcore::filter::Filter as EthcoreFilter; use ethcore::filter::Filter as EthcoreFilter;
use ethcore::header::{BlockNumber as EthBlockNumber}; use ethcore::header::{BlockNumber as EthBlockNumber};
use ethcore::log_entry::LogEntry; use ethcore::log_entry::LogEntry;
use ethcore::miner::MinerService; use ethcore::miner::{self, MinerService};
use ethcore::snapshot::SnapshotService; use ethcore::snapshot::SnapshotService;
use ethcore::encoded; use ethcore::encoded;
use sync::{SyncProvider}; use sync::{SyncProvider};
@ -92,7 +91,7 @@ impl Default for EthClientOptions {
/// Eth rpc implementation. /// Eth rpc implementation.
pub struct EthClient<C, SN: ?Sized, S: ?Sized, M, EM> where pub struct EthClient<C, SN: ?Sized, S: ?Sized, M, EM> where
C: MiningBlockChainClient, C: miner::BlockChainClient + BlockChainClient,
SN: SnapshotService, SN: SnapshotService,
S: SyncProvider, S: SyncProvider,
M: MinerService, M: MinerService,
@ -142,7 +141,7 @@ enum PendingTransactionId {
} }
impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> EthClient<C, SN, S, M, EM> where impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> EthClient<C, SN, S, M, EM> where
C: MiningBlockChainClient + StateClient<State=T> + Call<State=T> + EngineInfo, C: miner::BlockChainClient + BlockChainClient + StateClient<State=T> + Call<State=T> + EngineInfo,
SN: SnapshotService, SN: SnapshotService,
S: SyncProvider, S: SyncProvider,
M: MinerService<State=T>, M: MinerService<State=T>,
@ -420,7 +419,7 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> EthClient<C, SN, S
} }
pub fn pending_logs<M>(miner: &M, best_block: EthBlockNumber, filter: &EthcoreFilter) -> Vec<Log> where M: MinerService { pub fn pending_logs<M>(miner: &M, best_block: EthBlockNumber, filter: &EthcoreFilter) -> Vec<Log> where M: MinerService {
let receipts = miner.pending_receipts(best_block); let receipts = miner.pending_receipts(best_block).unwrap_or_default();
let pending_logs = receipts.into_iter() let pending_logs = receipts.into_iter()
.flat_map(|(hash, r)| r.logs.into_iter().map(|l| (hash.clone(), l)).collect::<Vec<(H256, LogEntry)>>()) .flat_map(|(hash, r)| r.logs.into_iter().map(|l| (hash.clone(), l)).collect::<Vec<(H256, LogEntry)>>())
@ -438,7 +437,7 @@ pub fn pending_logs<M>(miner: &M, best_block: EthBlockNumber, filter: &EthcoreFi
result result
} }
fn check_known<C>(client: &C, number: BlockNumber) -> Result<()> where C: MiningBlockChainClient { fn check_known<C>(client: &C, number: BlockNumber) -> Result<()> where C: BlockChainClient {
use ethcore::block_status::BlockStatus; use ethcore::block_status::BlockStatus;
let id = match number { let id = match number {
@ -458,7 +457,7 @@ fn check_known<C>(client: &C, number: BlockNumber) -> Result<()> where C: Mining
const MAX_QUEUE_SIZE_TO_MINE_ON: usize = 4; // because uncles go back 6. const MAX_QUEUE_SIZE_TO_MINE_ON: usize = 4; // because uncles go back 6.
impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<C, SN, S, M, EM> where impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<C, SN, S, M, EM> where
C: MiningBlockChainClient + StateClient<State=T> + Call<State=T> + EngineInfo + 'static, C: miner::BlockChainClient + BlockChainClient + StateClient<State=T> + Call<State=T> + EngineInfo + 'static,
SN: SnapshotService + 'static, SN: SnapshotService + 'static,
S: SyncProvider + 'static, S: SyncProvider + 'static,
M: MinerService<State=T> + 'static, M: MinerService<State=T> + 'static,
@ -506,7 +505,7 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<
fn author(&self, meta: Metadata) -> Result<RpcH160> { fn author(&self, meta: Metadata) -> Result<RpcH160> {
let dapp = meta.dapp_id(); let dapp = meta.dapp_id();
let mut miner = self.miner.author(); let mut miner = self.miner.authoring_params().author;
if miner == 0.into() { if miner == 0.into() {
miner = self.dapp_accounts(dapp.into())?.get(0).cloned().unwrap_or_default(); miner = self.dapp_accounts(dapp.into())?.get(0).cloned().unwrap_or_default();
} }
@ -571,16 +570,8 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<
let res = match num.unwrap_or_default() { let res = match num.unwrap_or_default() {
BlockNumber::Pending if self.options.pending_nonce_from_queue => { BlockNumber::Pending if self.options.pending_nonce_from_queue => {
let nonce = self.miner.last_nonce(&address) Ok(self.miner.next_nonce(&*self.client, &address).into())
.map(|n| n + 1.into())
.or_else(|| self.client.nonce(&address, BlockId::Latest));
match nonce {
Some(nonce) => Ok(nonce.into()),
None => Err(errors::database("latest nonce missing"))
} }
},
BlockNumber::Pending => { BlockNumber::Pending => {
let info = self.client.chain_info(); let info = self.client.chain_info();
let nonce = self.miner let nonce = self.miner
@ -596,7 +587,6 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<
None => Err(errors::database("latest nonce missing")) None => Err(errors::database("latest nonce missing"))
} }
}, },
number => { number => {
try_bf!(check_known(&*self.client, number.clone())); try_bf!(check_known(&*self.client, number.clone()));
match self.client.nonce(&address, block_number_to_id(number)) { match self.client.nonce(&address, block_number_to_id(number)) {
@ -615,13 +605,13 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<
} }
fn block_transaction_count_by_number(&self, num: BlockNumber) -> BoxFuture<Option<RpcU256>> { fn block_transaction_count_by_number(&self, num: BlockNumber) -> BoxFuture<Option<RpcU256>> {
let block_number = self.client.chain_info().best_block_number;
Box::new(future::ok(match num { Box::new(future::ok(match num {
BlockNumber::Pending => Some( BlockNumber::Pending =>
self.miner.status().transactions_in_pending_block.into() self.miner.pending_transactions(block_number).map(|x| x.len().into()),
),
_ => _ =>
self.client.block(block_number_to_id(num)) self.client.block(block_number_to_id(num)).map(|block| block.transactions_count().into())
.map(|block| block.transactions_count().into())
})) }))
} }
@ -665,8 +655,8 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<
let hash: H256 = hash.into(); let hash: H256 = hash.into();
let block_number = self.client.chain_info().best_block_number; let block_number = self.client.chain_info().best_block_number;
let tx = try_bf!(self.transaction(PendingTransactionId::Hash(hash))).or_else(|| { let tx = try_bf!(self.transaction(PendingTransactionId::Hash(hash))).or_else(|| {
self.miner.transaction(block_number, &hash) self.miner.transaction(&hash)
.map(|t| Transaction::from_pending(t, block_number, self.eip86_transition)) .map(|t| Transaction::from_pending(t.pending().clone(), block_number + 1, self.eip86_transition))
}); });
Box::new(future::ok(tx)) Box::new(future::ok(tx))
@ -745,11 +735,6 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<
} }
fn work(&self, no_new_work_timeout: Trailing<u64>) -> Result<Work> { fn work(&self, no_new_work_timeout: Trailing<u64>) -> Result<Work> {
if !self.miner.can_produce_work_package() {
warn!(target: "miner", "Cannot give work package - engine seals internally.");
return Err(errors::no_work_required())
}
let no_new_work_timeout = no_new_work_timeout.unwrap_or_default(); let no_new_work_timeout = no_new_work_timeout.unwrap_or_default();
// check if we're still syncing and return empty strings in that case // check if we're still syncing and return empty strings in that case
@ -768,25 +753,29 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<
} }
} }
-		if self.miner.author().is_zero() {
+		if self.miner.authoring_params().author.is_zero() {
			warn!(target: "miner", "Cannot give work package - no author is configured. Use --author to configure!");
			return Err(errors::no_author())
		}
-		self.miner.map_sealing_work(&*self.client, |b| {
-			let pow_hash = b.hash();
-			let target = Ethash::difficulty_to_boundary(b.block().header().difficulty());
-			let seed_hash = self.seed_compute.lock().hash_block_number(b.block().header().number());
+		let work = self.miner.work_package(&*self.client).ok_or_else(|| {
+			warn!(target: "miner", "Cannot give work package - engine seals internally.");
+			errors::no_work_required()
+		})?;
+		let (pow_hash, number, timestamp, difficulty) = work;
+		let target = Ethash::difficulty_to_boundary(&difficulty);
+		let seed_hash = self.seed_compute.lock().hash_block_number(number);
		let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap_or_default().as_secs();
-		if no_new_work_timeout > 0 && b.block().header().timestamp() + no_new_work_timeout < now {
+		if no_new_work_timeout > 0 && timestamp + no_new_work_timeout < now {
			Err(errors::no_new_work())
		} else if self.options.send_block_number_in_get_work {
-			let block_number = b.block().header().number();
			Ok(Work {
				pow_hash: pow_hash.into(),
				seed_hash: seed_hash.into(),
				target: target.into(),
-				number: Some(block_number),
+				number: Some(number),
			})
		} else {
			Ok(Work {
@@ -796,22 +785,26 @@ impl<C, SN: ?Sized, S: ?Sized, M, EM, T: StateInfo + 'static> Eth for EthClient<
				number: None
			})
		}
-		}).unwrap_or(Err(errors::internal("No work found.", "")))
	}
	fn submit_work(&self, nonce: RpcH64, pow_hash: RpcH256, mix_hash: RpcH256) -> Result<bool> {
-		if !self.miner.can_produce_work_package() {
-			warn!(target: "miner", "Cannot submit work - engine seals internally.");
-			return Err(errors::no_work_required())
-		}
+		// TODO [ToDr] Should disallow submissions in case of PoA?
		let nonce: H64 = nonce.into();
		let pow_hash: H256 = pow_hash.into();
		let mix_hash: H256 = mix_hash.into();
		trace!(target: "miner", "submit_work: Decoded: nonce={}, pow_hash={}, mix_hash={}", nonce, pow_hash, mix_hash);
		let seal = vec![rlp::encode(&mix_hash).into_vec(), rlp::encode(&nonce).into_vec()];
-		Ok(self.miner.submit_seal(&*self.client, pow_hash, seal).is_ok())
+		let import = self.miner.submit_seal(pow_hash, seal)
+			.and_then(|block| self.client.import_sealed_block(block));
+		match import {
+			Ok(_) => Ok(true),
+			Err(err) => {
+				warn!(target: "miner", "Cannot submit work - {:?}.", err);
+				Ok(false)
+			},
+		}
	}
fn submit_hashrate(&self, rate: RpcU256, id: RpcH256) -> Result<bool> { fn submit_hashrate(&self, rate: RpcU256, id: RpcH256) -> Result<bool> {
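Taken together, the `work` and `submit_work` hunks above describe the reshaped external-mining flow: `work_package` hands back plain data (pow hash, block number, timestamp, difficulty) instead of running a closure over a `ClosedBlock`, and `submit_seal` now returns a `SealedBlock` that the caller imports explicitly. A rough sketch of that round trip, using only the signatures visible in this diff; the generic bounds and the `ImportSealedBlock` trait name are assumptions made for illustration, not part of the change:

	// Ask the miner for a PoW job: the hash to mine on and the difficulty to beat.
	fn get_work_sketch<M, C>(miner: &M, client: &C) -> Option<(H256, U256)>
	where
		M: MinerService,
		C: PrepareOpenBlock, // bound borrowed from the TestMinerService impl further down; the real Miner may require more
	{
		let (pow_hash, _number, _timestamp, difficulty) = miner.work_package(client)?;
		// The RPC layer turns `difficulty` into a share target via Ethash::difficulty_to_boundary.
		Some((pow_hash, difficulty))
	}

	// Hand a found seal back: seal verification happens in `submit_seal`,
	// while importing the resulting block into the chain is now the caller's job.
	fn submit_work_sketch<M, C>(miner: &M, client: &C, pow_hash: H256, seal: Vec<Bytes>) -> bool
	where
		M: MinerService,
		C: ImportSealedBlock, // assumed name of the client trait that provides `import_sealed_block`
	{
		miner.submit_seal(pow_hash, seal)
			.and_then(|sealed| client.import_sealed_block(sealed))
			.is_ok()
	}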


@ -19,7 +19,7 @@
use std::sync::Arc; use std::sync::Arc;
use std::collections::HashSet; use std::collections::HashSet;
use ethcore::miner::MinerService; use ethcore::miner::{self, MinerService};
use ethcore::filter::Filter as EthcoreFilter; use ethcore::filter::Filter as EthcoreFilter;
use ethcore::client::{BlockChainClient, BlockId}; use ethcore::client::{BlockChainClient, BlockId};
use ethereum_types::H256; use ethereum_types::H256;
@ -42,7 +42,7 @@ pub trait Filterable {
fn block_hash(&self, id: BlockId) -> Option<RpcH256>; fn block_hash(&self, id: BlockId) -> Option<RpcH256>;
/// pending transaction hashes at the given block. /// pending transaction hashes at the given block.
fn pending_transactions_hashes(&self, block_number: u64) -> Vec<H256>; fn pending_transactions_hashes(&self) -> Vec<H256>;
/// Get logs that match the given filter. /// Get logs that match the given filter.
fn logs(&self, filter: EthcoreFilter) -> BoxFuture<Vec<Log>>; fn logs(&self, filter: EthcoreFilter) -> BoxFuture<Vec<Log>>;
@ -55,16 +55,13 @@ pub trait Filterable {
} }
/// Eth filter rpc implementation for a full node. /// Eth filter rpc implementation for a full node.
-pub struct EthFilterClient<C, M> where
-	C: BlockChainClient,
-	M: MinerService {
+pub struct EthFilterClient<C, M> {
	client: Arc<C>,
	miner: Arc<M>,
	polls: Mutex<PollManager<PollFilter>>,
}
-impl<C, M> EthFilterClient<C, M> where C: BlockChainClient, M: MinerService {
+impl<C, M> EthFilterClient<C, M> {
/// Creates new Eth filter client. /// Creates new Eth filter client.
pub fn new(client: Arc<C>, miner: Arc<M>) -> Self { pub fn new(client: Arc<C>, miner: Arc<M>) -> Self {
EthFilterClient { EthFilterClient {
@ -75,7 +72,10 @@ impl<C, M> EthFilterClient<C, M> where C: BlockChainClient, M: MinerService {
} }
} }
-impl<C, M> Filterable for EthFilterClient<C, M> where C: BlockChainClient, M: MinerService {
+impl<C, M> Filterable for EthFilterClient<C, M> where
+	C: miner::BlockChainClient + BlockChainClient,
+	M: MinerService,
+{
	fn best_block_number(&self) -> u64 {
		self.client.chain_info().best_block_number
	}
@@ -84,8 +84,11 @@ impl<C, M> Filterable for EthFilterClient<C, M>
		self.client.block_hash(id).map(Into::into)
	}
-	fn pending_transactions_hashes(&self, best: u64) -> Vec<H256> {
-		self.miner.pending_transactions_hashes(best)
+	fn pending_transactions_hashes(&self) -> Vec<H256> {
+		self.miner.ready_transactions(&*self.client)
+			.into_iter()
+			.map(|tx| tx.signed().hash())
+			.collect()
	}
fn logs(&self, filter: EthcoreFilter) -> BoxFuture<Vec<Log>> { fn logs(&self, filter: EthcoreFilter) -> BoxFuture<Vec<Log>> {
@ -118,8 +121,7 @@ impl<T: Filterable + Send + Sync + 'static> EthFilter for T {
fn new_pending_transaction_filter(&self) -> Result<RpcU256> { fn new_pending_transaction_filter(&self) -> Result<RpcU256> {
let mut polls = self.polls().lock(); let mut polls = self.polls().lock();
-		let best_block = self.best_block_number();
-		let pending_transactions = self.pending_transactions_hashes(best_block);
+		let pending_transactions = self.pending_transactions_hashes();
let id = polls.create_poll(PollFilter::PendingTransaction(pending_transactions)); let id = polls.create_poll(PollFilter::PendingTransaction(pending_transactions));
Ok(id.into()) Ok(id.into())
} }
@ -143,8 +145,7 @@ impl<T: Filterable + Send + Sync + 'static> EthFilter for T {
}, },
PollFilter::PendingTransaction(ref mut previous_hashes) => { PollFilter::PendingTransaction(ref mut previous_hashes) => {
// get hashes of pending transactions // get hashes of pending transactions
-					let best_block = self.best_block_number();
-					let current_hashes = self.pending_transactions_hashes(best_block);
+					let current_hashes = self.pending_transactions_hashes();
let new_hashes = let new_hashes =
{ {
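The net effect of the filter changes above is that pending-transaction polls no longer thread a best-block number through `Filterable`; the hashes are taken from whatever the miner currently reports as ready. A hedged sketch of the equivalent standalone helper, with the bounds copied from the new `Filterable` impl (the function name itself is illustrative):

	fn current_pending_hashes<C, M>(client: &C, miner: &M) -> Vec<H256>
	where
		C: miner::BlockChainClient + BlockChainClient,
		M: MinerService,
	{
		// Each verified transaction in the ready set exposes its signed form, which carries the hash.
		miner.ready_transactions(client)
			.into_iter()
			.map(|tx| tx.signed().hash())
			.collect()
	}

The poll handler then only has to diff this list against the hashes stored in `PollFilter::PendingTransaction`.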


@ -533,7 +533,7 @@ impl<T: LightChainClient + 'static> Filterable for EthClient<T> {
self.client.block_hash(id).map(Into::into) self.client.block_hash(id).map(Into::into)
} }
-	fn pending_transactions_hashes(&self, _block_number: u64) -> Vec<::ethereum_types::H256> {
+	fn pending_transactions_hashes(&self) -> Vec<::ethereum_types::H256> {
Vec::new() Vec::new()
} }


@ -275,6 +275,21 @@ impl Parity for ParityClient {
) )
} }
fn all_transactions(&self) -> Result<Vec<Transaction>> {
let txq = self.light_dispatch.transaction_queue.read();
let chain_info = self.light_dispatch.client.chain_info();
let current = txq.ready_transactions(chain_info.best_block_number, chain_info.best_block_timestamp);
let future = txq.future_transactions(chain_info.best_block_number, chain_info.best_block_timestamp);
Ok(
current
.into_iter()
.chain(future.into_iter())
.map(|tx| Transaction::from_pending(tx, chain_info.best_block_number, self.eip86_transition))
.collect::<Vec<_>>()
)
}
fn future_transactions(&self) -> Result<Vec<Transaction>> { fn future_transactions(&self) -> Result<Vec<Transaction>> {
let txq = self.light_dispatch.transaction_queue.read(); let txq = self.light_dispatch.transaction_queue.read();
let chain_info = self.light_dispatch.client.chain_info(); let chain_info = self.light_dispatch.client.chain_info();


@ -27,9 +27,9 @@ use ethkey::{Brain, Generator};
use ethstore::random_phrase; use ethstore::random_phrase;
use sync::{SyncProvider, ManageNetwork}; use sync::{SyncProvider, ManageNetwork};
use ethcore::account_provider::AccountProvider; use ethcore::account_provider::AccountProvider;
-use ethcore::client::{MiningBlockChainClient, StateClient, Call};
+use ethcore::client::{BlockChainClient, StateClient, Call};
use ethcore::ids::BlockId;
-use ethcore::miner::MinerService;
+use ethcore::miner::{self, MinerService};
use ethcore::mode::Mode; use ethcore::mode::Mode;
use ethcore::state::StateInfo; use ethcore::state::StateInfo;
use ethcore_logger::RotatingLogger; use ethcore_logger::RotatingLogger;
@ -72,7 +72,7 @@ pub struct ParityClient<C, M, U> {
} }
impl<C, M, U> ParityClient<C, M, U> where impl<C, M, U> ParityClient<C, M, U> where
-	C: MiningBlockChainClient,
+	C: BlockChainClient,
{ {
/// Creates new `ParityClient`. /// Creates new `ParityClient`.
pub fn new( pub fn new(
@ -116,7 +116,7 @@ impl<C, M, U> ParityClient<C, M, U> where
impl<C, M, U, S> Parity for ParityClient<C, M, U> where impl<C, M, U, S> Parity for ParityClient<C, M, U> where
S: StateInfo + 'static, S: StateInfo + 'static,
-	C: MiningBlockChainClient + StateClient<State=S> + Call<State=S> + 'static,
+	C: miner::BlockChainClient + BlockChainClient + StateClient<State=S> + Call<State=S> + 'static,
M: MinerService<State=S> + 'static, M: MinerService<State=S> + 'static,
U: UpdateService + 'static, U: UpdateService + 'static,
{ {
@ -170,23 +170,23 @@ impl<C, M, U, S> Parity for ParityClient<C, M, U> where
} }
	fn transactions_limit(&self) -> Result<usize> {
-		Ok(self.miner.transactions_limit())
+		Ok(self.miner.queue_status().limits.max_count)
	}
	fn min_gas_price(&self) -> Result<U256> {
-		Ok(U256::from(self.miner.minimal_gas_price()))
+		Ok(self.miner.queue_status().options.minimal_gas_price.into())
	}
	fn extra_data(&self) -> Result<Bytes> {
-		Ok(Bytes::new(self.miner.extra_data()))
+		Ok(Bytes::new(self.miner.authoring_params().extra_data))
	}
	fn gas_floor_target(&self) -> Result<U256> {
-		Ok(U256::from(self.miner.gas_floor_target()))
+		Ok(U256::from(self.miner.authoring_params().gas_range_target.0))
	}
	fn gas_ceil_target(&self) -> Result<U256> {
-		Ok(U256::from(self.miner.gas_ceil_target()))
+		Ok(U256::from(self.miner.authoring_params().gas_range_target.1))
	}
fn dev_logs(&self) -> Result<Vec<String>> { fn dev_logs(&self) -> Result<Vec<String>> {
@ -315,12 +315,28 @@ impl<C, M, U, S> Parity for ParityClient<C, M, U> where
	fn pending_transactions(&self) -> Result<Vec<Transaction>> {
		let block_number = self.client.chain_info().best_block_number;
-		Ok(self.miner.pending_transactions().into_iter().map(|t| Transaction::from_pending(t, block_number, self.eip86_transition)).collect::<Vec<_>>())
+		let ready_transactions = self.miner.ready_transactions(&*self.client);
+		Ok(ready_transactions
+			.into_iter()
+			.map(|t| Transaction::from_pending(t.pending().clone(), block_number, self.eip86_transition))
+			.collect()
+		)
+	}
+
+	fn all_transactions(&self) -> Result<Vec<Transaction>> {
+		let block_number = self.client.chain_info().best_block_number;
+		let all_transactions = self.miner.queued_transactions();
+		Ok(all_transactions
+			.into_iter()
+			.map(|t| Transaction::from_pending(t.pending().clone(), block_number, self.eip86_transition))
+			.collect()
+		)
	}
	fn future_transactions(&self) -> Result<Vec<Transaction>> {
-		let block_number = self.client.chain_info().best_block_number;
-		Ok(self.miner.future_transactions().into_iter().map(|t| Transaction::from_pending(t, block_number, self.eip86_transition)).collect::<Vec<_>>())
+		Err(errors::deprecated("Use `parity_allTransaction` instead."))
	}
fn pending_transactions_stats(&self) -> Result<BTreeMap<H256, TransactionStats>> { fn pending_transactions_stats(&self) -> Result<BTreeMap<H256, TransactionStats>> {
@ -359,11 +375,7 @@ impl<C, M, U, S> Parity for ParityClient<C, M, U> where
	fn next_nonce(&self, address: H160) -> BoxFuture<U256> {
		let address: Address = address.into();
-		Box::new(future::ok(self.miner.last_nonce(&address)
-			.map(|n| n + 1.into())
-			.unwrap_or_else(|| self.client.latest_nonce(&address))
-			.into()
-		))
+		Box::new(future::ok(self.miner.next_nonce(&*self.client, &address).into()))
	}
fn mode(&self) -> Result<String> { fn mode(&self) -> Result<String> {
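After this change the `parity_*` getters above no longer have dedicated `MinerService` methods; everything is read from two snapshots, `authoring_params()` (author, gas range, extra data) and `queue_status()` (verifier options, pool limits, live pool counters). A small sketch of reading the same values directly, with field names as they appear in this diff; the function itself is illustrative:

	fn print_miner_settings<M: MinerService>(miner: &M) {
		let params = miner.authoring_params();
		let status = miner.queue_status();

		println!("author           = {:?}", params.author);
		println!("gas floor / ceil = {:?}", params.gas_range_target);
		println!("extra data       = {:?}", params.extra_data);
		println!("min gas price    = {}", status.options.minimal_gas_price);
		println!("tx count limit   = {}", status.limits.max_count);
		println!("txs in pool      = {}", status.status.transaction_count);
	}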


@ -18,8 +18,8 @@
use std::io; use std::io;
use std::sync::Arc; use std::sync::Arc;
+use ethcore::client::BlockChainClient;
use ethcore::miner::MinerService;
-use ethcore::client::MiningBlockChainClient;
use ethcore::mode::Mode; use ethcore::mode::Mode;
use sync::ManageNetwork; use sync::ManageNetwork;
use fetch::{self, Fetch}; use fetch::{self, Fetch};
@ -47,7 +47,7 @@ pub struct ParitySetClient<C, M, U, F = fetch::Client> {
} }
impl<C, M, U, F> ParitySetClient<C, M, U, F> impl<C, M, U, F> ParitySetClient<C, M, U, F>
-	where C: MiningBlockChainClient + 'static,
+	where C: BlockChainClient + 'static,
{ {
/// Creates new `ParitySetClient` with given `Fetch`. /// Creates new `ParitySetClient` with given `Fetch`.
pub fn new( pub fn new(
@ -73,24 +73,38 @@ impl<C, M, U, F> ParitySetClient<C, M, U, F>
} }
impl<C, M, U, F> ParitySet for ParitySetClient<C, M, U, F> where impl<C, M, U, F> ParitySet for ParitySetClient<C, M, U, F> where
-	C: MiningBlockChainClient + 'static,
+	C: BlockChainClient + 'static,
M: MinerService + 'static, M: MinerService + 'static,
U: UpdateService + 'static, U: UpdateService + 'static,
F: Fetch + 'static, F: Fetch + 'static,
{ {
-	fn set_min_gas_price(&self, gas_price: U256) -> Result<bool> {
-		self.miner.set_minimal_gas_price(gas_price.into());
-		Ok(true)
+	fn set_min_gas_price(&self, _gas_price: U256) -> Result<bool> {
+		warn!("setMinGasPrice is deprecated. Ignoring request.");
+		Ok(false)
+	}
+	fn set_transactions_limit(&self, _limit: usize) -> Result<bool> {
+		warn!("setTransactionsLimit is deprecated. Ignoring request.");
+		Ok(false)
+	}
+	fn set_tx_gas_limit(&self, _limit: U256) -> Result<bool> {
+		warn!("setTxGasLimit is deprecated. Ignoring request.");
+		Ok(false)
	}
	fn set_gas_floor_target(&self, target: U256) -> Result<bool> {
-		self.miner.set_gas_floor_target(target.into());
+		let mut range = self.miner.authoring_params().gas_range_target.clone();
+		range.0 = target.into();
+		self.miner.set_gas_range_target(range);
		Ok(true)
	}
	fn set_gas_ceil_target(&self, target: U256) -> Result<bool> {
-		self.miner.set_gas_ceil_target(target.into());
+		let mut range = self.miner.authoring_params().gas_range_target.clone();
+		range.1 = target.into();
+		self.miner.set_gas_range_target(range);
		Ok(true)
	}
@ -99,23 +113,13 @@ impl<C, M, U, F> ParitySet for ParitySetClient<C, M, U, F> where
Ok(true) Ok(true)
} }
-	fn set_author(&self, author: H160) -> Result<bool> {
-		self.miner.set_author(author.into());
+	fn set_author(&self, address: H160) -> Result<bool> {
+		self.miner.set_author(address.into(), None).map_err(Into::into).map_err(errors::password)?;
		Ok(true)
	}
	fn set_engine_signer(&self, address: H160, password: String) -> Result<bool> {
-		self.miner.set_engine_signer(address.into(), password).map_err(Into::into).map_err(errors::password)?;
-		Ok(true)
-	}
-	fn set_transactions_limit(&self, limit: usize) -> Result<bool> {
-		self.miner.set_transactions_limit(limit);
-		Ok(true)
-	}
-	fn set_tx_gas_limit(&self, limit: U256) -> Result<bool> {
-		self.miner.set_tx_gas_limit(limit.into());
+		self.miner.set_author(address.into(), Some(password)).map_err(Into::into).map_err(errors::password)?;
		Ok(true)
	}
@ -202,6 +206,8 @@ impl<C, M, U, F> ParitySet for ParitySetClient<C, M, U, F> where
let block_number = self.client.chain_info().best_block_number; let block_number = self.client.chain_info().best_block_number;
let hash = hash.into(); let hash = hash.into();
-		Ok(self.miner.remove_pending_transaction(&*self.client, &hash).map(|t| Transaction::from_pending(t, block_number, self.eip86_transition)))
+		Ok(self.miner.remove_transaction(&hash)
+			.map(|t| Transaction::from_pending(t.pending().clone(), block_number + 1, self.eip86_transition))
+		)
} }
} }
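Because the miner now stores a single `(floor, ceil)` pair, the two gas-target setters above are read-modify-write operations on `authoring_params().gas_range_target` rather than writes to independent fields. The same pattern as a small helper (a sketch; only `authoring_params` and `set_gas_range_target` come from this diff):

	/// Update one end of the miner's gas target range while keeping the other end as-is.
	fn set_gas_target<M: MinerService>(miner: &M, floor: Option<U256>, ceil: Option<U256>) {
		let mut range = miner.authoring_params().gas_range_target;
		if let Some(floor) = floor {
			range.0 = floor;
		}
		if let Some(ceil) = ceil {
			range.1 = ceil;
		}
		miner.set_gas_range_target(range);
	}

Like the RPC handlers themselves, this is not atomic: two concurrent callers can race between reading the range and writing it back.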


@ -18,7 +18,7 @@
use std::sync::Arc; use std::sync::Arc;
-use ethcore::client::{MiningBlockChainClient, CallAnalytics, TransactionId, TraceId, StateClient, StateInfo, Call, BlockId};
+use ethcore::client::{BlockChainClient, CallAnalytics, TransactionId, TraceId, StateClient, StateInfo, Call, BlockId};
use rlp::UntrustedRlp; use rlp::UntrustedRlp;
use transaction::SignedTransaction; use transaction::SignedTransaction;
@ -53,7 +53,7 @@ impl<C> TracesClient<C> {
impl<C, S> Traces for TracesClient<C> where impl<C, S> Traces for TracesClient<C> where
S: StateInfo + 'static, S: StateInfo + 'static,
-	C: MiningBlockChainClient + StateClient<State=S> + Call<State=S> + 'static
+	C: BlockChainClient + StateClient<State=S> + Call<State=S> + 'static
{ {
type Metadata = Metadata; type Metadata = Metadata;


@ -17,15 +17,14 @@
//! rpc integration tests. //! rpc integration tests.
use std::env; use std::env;
use std::sync::Arc; use std::sync::Arc;
use std::time::Duration;
use ethereum_types::{U256, H256, Address}; use ethereum_types::{H256, Address};
use ethcore::account_provider::AccountProvider; use ethcore::account_provider::AccountProvider;
use ethcore::block::Block; use ethcore::block::Block;
use ethcore::client::{BlockChainClient, Client, ClientConfig, ChainInfo, ImportBlock}; use ethcore::client::{BlockChainClient, Client, ClientConfig, ChainInfo, ImportBlock};
use ethcore::ethereum; use ethcore::ethereum;
use ethcore::ids::BlockId; use ethcore::ids::BlockId;
-use ethcore::miner::{MinerOptions, Banning, GasPricer, Miner, PendingSet, GasLimit};
+use ethcore::miner::Miner;
use ethcore::spec::{Genesis, Spec}; use ethcore::spec::{Genesis, Spec};
use ethcore::views::BlockView; use ethcore::views::BlockView;
use ethjson::blockchain::BlockChain; use ethjson::blockchain::BlockChain;
@ -33,7 +32,6 @@ use ethjson::state::test::ForkSpec;
use io::IoChannel; use io::IoChannel;
use kvdb_memorydb; use kvdb_memorydb;
use miner::external::ExternalMiner; use miner::external::ExternalMiner;
use miner::transaction_queue::PrioritizationStrategy;
use parking_lot::Mutex; use parking_lot::Mutex;
use jsonrpc_core::IoHandler; use jsonrpc_core::IoHandler;
@ -58,30 +56,7 @@ fn sync_provider() -> Arc<TestSyncProvider> {
} }
fn miner_service(spec: &Spec, accounts: Arc<AccountProvider>) -> Arc<Miner> { fn miner_service(spec: &Spec, accounts: Arc<AccountProvider>) -> Arc<Miner> {
-	Miner::new(
+	Arc::new(Miner::new_for_tests(spec, Some(accounts)))
MinerOptions {
force_sealing: true,
reseal_on_external_tx: true,
reseal_on_own_tx: true,
reseal_on_uncle: false,
tx_queue_size: 1024,
tx_gas_limit: !U256::zero(),
tx_queue_strategy: PrioritizationStrategy::GasPriceOnly,
tx_queue_gas_limit: GasLimit::None,
tx_queue_banning: Banning::Disabled,
tx_queue_memory_limit: None,
pending_set: PendingSet::SealingOrElseQueue,
reseal_min_period: Duration::from_secs(0),
reseal_max_period: Duration::from_secs(120),
work_queue_size: 50,
enable_resubmission: true,
refuse_service_transactions: false,
infinite_pending_block: false,
},
GasPricer::new_fixed(20_000_000_000u64.into()),
&spec,
Some(accounts),
)
} }
fn snapshot_service() -> Arc<TestSnapshotService> { fn snapshot_service() -> Arc<TestSnapshotService> {


@ -16,82 +16,68 @@
//! Test implementation of miner service. //! Test implementation of miner service.
use std::sync::Arc;
use std::collections::{BTreeMap, HashMap}; use std::collections::{BTreeMap, HashMap};
use std::collections::hash_map::Entry;
use bytes::Bytes; use bytes::Bytes;
use ethcore::account_provider::SignError as AccountError; use ethcore::account_provider::SignError as AccountError;
use ethcore::block::{Block, ClosedBlock}; use ethcore::block::{Block, SealedBlock, IsBlock};
use ethcore::client::{Nonce, PrepareOpenBlock, StateClient, EngineInfo}; use ethcore::client::{Nonce, PrepareOpenBlock, StateClient, EngineInfo};
use ethcore::engines::EthEngine; use ethcore::engines::EthEngine;
use ethcore::error::Error; use ethcore::error::Error;
use ethcore::header::{BlockNumber, Header}; use ethcore::header::{BlockNumber, Header};
use ethcore::ids::BlockId; use ethcore::ids::BlockId;
use ethcore::miner::{MinerService, MinerStatus}; use ethcore::miner::{MinerService, AuthoringParams};
use ethcore::receipt::{Receipt, RichReceipt}; use ethcore::receipt::{Receipt, RichReceipt};
use ethereum_types::{H256, U256, Address}; use ethereum_types::{H256, U256, Address};
use miner::local_transactions::Status as LocalTransactionStatus; use miner::pool::local_transactions::Status as LocalTransactionStatus;
use miner::pool::{verifier, VerifiedTransaction, QueueStatus};
use parking_lot::{RwLock, Mutex}; use parking_lot::{RwLock, Mutex};
use transaction::{UnverifiedTransaction, SignedTransaction, PendingTransaction, ImportResult as TransactionImportResult}; use transaction::{self, UnverifiedTransaction, SignedTransaction, PendingTransaction};
use txpool;
/// Test miner service. /// Test miner service.
pub struct TestMinerService { pub struct TestMinerService {
/// Imported transactions. /// Imported transactions.
pub imported_transactions: Mutex<Vec<SignedTransaction>>, pub imported_transactions: Mutex<Vec<SignedTransaction>>,
/// Latest closed block.
pub latest_closed_block: Mutex<Option<ClosedBlock>>,
/// Pre-existed pending transactions /// Pre-existed pending transactions
pub pending_transactions: Mutex<HashMap<H256, SignedTransaction>>, pub pending_transactions: Mutex<HashMap<H256, SignedTransaction>>,
/// Pre-existed local transactions /// Pre-existed local transactions
pub local_transactions: Mutex<BTreeMap<H256, LocalTransactionStatus>>, pub local_transactions: Mutex<BTreeMap<H256, LocalTransactionStatus>>,
/// Pre-existed pending receipts /// Pre-existed pending receipts
pub pending_receipts: Mutex<BTreeMap<H256, Receipt>>, pub pending_receipts: Mutex<BTreeMap<H256, Receipt>>,
/// Last nonces. /// Next nonces.
pub last_nonces: RwLock<HashMap<Address, U256>>, pub next_nonces: RwLock<HashMap<Address, U256>>,
/// Password held by Engine. /// Password held by Engine.
pub password: RwLock<String>, pub password: RwLock<String>,
min_gas_price: RwLock<U256>, authoring_params: RwLock<AuthoringParams>,
gas_range_target: RwLock<(U256, U256)>,
author: RwLock<Address>,
extra_data: RwLock<Bytes>,
limit: RwLock<usize>,
tx_gas_limit: RwLock<U256>,
} }
impl Default for TestMinerService { impl Default for TestMinerService {
fn default() -> TestMinerService { fn default() -> TestMinerService {
TestMinerService { TestMinerService {
imported_transactions: Mutex::new(Vec::new()), imported_transactions: Mutex::new(Vec::new()),
latest_closed_block: Mutex::new(None),
pending_transactions: Mutex::new(HashMap::new()), pending_transactions: Mutex::new(HashMap::new()),
local_transactions: Mutex::new(BTreeMap::new()), local_transactions: Mutex::new(BTreeMap::new()),
pending_receipts: Mutex::new(BTreeMap::new()), pending_receipts: Mutex::new(BTreeMap::new()),
last_nonces: RwLock::new(HashMap::new()), next_nonces: RwLock::new(HashMap::new()),
min_gas_price: RwLock::new(U256::from(20_000_000)),
gas_range_target: RwLock::new((U256::from(12345), U256::from(54321))),
author: RwLock::new(Address::zero()),
password: RwLock::new(String::new()), password: RwLock::new(String::new()),
extra_data: RwLock::new(vec![1, 2, 3, 4]), authoring_params: RwLock::new(AuthoringParams {
limit: RwLock::new(1024), author: Address::zero(),
tx_gas_limit: RwLock::new(!U256::zero()), gas_range_target: (12345.into(), 54321.into()),
extra_data: vec![1, 2, 3, 4],
}),
} }
} }
} }
impl TestMinerService { impl TestMinerService {
/// Increments last nonce for given address. /// Increments nonce for given address.
pub fn increment_last_nonce(&self, address: Address) { pub fn increment_nonce(&self, address: &Address) {
let mut last_nonces = self.last_nonces.write(); let mut next_nonces = self.next_nonces.write();
match last_nonces.entry(address) { let nonce = next_nonces.entry(*address).or_insert_with(|| 0.into());
Entry::Occupied(mut occupied) => { *nonce = *nonce + 1.into();
let val = *occupied.get();
*occupied.get_mut() = val + 1.into();
},
Entry::Vacant(vacant) => {
vacant.insert(0.into());
},
}
} }
} }
@ -129,164 +115,112 @@ impl MinerService for TestMinerService {
None None
} }
-	/// Returns miner's status.
-	fn status(&self) -> MinerStatus {
-		MinerStatus {
-			transactions_in_pending_queue: 0,
-			transactions_in_future_queue: 0,
-			transactions_in_pending_block: 1
-		}
+	fn authoring_params(&self) -> AuthoringParams {
+		self.authoring_params.read().clone()
	}
-	fn set_author(&self, author: Address) {
-		*self.author.write() = author;
-	}
-	fn set_engine_signer(&self, address: Address, password: String) -> Result<(), AccountError> {
-		*self.author.write() = address;
+	fn set_author(&self, author: Address, password: Option<String>) -> Result<(), AccountError> {
+		self.authoring_params.write().author = author;
+		if let Some(password) = password {
			*self.password.write() = password;
+		}
		Ok(())
	}
	fn set_extra_data(&self, extra_data: Bytes) {
-		*self.extra_data.write() = extra_data;
+		self.authoring_params.write().extra_data = extra_data;
	}
-	/// Set the lower gas limit we wish to target when sealing a new block.
-	fn set_gas_floor_target(&self, target: U256) {
-		self.gas_range_target.write().0 = target;
+	fn set_gas_range_target(&self, target: (U256, U256)) {
+		self.authoring_params.write().gas_range_target = target;
}
/// Set the upper gas limit we wish to target when sealing a new block.
fn set_gas_ceil_target(&self, target: U256) {
self.gas_range_target.write().1 = target;
}
fn set_minimal_gas_price(&self, min_gas_price: U256) {
*self.min_gas_price.write() = min_gas_price;
}
fn set_transactions_limit(&self, limit: usize) {
*self.limit.write() = limit;
}
fn set_tx_gas_limit(&self, limit: U256) {
*self.tx_gas_limit.write() = limit;
}
fn transactions_limit(&self) -> usize {
*self.limit.read()
}
fn author(&self) -> Address {
*self.author.read()
}
fn minimal_gas_price(&self) -> U256 {
*self.min_gas_price.read()
}
fn extra_data(&self) -> Bytes {
self.extra_data.read().clone()
}
fn gas_floor_target(&self) -> U256 {
self.gas_range_target.read().0
}
fn gas_ceil_target(&self) -> U256 {
self.gas_range_target.read().1
} }
/// Imports transactions to transaction queue. /// Imports transactions to transaction queue.
fn import_external_transactions<C>(&self, _chain: &C, transactions: Vec<UnverifiedTransaction>) -> fn import_external_transactions<C: Nonce + Sync>(&self, chain: &C, transactions: Vec<UnverifiedTransaction>)
Vec<Result<TransactionImportResult, Error>> { -> Vec<Result<(), transaction::Error>>
{
// lets assume that all txs are valid // lets assume that all txs are valid
let transactions: Vec<_> = transactions.into_iter().map(|tx| SignedTransaction::new(tx).unwrap()).collect(); let transactions: Vec<_> = transactions.into_iter().map(|tx| SignedTransaction::new(tx).unwrap()).collect();
self.imported_transactions.lock().extend_from_slice(&transactions); self.imported_transactions.lock().extend_from_slice(&transactions);
for sender in transactions.iter().map(|tx| tx.sender()) { for sender in transactions.iter().map(|tx| tx.sender()) {
let nonce = self.last_nonce(&sender).expect("last_nonce must be populated in tests"); let nonce = self.next_nonce(chain, &sender);
self.last_nonces.write().insert(sender, nonce + U256::from(1)); self.next_nonces.write().insert(sender, nonce);
} }
transactions transactions
.iter() .iter()
.map(|_| Ok(TransactionImportResult::Current)) .map(|_| Ok(()))
.collect() .collect()
} }
/// Imports transactions to transaction queue. /// Imports transactions to transaction queue.
fn import_own_transaction<C: Nonce>(&self, chain: &C, pending: PendingTransaction) -> fn import_own_transaction<C: Nonce + Sync>(&self, chain: &C, pending: PendingTransaction)
Result<TransactionImportResult, Error> { -> Result<(), transaction::Error> {
// keep the pending nonces up to date // keep the pending nonces up to date
let sender = pending.transaction.sender(); let sender = pending.transaction.sender();
let nonce = self.last_nonce(&sender).unwrap_or(chain.latest_nonce(&sender)); let nonce = self.next_nonce(chain, &sender);
self.last_nonces.write().insert(sender, nonce + U256::from(1)); self.next_nonces.write().insert(sender, nonce);
// lets assume that all txs are valid // lets assume that all txs are valid
self.imported_transactions.lock().push(pending.transaction); self.imported_transactions.lock().push(pending.transaction);
Ok(TransactionImportResult::Current) Ok(())
}
/// Returns hashes of transactions currently in pending
fn pending_transactions_hashes(&self, _best_block: BlockNumber) -> Vec<H256> {
vec![]
}
/// Removes all transactions from the queue and restart mining operation.
fn clear_and_reset<C>(&self, _chain: &C) {
unimplemented!();
} }
/// Called when blocks are imported to chain, updates transactions queue. /// Called when blocks are imported to chain, updates transactions queue.
fn chain_new_blocks<C>(&self, _chain: &C, _imported: &[H256], _invalid: &[H256], _enacted: &[H256], _retracted: &[H256]) { fn chain_new_blocks<C>(&self, _chain: &C, _imported: &[H256], _invalid: &[H256], _enacted: &[H256], _retracted: &[H256], _is_internal: bool) {
unimplemented!(); unimplemented!();
} }
/// PoW chain - can produce work package
fn can_produce_work_package(&self) -> bool {
true
}
/// New chain head event. Restart mining operation. /// New chain head event. Restart mining operation.
fn update_sealing<C>(&self, _chain: &C) { fn update_sealing<C>(&self, _chain: &C) {
unimplemented!(); unimplemented!();
} }
-	fn map_sealing_work<C: PrepareOpenBlock, F, T>(&self, chain: &C, f: F) -> Option<T> where F: FnOnce(&ClosedBlock) -> T {
-		let open_block = chain.prepare_open_block(self.author(), *self.gas_range_target.write(), self.extra_data());
-		Some(f(&open_block.close()))
+	fn work_package<C: PrepareOpenBlock>(&self, chain: &C) -> Option<(H256, BlockNumber, u64, U256)> {
+		let params = self.authoring_params();
+		let open_block = chain.prepare_open_block(params.author, params.gas_range_target, params.extra_data);
+		let closed = open_block.close();
+		let header = closed.header();
+		Some((header.hash(), header.number(), header.timestamp(), *header.difficulty()))
	}
fn transaction(&self, _best_block: BlockNumber, hash: &H256) -> Option<PendingTransaction> { fn transaction(&self, hash: &H256) -> Option<Arc<VerifiedTransaction>> {
self.pending_transactions.lock().get(hash).cloned().map(Into::into) self.pending_transactions.lock().get(hash).cloned().map(|tx| {
Arc::new(VerifiedTransaction::from_pending_block_transaction(tx))
})
} }
fn remove_pending_transaction<C>(&self, _chain: &C, hash: &H256) -> Option<PendingTransaction> { fn remove_transaction(&self, hash: &H256) -> Option<Arc<VerifiedTransaction>> {
self.pending_transactions.lock().remove(hash).map(Into::into) self.pending_transactions.lock().remove(hash).map(|tx| {
Arc::new(VerifiedTransaction::from_pending_block_transaction(tx))
})
} }
fn pending_transactions(&self) -> Vec<PendingTransaction> { fn pending_transactions(&self, _best_block: BlockNumber) -> Option<Vec<SignedTransaction>> {
self.pending_transactions.lock().values().cloned().map(Into::into).collect() Some(self.pending_transactions.lock().values().cloned().collect())
} }
fn local_transactions(&self) -> BTreeMap<H256, LocalTransactionStatus> { fn local_transactions(&self) -> BTreeMap<H256, LocalTransactionStatus> {
self.local_transactions.lock().iter().map(|(hash, stats)| (*hash, stats.clone())).collect() self.local_transactions.lock().iter().map(|(hash, stats)| (*hash, stats.clone())).collect()
} }
fn ready_transactions(&self, _best_block: BlockNumber, _best_timestamp: u64) -> Vec<PendingTransaction> { fn ready_transactions<C>(&self, _chain: &C) -> Vec<Arc<VerifiedTransaction>> {
self.pending_transactions.lock().values().cloned().map(Into::into).collect() self.queued_transactions()
} }
fn future_transactions(&self) -> Vec<PendingTransaction> { fn queued_transactions(&self) -> Vec<Arc<VerifiedTransaction>> {
vec![] self.pending_transactions.lock().values().cloned().map(|tx| {
Arc::new(VerifiedTransaction::from_pending_block_transaction(tx))
}).collect()
} }
fn pending_receipt(&self, _best_block: BlockNumber, hash: &H256) -> Option<RichReceipt> { fn pending_receipt(&self, _best_block: BlockNumber, hash: &H256) -> Option<RichReceipt> {
// Not much point implementing this since the logic is complex and the only thing it relies on is pending_receipts, which is already tested. // Not much point implementing this since the logic is complex and the only thing it relies on is pending_receipts, which is already tested.
self.pending_receipts(0).get(hash).map(|r| self.pending_receipts(0).unwrap().get(hash).map(|r|
RichReceipt { RichReceipt {
transaction_hash: Default::default(), transaction_hash: Default::default(),
transaction_index: Default::default(), transaction_index: Default::default(),
@ -300,25 +234,49 @@ impl MinerService for TestMinerService {
) )
} }
fn pending_receipts(&self, _best_block: BlockNumber) -> BTreeMap<H256, Receipt> { fn pending_receipts(&self, _best_block: BlockNumber) -> Option<BTreeMap<H256, Receipt>> {
self.pending_receipts.lock().clone() Some(self.pending_receipts.lock().clone())
} }
-	fn last_nonce(&self, address: &Address) -> Option<U256> {
-		self.last_nonces.read().get(address).cloned()
+	fn next_nonce<C: Nonce + Sync>(&self, _chain: &C, address: &Address) -> U256 {
+		self.next_nonces.read().get(address).cloned().unwrap_or_default()
	}
fn is_currently_sealing(&self) -> bool { fn is_currently_sealing(&self) -> bool {
false false
} }
fn queue_status(&self) -> QueueStatus {
QueueStatus {
options: verifier::Options {
minimal_gas_price: 0x1312d00.into(),
block_gas_limit: 5_000_000.into(),
tx_gas_limit: 5_000_000.into(),
},
status: txpool::LightStatus {
mem_usage: 1_000,
transaction_count: 52,
senders: 1,
},
limits: txpool::Options {
max_count: 1_024,
max_per_sender: 16,
max_mem_usage: 5_000,
},
}
}
/// Submit `seal` as a valid solution for the header of `pow_hash`. /// Submit `seal` as a valid solution for the header of `pow_hash`.
/// Will check the seal, but not actually insert the block into the chain. /// Will check the seal, but not actually insert the block into the chain.
fn submit_seal<C>(&self, _chain: &C, _pow_hash: H256, _seal: Vec<Bytes>) -> Result<(), Error> { fn submit_seal(&self, _pow_hash: H256, _seal: Vec<Bytes>) -> Result<SealedBlock, Error> {
unimplemented!(); unimplemented!();
} }
fn sensible_gas_price(&self) -> U256 { fn sensible_gas_price(&self) -> U256 {
20000000000u64.into() 20_000_000_000u64.into()
}
fn sensible_gas_limit(&self) -> U256 {
0x5208.into()
} }
} }


@ -368,7 +368,7 @@ fn rpc_eth_author() {
for i in 0..20 { for i in 0..20 {
let addr = tester.accounts_provider.new_account(&format!("{}", i)).unwrap(); let addr = tester.accounts_provider.new_account(&format!("{}", i)).unwrap();
tester.miner.set_author(addr.clone()); tester.miner.set_author(addr.clone(), None).unwrap();
assert_eq!(tester.io.handle_request_sync(req), Some(make_res(addr))); assert_eq!(tester.io.handle_request_sync(req), Some(make_res(addr)));
} }
@ -377,7 +377,7 @@ fn rpc_eth_author() {
#[test] #[test]
fn rpc_eth_mining() { fn rpc_eth_mining() {
let tester = EthTester::default(); let tester = EthTester::default();
tester.miner.set_author(Address::from_str("d46e8dd67c5d32be8058bb8eb970870f07244567").unwrap()); tester.miner.set_author(Address::from_str("d46e8dd67c5d32be8058bb8eb970870f07244567").unwrap(), None).unwrap();
let request = r#"{"jsonrpc": "2.0", "method": "eth_mining", "params": [], "id": 1}"#; let request = r#"{"jsonrpc": "2.0", "method": "eth_mining", "params": [], "id": 1}"#;
let response = r#"{"jsonrpc":"2.0","result":false,"id":1}"#; let response = r#"{"jsonrpc":"2.0","result":false,"id":1}"#;
@ -498,7 +498,7 @@ fn rpc_eth_transaction_count_next_nonce() {
let tester = EthTester::new_with_options(EthClientOptions::with(|options| { let tester = EthTester::new_with_options(EthClientOptions::with(|options| {
options.pending_nonce_from_queue = true; options.pending_nonce_from_queue = true;
})); }));
tester.miner.increment_last_nonce(1.into()); tester.miner.increment_nonce(&1.into());
let request1 = r#"{ let request1 = r#"{
"jsonrpc": "2.0", "jsonrpc": "2.0",
@ -553,7 +553,7 @@ fn rpc_eth_transaction_count_by_number_pending() {
"params": ["pending"], "params": ["pending"],
"id": 1 "id": 1
}"#; }"#;
-	let response = r#"{"jsonrpc":"2.0","result":"0x1","id":1}"#;
+	let response = r#"{"jsonrpc":"2.0","result":"0x0","id":1}"#;
assert_eq!(EthTester::default().io.handle_request_sync(request), Some(response.to_owned())); assert_eq!(EthTester::default().io.handle_request_sync(request), Some(response.to_owned()));
} }
@ -835,7 +835,7 @@ fn rpc_eth_send_transaction() {
assert_eq!(tester.io.handle_request_sync(&request), Some(response)); assert_eq!(tester.io.handle_request_sync(&request), Some(response));
tester.miner.last_nonces.write().insert(address.clone(), U256::zero()); tester.miner.increment_nonce(&address);
let t = Transaction { let t = Transaction {
nonce: U256::one(), nonce: U256::one(),
@ -905,7 +905,7 @@ fn rpc_eth_sign_transaction() {
r#""value":"0x9184e72a""# + r#""value":"0x9184e72a""# +
r#"}},"id":1}"#; r#"}},"id":1}"#;
tester.miner.last_nonces.write().insert(address.clone(), U256::zero()); tester.miner.increment_nonce(&address);
assert_eq!(tester.io.handle_request_sync(&request), Some(response)); assert_eq!(tester.io.handle_request_sync(&request), Some(response));
} }
@ -1118,7 +1118,7 @@ fn rpc_get_work_returns_no_work_if_cant_mine() {
#[test] #[test]
fn rpc_get_work_returns_correct_work_package() { fn rpc_get_work_returns_correct_work_package() {
let eth_tester = EthTester::default(); let eth_tester = EthTester::default();
eth_tester.miner.set_author(Address::from_str("d46e8dd67c5d32be8058bb8eb970870f07244567").unwrap()); eth_tester.miner.set_author(Address::from_str("d46e8dd67c5d32be8058bb8eb970870f07244567").unwrap(), None).unwrap();
let request = r#"{"jsonrpc": "2.0", "method": "eth_getWork", "params": [], "id": 1}"#; let request = r#"{"jsonrpc": "2.0", "method": "eth_getWork", "params": [], "id": 1}"#;
let response = r#"{"jsonrpc":"2.0","result":["0x76c7bd86693aee93d1a80a408a09a0585b1a1292afcb56192f171d925ea18e2d","0x0000000000000000000000000000000000000000000000000000000000000000","0x0000800000000000000000000000000000000000000000000000000000000000","0x1"],"id":1}"#; let response = r#"{"jsonrpc":"2.0","result":["0x76c7bd86693aee93d1a80a408a09a0585b1a1292afcb56192f171d925ea18e2d","0x0000000000000000000000000000000000000000000000000000000000000000","0x0000800000000000000000000000000000000000000000000000000000000000","0x1"],"id":1}"#;
@ -1131,7 +1131,7 @@ fn rpc_get_work_should_not_return_block_number() {
let eth_tester = EthTester::new_with_options(EthClientOptions::with(|options| { let eth_tester = EthTester::new_with_options(EthClientOptions::with(|options| {
options.send_block_number_in_get_work = false; options.send_block_number_in_get_work = false;
})); }));
eth_tester.miner.set_author(Address::from_str("d46e8dd67c5d32be8058bb8eb970870f07244567").unwrap()); eth_tester.miner.set_author(Address::from_str("d46e8dd67c5d32be8058bb8eb970870f07244567").unwrap(), None).unwrap();
let request = r#"{"jsonrpc": "2.0", "method": "eth_getWork", "params": [], "id": 1}"#; let request = r#"{"jsonrpc": "2.0", "method": "eth_getWork", "params": [], "id": 1}"#;
let response = r#"{"jsonrpc":"2.0","result":["0x76c7bd86693aee93d1a80a408a09a0585b1a1292afcb56192f171d925ea18e2d","0x0000000000000000000000000000000000000000000000000000000000000000","0x0000800000000000000000000000000000000000000000000000000000000000"],"id":1}"#; let response = r#"{"jsonrpc":"2.0","result":["0x76c7bd86693aee93d1a80a408a09a0585b1a1292afcb56192f171d925ea18e2d","0x0000000000000000000000000000000000000000000000000000000000000000","0x0000800000000000000000000000000000000000000000000000000000000000"],"id":1}"#;
@ -1142,10 +1142,10 @@ fn rpc_get_work_should_not_return_block_number() {
#[test] #[test]
fn rpc_get_work_should_timeout() { fn rpc_get_work_should_timeout() {
let eth_tester = EthTester::default(); let eth_tester = EthTester::default();
eth_tester.miner.set_author(Address::from_str("d46e8dd67c5d32be8058bb8eb970870f07244567").unwrap()); eth_tester.miner.set_author(Address::from_str("d46e8dd67c5d32be8058bb8eb970870f07244567").unwrap(), None).unwrap();
let timestamp = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs() - 1000; // Set latest block to 1000 seconds ago let timestamp = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs() - 1000; // Set latest block to 1000 seconds ago
eth_tester.client.set_latest_block_timestamp(timestamp); eth_tester.client.set_latest_block_timestamp(timestamp);
let hash = eth_tester.miner.map_sealing_work(&*eth_tester.client, |b| b.hash()).unwrap(); let hash = eth_tester.miner.work_package(&*eth_tester.client).unwrap().0;
// Request without providing timeout. This should work since we're disabling timeout. // Request without providing timeout. This should work since we're disabling timeout.
let request = r#"{"jsonrpc": "2.0", "method": "eth_getWork", "params": [], "id": 1}"#; let request = r#"{"jsonrpc": "2.0", "method": "eth_getWork", "params": [], "id": 1}"#;


@ -17,13 +17,13 @@
use std::sync::Arc; use std::sync::Arc;
use ethcore::account_provider::AccountProvider; use ethcore::account_provider::AccountProvider;
use ethcore::client::{TestBlockChainClient, Executed}; use ethcore::client::{TestBlockChainClient, Executed};
use ethcore::miner::LocalTransactionStatus;
use ethcore_logger::RotatingLogger; use ethcore_logger::RotatingLogger;
use ethereum_types::{Address, U256, H256};
use ethstore::ethkey::{Generator, Random}; use ethstore::ethkey::{Generator, Random};
use sync::ManageNetwork; use miner::pool::local_transactions::Status as LocalTransactionStatus;
use node_health::{self, NodeHealth}; use node_health::{self, NodeHealth};
use parity_reactor; use parity_reactor;
use ethereum_types::{Address, U256, H256}; use sync::ManageNetwork;
use jsonrpc_core::IoHandler; use jsonrpc_core::IoHandler;
use v1::{Parity, ParityClient}; use v1::{Parity, ParityClient};
@ -455,7 +455,9 @@ fn rpc_parity_next_nonce() {
let address = Address::default(); let address = Address::default();
let io1 = deps.default_client(); let io1 = deps.default_client();
let deps = Dependencies::new(); let deps = Dependencies::new();
deps.miner.last_nonces.write().insert(address.clone(), 2.into()); deps.miner.increment_nonce(&address);
deps.miner.increment_nonce(&address);
deps.miner.increment_nonce(&address);
let io2 = deps.default_client(); let io2 = deps.default_client();
let request = r#"{ let request = r#"{
@ -486,11 +488,20 @@ fn rpc_parity_transactions_stats() {
fn rpc_parity_local_transactions() { fn rpc_parity_local_transactions() {
let deps = Dependencies::new(); let deps = Dependencies::new();
let io = deps.default_client(); let io = deps.default_client();
deps.miner.local_transactions.lock().insert(10.into(), LocalTransactionStatus::Pending); let tx = ::transaction::Transaction {
deps.miner.local_transactions.lock().insert(15.into(), LocalTransactionStatus::Future); value: 5.into(),
gas: 3.into(),
gas_price: 2.into(),
action: ::transaction::Action::Create,
data: vec![1, 2, 3],
nonce: 0.into(),
}.fake_sign(3.into());
let tx = Arc::new(::miner::pool::VerifiedTransaction::from_pending_block_transaction(tx));
deps.miner.local_transactions.lock().insert(10.into(), LocalTransactionStatus::Pending(tx.clone()));
deps.miner.local_transactions.lock().insert(15.into(), LocalTransactionStatus::Pending(tx.clone()));
let request = r#"{"jsonrpc": "2.0", "method": "parity_localTransactions", "params":[], "id": 1}"#; let request = r#"{"jsonrpc": "2.0", "method": "parity_localTransactions", "params":[], "id": 1}"#;
-	let response = r#"{"jsonrpc":"2.0","result":{"0x000000000000000000000000000000000000000000000000000000000000000a":{"status":"pending"},"0x000000000000000000000000000000000000000000000000000000000000000f":{"status":"future"}},"id":1}"#;
+	let response = r#"{"jsonrpc":"2.0","result":{"0x000000000000000000000000000000000000000000000000000000000000000a":{"status":"pending"},"0x000000000000000000000000000000000000000000000000000000000000000f":{"status":"pending"}},"id":1}"#;
assert_eq!(io.handle_request_sync(request), Some(response.to_owned())); assert_eq!(io.handle_request_sync(request), Some(response.to_owned()));
} }


@ -109,10 +109,9 @@ fn rpc_parity_set_min_gas_price() {
io.extend_with(parity_set_client(&client, &miner, &updater, &network).to_delegate()); io.extend_with(parity_set_client(&client, &miner, &updater, &network).to_delegate());
let request = r#"{"jsonrpc": "2.0", "method": "parity_setMinGasPrice", "params":["0xcd1722f3947def4cf144679da39c4c32bdc35681"], "id": 1}"#; let request = r#"{"jsonrpc": "2.0", "method": "parity_setMinGasPrice", "params":["0xcd1722f3947def4cf144679da39c4c32bdc35681"], "id": 1}"#;
-	let response = r#"{"jsonrpc":"2.0","result":true,"id":1}"#;
+	let response = r#"{"jsonrpc":"2.0","result":false,"id":1}"#;
assert_eq!(io.handle_request_sync(request), Some(response.to_owned())); assert_eq!(io.handle_request_sync(request), Some(response.to_owned()));
assert_eq!(miner.minimal_gas_price(), U256::from_str("cd1722f3947def4cf144679da39c4c32bdc35681").unwrap());
} }
#[test] #[test]
@ -129,7 +128,7 @@ fn rpc_parity_set_gas_floor_target() {
let response = r#"{"jsonrpc":"2.0","result":true,"id":1}"#; let response = r#"{"jsonrpc":"2.0","result":true,"id":1}"#;
assert_eq!(io.handle_request_sync(request), Some(response.to_owned())); assert_eq!(io.handle_request_sync(request), Some(response.to_owned()));
assert_eq!(miner.gas_floor_target(), U256::from_str("cd1722f3947def4cf144679da39c4c32bdc35681").unwrap()); assert_eq!(miner.authoring_params().gas_range_target.0, U256::from_str("cd1722f3947def4cf144679da39c4c32bdc35681").unwrap());
} }
#[test] #[test]
@ -146,7 +145,7 @@ fn rpc_parity_set_extra_data() {
let response = r#"{"jsonrpc":"2.0","result":true,"id":1}"#; let response = r#"{"jsonrpc":"2.0","result":true,"id":1}"#;
assert_eq!(io.handle_request_sync(request), Some(response.to_owned())); assert_eq!(io.handle_request_sync(request), Some(response.to_owned()));
assert_eq!(miner.extra_data(), "cd1722f3947def4cf144679da39c4c32bdc35681".from_hex().unwrap()); assert_eq!(miner.authoring_params().extra_data, "cd1722f3947def4cf144679da39c4c32bdc35681".from_hex().unwrap());
} }
#[test] #[test]
@ -162,7 +161,7 @@ fn rpc_parity_set_author() {
let response = r#"{"jsonrpc":"2.0","result":true,"id":1}"#; let response = r#"{"jsonrpc":"2.0","result":true,"id":1}"#;
assert_eq!(io.handle_request_sync(request), Some(response.to_owned())); assert_eq!(io.handle_request_sync(request), Some(response.to_owned()));
assert_eq!(miner.author(), Address::from_str("cd1722f3947def4cf144679da39c4c32bdc35681").unwrap()); assert_eq!(miner.authoring_params().author, Address::from_str("cd1722f3947def4cf144679da39c4c32bdc35681").unwrap());
} }
#[test] #[test]
@ -178,7 +177,7 @@ fn rpc_parity_set_engine_signer() {
let response = r#"{"jsonrpc":"2.0","result":true,"id":1}"#; let response = r#"{"jsonrpc":"2.0","result":true,"id":1}"#;
assert_eq!(io.handle_request_sync(request), Some(response.to_owned())); assert_eq!(io.handle_request_sync(request), Some(response.to_owned()));
assert_eq!(miner.author(), Address::from_str("cd1722f3947def4cf144679da39c4c32bdc35681").unwrap()); assert_eq!(miner.authoring_params().author, Address::from_str("cd1722f3947def4cf144679da39c4c32bdc35681").unwrap());
assert_eq!(*miner.password.read(), "password".to_string()); assert_eq!(*miner.password.read(), "password".to_string());
} }
@ -193,10 +192,9 @@ fn rpc_parity_set_transactions_limit() {
io.extend_with(parity_set_client(&client, &miner, &updater, &network).to_delegate()); io.extend_with(parity_set_client(&client, &miner, &updater, &network).to_delegate());
let request = r#"{"jsonrpc": "2.0", "method": "parity_setTransactionsLimit", "params":[10240240], "id": 1}"#; let request = r#"{"jsonrpc": "2.0", "method": "parity_setTransactionsLimit", "params":[10240240], "id": 1}"#;
let response = r#"{"jsonrpc":"2.0","result":true,"id":1}"#; let response = r#"{"jsonrpc":"2.0","result":false,"id":1}"#;
assert_eq!(io.handle_request_sync(request), Some(response.to_owned())); assert_eq!(io.handle_request_sync(request), Some(response.to_owned()));
assert_eq!(miner.transactions_limit(), 10_240_240);
} }
#[test] #[test]


@ -220,7 +220,7 @@ fn sign_and_send_test(method: &str) {
assert_eq!(tester.io.handle_request_sync(request.as_ref()), Some(response)); assert_eq!(tester.io.handle_request_sync(request.as_ref()), Some(response));
tester.miner.last_nonces.write().insert(address.clone(), U256::zero()); tester.miner.increment_nonce(&address);
let t = Transaction { let t = Transaction {
nonce: U256::one(), nonce: U256::one(),


@ -327,7 +327,7 @@ fn should_add_sign_transaction_to_the_queue() {
r#"}},"id":1}"#; r#"}},"id":1}"#;
// then // then
tester.miner.last_nonces.write().insert(address.clone(), U256::zero()); tester.miner.increment_nonce(&address);
let promise = tester.io.handle_request(&request); let promise = tester.io.handle_request(&request);
// the future must be polled at least once before request is queued. // the future must be polled at least once before request is queued.


@ -143,7 +143,13 @@ build_rpc_trait! {
#[rpc(name = "parity_pendingTransactions")] #[rpc(name = "parity_pendingTransactions")]
fn pending_transactions(&self) -> Result<Vec<Transaction>>; fn pending_transactions(&self) -> Result<Vec<Transaction>>;
-	/// Returns all future transactions from transaction queue.
+	/// Returns all transactions from transaction queue.
+	///
+	/// Some of them might not be ready to be included in a block yet.
+	#[rpc(name = "parity_allTransactions")]
+	fn all_transactions(&self) -> Result<Vec<Transaction>>;
+
+	/// Returns all future transactions from transaction queue (deprecated)
	#[rpc(name = "parity_futureTransactions")]
	fn future_transactions(&self) -> Result<Vec<Transaction>>;
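From a client's point of view the new endpoint is used like any other `parity_*` call; the snippet below mirrors the raw-request style of the RPC tests in this change, and the `io` handle is assumed test scaffolding rather than shown here:

	// Every transaction currently known to the queue, including ones that are not
	// yet ready for inclusion (for example because of a nonce gap).
	let request = r#"{"jsonrpc": "2.0", "method": "parity_allTransactions", "params": [], "id": 1}"#;
	let all = io.handle_request_sync(request);

	// parity_futureTransactions is kept for compatibility, but the full-node client
	// now answers it with a "deprecated" error pointing at parity_allTransactions.
	let request = r#"{"jsonrpc": "2.0", "method": "parity_futureTransactions", "params": [], "id": 1}"#;
	let deprecated = io.handle_request_sync(request);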


@ -14,12 +14,13 @@
// You should have received a copy of the GNU General Public License // You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>. // along with Parity. If not, see <http://www.gnu.org/licenses/>.
+use std::sync::Arc;
use serde::{Serialize, Serializer};
use serde::ser::SerializeStruct;
-use ethcore::miner;
use ethcore::{contract_address, CreateContractAddress};
+use miner;
use transaction::{LocalizedTransaction, Action, PendingTransaction, SignedTransaction};
-use v1::helpers::errors;
use v1::types::{Bytes, H160, H256, U256, H512, U64, TransactionCondition}; use v1::types::{Bytes, H160, H256, U256, H512, U64, TransactionCondition};
/// Transaction /// Transaction
@ -248,17 +249,23 @@ impl Transaction {
impl LocalTransactionStatus { impl LocalTransactionStatus {
/// Convert `LocalTransactionStatus` into RPC `LocalTransactionStatus`. /// Convert `LocalTransactionStatus` into RPC `LocalTransactionStatus`.
-	pub fn from(s: miner::LocalTransactionStatus, block_number: u64, eip86_transition: u64) -> Self {
-		use ethcore::miner::LocalTransactionStatus::*;
+	pub fn from(s: miner::pool::local_transactions::Status, block_number: u64, eip86_transition: u64) -> Self {
+		let convert = |tx: Arc<miner::pool::VerifiedTransaction>| {
+			Transaction::from_signed(tx.signed().clone(), block_number, eip86_transition)
+		};
+		use miner::pool::local_transactions::Status::*;
		match s {
-			Pending => LocalTransactionStatus::Pending,
-			Future => LocalTransactionStatus::Future,
-			Mined(tx) => LocalTransactionStatus::Mined(Transaction::from_signed(tx, block_number, eip86_transition)),
-			Dropped(tx) => LocalTransactionStatus::Dropped(Transaction::from_signed(tx, block_number, eip86_transition)),
-			Rejected(tx, err) => LocalTransactionStatus::Rejected(Transaction::from_signed(tx, block_number, eip86_transition), errors::transaction_message(err)),
-			Replaced(tx, gas_price, hash) => LocalTransactionStatus::Replaced(Transaction::from_signed(tx, block_number, eip86_transition), gas_price.into(), hash.into()),
-			Invalid(tx) => LocalTransactionStatus::Invalid(Transaction::from_signed(tx, block_number, eip86_transition)),
-			Canceled(tx) => LocalTransactionStatus::Canceled(Transaction::from_pending(tx, block_number, eip86_transition)),
+			Pending(_) => LocalTransactionStatus::Pending,
+			Mined(tx) => LocalTransactionStatus::Mined(convert(tx)),
+			Dropped(tx) => LocalTransactionStatus::Dropped(convert(tx)),
+			Rejected(tx, reason) => LocalTransactionStatus::Rejected(convert(tx), reason),
+			Invalid(tx) => LocalTransactionStatus::Invalid(convert(tx)),
+			Canceled(tx) => LocalTransactionStatus::Canceled(convert(tx)),
+			Replaced { old, new } => LocalTransactionStatus::Replaced(
+				convert(old),
+				new.signed().gas_price.into(),
+				new.signed().hash().into(),
+			),
		}
} }
} }


@ -74,7 +74,7 @@ impl TrustedClient {
let transaction = Transaction { let transaction = Transaction {
nonce: client.latest_nonce(&self.self_key_pair.address()), nonce: client.latest_nonce(&self.self_key_pair.address()),
action: Action::Call(contract), action: Action::Call(contract),
gas: miner.gas_floor_target(), gas: miner.authoring_params().gas_range_target.0,
gas_price: miner.sensible_gas_price(), gas_price: miner.sensible_gas_price(),
value: Default::default(), value: Default::default(),
data: tx_data, data: tx_data,


@ -9,4 +9,5 @@ authors = ["Parity Technologies <admin@parity.io>"]
error-chain = "0.11" error-chain = "0.11"
log = "0.3" log = "0.3"
smallvec = "0.4" smallvec = "0.4"
trace-time = { path = "../util/trace-time" }
ethereum-types = "0.3" ethereum-types = "0.3"


@ -21,17 +21,17 @@ error_chain! {
/// Transaction is already imported /// Transaction is already imported
AlreadyImported(hash: H256) { AlreadyImported(hash: H256) {
description("transaction is already in the pool"), description("transaction is already in the pool"),
-			display("[{:?}] transaction already imported", hash)
+			display("[{:?}] already imported", hash)
		}
		/// Transaction is too cheap to enter the queue
-		TooCheapToEnter(hash: H256) {
+		TooCheapToEnter(hash: H256, min_score: String) {
			description("the pool is full and transaction is too cheap to replace any transaction"),
-			display("[{:?}] transaction too cheap to enter the pool", hash)
+			display("[{:?}] too cheap to enter the pool. Min score: {}", hash, min_score)
		}
		/// Transaction is too cheap to replace existing transaction that occupies the same slot.
		TooCheapToReplace(old_hash: H256, hash: H256) {
			description("transaction is too cheap to replace existing transaction in the pool"),
-			display("[{:?}] transaction too cheap to replace: {:?}", hash, old_hash)
+			display("[{:?}] too cheap to replace: {:?}", hash, old_hash)
} }
} }
} }
@ -43,7 +43,7 @@ impl PartialEq for ErrorKind {
match (self, other) { match (self, other) {
(&AlreadyImported(ref h1), &AlreadyImported(ref h2)) => h1 == h2, (&AlreadyImported(ref h1), &AlreadyImported(ref h2)) => h1 == h2,
(&TooCheapToEnter(ref h1), &TooCheapToEnter(ref h2)) => h1 == h2, (&TooCheapToEnter(ref h1, ref s1), &TooCheapToEnter(ref h2, ref s2)) => h1 == h2 && s1 == s2,
(&TooCheapToReplace(ref old1, ref new1), &TooCheapToReplace(ref old2, ref new2)) => old1 == old2 && new1 == new2, (&TooCheapToReplace(ref old1, ref new1), &TooCheapToReplace(ref old2, ref new2)) => old1 == old2 && new1 == new2,
_ => false, _ => false,
} }
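For illustration, a minimal sketch of how a caller might surface the richer rejection reasons above. It assumes only the `ErrorKind` variants generated by this `error_chain!` block; the `describe` helper itself is hypothetical and not part of the crate.

fn describe(kind: &ErrorKind) -> String {
	match *kind {
		// H256 is Copy, so it can be bound by value; the String needs `ref`.
		ErrorKind::AlreadyImported(hash) => format!("{:?} is already in the pool", hash),
		ErrorKind::TooCheapToEnter(hash, ref min_score) =>
			format!("{:?} is too cheap to enter the pool (minimal score to enter: {})", hash, min_score),
		ErrorKind::TooCheapToReplace(old, new) =>
			format!("{:?} does not pay enough to replace {:?}", new, old),
		// error-chain adds extra variants (e.g. `Msg`); fall back to their Display impl.
		ref other => format!("{}", other),
	}
}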

View File

@@ -76,6 +76,8 @@ extern crate error_chain;
 #[macro_use]
 extern crate log;
+extern crate trace_time;
 #[cfg(test)]
 mod tests;

@@ -90,6 +92,7 @@ mod verifier;
 pub mod scoring;
+pub use self::error::{Error, ErrorKind};
 pub use self::listener::{Listener, NoopListener};
 pub use self::options::Options;
 pub use self::pool::{Pool, PendingIterator};

View File

@@ -15,6 +15,7 @@
 // along with Parity. If not, see <http://www.gnu.org/licenses/>.
 
 use std::sync::Arc;
+use error::ErrorKind;
 
 /// Transaction pool listener.
 ///

@@ -28,16 +29,16 @@ pub trait Listener<T> {
 	/// The transaction was rejected from the pool.
 	/// It means that it was too cheap to replace any transaction already in the pool.
-	fn rejected(&mut self, _tx: T) {}
+	fn rejected(&mut self, _tx: &Arc<T>, _reason: &ErrorKind) {}
 
-	/// The transaction was dropped from the pool because of a limit.
-	fn dropped(&mut self, _tx: &Arc<T>) {}
+	/// The transaction was pushed out from the pool because of the limit.
+	fn dropped(&mut self, _tx: &Arc<T>, _by: Option<&T>) {}
 
 	/// The transaction was marked as invalid by executor.
 	fn invalid(&mut self, _tx: &Arc<T>) {}
 
-	/// The transaction has been cancelled.
-	fn cancelled(&mut self, _tx: &Arc<T>) {}
+	/// The transaction has been canceled.
+	fn canceled(&mut self, _tx: &Arc<T>) {}
 
 	/// The transaction has been mined.
 	fn mined(&mut self, _tx: &Arc<T>) {}

@@ -47,3 +48,38 @@ pub trait Listener<T> {
 #[derive(Debug)]
 pub struct NoopListener;
 impl<T> Listener<T> for NoopListener {}
+
+impl<T, A, B> Listener<T> for (A, B) where
+	A: Listener<T>,
+	B: Listener<T>,
+{
+	fn added(&mut self, tx: &Arc<T>, old: Option<&Arc<T>>) {
+		self.0.added(tx, old);
+		self.1.added(tx, old);
+	}
+
+	fn rejected(&mut self, tx: &Arc<T>, reason: &ErrorKind) {
+		self.0.rejected(tx, reason);
+		self.1.rejected(tx, reason);
+	}
+
+	fn dropped(&mut self, tx: &Arc<T>, by: Option<&T>) {
+		self.0.dropped(tx, by);
+		self.1.dropped(tx, by);
+	}
+
+	fn invalid(&mut self, tx: &Arc<T>) {
+		self.0.invalid(tx);
+		self.1.invalid(tx);
+	}
+
+	fn canceled(&mut self, tx: &Arc<T>) {
+		self.0.canceled(tx);
+		self.1.canceled(tx);
+	}
+
+	fn mined(&mut self, tx: &Arc<T>) {
+		self.0.mined(tx);
+		self.1.mined(tx);
+	}
+}
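As an illustration, a minimal sketch of a custom listener built on the trait above; `RejectionCounter` is a hypothetical example type (not part of the crate) and assumes `Listener`, `NoopListener` and `ErrorKind` are in scope.

use std::sync::Arc;

#[derive(Debug, Default)]
struct RejectionCounter {
	rejected: usize,
}

impl<T> Listener<T> for RejectionCounter {
	fn rejected(&mut self, _tx: &Arc<T>, _reason: &ErrorKind) {
		// Only this hook is overridden; every other notification keeps the
		// default no-op body from the trait.
		self.rejected += 1;
	}
}

// The tuple impl above lets two listeners observe the same pool:
// let listeners = (RejectionCounter::default(), NoopListener);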

View File

@@ -15,7 +15,7 @@
 // along with Parity. If not, see <http://www.gnu.org/licenses/>.
 
 /// Transaction Pool options.
-#[derive(Debug)]
+#[derive(Clone, Debug, PartialEq)]
 pub struct Options {
 	/// Maximal number of transactions in the pool.
 	pub max_count: usize,

View File

@@ -109,17 +109,17 @@ impl<T, S, L> Pool<T, S, L> where
 		ensure!(!self.by_hash.contains_key(transaction.hash()), error::ErrorKind::AlreadyImported(*transaction.hash()));
 
-		// TODO [ToDr] Most likely move this after the transsaction is inserted.
+		// TODO [ToDr] Most likely move this after the transaction is inserted.
 		// Avoid using should_replace, but rather use scoring for that.
 		{
 			let remove_worst = |s: &mut Self, transaction| {
 				match s.remove_worst(&transaction) {
 					Err(err) => {
-						s.listener.rejected(transaction);
+						s.listener.rejected(&Arc::new(transaction), err.kind());
 						Err(err)
 					},
 					Ok(removed) => {
-						s.listener.dropped(&removed);
+						s.listener.dropped(&removed, Some(&transaction));
 						s.finalize_remove(removed.hash());
 						Ok(transaction)
 					},
@@ -127,10 +127,12 @@ impl<T, S, L> Pool<T, S, L> where
 			};
 
 			while self.by_hash.len() + 1 > self.options.max_count {
+				trace!("Count limit reached: {} > {}", self.by_hash.len() + 1, self.options.max_count);
 				transaction = remove_worst(self, transaction)?;
 			}
 
 			while self.mem_usage + mem_usage > self.options.max_mem_usage {
+				trace!("Mem limit reached: {} > {}", self.mem_usage + mem_usage, self.options.max_mem_usage);
 				transaction = remove_worst(self, transaction)?;
 			}
 		}
@@ -160,14 +162,14 @@ impl<T, S, L> Pool<T, S, L> where
 				Ok(new)
 			},
 			AddResult::TooCheap { new, old } => {
-				let hash = *new.hash();
-				self.listener.rejected(new);
-				bail!(error::ErrorKind::TooCheapToReplace(*old.hash(), hash))
+				let error = error::ErrorKind::TooCheapToReplace(*old.hash(), *new.hash());
+				self.listener.rejected(&Arc::new(new), &error);
+				bail!(error)
 			},
-			AddResult::TooCheapToEnter(new) => {
-				let hash = *new.hash();
-				self.listener.rejected(new);
-				bail!(error::ErrorKind::TooCheapToEnter(hash))
+			AddResult::TooCheapToEnter(new, score) => {
+				let error = error::ErrorKind::TooCheapToEnter(*new.hash(), format!("{:?}", score));
+				self.listener.rejected(&Arc::new(new), &error);
+				bail!(error)
 			}
 		}
 	}
@@ -241,14 +243,14 @@ impl<T, S, L> Pool<T, S, L> where
 			// No elements to remove? and the pool is still full?
 			None => {
 				warn!("The pool is full but there are no transactions to remove.");
-				return Err(error::ErrorKind::TooCheapToEnter(*transaction.hash()).into());
+				return Err(error::ErrorKind::TooCheapToEnter(*transaction.hash(), "unknown".into()).into());
 			},
 			Some(old) => if self.scoring.should_replace(&old.transaction, transaction) {
 				// New transaction is better than the worst one so we can replace it.
 				old.clone()
 			} else {
 				// otherwise fail
-				return Err(error::ErrorKind::TooCheapToEnter(*transaction.hash()).into())
+				return Err(error::ErrorKind::TooCheapToEnter(*transaction.hash(), format!("{:?}", old.score)).into())
 			},
 		};
@@ -256,6 +258,7 @@ impl<T, S, L> Pool<T, S, L> where
 		self.remove_from_set(to_remove.transaction.sender(), |set, scoring| {
 			set.remove(&to_remove.transaction, scoring)
 		});
+
 		Ok(to_remove.transaction)
 	}
@@ -283,7 +286,7 @@ impl<T, S, L> Pool<T, S, L> where
 		self.worst_transactions.clear();
 		for (_hash, tx) in self.by_hash.drain() {
-			self.listener.dropped(&tx)
+			self.listener.dropped(&tx, None)
 		}
 	}
@@ -298,7 +301,7 @@ impl<T, S, L> Pool<T, S, L> where
 			if is_invalid {
 				self.listener.invalid(&tx);
 			} else {
-				self.listener.cancelled(&tx);
+				self.listener.canceled(&tx);
 			}
 			Some(tx)
 		} else {
@@ -345,6 +348,16 @@ impl<T, S, L> Pool<T, S, L> where
 		removed
 	}
 
+	/// Returns a transaction if it's part of the pool or `None` otherwise.
+	pub fn find(&self, hash: &H256) -> Option<Arc<T>> {
+		self.by_hash.get(hash).cloned()
+	}
+
+	/// Returns worst transaction in the queue (if any).
+	pub fn worst_transaction(&self) -> Option<Arc<T>> {
+		self.worst_transactions.iter().next().map(|x| x.transaction.clone())
+	}
+
 	/// Returns an iterator of pending (ready) transactions.
 	pub fn pending<R: Ready<T>>(&self, ready: R) -> PendingIterator<T, R, S, L> {
 		PendingIterator {
@@ -354,6 +367,41 @@ impl<T, S, L> Pool<T, S, L> where
 		}
 	}
 
+	/// Returns pending (ready) transactions from given sender.
+	pub fn pending_from_sender<R: Ready<T>>(&self, ready: R, sender: &Sender) -> PendingIterator<T, R, S, L> {
+		let best_transactions = self.transactions.get(sender)
+			.and_then(|transactions| transactions.worst_and_best())
+			.map(|(_, best)| ScoreWithRef::new(best.0, best.1))
+			.map(|s| {
+				let mut set = BTreeSet::new();
+				set.insert(s);
+				set
+			})
+			.unwrap_or_default();
+
+		PendingIterator {
+			ready,
+			best_transactions,
+			pool: self
+		}
+	}
+
+	/// Update score of transactions of a particular sender.
+	pub fn update_scores(&mut self, sender: &Sender, event: S::Event) {
+		let res = if let Some(set) = self.transactions.get_mut(sender) {
+			let prev = set.worst_and_best();
+			set.update_scores(&self.scoring, event);
+			let current = set.worst_and_best();
+			Some((prev, current))
+		} else {
+			None
+		};
+
+		if let Some((prev, current)) = res {
+			self.update_senders_worst_and_best(prev, current);
+		}
+	}
+
 	/// Computes the full status of the pool (including readiness).
 	pub fn status<R: Ready<T>>(&self, mut ready: R) -> Status {
 		let mut status = Status::default();
@@ -383,6 +431,21 @@ impl<T, S, L> Pool<T, S, L> where
 			senders: self.transactions.len(),
 		}
 	}
+
+	/// Returns current pool options.
+	pub fn options(&self) -> Options {
+		self.options.clone()
+	}
+
+	/// Borrows listener instance.
+	pub fn listener(&self) -> &L {
+		&self.listener
+	}
+
+	/// Borrows listener mutably.
+	pub fn listener_mut(&mut self) -> &mut L {
+		&mut self.listener
+	}
 }
 
 /// An iterator over all pending (ready) transactions.
@@ -424,7 +487,7 @@ impl<'a, T, R, S, L> Iterator for PendingIterator<'a, T, R, S, L> where
 					return Some(best.transaction)
 				},
-				state => warn!("[{:?}] Ignoring {:?} transaction.", best.transaction.hash(), state),
+				state => trace!("[{:?}] Ignoring {:?} transaction.", best.transaction.hash(), state),
 			}
 		}
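For illustration, a hedged sketch of how the new query helpers might be used by an embedder. The `pool`, `hash`, `sender` bindings and the `MyReady` readiness implementation are assumptions of the example, not APIs defined in this file.

// Look a transaction up by hash; `find` returns a shared handle if it is still pooled.
if let Some(tx) = pool.find(&hash) {
	println!("still pooled: {:?}", tx.hash());
}

// The cheapest candidate for eviction, if the pool holds anything at all.
let _worst = pool.worst_transaction();

// Iterate only over the ready transactions of a single sender.
for tx in pool.pending_from_sender(MyReady::default(), &sender) {
	println!("pending from {:?}: {:?}", sender, tx.hash());
}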

View File

@@ -42,7 +42,7 @@ pub enum Choice {
 /// The `Scoring` implementations can use this information
 /// to update the `Score` table more efficiently.
 #[derive(Debug, Clone, Copy, PartialEq, Eq)]
-pub enum Change {
+pub enum Change<T = ()> {
 	/// New transaction has been inserted at given index.
 	/// The Score at that index is initialized with default value
 	/// and needs to be filled in.

@@ -56,8 +56,12 @@ pub enum Change {
 	/// The score at that index needs to be update (it contains value from previous transaction).
 	ReplacedAt(usize),
 	/// Given number of stalled transactions has been culled from the beginning.
-	/// Usually the score will have to be re-computed from scratch.
+	/// The scores has been removed from the beginning as well.
+	/// For simple scoring algorithms no action is required here.
 	Culled(usize),
+	/// Custom event to update the score triggered outside of the pool.
+	/// Handling this event is up to scoring implementation.
+	Event(T),
 }
 
 /// A transaction ordering.

@@ -69,7 +73,7 @@ pub enum Change {
 /// Implementation notes:
 /// - Returned `Score`s should match ordering of `compare` method.
 /// - `compare` will be called only within a context of transactions from the same sender.
-/// - `choose` will be called only if `compare` returns `Ordering::Equal`
+/// - `choose` may be called even if `compare` returns `Ordering::Equal`
 /// - `should_replace` is used to decide if new transaction should push out an old transaction already in the queue.
 /// - `Score`s and `compare` should align with `Ready` implementation.
 ///

@@ -79,9 +83,11 @@ pub enum Change {
 /// - `update_scores`: score defined as `gasPrice` if `n==0` and `max(scores[n-1], gasPrice)` if `n>0`
 /// - `should_replace`: compares `gasPrice` (decides if transaction from a different sender is more valuable)
 ///
-pub trait Scoring<T> {
+pub trait Scoring<T>: fmt::Debug {
 	/// A score of a transaction.
 	type Score: cmp::Ord + Clone + Default + fmt::Debug;
+	/// Custom scoring update event type.
+	type Event: fmt::Debug;
 
 	/// Decides on ordering of `T`s from a particular sender.
 	fn compare(&self, old: &T, other: &T) -> cmp::Ordering;

@@ -92,7 +98,7 @@ pub trait Scoring<T> {
 	/// Updates the transaction scores given a list of transactions and a change to previous scoring.
 	/// NOTE: you can safely assume that both slices have the same length.
 	/// (i.e. score at index `i` represents transaction at the same index)
-	fn update_scores(&self, txs: &[Arc<T>], scores: &mut [Self::Score], change: Change);
+	fn update_scores(&self, txs: &[Arc<T>], scores: &mut [Self::Score], change: Change<Self::Event>);
 
 	/// Decides if `new` should push out `old` transaction from the pool.
 	fn should_replace(&self, old: &T, new: &T) -> bool;
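As a small sketch of the extended `Change` enum: the hypothetical `describe_change` helper below (not part of the crate) only shows how an implementation can branch on the variants, including the new `Event` one; because `Change` defaults its type parameter to `()`, scorings without custom events keep writing plain `Change` in their signatures.

use std::fmt;

// Hypothetical helper: render a scoring change for tracing. Works with any
// event type that implements Debug, mirroring the `Scoring::Event` bound.
fn describe_change<T: fmt::Debug>(change: &Change<T>) -> String {
	match *change {
		Change::Culled(count) => format!("culled {} stalled transactions", count),
		Change::ReplacedAt(index) => format!("replaced transaction at index {}", index),
		Change::Event(ref event) => format!("custom scoring event: {:?}", event),
		// Remaining variants are covered by the derived Debug impl.
		ref other => format!("{:?}", other),
	}
}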

View File

@@ -16,7 +16,7 @@
 
 /// Light pool status.
 /// This status is cheap to compute and can be called frequently.
-#[derive(Default, Debug, PartialEq, Eq)]
+#[derive(Default, Debug, Clone, PartialEq, Eq)]
 pub struct LightStatus {
 	/// Memory usage in bytes.
 	pub mem_usage: usize,

@@ -29,7 +29,7 @@ pub struct LightStatus {
 /// A full queue status.
 /// To compute this status it is required to provide `Ready`.
 /// NOTE: To compute the status we need to visit each transaction in the pool.
-#[derive(Default, Debug, PartialEq, Eq)]
+#[derive(Default, Debug, Clone, PartialEq, Eq)]
 pub struct Status {
 	/// Number of stalled transactions.
 	pub stalled: usize,

View File

@@ -21,11 +21,12 @@ use ethereum_types::U256;
 use {scoring, Scoring, Ready, Readiness, Address as Sender};
 use super::{Transaction, SharedTransaction};
 
-#[derive(Default)]
+#[derive(Debug, Default)]
 pub struct DummyScoring;
 
 impl Scoring<Transaction> for DummyScoring {
 	type Score = U256;
+	type Event = ();
 
 	fn compare(&self, old: &Transaction, new: &Transaction) -> cmp::Ordering {
 		old.nonce.cmp(&new.nonce)

@@ -43,11 +44,19 @@ impl Scoring<Transaction> for DummyScoring {
 		}
 	}
 
-	fn update_scores(&self, txs: &[SharedTransaction], scores: &mut [Self::Score], _change: scoring::Change) {
+	fn update_scores(&self, txs: &[SharedTransaction], scores: &mut [Self::Score], change: scoring::Change) {
+		if let scoring::Change::Event(_) = change {
+			// In case of event reset all scores to 0
+			for i in 0..txs.len() {
+				scores[i] = 0.into();
+			}
+		} else {
+			// Set to a gas price otherwise
 			for i in 0..txs.len() {
 				scores[i] = txs[i].gas_price;
 			}
+		}
 	}
 
 	fn should_replace(&self, old: &Transaction, new: &Transaction) -> bool {
 		new.gas_price > old.gas_price
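For illustration, a hedged sketch of how this event path might be exercised from the pool side; the `pool` and `sender` bindings are assumptions of the example (e.g. a test fixture built over `DummyScoring`).

// `Pool::update_scores` looks up the sender's transactions and forwards the
// event to the scoring as `scoring::Change::Event(())`; with DummyScoring
// that resets every score of this sender to zero.
pool.update_scores(&sender, ());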
