openethereum/parity/informant.rs
André Silva cc44ae9cb5 [beta] Backports (#8916)
* `duration_ns: u64 -> duration: Duration` (#8457)

* duration_ns: u64 -> duration: Duration

* format on millis {:.2} -> {}

* Keep all enacted blocks notify in order (#8524)

* Keep all enacted blocks notify in order

* Collect is unnecessary

* Update ChainNotify to use ChainRouteType

* Fix all ethcore fn defs

* Wrap the type within ChainRoute

* Fix private-tx and sync api

* Fix secret_store API

* Fix updater API

* Fix rpc api

* Fix informant api

* Eagerly cache enacted/retracted and remove contain_enacted/retracted

* Fix indent

* tests: should use full expr form for struct constructor

* Use into_enacted_retracted to further avoid copy

* typo: not a function

* rpc/tests: ChainRoute -> ChainRoute::new

* Handle removed logs in filter changes and add geth compatibility field (#8796)

* Add removed geth compatibility field in log

* Fix mocked tests

* Add field block hash in PollFilter

* Store last block hash info for log filters

* Implement canon route

* Use canon logs for fetching reorg logs

Fetching removed logs is disabled for the light client, as it looks expensive.

* Make sure removed flag is set

* Address grumbles

* Fixed AuthorityRound deadlock on shutdown, closes #8088 (#8803)

* CI: Fix docker tags (#8822)

* scripts: enable docker builds for beta and stable

* scripts: docker latest should be beta not master

* scripts: docker latest is master

* ethcore: fix ancient block error msg handling (#8832)

* Disable parallel verification and skip verifying already imported txs. (#8834)

* Reject transactions that are already in pool without verifying them.

* Avoid verifying already imported transactions.

* Fix concurrent access to signer queue (#8854)

* Fix concurrent access to signer queue

* Put request back to the queue if confirmation failed

* typo: fix docs and rename functions to be more specific

`request_notify` does not need to be public, and it's renamed to `notify_result`.
`notify` is renamed to `notify_message`.

* Change trace info "Transaction" -> "Request"

* Don't allocate in expect_valid_rlp unless necessary (#8867)

* don't allocate via format! in case there's no error

* fix test?

* fixed ipc leak, closes #8774 (#8876)

* Add new ovh bootnodes and fix port for foundation bootnode 3.2 (#8886)

* Add new ovh bootnodes and fix port for foundation bootnode 3.2

* Remove old bootnodes.

* Remove duplicate 1118980bf48b0a3640bdba04e0fe78b1add18e1cd99bf22d53daac1fd9972ad650df52176e7c7d89d1114cfef2bc23a2959aa54998a46afcf7d91809f0855082

* Block 0 is valid in queries (#8891)

Early exit for block nr 0 leads to spurious error about pruning: `…your node is running with state pruning…`.

Fixes #7547, #8762

* Add ETC Cooperative-run load balanced parity node (#8892)

* Minor fix in chain supplier and light provider (#8906)

* fix chain supplier increment

* fix light provider block_headers

* Check whether we need resealing in miner and unwrap has_account in account_provider (#8853)

* Remove unused Result wrap in has_account

* Check whether we need to reseal for external transactions

* Fix reference to has_account interface

* typo: missing )

* Refactor duplicates to prepare_and_update_sealing

* Fix build

* Allow disabling local-by-default for transactions with new config entry (#8882)

* Add tx_queue_allow_unknown_local config option

- Previous commit messages:

dispatcher checks if we have the sender account

Add `tx_queue_allow_unknown_local` to MinerOptions

Add `tx_queue_allow_unknown_local` to config

fix order in MinerOptions to match Configuration

add cli flag for tx_queue_allow_unknown_local

Update refs to `tx_queue_allow_unknown_local`

Add tx_queue_allow_unknown_local to config test

revert changes to dispatcher

Move tx_queue_allow_unknown_local to `import_own_transaction`

Fix var name

if statement should return the values

derp de derp derp derp semicolons

Reset dispatch file to how it was before

fix compile issues + change from FLAG to ARG

add test and use `into`

import MinerOptions, clone the secret

Fix tests?

Compiler/linter issues fixed

Fix linter msg - case of constants

IT LIVES

refactor to omit yucky explicit return

update comments

Fix based on diff AccountProvider.has_account method

* Refactor flag name + don't change import_own_tx behaviour

fix arg name

Note: force commit to try and get gitlab tests working again 😠

* Add fn to TestMinerService

* Avoid race condition from trusted sources

- refactor the miner tests a bit to cut down on code duplication
- add `trusted` param to dispatch_transaction and import_claimed_local_transaction

Add param to `import_claimed_local_transaction`

Fix fn sig in tests
2018-06-19 10:41:14 +02:00


// Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
extern crate ansi_term;
use self::ansi_term::Colour::{White, Yellow, Green, Cyan, Blue};
use self::ansi_term::{Colour, Style};
use std::sync::{Arc};
use std::sync::atomic::{AtomicUsize, AtomicBool, Ordering as AtomicOrdering};
use std::time::{Instant, Duration};
use atty;
use ethcore::client::{
	BlockId, BlockChainClient, ChainInfo, BlockInfo, BlockChainInfo,
	BlockQueueInfo, ChainNotify, ChainRoute, ClientReport, Client, ClientIoMessage
};
use ethcore::header::BlockNumber;
use ethcore::snapshot::{RestorationStatus, SnapshotService as SS};
use ethcore::snapshot::service::Service as SnapshotService;
use sync::{LightSyncProvider, LightSync, SyncProvider, ManageNetwork};
use io::{TimerToken, IoContext, IoHandler};
use light::Cache as LightDataCache;
use light::client::{LightChainClient, LightChainNotify};
use number_prefix::{binary_prefix, Standalone, Prefixed};
use parity_rpc::{is_major_importing};
use parity_rpc::informant::RpcStats;
use ethereum_types::H256;
use bytes::Bytes;
use parking_lot::{RwLock, Mutex};

/// Format byte counts to standard denominations.
pub fn format_bytes(b: usize) -> String {
	match binary_prefix(b as f64) {
		Standalone(bytes) => format!("{} bytes", bytes),
		Prefixed(prefix, n) => format!("{:.0} {}B", n, prefix),
	}
}

/// Something that can be converted to milliseconds.
pub trait MillisecondDuration {
	/// Get the value in milliseconds.
	fn as_milliseconds(&self) -> u64;
}

impl MillisecondDuration for Duration {
	fn as_milliseconds(&self) -> u64 {
		self.as_secs() * 1000 + self.subsec_nanos() as u64 / 1_000_000
	}
}

#[derive(Default)]
struct CacheSizes {
	sizes: ::std::collections::BTreeMap<&'static str, usize>,
}

impl CacheSizes {
	fn insert(&mut self, key: &'static str, bytes: usize) {
		self.sizes.insert(key, bytes);
	}

	fn display<F>(&self, style: Style, paint: F) -> String
		where F: Fn(Style, String) -> String
	{
		use std::fmt::Write;

		let mut buf = String::new();
		for (name, &size) in &self.sizes {
			write!(buf, " {:>8} {}", paint(style, format_bytes(size)), name)
				.expect("writing to string won't fail unless OOM; qed")
		}

		buf
	}
}
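
/// Snapshot of sync status used when building a `Report`.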
pub struct SyncInfo {
	last_imported_block_number: BlockNumber,
	last_imported_old_block_number: Option<BlockNumber>,
	num_peers: usize,
	max_peers: u32,
	snapshot_sync: bool,
}
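
/// Full node status report (chain, queue, caches, sync) assembled on each informant tick.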
pub struct Report {
	importing: bool,
	chain_info: BlockChainInfo,
	client_report: ClientReport,
	queue_info: BlockQueueInfo,
	cache_sizes: CacheSizes,
	sync_info: Option<SyncInfo>,
}

/// Something which can provide data to the informant.
pub trait InformantData: Send + Sync {
	/// Whether it executes transactions
	fn executes_transactions(&self) -> bool;

	/// Whether it is currently importing (also included in `Report`)
	fn is_major_importing(&self) -> bool;

	/// Generate a report of blockchain status, memory usage, and sync info.
	fn report(&self) -> Report;
}

/// Informant data for a full node.
pub struct FullNodeInformantData {
	pub client: Arc<Client>,
	pub sync: Option<Arc<SyncProvider>>,
	pub net: Option<Arc<ManageNetwork>>,
}

impl InformantData for FullNodeInformantData {
	fn executes_transactions(&self) -> bool { true }

	fn is_major_importing(&self) -> bool {
		let state = self.sync.as_ref().map(|sync| sync.status().state);
		is_major_importing(state, self.client.queue_info())
	}

	fn report(&self) -> Report {
		let (client_report, queue_info, blockchain_cache_info) =
			(self.client.report(), self.client.queue_info(), self.client.blockchain_cache_info());

		let chain_info = self.client.chain_info();

		let mut cache_sizes = CacheSizes::default();
		cache_sizes.insert("db", client_report.state_db_mem);
		cache_sizes.insert("queue", queue_info.mem_used);
		cache_sizes.insert("chain", blockchain_cache_info.total());

		let (importing, sync_info) = match (self.sync.as_ref(), self.net.as_ref()) {
			(Some(sync), Some(net)) => {
				let status = sync.status();
				let net_config = net.network_config();

				cache_sizes.insert("sync", status.mem_used);

				let importing = is_major_importing(Some(status.state), queue_info.clone());
				(importing, Some(SyncInfo {
					last_imported_block_number: status.last_imported_block_number.unwrap_or(chain_info.best_block_number),
					last_imported_old_block_number: status.last_imported_old_block_number,
					num_peers: status.num_peers,
					max_peers: status.current_max_peers(net_config.min_peers, net_config.max_peers),
					snapshot_sync: status.is_snapshot_syncing(),
				}))
			}
			_ => (is_major_importing(self.sync.as_ref().map(|s| s.status().state), queue_info.clone()), None),
		};

		Report {
			importing,
			chain_info,
			client_report,
			queue_info,
			cache_sizes,
			sync_info,
		}
	}
}

/// Informant data for a light node -- note that the network is required.
pub struct LightNodeInformantData {
	pub client: Arc<LightChainClient>,
	pub sync: Arc<LightSync>,
	pub cache: Arc<Mutex<LightDataCache>>,
}

impl InformantData for LightNodeInformantData {
	fn executes_transactions(&self) -> bool { false }

	fn is_major_importing(&self) -> bool {
		self.sync.is_major_importing()
	}

	fn report(&self) -> Report {
		let (client_report, queue_info, chain_info) =
			(self.client.report(), self.client.queue_info(), self.client.chain_info());

		let mut cache_sizes = CacheSizes::default();
		cache_sizes.insert("queue", queue_info.mem_used);
		cache_sizes.insert("cache", self.cache.lock().mem_used());

		let peer_numbers = self.sync.peer_numbers();
		let sync_info = Some(SyncInfo {
			last_imported_block_number: chain_info.best_block_number,
			last_imported_old_block_number: None,
			num_peers: peer_numbers.connected,
			max_peers: peer_numbers.max as u32,
			snapshot_sync: false,
		});

		Report {
			importing: self.sync.is_major_importing(),
			chain_info,
			client_report,
			queue_info,
			cache_sizes,
			sync_info,
		}
	}
}
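
/// Informant: prints a periodic status line and logs newly imported blocks or headers.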
pub struct Informant<T> {
	last_tick: RwLock<Instant>,
	with_color: bool,
	target: T,
	snapshot: Option<Arc<SnapshotService>>,
	rpc_stats: Option<Arc<RpcStats>>,
	last_import: Mutex<Instant>,
	skipped: AtomicUsize,
	skipped_txs: AtomicUsize,
	in_shutdown: AtomicBool,
	last_report: Mutex<ClientReport>,
}

impl<T: InformantData> Informant<T> {
	/// Make a new instance, potentially with colored (`with_color`) output.
	pub fn new(
		target: T,
		snapshot: Option<Arc<SnapshotService>>,
		rpc_stats: Option<Arc<RpcStats>>,
		with_color: bool,
	) -> Self {
		Informant {
			last_tick: RwLock::new(Instant::now()),
			with_color: with_color,
			target: target,
			snapshot: snapshot,
			rpc_stats: rpc_stats,
			last_import: Mutex::new(Instant::now()),
			skipped: AtomicUsize::new(0),
			skipped_txs: AtomicUsize::new(0),
			in_shutdown: AtomicBool::new(false),
			last_report: Mutex::new(Default::default()),
		}
	}

	/// Signal that we're shutting down; no more output necessary.
	pub fn shutdown(&self) {
		self.in_shutdown.store(true, ::std::sync::atomic::Ordering::SeqCst);
	}

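	/// Print a status line if enough time has elapsed since the last one.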
	pub fn tick(&self) {
		let elapsed = self.last_tick.read().elapsed();
		if elapsed < Duration::from_secs(5) {
			return;
		}

		let (client_report, full_report) = {
			let mut last_report = self.last_report.lock();
			let full_report = self.target.report();
			let diffed = full_report.client_report.clone() - &*last_report;
			*last_report = full_report.client_report.clone();
			(diffed, full_report)
		};

		let Report {
			importing,
			chain_info,
			queue_info,
			cache_sizes,
			sync_info,
			..
		} = full_report;

		let rpc_stats = self.rpc_stats.as_ref();
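
		// Count as snapshot-syncing only if the sync layer reports it and a snapshot
		// restoration is actually initializing or ongoing.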
		let snapshot_sync = sync_info.as_ref().map_or(false, |s| s.snapshot_sync) && self.snapshot.as_ref().map_or(false, |s|
			match s.status() {
				RestorationStatus::Ongoing { .. } | RestorationStatus::Initializing { .. } => true,
				_ => false,
			}
		);
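
		// When neither importing nor snapshot-syncing, only report every 30 seconds.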
		if !importing && !snapshot_sync && elapsed < Duration::from_secs(30) {
			return;
		}

		*self.last_tick.write() = Instant::now();

		let paint = |c: Style, t: String| match self.with_color && atty::is(atty::Stream::Stdout) {
			true => format!("{}", c.paint(t)),
			false => t,
		};

		info!(target: "import", "{} {} {} {}",
			match importing {
				true => match snapshot_sync {
					false => format!("Syncing {} {} {} {}+{} Qed",
						paint(White.bold(), format!("{:>8}", format!("#{}", chain_info.best_block_number))),
						paint(White.bold(), format!("{}", chain_info.best_block_hash)),
						if self.target.executes_transactions() {
							format!("{} blk/s {} tx/s {} Mgas/s",
								paint(Yellow.bold(), format!("{:4}", (client_report.blocks_imported * 1000) as u64 / elapsed.as_milliseconds())),
								paint(Yellow.bold(), format!("{:4}", (client_report.transactions_applied * 1000) as u64 / elapsed.as_milliseconds())),
								paint(Yellow.bold(), format!("{:3}", (client_report.gas_processed / From::from(elapsed.as_milliseconds() * 1000)).low_u64()))
							)
						} else {
							format!("{} hdr/s",
								paint(Yellow.bold(), format!("{:4}", (client_report.blocks_imported * 1000) as u64 / elapsed.as_milliseconds()))
							)
						},
						paint(Green.bold(), format!("{:5}", queue_info.unverified_queue_size)),
						paint(Green.bold(), format!("{:5}", queue_info.verified_queue_size))
					),
					true => {
						self.snapshot.as_ref().map_or(String::new(), |s|
							match s.status() {
								RestorationStatus::Ongoing { state_chunks, block_chunks, state_chunks_done, block_chunks_done } => {
									format!("Syncing snapshot {}/{}", state_chunks_done + block_chunks_done, state_chunks + block_chunks)
								},
								RestorationStatus::Initializing { chunks_done } => {
									format!("Snapshot initializing ({} chunks restored)", chunks_done)
								},
								_ => String::new(),
							}
						)
					},
				},
				false => String::new(),
			},
			match sync_info.as_ref() {
				Some(ref sync_info) => format!("{}{}/{} peers",
					match importing {
						true => format!("{} ", paint(Green.bold(), format!("{:>8}", format!("#{}", sync_info.last_imported_block_number)))),
						false => match sync_info.last_imported_old_block_number {
							Some(number) => format!("{} ", paint(Yellow.bold(), format!("{:>8}", format!("#{}", number)))),
							None => String::new(),
						}
					},
					paint(Cyan.bold(), format!("{:2}", sync_info.num_peers)),
					paint(Cyan.bold(), format!("{:2}", sync_info.max_peers)),
				),
				_ => String::new(),
			},
			cache_sizes.display(Blue.bold(), &paint),
			match rpc_stats {
				Some(ref rpc_stats) => format!(
					"RPC: {} conn, {} req/s, {} µs",
					paint(Blue.bold(), format!("{:2}", rpc_stats.sessions())),
					paint(Blue.bold(), format!("{:2}", rpc_stats.requests_rate())),
					paint(Blue.bold(), format!("{:3}", rpc_stats.approximated_roundtrip())),
				),
				_ => String::new(),
			},
		);
	}
}

impl ChainNotify for Informant<FullNodeInformantData> {
	fn new_blocks(&self, imported: Vec<H256>, _invalid: Vec<H256>, _route: ChainRoute, _sealed: Vec<H256>, _proposed: Vec<Bytes>, duration: Duration) {
		let mut last_import = self.last_import.lock();
		let client = &self.target.client;

		let importing = self.target.is_major_importing();
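
		// Print a detailed import line at most once per second and only outside of a
		// major sync; otherwise just accumulate skipped block/transaction counts.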
		let ripe = Instant::now() > *last_import + Duration::from_secs(1) && !importing;
		let txs_imported = imported.iter()
			.take(imported.len().saturating_sub(if ripe { 1 } else { 0 }))
			.filter_map(|h| client.block(BlockId::Hash(*h)))
			.map(|b| b.transactions_count())
			.sum();

		if ripe {
			if let Some(block) = imported.last().and_then(|h| client.block(BlockId::Hash(*h))) {
				let header_view = block.header_view();
				let size = block.rlp().as_raw().len();
				let (skipped, skipped_txs) = (self.skipped.load(AtomicOrdering::Relaxed) + imported.len() - 1, self.skipped_txs.load(AtomicOrdering::Relaxed) + txs_imported);
				info!(target: "import", "Imported {} {} ({} txs, {} Mgas, {} ms, {} KiB){}",
					Colour::White.bold().paint(format!("#{}", header_view.number())),
					Colour::White.bold().paint(format!("{}", header_view.hash())),
					Colour::Yellow.bold().paint(format!("{}", block.transactions_count())),
					Colour::Yellow.bold().paint(format!("{:.2}", header_view.gas_used().low_u64() as f32 / 1000000f32)),
					Colour::Purple.bold().paint(format!("{}", duration.as_milliseconds())),
					Colour::Blue.bold().paint(format!("{:.2}", size as f32 / 1024f32)),
					if skipped > 0 {
						format!(" + another {} block(s) containing {} tx(s)",
							Colour::Red.bold().paint(format!("{}", skipped)),
							Colour::Red.bold().paint(format!("{}", skipped_txs))
						)
					} else {
						String::new()
					}
				);
				self.skipped.store(0, AtomicOrdering::Relaxed);
				self.skipped_txs.store(0, AtomicOrdering::Relaxed);
				*last_import = Instant::now();
			}
		} else {
			self.skipped.fetch_add(imported.len(), AtomicOrdering::Relaxed);
			self.skipped_txs.fetch_add(txs_imported, AtomicOrdering::Relaxed);
		}
	}
}

impl LightChainNotify for Informant<LightNodeInformantData> {
	fn new_headers(&self, good: &[H256]) {
		let mut last_import = self.last_import.lock();
		let client = &self.target.client;

		let importing = self.target.is_major_importing();
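
		// Same throttling as the full-node notifier: log at most once per second and
		// not during a major sync.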
		let ripe = Instant::now() > *last_import + Duration::from_secs(1) && !importing;

		if ripe {
			if let Some(header) = good.last().and_then(|h| client.block_header(BlockId::Hash(*h))) {
				info!(target: "import", "Imported {} {} ({} Mgas){}",
					Colour::White.bold().paint(format!("#{}", header.number())),
					Colour::White.bold().paint(format!("{}", header.hash())),
					Colour::Yellow.bold().paint(format!("{:.2}", header.gas_used().low_u64() as f32 / 1000000f32)),
					if good.len() > 1 {
						format!(" + another {} header(s)",
							Colour::Red.bold().paint(format!("{}", good.len() - 1)))
					} else {
						String::new()
					}
				);

				*last_import = Instant::now();
			}
		}
	}
}

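/// Timer token used for the periodic status tick.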
const INFO_TIMER: TimerToken = 0;

impl<T: InformantData> IoHandler<ClientIoMessage> for Informant<T> {
	fn initialize(&self, io: &IoContext<ClientIoMessage>) {
		io.register_timer(INFO_TIMER, Duration::from_secs(5)).expect("Error registering timer");
	}

	fn timeout(&self, _io: &IoContext<ClientIoMessage>, timer: TimerToken) {
		if timer == INFO_TIMER && !self.in_shutdown.load(AtomicOrdering::SeqCst) {
			self.tick();
		}
	}
}