* feat(parity-clib asynchronous rpc queries)
* feat(separate bindings for ws and rpc)
* Subscribing to websockets for the full-client works
* feat(c binding unsubscribe_from_websocket)
* fix(tests): tweak CMake build config
* Enforce C++11
* refactor(parity-cpp-example): `cpp`-ify
* fix(typedefs): revert typedefs in parity-clib
* docs(nits)
* fix(simplify websocket_unsubscribe)
* refactor(cpp example): more subscriptions
* fix(callback type): address grumbles on the callback
* Use it in the example to avoid global variables
* docs(nits) - don't mention `arc`
* fix(jni bindings): fix compile errors
* feat(java example and updated java bindings)
* fix(java example): run both the full and light client
* fix(Java shutdown): unsubscribe from sessions
The JNIEnv environment had not been passed, which is required since it is an instance method
* feat(return valid JString)
* Remove Java dependency by constructing a valid Java String in the callback
* fix(logger) : remove `rpc` trace log
* fix(format)
* fix(parity-clib): remove needless callback `type`
* fix(parity-clib-examples): update examples
* the `cpp` example passes in a struct instead, to determine the `callback kind` (sketched below)
* the `java` example adds an instance variable to the `Callback` class to determine the `callback kind`
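For context, a minimal sketch of the struct-tag approach — the names, the `kind` encoding, and the exact callback signature are assumptions, not the actual parity-clib API:

```rust
use std::os::raw::{c_char, c_void};

// Hypothetical tag passed as the opaque user-data pointer, so that one
// callback function can tell which kind of response it is handling.
#[repr(C)]
struct CallbackKind {
    kind: u8, // e.g. 0 = RPC response, 1 = websocket message (assumed encoding)
}

// Assumed FFI callback shape: (user data, response bytes, length).
unsafe extern "C" fn on_response(user_data: *mut c_void, response: *const c_char, len: usize) {
    let tag = &*(user_data as *const CallbackKind);
    let bytes = std::slice::from_raw_parts(response as *const u8, len);
    let text = String::from_utf8_lossy(bytes);
    match tag.kind {
        0 => println!("rpc response: {}", text),
        _ => println!("websocket message: {}", text),
    }
}
```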
* fix(review comments): docs and format
* Update parity-clib/src/java.rs
Co-Authored-By: niklasad1 <niklasadolfsson1@gmail.com>
* fix(bad merge + spelling)
* fix(move examples to parity-clib/examples)
Fix: new block notifications sometimes missing in pubsub RPC
Implement a new struct to pass to `new_blocks()` with an extra parameter, `has_more_blocks_to_import`, which was previously used to decide whether the notification should be sent at all. Now it's up to each implementation to decide what to do.
Update all implementations to behave as before, except `eth_pubsub`, which will send the notification even when the queue is not empty.
Update tests.
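A minimal sketch of the shape such a struct could take; only `has_more_blocks_to_import` comes from this description, the other names are illustrative:

```rust
/// Sketch of the payload passed to `new_blocks()`; everything except
/// `has_more_blocks_to_import` is an illustrative placeholder.
pub struct NewBlocks {
    /// Hashes of the newly imported blocks (placeholder type).
    pub imported: Vec<[u8; 32]>,
    /// True while the verification queue still holds blocks to import;
    /// each listener now decides for itself what to do with it.
    pub has_more_blocks_to_import: bool,
}

/// Example listener keeping the pre-change behavior: skip notifications
/// while more blocks are queued (eth_pubsub, by contrast, always notifies).
fn on_new_blocks(new_blocks: &NewBlocks) {
    if new_blocks.has_more_blocks_to_import {
        return; // wait until the queue is drained
    }
    for _hash in &new_blocks.imported {
        // ... send a notification per imported block ...
    }
}
```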
* Add `is_idle` to LightSync to check importing status
* Use SyncStateWrapper to make sure is_idle gets updated
* Update is_major_import to use verified queue size as well
* Add comment for `is_idle`
* Add Debug to `SyncStateWrapper`
* `fn get` -> `fn into_inner`
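A hedged sketch of the wrapper idea; apart from `SyncStateWrapper`, `is_idle`, and `into_inner`, the names and representation here are assumptions:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

/// Illustrative subset of sync states.
#[derive(Debug, Clone, Copy, PartialEq)]
enum SyncState { Idle, Rounds }

/// Wraps the sync state so that every mutation also refreshes `is_idle`.
#[derive(Debug)]
struct SyncStateWrapper {
    state: SyncState,
    is_idle: Arc<AtomicBool>,
}

impl SyncStateWrapper {
    fn set(&mut self, state: SyncState) {
        self.is_idle.store(state == SyncState::Idle, Ordering::SeqCst);
        self.state = state;
    }

    /// Consumes the wrapper, yielding the inner state
    /// (the `fn get` -> `fn into_inner` rename above).
    fn into_inner(self) -> SyncState {
        self.state
    }
}
```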
* Remove the independent runtimes from `KeyServerHttpListener` and
`KeyServerCore` and instead require a `parity_runtime::Executor`
to be passed upon creation of each.
* Remove the `threads` parameter from both `ClusterConfiguration` structs.
* Implement the `future::Executor` trait for `parity_runtime::Executor`.
* Update tests.
- Update the `loop_until` function to instead use a oneshot to signal
completion.
- Modify the `make_key_servers` function to create and return a runtime.
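As a sketch, implementing the futures 0.1 `future::Executor` trait can be as simple as delegating to an underlying tokio 0.1 `TaskExecutor` (the struct here is a stand-in, not the actual `parity_runtime::Executor`):

```rust
use futures::future::{ExecuteError, Executor, Future};

/// Stand-in for `parity_runtime::Executor`: a handle that can spawn
/// futures onto the shared tokio runtime.
struct MyExecutor {
    inner: tokio::runtime::TaskExecutor,
}

impl<F> Executor<F> for MyExecutor
where
    F: Future<Item = (), Error = ()> + Send + 'static,
{
    fn execute(&self, future: F) -> Result<(), ExecuteError<F>> {
        // Delegate to the underlying tokio executor, which already
        // implements the same trait.
        self.inner.execute(future)
    }
}
```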
* Fix and disable some tests for Windows 10 compatibility.
* A few adjustments for Windows in tests (e.g. a bigger timeout for the keyserver tests)
* Whitespace fixes and temporarily single-threaded CI (to be able to spot the error).
* Clean up serde renames and use `rename_all = "camelCase"` when possible
* Use snake_case for pricing
* Use camelCase for engine
* Use camelCase for seal
* Use camelCase for validator set
* Use camelCase for confirmation payload
* Use camelCase for consensus status
* Use camelCase for node kind
* Use kebab-case for provenance
* Use camelCase for pubsub
* Use lowercase and camelCase for trace
* Use camelCase for whisper
* Rename `Ethash` as an irregular name
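An illustrative example of the pattern (the struct and fields are made up):

```rust
use serde::Deserialize;

/// With `rename_all = "camelCase"`, per-field `rename` attributes can be
/// dropped; JSON keys like `minGasPrice` map onto snake_case fields.
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct EngineParams {
    min_gas_price: u64,   // accepts "minGasPrice"
    block_reward: String, // accepts "blockReward"
}
```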
* Replace `tokio_core` with `tokio`.
* Remove `tokio-core` and replace with `tokio` in
- `ethcore/stratum`
- `secret_store`
- `util/fetch`
- `util/reactor`
* Bump hyper to 0.12 in
- `miner`
- `util/fake-fetch`
- `util/fetch`
- `secret_store`
* Bump `jsonrpc-***` to 0.9 in
- `parity`
- `ethcore/stratum`
- `ipfs`
- `rpc`
- `rpc_client`
- `whisper`
* Bump `ring` to 0.13
* Use a more graceful shutdown process in `secret_store` tests.
* Convert some mutexes to rwlocks in `secret_store`.
* Consolidate Tokio Runtime use, remove `CpuPool`.
* Rename and move the `tokio_reactor` crate (`util/reactor`) to
`tokio_runtime` (`util/runtime`).
* Rename `EventLoop` to `Runtime`.
- Rename `EventLoop::spawn` to `Runtime::with_default_thread_count`.
- Add the `Runtime::with_thread_count` method.
- Rename `Remote` to `Executor`.
* Remove uses of `CpuPool` and spawn all tasks via the `Runtime` executor
instead.
* Other changes related to `CpuPool` removal:
- Remove `Reservations::with_pool`. `::new` now takes an `Executor` as an argument.
- Remove `SenderReservations::with_pool`. `::new` now takes an `Executor` as an argument.
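A hedged sketch of the consolidated pattern on tokio 0.1 / futures 0.1: one shared runtime, with lightweight executor handles passed to components instead of per-component `CpuPool`s:

```rust
use futures::future::{self, Future};
use tokio::runtime::Builder;

fn main() {
    // One shared runtime instead of per-component CpuPools
    // (cf. Runtime::with_thread_count in the renamed util/runtime crate).
    let runtime = Builder::new().core_threads(4).build().unwrap();

    // Hand lightweight executor handles to components; each handle can
    // spawn tasks without owning the runtime.
    let executor = runtime.executor();
    executor.spawn(future::lazy(|| {
        println!("task runs on the shared runtime");
        Ok(())
    }));

    // Wait for spawned tasks to finish, then shut down.
    runtime.shutdown_on_idle().wait().unwrap();
}
```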
* OnDemand no longer loops until there is a query.
All peers known at the time are queried, and the query fails if all of them return no reply.
The failure is returned through an empty Vec of replies (the type of the oneshot channel remains unchanged).
Before this commit the query was sent randomly to one peer at a time until there was a reply (for a query that got no result this was an issue; for other queries it meant querying the same peers multiple times).
After this commit the first query is random, but subsequent queries follow the hashmap iterator order.
The no_capability test was broken by this commit (the pending query was removed); it could be restored by adding some kind of timeout mechanism.
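A rough sketch of the dispatch change described above (all types and names are illustrative, not the actual on_demand code):

```rust
use std::collections::HashMap;
use futures::sync::oneshot;

type PeerId = usize;
struct Reply;

/// Query every peer known at dispatch time; if none replies, signal
/// failure with an empty Vec so the oneshot channel type stays the same.
fn dispatch(
    peers: &HashMap<PeerId, ()>,
    sender: oneshot::Sender<Vec<Reply>>,
    query_peer: impl Fn(PeerId) -> Option<Reply>,
) {
    // The first peer tried is effectively random; subsequent attempts
    // follow the hashmap's iterator order.
    for (&peer, _) in peers {
        if let Some(reply) = query_peer(peer) {
            // Ignore the result: don't panic if the receiver was dropped.
            let _ = sender.send(vec![reply]);
            return;
        }
    }
    // All peers failed to answer: an empty Vec signals the failure.
    let _ = sender.send(Vec::new());
}
```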
* Comments plus better field names.
* Don't panic on a dropped oneshot channel.
* Use a Set to avoid the counter heuristic
* CLI option `on_demand_nb_retry` for the maximum number of retries when doing an on-demand query in the light client.
* Missing test update for previous commit
* Add a timeout (only when there is no peer to query), so that we do not cap the number of queries at the minimum of the current peer count and the configured number of queries; this restores the capability test.
* Add an error type for on_demand; it helps report error variants at the RPC level (the choice of RPC error codes might not be right).
* Use a `Duration` constant
* Switch to `Duration` in main too
* Fix indentation (sorry for that).
* Fix error management (bad merge in previous commit)
* Lots of English corrections; major changes to the new command parameters:
- use the standard '-' instead of '_'
- rename the nb_retry param to 'on-demand-retry-count'
This PR fixes the deadlock from #8918.
It avoids some recursive calls in light_sync by making the state check optional for the Informant.
The current behavior is to display the information when the informant checks whether a major block import is in progress.
This changes the informant behavior a bit, but not in most cases.
To record where and how this kind of deadlock is likely to happen (it is not caught by parking_lot's deadlock detection because std condvars are used), I am adding a description of the deadlock.
Also, for the reviewers: there may be a better solution than modifying the informant.
### Thread1
- ethcore/sync/light_sync/mod.rs
A call to the light handler through any Io (having a loop of RPC queries running, as on a light client, makes the deadlock far more likely).
At the end of those calls we systematically call the `maintain_sync` method.
`maintain_sync` locks `state` for writing; this lock is the cause of the deadlock.
`maintain_sync` -> `begin_search` with the state lock held
`begin_search` -> the light client's `flush_queue` method
- ethcore/light/src/client/mod.rs
`flush_queue` -> `flush` on the queue (HeaderQueue, a VerificationQueue of headers)
- ethcore/src/verification/queue/mod.rs
If there is some unverified or verifying content, `flush` waits on a condvar until the queue is empty. The only way to release it is for a worker to empty the queue and signal the condvar (so thread 2 is a verification worker).
### Thread2
A verification worker at the end of a verify loop (new block).
- ethcore/src/verification/queue/mod.rs
The thread loops on the `verify` method.
The end-of-loop condition `is_ready` -> import the block immediately:
this calls `set_sync` on `QueueSignal`, which sends a `BlockVerified` `ClientIoMessage` over an inner channel (an IoChannel of ClientIoMessage) using `send_sync`
- util/io/src/service_mio.rs
The IoChannel's `send_sync` method calls every handler's `message` method; one of the handlers is the ImportBlocks IoHandler (with a single inner Client service field)
- ethcore/light/src/client/service.rs
`message` triggers the inner method `import_verified`
- ethcore/light/src/client/mod.rs
`import_verified`, at the very end, notifies the listeners of the new headers; one of the listeners is the Informant's `listener` method
- parity/informant.rs
`new_headers` runs up to a call to `is_major_importing` on its target (again the client)
- ethcore/sync/src/light_sync/mod.rs
Here `is_major_importing` tries to take the state lock (read-only) but cannot, because of the previously held write lock; hence the deadlock
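A distilled, self-contained sketch of the pattern (not the actual code): thread 1 takes a write lock and then blocks on a condvar that only thread 2 can signal, while thread 2 must take the same lock for reading before signalling. Running it reproduces the hang:

```rust
use std::sync::{Arc, Condvar, Mutex, RwLock};
use std::thread;
use std::time::Duration;

fn main() {
    let state = Arc::new(RwLock::new(0u32));
    let queue = Arc::new((Mutex::new(1usize), Condvar::new())); // pending items

    // Thread 1: the maintain_sync-like path.
    let (s1, q1) = (state.clone(), queue.clone());
    let t1 = thread::spawn(move || {
        let _state = s1.write().unwrap(); // state locked for writing
        let (lock, cvar) = &*q1;
        let mut pending = lock.lock().unwrap();
        while *pending > 0 {
            // flush: wait until the queue is empty, still holding `state`
            pending = cvar.wait(pending).unwrap();
        }
    });

    // Thread 2: the verification-worker-like path.
    let (s2, q2) = (state, queue);
    let t2 = thread::spawn(move || {
        thread::sleep(Duration::from_millis(50)); // let thread 1 win the race
        // new_headers -> Informant -> is_major_importing needs a read lock,
        // but thread 1 holds the write lock, so this blocks forever; the
        // queue is thus never drained and thread 1 never wakes: deadlock.
        let _state = s2.read().unwrap();
        let (lock, cvar) = &*q2;
        *lock.lock().unwrap() = 0;
        cvar.notify_all();
    });

    t1.join().unwrap();
    t2.join().unwrap();
}
```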
* Import the `home` crate in `util/dir`.
* Replace uses of `env::home_dir()` with `home::home_dir()`.
* `home` uses a 'correct' implementation on Windows, and the stdlib implementation of `::home_dir` otherwise.
* Reexport `home::home_dir` from `util/dir`.
* Bump `util/dir` to 0.1.2.
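The reexport itself is a one-liner (sketch of `util/dir`):

```rust
// util/dir/src/lib.rs (sketch): callers keep calling dir::home_dir(),
// while the platform-correct implementation comes from the home crate.
pub use home::home_dir;
```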
* Added the --tx-queue-no-early-reject flag to disable early tx-queue rejects caused by a low gas price
* Fixed failing tests, clarified comments, and simplified the no_early_reject field name.
* Added test case for the --tx-queue-no-early-reject flag
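A sketch of how the flag could plug into the verifier; only `no_early_reject` and the flag name come from this description, the rest is assumed:

```rust
/// Illustrative verifier options; only `no_early_reject` and the flag
/// name come from the change above, the rest is assumed.
struct VerifierOptions {
    no_early_reject: bool, // set by --tx-queue-no-early-reject
    minimal_gas_price: u64,
}

/// With the flag set, transactions below the minimal gas price are no
/// longer rejected up front and instead proceed to full verification.
fn rejects_early(opts: &VerifierOptions, tx_gas_price: u64) -> bool {
    !opts.no_early_reject && tx_gas_price < opts.minimal_gas_price
}
```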
* parity: fix UserDefaults json parser
* parity: use serde_derive for UserDefaults
* parity: support deserialization of old UserDefault json format
* parity: make UserDefaults serde backwards compatible
* parity: tabify indentation in UserDefaults
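One common way to keep such deserialization backwards compatible is sketched below; the fields are illustrative stand-ins, not the real UserDefaults, and `serde(default)`/`alias` is just one of several viable techniques:

```rust
use serde::Deserialize;

/// Illustrative stand-in for UserDefaults; the real fields differ.
/// `#[serde(default)]` lets old files that lack newer keys still parse,
/// and `alias` accepts a key under its legacy name.
#[derive(Debug, Default, Deserialize)]
#[serde(default)]
struct UserDefaults {
    pruning: String,
    #[serde(alias = "fatdb")]
    fat_db: bool,
    tracing: bool,
}

fn main() {
    // Old-format file: legacy `fatdb` key, `tracing` missing entirely.
    let old = r#"{ "pruning": "fast", "fatdb": false }"#;
    let parsed: UserDefaults = serde_json::from_str(old).unwrap();
    println!("{:?}", parsed);
}
```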